Clarifications for downloading and loading model weights

Hello!

We have a few questions about the model weights in the submissions:

  • Can we make multiple submissions with the same GitHub repository but with different weight files, or are we restricted to a single submission per repository?
  • Can we internally load model weights, provided they are the same as the weights specified in the submission?
  • Is the filename of the weights file retained after being downloaded during evaluation?
  • Can the filename itself be used to set hyperparameters in the model?

Thanks!

  • For the monthly leaderboard, we can only guarantee that we will run the last model submitted. If we are doing well on the compute budget (which we currently are), we are happy to run more models for you.
  • Can you explain why you want to load model weights internally? Our preference (for sandboxing reasons) is that we load the weights through the specified API, but we are interested in understanding your use-case.
  • We have not decided whether we will retain the filename or not. Is it important for you, and why?
  • Our preference would be that all model weights and hyperparameters be contained within the file, because that’s more portable and people tend to rename weights files. That said, we are open to other approaches if you can explain your use case.

Thanks for the helpful response! :slight_smile: Our particular use case is directly using openl3 in our submission code, which internally handles model loading given a configuration. After some discussion, though, we decided it would be best to slightly modify the openl3 API so custom model weights can be loaded, rather than doing hacky workarounds like parsing the filename to get the configuration.

We are also interested in submitting different configurations of openl3 that aren’t tied to the model weights. Given that there’s no guarantee right now that multiple configurations can be submitted, we can just go with hardcoded configurations.

Actually, we may still need to parse the filename in order to determine the model architecture to create. openl3 loads only the model weights and constructs the architecture in code, which requires the configuration. In that case, we would need to make sure the filename is retained. We can still modify the API to load from a given weights file; we would just also need to parse the filename. We already maintain a naming convention for our model weights that works for this, so they can be used as-is. Do you think you would be able to accommodate this use case?
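To make this concrete, here’s a minimal sketch of the kind of filename parsing we have in mind. The naming convention shown (openl3_<input_repr>_<content_type>_<embedding_size>.h5) and the helper name are purely illustrative, not our exact scheme:

```python
import os
import re

# Illustrative convention: openl3_<input_repr>_<content_type>_<embedding_size>.h5
# e.g. "openl3_mel256_music_6144.h5"
_WEIGHTS_NAME_RE = re.compile(
    r"^openl3_(?P<input_repr>linear|mel128|mel256)"
    r"_(?P<content_type>music|env)"
    r"_(?P<embedding_size>512|6144)\.h5$"
)

def parse_weights_filename(weights_path: str) -> dict:
    """Recover the configuration needed to rebuild the architecture
    from the weights filename."""
    name = os.path.basename(weights_path)
    match = _WEIGHTS_NAME_RE.match(name)
    if match is None:
        raise ValueError(f"Unrecognized weights filename: {name!r}")
    config = match.groupdict()
    config["embedding_size"] = int(config["embedding_size"])
    return config
```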

Thank you for your time!

@auroracramer Adding the filename to the model load API is a breaking change, but it is also relatively harmless.

We are happy to talk this over with you, we just want to understand the pros and cons before potentially introducing this change to other participants.

  1. Have you considered a wrapper around the openl3 model, in which the model file is now a PKL that contains the metadata AND the original model? Or similar approaches that embed the metadata within the model file somehow? (There’s a rough sketch of this idea after this list.) I haven’t thought too hard about whether this would be simple or a PITA to implement, just curious about your thoughts too.
  2. What ML library are you using? If torch, we have a pretty good torch openl3 port and are happy to add improvements that will make your life easier. If TF, be warned that openl3’s particularly arcane tensorflow<1.14 requirements are tricky to install on many cloud GPU architectures and don’t currently hew to the challenge’s TF>=2.0 requirements. (In fact, the difficulty of running openl3 on GCP and AWS was one motivation for the challenge.) If tf<1.14 is a strong requirement of yours, let us know, because that is a bigger discussion.
  3. Submitting different configurations is something we are open to for intermediate leaderboards, and for model selection and tuning, but we are definitely leaning towards one submission for the final results. The reason is that the challenge encourages one holistic audio model for all purposes, rather than the current choose-your-own-adventure-depending-upon-your-application approach.
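On point 1, here’s a rough sketch of what such a wrapper could look like, assuming the metadata and the raw weights are simply pickled together. The field names and helpers are purely illustrative, not a proposal for an actual API:

```python
import pickle

def bundle_model(weights_path: str, input_repr: str, content_type: str,
                 embedding_size: int, out_path: str) -> None:
    """Pack the raw openl3 weights plus the configuration needed to
    rebuild the architecture into a single pickle file."""
    with open(weights_path, "rb") as f:
        weights_bytes = f.read()
    bundle = {
        "config": {
            "input_repr": input_repr,
            "content_type": content_type,
            "embedding_size": embedding_size,
        },
        "weights": weights_bytes,
    }
    with open(out_path, "wb") as f:
        pickle.dump(bundle, f)

def load_bundle(model_file_path: str) -> dict:
    """Read the bundle back; the caller rebuilds the architecture from
    bundle["config"] and restores bundle["weights"]."""
    with open(model_file_path, "rb") as f:
        return pickle.load(f)
```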

As always, please let us know your thoughts because we are excited for dialogue with the community!

Last question: If your submission is implementing the API call load_model(model_file_path: Str) -> Model, can’t you grab the model file path from there?

  1. This would work, since in the upcoming release we can load arbitrary model files. Though as things stand we’d need to save the weights to disk as a temporary file so we can pass the filename along to OpenL3 to load (there’s a sketch of this at the end of this post). Would that be acceptable?
  2. We’re using TensorFlow 2.x (OpenL3 now supports it)
  3. Got it!

If I’m understanding correctly, we can grab the model file path from load_model, but if the filename were changed we wouldn’t be able to parse it. Though if we go the wrapper route, this shouldn’t matter.
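To illustrate the temporary-file step, here’s a minimal sketch of a load_model that unpacks a bundle like the one sketched above and writes the raw weights back to disk. The final call into openl3 is omitted since that loader’s exact signature is still up in the air:

```python
import pickle
import tempfile

def load_model(model_file_path: str):
    """Sketch of the challenge's load_model entry point: unpack the bundled
    pickle, write the embedded weights to a temporary file, and return the
    pieces needed to construct and load the openl3 model."""
    with open(model_file_path, "rb") as f:
        bundle = pickle.load(f)

    # openl3 expects a weights *file* on disk, so write the embedded bytes
    # back out to a temporary file.
    with tempfile.NamedTemporaryFile(suffix=".h5", delete=False) as tmp:
        tmp.write(bundle["weights"])
        weights_path = tmp.name

    # From here we would rebuild the architecture from bundle["config"] and
    # load weights_path with the (modified) openl3 loader; that call is left
    # out because its exact signature is still being decided.
    return bundle["config"], weights_path
```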

@auroracramer Saving the weights to a temporary file is fine. We can help you if there are any submission problems.

We were originally considering restricting disk access for security and cheating prevention, but we now think this is unnecessarily tricky.