Hi organizers. First of all, a big thank you for the tech stack behind this challenge; it is really well thought out.
I have a question regarding the eval-kit. Please correct me if I misunderstand its purpose:
Participants are supposed to design embeddings that ideally perform well on at least the three open downstream tasks. My assumption was therefore that the obvious workflow is to also train the downstream models locally, to get a rough idea of the performance.
The eval-kit seems to do exactly this; however, it doesn't come equipped with the public tasks, and its documentation states:
> This assumes that your current working directory contains a folder called `tasks` produced by `heareval.tasks.runner`. If this directory is in a different location or named something different you can use the option […]
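For context, here is the workflow I had in mind, as a rough sketch. The task name, module name, and weights file are placeholders of mine, and the runner invocations are only my reading of the eval-kit README:

```sh
# 1. Preprocess an open task to produce the `tasks` folder the eval-kit
#    expects. This is the step that apparently needs the task definitions,
#    which the eval-kit itself does not ship with:
python3 -m heareval.tasks.runner speech_commands

# 2. Compute embeddings for the preprocessed tasks with my own module
#    (module name and weights file are placeholders):
python3 -m heareval.embeddings.runner my_module --model my_weights.pt

# 3. Train the shallow downstream models on those embeddings and score them:
python3 -m heareval.predictions.runner embeddings/my_module/*
```

Steps 2 and 3 are clear to me; step 1 is where I get stuck.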
So I’d have to check out the preprocess repository to add the tasks, right?
There it’s stated that:
> Unless you are a HEAR organizer or want to contribute a task, you won’t need this repo
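Concretely, I assumed the missing step would look roughly like this. The repository URL and the editable install are pure guesses on my part about how the task definitions would become available to the runner:

```sh
# Hypothetical: check out the preprocess repository so its task
# definitions can be used to (re)generate the `tasks` folder.
git clone https://github.com/<organizers>/<preprocess-repo>
cd <preprocess-repo>
pip3 install -e .

# Then run the preprocessing for each of the three open tasks
# (task name illustrative, as above):
python3 -m heareval.tasks.runner speech_commands
```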
Am I missing something, or is it simply not intended that participants train the downstream tasks on their side?