Understanding eval-kit

Hi organizers. First, a big thank you for the tech stack behind this challenge. It is really well thought out and seems very well designed.

I have a question regarding the eval-kit. Please correct me if I misunderstand the purpose of the kit:
Participants are supposed to design embeddings that ideally perform well on at least the three open downstream tasks. Therefore, I thought the obvious workflow would be to also train the downstream tasks locally to get a rough idea of the performance.

The eval-kit seems to do exactly this; however, it doesn't come equipped with the public tasks, and it states:

This assumes that your current working directory contains a folder called tasks produced by heareval.tasks.runner. If this directory is in a different location or named something different, you can use the option --tasks-dir.
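
If I read that correctly, the workflow would look roughly like the sketch below. heareval.tasks.runner and --tasks-dir come from the passage above; the heareval.embeddings.runner module path and the TASK_NAME / MODULE_NAME / WEIGHTS_FILE placeholders are just my reading of the README, so the exact arguments may differ:

```bash
# Preprocess a task into a local ./tasks folder (this runner lives in the
# preprocessing tooling; TASK_NAME is a placeholder for a real task name).
python3 -m heareval.tasks.runner TASK_NAME

# Point the eval-kit at that folder; MODULE_NAME / WEIGHTS_FILE are placeholders
# for a participant's embedding module and its checkpoint.
python3 -m heareval.embeddings.runner MODULE_NAME --model WEIGHTS_FILE --tasks-dir ./tasks
```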

So I’d have to check out the preprocess repository to add the tasks, right?

There it’s stated that:

Unless you are a HEAR organizer or want to contribute a task, you won’t need this repo

Am I missing something, or is it just not recommended to train on the participant's side?

Hi @faroit,

Thanks for your question! We have updated the hear-eval-kit, and the GitHub README now includes download links for the open tasks. You can now run the evaluation of your model using only the eval-kit and the pre-made open task datasets.
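
Roughly, the end-to-end flow with the pre-made datasets looks like the sketch below. The download URL and archive name are placeholders (use the actual links from the README), MODULE_NAME / WEIGHTS_FILE stand for your own embedding module and checkpoint, and you should double-check the exact runner arguments against the README:

```bash
# 1. Download a pre-made open task and unpack it into a local tasks/ folder
#    (URL and filename are placeholders; take the real links from the README).
mkdir -p tasks
wget https://example.com/open-task.tar.gz
tar xzf open-task.tar.gz -C tasks/

# 2. Compute embeddings for the unpacked tasks with your model.
python3 -m heareval.embeddings.runner MODULE_NAME --model WEIGHTS_FILE --tasks-dir tasks/

# 3. Train and score the downstream predictors on those embeddings.
python3 -m heareval.predictions.runner embeddings/MODULE_NAME/*
```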

Don’t hesitate to ask if you have any additional questions about using the eval kit.
