Availability of dev-kit for HEAR 2021

The challenge website[1] says: “We will provide you with a dev-kit including the data for the open tasks, and a script for performing evaluation. This dev-kit will also include a baseline embedding model in a standardized API (see below).”

Is there any estimate for when such a dev-kit will be available? Even in work-in-progress form it would be quite useful for ensuring that what we submit is compatible and goes through the leaderboard evaluation without major issues.

  1. HEAR 2021 NeurIPS Challenge · Neural Audio AI

@jonnor The following repos are almost stable, and currently can help you check compatibility:

  • hear-validator: This package provides a command-line tool to verify that a Python module follows the HEAR 2021 common API.
  • hear-baseline: A simple DSP-based audio embedding consisting of a Mel-frequency spectrogram followed by a random projection. It serves as the naive baseline model for the HEAR 2021 challenge and implements the common API required by the competition evaluation.
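The idea behind the baseline (a mel spectrogram followed by a fixed random projection) can be sketched in plain NumPy. This is an illustrative toy, not the actual hear-baseline implementation; all parameter values (FFT size, hop, embedding dimension) are assumptions for the sketch.

```python
import numpy as np

def mel_filterbank(n_mels, n_fft, sr):
    """Triangular mel filterbank matrix of shape (n_mels, n_fft // 2 + 1)."""
    hz_to_mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    mel_to_hz = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):           # rising slope
            fb[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):          # falling slope
            fb[i - 1, k] = (right - k) / max(right - center, 1)
    return fb

def embed(audio, sr=16000, n_fft=512, hop=256, n_mels=64, emb_dim=128, seed=0):
    """Log-mel spectrogram followed by a seeded random projection:
    returns one emb_dim-dimensional embedding per analysis frame."""
    # Frame the signal and take windowed magnitude spectra
    n_frames = 1 + (len(audio) - n_fft) // hop
    frames = np.stack([audio[i * hop : i * hop + n_fft] for i in range(n_frames)])
    spec = np.abs(np.fft.rfft(frames * np.hanning(n_fft), axis=1))
    mel = np.log1p(spec @ mel_filterbank(n_mels, n_fft, sr).T)  # (n_frames, n_mels)
    # Fixed (seeded) random projection down to the embedding dimension
    rng = np.random.default_rng(seed)
    proj = rng.standard_normal((n_mels, emb_dim)) / np.sqrt(n_mels)
    return mel @ proj  # (n_frames, emb_dim)

# One second of noise at 16 kHz -> a (frames, 128) embedding matrix
emb = embed(np.random.default_rng(1).standard_normal(16000))
print(emb.shape)  # (61, 128)
```

Because the projection is seeded, the "model" is deterministic and needs no training, which is what makes it a useful naive baseline to compare learned embeddings against.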

Currently less mature is hear-eval-kit, which will demonstrate how downstream evaluation will happen on the three open tasks.

Please let us know if you have any questions or encounter any issues.

@turian Thank you, those resources are excellent! I tested the validator and was easily able to find and fix a couple of bugs in our implementation of the API.

At some later point I will try out the evaluation kit to get an idea of the performance.


@jonnor The first leaderboard will be up in two weeks. Please see the recent updates about rolling leaderboard submissions.
