Is Random Projection Necessary?

I’m preparing a submission based on the hear-eval and hear-baseline repo.

According to hear-baseline, evaluation is performed by applying a random projection after embedding extraction.

But I want to train a projection layer specialized for each task. Can I save the projection layer weights and attach them to my submission?

One concern is that the API has no way to condition on the task type. If I use a custom projection layer, does that mean only a single projection layer is allowed?


@SeungHeonDoh We are excited to have you participate!

The code in hear-baseline is a sample submission model that matches the HEAR API. It's just an example; you can submit whatever model you want.

The secret tasks are not available for training a projection layer. Whatever embeddings your model outputs, we will plug in as input features to a fully-connected network that performs the downstream evaluation on the secret tasks.
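In other words, since the evaluator only ever sees the embeddings your API functions return, any trained projection has to be folded into the embedding output itself. Here is a hedged sketch of that idea (placeholder names like `MyModel` and a mean-pool "encoder" are illustrative, not the hear-baseline implementation; it uses NumPy for brevity where a real submission would use tensors):

```python
import numpy as np

# Sketch: the HEAR common API expects a module exposing load_model /
# get_scene_embeddings. The evaluation pipeline only sees the returned
# embeddings, so a single, task-agnostic projection must be applied
# *inside* get_scene_embeddings -- there is no per-task hook.

class MyModel:  # placeholder name, not part of hear-baseline
    def __init__(self, projection):
        self.projection = projection  # (embed_dim, proj_dim) weight matrix

def load_model(weights_path=None):
    # A real submission would load trained weights from weights_path.
    rng = np.random.default_rng(0)
    return MyModel(projection=rng.standard_normal((512, 128)))

def get_scene_embeddings(audio_batch, model):
    # Stand-in for a real encoder: reshape raw audio into 512-dim vectors.
    raw = np.stack([np.resize(a, 512) for a in audio_batch])
    # Fold the (single) trained projection into the embedding output.
    return raw @ model.projection

model = load_model()
emb = get_scene_embeddings([np.zeros(16000), np.ones(16000)], model)
print(emb.shape)  # (2, 128)
```

The key point is that the projection cannot vary per secret task: whatever `get_scene_embeddings` returns is what the organizers' fully-connected network receives as input features.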

Please let us know if you have any more questions or clarifications you need!