Participants must submit to the closed condition; submission to the open condition is optional. For each condition (closed and, if submitted, open), the final evaluation consists of two tasks:
More details on protocols, trial formats, and submission requirements will be provided in the full evaluation plan and on the Baseline Systems page.
Rankings may be computed per task, per condition, or both; details will be specified when the evaluation phase opens.
The evaluation will focus on system performance, fairness, and adherence to the challenge protocol. The evaluation data and the evaluation trial pair lists will be released when the evaluation phase opens.
Important: No details about the evaluation set (size, languages, or trial format) will be disclosed until the evaluation phase begins, to ensure a fair and unbiased benchmark.
During the development phase, participants have access to both the training set and the validation set.
The validation set shares the same 40 languages and the same multilingual-per-speaker structure as the training set. Validation protocols and trial formats are described on the Dataset Download and Baseline Systems pages.
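Although the official trial format and metrics will only be specified in the full evaluation plan, speaker-verification trial lists are conventionally scored by comparing system scores for target (same-speaker) and nontarget (different-speaker) pairs, with the equal error rate (EER) as a common summary. The sketch below is illustrative only and assumes nothing about this challenge's actual format or metric; the function name `compute_eer` and the score lists are hypothetical.

```python
def compute_eer(target_scores, nontarget_scores):
    """Approximate the equal error rate (EER) for a set of trial scores.

    target_scores: scores for same-speaker trials (higher = more similar)
    nontarget_scores: scores for different-speaker trials
    Returns the EER as a fraction in [0, 1].
    """
    best = None
    # Sweep candidate thresholds taken from the observed scores.
    for t in sorted(set(target_scores) | set(nontarget_scores)):
        # False rejection rate: target trials scored below the threshold.
        frr = sum(s < t for s in target_scores) / len(target_scores)
        # False acceptance rate: nontarget trials at or above the threshold.
        far = sum(s >= t for s in nontarget_scores) / len(nontarget_scores)
        # Keep the operating point where FAR and FRR are closest.
        if best is None or abs(far - frr) < best[0]:
            best = (abs(far - frr), (far + frr) / 2)
    return best[1]

# Toy example with perfectly separated scores: EER is 0.
eer = compute_eer([0.9, 0.8, 0.7, 0.6], [0.5, 0.4, 0.3, 0.2])
print(eer)  # → 0.0
```

In practice, challenge organizers typically ship an official scoring script, which participants should prefer over any local reimplementation.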
For information about the official baseline systems, please see the Baseline Systems page.