Dutch Open Speech Recognition Benchmark
Welcome to the benchmark page where researchers and developers report the performance of various automatic speech recognition (ASR) models on Dutch datasets.
UT's benchmark
UT = University of Twente
- Results for N-Best 2008 Dutch Evaluation corpus
- Results for Jasmin-CGN corpus
- Results for Common Voice
- Environment setup
- Why do the results differ between whisper-timestamped and faster-whisper?
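For readers unfamiliar with the two packages named in the last item above: both run OpenAI's Whisper models, but they are separate implementations (faster-whisper is a reimplementation on the CTranslate2 inference engine), so they are invoked differently and can produce slightly different output. Below is a minimal sketch of each invocation path; the model size, file name, and language code are illustrative assumptions, not the settings used for the benchmark results.

```python
# Illustrative only: model size, file name, and language code are assumptions,
# not the configuration behind the benchmark results on this page.

# whisper-timestamped: keeps openai-whisper's API and adds word-level timestamps.
import whisper_timestamped as whisper_ts

audio = whisper_ts.load_audio("audio.wav")
model = whisper_ts.load_model("small")
result = whisper_ts.transcribe(model, audio, language="nl")
print(result["text"])

# faster-whisper: CTranslate2-based reimplementation; returns a segment generator.
from faster_whisper import WhisperModel

fw_model = WhisperModel("small")
segments, info = fw_model.transcribe("audio.wav", language="nl")
print(" ".join(segment.text.strip() for segment in segments))
```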
Results in bold indicate the best performance on the given subset(s) among all models; lower is better.
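For context: scores like these are error rates (typically word error rate, WER), which is why lower is better. A minimal sketch of how such a score can be computed with the jiwer package follows; the package choice and example strings are illustrative, not the benchmark's actual evaluation pipeline.

```python
# Illustrative WER computation; not the benchmark's actual evaluation pipeline.
import jiwer

reference = "de kat zat op de mat"   # ground-truth transcript (6 words)
hypothesis = "de kat zat op mat"     # ASR output with one word deleted

# WER = (substitutions + deletions + insertions) / number of reference words
wer = jiwer.wer(reference, hypothesis)
print(f"WER: {wer:.2%}")  # one deletion over six reference words -> ~16.67%
```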
These results were achieved during the PDI-SSH Oral History - Stories at the Museum around Art (OH-SMArt) project (2022-2025).
RU's Kaldi_NL vs. Whisper vs. Wav2vec2.0 evaluation
RU = Radboud University
These results were achieved during the PDI-SSH Homo Medicinalis (HoMed) project (2021-2024).
NISV's Whisper benchmark
NISV = Netherlands Institute for Sound & Vision
Results in bold indicate the best performance on the given subset(s) among all models; lower is better.
Contributions
Contributions are welcome. The link at the top of the page leads to this website's GitHub repository; to propose changes, fork the repository, commit your changes on your fork, and open a pull request against the source repository.
FAQ
Coming soon