...

Multilingual DistilWhisper: Efficient Distillation of Multi-task Speech Models via Language-Specific Experts


By Thomas Palmeira Ferraz and 3 other authors

Abstract: Whisper is a multitask and multilingual speech model covering 99 languages. It yields commendable automatic speech recognition (ASR) results for a subset of those languages, but it still underperforms on a non-negligible number of under-represented languages, a problem that is exacerbated in the smaller model versions. In this work, we propose DistilWhisper, an approach that bridges the ASR performance gap for these languages while retaining the advantages of multitask and multilingual capabilities. Our approach combines two key strategies: lightweight modular ASR fine-tuning of whisper-small using language-specific experts, and knowledge distillation from whisper-large-v2. This dual approach allows us to effectively boost ASR performance while preserving the robustness inherited from the multitask and multilingual pre-training. Results demonstrate that our approach is more effective than standard fine-tuning or LoRA adapters, boosting performance in the targeted languages on both in- and out-of-domain test sets, while introducing only a negligible parameter overhead at inference.
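The abstract names two ingredients: lightweight gated language-specific experts added alongside the frozen whisper-small layers, and knowledge distillation from whisper-large-v2. The PyTorch sketch below illustrates the general shape of both under stated assumptions; the class and function names (LanguageExpert, distillation_loss), the scalar-gate formulation, and the alpha/temperature values are illustrative choices, not the paper's exact implementation.

```python
# Minimal sketch of (a) a gated language-specific expert and (b) a
# CE + KL distillation objective. Names and hyperparameters are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LanguageExpert(nn.Module):
    """Small feed-forward expert attached to a frozen shared layer;
    only these parameters would be trained for a given language."""

    def __init__(self, d_model: int, d_ff: int):
        super().__init__()
        self.ff = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model)
        )
        # Scalar gate controlling how much expert output is mixed in.
        self.gate = nn.Linear(d_model, 1)

    def forward(self, shared_out: torch.Tensor) -> torch.Tensor:
        g = torch.sigmoid(self.gate(shared_out))     # (batch, seq, 1)
        return shared_out + g * self.ff(shared_out)  # residual mixing


def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      labels: torch.Tensor,
                      alpha: float = 0.5,
                      T: float = 2.0) -> torch.Tensor:
    """Cross-entropy on ground-truth transcripts plus temperature-scaled
    KL to the teacher (whisper-large-v2 in the paper). alpha and T are
    assumed values, not taken from the paper."""
    # logits: (batch, seq, vocab); labels: (batch, seq) with -100 padding.
    ce = F.cross_entropy(
        student_logits.transpose(1, 2), labels, ignore_index=-100
    )
    kl = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * ce + (1 - alpha) * kl
```

Because the gate can close toward the shared pre-trained path, a design like this can fall back to the original multilingual behavior on inputs the expert handles poorly, which is one plausible reading of how the approach keeps the robustness of the multitask pre-training on out-of-domain data.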

Submission history

From: Thomas Palmeira Ferraz
[v1] Thu, 2 Nov 2023 08:37:30 UTC (2,350 KB)
[v2] Wed, 17 Jan 2024 10:56:08 UTC (2,353 KB)
[v3] Tue, 12 Mar 2024 14:50:30 UTC (652 KB)
[v4] Sat, 29 Nov 2025 15:39:59 UTC (1,267 KB)
