Unsupervised Cross-lingual Representation Learning for Speech Recognition
(3-minute introduction)
Alexis Conneau (Facebook, USA), Alexei Baevski (Facebook, USA), Ronan Collobert (Facebook, USA), Abdelrahman Mohamed (Facebook, USA), Michael Auli (Facebook, USA)
This paper presents XLSR, which learns cross-lingual speech representations by pretraining a single model on the raw waveform of speech in multiple languages. We build on wav2vec 2.0, which is trained by solving a contrastive task over masked latent speech representations and jointly learns a quantization of the latents shared across languages. The resulting model is fine-tuned on labeled data, and experiments show that cross-lingual pretraining significantly outperforms monolingual pretraining. On the CommonVoice benchmark, XLSR shows a relative phoneme error rate reduction of 72% compared to the best known results. On BABEL, our approach improves word error rate by 16% relative compared to a comparable system. Our approach enables a single multilingual speech recognition model that is competitive with strong individual models. We hope to catalyze research in low-resource speech understanding by releasing XLSR-53, a large model pretrained in 53 languages.
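For readers unfamiliar with the wav2vec 2.0 objective referenced above, a minimal sketch of the masked contrastive task follows; the notation (context vector c_t, quantized target q_t, distractor set Q_t, temperature kappa) is ours for illustration and is not introduced in this abstract.

    L_t = -\log \frac{\exp\big(\mathrm{sim}(c_t, q_t)/\kappa\big)}{\sum_{\tilde{q} \in Q_t} \exp\big(\mathrm{sim}(c_t, \tilde{q})/\kappa\big)},
    \qquad \mathrm{sim}(a, b) = \frac{a^\top b}{\lVert a \rVert \, \lVert b \rVert}

Here c_t is the Transformer output at a masked time step, q_t is the true quantized latent for that step, and Q_t contains q_t together with sampled distractors; cross-lingual sharing arises because the same quantizer codebook is used for all languages during pretraining.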