Semi-Supervision in ASR: Sequential MixMatch and Factorized TTS-Based Augmentation
Zhehuai Chen (Google, USA), Andrew Rosenberg (Google, USA), Yu Zhang (Google, USA), Heiga Zen (Google, Japan), Mohammadreza Ghodsi (Google, USA), Yinghui Huang (Google, USA), Jesse Emond (Google, USA), Gary Wang (Google, USA), Bhuvana Ramabhadran (Google, USA), Pedro J. Moreno (Google, USA)
Semi- and self-supervised training techniques have the potential to improve the performance of speech recognition systems without additional transcribed speech data. In this work, we demonstrate the efficacy of two approaches to semi-supervision for automatic speech recognition, leveraging vast amounts of available unspoken text and untranscribed audio. First, we present factorized multilingual speech synthesis to improve data augmentation on unspoken text. Next, we propose the Sequential MixMatch algorithm with iterative learning to learn from untranscribed speech; the algorithm is built on top of our online implementation of Noisy Student Training. We demonstrate the compatibility of these techniques, yielding an overall relative word error rate reduction of up to 14.4% on voice search tasks in 4 Indic languages.
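The iterative pseudo-labeling pattern underlying Noisy Student Training, which the abstract says Sequential MixMatch builds on, can be sketched as a loop: a teacher trained on transcribed data labels the untranscribed pool, the combined data is perturbed, and a student retrained on it becomes the next teacher. The sketch below is a toy illustration of that control flow only; all function names (`train`, `pseudo_label`, `add_noise`) and the trivial "model" are hypothetical stand-ins, not the authors' implementation.

```python
# Toy sketch of a Noisy Student-style semi-supervised loop.
# All names here are illustrative; a real ASR system would use a neural
# model, confidence filtering, and audio augmentation such as SpecAugment.
import random

def train(examples):
    """Toy 'model': memorize the majority label seen for each input."""
    counts = {}
    for x, y in examples:
        counts.setdefault(x, {}).setdefault(y, 0)
        counts[x][y] += 1
    return {x: max(ys, key=ys.get) for x, ys in counts.items()}

def pseudo_label(model, unlabeled):
    """Teacher assigns labels to untranscribed inputs it can handle."""
    return [(x, model[x]) for x in unlabeled if x in model]

def add_noise(examples, rng):
    """Stand-in for input perturbation; here it merely shuffles order."""
    noisy = list(examples)
    rng.shuffle(noisy)
    return noisy

def noisy_student(labeled, unlabeled, rounds=3, seed=0):
    rng = random.Random(seed)
    model = train(labeled)                       # initial teacher
    for _ in range(rounds):
        pseudo = pseudo_label(model, unlabeled)  # label untranscribed data
        mixed = add_noise(labeled + pseudo, rng)
        model = train(mixed)                     # student -> next teacher
    return model

model = noisy_student([("hi", 1), ("yo", 0)], ["hi", "yo", "hi"])
```

In a real system each round would also grow or re-weight the pseudo-labeled pool and inject stronger noise into the student than the teacher saw, which is the key asymmetry that makes the scheme more than self-distillation.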