Improving RNN-T for Domain Scaling Using Semi-Supervised Training with Neural TTS
Yan Deng (Microsoft, China), Rui Zhao (Microsoft, USA), Zhong Meng (Microsoft, USA), Xie Chen (Microsoft, USA), Bing Liu (Microsoft, China), Jinyu Li (Microsoft, USA), Yifan Gong (Microsoft, USA), Lei He (Microsoft, China)
The recurrent neural network transducer (RNN-T) has been shown to be comparable with conventional hybrid models for speech recognition. However, it remains challenged by out-of-domain scenarios whose context or words differ from the training data. In this paper, we explore semi-supervised training that optimizes RNN-T jointly with neural text-to-speech (TTS) to better generalize to new domains using domain-specific text data. We apply the method to two tasks: one with out-of-domain context and the other with a significant number of out-of-vocabulary (OOV) words. The results show that the proposed method significantly improves recognition accuracy on both tasks, yielding 61.4% and 53.8% relative word error rate (WER) reductions, respectively, over a well-trained RNN-T baseline built with 65 thousand hours of training data. We further study the semi-supervised training methodology: 1) which modules of the RNN-T model to update; 2) the impact of using different neural TTS models; 3) the performance obtained with text of varying relevance to the target domain. Finally, we compare several RNN-T customization methods and conclude that semi-supervised training with neural TTS is comparable and complementary to Internal Language Model Estimation (ILME) and biasing.
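To make the described pipeline concrete, the minimal PyTorch-style sketch below synthesizes speech features from domain-specific text with a neural TTS model and then fine-tunes selected RNN-T modules on the synthetic pairs. It is an illustration under stated assumptions, not the paper's implementation: the `NeuralTTS` interface (`text_to_mel`, `tokenize`), the `rnnt_loss` helper on the model, and the `"prediction"` parameter-name prefix are all hypothetical placeholders.

```python
# Hedged sketch of semi-supervised RNN-T adaptation with neural TTS.
# All model interfaces below are assumptions for illustration only;
# the paper does not publish code.
import torch


def synthesize_corpus(tts, domain_texts):
    """Generate (mel_features, token_ids) pairs from domain text
    using a (hypothetical) neural TTS interface."""
    pairs = []
    for text in domain_texts:
        with torch.no_grad():
            mel = tts.text_to_mel(text)      # assumed TTS call
        pairs.append((mel, tts.tokenize(text)))
    return pairs


def adapt_rnnt(model, synthetic_pairs, lr=1e-5, update="prediction"):
    """Fine-tune selected RNN-T modules on synthetic speech.

    The paper studies which modules to update; here we optionally
    freeze everything except parameters whose names start with the
    given prefix (e.g. the prediction network), or update all of
    them with update="all".
    """
    for name, param in model.named_parameters():
        param.requires_grad = (update == "all") or name.startswith(update)

    optimizer = torch.optim.Adam(
        [p for p in model.parameters() if p.requires_grad], lr=lr)

    for mel, tokens in synthetic_pairs:
        loss = model.rnnt_loss(mel, tokens)  # assumed RNN-T loss helper
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

In this sketch, restricting `update` to the prediction network mirrors the intuition that domain mismatch from text is largely a language-modeling problem, while `update="all"` corresponds to adapting the full model on the TTS-generated audio.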