Dynamic Encoder Transducer: A Flexible Solution For Trading Off Accuracy For Latency
Yangyang Shi (Facebook, USA), Varun Nagaraja (Facebook, USA), Chunyang Wu (Facebook, USA), Jay Mahadeokar (Facebook, USA), Duc Le (Facebook, USA), Rohit Prabhavalkar (Facebook, USA), Alex Xiao (Facebook, USA), Ching-Feng Yeh (Facebook, USA), Julian Chan (Facebook, USA), Christian Fuegen (Facebook, USA), Ozlem Kalinli (Facebook, USA), Michael L. Seltzer (Facebook, USA)
We propose a dynamic encoder transducer (DET) for on-device speech recognition. A single DET model scales to multiple devices with different computation capacities without retraining or finetuning. To trade off accuracy and latency, DET assigns different encoders to decode different parts of an utterance. We apply and compare layer dropout and collaborative learning for DET training. The layer dropout method, which randomly drops encoder layers during training, enables on-demand layer dropout at decoding time. Collaborative learning jointly trains multiple encoders of different depths within a single model. Experimental results on Librispeech and in-house data show that DET provides a flexible accuracy and latency trade-off. On Librispeech, the full-size encoder in DET reduces the word error rate (WER) of a same-sized baseline by over 8% relative. The lightweight encoder in DET trained with collaborative learning reduces the model size by 25% while achieving WER similar to the full-size baseline. On a large in-house data set, DET matches the accuracy of a baseline model with better latency by assigning a lightweight encoder to the beginning of an utterance and a full-size encoder to the rest.
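The abstract does not include implementation details, but the on-demand layer dropout idea can be illustrated with a minimal sketch: an encoder whose layers are randomly skipped during training so that, at decoding time, any prefix of the layer stack can be run to trade accuracy for latency. All class names, hyperparameters, and the use of standard Transformer layers below are illustrative assumptions, not the authors' actual architecture.

```python
import random
from typing import Optional

import torch
import torch.nn as nn


class LayerDropEncoder(nn.Module):
    """Sketch of a layer-dropout encoder (hypothetical, not the paper's code).

    Training: each layer is skipped with probability `p_drop`, so the model
    learns to produce useful representations at reduced depth.
    Decoding: `active_layers` selects how many layers to run, giving an
    on-demand accuracy/latency trade-off.
    """

    def __init__(self, num_layers: int = 12, dim: int = 512, p_drop: float = 0.2):
        super().__init__()
        self.layers = nn.ModuleList(
            [nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
             for _ in range(num_layers)]
        )
        self.p_drop = p_drop

    def forward(self, x: torch.Tensor, active_layers: Optional[int] = None) -> torch.Tensor:
        for i, layer in enumerate(self.layers):
            if self.training:
                # Randomly skip layers so the encoder stays usable at any depth.
                if random.random() < self.p_drop:
                    continue
            elif active_layers is not None and i >= active_layers:
                # On-demand depth at decoding time: run only the first `active_layers`.
                break
            x = layer(x)
        return x


# Usage sketch: a shallow pass for the start of an utterance, the full stack for the rest.
enc = LayerDropEncoder().eval()
frames = torch.randn(1, 100, 512)       # (batch, time, feature) dummy input
fast_out = enc(frames, active_layers=6)  # lightweight encoder
full_out = enc(frames)                   # full-size encoder
```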