Multi-channel Opus compression for far-field automatic speech recognition with a fixed bitrate budget
Lukas Drude (Amazon, Germany), Jahn Heymann (Amazon, Germany), Andreas Schwarz (Amazon, Germany), Jean-Marc Valin (Amazon, USA)
Automatic speech recognition (ASR) in the cloud allows the use of larger models and more powerful multi-channel signal processing front-ends than on-device processing. However, it also adds an inherent latency due to the transmission of the audio signal, especially when transmitting multiple channels of a microphone array. One way to reduce the network bandwidth requirements is client-side compression with a lossy codec such as Opus. However, this compression can have a detrimental effect, especially on multi-channel ASR front-ends, due to the distortion and loss of spatial information introduced by the codec. In this publication, we propose an improved approach for the compression of microphone array signals based on Opus, using a modified joint channel coding approach and additionally introducing a multi-channel spatial decorrelating transform to reduce redundancy in the transmission. We illustrate the effect of the proposed approach on the spatial information retained in multi-channel signals after compression, and evaluate the performance on far-field ASR with a multi-channel beamforming front-end. We demonstrate that our approach can lead to a 37.5% bitrate reduction or a 5.1% relative word error rate (WER) reduction for a fixed bitrate budget in a seven-channel setup.
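To make the idea of an across-channel decorrelating transform concrete, the following minimal Python sketch applies an orthonormal DCT over the channel axis of a simulated seven-channel array signal before per-channel coding. The choice of DCT, the channel count, and the energy-compaction check are illustrative assumptions only; the abstract does not specify the exact transform used in the paper.

```python
import numpy as np
from scipy.fft import dct, idct


def spatial_decorrelate(x: np.ndarray) -> np.ndarray:
    """Apply an orthonormal DCT across the channel axis.

    x: array of shape (num_channels, num_samples) holding the
    microphone-array signals. Returns an array of the same shape whose
    "channels" are (approximately) decorrelated transform coefficients.
    """
    return dct(x, axis=0, norm="ortho")


def spatial_recorrelate(y: np.ndarray) -> np.ndarray:
    """Invert the across-channel DCT on the receiving side."""
    return idct(y, axis=0, norm="ortho")


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Seven highly correlated channels: a common source plus small per-channel noise,
    # a crude stand-in for closely spaced microphones in a far-field scenario.
    source = rng.standard_normal(16000)
    mics = np.stack([source + 0.05 * rng.standard_normal(16000) for _ in range(7)])

    coeffs = spatial_decorrelate(mics)

    # Most of the energy concentrates in the first transform channel, so the
    # remaining channels could be encoded (e.g., with Opus) at lower bitrates
    # within a fixed overall bitrate budget.
    energy = (coeffs ** 2).mean(axis=1)
    print("per-channel energy share:", energy / energy.sum())

    # The transform itself is lossless up to floating-point error.
    assert np.allclose(spatial_recorrelate(coeffs), mics)
```

In this sketch the decorrelating step is a fixed transform; the same structure would apply with a data-dependent transform (e.g., a KLT estimated from the array geometry or signal statistics), with the inverse applied in the cloud before the beamforming front-end.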