A Unified Approach for Audio Characterization and its Application to Speaker Recognition
Systems designed to solve speech processing tasks such as speech or speaker recognition, language identification, or emotion detection are known to be affected by the recording conditions of the acoustic signal, such as the channel, background noise, and reverberation. Knowledge of the nuisance characteristics present in the signal can be used to improve the performance of the system. In some cases, the nature of these nuisance characteristics is known a priori, but in most practical cases it is not. Most approaches for automatically detecting the characteristics of a signal are designed for a specific type of effect: noise, reverberation, language, type of channel, and so on. We propose a method for detecting the audio characteristics of a signal in a unified way, based on iVectors. We show results for the detector itself and for its use as metadata during calibration of a state-of-the-art speaker recognition system based on iVectors extracted from Mel frequency cepstral coefficients. Results show relative gains in equal error rate of up to 15% across a variety of recording conditions.
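The sketch below is not the authors' implementation; it is a minimal Python illustration of the two-stage idea the abstract describes: first, classify recording conditions from iVectors with a generic backend, and second, feed the resulting condition posteriors as side information ("metadata") into a score-calibration stage of a speaker recognition system. All data, dimensions, and variable names (e.g., `train_ivectors`, `condition_labels`, `trial_scores`) are hypothetical placeholders, and logistic regression stands in for whichever detector and calibration backends the paper actually uses.

```python
# Hedged sketch: iVector-based condition detection used as metadata for
# calibration of speaker-recognition scores. Synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# --- Stage 1: unified condition detector on iVectors (hypothetical data) ---
train_ivectors = rng.normal(size=(500, 400))      # 400-dim iVectors
condition_labels = rng.integers(0, 4, size=500)   # e.g., clean/noisy/reverb/telephone

condition_clf = LogisticRegression(max_iter=1000)
condition_clf.fit(train_ivectors, condition_labels)

# --- Stage 2: condition-aware calibration of speaker-recognition scores ---
trial_scores = rng.normal(size=(200, 1))          # raw speaker-recognition scores
trial_ivectors = rng.normal(size=(200, 400))      # iVectors for the same trials
trial_targets = rng.integers(0, 2, size=200)      # 1 = same-speaker trial

# Condition posteriors act as the "metadata" appended to the raw score.
condition_posteriors = condition_clf.predict_proba(trial_ivectors)
calib_features = np.hstack([trial_scores, condition_posteriors])

calibrator = LogisticRegression(max_iter=1000)
calibrator.fit(calib_features, trial_targets)
calibrated_llr = calibrator.decision_function(calib_features)
```

In this sketch the calibration weights can adapt to the detected condition, which is one plausible way a unified condition detector could yield the kind of equal-error-rate improvements the abstract reports; the actual mechanism used in the paper may differ.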