Between-Class Covariance Correction For Linear Discriminant Analysis in Language Recognition
Abhinav Misra, Qian Zhang, Finnian Kelly and John H.L. Hansen
Linear Discriminant Analysis (LDA) is one of the most widely used channel compensation techniques in current speaker and language recognition systems. In this study, we propose a technique of Between-Class Covariance Correction (BCC) to improve language recognition performance. This approach builds on the idea of Within-Class Covariance Correction (WCC), which was introduced as a means to compensate for mismatch between development and test data in speaker recognition. In BCC, we compute eigendirections representing the multi-modal distributions of language i-vectors, and show that incorporating these directions in LDA leads to an improvement in recognition performance. By considering each cluster in the multi-modal i-vector distribution as a separate class, we use the between- and within-cluster covariance matrices to update the global between-language covariance. This is in contrast to WCC, in which the within-class covariance is updated. Using the proposed method, a relative overall improvement of 8.4% in Equal Error Rate (EER) is obtained on the 2015 NIST Language Recognition Evaluation (LRE) data. Our approach offers insights toward addressing the challenging problem of mismatch compensation, which has much wider applications in both speaker and language recognition.
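The core idea above can be sketched numerically. The following is a minimal illustration, not the authors' implementation: the function names (`scatter_matrices`, `lda_with_bcc`), the mixing weight `alpha`, and the specific update rule (adding scaled between-cluster scatter to the between-language scatter before solving the LDA eigenproblem) are all assumptions for illustration; the paper's exact correction formula may differ.

```python
import numpy as np

def scatter_matrices(X, labels):
    """Between- and within-class scatter matrices for rows of X grouped by labels."""
    mu = X.mean(axis=0)
    d = X.shape[1]
    Sb = np.zeros((d, d))
    Sw = np.zeros((d, d))
    for c in np.unique(labels):
        Xc = X[labels == c]
        mc = Xc.mean(axis=0)
        diff = (mc - mu)[:, None]
        Sb += len(Xc) * diff @ diff.T          # between-class contribution
        Sw += (Xc - mc).T @ (Xc - mc)          # within-class contribution
    return Sb, Sw

def lda_with_bcc(X, lang_labels, cluster_labels, alpha=0.5, n_dims=2):
    """LDA projection with a BCC-style correction (illustrative sketch):
    the between-language scatter is augmented with between-cluster scatter
    computed from cluster assignments of the i-vectors.
    `alpha` is a hypothetical mixing weight, not from the paper."""
    Sb_lang, Sw_lang = scatter_matrices(X, lang_labels)
    Sb_clus, _ = scatter_matrices(X, cluster_labels)
    # Corrected between-class scatter: inject multi-modal cluster directions
    Sb = Sb_lang + alpha * Sb_clus
    # Solve the inv(Sw) @ Sb eigenproblem; small ridge for numerical stability
    d = X.shape[1]
    M = np.linalg.solve(Sw_lang + 1e-6 * np.eye(d), Sb)
    evals, evecs = np.linalg.eig(M)
    order = np.argsort(-evals.real)
    return evecs.real[:, order[:n_dims]]       # top n_dims discriminant directions
```

In practice the cluster labels would come from an unsupervised step (e.g. k-means over development i-vectors); here they are simply passed in so the covariance bookkeeping stays visible.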