Three-class Overlapped Speech Detection using a Convolutional Recurrent Neural Network
Jee-weon Jung (Naver, Korea), Hee-Soo Heo (Naver, Korea), Youngki Kwon (Naver, Korea), Joon Son Chung (Naver, Korea), Bong-Jin Lee (Naver, Korea)
In this work, we propose an overlapped speech detection system trained as a three-class classifier. Unlike conventional systems that perform binary classification as to whether or not a frame contains overlapped speech, the proposed approach classifies each frame into three classes: non-speech, single-speaker speech, and overlapped speech. Training the network with this more detailed label definition helps the model learn to determine the number of speakers present in a given frame. We explore a convolutional recurrent neural network architecture to benefit from both the convolutional layers' capability to model local patterns and the recurrent layers' ability to model sequential information. The proposed overlapped speech detection model establishes state-of-the-art performance on the DIHARD II evaluation set, with a precision of 0.6648 and a recall of 0.3222, a 20% increase in recall along with higher precision. In addition, we introduce a simple approach that applies the proposed overlapped speech detection model to speaker diarization; this system ranked third in Track 1 of the DIHARD III challenge.
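To make the described architecture concrete, below is a minimal PyTorch sketch of a frame-level three-class CRNN classifier: a convolutional front-end over time-frequency features followed by a recurrent layer and a per-frame softmax over the three classes. The class name `CRNN_OSD`, the layer counts, and all dimensions are illustrative assumptions, not the exact configuration from the paper.

```python
import torch
import torch.nn as nn

class CRNN_OSD(nn.Module):
    """Sketch of a CRNN for three-class overlapped speech detection:
    0 = non-speech, 1 = single-speaker speech, 2 = overlapped speech.
    Layer sizes are assumptions for illustration only."""

    def __init__(self, n_mels=64, hidden=128, n_classes=3):
        super().__init__()
        # Convolutional front-end: models local time-frequency patterns.
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.MaxPool2d((1, 2)),   # pool frequency only, keep frame rate
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.BatchNorm2d(64),
            nn.ReLU(),
            nn.MaxPool2d((1, 2)),
        )
        # Recurrent layer: models sequential (temporal) context.
        self.rnn = nn.LSTM(64 * (n_mels // 4), hidden,
                           batch_first=True, bidirectional=True)
        # Per-frame classification head over the three classes.
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):
        # x: (batch, time, n_mels) log-mel features
        h = self.conv(x.unsqueeze(1))          # (batch, ch, time, n_mels // 4)
        h = h.permute(0, 2, 1, 3).flatten(2)   # (batch, time, ch * n_mels // 4)
        h, _ = self.rnn(h)                     # (batch, time, 2 * hidden)
        return self.fc(h)                      # per-frame logits over 3 classes

model = CRNN_OSD()
logits = model(torch.randn(2, 200, 64))        # 2 utterances, 200 frames each
pred = logits.argmax(-1)                       # frame-level class decisions
```

Such a model would typically be trained with a frame-level cross-entropy loss over the three labels; at inference, frames predicted as class 2 mark overlapped speech regions.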