AvaTr: One-Shot Speaker Extraction with Transformers
Shell Xu Hu (Upload AI, USA), Md. Rifat Arefin (Upload AI, USA), Viet-Nhat Nguyen (Upload AI, USA), Alish Dipani (Upload AI, USA), Xaq Pitkow (Upload AI, USA), Andreas Savas Tolias (Upload AI, USA)
To extract the voice of a target speaker mixed with a variety of other sounds, such as white or ambient noise and the voices of interfering speakers, we extend the Transformer network [1] to attend to the information most relevant to the target speaker, given the characteristics of his or her voice as a form of contextual information. The idea has a natural interpretation in terms of selective attention theory [2]. Specifically, we propose two models that incorporate the voice characteristics into the Transformer, based on different insights into where feature selection should take place. Both models yield excellent performance, on par with or better than published state-of-the-art models on the speaker extraction task, including separating the speech of novel speakers not seen during training.
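As a rough illustration of the general idea, the sketch below shows one way a Transformer attention layer can be conditioned on a target-speaker embedding so that attention is steered toward frames matching that speaker's voice. This is not the authors' released implementation; the class name `SpeakerConditionedAttention`, the additive query conditioning, and parameters such as `d_model` and `spk_emb` are illustrative assumptions.

```python
# Minimal sketch of speaker-conditioned Transformer attention (PyTorch).
# Assumed, not from the paper: the module name, additive conditioning,
# and all dimensions.
import torch
import torch.nn as nn


class SpeakerConditionedAttention(nn.Module):
    """Multi-head attention over mixture features, with queries biased by a
    target-speaker embedding so the layer attends to the target's voice."""

    def __init__(self, d_model: int = 256, n_heads: int = 4):
        super().__init__()
        # Map the speaker embedding into the model's feature space.
        self.spk_proj = nn.Linear(d_model, d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, mix: torch.Tensor, spk_emb: torch.Tensor) -> torch.Tensor:
        # mix:     (batch, time, d_model) features of the noisy mixture
        # spk_emb: (batch, d_model)       embedding of the target speaker's voice
        cond = self.spk_proj(spk_emb).unsqueeze(1)  # (batch, 1, d_model)
        query = mix + cond                          # inject speaker context into the queries
        out, _ = self.attn(query, mix, mix)         # attend to speaker-relevant frames
        return out


# Usage sketch with random tensors standing in for real features.
x = torch.randn(2, 100, 256)  # mixture features
e = torch.randn(2, 256)       # speaker embedding from an enrollment utterance
layer = SpeakerConditionedAttention()
y = layer(x, e)               # (2, 100, 256)
```

The paper's two proposed models differ in where this kind of conditioning takes place; the additive-query variant above is just one plausible placement.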