SC-GlowTTS: an Efficient Zero-Shot Multi-Speaker Text-To-Speech Model
Edresson Casanova (Universidade de São Paulo, Brazil), Christopher Shulby (DefinedCrowd, USA), Eren Gölge (Coqui, Germany), Nicolas Michael Müller (Fraunhofer AISEC, Germany), Frederico Santos de Oliveira (Universidade Federal de Goiás, Brazil), Arnaldo Candido Jr. (Universidade Tecnológica Federal do Paraná, Brazil), Anderson da Silva Soares (Universidade Federal de Goiás, Brazil), Sandra Maria Aluisio (Universidade de São Paulo, Brazil), Moacir Antonelli Ponti (Universidade de São Paulo, Brazil)
In this paper, we propose SC-GlowTTS: an efficient zero-shot multi-speaker text-to-speech model that improves voice similarity for speakers unseen during training. We propose a speaker-conditional architecture built around a flow-based decoder that works in a zero-shot scenario. As text encoders, we explore a dilated residual convolutional encoder, a gated convolutional encoder, and a transformer-based encoder. Additionally, we show that fine-tuning a GAN-based vocoder on the spectrograms predicted by the TTS model for the training dataset can significantly improve the similarity and speech quality for new speakers. Our model converges using only 11 speakers, reaching state-of-the-art results for similarity to new speakers, as well as high speech quality.
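To make the speaker-conditioning idea concrete, the sketch below shows one simplified way a flow-based decoder can be conditioned on an embedding from an external speaker encoder, which is what enables synthesis for speakers unseen during training. It is a minimal illustration, not the authors' implementation: the `SpeakerEncoder` and `ConditionalCoupling` classes, the dimensions, and the single coupling layer are assumptions for clarity, whereas SC-GlowTTS uses a full Glow-TTS-style flow decoder and a pre-trained speaker encoder.

```python
# Minimal sketch (assumptions, not the paper's code): mel frames are mapped to a
# latent through an affine coupling layer whose scale/shift network also receives
# a speaker embedding, so the same flow can be inverted for an unseen speaker
# given only a short reference clip.
import torch
import torch.nn as nn


class SpeakerEncoder(nn.Module):
    """Stand-in for an external speaker encoder: reference mels -> fixed embedding."""
    def __init__(self, n_mels=80, emb_dim=256):
        super().__init__()
        self.rnn = nn.GRU(n_mels, emb_dim, batch_first=True)

    def forward(self, ref_mels):                     # (B, T_ref, n_mels)
        _, h = self.rnn(ref_mels)
        return torch.nn.functional.normalize(h[-1], dim=-1)  # (B, emb_dim)


class ConditionalCoupling(nn.Module):
    """Affine coupling layer whose transform is conditioned on the speaker embedding."""
    def __init__(self, channels=80, emb_dim=256, hidden=192):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(channels // 2 + emb_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, channels),             # predicts log-scale and shift
        )

    def forward(self, x, spk_emb):                   # x: (B, T, channels)
        xa, xb = x.chunk(2, dim=-1)
        cond = torch.cat([xa, spk_emb.unsqueeze(1).expand(-1, x.size(1), -1)], -1)
        log_s, t = self.net(cond).chunk(2, dim=-1)
        zb = xb * torch.exp(log_s) + t               # invertible affine transform
        logdet = log_s.sum(dim=(1, 2))               # needed for the flow likelihood
        return torch.cat([xa, zb], dim=-1), logdet

    def inverse(self, z, spk_emb):                   # used at synthesis time
        za, zb = z.chunk(2, dim=-1)
        cond = torch.cat([za, spk_emb.unsqueeze(1).expand(-1, z.size(1), -1)], -1)
        log_s, t = self.net(cond).chunk(2, dim=-1)
        xb = (zb - t) * torch.exp(-log_s)
        return torch.cat([za, xb], dim=-1)


if __name__ == "__main__":
    spk_enc, flow = SpeakerEncoder(), ConditionalCoupling()
    ref = torch.randn(2, 120, 80)                    # reference mels from an unseen speaker
    mels = torch.randn(2, 200, 80)                   # target mel-spectrogram frames
    emb = spk_enc(ref)
    z, logdet = flow(mels, emb)                      # forward pass for likelihood training
    recon = flow.inverse(z, emb)                     # inverse pass recovers the input
    print(z.shape, logdet.shape, torch.allclose(recon, mels, atol=1e-5))
```

Because the coupling transform is invertible for any fixed embedding, synthesis for a new speaker only requires running the flow in the inverse direction with that speaker's embedding; this is the property the speaker-conditional decoder relies on in the zero-shot setting.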