A Universal Multi-Speaker Multi-Style Text-to-Speech via Disentangled Representation Learning based on Rényi Divergence Minimization
(3-minute introduction)
Dipjyoti Paul (University of Crete, Greece), Sankar Mukherjee (IIT, Italy), Yannis Pantazis (FORTH, Greece), Yannis Stylianou (University of Crete, Greece)
In this paper, we present a universal multi-speaker, multi-style Text-to-Speech (TTS) synthesis system that is able to generate speech from text with speaker characteristics and a speaking style similar to those of a given reference signal. Training is conducted on non-parallel data and in an unsupervised manner, i.e., neither style annotations nor speaker labels are required. To avoid leaking content information into the style embeddings (referred to as “content leakage”) and leaking speaker information into the style embeddings (referred to as “style leakage”), we propose a novel Rényi Divergence based Disentangled Representation framework trained through adversarial learning. Similar to mutual information minimization, the proposed approach explicitly estimates, via a variational formula, and then minimizes the Rényi divergence between the joint distribution and the product of marginals for the content-style and style-speaker pairs. In this way, the content, style and speaker spaces become representative and (ideally) independent of each other. Our proposed system greatly reduces content leakage, improving the word error rate by approximately 17–19% relative to the baseline system. In terms of MOS speech quality, the proposed algorithm achieves a relative improvement of about 16–20%, while MOS style similarity improves by about 15% relative to the baseline.
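For concreteness, the disentanglement objective described above can be sketched as follows; the notation here is ours rather than the paper's. Let $z_c$, $z_s$ and $z_{spk}$ denote the content, style and speaker embeddings, and let $R_\alpha$ be the Rényi divergence of order $\alpha$,

$$
R_\alpha(Q \,\|\, P) \;=\; \frac{1}{\alpha-1}\,\log \mathbb{E}_{P}\!\left[\left(\frac{dQ}{dP}\right)^{\alpha}\right].
$$

The framework then penalizes the pairwise dependencies by minimizing a loss of the form

$$
\mathcal{L}_{\mathrm{disent}} \;=\; R_\alpha\!\big(P_{z_c, z_s}\,\big\|\,P_{z_c}\!\otimes P_{z_s}\big) \;+\; R_\alpha\!\big(P_{z_s, z_{spk}}\,\big\|\,P_{z_s}\!\otimes P_{z_{spk}}\big),
$$

where each divergence is estimated with a neural critic via a variational formula and minimized adversarially: the critic is trained to tighten the estimate while the encoders are trained to reduce it. As $\alpha \to 1$, $R_\alpha$ reduces to the KL divergence, so each term recovers the mutual information between the corresponding pair of embeddings, which is the sense in which the approach parallels mutual information minimization.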