This paper proposes speaker-adaptive neural vocoders for parametric text-to-speech (TTS) systems. Recently proposed WaveNet-based neural vocoding systems successfully generate time-domain speech signals with an autoregressive framework. However, it remains a challenge to synthesize high-quality speech when the amount of a target speaker's training data is insufficient. To generate more natural speech signals under the constraint of limited training data, we propose a speaker adaptation task with an effective variation of neural vocoding models. In the proposed method, a speaker-independent training stage first captures universal attributes embedded in multiple speakers, and the trained model is then optimized to represent the specific characteristics of the target speaker. Experimental results verify that the proposed TTS systems with speaker-adaptive neural vocoders outperform those with traditional source-filter model-based vocoders and those with WaveNet vocoders trained either speaker-dependently or speaker-independently. In particular, our TTS system achieves MOS scores of 3.80 and 3.77 for the Korean male and Korean female speakers, respectively, even though only ten minutes of speech are used to train the model.
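The two-stage recipe in the abstract, pretraining on pooled multi-speaker data and then fine-tuning on a small target-speaker corpus, can be illustrated with a deliberately minimal sketch. The toy one-parameter linear "vocoder", the synthetic speaker data, and all names below are illustrative assumptions; the paper itself uses WaveNet-style autoregressive neural vocoders.

```python
# Minimal sketch of speaker-independent pretraining followed by
# speaker adaptation, under toy assumptions (not the paper's model).
import random

random.seed(0)

def make_speaker_data(slope, n):
    # Toy "speaker": y = slope * x + noise stands in for acoustic data.
    return [(i / n, slope * (i / n) + random.gauss(0, 0.05)) for i in range(n)]

def mse(w, data):
    # Mean squared error of the one-parameter model y_hat = w * x.
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def train(w, data, steps, lr):
    # Plain gradient descent on the MSE objective.
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

# Stage 1: speaker-independent training on pooled multi-speaker data.
pooled = make_speaker_data(1.0, 200) + make_speaker_data(2.0, 200)
w_si = train(0.0, pooled, 500, 0.5)

# Stage 2: adapt to the target speaker using only a small corpus,
# analogous to the ten-minute constraint in the paper.
target_small = make_speaker_data(3.0, 20)
w_adapted = train(w_si, target_small, 200, 0.5)

# Held-out target-speaker data: adaptation should reduce the error
# relative to the unadapted speaker-independent model.
target_test = make_speaker_data(3.0, 100)
print(mse(w_si, target_test), mse(w_adapted, target_test))
```

The design choice mirrored here is that the speaker-independent stage supplies a better starting point than random initialization, so the adaptation stage can reach the target speaker with far less data.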
Title of host publication: IEEE 22nd International Workshop on Multimedia Signal Processing, MMSP 2020
Publisher: Institute of Electrical and Electronics Engineers Inc.
Publication status: Published - 2020 Sep 21
Event: 22nd IEEE International Workshop on Multimedia Signal Processing, MMSP 2020 - Virtual, Tampere, Finland
Duration: 2020 Sep 21 → 2020 Sep 24
Bibliographical note: Publisher Copyright © 2020 IEEE.
All Science Journal Classification (ASJC) codes
- Signal Processing
- Media Technology