This paper proposes a new speaker-dependent coding algorithm to efficiently compress a large speech database for corpus-based concatenative text-to-speech (TTS) engines while maintaining high fidelity. To achieve a high compression ratio and meet the fundamental requirements of concatenative TTS synthesizers, such as partial segment decoding and random access capability, we adopt a nonpredictive analysis-by-synthesis scheme for speaker-dependent parameter estimation and quantization. The spectral coefficients are quantized using a memoryless split vector quantization (VQ) approach that does not exploit frame correlation. Considering that the excitation signals of a specific speaker show low intra-variation, especially in voiced regions, the conventional adaptive codebook for pitch prediction is replaced by a speaker-dependent pitch-pulse codebook trained on a corpus of single-speaker speech signals. To further improve the coding efficiency, the proposed coder flexibly combines nonpredictive and predictive coding methods in accordance with the structure of the TTS system. By applying the proposed algorithm to a Korean TTS system, we obtain quality comparable to the G.729 speech coder while satisfying all the requirements of the TTS system. The results are verified by both objective and subjective quality measurements. In addition, the decoding complexity of the proposed coder is around 55% lower than that of G.729 Annex A.
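The memoryless split VQ mentioned above can be illustrated with a minimal sketch. The paper does not publish code, so everything below is an assumption for illustration: hypothetical per-sub-vector codebooks, a toy spectral vector, and nearest-neighbor search by squared Euclidean distance. The key property shown is that each frame is quantized and decoded from its indices alone, with no inter-frame prediction, which is what gives the random-access capability required by a concatenative TTS back end.

```python
import numpy as np

def split_vq_quantize(vec, codebooks):
    """Memoryless split VQ of one frame's spectral vector.

    The vector is cut into sub-vectors; each is matched independently
    against its own codebook (hypothetical example codebooks here).
    No state is carried between frames, so any frame can be decoded
    in isolation.
    """
    indices, quantized = [], []
    start = 0
    for cb in codebooks:                      # cb: (num_codewords, sub_dim)
        dim = cb.shape[1]
        sub = vec[start:start + dim]
        # Nearest codeword by squared Euclidean distance.
        idx = int(np.argmin(((cb - sub) ** 2).sum(axis=1)))
        indices.append(idx)
        quantized.append(cb[idx])
        start += dim
    return indices, np.concatenate(quantized)

def split_vq_decode(indices, codebooks):
    """Rebuild the quantized vector from the transmitted indices alone."""
    return np.concatenate([cb[i] for i, cb in zip(indices, codebooks)])

# Toy example: a 3-dimensional "spectral" vector split as 2 + 1.
cb1 = np.array([[0.0, 0.0], [1.0, 1.0]])
cb2 = np.array([[0.0], [2.0]])
idx, q = split_vq_quantize(np.array([0.9, 1.1, 1.8]), [cb1, cb2])
```

In a real coder the sub-vectors would be line spectral frequencies and the codebooks would be trained on the single-speaker corpus; the structure of the search, however, is exactly this independent per-split nearest-neighbor lookup.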
- Number of pages: 9
- Journal: IEEE Transactions on Audio, Speech, and Language Processing
- Publication status: Published - 2007 Feb
Bibliographical note
Funding Information: Manuscript received January 24, 2005; revised December 7, 2005. This work was supported by Voiceware Co., Ltd. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Ravi P. Ramachandran.
All Science Journal Classification (ASJC) codes
- Acoustics and Ultrasonics
- Electrical and Electronic Engineering