Abstract
In this paper, we propose a deep learning method for extracting effective speaker representations from a speech signal. Since a speech signal contains identity-unrelated information such as text content, emotion, and background noise, we train the model so that the extracted features capture speaker-related information while excluding speaker-unrelated information. Specifically, we propose an auto-encoder-based disentanglement method that outputs both speaker-related and speaker-unrelated embeddings using effective loss functions. To further improve reconstruction performance in the decoding process, we also introduce a discriminator, as popularly used in the Generative Adversarial Network (GAN) framework. Since improved decoding capability helps preserve speaker information and supports disentanglement, it leads to better speaker verification performance. Experimental results demonstrate the effectiveness of the proposed method by improving the Equal Error Rate (EER) on the benchmark dataset VoxCeleb1.
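To make the idea concrete, below is a minimal sketch of such a disentanglement setup, not the paper's actual architecture: all module names, layer sizes, loss weights, and the choice of PyTorch are illustrative assumptions. One encoder produces a speaker embedding trained with a speaker-classification loss, a second encoder produces a residual (speaker-unrelated) embedding, a decoder reconstructs the input from both, and a discriminator supplies a GAN-style adversarial loss on the reconstruction.

```python
# Hypothetical sketch of auto-encoder disentanglement with a GAN discriminator.
# Dimensions, module names, and loss weighting are assumptions, not the paper's.
import torch
import torch.nn as nn

FEAT_DIM, SPK_DIM, RES_DIM = 40, 128, 64   # assumed feature/embedding sizes

class Encoder(nn.Module):
    def __init__(self, out_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEAT_DIM, 256), nn.ReLU(),
            nn.Linear(256, out_dim),
        )
    def forward(self, x):                   # x: (batch, frames, FEAT_DIM)
        return self.net(x).mean(dim=1)      # temporal pooling -> utterance embedding

class Decoder(nn.Module):
    def __init__(self, frames):
        super().__init__()
        self.frames = frames
        self.net = nn.Sequential(
            nn.Linear(SPK_DIM + RES_DIM, 256), nn.ReLU(),
            nn.Linear(256, frames * FEAT_DIM),
        )
    def forward(self, spk, res):            # reconstruct from both embeddings
        z = torch.cat([spk, res], dim=-1)
        return self.net(z).view(-1, self.frames, FEAT_DIM)

class Discriminator(nn.Module):             # scores real vs. reconstructed features
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEAT_DIM, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )
    def forward(self, x):
        return self.net(x).mean(dim=1)      # one logit per utterance

# One simplified training step for the generator side: reconstruction loss,
# speaker-classification loss on the speaker branch, and an adversarial loss
# that pushes reconstructions to look real to the discriminator.
spk_enc, res_enc = Encoder(SPK_DIM), Encoder(RES_DIM)
dec, disc = Decoder(frames=200), Discriminator()
spk_head = nn.Linear(SPK_DIM, 1211)         # e.g. VoxCeleb1 has 1,211 training speakers

x = torch.randn(8, 200, FEAT_DIM)           # dummy batch of feature sequences
labels = torch.randint(0, 1211, (8,))

spk, res = spk_enc(x), res_enc(x)
recon = dec(spk, res)

bce = nn.BCEWithLogitsLoss()
loss_recon = nn.functional.mse_loss(recon, x)
loss_spk = nn.functional.cross_entropy(spk_head(spk), labels)
loss_adv = bce(disc(recon), torch.ones(8, 1))   # try to fool the discriminator
loss = loss_recon + loss_spk + loss_adv
loss.backward()
```

In a full training loop the discriminator would be updated in alternation with the encoders and decoder, as is standard in GAN training; at test time only the speaker embedding `spk` would be used for verification scoring.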
| Original language | English |
| --- | --- |
| Pages (from-to) | 447-453 |
| Number of pages | 7 |
| Journal | Journal of the Acoustical Society of Korea |
| Volume | 39 |
| Issue number | 5 |
| DOIs | |
| Publication status | Published - 2020 |
Bibliographical note
Publisher Copyright: © 2020 The Acoustical Society of Korea.
All Science Journal Classification (ASJC) codes
- Signal Processing
- Instrumentation
- Acoustics and Ultrasonics
- Applied Mathematics
- Speech and Hearing