Parameter enhancement for MELP speech codec in noisy communication environment

Min Jae Hwang, Hong Goo Kang

Research output: Contribution to journal › Conference article › peer-review

1 Citation (Scopus)

Abstract

In this paper, we propose a deep learning (DL)-based parameter enhancement method for the mixed excitation linear prediction (MELP) speech codec in a noisy communication environment. Unlike conventional speech enhancement modules, which are designed to obtain a clean speech signal by removing noise components before speech codec processing, the proposed method directly enhances the codec parameters on either the encoder or the decoder side. Because the proposed method is implemented with a small network and requires none of the additional processes of conventional enhancement systems, e.g., time-frequency (TF) analysis/synthesis modules, its computational complexity is very low. By enhancing the noise-corrupted codec parameters with the proposed DL framework, we achieve an enhancement system that is much simpler and faster than conventional TF mask-based speech enhancement methods, while its output quality remains similar.
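The core idea above, a small network mapping noise-corrupted codec parameters directly to enhanced ones, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the parameter layout (10 line spectral frequencies plus gain, pitch, and voicing terms), the hidden size, the residual connection, and the random weights standing in for trained ones are all assumptions.

```python
import numpy as np

# Hypothetical per-frame MELP parameter vector (dimensions are
# illustrative, not the codec's exact bit allocation):
# 10 line spectral frequencies, 2 gains, 1 pitch, 1 voicing value.
PARAM_DIM = 14
HIDDEN_DIM = 32  # "small network" per the abstract; size is an assumption

rng = np.random.default_rng(0)

# Randomly initialized weights stand in for trained ones.
W1 = rng.standard_normal((PARAM_DIM, HIDDEN_DIM)) * 0.1
b1 = np.zeros(HIDDEN_DIM)
W2 = rng.standard_normal((HIDDEN_DIM, PARAM_DIM)) * 0.1
b2 = np.zeros(PARAM_DIM)

def enhance_frame(noisy_params: np.ndarray) -> np.ndarray:
    """Map one frame of noise-corrupted codec parameters to an
    enhanced estimate: one hidden layer (ReLU) plus a residual
    connection, so the network only has to learn a correction."""
    h = np.maximum(0.0, noisy_params @ W1 + b1)
    correction = h @ W2 + b2
    return noisy_params + correction

# One frame of synthetic noisy parameters.
frame = rng.standard_normal(PARAM_DIM)
enhanced = enhance_frame(frame)
print(enhanced.shape)  # (14,)
```

Operating on the 14-dimensional parameter vector per frame, rather than on spectrogram-sized T-F representations, is what keeps the per-frame cost low in the scheme the abstract describes.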

Original language: English
Pages (from-to): 3391-3395
Number of pages: 5
Journal: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
Volume: 2019-September
DOIs
Publication status: Published - 2019
Event: 20th Annual Conference of the International Speech Communication Association: Crossroads of Speech and Language, INTERSPEECH 2019 - Graz, Austria
Duration: 2019 Sep 15 - 2019 Sep 19

Bibliographical note

Funding Information:
This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science and ICT (2019-11-0124).

All Science Journal Classification (ASJC) codes

  • Language and Linguistics
  • Human-Computer Interaction
  • Signal Processing
  • Software
  • Modelling and Simulation

