Audio-visual speech recognition (AVSR), which uses both the acoustic and the visual signals of speech, has received attention because of its robustness in noisy environments. In this paper, we present an AVSR system based on a late integration scheme, whose robustness under various noise conditions is improved by enhancing the performance of the three parts composing the system. First, we improve the visual subsystem by applying a stochastic optimization method to the hidden Markov models used as the speech recognizer. Second, we propose a new method of exploiting the dynamic characteristics of speech to improve the robustness of the acoustic subsystem. Third, the acoustic and visual subsystems are effectively integrated by neural networks to produce robust final recognition results. We demonstrate the performance of the proposed methods via speaker-independent isolated word recognition experiments. The results show that the proposed system is more robust than the conventional system under various noise conditions, without a priori knowledge about the noise contained in the speech.
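The late integration described above can be illustrated with a minimal sketch: each subsystem produces a per-word score vector, and a small neural network maps the concatenated scores to a fused posterior over the vocabulary. This is not the paper's implementation; the function names, network size, and (untrained) weights below are hypothetical, chosen only to show the fusion step.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def late_fusion_mlp(acoustic_scores, visual_scores, W1, b1, W2, b2):
    """Fuse per-word scores from the two subsystems with a one-hidden-layer MLP.

    The network sees only subsystem outputs (late integration), not raw
    acoustic or visual features.
    """
    x = np.concatenate([acoustic_scores, visual_scores])
    h = np.tanh(W1 @ x + b1)       # hidden layer
    return softmax(W2 @ h + b2)    # fused posterior over the word vocabulary

# Toy example: 3-word vocabulary, illustrative random (untrained) weights.
rng = np.random.default_rng(0)
n_words, n_hidden = 3, 8
W1 = rng.standard_normal((n_hidden, 2 * n_words)) * 0.1
b1 = np.zeros(n_hidden)
W2 = rng.standard_normal((n_words, n_hidden)) * 0.1
b2 = np.zeros(n_words)

acoustic = np.array([0.7, 0.2, 0.1])  # e.g. normalized HMM likelihoods
visual = np.array([0.5, 0.4, 0.1])
posterior = late_fusion_mlp(acoustic, visual, W1, b1, W2, b2)
```

In practice, the fusion weights would be trained so that the network learns how much to trust each subsystem; under noisy acoustic conditions the visual scores would then dominate the fused decision.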
Bibliographical note
Funding Information:
Manuscript received September 26, 2006; revised December 11, 2007. First published June 13, 2008; last published July 9, 2008 (projected). This work was supported by Grant R01-2003-000-10829-0 from the Basic Research Program of the Korea Science and Engineering Foundation and by the Brain Korea 21 Project, The School of Information Technology, KAIST, in 2007. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Bo Shen.
All Science Journal Classification (ASJC) codes
- Signal Processing
- Media Technology
- Computer Science Applications
- Electrical and Electronic Engineering