Abstract
In this paper, we address the problem of separating individual speech signals from videos using audio-visual neural processing. Most conventional approaches rely on frame-wise matching criteria to extract shared information between co-occurring audio and video, so their performance depends heavily on the accuracy of audio-visual synchronization and the effectiveness of their representations. To overcome the frame discontinuity problem between the two modalities caused by transmission delay mismatch or jitter, we propose a cross-modal affinity network (CaffNet) that learns global correspondence as well as locally varying affinities between the audio and visual streams. Because the global term provides stability over a temporal sequence at the utterance level, it resolves the label permutation problem characterized by inconsistent assignments. By extending the proposed cross-modal affinity to the complex network, we further improve separation performance in the complex spectral domain. Experimental results verify that the proposed methods outperform conventional ones on various datasets, demonstrating their advantages in real-world scenarios.
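To make the idea of cross-modal affinity concrete, the sketch below shows one way an affinity matrix between audio and visual frame embeddings could be computed and used to gather visual context for each audio frame. This is a minimal illustration, not the authors' CaffNet implementation: the module name, feature dimensions, projection layers, and the mean-pooled "global" score are assumptions made purely for demonstration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossModalAffinity(nn.Module):
    """Toy cross-modal affinity block (illustrative only; not the paper's exact design)."""
    def __init__(self, audio_dim=256, visual_dim=512, embed_dim=128):
        super().__init__()
        # Project both streams into a shared embedding space (assumed linear maps).
        self.proj_a = nn.Linear(audio_dim, embed_dim)
        self.proj_v = nn.Linear(visual_dim, embed_dim)
        self.embed_dim = embed_dim

    def forward(self, audio_feat, visual_feat):
        # audio_feat: (B, Ta, audio_dim), visual_feat: (B, Tv, visual_dim)
        a = self.proj_a(audio_feat)                                   # (B, Ta, E)
        v = self.proj_v(visual_feat)                                  # (B, Tv, E)

        # Locally varying affinities: similarity of every audio frame to every
        # video frame, so the two streams need not be strictly frame-synchronous.
        affinity = torch.einsum('bte,bse->bts', a, v) / self.embed_dim ** 0.5
        local_weights = F.softmax(affinity, dim=-1)                   # (B, Ta, Tv)

        # A crude utterance-level ("global") score per video frame, obtained here
        # by pooling the affinity map over the audio axis (an assumption).
        global_weights = F.softmax(affinity.mean(dim=1), dim=-1)      # (B, Tv)

        # Fuse: each audio frame attends to its most correlated visual frames.
        attended_v = torch.einsum('bts,bse->bte', local_weights, v)   # (B, Ta, E)
        return affinity, attended_v, global_weights

# Dummy usage: batch of 2 clips, 100 audio frames, 25 video frames.
block = CrossModalAffinity()
aff, fused, glob = block(torch.randn(2, 100, 256), torch.randn(2, 25, 512))
```

In such a scheme, the fused features would typically feed a separation head that predicts a spectral mask; the paper's complex-domain extension (CaffNet-C) operates on complex spectrograms, which is not reproduced in this sketch.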
Original language | English |
---|---|
Title of host publication | Proceedings - 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2021 |
Publisher | IEEE Computer Society |
Pages | 1336-1345 |
Number of pages | 10 |
ISBN (Electronic) | 9781665445092 |
DOIs | |
Publication status | Published - 2021 |
Event | 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2021 - Virtual, Online, United States; Duration: 2021 Jun 19 → 2021 Jun 25 |
Publication series
Name | Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition |
---|---|
ISSN (Print) | 1063-6919 |
Conference
Conference | 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2021 |
---|---|
Country/Territory | United States |
City | Virtual, Online |
Period | 2021 Jun 19 → 2021 Jun 25 |
Bibliographical note
Funding Information: This research was supported by the Yonsei University Research Fund of 2021 (2021-22-0001).
Funding Information:
∗ Both authors contributed equally to this work. † Corresponding authors. This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (NRF-2021R1A2C2006703).
Publisher Copyright:
© 2021 IEEE
All Science Journal Classification (ASJC) codes
- Software
- Computer Vision and Pattern Recognition