Learning to localize sound sources in visual scenes: Analysis and applications

Arda Senocak, Tae-Hyun Oh, Junsik Kim, Ming-Hsuan Yang, In So Kweon

Research output: Contribution to journal › Article › peer-review

Abstract

Visual events are usually accompanied by sounds in our daily lives. But can a machine learn to correlate a visual scene with its sound, and localize the sound source, merely by observing them as humans do? To investigate this question of empirical learnability, we first present a novel unsupervised algorithm for localizing sound sources in visual scenes. To this end, we develop a two-stream network architecture that handles each modality with an attention mechanism for sound source localization. The network naturally reveals the localized response in the scene without human annotation. In addition, we build a new sound source dataset for performance evaluation. Our empirical evaluation, however, shows that the unsupervised method draws false conclusions in some cases. We show that these failures cannot be fixed without human prior knowledge, as they stem from the well-known mismatch between correlation and causality. To address this issue, the generality of our two-stream architecture allows us to extend the network to supervised and semi-supervised settings with a simple modification. We show that the false conclusions can be effectively corrected even with a small amount of supervision, i.e., in the semi-supervised setup. Furthermore, we demonstrate the versatility of the learned audio and visual embeddings for cross-modal content alignment, and we extend the proposed algorithm to a new application: sound-saliency-based automatic camera view panning in 360° videos.
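The abstract describes the two-stream attention architecture only at a high level. As a rough illustration of the idea, the sketch below shows one plausible way an attention-based audio-visual localization module could look in PyTorch: an audio embedding is compared against every spatial location of a visual feature map, and the resulting similarity map serves as the localization response. The class name, backbones, and layer sizes are illustrative assumptions, not the authors' implementation.

    # Minimal sketch (not the authors' released code) of a two-stream
    # audio-visual network with an attention-based localization map.
    # Backbones and layer sizes below are illustrative placeholders.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TwoStreamLocalizer(nn.Module):  # hypothetical name
        def __init__(self, embed_dim=512):
            super().__init__()
            # Visual stream: any conv backbone that keeps a spatial grid,
            # producing features of shape (B, embed_dim, H, W).
            self.visual_net = nn.Sequential(
                nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(64, embed_dim, 3, stride=2, padding=1), nn.ReLU(),
            )
            # Sound stream: embeds a log-spectrogram (B, 1, T, F_bins)
            # into a single vector of size embed_dim.
            self.audio_net = nn.Sequential(
                nn.Conv2d(1, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(64, embed_dim, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )

        def forward(self, image, spectrogram):
            v = F.normalize(self.visual_net(image), dim=1)       # (B, D, H, W)
            a = F.normalize(self.audio_net(spectrogram), dim=1)  # (B, D)
            # Attention: cosine similarity between the audio embedding and
            # every spatial location; softmax over the grid gives the map.
            sim = torch.einsum('bdhw,bd->bhw', v, a)             # (B, H, W)
            attn = F.softmax(sim.flatten(1), dim=1).view_as(sim)
            # Attention-weighted visual vector, comparable to the audio one.
            z = torch.einsum('bdhw,bhw->bd', v, attn)            # (B, D)
            return attn, z, a

    model = TwoStreamLocalizer()
    attn, z, a = model(torch.randn(2, 3, 224, 224),
                       torch.randn(2, 1, 96, 64))

In a sketch like this, the attention map attn is the unsupervised localization response, and training could pull z and a together for matched video-sound pairs while pushing them apart for mismatched ones; a supervised or semi-supervised variant would additionally penalize the map against human-annotated source regions where available.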

Original language: English
Article number: 8894565
Pages (from-to): 1605-1619
Number of pages: 15
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume: 43
Issue number: 5
DOIs
Publication status: Published - 2021 May 1

Bibliographical note

Funding Information:
A. Senocak, J. Kim, and I.S. Kweon were supported by the National Information Society Agency for construction of training data for artificial intelligence (2100-2131-305-107-19). M.-H. Yang was supported in part by NSF CAREER (No. 1149783).

Publisher Copyright:
© 1979-2012 IEEE.

All Science Journal Classification (ASJC) codes

  • Software
  • Computer Vision and Pattern Recognition
  • Computational Theory and Mathematics
  • Artificial Intelligence
  • Applied Mathematics

