Spatial audio is an essential medium for delivering immersive 3D visual and auditory experiences to audiences. However, the devices and techniques required to record it are expensive or inaccessible to the general public. In this work, we propose a self-supervised audio spatialization network that generates spatial audio given a video and its corresponding monaural audio. To improve spatialization performance, we add an auxiliary classifier that distinguishes ground-truth videos from videos whose left and right audio channels have been swapped. We collect a large-scale video dataset with spatial audio to validate the proposed method, and experimental results demonstrate the effectiveness of the proposed model on the audio spatialization task.
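The auxiliary-classifier idea above rests on a simple self-supervised signal: swapping the left and right channels of a stereo track yields a "negative" example the classifier must tell apart from the original. A minimal sketch of that data preparation step is shown below, assuming the audio is a NumPy array of shape (num_samples, 2); the function names are illustrative and not taken from the paper.

```python
import numpy as np

def swap_channels(stereo: np.ndarray) -> np.ndarray:
    """Swap the left and right channels of a (num_samples, 2) stereo signal."""
    return stereo[:, ::-1]

def make_classifier_pairs(stereo: np.ndarray):
    """Build (audio, label) pairs for an auxiliary classifier:
    label 1 for the ground-truth channel order, 0 for the swapped order.
    """
    return [(stereo, 1), (swap_channels(stereo), 0)]
```

Because the swap is a deterministic transform of data the model already has, labels come for free, which is what makes the objective self-supervised.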
|Title of host publication||2019 IEEE International Conference on Image Processing, ICIP 2019 - Proceedings|
|Publisher||IEEE Computer Society|
|Number of pages||5|
|Publication status||Published - 2019 Sept|
|Event||26th IEEE International Conference on Image Processing, ICIP 2019 - Taipei, Taiwan, Province of China|
|Duration||2019 Sept 22 → 2019 Sept 25|
|Name||Proceedings - International Conference on Image Processing, ICIP|
|Conference||26th IEEE International Conference on Image Processing, ICIP 2019|
|Country/Territory||Taiwan, Province of China|
|Period||19/9/22 → 19/9/25|
|Bibliographical note||Publisher Copyright: © 2019 IEEE.|
All Science Journal Classification (ASJC) codes
- Computer Vision and Pattern Recognition
- Signal Processing