We propose a novel deep architecture for video summarization of untrimmed videos that simultaneously recognizes action and scene classes for every video segment. Our network accomplishes this through a multi-task fusion approach based on two types of attention modules that explore the semantic correlations between actions and scenes in the videos. The proposed network consists of feature embedding networks and attention inference networks that stochastically leverage the inferred action and scene feature representations. Additionally, we design a new center loss function that learns feature representations by minimizing intra-class variations while maximizing inter-class variations. Our model achieves a summarization score of 0.8409 and an action and scene recognition accuracy of 0.7294 on the test set of the CoVieW'19 dataset, ranking 3rd.
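The abstract's center-loss variant pulls features toward their class centers while pushing distinct class centers apart. A minimal NumPy sketch of such an objective is below; the function name, the hinge-style inter-class term, and the `margin` parameter are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def center_loss(features, labels, centers, margin=1.0):
    """Center-loss-style objective (illustrative sketch, not the paper's exact loss).

    features: (N, D) array of embeddings
    labels:   (N,) integer class labels
    centers:  (C, D) array of per-class centers
    margin:   assumed hinge margin for the inter-class term
    """
    # Intra-class term: mean squared distance from each feature to its class center.
    intra = np.mean(np.sum((features - centers[labels]) ** 2, axis=1))

    # Inter-class term: hinge penalty when two class centers are closer than `margin`.
    inter, pairs = 0.0, 0
    num_classes = len(centers)
    for i in range(num_classes):
        for j in range(i + 1, num_classes):
            dist_sq = np.sum((centers[i] - centers[j]) ** 2)
            inter += max(0.0, margin - dist_sq)
            pairs += 1
    inter /= max(pairs, 1)

    return intra + inter
```

If every feature sits exactly on its class center and the centers are farther apart than the margin, both terms vanish and the loss is zero; gradients of the two terms then drive the minimize-intra / maximize-inter behavior described in the abstract.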
Title of host publication: Proceedings - 2019 International Conference on Computer Vision Workshop, ICCVW 2019
Publisher: Institute of Electrical and Electronics Engineers Inc.
Number of pages: 8
Publication status: Published - 2019 Oct
Event: 17th IEEE/CVF International Conference on Computer Vision Workshop, ICCVW 2019 - Seoul, Korea, Republic of
Duration: 2019 Oct 27 → 2019 Oct 28
Publication series: Proceedings - 2019 International Conference on Computer Vision Workshop, ICCVW 2019
Conference: 17th IEEE/CVF International Conference on Computer Vision Workshop, ICCVW 2019
Country: Korea, Republic of
Period: 19/10/27 → 19/10/28
Bibliographical note (Funding Information):
This research was supported by the Next-Generation Information Computing Development Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Science and ICT (NRF-2017M3C4A7069370).
© 2019 IEEE.
All Science Journal Classification (ASJC) codes
- Computer Science Applications
- Computer Vision and Pattern Recognition