Supervising neural attention models for video captioning by human gaze data

Youngjae Yu, Jongwook Choi, Yeonhwa Kim, Kyung Yoo, Sang Hun Lee, Gunhee Kim

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

The attention mechanisms in deep neural networks are inspired by human attention, which sequentially focuses on the most relevant parts of the information over time to generate prediction output. The attention parameters in those models are implicitly trained in an end-to-end manner, yet there have been few attempts to explicitly incorporate human gaze tracking to supervise the attention models. In this paper, we investigate whether attention models can benefit from explicit human gaze labels, especially for the task of video captioning. We collect a new dataset called VAS, consisting of movie clips and corresponding multiple descriptive sentences along with human gaze tracking data. We propose a video captioning model named Gaze Encoding Attention Network (GEAN) that can leverage gaze tracking information to provide the spatial and temporal attention for sentence generation. Through evaluation of language similarity metrics and human assessment via Amazon Mechanical Turk, we demonstrate that spatial attention guided by human gaze data indeed improves the performance of multiple captioning methods. Moreover, we show that the proposed approach achieves state-of-the-art performance for both gaze prediction and video captioning not only on our VAS dataset but also on standard datasets (e.g. LSMDC [24] and Hollywood2 [18]).

Original language: English
Title of host publication: Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 6119-6127
Number of pages: 9
ISBN (Electronic): 9781538604571
Publication status: Published - 2017 Nov 6
Event: 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017 - Honolulu, United States
Duration: 2017 Jul 21 - 2017 Jul 26

Publication series

Name: Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017
Volume: 2017-January

Other

Other: 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017
Country/Territory: United States
City: Honolulu
Period: 17/7/21 - 17/7/26

Bibliographical note

Funding Information:
Acknowledgements. This research is partially supported by Convergence Research Center through National Research Foundation of Korea (2015R1A5A7037676). Gunhee Kim is the corresponding author.

Publisher Copyright:
© 2017 IEEE.

All Science Journal Classification (ASJC) codes

  • Signal Processing
  • Computer Vision and Pattern Recognition

