Counterfactual attention supervision

Seungtaek Choi, Haeju Park, Seung Won Hwang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

1 Citation (Scopus)

Abstract

The neural attention mechanism has been used as a form of explanation for model behavior. Users can either passively consume such explanations, or actively disagree with them and supervise the attention toward more proper values (attention supervision). Although attention supervision has been shown to be effective in some tasks, we find that existing attention supervision is biased, and propose to augment it with counterfactual observations. To this end, we propose a counterfactual method to estimate such missing observations and debias the existing supervision, contributing to accuracy gains. We validate the effectiveness of our counterfactual supervision on widely adopted image benchmark datasets: CUFED and PEC.
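The abstract describes supervising a model's attention weights toward human-provided values. The paper's exact formulation is not given here, so the following is only a minimal sketch, assuming one common form of attention supervision: a task loss augmented with a KL-divergence term that pulls the model's attention distribution toward a human-annotated one (the function names, the weighting factor `lam`, and the NumPy setup are all illustrative assumptions, not the authors' implementation).

```python
import numpy as np

def softmax(scores):
    # Numerically stable softmax over raw attention scores.
    e = np.exp(scores - scores.max())
    return e / e.sum()

def attention_supervision_loss(scores, human_attention, task_loss, lam=0.5):
    """Sketch of an attention-supervised objective (illustrative only).

    scores:          raw model attention scores over input tokens
    human_attention: human-annotated attention distribution (sums to 1)
    task_loss:       the base task loss (e.g. cross-entropy), a float
    lam:             hypothetical weight on the supervision term
    """
    attn = softmax(scores)
    eps = 1e-12  # avoid log(0)
    # KL(human || model): penalizes model attention that diverges
    # from the human-provided supervision signal.
    kl = float(np.sum(human_attention * np.log((human_attention + eps) / (attn + eps))))
    return task_loss + lam * kl
```

When the model's attention already matches the human annotation, the KL term vanishes and the objective reduces to the task loss alone; the counterfactual debiasing the abstract describes would then adjust the supervision target itself, which this sketch does not attempt.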

Original language: English
Title of host publication: Proceedings - 19th IEEE International Conference on Data Mining, ICDM 2019
Editors: Jianyong Wang, Kyuseok Shim, Xindong Wu
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1006-1011
Number of pages: 6
ISBN (Electronic): 9781728146034
DOIs
Publication status: Published - 2019 Nov
Event: 19th IEEE International Conference on Data Mining, ICDM 2019 - Beijing, China
Duration: 2019 Nov 8 - 2019 Nov 11

Publication series

Name: Proceedings - IEEE International Conference on Data Mining, ICDM
Volume: 2019-November
ISSN (Print): 1550-4786

Conference

Conference: 19th IEEE International Conference on Data Mining, ICDM 2019
Country/Territory: China
City: Beijing
Period: 19/11/8 - 19/11/11

Bibliographical note

Funding Information:
This work was supported by the Samsung Research Funding Center of Samsung Electronics under Project Number SRFC-IT1701-01. Hwang is the corresponding author.

Publisher Copyright:
© 2019 IEEE.

All Science Journal Classification (ASJC) codes

  • Engineering (all)
