Neural attention mechanisms have been used as a form of explanation for model behavior. Users can either passively consume these explanations or actively disagree with them and supervise the attention toward more appropriate values (attention supervision). Although attention supervision has been shown to be effective in some tasks, we find that existing attention supervision is biased, and we propose augmenting it with counterfactual observations to debias it and contribute accuracy gains. To this end, we propose a counterfactual method that estimates such missing observations and debiases the existing supervision. We validate the effectiveness of our counterfactual supervision on widely adopted image benchmark datasets: CUFED and PEC.
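The attention supervision mentioned above can be sketched, in generic form, as an auxiliary loss that pulls the model's attention weights toward user-given targets. The sketch below is an illustrative assumption, not the paper's counterfactual method: the scores, target distribution, and loss choice (mean squared error) are all hypothetical.

```python
import numpy as np

def softmax(scores):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(scores - scores.max())
    return e / e.sum()

# Model's raw attention scores over 4 input regions (hypothetical values).
scores = np.array([2.0, 0.5, 0.1, -1.0])
alpha = softmax(scores)  # model attention weights; they sum to 1

# User feedback ("supervision"): region 1 should receive most attention.
target = np.array([0.1, 0.7, 0.1, 0.1])

# Attention-supervision loss: penalize deviation from the user-given target.
# In training this would be added to the task loss and backpropagated.
supervision_loss = np.mean((alpha - target) ** 2)
```

The paper's contribution is to treat such user feedback as biased (observed only for some instances) and to estimate the missing counterfactual observations before supervising; that debiasing step is not shown here.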
|Title of host publication||Proceedings - 19th IEEE International Conference on Data Mining, ICDM 2019|
|Editors||Jianyong Wang, Kyuseok Shim, Xindong Wu|
|Publisher||Institute of Electrical and Electronics Engineers Inc.|
|Number of pages||6|
|Publication status||Published - 2019 Nov|
|Event||19th IEEE International Conference on Data Mining, ICDM 2019 - Beijing, China|
|Duration||2019 Nov 8 → 2019 Nov 11|
|Name||Proceedings - IEEE International Conference on Data Mining, ICDM|
|Bibliographical note||Funding Information:|
This work was supported by Samsung Research Funding Center of Samsung Electronics under Project Number SRFC-IT1701-01. Hwang is a corresponding author.
© 2019 IEEE.