Abstract
We present an approach named Dual Composition Network (DCNet) for interactive image retrieval, which searches for the target image that best matches a natural language query and a reference image. To accomplish this task, existing methods have focused on learning a composite representation of the reference image and the text query that is as close as possible to the embedding of the target image. We refer to this approach as the Composition Network. In this work, we propose to close the loop with a Correction Network that models the difference between the reference and target images in the embedding space and matches it with the embedding of the text query. That is, we consider two cyclic directional mappings for triplets of (reference image, text query, target image) by using both the Composition Network and the Correction Network. We also propose a joint training loss that further improves the robustness of multimodal representation learning. We evaluate the proposed model on three benchmark datasets for multimodal retrieval: Fashion-IQ, Shoes, and Fashion200K. Our experiments show that DCNet achieves new state-of-the-art performance on all three datasets, and that adding the Correction Network consistently improves multiple existing methods based solely on the Composition Network. Moreover, an ensemble of our models won first place in the Fashion-IQ 2020 challenge held at a CVPR 2020 workshop.
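The two directional mappings described above can be illustrated compactly. Below is a minimal, hypothetical PyTorch sketch, not the authors' implementation: the names `DualComposition` and `dual_loss`, the two-layer MLP fusers, and the InfoNCE-style batch contrastive objective are all assumptions standing in for the paper's actual architecture and joint training loss, and random features stand in for image and text encoder outputs.

```python
# Hypothetical sketch of the dual objective: a composition branch maps
# (reference image, text) toward the target embedding, while a correction
# branch maps (reference, target) toward the text embedding.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualComposition(nn.Module):
    def __init__(self, dim=512):
        super().__init__()
        # Composition Network: fuse reference-image and text embeddings.
        self.compose = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        # Correction Network: model the reference-to-target difference.
        self.correct = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, ref, txt, tgt):
        comp = self.compose(torch.cat([ref, txt], dim=-1))  # should match tgt
        corr = self.correct(torch.cat([ref, tgt], dim=-1))  # should match txt
        return comp, corr

def dual_loss(comp, corr, txt, tgt, tau=0.07):
    """Batch-wise contrastive loss applied in both cyclic directions."""
    def nce(pred, pos):
        # Cosine-similarity logits; the diagonal holds the positive pairs.
        logits = F.normalize(pred, dim=-1) @ F.normalize(pos, dim=-1).t() / tau
        labels = torch.arange(logits.size(0), device=logits.device)
        return F.cross_entropy(logits, labels)
    return nce(comp, tgt) + nce(corr, txt)

# Usage with random features standing in for encoder outputs:
ref, txt, tgt = (torch.randn(8, 512) for _ in range(3))
model = DualComposition()
comp, corr = model(ref, txt, tgt)
loss = dual_loss(comp, corr, txt, tgt)
loss.backward()
```

The point the sketch captures is the cyclic supervision: the composition branch is pulled toward the target embedding while the correction branch is pulled toward the text embedding, so each (reference, text, target) triplet constrains the shared embedding space from two directions.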
Original language | English |
---|---|
Title of host publication | 35th AAAI Conference on Artificial Intelligence, AAAI 2021 |
Publisher | Association for the Advancement of Artificial Intelligence |
Pages | 1771-1779 |
Number of pages | 9 |
ISBN (Electronic) | 9781713835974 |
Publication status | Published - 2021 |
Event | 35th AAAI Conference on Artificial Intelligence, AAAI 2021 - Virtual, Online (Duration: 2021 Feb 2 → 2021 Feb 9) |
Publication series
Name | 35th AAAI Conference on Artificial Intelligence, AAAI 2021 |
---|---|
Volume | 2B |
Conference
Conference | 35th AAAI Conference on Artificial Intelligence, AAAI 2021 |
---|---|
City | Virtual, Online |
Period | 2021 Feb 2 → 2021 Feb 9 |
Bibliographical note
Funding Information: We thank SNUVL lab members for helpful comments. This research was supported by AIR Lab (AI Research Lab) in Hyundai Motor Company through the HMC-SNU AI Consortium Fund and by an Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2019-0-01082, SW Star-Lab and No. 2017-0-01772, Video Turing Test). Jongseok Kim was supported by the Hyundai Motor Chung Mong-Koo Foundation. Gunhee Kim is the corresponding author.
Publisher Copyright:
Copyright © 2021, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
All Science Journal Classification (ASJC) codes
- Artificial Intelligence