Abstract
In generalized zero-shot learning (GZSL), a class is predicted indirectly by deriving user-defined (i.e., existing) attributes (UA) from an image. High-quality attributes are essential for GZSL, but existing UAs are not always discriminative. We observe that the hidden units at each layer of a convolutional neural network (CNN) contain highly discriminative semantic information spanning objects, parts, scenes, textures, materials, and colors. This semantic information in CNN features resembles the attributes that distinguish classes. Motivated by this observation, we employ CNN features as novel class-representative semantic data, termed deep attributes (DA). Specifically, we propose three objective functions (i.e., compatible, discriminative, and intra-independent) to inject these fundamental properties into the generated DA. Our approach substantially outperforms state-of-the-art methods on four challenging GZSL datasets: CUB, FLO, AWA1, and SUN. Furthermore, the existing UA and our proposed DA are complementary and can be combined to further improve performance.
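The abstract names three objectives (compatible, discriminative, and intra-independent) without giving their formulations. The PyTorch-style sketch below is only one plausible way such terms could be instantiated; the function name `deep_attribute_losses`, the dot-product compatibility score, the cosine-similarity margin (0.3), and the covariance-based independence penalty are our assumptions for illustration, not the paper's actual losses.

```python
# Hypothetical sketch of three DA-style objectives (not the paper's exact losses).
import torch
import torch.nn.functional as F

def deep_attribute_losses(feats, deep_attrs, labels):
    """
    feats:      (B, D) CNN image features for a batch
    deep_attrs: (C, D) learnable deep-attribute vectors, one per class (assumed form)
    labels:     (B,)   ground-truth class indices
    """
    # Compatibility: an image feature should score highest against its own
    # class's deep attribute (dot-product compatibility + cross-entropy).
    scores = feats @ deep_attrs.t()                      # (B, C)
    loss_compat = F.cross_entropy(scores, labels)

    # Discriminative: deep attributes of different classes should be well
    # separated (here: a margin on pairwise cosine similarity).
    da_norm = F.normalize(deep_attrs, dim=1)
    sim = da_norm @ da_norm.t()                          # (C, C)
    off_diag = sim - torch.eye(sim.size(0), device=sim.device)
    loss_disc = F.relu(off_diag - 0.3).mean()            # margin 0.3 is arbitrary

    # Intra-independence: dimensions of the deep attributes should be
    # decorrelated (penalize off-diagonal entries of their covariance).
    centered = deep_attrs - deep_attrs.mean(dim=0, keepdim=True)
    cov = centered.t() @ centered / max(deep_attrs.size(0) - 1, 1)   # (D, D)
    loss_indep = (cov - torch.diag(torch.diag(cov))).pow(2).mean()

    return loss_compat, loss_disc, loss_indep
```

In a training loop, the three terms would typically be weighted and summed into a single loss before backpropagation; the relative weights are another design choice not specified in the abstract.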
| Original language | English |
| --- | --- |
| Article number | 108435 |
| Journal | Pattern Recognition |
| Volume | 124 |
| DOIs | |
| Publication status | Published - 2022 Apr |
Bibliographical note
Funding Information: This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2019R1A2C2003760) and the Institute for Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2020-0-01361, Artificial Intelligence Graduate School Program (YONSEI UNIVERSITY)).
Publisher Copyright:
© 2021 Elsevier Ltd
All Science Journal Classification (ASJC) codes
- Software
- Signal Processing
- Computer Vision and Pattern Recognition
- Artificial Intelligence