Abstract
Image retargeting is used to display images of arbitrary size on devices with different resolutions (e.g., cell phones and TV monitors). To fit an image to the target resolution, certain unimportant regions need to be removed or distorted, and the key problem is to determine the importance of each pixel. Existing methods predict pixel-wise importance in a bottom-up manner via eye fixation estimation or saliency detection. In contrast, the proposed algorithm estimates pixel-wise importance based on a top-down criterion: the target image should maintain the semantic meaning of the original image. To this end, several semantic components corresponding to foreground objects, action contexts, and background regions are extracted. The semantic component maps are integrated by a classification-guided fusion network. Specifically, the deep network classifies the original image as object- or scene-oriented, and fuses the semantic component maps according to the classification result. The network output, referred to as the semantic collage, has the same size as the original image and is then fed into any existing optimization method to generate the target image. Extensive experiments are carried out on the RetargetMe data set and the S-Retarget database developed in this paper. Experimental results demonstrate the merits of the proposed algorithm over state-of-the-art image retargeting methods.
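For illustration only, below is a minimal sketch of the classification-guided fusion idea summarized in the abstract. It is not the authors' network: the class name `ClassificationGuidedFusion`, the toy classifier layers, and the learnable per-class weights are assumptions, and the sketch only shows how object- vs. scene-oriented classification scores could weight three semantic component maps into a single semantic-collage importance map.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClassificationGuidedFusion(nn.Module):
    """Illustrative sketch (not the paper's architecture): a small CNN scores the
    image as object- or scene-oriented, and the softmax scores weight the three
    semantic component maps (foreground objects, action context, background)."""

    def __init__(self):
        super().__init__()
        # Toy classifier: conv features, global pooling, and a linear head.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 2)  # [object-oriented, scene-oriented]
        # Hypothetical learnable per-class weights over the three component maps.
        self.component_weights = nn.Parameter(torch.ones(2, 3) / 3.0)

    def forward(self, image, component_maps):
        # image: (B, 3, H, W); component_maps: (B, 3, H, W), one channel per component.
        feats = self.features(image).flatten(1)
        class_probs = F.softmax(self.classifier(feats), dim=1)   # (B, 2)
        weights = class_probs @ self.component_weights            # (B, 3)
        # Weighted sum of component maps -> single importance map ("semantic collage").
        collage = (component_maps * weights[:, :, None, None]).sum(dim=1, keepdim=True)
        return collage

# Usage: the collage map could then be handed to any pixel-importance-based
# retargeting optimizer to produce the target image.
model = ClassificationGuidedFusion()
img = torch.rand(1, 3, 224, 224)
maps = torch.rand(1, 3, 224, 224)   # foreground, action-context, background maps
importance = model(img, maps)        # (1, 1, 224, 224)
```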
| Original language | English |
| --- | --- |
| Pages (from-to) | 5032-5043 |
| Number of pages | 12 |
| Journal | IEEE Transactions on Image Processing |
| Volume | 27 |
| Issue number | 10 |
| DOIs | |
| Publication status | Published - 2018 Oct |
Bibliographical note
Funding Information: Manuscript received July 17, 2017; revised January 3, 2018 and April 6, 2018; accepted April 27, 2018. Date of publication May 15, 2018; date of current version July 12, 2018. This work was supported in part by the National Natural Science Foundation of China under Grant U1536203 and Grant 61572493, in part by the IIE Project under Grant Y6Z0021102 and Grant Y7Z0241102, and in part by the Strategy Cooperation Project under Grant AQ-1701. The work of M.-H. Yang was supported in part by the NSF CAREER under Grant 1149783, in part by Adobe, and in part by NEC. The associate editor coordinating the review of this manuscript and approving it for publication was Prof. Dacheng Tao. (Corresponding author: Yao Sun.) S. Liu is with the Beijing Key Laboratory of Digital Media, School of Computer Science and Engineering, Beihang University, Beijing 100191, China, and also with the State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing 100191, China.
Publisher Copyright:
© 1992-2012 IEEE.
All Science Journal Classification (ASJC) codes
- Software
- Computer Graphics and Computer-Aided Design