Compositing is one of the most common operations in photo editing. To generate realistic composites, the appearances of the foreground and background must be adjusted to make them compatible. Previous approaches to harmonizing composites have focused on learning statistical relationships between hand-crafted appearance features of the foreground and background, which is unreliable, especially when the contents of the two layers are vastly different. In this work, we propose an end-to-end deep convolutional neural network for image harmonization, which can capture both the context and the semantic information of the composite images during harmonization. We also introduce an efficient way to collect large-scale, high-quality training data that facilitates the training process. Experiments on the synthesized dataset and on real composite images show that the proposed network outperforms previous state-of-the-art methods.
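The compositing operation the abstract starts from can be sketched in a few lines: a foreground layer is pasted onto a background through an alpha mask. This is a minimal illustration of the standard alpha-blending step, not the paper's code; all names here are hypothetical.

```python
import numpy as np

def composite(foreground, background, mask):
    """Alpha-blend foreground over background; mask values lie in [0, 1]."""
    # Broadcast a 2-D mask across the color channels of a 3-D image.
    if mask.ndim == foreground.ndim - 1:
        mask = mask[..., None]
    return mask * foreground + (1.0 - mask) * background

# Example: paste a bright 4x4 RGB foreground onto a dark background.
fg = np.ones((4, 4, 3))    # foreground layer
bg = np.zeros((4, 4, 3))   # background layer
m = np.zeros((4, 4))
m[1:3, 1:3] = 1.0          # binary mask marking the pasted region
out = composite(fg, bg, m)
```

Harmonization, as studied in the paper, then adjusts the appearance of the masked foreground region so that it is visually consistent with the surrounding background.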
|Title of host publication||Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017|
|Publisher||Institute of Electrical and Electronics Engineers Inc.|
|Number of pages||9|
|Publication status||Published - 2017 Nov 6|
|Event||30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017 - Honolulu, United States|
Duration: 2017 Jul 21 → 2017 Jul 26
Bibliographical note (Funding Information):
Acknowledgments. This work is supported in part by the NSF CAREER Grant #1149783, NSF IIS Grant #1152576, and a gift from Adobe. Portions of this work were performed while Y.-H. Tsai was an intern at Adobe Research.
All Science Journal Classification (ASJC) codes
- Signal Processing
- Computer Vision and Pattern Recognition