Image extrapolation aims at expanding the narrow field of view of a given image patch. Existing models mainly deal with natural scene images of homogeneous regions and offer no control over the content generation process. In this work, we study conditional image extrapolation to synthesize new images guided by input structured text. The text is represented as a graph that specifies the objects and their spatial relations in the unknown regions of the image. Inspired by drawing techniques, we propose a progressive generative model of three stages, i.e., generating a coarse bounding-box layout, refining it into a finer segmentation layout, and mapping the layout to a realistic output. Such a multi-stage design is shown to facilitate the training process and generate more controllable results. We validate the effectiveness of the proposed method on face and human clothing datasets in terms of visual results, quantitative evaluations, and flexible controls.
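The three stages described above can be sketched in pseudocode. This is a hypothetical illustration of the pipeline's data flow only, not the authors' implementation: the function names, the toy layout rule, and the tiny 8x8 grid are all assumptions made for readability.

```python
# Hypothetical sketch of the three-stage pipeline: text graph -> coarse
# bounding-box layout -> segmentation layout -> rendered output.
# All names and the placeholder logic are illustrative assumptions.

def stage1_coarse_layout(scene_graph):
    """Map each (object, relation) pair in the text graph to a coarse
    bounding box (x, y, w, h) in normalized coordinates. A learned layout
    generator is replaced here by a trivial left-to-right placement."""
    boxes = {}
    for i, (obj, relation) in enumerate(scene_graph):
        boxes[obj] = (0.1 + 0.25 * i, 0.3, 0.15, 0.4)  # placeholder layout
    return boxes

def stage2_segmentation_layout(boxes, size=8):
    """Refine bounding boxes into a per-pixel label map (size x size grid)."""
    seg = [["background"] * size for _ in range(size)]
    for obj, (x, y, w, h) in boxes.items():
        for r in range(int(y * size), int((y + h) * size)):
            for c in range(int(x * size), int((x + w) * size)):
                seg[r][c] = obj
    return seg

def stage3_render(seg):
    """Map the segmentation layout to pixels; a stand-in for the final
    layout-to-image generator, assigning one integer id per label."""
    palette = {"background": 0}
    return [[palette.setdefault(label, len(palette)) for label in row]
            for row in seg]

# Example: extrapolate unknown regions with two text-specified objects.
graph = [("hat", "above"), ("face", "below")]
image = stage3_render(stage2_segmentation_layout(stage1_coarse_layout(graph)))
```

The staging mirrors the paper's coarse-to-fine design: each stage solves an easier sub-problem whose output constrains the next, which is what makes the final result controllable from the text graph.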
|Title of host publication||Proceedings - 2021 IEEE Winter Conference on Applications of Computer Vision, WACV 2021|
|Publisher||Institute of Electrical and Electronics Engineers Inc.|
|Number of pages||10|
|Publication status||Published - Jan 2021|
|Conference||2021 IEEE Winter Conference on Applications of Computer Vision, WACV 2021 - Virtual, Online, United States|
|Duration||5 Jan 2021 → 9 Jan 2021|
Bibliographical note
Funding Information: We thank the anonymous reviewers for their valuable feedback. This work is supported in part by NSF CAREER Grant #1149783.
© 2021 IEEE.
All Science Journal Classification (ASJC) codes
- Computer Vision and Pattern Recognition
- Computer Science Applications