Given an arbitrary pair of images, a universal style transfer method extracts the style (the "feel") from a reference image and synthesizes an output that preserves the look of a content image. Recent algorithms based on second-order statistics, however, are either computationally expensive or prone to artifacts, reflecting a trade-off between image quality and runtime performance. In this work, we present an approach for universal style transfer that learns the transformation matrix in a data-driven fashion. Our algorithm is efficient yet flexible enough to transfer different levels of styles with the same auto-encoder network. It also produces stable video style transfer results because it preserves content affinity. In addition, we propose a linear propagation module to enable a feed-forward network for photo-realistic style transfer. We demonstrate the effectiveness of our approach on three tasks: artistic, photo-realistic, and video style transfer, with comparisons to state-of-the-art methods.
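To make the second-order-statistics formulation concrete, the sketch below implements the closed-form whitening/coloring transform that prior methods compute per image pair and that the paper's learned transformation matrix replaces. This is a minimal NumPy illustration, not the paper's learned model: `Fc` and `Fs` stand for flattened content and style feature maps (channels x positions), and all function names are our own.

```python
import numpy as np

def matrix_power(cov, p, eps=1e-5):
    # Symmetric matrix power via eigendecomposition, used for cov^{1/2}
    # and cov^{-1/2}; eigenvalues are clipped for numerical stability.
    vals, vecs = np.linalg.eigh(cov)
    vals = np.clip(vals, eps, None)
    return vecs @ np.diag(vals ** p) @ vecs.T

def linear_style_transform(Fc, Fs):
    """Match the mean and covariance of content features Fc (C x N) to
    those of style features Fs (C x M) with a single linear map T."""
    mu_c = Fc.mean(axis=1, keepdims=True)
    mu_s = Fs.mean(axis=1, keepdims=True)
    cov_c = (Fc - mu_c) @ (Fc - mu_c).T / (Fc.shape[1] - 1)
    cov_s = (Fs - mu_s) @ (Fs - mu_s).T / (Fs.shape[1] - 1)
    # T = cov_s^{1/2} @ cov_c^{-1/2}: whiten the content statistics,
    # then color them with the style's second-order statistics.
    T = matrix_power(cov_s, 0.5) @ matrix_power(cov_c, -0.5)
    return T @ (Fc - mu_c) + mu_s
```

The eigendecompositions here are what make the closed-form transform costly at high feature resolution; learning `T` directly with a feed-forward network, as the paper proposes, avoids them at inference time.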
|Title of host publication||Proceedings - 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2019|
|Publisher||IEEE Computer Society|
|Number of pages||9|
|Publication status||Published - 2019 Jun|
|Event||32nd IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2019 - Long Beach, United States|
Duration: 2019 Jun 16 → 2019 Jun 20
|Name||Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition|
|Conference||32nd IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2019|
|Period||19/6/16 → 19/6/20|
Bibliographical note
Funding Information:
Acknowledgement: This work is supported in part by the NSF CAREER Grant #1149783 and gifts from Adobe, Verisk, and NEC.
All Science Journal Classification (ASJC) codes
- Computer Vision and Pattern Recognition