We tackle a 3D scene stylization problem: generating stylized images of a scene from arbitrary novel views, given a set of images of the same scene and a reference image of the desired style as inputs. A direct solution that combines novel view synthesis and stylization approaches produces results that are blurry or inconsistent across views. We instead propose a point cloud-based method for consistent 3D scene stylization. First, we construct a point cloud by back-projecting image features into 3D space. Second, we develop point cloud aggregation modules to gather the style information of the 3D scene, and then modulate the features in the point cloud with a linear transformation matrix. Finally, we project the transformed features back to 2D space to obtain the novel views. Experimental results on two diverse datasets of real-world scenes validate that our method generates more consistent stylized novel views than alternative approaches.
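The three stages of the pipeline (back-projection, style modulation in 3D, re-projection) can be sketched as follows. This is a minimal illustrative sketch with numpy, not the paper's implementation: the function names, shapes, and the use of simple per-channel mean/std statistics for the linear style transform (in the spirit of AdaIN/WCT-style methods) are assumptions made for clarity.

```python
import numpy as np

def back_project(features, depth, K_inv):
    """Stage 1 (sketch): lift per-pixel features into 3D using depth
    and the inverse camera intrinsics, forming a featurized point cloud."""
    H, W, C = features.shape
    ys, xs = np.mgrid[0:H, 0:W]
    # Homogeneous pixel coordinates (u, v, 1), one row per pixel.
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3)
    rays = pix @ K_inv.T                   # camera-space rays
    points = rays * depth.reshape(-1, 1)   # scale rays by depth -> 3D positions
    return points, features.reshape(-1, C)

def stylize(point_feats, style_feats):
    """Stage 2 (sketch): a linear transformation that normalizes the
    point-cloud features and re-colors them with style statistics.
    Here a per-channel mean/std transform stands in for the paper's
    learned transformation matrix."""
    mu_c, sigma_c = point_feats.mean(0), point_feats.std(0) + 1e-5
    mu_s, sigma_s = style_feats.mean(0), style_feats.std(0) + 1e-5
    return (point_feats - mu_c) / sigma_c * sigma_s + mu_s

def project(points, point_feats, K, H, W):
    """Stage 3 (sketch): splat the transformed point features back onto
    a 2D feature map with a pinhole projection (no z-buffering here)."""
    proj = points @ K.T
    uv = (proj[:, :2] / np.clip(proj[:, 2:3], 1e-5, None)).round().astype(int)
    out = np.zeros((H, W, point_feats.shape[1]))
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < W) & (uv[:, 1] >= 0) & (uv[:, 1] < H)
    out[uv[valid, 1], uv[valid, 0]] = point_feats[valid]
    return out
```

In the actual method the stylized 2D feature map would be decoded into an RGB image, and rendering from a novel view would use that view's camera matrix in the projection step; the sketch omits both for brevity.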
Title of host publication: Proceedings - 2021 IEEE/CVF International Conference on Computer Vision, ICCV 2021
Publisher: Institute of Electrical and Electronics Engineers Inc.
Number of pages: 10
Publication status: Published - 2021
Event: 18th IEEE/CVF International Conference on Computer Vision, ICCV 2021 - Virtual, Online, Canada
Duration: 11 Oct 2021 → 17 Oct 2021
Publication series: Proceedings of the IEEE International Conference on Computer Vision
Bibliographical note
Funding Information: This work is supported in part by NSF CAREER Grant #1149783 and a gift from Verisk.
© 2021 IEEE
All Science Journal Classification (ASJC) codes
- Computer Vision and Pattern Recognition