Most variational formulations for structure-texture image decomposition force the structure image to have a small norm in some functional space and to conform to a common notion of edges, i.e., large gradients or large intensity differences. However, such a definition makes it difficult to distinguish structure edges from oscillations that have fine spatial scale but high contrast. In this paper, we introduce a new model that learns deep variational priors for structure images without explicit training data. An alternating direction method of multipliers (ADMM) algorithm and its modular structure are adopted to plug deep variational priors into an iterative smoothing process. The central observation is that convolutional neural networks (CNNs) can replace the total variation prior and are powerful enough to capture the nature of structure and texture. We show that our priors learned with CNNs successfully differentiate high-amplitude details from structure edges and avoid halo artifacts. Unlike previous data-driven smoothing schemes, our formulation provides an additional degree of freedom to produce continuous smoothing effects. Experimental results demonstrate the effectiveness of our approach on various computational photography and image processing applications, including texture removal, detail manipulation, HDR tone mapping, and non-photorealistic abstraction.
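The plug-and-play structure described above can be illustrated with a minimal sketch: ADMM splits the smoothing objective into a closed-form data-fidelity step and a prior step, and the prior's proximal operator is replaced by a denoiser. This is not the paper's implementation; the function names are hypothetical, and a simple box filter stands in for the learned CNN prior, with `prior_strength` playing the role of the continuous smoothing control.

```python
import numpy as np

def cnn_prior(v, strength):
    """Stand-in for the learned CNN prior (assumption: in the paper this
    would be a trained network). A separable box filter is used here so
    the sketch stays self-contained; `strength` is the number of passes."""
    out = v.copy()
    for _ in range(strength):
        out = (np.roll(out, 1, axis=0) + out + np.roll(out, -1, axis=0)) / 3.0
        out = (np.roll(out, 1, axis=1) + out + np.roll(out, -1, axis=1)) / 3.0
    return out

def plug_and_play_admm(f, rho=1.0, n_iters=10, prior_strength=2):
    """Plug-and-play ADMM smoothing sketch for a 2-D image `f`.

    Splits the objective into a quadratic data-fidelity term on u and a
    prior term on the auxiliary variable v; the v-update is where a
    learned prior is plugged in place of the total variation prior.
    Returns the structure estimate u; texture is f - u.
    """
    u = f.copy()            # structure estimate
    v = f.copy()            # auxiliary split variable
    w = np.zeros_like(f)    # scaled dual variable
    for _ in range(n_iters):
        # u-update: closed-form proximal step for 0.5*||u - f||^2
        u = (f + rho * (v - w)) / (1.0 + rho)
        # v-update: prior step, performed by the plugged-in denoiser
        v = cnn_prior(u + w, prior_strength)
        # dual update
        w = w + u - v
    return u
```

Because the smoothing strength is a parameter of the prior step rather than a fixed training target, varying `prior_strength` sweeps a continuum of smoothing effects, which mirrors the extra degree of freedom the abstract mentions.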
All Science Journal Classification (ASJC) codes
- Computer Graphics and Computer-Aided Design