Camera shake during exposure often produces spatially variant blur in an image. This non-uniform blur arises not only from the camera motion but also from depth variation in the scene: objects close to the camera sensor typically appear blurrier than those farther away. However, most recent non-uniform deblurring methods either do not explicitly consider the depth factor or assume a fronto-parallel scene of constant depth for simplicity. Although single-image non-uniform deblurring is a challenging problem, the blurry input in fact contains depth information that can be exploited. We propose to jointly estimate scene depth and remove non-uniform blur caused by camera motion by exploiting their underlying geometric relationship, using only a single blurry image as input. To this end, we present a unified layer-based model for depth-involved deblurring, together with a novel matting-based solution for partitioning the layers and an expectation-maximization (EM) scheme for solving the resulting problem. This approach greatly reduces the number of unknowns and makes the problem tractable. Experiments on challenging examples demonstrate that both depth estimation and camera-shake removal can be handled well within the unified framework.
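The depth-layered blur formation that the abstract alludes to can be sketched as a simple forward model: the scene is split into depth layers with alpha mattes, each layer is blurred by a depth-dependent motion kernel (nearer layers get larger kernels), and the blurred layers are composited back-to-front with their blurred mattes. This is a minimal illustrative sketch, not the paper's actual implementation; the function names, kernel shapes, and two-layer example are assumptions.

```python
import numpy as np

def motion_kernel(length):
    """Horizontal box motion-blur kernel (illustrative assumption: real
    kernels would come from the estimated camera trajectory and layer depth;
    nearer layers get a longer kernel)."""
    return np.ones((1, length)) / length

def conv2(img, k):
    """2-D convolution with edge padding (same output size)."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, kh - 1 - ph), (pw, kw - 1 - pw)), mode="edge")
    out = np.zeros_like(img)
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def layered_blur(layers, alphas, kernels):
    """Composite depth layers back-to-front (far first); each layer and its
    alpha matte are blurred by that layer's depth-dependent kernel."""
    out = np.zeros_like(layers[0])
    for layer, alpha, k in zip(layers, alphas, kernels):
        blurred_layer = conv2(layer * alpha, k)
        blurred_alpha = conv2(alpha, k)
        out = out * (1.0 - blurred_alpha) + blurred_layer
    return out

# Toy two-layer scene: a dark far background and a bright near square.
H, W = 32, 32
far = np.full((H, W), 0.3)
near = np.full((H, W), 0.9)
alpha_far = np.ones((H, W))                 # background covers everything
alpha_near = np.zeros((H, W))
alpha_near[8:24, 8:24] = 1.0                # near object occupies the center
img = layered_blur([far, near], [alpha_far, alpha_near],
                   [motion_kernel(3), motion_kernel(9)])  # near blurred more
```

The paper addresses the much harder inverse problem — recovering the layers, depth, and camera motion from the single composited image — but the forward model above is what makes the blur depth-dependent: only the kernel length varies between layers.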
|Title of host publication|Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition|
|Publisher|IEEE Computer Society|
|Number of pages|8|
|ISBN (Electronic)|9781479951178|
|Publication status|Published - 2014 Sep 24|
|Event|27th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2014 - Columbus, United States|
|Duration|2014 Jun 23 → 2014 Jun 28|
|Series|Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition|
Bibliographical note: Publisher Copyright © 2014 IEEE.
All Science Journal Classification (ASJC) codes
- Computer Vision and Pattern Recognition