For the last two decades, two related approaches have been studied independently in connection with the limitations of image sensors. One is to reconstruct a high-resolution (HR) image from multiple low-resolution (LR) observations degraded by blur, geometric deformation, aliasing, noise, and spatial undersampling. The other is to reconstruct a high dynamic range (HDR) image from multiple, differently exposed low dynamic range (LDR) images. The limited dynamic range stems from the finite capacity of the analogue-to-digital converter and the nonlinearity of the imaging system's response function. In practice, observations suffer from limitations of both spatial resolution and dynamic range, so it is reasonable to address the two problems in a unified framework. Most super-resolution (SR) image reconstruction methods that enhance spatial resolution assume that the observations share the same dynamic range or that the imaging system's response function is known in advance. In this paper, the conventional approaches are reviewed and an SR image reconstruction method that simultaneously enhances spatial resolution and dynamic range is proposed. The image degradation process, including limited spatial resolution and limited dynamic range, is modelled. Given this observation model, maximum a posteriori estimates of the imaging system's response function, together with a single HR, HDR image, are obtained. Experimental results indicate that the proposed algorithm outperforms conventional approaches that perform the HR and HDR reconstructions sequentially, with respect to both objective and subjective criteria.
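The degradation pipeline described above can be illustrated with a toy forward model. The sketch below is a hypothetical illustration, not the paper's actual model: it simulates one LR/LDR observation from a latent HR/HDR image via exposure scaling, blur, spatial decimation, and a nonlinear, clipped camera response. The function names (`response`, `observe`), the gamma-curve response, and the box-filter blur are all illustrative assumptions.

```python
import numpy as np

def response(irradiance, gamma=2.2):
    # Illustrative nonlinear camera response: clipping models the limited
    # dynamic range; the gamma curve models the response nonlinearity.
    return np.clip(irradiance, 0.0, 1.0) ** (1.0 / gamma)

def observe(x_hr, exposure, factor=2, blur_size=3):
    """Simulate one LR/LDR observation of the HR/HDR image x_hr."""
    # 1. Exposure scaling (differently exposed frames).
    e = exposure * x_hr
    # 2. Blur: a simple box filter stands in for the point spread function.
    kernel = np.ones((blur_size, blur_size)) / blur_size**2
    pad = blur_size // 2
    padded = np.pad(e, pad, mode="edge")
    blurred = np.zeros_like(e)
    for i in range(e.shape[0]):
        for j in range(e.shape[1]):
            blurred[i, j] = np.sum(padded[i:i+blur_size, j:j+blur_size] * kernel)
    # 3. Spatial decimation to the LR grid.
    lr = blurred[::factor, ::factor]
    # 4. Nonlinear response plus clipping: limited dynamic range.
    return response(lr)

x = np.linspace(0.0, 4.0, 64).reshape(8, 8)   # toy HR/HDR image
y_short = observe(x, exposure=0.25)           # short exposure: shadows crushed
y_long  = observe(x, exposure=1.0)            # long exposure: highlights clipped
print(y_short.shape)  # → (4, 4)
```

In a MAP framework of the kind the abstract describes, such a forward model would be inverted: the HR/HDR image and the response function are the unknowns, estimated jointly from several observations like `y_short` and `y_long`.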