Camera sensors often fail to capture clear images or videos in poorly lit environments. In this paper, we propose a trainable hybrid network to enhance the visibility of such degraded images. The proposed network consists of two distinct streams that simultaneously learn the global content and the salient structures of the clear image in a unified framework. More specifically, the content stream estimates the global content of the low-light input through an encoder-decoder network. However, the encoder in the content stream tends to lose structure details. To remedy this, we propose a novel spatially variant recurrent neural network (RNN) as an edge stream to model edge details, with the guidance of another auto-encoder. The experimental results show that the proposed network performs favorably against state-of-the-art low-light image enhancement algorithms.
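The core of the edge stream is a spatially variant recurrence, in which a per-pixel gate controls how much each pixel propagates from its neighbor. The following toy sketch illustrates one left-to-right scan of such a recurrence; the gate map `p` is supplied directly here, whereas in the actual network it would be predicted by a guidance CNN and the scan would typically run in multiple directions. All names and the exact update rule are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def spatially_variant_rnn(x, p):
    """One left-to-right pass of a spatially variant recurrent filter.

    x : (H, W) input feature map
    p : (H, W) per-pixel gate in [0, 1]; here it is given, but in a
        learned network it would come from a guidance sub-network.

    Update rule (illustrative):
        h[i, 0] = x[i, 0]
        h[i, j] = p[i, j] * x[i, j] + (1 - p[i, j]) * h[i, j-1]
    so p close to 1 keeps the local value (preserving edges), while
    p close to 0 propagates context from the left neighbor.
    """
    h = np.zeros_like(x)
    h[:, 0] = x[:, 0]
    for j in range(1, x.shape[1]):
        h[:, j] = p[:, j] * x[:, j] + (1.0 - p[:, j]) * h[:, j - 1]
    return h
```

With `p` identically 1 the filter is the identity; with `p` identically 0 every pixel in a row inherits the row's first value, i.e. the recurrence smooths maximally along the scan direction. A learned gate interpolates between these extremes per pixel, which is what lets the edge stream keep sharp structures while still aggregating context.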
Bibliographical note
Funding Information:
Manuscript received August 19, 2018; revised February 8, 2019; accepted March 28, 2019. Date of publication April 16, 2019; date of current version July 1, 2019. This work was supported in part by the National Natural Science Foundation of China under Grant U1736219, Grant U1605252, Grant U1803264, Grant 61532006, Grant 61772083, and Grant 61802403, in part by the National Key R&D Program of China under Grant 2018YFB0803701, in part by the Beijing Natural Science Foundation under Grant L182057, and in part by the CCF-Tencent Open Fund. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Jana Ehmann. (Corresponding author: Xiaochun Cao.) W. Ren and X. Cao are with the State Key Laboratory of Information Security, Institute of Information Engineering, Chinese Academy of Sciences, Beijing 100093, China (e-mail: firstname.lastname@example.org; email@example.com).