Low-Light Image Enhancement via a Deep Hybrid Network

Wenqi Ren, Sifei Liu, Lin Ma, Qianqian Xu, Xiangyu Xu, Xiaochun Cao, Junping Du, Ming-Hsuan Yang

Research output: Contribution to journal › Article

2 Citations (Scopus)

Abstract

Camera sensors often fail to capture clear images or videos in a poorly lit environment. In this paper, we propose a trainable hybrid network to enhance the visibility of such degraded images. The proposed network consists of two distinct streams to simultaneously learn the global content and the salient structures of the clear image in a unified network. More specifically, the content stream estimates the global content of the low-light input through an encoder-decoder network. However, the encoder in the content stream tends to lose some structure details. To remedy this, we propose a novel spatially variant recurrent neural network (RNN) as an edge stream to model edge details, with the guidance of another auto-encoder. The experimental results show that the proposed network performs favorably against the state-of-the-art low-light image enhancement algorithms.
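The paper itself is not reproduced in this record, but the core mechanism the abstract attributes to the edge stream — a spatially variant RNN whose per-position recurrence weights are predicted by a separate guidance network — can be illustrated with a minimal one-dimensional scan. The sketch below is ours, not the authors' implementation: the function name and the simple linear recurrence h[i] = (1 - p[i]) * x[i] + p[i] * h[i - 1] are assumptions meant only to convey how spatially varying weights let the filter smooth flat regions while preserving edges.

```python
def spatially_variant_scan(x, p):
    """One left-to-right pass of a spatially variant RNN over a 1-D signal.

    x : list of input intensities.
    p : per-position recurrence weights in [0, 1]; in the paper these
        weights come from a learned guidance network, here they are given.

    Recurrence: h[i] = (1 - p[i]) * x[i] + p[i] * h[i - 1].
    A weight near 1 propagates context (smoothing); a weight near 0
    resets the recurrence, so edges are preserved where p drops.
    """
    h = [x[0]]  # initialize with the first input sample
    for i in range(1, len(x)):
        h.append((1 - p[i]) * x[i] + p[i] * h[i - 1])
    return h


# Example: a step edge with a mid-strength weight after position 0.
print(spatially_variant_scan([1.0, 0.0, 0.0], [0.0, 0.5, 0.5]))
# → [1.0, 0.5, 0.25]
```

In the full 2-D model a scan like this would run in four directions (left, right, up, down) and the per-pixel weight maps would be produced by the auto-encoder mentioned in the abstract; with p = 0 everywhere the scan reduces to the identity, which makes the edge-preserving behavior easy to verify.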

Original language: English
Article number: 8692732
Pages (from-to): 4364-4375
Number of pages: 12
Journal: IEEE Transactions on Image Processing
Volume: 28
Issue number: 9
DOI: 10.1109/TIP.2019.2910412
Publication status: Published - Sep 2019

Fingerprint

  • Image enhancement
  • Recurrent neural networks
  • Visibility
  • Cameras
  • Sensors

All Science Journal Classification (ASJC) codes

  • Software
  • Computer Graphics and Computer-Aided Design

Cite this

Ren, Wenqi; Liu, Sifei; Ma, Lin; Xu, Qianqian; Xu, Xiangyu; Cao, Xiaochun; Du, Junping; Yang, Ming-Hsuan. Low-Light Image Enhancement via a Deep Hybrid Network. In: IEEE Transactions on Image Processing. 2019; Vol. 28, No. 9, pp. 4364-4375.
@article{cef9b678a4a94c4aafa30efcf018c5c2,
  title = "Low-Light Image Enhancement via a Deep Hybrid Network",
  abstract = "Camera sensors often fail to capture clear images or videos in a poorly lit environment. In this paper, we propose a trainable hybrid network to enhance the visibility of such degraded images. The proposed network consists of two distinct streams to simultaneously learn the global content and the salient structures of the clear image in a unified network. More specifically, the content stream estimates the global content of the low-light input through an encoder-decoder network. However, the encoder in the content stream tends to lose some structure details. To remedy this, we propose a novel spatially variant recurrent neural network (RNN) as an edge stream to model edge details, with the guidance of another auto-encoder. The experimental results show that the proposed network performs favorably against the state-of-the-art low-light image enhancement algorithms.",
  author = "Ren, Wenqi and Liu, Sifei and Ma, Lin and Xu, Qianqian and Xu, Xiangyu and Cao, Xiaochun and Du, Junping and Yang, {Ming-Hsuan}",
  year = "2019",
  month = sep,
  doi = "10.1109/TIP.2019.2910412",
  language = "English",
  volume = "28",
  pages = "4364--4375",
  journal = "IEEE Transactions on Image Processing",
  issn = "1057-7149",
  publisher = "Institute of Electrical and Electronics Engineers Inc.",
  number = "9",
}



TY - JOUR

T1 - Low-Light Image Enhancement via a Deep Hybrid Network

AU - Ren, Wenqi

AU - Liu, Sifei

AU - Ma, Lin

AU - Xu, Qianqian

AU - Xu, Xiangyu

AU - Cao, Xiaochun

AU - Du, Junping

AU - Yang, Ming-Hsuan

PY - 2019/9

Y1 - 2019/9

N2 - Camera sensors often fail to capture clear images or videos in a poorly lit environment. In this paper, we propose a trainable hybrid network to enhance the visibility of such degraded images. The proposed network consists of two distinct streams to simultaneously learn the global content and the salient structures of the clear image in a unified network. More specifically, the content stream estimates the global content of the low-light input through an encoder-decoder network. However, the encoder in the content stream tends to lose some structure details. To remedy this, we propose a novel spatially variant recurrent neural network (RNN) as an edge stream to model edge details, with the guidance of another auto-encoder. The experimental results show that the proposed network performs favorably against the state-of-the-art low-light image enhancement algorithms.

AB - Camera sensors often fail to capture clear images or videos in a poorly lit environment. In this paper, we propose a trainable hybrid network to enhance the visibility of such degraded images. The proposed network consists of two distinct streams to simultaneously learn the global content and the salient structures of the clear image in a unified network. More specifically, the content stream estimates the global content of the low-light input through an encoder-decoder network. However, the encoder in the content stream tends to lose some structure details. To remedy this, we propose a novel spatially variant recurrent neural network (RNN) as an edge stream to model edge details, with the guidance of another auto-encoder. The experimental results show that the proposed network performs favorably against the state-of-the-art low-light image enhancement algorithms.

UR - http://www.scopus.com/inward/record.url?scp=85068391906&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85068391906&partnerID=8YFLogxK

U2 - 10.1109/TIP.2019.2910412

DO - 10.1109/TIP.2019.2910412

M3 - Article

C2 - 30998467

AN - SCOPUS:85068391906

VL - 28

SP - 4364

EP - 4375

JO - IEEE Transactions on Image Processing

JF - IEEE Transactions on Image Processing

SN - 1057-7149

IS - 9

M1 - 8692732

ER -