Abstract
Spatial-temporal filters have been widely used in video denoising modules. These filters are commonly designed for monochromatic images, whereas most digital video cameras use a color filter array (CFA) to acquire color sequences. We propose a recursive spatial-temporal filter based on motion estimation (ME) and motion-compensated prediction (MCP) for CFA sequences. In the proposed ME method, candidate motion vectors are obtained from the CFA sequence through hypothetical luminance maps. With the estimated motion vectors, an accurate MCP is obtained from the CFA sequence by weighted averaging, where the weights are determined by a spatial-temporal linear minimum mean square error (LMMSE) criterion. The temporal filter then combines the estimated MCP with the current pixel, a process controlled by a motion detection value. After temporal filtering, a spatial filter is applied to the filtered current frame as post-processing. Experimental results show that the proposed method achieves good denoising performance without motion blurring and yields high visual quality.
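The recursive temporal-filtering step described in the abstract can be illustrated with a short sketch. The blend below is a simplified assumption: the paper derives the MCP weights from a spatial-temporal LMMSE criterion, whereas here a single per-pixel weight driven by the motion detection value stands in for that rule; the names `temporal_filter_step`, `motion_detect`, and `alpha_max` are hypothetical and do not appear in the paper.

```python
import numpy as np

def temporal_filter_step(current_cfa, mcp_prev, motion_detect, alpha_max=0.9):
    """One recursive temporal-filtering update for a noisy CFA frame.

    current_cfa   : current noisy CFA frame (2-D array of mosaicked samples)
    mcp_prev      : motion-compensated prediction built from the previously
                    filtered frame, aligned to current_cfa with the estimated
                    motion vectors (same shape as current_cfa)
    motion_detect : per-pixel motion detection value in [0, 1]; 0 = static
                    region (trust the prediction), 1 = strong motion
                    (trust the current frame)
    alpha_max     : upper bound on the temporal feedback, limiting ghosting

    NOTE: this scalar blend is an illustrative placeholder; the paper obtains
    the MCP by spatial-temporal LMMSE-weighted averaging rather than this
    simple rule.
    """
    motion_detect = np.clip(motion_detect, 0.0, 1.0)
    alpha = alpha_max * (1.0 - motion_detect)  # more motion -> less temporal averaging
    return alpha * mcp_prev + (1.0 - alpha) * current_cfa
```

In a full pipeline this step would be applied frame by frame, with the output fed back as the reference for the next frame's MCP, followed by the spatial post-filter mentioned in the abstract.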
Original language | English |
---|---|
Title of host publication | Proceedings of SPIE-IS&T Electronic Imaging - Image Processing |
Subtitle of host publication | Algorithms and Systems X; and Parallel Processing for Imaging Applications II |
DOIs | https://doi.org/10.1117/12.907564 |
Publication status | Published - 2012 Mar 5 |
Event | Image Processing: Algorithms and Systems X; and Parallel Processing for Imaging Applications II - Burlingame, CA, United States; Duration: 2012 Jan 23 → 2012 Jan 25 |
Publication series
Name | Proceedings of SPIE - The International Society for Optical Engineering |
---|---|
Volume | 8295 |
ISSN (Print) | 0277-786X |
Other
Other | Image Processing: Algorithms and Systems X; and Parallel Processing for Imaging Applications II |
---|---|
Country | United States |
City | Burlingame, CA |
Period | 2012 Jan 23 → 2012 Jan 25 |
Fingerprint
All Science Journal Classification (ASJC) codes
- Electronic, Optical and Magnetic Materials
- Condensed Matter Physics
- Computer Science Applications
- Applied Mathematics
- Electrical and Electronic Engineering
Cite this
Motion-compensated spatial-temporal filtering for noisy CFA sequence. / Lee, Min Seok; Park, Sang Wook; Kang, Moon Gi.
Proceedings of SPIE-IS&T Electronic Imaging - Image Processing: Algorithms and Systems X; and Parallel Processing for Imaging Applications II. 2012. 82951D (Proceedings of SPIE - The International Society for Optical Engineering; Vol. 8295).
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution
TY - GEN
T1 - Motion-compensated spatial-temporal filtering for noisy CFA sequence
AU - Lee, Min Seok
AU - Park, Sang Wook
AU - Kang, Moon Gi
PY - 2012/3/5
Y1 - 2012/3/5
N2 - Spatial-temporal filters have been widely used in video denoising modules. These filters are commonly designed for monochromatic images, whereas most digital video cameras use a color filter array (CFA) to acquire color sequences. We propose a recursive spatial-temporal filter based on motion estimation (ME) and motion-compensated prediction (MCP) for CFA sequences. In the proposed ME method, candidate motion vectors are obtained from the CFA sequence through hypothetical luminance maps. With the estimated motion vectors, an accurate MCP is obtained from the CFA sequence by weighted averaging, where the weights are determined by a spatial-temporal linear minimum mean square error (LMMSE) criterion. The temporal filter then combines the estimated MCP with the current pixel, a process controlled by a motion detection value. After temporal filtering, a spatial filter is applied to the filtered current frame as post-processing. Experimental results show that the proposed method achieves good denoising performance without motion blurring and yields high visual quality.
AB - Spatial-temporal filters have been widely used in video denoising modules. These filters are commonly designed for monochromatic images, whereas most digital video cameras use a color filter array (CFA) to acquire color sequences. We propose a recursive spatial-temporal filter based on motion estimation (ME) and motion-compensated prediction (MCP) for CFA sequences. In the proposed ME method, candidate motion vectors are obtained from the CFA sequence through hypothetical luminance maps. With the estimated motion vectors, an accurate MCP is obtained from the CFA sequence by weighted averaging, where the weights are determined by a spatial-temporal linear minimum mean square error (LMMSE) criterion. The temporal filter then combines the estimated MCP with the current pixel, a process controlled by a motion detection value. After temporal filtering, a spatial filter is applied to the filtered current frame as post-processing. Experimental results show that the proposed method achieves good denoising performance without motion blurring and yields high visual quality.
UR - http://www.scopus.com/inward/record.url?scp=84863141180&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84863141180&partnerID=8YFLogxK
U2 - 10.1117/12.907564
DO - 10.1117/12.907564
M3 - Conference contribution
AN - SCOPUS:84863141180
SN - 9780819489425
T3 - Proceedings of SPIE - The International Society for Optical Engineering
BT - Proceedings of SPIE-IS&T Electronic Imaging - Image Processing
ER -