TY - GEN
T1 - Area-efficient and low-power implementation of vision chips using multi-level mixed-mode processing
AU - Cho, Jihyun
AU - Park, Seokjun
AU - Choi, Jaehyuk
AU - Yoon, Euisik
N1 - Publisher Copyright:
© 2014 IEEE.
PY - 2014/8/29
Y1 - 2014/8/29
AB - Miniaturized, low-power implementation of a vision system is critical in battery-operated systems such as wireless sensor networks (WSNs), micro air vehicles (MAVs), and mobile phones. Conventional digital-intensive processing operates on raw images that contain large amounts of redundancy, which degrades power efficiency and speed. This paper reports multi-level mixed-mode processing schemes for VLSI implementation that is efficient in power, area, and speed. In this approach, the processing is distributed among pixel-level, column-level, and chip-level processors. Each processor operates in mixed analog and digital domains for optimal use of resources. Three vision chips have been designed and characterized to show the effectiveness of this approach. First, motion detection and feature extraction are implemented in an object-adaptive CMOS image sensor to remove temporal and spatial redundancies for low-power operation. Second, a neuromorphic algorithm for optic flow generation is implemented in mixed-mode circuits: event-driven analog processing units enable low-power pre-processing, while the digital processor provides robust back-end processing. Finally, background light subtraction is implemented in a 3-D camera for outdoor mobile applications. The reconfigurable pixel array, implemented with pixel merging and super-resolution, achieves faster processing and better background light suppression.
UR - http://www.scopus.com/inward/record.url?scp=84907788566&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84907788566&partnerID=8YFLogxK
U2 - 10.1109/CNNA.2014.6888638
DO - 10.1109/CNNA.2014.6888638
M3 - Conference contribution
AN - SCOPUS:84907788566
T3 - International Workshop on Cellular Nanoscale Networks and their Applications
BT - International Workshop on Cellular Nanoscale Networks and their Applications
A2 - Niemier, Michael
A2 - Porod, Wolfgang
PB - IEEE Computer Society
T2 - 2014 14th International Workshop on Cellular Nanoscale Networks and Their Applications, CNNA 2014
Y2 - 29 July 2014 through 31 July 2014
ER -