A miniaturized, low-power implementation of a vision system is critical in battery-operated platforms such as wireless sensor networks (WSNs), micro air vehicles (MAVs), and mobile phones. Conventional digital-intensive processing operates on the raw image, whose large redundancy degrades both power efficiency and speed. This paper reports multi-level mixed-mode processing schemes for VLSI implementations that are efficient in power, area, and speed. In this approach, the processing is distributed across pixel-level, column-level, and chip-level processors. Each processor operates in mixed mode, combining the analog and digital domains for optimal use of resources. Three vision chips have been designed and characterized to demonstrate the effectiveness of this approach. First, motion detection and feature extraction are implemented in an object-adaptive CMOS image sensor to remove temporal and spatial redundancies for low-power operation. Second, a neuromorphic algorithm for optic-flow generation is implemented in mixed-mode circuits: event-driven analog processing units enable low-power pre-processing, while the digital processor provides robust back-end processing. Finally, background light subtraction is implemented in a 3-D camera for outdoor mobile applications. A reconfigurable pixel array using pixel merging and super-resolution achieves faster processing and better background-light suppression.
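The temporal-redundancy removal performed by the object-adaptive sensor can be sketched in software as a simple frame-difference motion mask; this is a minimal illustration of the idea, and the function name and threshold value below are hypothetical, not taken from the chip design:

```python
import numpy as np

def motion_mask(prev_frame, curr_frame, threshold=16):
    """Flag pixels whose intensity changed by more than `threshold`
    between consecutive frames (temporal-redundancy removal).
    Frames are 2-D uint8 arrays; `threshold` is an illustrative
    tuning parameter, not a value from the paper."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold

# Only pixels inside the motion mask would be read out and processed
# further, so a mostly static scene generates very little downstream work.
prev = np.zeros((4, 4), dtype=np.uint8)
curr = prev.copy()
curr[1, 1] = 200                      # a single "moving" pixel
mask = motion_mask(prev, curr)
print(mask.sum())                     # prints 1: one active pixel
```

On the chip this comparison happens in-pixel in the analog domain; the software sketch only conveys why suppressing unchanged pixels saves power and bandwidth.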