Adaptive Gain and Analog Wavelet Transform for Low-Power Infrared Image Sensors

DOI: 10.1155/2012/610176

Abstract: A decorrelation and analog-to-digital conversion scheme aiming to reduce the power consumption of infrared image sensors is presented in this paper. To exploit both intraframe redundancy and the inherent photon shot noise characteristics, a column-based 1D Haar analog wavelet transform combined with variable gain amplification prior to A/D conversion is used. This allows the use of an 11-bit ADC instead of a 13-bit one and saves 15% of the data transfer. A pixel-array test circuit demonstrates this functionality.

1. Introduction

Modern high-performance infrared sensors, such as CdHgTe-based ones, require low power consumption and a digital output to reduce their cost and increase their ease of use by avoiding analog components on the proximity board. However, when developing large-format sensors, the bottlenecks of analog-to-digital conversion and data transfer worsen under a low-power constraint. The two main contributors to power consumption to consider when minimizing it are therefore the analog-to-digital converters (ADCs) and the drivers for data transfer off the chip.

Several digital read-out circuits have been demonstrated, relying on pixel-level [1], column-level [2], or array-level A/D conversion. In such sensors, power optimization is focused on the ADC itself, and each pixel signal is treated as completely independent, in time and space, from the others; thus no specific transfer-rate optimization is implemented. Moreover, the ADC noise figure is defined with regard to the noise of the lowest input signal, without considering the dependency between signal and noise in photon detection; this leads to over-conservative conversion for large input fluxes (a numerical sketch of this effect is given at the end of this section).

To target the data-transfer power, compression is a well-known technique used in image processing to reduce the bit rate. Compression algorithms are composed of two steps: first, the data are decorrelated using either a predictor or a transformation; then, entropy coding is applied to reduce the bit rate. Implementations of compression are mostly digital; however, decorrelation schemes can also be implemented in the analog domain [3-5].

By exploiting the input signal characteristics as well as the inherent spatial redundancy, this paper targets a decrease of both the ADC resolution and the amount of data transferred. It presents a decorrelation scheme based on a modified first-level Haar decomposition combined with a variable gain applied to its coefficients according to their probability density function (PDF); the second sketch at the end of this section illustrates the decorrelation step.

This paper is organised as follows. Section 2 discusses the main noise contributions in an infrared image sensor.
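The claim that photon shot noise permits a coarser conversion at high flux can be made concrete with a short numerical sketch. The Python snippet below uses illustrative detector parameters (the full-well capacity and read noise are assumptions, not the paper's values) and compares the resolution a fixed-step ADC needs, which is set by the darkest-pixel noise, with the effective number of levels when the quantization step is allowed to track the total noise sigma(S) = sqrt(sigma_read^2 + S).

```python
import numpy as np

# Illustrative detector parameters (assumptions, not taken from the paper).
FULL_WELL = 1.44e6    # full-well capacity, electrons
READ_NOISE = 176.0    # read noise, electrons rms

def sigma(signal_e):
    """Total temporal noise in electrons: read noise plus photon shot noise,
    whose variance equals the mean number of collected electrons."""
    return np.sqrt(READ_NOISE ** 2 + signal_e)

# Fixed-step ADC: the quantization step must sit below the noise of the
# *darkest* pixel, so the resolution is set by FULL_WELL / sigma(0).
bits_fixed = np.log2(FULL_WELL / sigma(0.0))

# Signal-adaptive step: if the step tracks sigma(S), the number of useful
# codes is the integral of dS / sigma(S) over the whole signal range.
n = 100_000
ds = FULL_WELL / n
s = (np.arange(n) + 0.5) * ds
bits_adaptive = np.log2(np.sum(ds / sigma(s)))

print(f"fixed-step ADC resolution : {bits_fixed:.1f} bits")    # ~13.0
print(f"noise-tracking budget     : {bits_adaptive:.1f} bits")  # ~11.0
```

A practical circuit cannot vary the step continuously; it approximates this budget with a small set of discrete gains selected per signal level, which is the role of the variable gain amplification in the proposed scheme.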
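The decorrelation step can likewise be illustrated in the digital domain. The sketch below is a behavioral model, not the paper's analog circuit: it applies the average/difference form of a first-level 1D Haar transform along the columns of a synthetic smooth scene (an assumed random-walk image) and estimates the zeroth-order entropy before and after. The detail coefficients concentrate around zero, so they cost fewer bits, which is where the data-transfer saving comes from.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "scene": a smooth random walk per column (an assumption; real
# infrared frames are typically even more spatially redundant).
img = np.cumsum(rng.normal(0.0, 1.0, (256, 256)), axis=0)
img = (img - img.min()) / (img.max() - img.min()) * 4000.0

def haar_columns(x):
    """First-level 1D Haar (average/difference form) along each column."""
    a = (x[0::2, :] + x[1::2, :]) / 2.0   # low-pass: local averages
    d = (x[0::2, :] - x[1::2, :]) / 2.0   # high-pass: details, peaked near 0
    return a, d

def entropy_bits(x, step=1.0):
    """Zeroth-order entropy in bits/sample after uniform quantization."""
    q = np.round(x / step).astype(np.int64)
    _, counts = np.unique(q, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

a, d = haar_columns(img)
h_raw = entropy_bits(img)
h_haar = 0.5 * (entropy_bits(a) + entropy_bits(d))  # equal sample counts
print(f"raw samples : {h_raw:.2f} bits/pixel")
print(f"after Haar  : {h_haar:.2f} bits/pixel "
      f"({100.0 * (1.0 - h_haar / h_raw):.0f}% fewer)")
```

The exact saving depends on the scene statistics (the paper reports 15% on its sensor data). Because the details occupy a narrow range around zero, a higher gain can be applied to them before conversion without saturating the ADC; this is how the variable gain and the wavelet transform combine in the proposed read-out chain.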