Architecture of CMOS Image Sensors

In addition to the basic concept of active pixels, a number of common features can be found in most CMOS-based imagers. As shown in Fig. 2, two different scanners surround the actual pixel array: a vertical scanner to control the row selection, and a horizontal scanner to amplify and multiplex the analog pixel signals. While most CMOS imagers include comparable vertical scanners, they differ quite substantially in the architecture of the horizontal scanner. Low-speed astronomy detectors typically use a row of analog switches controlled by a simple digital shift register. Faster image sensors require additional circuitry in each column like sample/hold stages or column buffers. In some cases, even the A/D conversion is integrated into the horizontal scanner as a part of the column structure.

Figure 2. Block diagram of a generic CMOS sensor.
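
To make the generic readout flow of Fig. 2 concrete, the short Python sketch below models a vertical scanner that selects one row at a time and a horizontal scanner that multiplexes the selected row onto a single output stream. The array dimensions and function names are illustrative assumptions, not part of the source.

import numpy as np

ROWS, COLS = 4, 6
pixel_array = np.random.rand(ROWS, COLS)      # stand-in for the integrated pixel signals

def read_full_frame(pixels):
    """Emulate row-by-row selection followed by column multiplexing."""
    output_stream = []
    for row in range(pixels.shape[0]):        # vertical scanner: select one row
        selected_row = pixels[row, :]
        for col in range(pixels.shape[1]):    # horizontal scanner: multiplex columns
            output_stream.append(selected_row[col])
    return np.array(output_stream)

frame = read_full_frame(pixel_array)
print(frame.shape)                            # ROWS * COLS samples, read out sequentially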

Most CMOS sensors include additional circuitry for bias generation, timing control, and A/D conversion. The latter is increasingly found in modern sensors that use deep submicron process technologies. By integrating all the support electronics into the same silicon as the pixel array, complete cameras can be built on a single chip. This approach is very attractive to the commercial image sensor market because of size and cost constraints. On the other hand, high-performance scientific sensors typically do not push towards the highest integration level. Here, very high resolutions at very low power levels require a simpler detector architecture, and most of the support circuitry is provided by external electronics.

2.3 Common CMOS Properties

A high level of flexibility and a number of unique features characterize CMOS detector technology. This section summarizes some of the important aspects, in particular the properties that set CMOS detectors apart from CCD sensors.

In terms of manufacturing, CMOS imagers in general benefit from the wide availability of foundries worldwide. Using the same foundry resources as microchips guarantees cost-efficient production and highly mature process technology. Design rules as advanced as 0.13 µm are being used for the latest generation of image sensors.

A significant advantage of CMOS sensor technology is its high level of flexibility. Small and simple pixels with three or four transistors are being used to achieve basic light detection at high resolutions. Larger and more complicated pixels with hundreds of transistors provide A/D conversion or other image processing capabilities directly at the pixel level. Furthermore, additional on-chip circuitry can support many analog and digital signal processing functions to reduce the requirements on system power or transmission bandwidth. Fortunately, CMOS imagers operate at very low power levels and therefore can tolerate the increased power consumption of most on-chip functions.

Typically, CMOS-based detectors do not need a mechanical shutter. Instead, the integration time is controlled electronically. Two of the main electronic shutter concepts, the snapshot shutter and the rolling shutter, are explained in Section 2.4. An interesting feature is that pixels can be read out without destroying the integrated detector signal (nondestructive read). This allows each pixel to be read multiple times, thereby reducing the read noise by averaging the multiple reads. Unlike CCDs, the detector array can be scanned in a number of different ways, including random access to any pixel at any time. Section 2.3.1 illustrates some of the special scanning techniques.
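
As a rough illustration of why nondestructive reads help, the sketch below averages several simulated reads of the same pixel; for uncorrelated read noise, the rms error of the average drops roughly as the square root of the number of reads. All signal and noise values are made-up assumptions.

import numpy as np

rng = np.random.default_rng(0)
true_signal = 1000.0     # electrons accumulated in the pixel (assumed)
read_noise = 15.0        # rms read noise of a single read (assumed)
n_reads = 16             # number of nondestructive reads

reads = true_signal + rng.normal(0.0, read_noise, size=n_reads)
averaged = reads.mean()

print(f"single-read noise       : {read_noise:.1f} e-")
print(f"expected after averaging: {read_noise / np.sqrt(n_reads):.1f} e-")
print(f"averaged estimate       : {averaged:.1f} e- (true value {true_signal:.0f})")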

Because every pixel includes active components, the matching of device properties from pixel to pixel is important. Any mismatch can lead to pixel-to-pixel nonuniformities, called fixed pattern noise (FPN) or photoresponse nonuniformity (PRNU). Many sensors resolve the issue by providing on-chip correlated double sampling (CDS). This reduces the FPN and, at the same time, eliminates the temporal kTC noise. However, in some cases, the correction for the nonidealities has to be performed outside the sensor, thus adding complexity to the system.
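
The following sketch illustrates the basic CDS idea under simplified assumptions: subtracting each pixel's reset sample from its signal sample cancels both the fixed per-pixel offset (the FPN component) and the kTC reset noise common to the two samples. All pixel values are invented for demonstration.

import numpy as np

rng = np.random.default_rng(1)
shape = (4, 4)

fixed_offset = rng.normal(50.0, 5.0, shape)   # per-pixel offset -> fixed pattern noise
ktc_noise = rng.normal(0.0, 3.0, shape)       # reset (kTC) noise, frozen at reset time
photo_signal = np.full(shape, 200.0)          # uniform illumination (assumed)

reset_sample = fixed_offset + ktc_noise
signal_sample = fixed_offset + ktc_noise + photo_signal

cds_output = signal_sample - reset_sample     # offset and kTC noise cancel
print(np.allclose(cds_output, photo_signal))  # True: only the photo signal remains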

2.3.1 Scan Modes

Similar to a memory chip, CMOS sensors are capable of accessing pixels in random order. The available scanning methods are defined entirely by the surrounding scanner logic. Figure 3 shows four examples of common scanning schemes beyond the standard full frame read mode. Because they read a reduced number of effective pixels, all special scanning modes provide faster frame rates than the full frame mode.

The first example is the window mode, in which a rectangular subsection of the array is read out. The location and the size of the window are usually programmable parameters. If the window can be reset and read without disturbing the other pixels in the detector array, the window mode can be used for simultaneous full field science exposure and fast guide window operation for telescopes. The second special scanning technique is called subsampling. In this method, pixels are skipped during the frame readout. As a result, the complete scene is captured at a faster frame rate, yet at a reduced resolution. The third possible readout or reset scheme is random access. Every pixel is read when desired, and no predefined sequence has to exist. This can be helpful in selectively resetting saturated pixels. Fourth, CMOS sensors can combine multiple pixels into one larger pixel. This process is similar to the binning technique known from CCDs. However, CMOS devices typically do not achieve the same improvement in signal-to-noise ratio from binning as CCD devices do, and the scheme is therefore not as commonly used.

Figure 3. Different scanning schemes available in CMOS sensors.
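
The scanning schemes of Fig. 3 can be pictured as simple array operations on a full frame, as in the Python sketch below; the frame size, window coordinates, and binning factor are arbitrary illustrative choices, not values from the source.

import numpy as np

frame = np.arange(16 * 16, dtype=float).reshape(16, 16)   # stand-in full frame

# Window mode: read a programmable rectangular region of interest.
window = frame[4:8, 6:12]

# Subsampling: skip every other row and column -> faster frame, lower resolution.
subsampled = frame[::2, ::2]

# Random access: read (or reset) individual pixels in any order.
for row, col in [(3, 5), (0, 15), (9, 2)]:
    _ = frame[row, col]

# Binning: combine 2x2 neighborhoods into one larger effective pixel.
binned = frame.reshape(8, 2, 8, 2).sum(axis=(1, 3))

print(window.shape, subsampled.shape, binned.shape)        # (4, 6) (8, 8) (8, 8)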

2.4 Snapshot vs. Rolling Shutter

Two main concepts of electronic shutters have been established for CMOS image sensors: the snapshot shutter and the rolling shutter. Neither requires a mechanical shutter, but they differ with respect to their specific implications. Figure 4 illustrates both shutter concepts by means of a timing diagram of five consecutive rows. In the case of the snapshot shutter (left side), all rows start and stop integrating at the same time. Consequently, a complete picture is captured simultaneously by the whole array, a fact that is very important for fast moving objects. Depending on the design and the complexity of the pixel, the next exposure has to wait until all pixels are read (integrate, then read), or the next exposure can start while the previous values are being read (integrate while read).

The rolling shutter approach, on the other hand, does not globally start and stop the exposure. Instead, the start and the stop time of the actual integration is shifted every row by one row time with respect to the previous row. This can be seen on the right side of Fig. 4. Basically, as soon as the exposure of one row has finished, the row is read out and then prepared for the next integration period. Thus, the pixel does not require any sample and hold circuit. This results in smaller pixels and typically higher performance. The rolling shutter is beneficial for all applications that observe static or slow moving objects.

Figure 4. (left) Snapshot shutter vs. (right) rolling shutter.
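
A minimal sketch of the timing difference shown in Fig. 4, assuming five rows, a 1 ms row time, and a 10 ms exposure (all values illustrative): in snapshot mode every row shares the same integration window, while in rolling mode each row's window is shifted by one row time relative to the previous row.

N_ROWS = 5
ROW_TIME_MS = 1.0     # time to read one row (assumed)
EXPOSURE_MS = 10.0    # integration time (assumed)

def snapshot_windows(n_rows):
    # All rows integrate over the same interval.
    return [(0.0, EXPOSURE_MS) for _ in range(n_rows)]

def rolling_windows(n_rows):
    # Each row's window is delayed by one row time relative to the previous row.
    return [(r * ROW_TIME_MS, r * ROW_TIME_MS + EXPOSURE_MS) for r in range(n_rows)]

for name, windows in (("snapshot", snapshot_windows(N_ROWS)),
                      ("rolling", rolling_windows(N_ROWS))):
    print(name)
    for row, (start, stop) in enumerate(windows):
        print(f"  row {row}: integrate {start:5.1f} -> {stop:5.1f} ms")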

2.5 CMOS-Based Detector Systems

As described in Section 2.2, CMOS image sensors require additional support electronics for controlling and biasing the detector system and for digitizing the analog signals. Three main approaches are used to implement the support electronics, as shown in Fig. 5: (a) a single chip solution as in Kozlowski et al. [1], (b) a system with an analog image sensor and discrete external electronics, and (c) a dual chip approach with an analog image sensor and an application-specific integrated circuit (ASIC).

The single chip system is the most integrated concept since all required electronics are part of the sensor chip itself. Single chip camera systems can be very small and do not consume much power. However, they are difficult and expensive to design. They can also show undesired side effects such as transistor glow or higher power consumption close to the pixel, both of which potentially increase the detector dark current. Therefore, single chip solutions are typically not used for high-performance scientific applications, where the lowest dark current and highest sensitivity must be maintained.

Conventionally, scientific detectors, and in particular astronomy camera systems, have used the second approach, consisting of an analog sensor chip and discrete external electronics. The advantages include modularity, i.e., one controller can be used with many different detectors, and a less expensive detector design. On the negative side, discrete systems dissipate more power and are much larger.

To combine the benefits of the two concepts, a third approach is being pursued. Here, the detector system is composed of two separate chips. The first one is an analog image sensor, like the one used with discrete electronics controllers. The second chip is a programmable mixed-signal ASIC that effectively combines all functions of the discrete controller in a single chip [2]. The two chips together form a small and lightweight, yet flexible and high-performance, camera system.

Figure 5. Three concepts for implementing support electronics in CMOS-based detector systems: (left) single chip, (center) discrete electronics, and (right) dual chip.

2.6 Stitching

The maximum size of a CMOS chip in modern deep submicron technology is limited to about 22 mm x 22 mm, the maximum reticle size that can be exposed in a single step. To build larger sensor arrays, a special process called stitching has to be applied. For this purpose, the large CMOS sensor is divided into smaller subblocks, each small enough to fit into the limited reticle space. Later, during production of the CMOS wafers, the complete sensor chips are stitched together from the building blocks in the reticle. Each block can be exposed multiple times within the same sensor, thus creating arrays much larger than the reticle itself.
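
As a back-of-the-envelope illustration of the idea (with an assumed sub-block size and repeat count, not figures from the source), the sketch below shows how repeating a reticle-sized sub-block builds a sensor larger than the 22 mm reticle limit.

RETICLE_MM = 22.0   # maximum field exposed in a single step (from the text)
BLOCK_MM = 10.0     # side length of one stitchable sub-block (assumed)
REPEATS = 4         # block exposures per side of the finished sensor (assumed)

sensor_side_mm = BLOCK_MM * REPEATS
print(f"sub-block fits within the reticle: {BLOCK_MM <= RETICLE_MM}")
print(f"stitched sensor side: {sensor_side_mm:.0f} mm "
      f"({sensor_side_mm / RETICLE_MM:.1f}x the reticle limit)")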

An example of a stitched FPA is shown in Fig. 6. The left part of the picture includes the reticle with six independent subblocks. The right half illustrates a completely assembled CMOS readout chip, built from the individual subblocks in the reticle. The ultimate limit for this approach is only the wafer size itself, i.e., 15 to 30 cm depending on the process technology. However, yield limitations will make it nearly impossible to manufacture wafer-scale arrays, and mosaics of smaller independent detectors are typically used to build very large focal plane arrays (FPAs).

Figure 6. (left) Mask reticle and (right) stitched CMOS readout array.
