Properties of Color Images

Digital cameras, DSLRs, and webcams perform remarkably well in astronomical applications. However, it is important to remember that these cameras were engineered for short-exposure daytime terrestrial imaging rather than low-light, long-exposure astronomical imaging. It is therefore hardly surprising that the cameras and their file formats are optimized for daytime picture-taking rather than for astronomy. With proper attention to technique, however, you can make excellent astronomical images with them.

To determine how to obtain excellent results from digital camera color image files, we will begin by examining their distinctive properties. (In Chapter 7, we performed a similar analysis for astronomical CCD cameras.) Our goal in this examination is to develop image-processing techniques to exploit the embedded color information while offsetting and overcoming undesirable artifacts.

Bayer Array Artifacts. In many digital cameras and webcams, the CCD or CMOS detector has an integral array of red, green, and blue color filters that separate color information into color channels. The filter pattern is called a Bayer array (shown in Figure 21.2), after its inventor. Each pixel on the detector sees only one color. To assign a complete set of color channels to each pixel, the signal processor in the camera interpolates the other two color channels from adjacent pixels. Thus the color that you see in each individual pixel has actually been synthesized from an area containing five to nine pixels.

If you enlarge a color image from a digital camera so that you can clearly see individual pixels, you will see blocky small-scale artifacts caused by the Bayer array. In the context of a 5- to 8-megapixel image, these artifacts have little or no impact on overall quality, but on small-scale features like star images, these blocky distortions can be annoying.
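A rough idea of how this interpolation works can be sketched in a few lines. This is only a minimal bilinear demosaic, not any camera maker's actual algorithm; the function name is invented, and an RGGB filter layout is assumed (NumPy and SciPy required):

```python
import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear(raw):
    """Bilinear demosaic of a Bayer-array image (illustrative sketch, RGGB)."""
    h, w = raw.shape
    # Boolean masks marking which sensor pixels carry each color.
    r_mask = np.zeros((h, w), bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)

    rgb = np.zeros((h, w, 3))
    kernel = np.ones((3, 3))
    for i, mask in enumerate((r_mask, g_mask, b_mask)):
        plane = np.where(mask, raw.astype(float), 0.0)
        # Average the known samples of this color inside each 3x3 neighborhood.
        num = convolve(plane, kernel, mode="mirror")
        den = convolve(mask.astype(float), kernel, mode="mirror")
        rgb[..., i] = num / den
    return rgb
```

Each output channel is the local average of that channel's known samples, which is why fine detail such as star images acquires the blocky, synthesized look described above.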

8-Bit or 12-Bit Color Depth. Color images from digital cameras and webcams reach you as JPEG, BMP, and TIFF files, or as proprietary "raw format" files. Color information in the JPEG and BMP file formats is stored as red, green, and blue "color channels" each having an 8-bit depth, for an uncompressed total of 24 bits per pixel. These files are said to contain "24-bit color." The 8-bit depth means that each color channel has only 256 brightness levels.

Although it seldom matters for daytime applications, the impact of the 8-bit format on astrophotography is severe. Stars and bright objects are easily overexposed, and a bright sky background can fill a significant portion of the image's dynamic range in a few minutes' exposure. An 8-bit dynamic range that is adequate for daytime images can be too limiting for astronomical work.
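A quick back-of-the-envelope calculation shows how fast a bright sky eats into the available range. The sky rate below is an assumed figure purely for illustration; real values depend on the site, optics, and ISO setting:

```python
# Time for a hypothetical sky background to fill the dynamic range
# at 8-bit versus 12-bit depth.
full_8bit, full_12bit = 255, 4095
sky_rate = 1.5                    # ADU per second (assumed; varies widely)

t_sat_8 = full_8bit / sky_rate    # -> 170.0 s before an 8-bit pixel saturates
t_sat_12 = full_12bit / sky_rate  # -> 2730.0 s at 12 bits
```

Under these assumed numbers, the 8-bit pixel is full before a three-minute exposure ends, while the 12-bit pixel has headroom for roughly 45 minutes.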

The proprietary "raw" formats contain a richer brew of data. Ideally, the raw format would consist of an unaltered "dump" of data directly from the camera's analog-to-digital converter to the raw file. Most digital cameras have a 12-bit analog-to-digital converter, so the raw data begin with 4096 possible levels of brightness—sufficient range to handle astro-imaging reasonably well. However, some digital cameras appear to adjust or pre-process the raw data by altering the bias level of the data or by filtering out hot pixels, so the "rawness" status of "raw" files is uncertain: they may be "truly raw" or they may be "sort-of" raw.

To convert raw files to a standard image format, the camera makers supply adjustment and conversion software. This software mimics the camera's internal processing, providing exposure correction, color balancing, and image sharpening. Among the output options are TIFF files in which the red, green, and blue color channels are stored as 16-bit integers. This format allows you to export the full 12-bit range of data captured by the camera to any software that supports 16-bit or higher integer data for further processing. Note, however, that even though the 12-bit data have been stored in a 16-bit format, they remain 12-bit data.

Figure 21.1 This 210-second exposure of the Orion Nebula was taken with a Nikon D70 digital SLR camera at the focus of a 6-inch f/5 Newtonian. The camera's built-in noise reduction feature (i.e., dark-frame subtraction) was turned on. Although the bright core of the nebula has saturated and the sky background is somewhat noisy, the image has a pleasing aesthetic appeal.

Figure 21.2 The raw image from a digital camera consists of an array of red-, green-, and blue-filtered pixels (top image). After interpolating colors from the pixels around it, each pixel can be assigned a complete (R,G,B) color triad. The full-size decoded image is shown in Figure 21.3, on the page opposite.

Figure 21.3 Viewed as part of an entire 6-megapixel image, a decoded Bayer array produces smooth color. The 56 x 38-pixel section shown in Figure 21.2 is located in the northern tip of the Sagittarius Star Cloud. Nikon D70 image by David Hayworth, 18 mm lens at f/3.5, 4 minutes, ISO 400.

• Tip: AIP4Win imports JPEG, BMP, and TIFF files, as well as proprietary Nikon and Canon raw files. When you import a color image, AIP4Win converts the color data to its internal 32-bit floating-point format. This guarantees that no color or luminance information present in the original image(s) can be lost when AIP4Win processes a color image.

Noise and Dark Current. CCDs and CMOS devices all produce an unwanted signal called dark current. In astronomical CCDs, cooling the sensor to -10°C, -20°C, -30°C, or even lower can reduce dark current to well under one electron per pixel per second. Because they operate at ambient temperatures, however, most digital cameras have much higher dark current levels than they would if they were cooled, and they show a scatter of "hot pixels" in the image. In well engineered cameras, however, dark-frame subtraction reduces dark current and hot pixels sufficiently well to allow multi-minute astronomical exposures with excellent results.

Figure 21.4 shows a dark frame made with a 300-second exposure. In this 8-bit image (0 to 255 ADUs), the dark current averages 4 ADUs, but a few hot pixels have values as high as 120 ADUs. The artifact in the upper left corner, peaking at 200 ADUs, is the glow of the CCD's on-chip amplifier. The camera's dark-frame subtraction (Nikon's term is "noise reduction") does a good job of removing both hot pixels and amplifier glow (demonstrated by the near-total absence of amplifier glow and hot pixels in Figure 21.1).

Figure 21.4 This 300-second dark frame with a Nikon D70 digital SLR shows amplifier glow in the upper left, a scattering of hot pixels across the image, and weak horizontal banding. The camera's built-in dark-frame subtraction removes the amplifier glow and hot pixels from images, but faint banding remains.
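The effect of dark-frame subtraction is easy to demonstrate on synthetic data. The numbers below loosely mirror those quoted for Figure 21.4 but are invented for illustration; a real dark frame also carries shot noise, which this idealized sketch ignores:

```python
import numpy as np

rng = np.random.default_rng(1)

# Idealized dark frame: ~4 ADU of dark current plus a few hot pixels
# (positions and values invented for illustration).
dark = rng.poisson(4.0, size=(64, 64)).astype(float)
for y, x in [(5, 7), (20, 33), (50, 12)]:
    dark[y, x] = 120.0            # hot pixels

scene = np.full((64, 64), 30.0)   # the light signal we actually want
raw = scene + dark                # a raw exposure accumulates both

calibrated = raw - dark           # dark-frame subtraction
```

Because the dark signal and hot pixels repeat from frame to frame, subtracting a matching dark frame removes them cleanly; only the random noise components (ignored in this sketch) remain.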

Readout speed is prized in digital cameras, but the 10- to 20-megapixel-per-second readout rates they require produce higher readout noise than the same CCD or CMOS device would generate if it were read out slowly. In daytime photography, the intrinsic statistical variation in photon count in fully exposed images tends to mask readout noise, but in astronomical images, which consist almost entirely of underexposed sky background, readout noise and readout artifacts may become the dominant noise source. You can minimize this effect by using a low ISO speed rating or by saving files in the camera's raw format.

File Compression Artifacts. Most digital cameras save images in the JPEG file format. JPEG uses a lossy compression scheme, the discrete cosine transform, to reduce the size of the image file. The greater the amount of compression, the greater the quality loss. However, files compressed by a factor of 4 (usually called "fine" or "best" mode) show minimal quality loss; those compressed by a factor of 8 ("normal" compression) have losses acceptable for normal scenes; and those compressed by a factor of 16 ("basic" or "coarse" compression) may display annoying artifacts.

JPEG compresses an image by dividing it into blocks 8 pixels on a side, computing Fourier components (see Figure 21.5 and Chapter 17), and discarding the components that have the smallest amplitudes. The greater the compression required, the greater the number of Fourier components sacrificed. Since the least influential components are cut first, you can use low compression with little effect on image quality.

Figure 21.5 In this greatly enlarged close-up, artifacts from lossy JPEG compression appear as eight-pixel-square blocks. If your digital camera supports an uncompressed or "raw" format, save your astronomical images in raw files. If it does not, use the highest-quality JPEG option available.

With mild compression, JPEG artifacts usually appear as soft or blurry-looking horizontal and vertical bands eight pixels wide or high. The effect is most visible in areas where the local pattern of the image noise is disrupted. With strong compression, entire eight-pixel blocks may collapse to a single brightness or color.
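The discard-the-small-components idea can be sketched directly. The helper below is not the JPEG standard's quantization scheme, just an illustration of the principle: it computes an 8 x 8 block's discrete cosine transform, zeroes all but the largest-magnitude coefficients, and inverts the transform (requires SciPy):

```python
import numpy as np
from scipy.fft import dctn, idctn

def compress_block(block, keep=16):
    """Keep only the `keep` largest-magnitude DCT coefficients of a block.

    Illustrative sketch of lossy transform coding, not the actual JPEG
    quantization tables.
    """
    coeffs = dctn(block, norm="ortho")
    # Zero every coefficient smaller than the keep-th largest magnitude.
    threshold = np.sort(np.abs(coeffs).ravel())[-keep]
    coeffs[np.abs(coeffs) < threshold] = 0.0
    return idctn(coeffs, norm="ortho")

block = np.add.outer(np.arange(8.0), np.arange(8.0))  # a smooth ramp
smooth = compress_block(block, keep=16)  # nearly lossless for smooth data
harsh = compress_block(block, keep=3)    # aggressive: blocky errors appear
```

Smooth image regions concentrate their energy in a few low-frequency components and survive well; noisy or finely detailed regions lose many significant components, which is where the eight-pixel block artifacts come from.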

For astronomical images, "fine" or "best" JPEG compression may be acceptable to you. If you find the compression artifacts objectionable, then you should be using your digital camera's raw file format.

Film Grain in Scanned Images. Scanning films and prints is an effective way to bring older images into an image-processing environment. However, in addition to a small amount of digital noise from the linear CCD array in the scanner, you can expect film grain. Unlike the statistical variations due to photon noise where the variation in a pixel is independent of adjacent pixels, film-grain noise is correlated from one pixel to the next. Correlated noise of this type is extremely difficult to filter out, and almost always represents the limiting factor in enhancing scanned film images.

Compared to CCDs and digital camera raw files, scanned color films have a short dynamic range. Color negative materials have the longest dynamic range, followed by color transparencies. Scanned prints display a combination of severely limited dynamic range and surface artifacts (fingerprints and fine scratches).

Figure 21.6 In this greatly enlarged close-up, fixed-pattern artifacts appear as aligned blobs in areas that should appear random. Artifacts can also appear as horizontal bands (see Figure 21.4) and hot pixels. Fixed-pattern artifacts can usually be removed from images by dark subtraction.

Finally, because many scanners use a linear sensor that is scanned down the image, the images may suffer from a low-amplitude streak pattern. If your scanned images show long linear artifacts, consult the scanner manual; you may find that the scanner software includes procedures for "flat fielding" its scans.

21.1 Calibrate to Remove Dark Current and Vignetting

Like their CCD counterparts, images from color digital cameras contain unwanted bias, dark current, and pixel non-uniformities. An image-processing technique called calibration can largely remove these artifacts. Section 5.5 of this book contains practical hints on calibration for CCD cameras, and Chapter 6 provides a detailed description of the theory behind image calibration.

Although calibrating color images is fundamentally the same process as calibrating monochrome CCD images, it differs in important details. In this section we discuss those differences and describe how to take bias, dark, and flat frames and use them to calibrate color images.

21.1.1 Calibrating Bayer-Array Color Images

In the raw image from a digital camera, color information is spatially encoded in a Bayer array. Unwanted signals (such as dark current), noise sources (readout noise and hot pixels), and sensor nonuniformities in the raw image are specific to the source pixel on the sensor. After the raw image has been converted to red, green, and blue (RGB) color channels, signal from different pixels is mingled, and these defects can no longer be correctly subtracted.

Instead, bias and dark frames should be subtracted, and flat-fielding applied, before the Bayer array is decoded into color channels. This means that to calibrate a Bayer-array image correctly, you must load it as a monochrome image, bias-correct and dark-subtract it using a bias frame and a dark frame in the same monochrome configuration, and then divide it by a normalized flat frame, also in the monochrome configuration. Once these extraneous signals and noise have been removed, the Bayer array can be properly decoded into red, green, and blue color channels.
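The calibrate-then-demosaic order can be sketched as follows. The function name is invented, the dark frame is assumed to include the bias, and a single global flat normalization is used for simplicity (the per-channel refinement for Bayer arrays is described later in this chapter):

```python
import numpy as np

def calibrate_bayer(raw, dark, flat, flat_dark):
    """Calibrate a Bayer-array image while it is still monochrome (sketch).

    `dark` matches the raw exposure time (and contains the bias);
    `flat_dark` matches the flat-field exposure time.
    """
    corrected = raw.astype(float) - dark            # remove dark current
    master_flat = flat.astype(float) - flat_dark    # dark-corrected flat
    master_flat /= master_flat.mean()               # normalize the flat
    return corrected / master_flat                  # flat-field division
```

Only the array returned by this function should be handed to the Bayer decoder; calibrating after demosaicing would blend pixel-specific defects into the interpolated neighbors.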

21.1.2 Calibration for Digital Cameras

Raw images, whether produced by a digital camera or by an astronomical CCD, consist of a dark frame plus a photon image containing vignetting and other non-uniformities. Calibration is an image-processing procedure that enables you to remove these degrading effects.

In this section, we examine in detail how image calibration works. We'll begin by looking at the components contained in a raw image:

(RAW)x,y = (DARK)x,y + (VIGN)x,y × (PHOTONS)x,y. (Equ. 21.1)

The x,y subscripts remind us that a term such as (DARK)x,y refers to the image as a collection of pixels, one at every pixel location.

The goal of calibration is to extract the image (PHOTONS) from the image (RAW). To accomplish this, we can rearrange the terms in Equation 21.1, thus:

(PHOTONS)x,y = ((RAW)x,y - (DARK)x,y) / (VIGN)x,y. (Equ. 21.2)

To obtain (PHOTONS), we need both (DARK) and (VIGN). Making a dark frame is easy: you take an exposure with the shutter closed or with the lens cap on the camera. Because the amount of dark current depends on the exposure time, the exposure times you use for the image (RAW) and the image (DARK) must be the same.

Obtaining (VIGN) is somewhat more involved, but it is not difficult. To record the non-uniformities, you make an image of a uniform field of light, called a "flat field." Since dust and vignetting cause spots and dark edges, the flat-field image, called (FLAT), is a map of these nonuniformities. Figure 21.7 shows a typical flat-field image.

To keep the exposure time short, the uniform source should be fairly bright. However, since the image (FLAT) is an image like any other, you must make a dark frame for it, too. This dark frame is (FDARK). With the addition of the flat-dark frame, Equation 21.2 becomes:

(PHOTONS)x,y = ((RAW)x,y - (DARK)x,y) / ((FLAT)x,y - (FDARK)x,y). (Equ. 21.3)

Figure 21.7 A flat-field image is nothing more complicated than a picture of an illuminated screen placed close to your camera or telescope. The dark corners are caused by vignetting, and the little "donut" at the lower right is a shadow cast by dust in the optical system.

Although it is tempting to make only one dark frame, the exposure times for (RAW) and (FLAT) are rarely the same, and a dark frame must match the exposure time of the image it calibrates. As a result, it is necessary to make two dark frames, one for the raw image and one for the flat-field exposure.

Although the formulation in Equation 21.3 works, dividing by the flat frame produces pixel values that are inconveniently small. To produce values that average around the values in the raw image, it is necessary to normalize the flat frame, that is, to divide the flat frame by the mean value of (FLAT)x,y - (FDARK)x,y, which we'll call F, to create a normalized master flat frame, (MFLAT):

(MFLAT)x,y = ((FLAT)x,y - (FDARK)x,y) / F, (Equ. 21.4)

and then compute the calibrated image from:

(PHOTONS)x,y = ((RAW)x,y - (DARK)x,y) / (MFLAT)x,y. (Equ. 21.5)

Calibration for monochrome CCD images is done this way. However, with Bayer-array color images from digital cameras, Equations 21.3 and 21.5 introduce an undesirable side-effect: calibration shifts the color balance of the image.

Suppose that the uniform source you use for the flat-field image is a tungsten lamp, and therefore yellow-orange rather than white. If the mean value of the red-filtered pixels in the Bayer array is greater than the means of the green- and blue-filtered pixels, then when the Bayer-array image is flat-fielded, the values of red pixels will decrease and those of green and blue pixels will increase, shifting the color balance of the image toward the color complement of the flat-field illumination: in this example, toward a greenish-blue cast.

To avoid this effect, instead of normalizing all pixels in the flat-field image, it is necessary to normalize the red, green, and blue pixels separately. The mean of the red pixels would be FR; the mean of the green pixels, FG; and that of the blue pixels, FB. The normalized master flat frame is computed as follows:

(MFLAT)x,y = ((FLAT)x,y - (FDARK)x,y) / FR   if pixel (x,y) is red,
(MFLAT)x,y = ((FLAT)x,y - (FDARK)x,y) / FG   if pixel (x,y) is green,   (Equ. 21.6)
(MFLAT)x,y = ((FLAT)x,y - (FDARK)x,y) / FB   if pixel (x,y) is blue.

For red pixels, normalization uses the mean of the red pixels; green and blue pixels likewise use their own means. Calibration is completed by applying Equation 21.5. This procedure does not change the color balance of the image.
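A sketch of this per-channel normalization, assuming (as an illustration only) an RGGB layout and NumPy arrays; the function name is invented:

```python
import numpy as np

def normalize_flat_per_channel(flat, flat_dark):
    """Per-channel flat normalization in the spirit of Equ. 21.6 (sketch)."""
    f = flat.astype(float) - flat_dark
    h, w = f.shape
    # RGGB masks: red at even rows/cols, blue at odd rows/cols, green elsewhere.
    r = np.zeros((h, w), bool); r[0::2, 0::2] = True
    b = np.zeros((h, w), bool); b[1::2, 1::2] = True
    g = ~(r | b)
    mflat = np.empty_like(f)
    for mask in (r, g, b):
        # Divide each color's pixels by that color's own mean (F_R, F_G, F_B).
        mflat[mask] = f[mask] / f[mask].mean()
    return mflat
```

Because each color's pixels are divided by their own mean, a yellow-orange tungsten flat normalizes to unity in all three channels instead of imposing a greenish-blue cast on the calibrated image.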

The value of image calibration is shown in Figure 21.9.

Indeed, calibration is so important for good image quality that digital camera makers allow serious photographers to calibrate their images. In the Nikon D70, for example, dark-frame subtraction is an optional built-in function. If you set the "long exposure noise reduction" option to "ON," the camera automatically makes and subtracts a dark frame for any exposure longer than one second. Furthermore, you can perform flat-fielding if you make a "Dust reference photo" of an out-of-focus white object 4 inches in front of the camera lens. The dust reference photo is a flat-field frame. Nikon's Capture Editor software uses the dust reference photo to perform a flat-field division.

• Tip: Color calibration in AIP4Win uses the algorithms described in this section to calibrate single images or to calibrate multiple images during stacking. Calibration plus stacking is the key to producing high-quality images with your digital camera.
