13.2 Remapping Pixel Values

After an image is calibrated, the pixel values in the image are directly proportional to the amount of light that fell on the CCD.

Figure 13.3 In this image, a low endpoint of 0.10 allows the lowest-valued 10% of the pixels to saturate black. To show where information has been lost, black-saturated pixels are printed white. Because these pixels represent sky, their loss may pass unnoticed—but the information they once contained is gone.

In the case of scanned photographs, the pixel values bear some directly traceable relationship to the light that fell on the film. At this stage, the image contains more information than it ever will again, because every subsequent image-processing operation inevitably destroys some of the information present in the original image. What is essential to remember is that subsequent operations enhance the visibility of information that you, the image-processing practitioner, decide to emphasize.

13.2.1 Isolating the Range

The first step in deciding how to remap the pixel values in an image is to determine the range of useful pixel values. The image histogram gives a quick look at their distribution. As an example, consider the image of M101 in Figure 13.1, and its histogram shown in Figure 13.2. You can see right away that the vast majority of pixels in the image have low values.

Histograms are the key to understanding how the values of the pixels that make up an image are distributed over the total range of values available. The computer generates a histogram as follows:

Figure 13.4 Setting the high endpoint to 0.99 allows 1% of the pixels to saturate white. To emphasize the information lost, white-saturated pixels are shown black in this example. With "only" 1% of the pixels in the image allowed to saturate, it is clear that after scaling, significant areas in the galaxy contain no information.

FOR y = 0 TO ymax
  FOR x = 0 TO xmax
    pv = image(x, y)
    hist(pv) = hist(pv) + 1
  NEXT x
NEXT y

where image() is the array containing the image, and hist() is the array containing the histogram. After the code fragment above has run, the hist() array contains the number of pixels having each pixel value. Histograms like the one shown in Figure 13.2 are graphs of hist() arrays.
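If you work in Python, the same bookkeeping can be written in a couple of lines of NumPy. The fragment below is only an illustrative sketch; the array name img and the random test values are assumptions made here, not anything taken from the code above.

import numpy as np

# Stand-in data; in practice img would be the calibrated image loaded from disk.
img = np.random.randint(0, 4096, size=(512, 512))

# hist[v] counts the pixels whose value is v, just as hist() does in the loop above.
hist = np.bincount(img.ravel(), minlength=int(img.max()) + 1)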

Because the number of pixels with a given value can range from zero to as many as the image contains, the number of pixels is often plotted on a logarithmic scale, as in Figure 13.2. At the low end of the scale, the graph shows 2,000 to 3,000 pixels per pixel value; but at the high end, it often shows only 0, 1, or 2 pixels per pixel value. If it were not plotted logarithmically, the number of pixels at the high end of the graph would not be visible.
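One way to see this for yourself is to plot the histogram with a logarithmic count axis. The sketch below assumes the hist array from the previous fragment and uses Matplotlib; it is a suggestion, not the method used to produce Figure 13.2.

import matplotlib.pyplot as plt

plt.plot(hist)                     # x axis: pixel value, y axis: number of pixels
plt.yscale("log")                  # log scale keeps the bins holding only 1 or 2 pixels visible
plt.xlabel("Pixel value (ADU)")
plt.ylabel("Number of pixels")
plt.show()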

Next consider what the histogram shows. Pixels that contribute to the high counts at the low end come from the background sky and the faint outer spiral arms. Pixels at the high end belong to stars and the brighter parts of the galaxy. It is fairly easy to set a pixel value that should display as black. If you allow 10% of the pixels to saturate black, a large section of the sky will be black. You can see this in Figure 13.3, where the lowest 10% of pixels are shown in white. If you allow only the lowest 1% of pixels to saturate black, the black ones will be scattered around the background sky and their loss will hardly be noticed.

Setting the white-saturation value is much trickier. In this image, 99% of pixels lie below 1108 ADUs, 99.9% lie below 2100, and 99.99% lie below 3954. If you select 99%, then the remaining 1% of the pixels will saturate to white. Unfortunately, those pixels depict the nucleus and core of the galaxy. Figure 13.4 shows how much information you lose by setting the white point at 99% (1108 ADUs). The lost pixels are shown black, and they make up the nucleus, core, and centers of all the bright stars.

By setting the black point to 1% (26 ADUs) and raising the white point to 0.9995 (3130 ADUs), very little information is lost either from the background sky or from the center of the galaxy (see Figure 13.5).
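If the image is held in a NumPy array, endpoints like these can be read straight from the pixel distribution; the percentile values 1 and 99.95 below correspond to the 0.01 and 0.9995 endpoints chosen here, and img is the same assumed test array as in the earlier sketch. The ADU values you get will, of course, depend on your own image.

import numpy as np

pvblack = np.percentile(img, 1.0)     # roughly 1% of the pixels lie at or below this value
pvwhite = np.percentile(img, 99.95)   # roughly 0.05% of the pixels lie above this value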

When users specify the black and white points as pixel values, they are doing direct endpoint specification; when they specify the endpoints as percentages of saturated pixels, they are doing histogram endpoint specification. While it is possible to do direct endpoint specification from a plotted histogram, it takes a fair amount of skill and experience to produce consistent results. With histogram endpoint specification, however, novices can produce consistent results by sticking with appropriate endpoints, such as 0.01 and 0.999 for deep-sky images and 0.1 and 0.9999 for planetary and lunar images.

Given histogram endpoints, a simple algorithm analyzes the histogram to find the direct endpoints for an image, as follows:

total = (xmax + 1) * (ymax + 1)
pixels = endpoint * total
sum = 0

FOR pvend = 0 TO pvmax
  sum = sum + hist(pvend)
  IF sum >= pixels THEN EXIT FOR
NEXT pvend

where total is the total number of pixels in the image, endpoint is the endpoint as a decimal fraction, pixels is the number of pixels you want to be unsaturated, pvmax is the maximum pixel value expected in the image, and sum is a running total of pixels. As the loop variable pvend steps through the histogram array, sum holds the number of pixels with pixel values less than or equal to pvend. When the cumulative sum equals or exceeds the desired number of unsaturated pixels, the computer exits the loop, leaving the desired pixel value in the variable pvend. In software, this calculation is done twice: once for the black endpoint, pvblack, and again for the white endpoint, pvwhite.
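A line-for-line translation of this search into Python might look like the sketch below. It assumes the hist and img arrays from the earlier fragments and is meant only to illustrate the cumulative-sum logic.

def find_endpoint(hist, total, endpoint):
    # Walk the histogram until the running count reaches endpoint * total pixels.
    pixels = endpoint * total
    running = 0
    for pvend in range(len(hist)):
        running += hist[pvend]
        if running >= pixels:
            break
    return pvend

total = img.size                              # (xmax + 1) * (ymax + 1)
pvblack = find_endpoint(hist, total, 0.01)    # black endpoint
pvwhite = find_endpoint(hist, total, 0.9995)  # white endpoint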

Once the low and high pixel values that bracket the range of useful information in an image are determined, the transfer function controls what happens to the pixel values between pvblack and pvwhite.
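As a simple, concrete illustration, a linear transfer function clips the image at the two endpoints and stretches whatever lies between them onto the output grey scale. The sketch below is just one possible transfer function (Figure 13.5, for example, was made with a gammalog function instead), and it reuses the img, pvblack, and pvwhite names from the earlier fragments.

import numpy as np

clipped = np.clip(img, pvblack, pvwhite)            # saturate everything outside the useful range
scaled = (clipped - pvblack) / (pvwhite - pvblack)  # 0.0 at the black point, 1.0 at the white point
display = (scaled * 255).astype(np.uint8)           # remap onto 8-bit grey levels for display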

Figure 13.5 Endpoints of 0.01 (1%) and 0.9995 (99.95%) preserve almost all of the image information. Compare this image with Figure 13.1. Detail in the sky background and the galaxy core is visible. Pixel values between 26 and 3130 ADUs were remapped using the gammalog transfer function.

13.2.2 Transfer Functions

The transfer function is a mathematical relationship between old and new pixel values. In mathematical notation, a function looks like this:

f(p) = q

You should read this as "function of p equals q." The function is embodied in the operator f. This operator acts like a black box: when you put the value p into the black box f, out pops the value q according to the function's formula. Consider a few concrete examples. The expression:

f(p) = 1

means that whatever value of p you put into function f, the value that comes out is 1. It's a dull function because the output is always the same. Suppose you see:

f(p) = p

This one means that when you put p into the function, you get out the value of p. This is a valid function even though it doesn't change anything.
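The constant and identity functions described above can be written as one-line Python callables; the sketch below is purely illustrative and applies one of them to the scaled array from the earlier linear-stretch fragment.

import numpy as np

def f_constant(p):
    return np.ones_like(p)    # f(p) = 1: every output value is 1, whatever the input

def f_identity(p):
    return p                  # f(p) = p: the output equals the input

q = f_identity(scaled)        # the new pixel values are identical to the old ones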
