The Calibrated Image

The calibrated image should be an accurate, albeit sampled and quantized, copy of a section of night sky. In the ideal image, pixel values are directly proportional to the average flux of photons falling on the CCD during the integration. The most amazing thing—since we don't live in an ideal world—is that real-world calibrated CCD images really do approach the ideal. Observers using rather ordinary small telescopes routinely reach 20th magnitude, and routinely pull down images good enough to yield sub-arcsecond astrometry and photometry with a precision of 0.02 magnitude over a range of five magnitudes. Truly these are inspiring times for amateur astronomy.

Despite the accomplishments of CCD imaging, it is important to look critically at the images we obtain, not only to see what's wrong, but also to learn ways to improve them. As we shall see, after dark current is gone and less-than-perfect uniformity corrected, the noise contributions remain. In this section, we take a realistic look at the properties of calibrated images, and explore their limitations and their strengths.

6.4.1 Photons, Dark Current, and Readout Noise

When you take an image, each photosite on the CCD captures some number of photons; but because they arrive at random, if you take a whole series of images one right after the other, the number of photons captured by the very same photosite will vary. In some images more photons than average will arrive, and in others, fewer than average will arrive. When P is reasonably large, the variation in the number of photons obeys a simple law: it forms a roughly normal distribution about the mean value, P. When we count up the samples, 68% lie within √P of the mean value, 95% lie within 2√P, and 99% lie within 3√P.
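This rule is easy to check numerically. The sketch below (Python with NumPy, not part of the original text) draws many Poisson samples with a mean of 5950 photons, the value used in the worked example later in this section, and counts how many fall within 1, 2, and 3 standard deviations of the mean:

```python
import numpy as np

rng = np.random.default_rng(42)
P = 5950  # mean photon count per photosite (value from the worked example)
samples = rng.poisson(P, size=100_000)  # 100,000 repeated "exposures"

sigma = np.sqrt(P)  # for Poisson statistics, the standard deviation is sqrt(P)
within_1 = np.mean(np.abs(samples - P) <= 1 * sigma)
within_2 = np.mean(np.abs(samples - P) <= 2 * sigma)
within_3 = np.mean(np.abs(samples - P) <= 3 * sigma)
print(within_1, within_2, within_3)  # close to 0.68, 0.95, 0.997
```

Any large mean value gives the same fractions; the normal approximation is what makes the 68-95-99 rule work.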

While photon statistics might at first seem removed from astronomy, it most definitely is not: short-exposure CCD images look grainy because of photon statistics. Suppose that you have shot a beautiful 60-second integration of the Horse-head Nebula with your CCD camera, and, after careful calibration with a high-quality master dark and master flat, you realize that the sky background looks grainier than you would like. You scan a pixel-value tool over the image and read off a few dozen pixel values: you see some 66s, more 67s, lots of 68s, fewer 69s, some 70s, and the occasional 65s and 71s. Why is this happening? Is it right?

Let's look first at the photon statistics in this image by developing a specific example. The average pixel value in the sky is 68 ADUs. Suppose also that you have characterized your camera (see Chapter 8) and found that the conversion factor, g, is 35 electrons per ADU. A typical photosite exposed to the sky in this example would have generated 68 ADUs × 35 electrons per ADU = 2380 electrons during the integration. You look up the manufacturer's quantum efficiency curve and find that the average quantum efficiency of your CCD is 40%; so you infer that 5950 photons fell on each photosite during the 60-second integration, at an average rate of 99 photons per second. During any one second, an average of 99 ± 10 photons fell on each photosite; during the entire integration, 5950 ± 77 photons fell on each sky pixel.

However, you don't know anything about photons—what you actually have is a sample of 2380 photoelectrons created when the photons struck the CCD. The statistical uncertainty in a sample of 2380 electrons is 49 electrons; that is, your measured signal is 2380 ± 49 electrons. When you convert electrons back into ADUs using the conversion factor of 35 electrons per ADU, you find that the pixel value of the sky, from photoelectron statistics alone, should be 68 ± 1.4 ADUs. In other words, part of the variation that you see in the sky background is due to the Poisson statistics having to do with random arrival of photons.
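The conversions in this example are simple arithmetic. Here they are collected in a short Python sketch, using the assumed values from the text (g = 35 electrons per ADU, average quantum efficiency 40%):

```python
adu_sky = 68            # mean sky pixel value, ADUs
gain = 35               # conversion factor g, electrons per ADU
qe = 0.40               # average quantum efficiency

electrons = adu_sky * gain        # photoelectrons collected: 2380
photons = electrons / qe          # photons that fell on the photosite: 5950
shot_noise_e = electrons ** 0.5   # Poisson noise in electrons: ~49
shot_noise_adu = shot_noise_e / gain
print(electrons, photons, round(shot_noise_adu, 1))  # 2380 5950.0 1.4
```

Note that the noise is computed from the number of photoelectrons actually detected, not from the number of incident photons; the CCD only "sees" the electrons.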

Dark current has also made a contribution to your Horsehead image, although its contribution at most photosites turns out to be quite small. With a measured rate of 1.5 electrons per second per photosite, you expect an average of 90 electrons to have accumulated during the 60-second integration of the image, for a thermal noise contribution, σ_TE, of √90 = 9.5 electrons.

The CCD also has a small population of "hot" pixels. The "hottest" of these generate roughly 250 electrons per second, for an accumulation averaging some 15,000 electrons during integration. With an expected standard deviation of 122 electrons, those oddball photosites are likely to result in a few light or dark pixels even after calibration.

Finally, when the accumulated electrons were clocked off the CCD and converted to a voltage, the charge detection node of the CCD added readout noise, σ_RO, a small random variation introduced by the detection electronics. For this example, let's assume that

Figure 6.21 For good image quality, you need to accumulate enough photons to give a statistically "clean" image. From left to right, you see exposures of 1 millisecond, 10 ms, 100 ms, 1 second, and 10 seconds. When there are enough photons, you can see a wood pile beside a wood-burning stove.


you have measured the readout noise in your camera (using the methods described in Chapter 8) and found that it has an r.m.s. readout noise of 31 electrons. Capturing the raw image included these noise sources:

• in the sky background: photoelectron noise, σ_P = 49 electrons;

• from dark current: thermal noise, σ_TE = 9.5 electrons;

• from reading out the CCD: readout noise, σ_RO = 31 electrons;

• in the "hot" pixels: σ_HP = 122 electrons.

To find the total noise of a series of independent noise sources, square them, sum them, and take the square root (this is what "summing in quadrature" means):

Noise adds in this manner because the individual noise contributions add together or cancel at random. For the normal pixels in this example, the total noise is:

σ_total = √(σ_P² + σ_TE² + σ_RO²) = √(49² + 9.5² + 31²) = 58.8 electrons.

Converting 58.8 electrons to ADUs, you see that the sky background contains a variation of ±1.7 ADUs of noise due to capturing the image. The signal was

Figure 6.22 Case study: One raw image of the Horsehead Nebula. In addition to the nebula, the raw image contains the bias offset, dark current, and several dark dust shadows. In addition to 44 raw images, the observer shot 20 bias frames, 60 dark frames, as well as flats and flat-darks.

large enough that in normal pixels, photon and photoelectron statistics made a greater contribution to the "graininess" in the sky background than any other cause. For the 100 or so hot pixels, the standard deviation works out to 132 electrons, or about 4 ADUs.
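Summing in quadrature is straightforward to compute. Here is the normal-pixel case worked through in Python, using the noise values from the example above:

```python
import math

sigma_photon = 49.0    # photoelectron shot noise, electrons
sigma_thermal = 9.5    # dark-current (thermal) noise, electrons
sigma_readout = 31.0   # readout noise, electrons

# Independent noise sources add in quadrature:
total = math.sqrt(sigma_photon**2 + sigma_thermal**2 + sigma_readout**2)
print(round(total, 1))       # 58.8 electrons
print(round(total / 35, 1))  # ~1.7 ADUs at 35 electrons per ADU
```

Notice that the photon noise, being the largest term, dominates the quadrature sum; the thermal term at 9.5 electrons is almost negligible.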

We next assess the noise contribution from calibrating the image.

6.4.2 Noise from Calibration

Dark-frame subtraction and flat-fielding add noise because dark frames and flat frames are themselves noisy—although it is fair to say that dark frames are the greater culprit, since a well-done flat-field is uncertain by only one part in 600.

Let's see how bad it is. When you took the Horsehead image, you made an effort to produce an excellent master dark frame by shooting ten dark frames with 300 seconds integration each. The noise in a single dark frame consists of the following:

• The normal photosites have a dark current of 1.5 electrons per second; in 300 seconds an average of 450 electrons accumulates, so σ_TE = √450 = 21 electrons.

Figure 6.23 Here is the same image, now with the master dark frame subtracted using a dark-matching algorithm. Because the temperature of the camera changed slightly, the formula used was <RAW>-1.0623*<THERMAL>. Although the hot pixels have been subtracted, the dust shadows remain.

• Each dark frame contains a sample of readout noise. You already know that σ_RO = 31 electrons.

• The small number of "hot" photosites have a dark current of 250 electrons per second, so in 300 seconds these accumulate 75,000 electrons, with a noise level of σ_HP = √75,000 = 274 electrons.

The noise expected from a single normal photosite is therefore 37 electrons, and in a hot photosite, 275 electrons. Averaging the ten dark frames that you took consists of two operations: summing the noise from the ten dark frames, as in Equation 6.25, and then dividing the result by ten:

σ_average = √(10σ²)/10 = σ/√10.

When you run the numbers, σ_average = 12 electrons. In calibrating the raw image, the noise from dark subtraction is added in quadrature, increasing the noise from ±58.8 electrons in the raw image to ±60 electrons in the calibrated image. In the hot pixels, the noise from the dark frame works out to 87 electrons; which, when added in quadrature to the 148 electrons of hot-pixel noise in the raw image, gives 172 electrons r.m.s., or about 5 ADUs.
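The averaging step divides the single-frame noise by √N. A short sketch, using the single-frame values derived above:

```python
import math

n = 10                # dark frames averaged into the master dark
sigma_normal = 37.0   # single-frame noise, normal photosite (electrons)
sigma_hot = 275.0     # single-frame noise, hot photosite (electrons)

avg_normal = sigma_normal / math.sqrt(n)   # ~12 electrons
avg_hot = sigma_hot / math.sqrt(n)         # ~87 electrons

# Dark-subtraction noise adds in quadrature to the raw-image noise:
calibrated = math.sqrt(58.8**2 + avg_normal**2)
print(round(avg_normal), round(avg_hot), round(calibrated))  # 12 87 60
```

Because the master-dark noise (12 electrons) is small compared to the raw-image noise (58.8 electrons), adding it in quadrature barely moves the total.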


Figure 6.24 After flat-field correction, the dust shadows are gone! Although a single calibrated image often looks rather grainy, the image still shows a lot more than most observers could see visually with a much larger telescope. The next step is to build a stronger signal by combining many such images.


The bottom line is that carefully done dark-frame subtraction does not add significantly to the noise in the image; and, of course, it does remove the dark current. In the raw image, "hot" pixels have values around 500 ADUs and are, in fact, its most obvious feature. After calibration, the dark-subtracted hot pixels have the same average value as the sky, 68 ADUs, but their standard deviation is ±5 ADUs, nearly three times that of a normal pixel. Although the hot pixels have been removed on average, they remain a population of unusually noisy pixels, considerably more likely to take abnormally high or low values than the normal pixels around them.

In flat-fielding images of deep-sky objects with light-box and dome flats, noise is simply not a problem. Because of the full exposure given to the raw flats and the averaging of many frames, the statistical uncertainty of a pixel in a well-made master flat frame is one part in 600 or better. When the image is flat-fielded, the application of the flat frame to a sky-background pixel generates an uncertainty of about 0.14 ADUs. Since that sky pixel already has an uncertainty of ±1.7 ADUs, the noise from flat-fielding is inconsequential. Flats are difficult to make, but the difficulties stem from reasons other than noise.

6.4.3 Stacking Images

Whenever photon statistics dominate the noise in a single image, you can achieve a better signal-to-noise ratio by summing multiple images. Provided the images are independent samples of the photon flux, the signal in the resulting image increases with the number of frames added, while the noise increases only as the square root of the number of frames. Thus, the signal-to-noise ratio improves with the square root of the number of images summed. In the photon-noise-dominated case, shooting many short integrations produces the same quality image as making a single long integration.
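A short simulation illustrates the square-root law. The per-frame signal of 100 photons and the frame count of 25 are arbitrary illustrative values, not taken from the text:

```python
import numpy as np

rng = np.random.default_rng(1)
signal = 100   # mean photons per pixel per frame (assumed)
n_frames = 25

# Each frame is an independent Poisson sample of the same photon flux;
# 10,000 trial pixels give a good estimate of the noise.
frames = rng.poisson(signal, size=(n_frames, 10_000))

single_snr = frames[0].mean() / frames[0].std()   # ~ sqrt(100) = 10
stack = frames.sum(axis=0)                        # sum of 25 frames
stack_snr = stack.mean() / stack.std()            # ~ 10 * sqrt(25) = 50
print(round(single_snr, 1), round(stack_snr, 1))
```

Summing 25 frames multiplies the signal by 25 but the noise by only 5, so the SNR improves by √25 = 5, exactly as the text describes.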

When another source, such as readout noise, dominates, the improvement in the signal-to-noise ratio is limited not by photon statistics but by that other noise source. You cannot stack hundreds of short-exposure images that are dominated by readout noise and expect to create an image that even approaches the quality of a single long integration. The signal-to-noise ratio will still grow as you add more images, but the result will contain significantly more noise than a single long integration of the same total time.

The key to effective "stacking" is that each image must be statistically independent. Often, however, you must use the same master dark frame to calibrate each one, so some portion of the noise in each image is correlated with the noise in the others, and the gain in image quality is lower. (That difficulty is partially offset when the individual images contain tracking errors and must be shifted into registration.) If the noise in the dark frame is a significant fraction of the noise in each image, it will show up as a fixed pattern in the sky background. For good results, therefore, create a master dark frame with at least twice the total integration time of the stacked images, so that dark noise makes a negligible contribution to the noise in the result.

When used properly, summing images ("stacking"), or registering and then summing (i.e., "track-and-stack"), is a powerful method for shooting very deep images. For the best results, the individual exposures should be as long as practical, and the total integration time for the master dark frame should exceed the total image integration time by a factor of two or more.

6.4.4 How to Spot Calibration Errors

A variety of errors may occur in the process of taking support frames and calibrating images. Underlying these are three problems that account for most image calibration faults: bias drift, changing CCD temperature, and changing optical configuration. This section gives a brief overview of common problems and how to spot them.

6.4.4.1 Bias Drift

If the electronics in your CCD camera have temperature-sensitive components, the bias value can change when the air temperature changes. As long as the raw

Figure 6.25 Stacking images makes use of more photons, thereby increasing the signal-to-noise ratio. In this image, 44 integrations of 60 seconds each reveal the Horsehead in all its glory. Features that were barely visible in one 60-second integration stand out strongly when many integrations are combined.

images and the dark frames have the same bias value, no harm is done; but when the bias changes with time, significant errors can result.

• If the bias value in the raw image is higher than in the dark frames, pixel values in the image will be too high. When a properly made flat-field is applied, vignetting is only partially corrected. This problem is difficult to spot except through the undercorrected flat-fielding it produces under the standard or advanced protocols.

• If the bias value in the dark frames is higher than it is in the raw images, pixel values in the images will be low. If the drift is severe, the entire sky background can have a zero or negative value. If the drift is small, when a properly-made flat-field is applied, vignetting is overcorrected. A completely black sky, bright corners, or bright dust shadows are clear indicators of bias increase.

CCD cameras that feature drift subtraction or automatic bias correction are immune to this problem. In these units, the bias is measured each time the CCD is read out, and pixel values in the image are adjusted to have a constant bias value.
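The idea behind automatic bias correction can be sketched in a few lines. Here `overscan` stands for any light-free region the camera can read along with each frame; the function name and target value are hypothetical, and real camera firmware differs in the details:

```python
import numpy as np

def normalize_bias(frame, overscan, target_bias=100.0):
    """Shift a frame so its measured bias equals a fixed target value.

    The median of the light-free overscan region estimates the bias of
    this particular readout; adding the difference pins every frame to
    the same bias level. (A sketch of the principle, not any specific
    camera's implementation.)
    """
    measured = np.median(overscan)
    return frame + (target_bias - measured)
```

With every raw frame and every dark frame pinned to the same bias value, the drift errors described above cannot arise.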


Figure 6.26 This enlarged section of an image displays both "hot" and "cold" pixels after calibration. This is evidence that the image frames and dark frames were taken at different temperatures, or that the CCD was operated with an active antiblooming gate, resulting in dark frames that were nonlinear.


6.4.4.2 Changing CCD Temperature

If the temperature of the CCD changes between the time you take the raw image and the time you take your dark frames, the dark current will change. Dark current is quite sensitive to the temperature of the CCD, so a change of one or two degrees Celsius can make a big difference.

• If the raw image was taken with the CCD at a higher temperature than the dark frames, the master dark frame will have recorded too little dark current; and you will see a residual population of hot pixels after dark subtraction. A residue of hot pixels points to temperature drift in the CCD.

• If the dark frames were taken with the CCD at a higher temperature than the raw images, the master dark frame contains pixel values that are too high for the raw image; and you will see a population of oversubtracted, or "cold" pixels. Cold pixels are a clear diagnostic for CCD temperature drift.

CCD cameras with actively stabilized CCD chip temperature are obviously desirable, but the observer should be aware that a poorly designed temperature stabilization circuit may result in a temperature that oscillates between too high and


Figure 6.27 Even though "nothing was changed" from one night to the next, something did change. Here is a flat-field, made on December 22, that has been flat-fielded using a flat-field made on December 21. What probably happened is that the CCD camera shifted slightly when it was refocused on December 22.


too low. For cameras that use water cooling without active temperature stabilization, use the largest practical volume of coolant water. For air-cooled cameras, use a fan to promote a constant rate of heat dissipation.

"Dark matching" partially corrects CCD temperature drift because it computes a dark-current scaling factor based on the properties of the hot pixel population. The characteristic appearance of a temperature drift that has been corrected by dark matching is a mixture of hot and cold pixels in the same image.
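One way such a scaling factor can be computed is a least-squares fit over the hot-pixel population. The sketch below illustrates the principle only; the function name, the threshold, and the fitting details are assumptions, not the algorithm of any particular package:

```python
import numpy as np

def dark_scale_factor(raw, master_dark, hot_sigma=3.0):
    """Estimate a dark-current scaling factor from the hot pixels.

    Hot pixels carry a strong dark-current signal, so the ratio of their
    background-subtracted values in the raw frame to those in the master
    dark tracks the temperature-driven change in dark current.
    """
    # Select hot pixels: well above the typical dark level.
    hot = master_dark > np.median(master_dark) + hot_sigma * master_dark.std()
    x = master_dark[hot] - np.median(master_dark)  # dark signal in hot pixels
    y = raw[hot] - np.median(raw)                  # same pixels in the raw frame
    # Least-squares slope through the origin gives the scale factor.
    return float(np.dot(x, y) / np.dot(x, x))
```

The calibrated image is then formed as raw minus the factor times the thermal frame, mirroring the <RAW>-1.0623*<THERMAL> formula shown in Figure 6.23.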

6.4.4.3 Changing Optical Configuration

Good flat-fielding demands an optical and mechanical system that stays the same between the time that images are taken and the time that flat-field frames are made. Other factors being equal, screw-thread couplings are preferable to slide-in adaptors, and refractors are preferable to reflectors and catadioptrics. However, the most important factors in getting good flat-fields are an observer who pays close attention to the mechanics of the telescope and CCD camera, and frequent cross-checks to ensure optical stability and repeatability.

Here are some common problems that you may encounter in flat-fielding:


• Center-to-edge gradients in flat-fielded images usually result from scattered or stray light either in the flat-fielding setup or in the normal imaging setup. To correct this, blacken all interior surfaces and install baffles to prevent low-angle reflections inside the focuser tube, filter assembly, and camera housing.

• Right-to-left and top-to-bottom gradients in flat-fielded images can often be traced to non-uniform illumination of the dome screen or the light box. If you have a light box, compare flat-fields made before and after rotating it; if you have a dome screen, change the illumination and make comparison images.

When you flat-field raw images, examine the results critically and with an open mind. Remember that overcorrected and undercorrected flat-fielding will result if the bias level changes in your raw images. However, don't blame everything on bias frames; good flat-fielding techniques take some time to work out, and you need to check yourself and your equipment. If your flats aren't working right, try these tests with light-box and dome flats:

• Take data for two master flats, one right after the other without changing anything. Flat-field the first master flat with the second one; the result should be perfectly flat. Any departure from a uniformly illuminated image suggests that your flat-making technique or processing of the data is wrong.

• Take data for one master flat, point the telescope in all different directions, and then take data for a second one. When you flat-field the first with the second, the resulting image should be perfectly flat. Variation suggests movement of elements in the optical system.

• Take data for one master flat, rotate the telescope tube 90 degrees, and then take data for a second master. Any difference suggests that gravity is causing one or more elements in the optical system to sag or shift.

• Take data for one master flat at the beginning of your observing session and then take data for a second at the end of the session. Flat-field the first with the second flat. Expect some movement of dust particles. Any other changes suggest movement in the optical system.

• If you leave your CCD camera on the telescope all the time, compare master flats taken a few days or weeks apart. Changes should be minor, on the order of 1%.
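Each of these tests boils down to dividing one master flat by the other and checking how far the result departs from a uniform image. A minimal sketch of that check (the function name is assumed):

```python
import numpy as np

def flatness_error(flat_a, flat_b):
    """Divide one master flat by another and return the peak-to-peak
    departure from uniformity, as a fraction of the mean level.

    A well-matched pair of flats should yield a value near zero; a
    large value points to stray light, illumination gradients, or
    movement in the optical system.
    """
    ratio = flat_a / flat_b
    ratio = ratio / ratio.mean()   # remove any overall exposure difference
    return float(ratio.max() - ratio.min())
```

An overall brightness difference between the two flats cancels out in the normalization; only spatial structure, the thing flat-fielding is supposed to remove, registers as error.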
