
One of the wonders of digital image processing is the ability to sharpen a blurry image. This is done in several ways, and of course it cannot bring out detail that is not present in the original image at all. What it can do is reverse some of the effects of blurring, provided enough of the original information is still present.

13.5.1 Edge enhancement

The simplest way to sharpen an image is to look for places where adjacent pixels differ and increase the difference, pushing the values on either side of an edge further apart so that the transition looks steeper. This gives the image a crisp, sparkling quality, but it is most useful with relatively small images. DSLR images have so many pixels that single-adjacent-pixel operations like this often do little but bring out grain.
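A minimal sketch of the idea in Python (the function name, the 1-2-1 smoothing kernel, and the amount parameter are illustrative choices of mine, not taken from any particular package): each pixel is pushed away from the local average, so values on either side of an edge are driven apart.

```python
import numpy as np

def enhance_edges(row, amount=1.0):
    # Smooth each pixel toward the mean of itself and its neighbours,
    # then push the original value away from that smoothed value.
    row = np.asarray(row, dtype=float)
    padded = np.pad(row, 1, mode="edge")
    smoothed = np.convolve(padded, [0.25, 0.5, 0.25], mode="valid")
    return np.clip(row + amount * (row - smoothed), 0, 255)

print(enhance_edges([100, 100, 100, 200, 200, 200]))
# the pixels flanking the 100|200 edge become 75 and 225
```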

13.5.2 Unsharp masking

A more versatile sharpening operation is unsharp masking (Figure 13.6). This is derived from an old photographic technique: make a blurry negative (unsharp mask) from the original image, sandwich it with the original, and rephotograph the combination to raise the contrast. Originally, the unsharp mask was made by contact-printing a color slide onto a piece of black-and-white film with a spacer separating them so that the image would not be sharp.

The effect of unsharp masking is to reduce the contrast of large features while leaving small features unchanged. Then, when the contrast of the whole image is brought back up to normal, the small features are much more prominent than before. What is important about unsharp masking is that, by varying the amount of blur, you can choose the size of the fine detail that you want to bring out.

Today, unsharp masking is performed digitally, and there's no need to create the mask as a separate step; the entire process can be done in a single matrix convolution (Astrophotography for the Amateur, 1999, p. 237).
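Digitally, the whole operation reduces to blur, subtract, and rescale. Here is a sketch assuming NumPy and SciPy are available (`unsharp_mask` and its parameter names are illustrative, not from any of the packages discussed here):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image, radius=50.0, amount=0.8):
    # Blur a copy of the image (the digital "unsharp mask"),
    # subtract it from the original, and add the difference back,
    # boosting detail smaller than the blur radius.
    image = np.asarray(image, dtype=float)
    blurred = gaussian_filter(image, sigma=radius)
    return image + amount * (image - blurred)
```

Because Gaussian blurring is itself a convolution, the blur-subtract-add sequence collapses algebraically into the single matrix convolution mentioned above.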

Note that Photoshop has a considerable advantage over most astronomical software packages when you want to perform unsharp masking - Photoshop can use a much larger blur radius, and a large radius (like 50 to 100 pixels) is often needed with large DSLR images.

Figure 13.6. The concept of unsharp masking: (a) original image; (b) unsharp mask; (c) result of stacking them; (d) contrast stretched to full range.

13.5.3 Digital development

Digital development processing (DDP) is an algorithm invented by astrophotographer Kunihiko Okano that combines gamma stretching with unsharp masking.2 It is particularly good at preserving the visibility of stars against bright nebulae. Many astrophotographers like the way it combines several steps of processing into one (Figure 13.7).

Some care is needed in setting the parameters, because implementations of digital development that are optimized for smaller CCDs will bring out grain in DSLR images; the unsharp masking radius needs to be much larger.

In MaxDSLR, the unsharp masking part of digital development can be turned off by setting a filter matrix that is all zeroes except for a 1 in the center; digital development then becomes a kind of gamma stretch.
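Okano's formulation varies somewhat between implementations; one common simplified form divides the image by a blurred copy of itself plus a constant, performing the stretch and the unsharp mask in a single step. A sketch (the parameter names are mine):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ddp(image, a=50.0, radius=25.0, scale=255.0):
    # Divide each pixel by (blurred copy + constant). Dividing by the
    # blur acts as the built-in unsharp mask; the constant a sets
    # where the gamma-like curve rolls off.
    image = np.asarray(image, dtype=float)
    blurred = gaussian_filter(image, sigma=radius) if radius > 0 else image
    return np.clip(scale * image / (blurred + a), 0, scale)
```

With the radius set to zero the blurred copy equals the image itself, and the operation reduces to the pure stretch scale * x / (x + a), analogous to the gamma-stretch-only behavior described above for MaxDSLR.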

13.5.4 Spatial frequency and wavelet transforms

Another way to sharpen an image is to analyze it into frequency components and strengthen the high frequencies.

2 Web:

Figure 13.7. Digital development processing (DDP) turns the first picture into the second in one step.

Figure 13.8. Sound waves can be separated into low- and high-frequency components. So can images. (From Astrophotography for the Amateur.)

To understand how this works, consider Figure 13.8, which shows the analysis of a sound wave into low- and high-frequency components. An image is like a waveform except that it is two-dimensional; every position on it has a brightness value.

It follows that we can speak of spatial frequency, the frequency or size of features in the image. For example, details 10 pixels wide have a spatial frequency of 0.1 cycles per pixel. High frequencies represent fine details; low frequencies represent large features. If you run a low-pass filter, you cut out the high frequencies and blur the image. If you emphasize the high frequencies, you sharpen the image.
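The low/high split of Figure 13.8 can be sketched with a Fourier transform. In this illustrative code (function name and cutoff convention mine), the cutoff counts cycles over the whole signal:

```python
import numpy as np

def split_frequencies(signal, cutoff):
    # Transform to the frequency domain, zero out everything at or
    # above the cutoff to get the low-frequency part, and take the
    # remainder as the high-frequency part.
    spectrum = np.fft.rfft(signal)
    spectrum[cutoff:] = 0
    low = np.fft.irfft(spectrum, n=len(signal))
    high = signal - low
    return low, high

t = np.linspace(0, 1, 256, endpoint=False)
wave = np.sin(2 * np.pi * 3 * t) + 0.3 * np.sin(2 * np.pi * 40 * t)
low, high = split_frequencies(wave, cutoff=10)
# low carries the 3-cycle component, high the 40-cycle component
```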

Figure 13.9. A wavelet (one of many types). A complex signal can be described as the sum of wavelets.

Any complex waveform can be subjected to Fourier analysis or wavelet analysis to express it as the sum of a number of sine waves or wavelets respectively. (A wavelet is the shape shown in Figure 13.9.) Sine waves are more appropriate for long-lasting waveforms, such as sound waves; wavelets are more appropriate for nonrepetitive waveforms, such as images.

After analyzing an image into wavelets, you can selectively strengthen the wavelets of a particular width, thereby strengthening details of a particular size but not those that are larger or smaller. This is how RegiStax brings out fine detail on planets without strengthening the film grain, which is even finer (see p. 206).
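RegiStax uses a true wavelet transform; as a simplified stand-in for one wavelet layer, a difference of Gaussians isolates roughly the same kind of size-selective band. A sketch (names and parameters mine):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def boost_scale(image, sigma_fine, sigma_coarse, gain=2.0):
    # Isolate the detail that lies between two blur scales
    # (a difference of Gaussians) and strengthen only that band,
    # leaving larger and smaller features alone.
    image = np.asarray(image, dtype=float)
    band = gaussian_filter(image, sigma_fine) - gaussian_filter(image, sigma_coarse)
    return image + (gain - 1.0) * band
```

Features much finer than sigma_fine or much coarser than sigma_coarse cancel out of the band and are left untouched, which is how grain finer than the chosen scale escapes amplification.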

Image processing that involves separating the image into different frequency components is often called multiscale processing.

13.5.5 Deconvolution

Suppose you have a blurred image, and you know the exact nature of the blur, and the effects of the blur have been preserved in full. Then it ought to be possible to undo the blur by computer, right?

Indeed it is, but the process is tricky. It's called deconvolution and has two pitfalls. First, you never have a perfectly accurate copy of the blurred image to work with; there is always some noise, and if there were no noise there would still be the inherent imprecision caused by quantization.

Second, deconvolution is what mathematicians call an ill-posed problem - it does not have a unique solution. There is always the possibility that the image contained even more fine detail which was completely hidden from view.

For both of these reasons, deconvolution has to be guided by some criterion of what the finished image ought to look like. The most popular criterion is maximum entropy, which means maximum simplicity and smoothness. (Never reconstruct two stars if one will do; never reconstruct low-level fluctuation if a smooth surface will do; and so on.) Variations include the Richardson-Lucy and van Cittert algorithms.
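A minimal Richardson-Lucy iteration, shown in one dimension for brevity (this is an illustrative sketch of mine, not the implementation in MaxIm DL or any other package; the two-dimensional case is the same with 2-D convolutions):

```python
import numpy as np

def richardson_lucy(blurred, psf, iterations=50):
    # Start from a flat guess and iteratively adjust it so that,
    # re-blurred by the known point-spread function (PSF), it
    # matches the observed data.
    blurred = np.asarray(blurred, dtype=float)
    estimate = np.full_like(blurred, blurred.mean())
    psf_mirror = psf[::-1]
    for _ in range(iterations):
        reblurred = np.convolve(estimate, psf, mode="same")
        ratio = blurred / (reblurred + 1e-12)  # avoid division by zero
        estimate = estimate * np.convolve(ratio, psf_mirror, mode="same")
    return estimate
```

Because every factor in the update is nonnegative, the estimate can never go negative, which is one reason the algorithm behaves well on star fields.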

Deconvolution has long been a specialty of MaxIm DL (not MaxDSLR), although several other software packages now offer it. In Figure 13.10 you see the result of doing it. Fine detail pops into view. The star images shrink and become rounder; even irregular star images (from bad tracking or miscollimation) can be restored to their proper round shape. Unfortunately, if the parameters are not set exactly right, stars are often surrounded by dark doughnut shapes.

Figure 13.10. Deconvolution shrinks star images and brings out fine detail. This example is slightly overdone, as shown by dark doughnuts around stars in front of the nebula.

Because deconvolution is tricky to set up and requires lots of CPU time, I seldom use it. My preferred methods of bringing out fine detail are unsharp masking and wavelet-based filtering.
