Telescopes Are Filters

The telescope is a device that reproduces an image of reality. It does not write on paper like a copy machine, but its images are just as unreal. It creates its reproduction on a tiny image plane only a few centimeters from the tip of the observer's nose. The scrutiny is done using a powerful magnifying lens called an eyepiece. Only through the multiplicity of imaged values (location, color, brightness) and the selective visual processing power of the human brain does this reproduction get interpreted as reality.

Similarly, the visual image projected onto the retina is only representative of reality. Those with keen vision are able to derive more information about the external world than people with weak vision, but each individual tends to give equal value to that perception, regardless of its absolute quality. We tend to ignore errors.

Before going further, look at Fig. 3-1. Think of a filter as a process that degrades information contained in an image or signal. Filters are not only objects such as the colored disks of glass you might attach to the eyepiece. They are anything which removes information from the image, even relatively subtle factors such as the limitation on aperture and the wavelength of light viewed. This concept, which I call the "wobbly stack," represents a partial list of the filters between the observer of an image and reality. Some of the filtrations depicted are not independent. For example, eyepiece aberrations may, by good fortune or design, partially cancel the main instrument's aberrations.

The effects of each of these filters can be lessened, but not all may be removed. For example, one can avoid atmospheric turbulence by going into space. One can lessen aberration errors by building near-perfect optics. One can even avoid the mushy errors of the eye-brain system by using the more predictable filtration of photography. So why wouldn't it be possible to put an absolutely perfect image of Jupiter onto a sheet of photographic film?

[Figure: the "wobbly stack" of filters includes the bandwidth of light; atmospheric turbulence and absorption; internal scattering and absorption in the eye; damaged or insensitive areas of the retina; local retina processing faults; non-uniform distribution of rods and cones; and mental processing errors.]

Fig. 3-1. A diagram depicting some of the aberrations, obstructions, misalignments and processing errors that can degrade an image: the "wobbly stack."

The most important form of filtration cannot be removed or diminished—a filtering caused by the aperture, or the finite extent of the window through which the telescope looks. Let's imagine that we have some sort of super-film that records everything that is projected on it. Further imagine that the instrument has absolutely perfect optics. We suppose that if a photograph of Jupiter taken with such a system is inspected with a microscope, we have a perfect image of everything at the distance of Jupiter—volcanoes on Io, tiny ice crystals in the atmosphere, the smallest cloud-belt whorls, etc. If we can do it for a full-sized telescope, we can do it for a little one. It might even be easier to make perfect small optics than large ones. If the optics are tiny, what then?

We have reached a fundamental limit. Small lenses don't image as well as large ones. The sharpness of the image of every optical system is limited by the presence of the aperture. If we squeeze a representation of the universe through a tiny hole, we should expect this image to be a bit scuffed by the passage. Passage of light through larger apertures results in less damage.

3.1 Perceptions of Reality

Figure 3-1 shows many limitations that we can do nothing about. Although other receptors are often used, most people interpret eyeball vision as the best representation of reality. Thus, we all send the signal through an unavoidable filtration system that happens to be attached to our heads. Some might argue that this filtering is always present and can be viewed as a sort of baseline. True, we are probably accustomed to our own vision, but we are most familiar with its performance under bright lights, using two eyes. In a telescope, with low illumination and monocular vision, many of the errors that would otherwise be incidental are worsened until they cause significant and unexpected information loss.

The only thing that can be done about such unavoidable errors is to be aware of them and use strategies to lessen their effects. For example, the most commonly used corrective measure for the awkward distribution of low-light sensors on the retina is not to look directly at dim objects, i.e., to use averted vision. The unbalance caused by monocular viewing can be eliminated with binoculars.

The filters covered in this book concentrate on the center of the stack, from the atmosphere down to the image inspected by the eyepiece. This is not to say that these are the worst sources of error, but they are filters most associated with the telescope, its environment, and its use. They are the forms of filtering that we are most able to affect by corrective actions.

Eyepieces are neglected here. Although changing the eyepiece changes the optical system, it is not usually the worst source of optical difficulty. If a high magnification is used, the errors of the primary optics will dominate unless the eyepiece is entirely defective or it is used for the unusually steep light cone of a low focal ratio telescope. Also, a poor eyepiece can be identified by trying it in several instruments.

Aberrations that may seem less important are covered because they are generally caused by construction details peculiar to the particular instrument or repeated poor habits of use. For example, if you store your telescope in a hot shed and only use it in the early evening, you will be plagued by tube currents. Telescopes that are misaligned seldom become better aligned the next time they are used. Optics don't heal by themselves.

Clearly, the most desirable image would be a one-to-one mapping of points on the real object to points on the image. Telescopes already violate this principle by compressing much of the three-dimensional universe to a two-dimensional plane or slightly curved surface. Only for nearby objects are the images stretched out into a three-dimensional image space. Thus, a gas cloud 500 light-years away is imaged on top of a giant star visible 200 light-years beyond it. People are accustomed to this effect and tend to ignore it. Only where we have independent knowledge of the three-dimensional placement of the objects does this compression become objectionable. Many people are familiar with a similar perspective compression effect when they use binoculars at a sports event. If they are far enough from the action, the field seems flattened to a playing area a few steps deep.

For most purposes, the filtration caused by squeezing the image onto a sheet is not harmful. In fact, this and other forms of filtering can be quite useful. Astronomical telescopes are not troubled with the vanishingly small depth of field so bothersome in microscopy, where only part of the object is clear at any one focus.

Another positive use of filters is the way emission nebulae pop out from skyglow using narrow-band nebular filters. Although such filters diminish the interesting signal slightly, they profoundly reduce the superimposed noise of skyglow. The observer is happy to take the somewhat weaker true image as a trade for eliminating artificial light.

3.2 A Comparison to Audio

Because generalized filtering is a difficult concept at first, let's use an example where the filtration ideas appear in our common vocabulary—the commercial sound system (see comparisons in Table 3-1). Because electronics are a more recent invention than telescopes, and the terminology for audio was invented by engineers brought up on signal-processing mathematics, many modern words used for sound systems tend to have filtering concepts built right in. Many people are familiar with audio hardware or have a superficial understanding of the words. Comparisons, or at least some strained analogies, can be made to similar patterns in telescope filtration. In the audio descriptions, a frequency of 20,000 cycles/second or 20,000 Hz equals 20 kilohertz, abbreviated as "kHz."

Table 3-1

Generalized telescope and associated equipment filtrations that behave in an analogous manner to another reproduction device, the high-fidelity sound system.


(telescope/eye/camera)    (sound system)

aperture diameter         size of speakers
colored filters           equalizer filters
image processing          signal processing
scattered light           audio noise
spatial response          frequency response

3.2.1 Aperture Diameter/Size of Speakers

A typical stereo system has a cascaded set of sound reproduction devices called speakers, more technically named transducers. They convert electrical energy to the air compressional waves of sound. A typical stereo has subdivided speakers into at least two ranges, "woofers" and "tweeters." Low-frequency woofers are quite large, while high-frequency tweeters tend to be very tiny.

Low-frequency speakers, having to cycle a large quantity of air, occasionally move large distances and have enormous diameters. High-frequency speakers cannot be visually perceived to move at all and seem to work well in small sizes. Most people figure the reason the frequency range is broken up into multiple transducers is because no single speaker can cover the range, which is mostly true. However, we can easily imagine, if not build, a single speaker that would reproduce the entire frequency range with equal facility. Why would we still expect speakers to be divided?

Flat speakers (and optics) transfer energy in an angular range according to how many wavelengths fit across the aperture. The speed of sound is about 330 meters/second, or about a mile every 5 seconds. A sound at 880 Hz, or 880 cycles/second, would have a wavelength of 330/880, or 3/8 meter. If we had a speaker of this size, about 15 inches, it would put out most of the 880 Hz energy in a 120° cone. In other words, twice the 60° angle of an equilateral triangle, 1 wavelength across the speaker at the base, 1 wavelength on each side. This size is effective for stereo imaging, because sound can spread widely into a room. So we're done designing a speaker, right?

Consider our imaginary, one-size-fits-all single speaker for a moment. It works fine at 880 Hz, but we can hear to frequencies about 20 times higher, to around 17,600 Hz. Young people might hear higher pitched sounds, older ones lower. Now the wavelength at 17.6 kHz is 18.8 mm (about 3/4 inch). The emergent cone of energy from this same speaker is twice the narrow angle in a long, skinny triangle 20 wavelengths at the base and 1 wavelength high—less than 6°. At high frequencies, the angular spread of our single speaker is so narrow that we need to carefully aim such a source to hear it directly. Even worse, the mix of frequencies depends on whether we are directly in front of the speaker or sitting to the side, because each frequency has its own cone. The "sweet spot" of best stereo effect would be hard to find. High-pitched music from such a speaker may only be detectable in one ear at a time and may vary with motion of the head. Sound engineers still design multi-speaker systems of different sizes because no single speaker would emit sound into the optimum angle at all frequencies. They also design speakers in other shapes for this reason. Flat is sometimes not the best form for a speaker.
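The skinny-triangle arithmetic is easy to check. Here is a short Python sketch; the function name is mine, and the estimate is only meaningful once the aperture spans several wavelengths:

```python
import math

SPEED_OF_SOUND = 330.0  # m/s, the round figure used in the text

def cone_angle_deg(aperture_m, freq_hz):
    """Crude full angle of the main energy cone: twice the narrow
    angle of a skinny triangle 1 wavelength high, with the aperture,
    measured in wavelengths, across the base."""
    wavelengths_across = aperture_m * freq_hz / SPEED_OF_SOUND
    return 2 * math.degrees(math.atan(1 / wavelengths_across))

# The 15-inch (0.375 m) speaker at 17.6 kHz: under 6 degrees.
print(round(cone_angle_deg(0.375, 17_600), 1))   # 5.7

# Subwoofer wavelength at 33 Hz: 10 m, hence no directionality.
print(SPEED_OF_SOUND / 33)                       # 10.0
```

The same call at 880 Hz is off the estimate's range of validity, which is why the text resorts to the equilateral-triangle picture for a one-wavelength speaker.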

What is annoying in a sound system is desirable in a telescope, however. A typical telescopic aperture is 200 mm, or 360,000 wavelengths across. We still use the crude estimate of the angle of the energy cone similar to the speaker above—twice the narrow angle of a skinny triangle one wavelength high and 3.6 × 10⁵ wavelengths across the base. This cone is less than 1.2 arcseconds across. Thus, most of the energy of a star detected by a 200-mm visual telescope can be found in an angle confined to less than 1.2 arcseconds.
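The same estimate applied to the telescope, in a brief sketch (λ = 550 nm for visual light is my assumption; for so many wavelengths across the base the narrow angle reduces to 2/N radians):

```python
ARCSEC_PER_RADIAN = 206_265

def cone_arcsec(aperture_mm, wavelength_nm=550):
    # Aperture measured in wavelengths of (assumed) green light.
    n = aperture_mm * 1e6 / wavelength_nm
    # Small-angle form of the skinny-triangle estimate: 2/N radians.
    return (2 / n) * ARCSEC_PER_RADIAN

print(round(cone_arcsec(200), 2))   # 1.13 arcseconds
```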

One can also see from these resolution arguments why the location of a subwoofer (a very low frequency speaker) makes no difference. At a typical frequency of a subwoofer—say, 33 Hz—the wavelength is 330/33, or 10 meters. Any subwoofer smaller than a railroad car cannot even pretend to be able to direct the sound in any particular direction. Subwoofer location matters little because it radiates sound in all directions. The listener also isn't capable of hearing it in a unique location; the sound has such low frequency that normal two-eared perception is subverted by acoustic transmission directly through the head.

Similarly, radio telescopes have to be huge to offer any resolution. A 20-cm (8-inch) telescope has an aperture of only about one wavelength of the 21-cm line often observed by radio astronomers. If one were foolish enough to build a single-element, 8-inch radio telescope, a point image of 21-cm radiation would occupy a broad, fuzzy 120° angle.

3.2.2 Colored Filters/Equalizer Filters

Equalizer filters are often added to sound systems to compensate for the room damping or reverberation. The rough equivalent in a telescope is to add color filters to the optical system. The color filters perform a similar shift or emphasis in the frequency spectrum.

It is no whim or accident that the sonic signature of an individual room is called the room's coloration. Sound spaces with unusually high amounts of upholstery or curtain absorption are called "warm," meaning that they absorb high frequencies much more strongly than low ones. Red and orange colors are also called "warm." The cure for a warm sonic space is to boost high frequencies. Similarly, when we use a blue filter on Mars, we can see high dust clouds more easily because they reflect more of the blue sunlight than the ruddy Martian surface.

The brain itself acts as a compensating filter for the external world. Since the time of Edison, listeners have repeatedly declared their contemporary audio technology to be perfect, even when it was scratchy and indistinct. Our light perceptions are equally questionable. The brain will automatically adjust nearly any color balance it sees to the color balance it experiences beneath the Sun.

A good example is illumination under artificial lighting. Look into a fluorescent-lit room from the outdoors. Your color balance is held fixed by the brighter natural sunlight, so the room light you see inside is greenish. Mercury vapor lighting is composed (mostly) of two pure colors, a green line and a blue-violet one. Even under these extraordinary conditions, the eye manages to fool us most of the time. Only when the eye is confronted with just one color does it give up. Even then, I suspect if you gave the eye-brain system enough time, it would eventually adjust to perceive any single color as a kind of washed-out gray.

3.2.3 Image Processing/Signal Processing

So many techniques exist to process audio signals and images that they cannot all be included here. Let's look at an example—oversampling.

Sixteen-bit numbers are imprinted on a Compact Disc (CD) at the rate of 44.1 kHz, and they are read off the medium at the same rate. The values are then delivered to a digital-to-analog converter (DAC), which takes the numbers and "undoes" them back into a voltage that one hopes is a reasonable simulation of the original recorded signal.

One would think that 44.1 kHz would be more than fast enough to reproduce sound of less than half that frequency, but it turns out that this is only barely sufficient. If the original signal is 11 kHz, a sampling rate of 44 kHz yields only four values per wave (see Fig. 3-2a). The DAC is attempting to produce the dashed stair-step pattern, which does not simulate the original sinusoidal tone very well. The stair-step is rich in higher frequencies, called harmonics, of 22 kHz and higher. More exotic combinations of waveforms could actually create artifacts that would spill into audible frequencies.

Fig. 3-2. Sampling of digital audio signals.
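The four-values-per-wave figure is just the ratio of the two frequencies. A sketch of one cycle's worth of samples (holding each value until the next sample arrives is my simplification of what a real DAC does):

```python
import math

SAMPLE_RATE = 44_100   # Hz, the CD rate
TONE = 11_000          # Hz test tone

print(SAMPLE_RATE / TONE)   # about 4 samples per wave

# One cycle's worth of the stair-step: each of these values is held
# constant by the DAC until the next sample arrives (zero-order hold).
samples = [math.sin(2 * math.pi * TONE * n / SAMPLE_RATE)
           for n in range(4)]
```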

One could use electronic analog filters to reject DAC frequencies 20 kHz and higher, but it takes extremely sophisticated (i.e., expensive) electronics to produce a precipitous cutoff beyond a certain frequency. Inexpensive analog lowpass filters attenuate sound at a rate of, for example, 12 dB/octave, or a factor of 16 with each doubling of frequency. In other words, a perfect amplifier feeding one of these lowpass filters with an intensity of 1 at 11 kHz has an intensity of 1/16 at 22 kHz and 1/256 at 44 kHz. Unfortunately, some of the attenuation leaks into audible frequencies. The response curve is unacceptable for high-fidelity sound reproduction.
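Those attenuation factors follow directly from the 12 dB/octave slope. A minimal sketch (the function name is mine, and a real filter's behavior near the cutoff is messier than this idealized slope):

```python
import math

def power_attenuation(freq_hz, cutoff_hz, db_per_octave=12):
    """Idealized lowpass: power drops db_per_octave for each
    doubling of frequency above the cutoff."""
    if freq_hz <= cutoff_hz:
        return 1.0
    octaves = math.log2(freq_hz / cutoff_hz)
    return 10 ** (-db_per_octave * octaves / 10)

# 12 dB is strictly a power factor of 10**1.2 = 15.85 per octave;
# the text rounds this to 16 (and 16*16 to 256).
print(round(1 / power_attenuation(22_000, 11_000)))   # 16
print(round(1 / power_attenuation(44_000, 11_000)))   # 251
```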

The solution often used is to digitally "oversample" the signal. A DAC that operates at 88.2 kHz is used, and the signal is sampled at twice the usual rate. Using a digital filter with an interpolation algorithm (algorithm is a fancy word for recipe), one can achieve something like the dotted line of Fig. 3-2b. The analog lowpass smoothing can then be readily applied with a cutoff frequency of 44 kHz instead of the more objectionable 22 kHz. Of course, actual CD players probably use much more sophisticated filters and algorithms than this one. This scheme has been presented only to give you a feel for the processing (Strong and Plitnick 1992, p. 440).
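A toy version of the interpolation step described above (linear interpolation is the simplest possible choice; as the text notes, real players use far more sophisticated digital filters):

```python
def oversample_2x(samples):
    """Double the sample rate by inserting a linearly interpolated
    value between each pair of neighboring samples."""
    out = []
    for a, b in zip(samples, samples[1:]):
        out.append(a)
        out.append((a + b) / 2)
    out.append(samples[-1])
    return out

print(oversample_2x([0.0, 1.0, 0.0, -1.0]))
# [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0]
```

The doubled stream changes more gently from step to step, so the analog smoothing filter that follows can have its cutoff moved an octave higher, away from the audible band.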

The simple act of choosing a higher-powered eyepiece is itself a form of oversampling (see Fig. 3-3). Here, you are giving up the outer portions of the image area in favor of expanding the signal over more retinal receptors. Areas of the image can be identified that have a fairly constant signal strength and coloration, but if they are smaller than a retinal receptor, that intensity will be mixed together with a nearby area.

Fig. 3-3. Using higher magnification is a form of oversampling. Here, the object seen at higher power is spread over four times the area of the retina, allowing more receptors to participate in averaging the image.

The retina, like any other light sensor, is noisy. One mechanism the visual system uses to suppress noise is to consider receptors in batches. A detail that shows on only one receptor may be interpreted as noise and ignored unless it is very bright or contrasty. Going to higher magnification allows you to expand small areas until they cover more individual receptors. The average over many noisy detectors is a less noisy number, so lower contrast details can be seen. This procedure works on low contrast details even when the magnification expands the blurring of the telescope beyond the best resolution of the eye, which makes it a true form of oversampling. The eye has a maximum resolution of about 1 arcminute, but the image continues to improve until magnification drives the radius of the Airy disk beyond 4 to 8 arcminutes.
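The noise-averaging argument can be simulated directly. In the sketch below (my own toy model, not a model of the retina), each "receptor" reads a unit-noise value; averaging four of them halves the residual noise, as the 1/√N behavior of averaged noise predicts:

```python
import random
import statistics

random.seed(0)

def averaged_noise(n_receptors, trials=5000):
    """Standard deviation of the mean of n noisy receptor readings."""
    means = [statistics.fmean(random.gauss(0, 1) for _ in range(n_receptors))
             for _ in range(trials)]
    return statistics.stdev(means)

print(round(averaged_noise(1), 1))   # about 1.0
print(round(averaged_noise(4), 1))   # about 0.5
```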

3.2.4 Scattered Light/Audio Noise

Every sound system is troubled by noise. In some cases, this noise can be objectionable (as in scratchy old phonograph disks) and in other cases the noise can be imperceptible (as in modern digital systems). The absolute level of the noise is less important than its signal-to-noise ratio, or SNR. When the music is soft, a constant low level of noise can become unacceptable because it now sounds relatively strong compared with the interesting signal. However, the same level of noise goes undetected when the music is loud.

One obtains a better appreciation of the relative effects of noise by compressing the signal-to-noise ratio logarithmically to the decibel scale,

SNR (dB) = 10 log10(Is/In),

where Is is the power of the signal and In is the power of the noise. For the strongest signals in a typical CD, the digital SNR is somewhere above 96 dB. For a high quality analog cassette tape, a similar number might be around 55 dB (Strong and Plitnick 1992, p. 441). Of course, the lowering of quality caused by passage through the rest of the sound reproduction system lowers these SNRs a good deal. Noise begins to become objectionable when it is 20 dB below the interesting signal and offensive when it is 10 dB below what one is trying to perceive. When SNR reaches 0 dB, or conditions where noise and signal have equal intensities, people have a difficult time recognizing spoken single words, only catching about 70% of them (Kinsler et al. 1982, p. 284).
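The decibel conversion is a one-liner; a quick sketch with two of the ratios used in this chapter:

```python
import math

def snr_db(signal_power, noise_power):
    """Signal-to-noise ratio on the decibel scale."""
    return 10 * math.log10(signal_power / noise_power)

print(snr_db(1_000_000, 1))   # 60.0 dB: a million-to-one power ratio
print(snr_db(1, 1))           # 0.0 dB: noise as strong as the signal
```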

The analogue to noise in optical systems is scattered light. Say that the surfaces of your optics are dirty, or they are rough on the wavelength scale. Some light diffracts from the tiny irregularities and is scattered beyond the image of the object. If you are trying to observe a very dim object right next to a bright object, the smearing of the light, even though it is a very tiny fraction of the bright object's light, can be strong enough to render the dim object unobservable. The "noise floor" has risen enough to be objectionable. Contrast is appreciably reduced.

We can calculate the scattering from a single round piece of dust, 1/1,000 of the diameter of the aperture across (for a 200-mm telescope, the speck of dust would be a 0.2 mm disk). One millionth of the energy incident on the aperture would hit the rear side of the speck and be absorbed or reflected. However, we would see a fairly normal Airy disk that has two-millionths of the energy missing (van de Hulst 1981). The other millionth part of the energy has been scattered throughout the field of view. If we assume that we are looking at a large extended object and that none of the scattered light has been lost outside the region of interest, then the signal-to-noise ratio is as low as 10 log10(1,000,000) = 60 dB, still far below the noise level of good magnetic tape. At 1,000 specks, we would find that our SNR could be well on the way to becoming noticeable at 30 dB. Thirty decibels is approximately 7.5 magnitudes. Thus, if we were looking at a first magnitude star, we would see the scattered light in a fuzzy glow with a total brightness of 8.5 magnitude.
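The dust-speck bookkeeping above, as a sketch (function names mine; the 4 dB per magnitude conversion follows from comparing the 10 log10 of the decibel scale with the 2.5 log10 of the magnitude scale):

```python
import math

def speck_snr_db(n_specks, diameter_ratio=1/1000):
    """SNR against light scattered by round specks, each a given
    fraction of the aperture diameter: the area ratio per speck is
    the diameter ratio squared."""
    scattered_fraction = n_specks * diameter_ratio ** 2
    return 10 * math.log10(1 / scattered_fraction)

def db_to_magnitudes(db):
    # 2.5 log10 per magnitude vs. 10 log10 per dB: 4 dB = 1 magnitude.
    return db / 4

print(round(speck_snr_db(1)))      # 60 dB for a single speck
print(round(speck_snr_db(1000)))   # 30 dB for a thousand specks
print(db_to_magnitudes(30))        # 7.5 magnitudes
```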

Scattered light only damages the image in specialized observing situations. Noise that is 24 dB down is only about as bright as the second ring of a perfect diffraction pattern. Such light would only be troubling if it covered something dim, a situation that is seldom the case in dark-field observation. Scattering is a worse problem in solar and lunar observation, or in the perception of very low contrast detail on planets.
