where R is the approximate radius of curvature of the mirror and D is its diameter. LA means longitudinal aberration, the stretch of the black bar in the diagram. For a hyperboloid, LA is a larger number, and for a prolate spheroid it is smaller.
If an optical worker measures the shift between situations A and C and keeps changing the shape of the mirror until that shift is at or a little less than D²/4R, then the paraboloid is closely approximated. Before Foucault, mirror making had been the province of high art and not a small amount of guesswork. Mirror testing at the simplest level had been reduced to finding the appearance of the center zone (A) and the edge zone (C) and measuring the distance between them. (See Suiter 1988.)
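The D²/4R relation can be checked with a one-line calculation. The following sketch (the 150-mm f/8 mirror is the example used later in this appendix) simply evaluates the formula from the text:

```python
def knife_shift(D, R):
    """Expected knife-edge travel D^2 / (4R) between the center
    reading (A) and the edge reading (C) for a paraboloid tested
    at its center of curvature (formula quoted in the text)."""
    return D * D / (4.0 * R)

# Example: a 150-mm f/8 mirror; R is roughly twice the focal length,
# so R = 2 * (8 * 150) = 2400 mm.
shift_mm = knife_shift(150.0, 2400.0)   # about 2.3 mm
```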
Examples of these situations photographed on a real mirror appear in Fig. A-3. You can also see a tiny central button zone. Because of its location in the shadow of the diagonal, it would not harm the images.
The contours seemingly shown by the imaginary side lamp have apparent amplitudes of something like 3 to 6 mm for a 150-mm f/8 paraboloid. Since this paraboloid departs from a sphere by about 1/8 wavelength, we can calculate a synthetic magnification of errors that amounts to about 4 mm/0.00007 mm or 60,000.
This development, coupled with the technology for depositing metal films on glass, set the stage for the huge reflectors of the 20th century. Foucault is the godfather of the massive instruments we use today.
Foucault's knife-edge test is sensitive and proven, but it is not recommended as a final evaluation, for several reasons:
1. The test requires some practice. One must be skilled in setting up, aligning, and interpreting the Foucault test.
2. It does not allow testing of the convex elements of compound optical systems without expensive additional hardware. Except for an auto-collimation test against a huge optical flat, one must disassemble the telescope and test individual pieces, some of which may not be easy to remove.
Fig. A-2. A drawing of three knife locations and the resulting appearance of a parabolic mirror in the Foucault test. From How to Make a Telescope by Jean Texereau, Copyright ©1984 by Willmann-Bell, Inc.
Fig. A-3. Photographs of the three cases of Fig. A-2 on a paraboloidal mirror. Bright streaks are spurious reflections of the slit. (Photographs by William Herbert of Columbus, Ohio.)
3. In its more sophisticated forms, it requires a tiresome and easily bungled mathematical reduction procedure. Computers can help with this calculation, but such computations can be mishandled at the input stage.
4. It requires that an elaborate knife edge tester be constructed. Simpler testers could be made when people only tested long-focus Newtonian mirrors, but fast instruments give little room for error. Very good motion platforms—called kinematic stages—and compact source/knife assemblies must be built or purchased.
Variants of the Foucault test include the caustic test and the wire test. All are more or less bothered by these same difficulties.
One other use of the knife edge test would be applicable to all forms of optics if it were more convenient. Using a knife edge at the focus of a star recovers the conditions that led to the gray, flat appearance of a sphere at the center of curvature. This method is easiest for owners of an excellent clock drive and a heavy, unshakable mounting, because they can follow the brightest stars. Some sort of method for gradually introducing the knife into the focused beam is also necessary because this variant of the Foucault test is extremely touchy.
Since few telescopes are likely to perfectly null, those wanting to use a knife-edge at focus should provide their testing setup with a method to measure the length of focus shift from a situation resembling A in Fig. A-2 to situation C. In a typical test, this focus shift should be below 100 µm (0.004 inch). Using the artificial star described in Chapter 5 eases the mounting, clock drive, and illumination problems, but some sort of measurement screw must still be placed on the focuser.
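As a rough sanity check on the 100-µm figure, we can evaluate the focus-region length 2.44λF² that is derived later in this appendix. A minimal sketch, assuming 550-nm light:

```python
def focus_region_um(focal_ratio, wavelength_nm=550.0):
    """Length 2.44 * lambda * F^2 of the region over which ray
    crossing points still fall inside the diffraction disk
    (a result derived later in this appendix), in micrometers."""
    return 2.44 * (wavelength_nm / 1000.0) * focal_ratio ** 2

# For an f/8 instrument this is roughly 86 um, the same order as
# the ~100 um (0.004 inch) tolerance quoted in the text.
x_f8 = focus_region_um(8.0)
```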
This test, developed by J. Hartmann around the year 1900, is used most often to check the surface of very large observatory instruments. A screen is centered over the objective lens or mirror. Carefully sized and placed holes are cut into this screen. Then the telescope is pointed at a distant star or artificial source at the center of curvature.
Two photographic plates are then exposed, one inside of focus and the other outside. After they are developed, they are carefully measured. If the test is successful, the images of the holes are identifiable, resolved, and not so large that their positions are uncertain. Two pictures aren't really necessary when the positions of the holes over the mirror are very accurately known, but caution dictates a second photograph.
Finally, the corresponding dots in the two pictures are connected mathematically. Once the intersections of the dot pairs are known, they are entered into expressions that convert longitudinal aberrations to wavefront error, provided that the surface is deformed smoothly. See Fig. A-4 for a diagram depicting the manner in which the Hartmann test is used to measure aberration. A two-hole version of the Hartmann arrangement is occasionally used today as a focusing aid (Suiter 1987).
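The geometric step of connecting dot pairs can be sketched as follows. This is only a minimal illustration under the assumption of straight rays, not the full reduction procedure; the function name and the example plate positions are hypothetical:

```python
def axial_crossing(z1, r1, z2, r2):
    """Extend the straight ray through one hole's dot at radial
    offset r1 on the plate at axial position z1 and offset r2 on
    the plate at z2, and return the z where it crosses the axis
    (r = 0). Differences in crossing z between zones give the
    longitudinal aberration."""
    slope = (r2 - r1) / (z2 - z1)
    return z1 - r1 / slope

# A ray recorded 1.0 mm off-axis 10 mm inside focus and 1.0 mm
# off-axis on the other side 10 mm outside focus crosses at z = 0.
z_cross = axial_crossing(-10.0, 1.0, 10.0, -1.0)
```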
This method is not recommended for first-time testers for the following reasons:
1. These photographs are best taken on glass plates. The region of interest near focus is small and photographic film is flexible. Both plates and equipment to handle them are expensive.
2. The two plates should be held precisely perpendicular to the optical axis and each other during exposure.
3. The measurements of dot positions are typically made with a plate measuring engine, which looks like a microscope or a microfiche reader on a milling-machine bed. Although they are common enough in professional observatories, they are not available to the rest of us.
4. The mathematical reductions are of about the same complexity as advanced versions of the Foucault test, but because of the number of connected dots, many more calculations must be done. This data analysis may be a very tiresome chore. (See Danjon and Couder 1935 for an early reduction procedure that led to the procedure appearing in Texereau 1984.)
While determining the resolving power of a telescope is not a complete optical test, many people treat it as such. Thus, it deserves mention here. This method of evaluating telescope images became popular during the 19th century, when double stars were the subject of very active research. Observers who were primarily interested in the clean separation of two stars began to judge the performance of their telescopes entirely by this characteristic.
Certainly, a telescope that fails to show double stars close to the expected resolution is displaying one of the symptoms of poor optics, but other types of equally bothersome optical difficulties do not betray themselves this way. Spherical aberration of 1/4 wavelength insignificantly damages the telescope's ability to split stars. (See the encircled energy plot in the chapter on spherical aberration.) However, on planetary detail requiring only moderate resolution, optics with correction errors present distinctly fuzzy images.
Figure A-5a displays the various criteria of resolution. The first is called the Rayleigh criterion, which is not to be confused with the 1/4-wavelength Rayleigh limit of wavefront error. The Rayleigh resolution criterion is met when the separation of the two objects is precisely at the radius of the theoretical Airy disk. In other words, the second star is placed on the valley between the first star's central disk and the first diffraction ring.
The degree to which the Rayleigh criterion divides the stars varies with details of the diffracting aperture. Different obstructions and aberrations result in differing depths of the "saddle" between the stars. For a perfect circular aperture with no obstruction and no aberrations, the dip between the stars is to about 70% of the brightest intensity.
The second criterion was stated by double star observer W.R. Dawes in 1867 (Sidgwick 1955). It applies only to unobstructed apertures dividing equal stars. In the Dawes criterion, the separation is a little less than 85% of the separation defined in the Rayleigh criterion. Fig. A-5a illustrates it
Fig. A-5. Summed intensity profiles of two equal stars separated at Sparrow's, Dawes', and Rayleigh's criteria, with the left star always placed at zero. Angle is in reduced units (Airy disk radius = 1.22). a) Unobstructed aperture. b) Resolution profiles for a 50% obstructed aperture.
as the intensity curve with the small intensity drop between the stars. This drop is only about 1/30 magnitude less than the maximum intensity. When doubles are this tight, their lack of roundness contributes as much as or more than the intensity drop between them to distinguishing duality.
The third and narrowest separation is called Sparrow's criterion, which is defined as the separation that results in a flat isthmus between the stars.
The Sparrow criterion is adjusted for obstructed and aberrated apertures until it always gives that flat region between the stars. Thus, it is always uniquely defined and always delivers the same behavior, but its exact separation varies with details of the aperture.
Sparrow's criterion for unobstructed apertures has stars separated by about 92% of Dawes' criterion or about 77% of Rayleigh's separation. In the perfect, unobstructed aperture's MTF chart of Fig. 3-6, the Sparrow criterion is found at a spatial frequency off the graph at 1.06Smax. This placement doesn't mean the resolution is illusory. Some double-star observers have approached or even exceeded this value. Stars are point objects, while the MTF target consists of bars. Points can be resolved by using the shape of the image as the only discriminator.
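Plugging the quoted percentages into the Rayleigh formula gives concrete numbers. A sketch, assuming a 150-mm aperture and 550-nm light; the 85% and 77% factors are the approximate ratios stated in the text:

```python
import math

def rayleigh_arcsec(D_mm, wavelength_nm=550.0):
    """Rayleigh separation 1.22 * lambda / D, in arcseconds."""
    theta = 1.22 * (wavelength_nm * 1e-9) / (D_mm * 1e-3)  # radians
    return math.degrees(theta) * 3600.0

r = rayleigh_arcsec(150.0)   # about 0.92" for a 150-mm aperture
dawes = 0.85 * r             # "a little less than 85%" of Rayleigh
sparrow = 0.77 * r           # about 77% of Rayleigh (unobstructed)
```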
Figure A-5b depicts one of the most disturbing aspects of using double stars as test objects. The summed diffraction patterns of two stars seen in a 50% obstructed aperture are calculated with the same stellar separations as were used in the unobstructed aperture of Fig. A-5a. For all three curves, equal separations deliver stronger dips in intensity between the stars. To recover the behaviors of Fig. A-5a, the stars must be pushed closer together. In short, the 50% obstructed aperture resolves better.
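The stronger dip can be verified numerically by summing the point-spread functions of the two apertures. A self-contained sketch using the standard annular-aperture intensity formula; the Bessel function is computed from its integral representation to avoid external libraries:

```python
import math

def j1(x):
    """Bessel J1 via trapezoidal integration of
    J1(x) = (1/pi) * integral_0^pi cos(t - x*sin(t)) dt."""
    n = 2000
    h = math.pi / n
    total = 0.0
    for k in range(n + 1):
        w = 0.5 if k in (0, n) else 1.0
        total += w * math.cos(k * h - x * math.sin(k * h))
    return total * h / math.pi

def intensity(v, eps):
    """PSF of a circular aperture with a central obstruction of
    fractional diameter eps, normalized to 1.0 on axis."""
    if v == 0:
        return 1.0
    amp = 2.0 * j1(v) / v
    if eps > 0:
        amp -= eps * eps * 2.0 * j1(eps * v) / (eps * v)
    return (amp / (1.0 - eps * eps)) ** 2

def dip_ratio(eps, sep=3.8317):
    """Saddle-to-peak ratio for two equal stars placed at the
    unobstructed Rayleigh separation (first zero of the Airy disk)."""
    pair = lambda v: intensity(abs(v), eps) + intensity(abs(v - sep), eps)
    return pair(sep / 2.0) / pair(0.0)

clear = dip_ratio(0.0)     # ~0.73: the "about 70%" dip of Fig. A-5a
blocked = dip_ratio(0.5)   # noticeably deeper dip at the same separation
```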
Double-star resolution doesn't tell us much about other types of observing performance,1 particularly in that case where the 50% blocked aperture will fail badly. Planetary images are greatly degraded by such severe obstruction. Blocking the aperture kicks a large fraction of the light outside the central spot into distant portions of the point image. Resolution of a nearby star is little affected, because the star is close to the center of the image and the light diffracted by the obstruction is largely beyond it.
If the desired object were a small, low-contrast crater on the Moon, the whole image is the sum of all light scattered from light patches around the crater as well as the image of the crater. Much of this spurious light is joined together to fog the interesting image. The scattering from any single diffuse spot is not enough to seriously damage the image by itself, but the combined effect of all of them worsens contrast considerably.
Blocking the aperture by a 50% central obstruction helps some double stars resolve, but the same obstruction leads to severe image degradation of planetary detail. You can easily verify this result yourself. Look at the Moon or a planet some night with and without a large paper obstruction in front of the aperture. Unless the telescope's obstruction is high already, the artificially blocked aperture will appear much worse.
Using the resolution of double stars as the sole criterion of optical quality, the astronomer is demanding that the judgement favor high spatial
1 Dawes himself was well aware of this difficulty, even though his name is attached firmly to the misuse of this test as an indicator of optical quality.
frequencies. In the case of obstructed apertures, the spatial frequency response has been robbed of some of its intermediate frequency strength to achieve better contrast at higher frequencies.
Double stars as test objects present other difficulties. Variability is associated with the sky and with the stars themselves. Ideally, one should use equal-brightness white stars separated at the diffraction limit and demand that they be high in the sky. Few stars conveniently arrange themselves this way. Usually, the test must be done with unequal-brightness stars separated by a distance close to your instrument's Dawes or Rayleigh criterion but not precisely at it. One of the stars may be tinged with blue or red, and the pair may sit at low altitude, where atmospheric dispersion smears the images into rainbow-like spectra. "Seeing" will constrict the number of nights on which double star tests can be attempted, particularly for large instruments.
Artificial double stars can be used to test resolution, but they cannot cure the basic inadequacies of the evaluation technique. Because very high magnification must be used, ground turbulence must be low. By bringing the source close to the telescope, one writer avoided the problem of turbulence. In this case, the source was only about 10 meters away (Maurer 1991). However, such a close distance makes such a test an unreliable check for aberrations (see Chapter 5).
Atmospheric problems also trouble the star test, but because the stars are considered singly, you have more freedom to choose one at high altitude. Since the star test involves inspecting the much larger defocused stellar disk, it does not always require that seeing be perfect.
For the above reasons, the double star resolution test is not recommended as an all-purpose test. In brief summation:
1. Resolution tests are interesting to those who are concerned with double stars. Resolution is of little use as a general indicator of telescope performance because it favors high spatial frequencies.
2. Suitable target stars are difficult to find. Artificial sources have their own pitfalls.
3. Seeing must be superb before the double-star resolution test yields interesting results.
A test is attractive if it doesn't require a great deal of data reduction or interpretation. One such test involves placing a coarse periodic grating of dark and transparent bars near stellar focus of the instrument. An example frequency of such a grating is 100 lines/inch (4 lines/mm), with a corresponding period of 0.01 inch (0.25 mm). This method was investigated by Vasco Ronchi around 1923 and thus carries the name the Ronchi test.
Interpretation, at least when we treat light as being composed of rays, is simple. Supposedly, light has originated from a point source very far away (like a star). If the star is correctly imaged, it should be focused to a single point. Starlight passes through a grid of straight, opaque bars, which remove the light rays that strike them. The perceived pattern on the mirror is just the shadows of those vertical bars.
If the optics have aberrations associated with them, the light from different zonal radii is not diverted toward a single focus. In the case of spherical aberration, various axis crossing points are distributed near the region of the caustic. Because the grating lies at various distances from these points, the spacings of the grating projected on the aperture seem to vary from radius to radius. In the case of undercorrection, this effect is portrayed in Fig. A-6.
Fig. A-6. A Ronchi ruling is placed near the focus of a star and examined without an eyepiece, a) If the rays are all directed to a common focus, the optics are perfect, b) If the lines distort, they indicate aberrations. The Ronchi grating need not be confined to the tiny circle. Any stripes outside the illuminated portion of the light cone do not contribute.
If the grating is 100 lines/inch and the aperture has a focal ratio of f/8, the 2.5 periods (or "lines") shown in the illuminated part of the grating would place the grating about 0.2 inch inside focus, with the illuminated patch only 0.025 inch in diameter.
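This geometry can be confirmed with a two-line calculation. A sketch, assuming a perfect converging cone:

```python
def ronchi_position(lines_per_inch, focal_ratio, lines_showing):
    """For a perfect converging cone, return (distance inside focus,
    beam diameter at the grating), both in inches, when the stated
    number of complete grating periods spans the illuminated circle."""
    beam = lines_showing / lines_per_inch   # width of the shown periods
    return beam * focal_ratio, beam

# 100 lines/inch at f/8 showing 2.5 lines -> 0.2 inch inside focus,
# illuminated patch 0.025 inch across, matching the text.
dist, diam = ronchi_position(100, 8, 2.5)
```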
It seems as if the Ronchi test is the final answer, a clear and unambiguous null test. Unfortunately, it doesn't always work that way.
b) Focus region of undercorrected optics.
A 10-inch f/6 mirror came to my attention in 1980. In spite of the fact that it gave soft images, it passed the Ronchi test using a grating of 100 lines/inch. It failed the star test in a stunning manner, however. When removed from the tube and tested with a more elaborate variation of the Foucault test, the mirror showed a 1/2-wavelength undercorrection.
Still, it had passed the Ronchi test at focus. Something was wrong, either with one of the tests or the mirror. Consider the aberrated Ronchi drawing of Fig. A-6 again. The distortion of the pattern seen against the mirror is caused by the change in focus distance as measured from the grating. As depicted, the focus difference (or the longitudinal aberration) is about a third of the distance from the grating to the average point of focus, causing severely distorted Ronchi lines projected on the mirror. The density of lines at the center of the aperture is much higher than at the edge. One might call this curvature a "33% distortion." If the region of focus is a much smaller fraction of the distance to the grating, distortions can be appreciably lower. For optics of conventional focal ratios and diameters, this low-distortion condition is true much of the time.
Fig. A-7. A similar-triangles setup used to estimate the Ronchi curvatures. The particular crossing points chosen here would be characteristic of overcorrection, but the derivation in the text would not be affected if they were reversed.
One can easily calculate the length of the focus region for optics diverting light to the edge of the diffraction disk. If we draw the similar triangles of Fig. A-7, the outside triangle has edges of height D/2 (half the aperture) and base f. The triangle outside of focus has height d and base x, with d being the radius of the diffraction disk. The radius of the Airy disk is (1.22)(wavelength)(focal ratio), or 1.22λF. We can use similar triangles to show that x = 2.44λF². Because we must regard a finite-sized region to estimate the number of lines showing, the density of lines at the edge is compared with the density of lines at a point roughly halfway out the radius of the mirror. The dashed line, because it has only half the slope, can cross over at about twice the distance x from the point of focus.2
If light from the edge is deflected to cross the axis at more than 3x from light at the 50%-radius zone, it begins to miss the diffraction disk entirely and signs of severe optical degradation become visible.
The vertical dashed line depicts a grating located between the aperture and the region of focus. It has the same 2.5 lines as Fig. A-6, measured for the outside zone. Above the center, 1.25 lines are showing (a "line" is a complete on-off cycle, or a period). The height of this little triangle is therefore 1.25/N, where N is the number of lines per inch (or mm) of the grating. The base length of this triangle is left unknown and is called x'. The big triangle is the same as before. Using similar triangles again, one can show that x' = 2.5F/N.
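Putting the pieces together for the problem mirror gives a feel for why it passed. A hedged numerical sketch, assuming 550-nm light and taking the 3x allowance of the preceding paragraph as the largest crossing-point spread that still looks acceptable (combining 3x with x' this way is one plausible reading of the distortion estimate, not a formula from the text):

```python
WL_IN = 550e-9 / 0.0254   # 550 nm expressed in inches, ~2.17e-5

def focus_region(F, wl=WL_IN):
    """x = 2.44 * lambda * F^2 from the similar-triangles argument
    (inches)."""
    return 2.44 * wl * F * F

def grating_distance(F, N, lines_showing=2.5):
    """x' = 2.5 * F / N: distance from grating to focus when 2.5
    grating periods span the illuminated cone (inches)."""
    return lines_showing * F / N

# The 10-inch f/6 mirror tested with a 100 line/inch grating:
x = focus_region(6.0)                # ~0.0019 inch
xp = grating_distance(6.0, 100.0)    # 0.15 inch
distortion = 3.0 * x / xp            # only a few percent: hard to see
```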
To calculate the distortion, we estimate the tolerable focus shift divided by the distance to the grating, or