Figure 6.1. Accuracy and precision. Accurate measurements have the right average value; precise measurements are tightly bunched. High accuracy and high precision is best. High accuracy and low precision isn't so great, but it is better than high precision and low accuracy, which conveys a meretricious air of authority to a misleading result.

Hubble himself had systematic problems in correctly connecting the dots that stars formed on photographic plates with the true brightness of those stars. Just because your measurements agree with one another is not a guarantee you're doing things right.

There are other subtle ways to go wrong that have to do with what's in your sample of objects to measure. Suppose the stack of cardboard has some thin shirt cardboards from a Rhodes Scholar's starched Oxford cloth shirts mixed in with the corrugated sheets from the plate box. Even if you carefully measure a hundred sheets from this stack, correctly compute the average, and conscientiously report it to 1/1000 of an inch, you will still have the wrong value for the corrugated sheets alone because some interlopers crept into the sample. If you don't have a clear enough understanding of the objects, sampling errors sneak in.
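To see how much a few interlopers can matter, here is a minimal sketch in Python; every thickness in it is invented for illustration:

```python
import random

random.seed(42)

# Hypothetical thicknesses, in inches: corrugated sheets versus thin
# shirt cardboards that snuck into the stack. All numbers invented.
corrugated = [random.gauss(0.160, 0.002) for _ in range(90)]
interlopers = [random.gauss(0.030, 0.002) for _ in range(10)]

sample = corrugated + interlopers
mean_all = sum(sample) / len(sample)
mean_true = sum(corrugated) / len(corrugated)

print(f"True corrugated average: {mean_true:.3f} in")
print(f"Contaminated average:    {mean_all:.3f} in")
# The contaminated mean is biased low, no matter how carefully
# each individual sheet was measured or how many digits you report.
```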

Astronomers are especially vulnerable to a tricky sampling error based on picking objects that are bright enough for you to see. It is such a frequent problem it has its own proper name: Malmquist bias. Nearby, you can see bright supernovae and dim ones. But as you look at more and more distant samples, as the dimming caused by the inverse square law limits your ability to detect objects, the only objects you will see are the especially bright ones, because the dim ones don't make it over your detection threshold. The average intrinsic brightness of your sample creeps higher and higher as you look at more distant objects. If you're judging distance from apparent brightness, the distance scale gets compressed at large distances. This is very bad. It's like having a tape measure that starts out with tick marks spaced evenly, but which subtly shifts scale as it plays out. You will systematically underestimate the largest distances because you are inadvertently selecting just the brightest supernovae. You become a victim of Malmquist bias.
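Malmquist bias is easy to reproduce in a toy model. The sketch below draws objects with a spread of intrinsic luminosities, applies the inverse square law, and keeps only those above a detection threshold. The luminosity distribution and flux limit are invented, but the climbing average is the real effect:

```python
import random

random.seed(1)

def observed_sample(distance, n=100_000, flux_limit=1.0):
    """Keep only the objects at this distance that clear the flux limit."""
    kept = []
    for _ in range(n):
        luminosity = random.lognormvariate(0.0, 0.5)  # intrinsic spread
        flux = luminosity / distance**2               # inverse square law
        if flux >= flux_limit:
            kept.append(luminosity)
    return kept

for d in (0.5, 1.0, 2.0):
    lums = observed_sample(d)
    mean_l = sum(lums) / len(lums)
    print(f"distance {d}: mean luminosity of detected objects = {mean_l:.2f}")
# The mean intrinsic luminosity of the detected sample climbs with
# distance: far away, only the brightest objects clear the flux limit.
```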

It's a little like looking at people walking by on the sidewalk from a ground-floor window. If your window stretches down to the floor, you will see tall people and short people, Chihuahuas and Great Danes passing by. But if the windowsill is 6 feet off the ground, you'll miss all the dogs and children and you might conclude that everybody in your town is 6 feet tall. It's just everybody you can see. And that's not always the same thing.

And there are ways to goof that are not so subtle. Suppose the fine Swiss micrometer you are using is calibrated in centimeters, but you're a rocket scientist in the United States and you think it is in inches. This type of error is known technically as a scale error or "stupid mistake." Everything you report will be off by a factor of 2.54, even though every measurement looks as if it is good to 0.1 percent. These measurements will be precise, but not accurate, and your spacecraft will disappear near Mars instead of landing on it. Usually, the evidence that you are making an error is not so vivid.7
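A few made-up numbers show how good the bad data can look:

```python
# Hypothetical micrometer readings: the instrument reports centimeters,
# but the analyst records them as inches. All values invented.
readings_cm = [2.540, 2.541, 2.539, 2.540, 2.542]  # tightly bunched: precise

mean = sum(readings_cm) / len(readings_cm)
spread = max(readings_cm) - min(readings_cm)

print(f"mean = {mean:.3f}, spread = {spread:.3f}")  # good to about 0.1 percent
# If these centimeters are mislabeled as inches, every derived length is
# wrong by a factor of 2.54: high precision, zero accuracy.
```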

Systematic errors are much worse than crude measurements with big measuring uncertainties. Instead of leading to vague conclusions, a measurement that is precise but not accurate can lead to strongly held, but wrong, conclusions. Hubble's measurement of 528 kilometers per second per megaparsec has an air of precision (it's that "28" that makes you think he's pinned it down to just a few kilometers per second per megaparsec), but it was seriously inaccurate because of several subtle systematic errors that crept into his measurements during the night. For example, Hubble identified cepheid variables in nearby galaxies and then compared their brightness to similar cepheids in the Magellanic Clouds. The distances to the Magellanic Clouds were wrong by a factor of 3. This is like having a micrometer that you think is measuring inches, but it's really centimeters. When you use the work of others, if they are wrong, you get the wrong answer too.

The cepheids turned out to be more complex than Hubble knew, and he was lumping together two different types of variable stars when he made his measurements. This is a little like having two kinds of cardboard in the stack. To get out to the Virgo Cluster, at a redshift of about 1200 kilometers per second, Hubble needed something brighter than the cepheids. But it turns out that the brightest "stars" that he picked out from his Mount Wilson plates to measure those distances were not really stars at all. They were giant clouds of gas, glowing because the gas is excited by the ultraviolet light of many massive stars within. This is a little like squashing the cardboard—you're not measuring the quantity you think you are and you do it over and over and over without realizing it. The accumulation of all these systematic errors in Hubble's work is very significant. Modern values for the Hubble constant are seven times smaller than Hubble's, and modern values for the Hubble time are seven times larger. While Hubble's Hubble constant corresponded to a disturbingly short Hubble time of 2 billion years, today there is reasonable agreement between the Hubble time and stellar timescales if the universe has been expanding at a constant rate. That's a big "if" because it is exceedingly difficult to measure changes in cosmic expansion.
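The factor of seven is easy to check. The Hubble time is just 1/H₀, and converting the units reproduces the timescales in the text; the constants below are the standard kilometers-per-megaparsec and seconds-per-year conversions:

```python
# Hubble time t_H = 1 / H0, converted to years.
KM_PER_MPC = 3.086e19       # kilometers in one megaparsec
SECONDS_PER_YEAR = 3.156e7  # seconds in one year

def hubble_time_years(h0_km_s_mpc):
    seconds = KM_PER_MPC / h0_km_s_mpc  # 1/H0 in seconds
    return seconds / SECONDS_PER_YEAR

for h0 in (528, 70):
    print(f"H0 = {h0} km/s/Mpc -> t_H = {hubble_time_years(h0):.2e} years")
# 528 gives about 1.9 billion years; 70 gives about 14 billion years,
# roughly seven times longer, as the text says.
```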

We have good reasons to believe that the errors of measurement for the Hubble constant are smaller today than they were in the 1930s. But systematic errors that depend on the distance to the Magellanic Clouds, understanding the types of supernovae, the properties of cepheids in different settings, possible confusion of a single star with many, and the dreaded Malmquist bias are still with us. The challenge for observational astronomers is to anticipate possible systematic errors and then try to limit them through measurement. But the human mind is fallible and there are many ways to make subtle, but significant, errors or even mistakes, which are not improved by a refined statistical treatment of the data. Sometimes it's the problem you didn't think of that jumps up to bite you, as with the pulsar in SN 1987A. You can be sure you've got the right answer only when independent lines of evidence converge. Different groups of people and independent ways of making the measurement guard against the many forms of error.

Today there is still a lively discussion on the numerical value of the Hubble constant. The relation between distance and redshift itself is not in doubt, but the numerical value of the slope, which is H₀, has been difficult to measure accurately. Hubble's old value of 528 kilometers per second per megaparsec is so far off the mark that it hasn't been part of the modern discussion. Values since 1950 range from 50 to 100 kilometers per second per megaparsec, with the most recent measurements narrowing down to the range from 60 to 80. In this book, I use 70 ± 7 kilometers per second per megaparsec because I think it well represents today's data, especially the data from supernovae.
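Given a value for the slope, turning a redshift into a distance is one division (v = H₀d, so d = v/H₀), and the fractional uncertainty in H₀ passes straight through to the distance. A minimal sketch, using the book's 70 ± 7 and the Virgo Cluster velocity quoted earlier:

```python
# d = v / H0, with the 10 percent uncertainty in H0 carried along.
H0, H0_ERR = 70.0, 7.0  # km/s/Mpc, the value adopted in the text

def distance_mpc(velocity_km_s):
    d = velocity_km_s / H0
    # A 10% uncertainty in H0 maps directly into a 10% uncertainty in d.
    return d, d * (H0_ERR / H0)

d, err = distance_mpc(1200.0)  # roughly the Virgo Cluster's redshift
print(f"d = {d:.0f} +/- {err:.0f} Mpc")  # about 17 +/- 2 megaparsecs
```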

The basic techniques that Hubble used in the 1920s are still right at the center of modern measurement. Cepheid variable stars play the leading role in establishing the cosmic expansion rate, just as they did in the era of silent films. What has changed are the tools for measuring light from distant stars.

Galileo led the way in applying telescopes to astronomy. When you go to Florence, you can nip up to the Museum of Science while somebody is holding your place in the long line for the Uffizi Gallery. Galileo's 1610 lens is enshrined there (along with Galileo's own finger, like the relic of a saint who was no saint). With his first astronomical telescope, Galileo used his eye to detect craters on the moon (and measure their height), to see Jupiter's moons orbiting that massive object (as the planets orbit the sun), to observe the phases of Venus (a prediction of the Copernican view of the solar system), and to observe that the band of light across the summer sky, the Milky Way, was not made of Hera's milk as legend held, but of a vast number of stars, too numerous to be resolved individually with the naked eye. This was very good work for a small telescope. Galileo immediately applied for a research grant from the Medici. This explains why modern scientists find Galileo a kindred soul.

One great advance from Galileo's time to Hubble's time was the steady march toward larger telescopes. The drum major in this parade was George Ellery Hale, who orchestrated building the world's largest telescope four times: the 40-inch at the University of Chicago's Yerkes Observatory in 1897, the 60-inch reflector at Mount Wilson in 1908, the 100-inch telescope at Mount Wilson in 1917, and the 200-inch reflector at Palomar Mountain, now called the Hale Telescope, which started operation in 1948. Big telescopes collect more light and, other things being equal, enable you to measure fainter and more distant objects.
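"Collect more light" is a statement about collecting area, which grows as the square of the aperture. A quick check of what each of Hale's telescopes gained over the first:

```python
# Light-gathering power scales with collecting area, i.e. with the
# square of the aperture diameter.
apertures_in = {"Yerkes 40-inch": 40, "Mount Wilson 60-inch": 60,
                "Mount Wilson 100-inch": 100, "Palomar 200-inch": 200}

base = apertures_in["Yerkes 40-inch"]
for name, d in apertures_in.items():
    print(f"{name}: {(d / base) ** 2:.0f}x the light of the 40-inch")
# The 200-inch gathers (200/40)^2 = 25 times the light of the 40-inch,
# which is why each new giant reached fainter, more distant objects.
```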

The other great advance was the invention of better detectors to measure the light that giant telescopes gather at such great expense of money and effort. Galileo's eye was developed by natural selection over the last few hundred million years, and was a marvelous light detector (until he went blind), but eyes are limited in two fundamental ways. First, there's no permanent record—you can have eyewitness accounts, and drawings, but there's no way to store the actual data. Second, you can't accumulate the light in a time exposure to record fainter objects than you can see in a single good look. The apogee of the eyeball approach to astronomy was the "Leviathan of Parsonstown" telescope completed by William Parsons, third Earl of Rosse, on his commodious front lawn at Birr Castle in Ireland in 1845. The telescope, with a 6-foot, 3-ton metal mirror, and an ingenious pointing mechanism of chains and cables between massive masonry walls, had elaborate wooden scaffolding to lift the observer to the business end of this monster so he could look in by eye. There's also a nice drawing board so the observer could sketch what he saw, if there came a fine night in County Offaly. There must have been a few clear nights, since Parsons's sketch of the Whirlpool galaxy, M51, provided the first evidence for the shape of "spiral nebulae."

Every subsequent large telescope has been built with photography in mind. Starting in 1852 with a daguerreotype of the Moon made with the Great Refractor at the Harvard College Observatory and on into the 1970s, astronomical evidence was recorded on photographic plates of the chemical kind: extra-flat glass coated with a gelatin emulsion that suspends silver salts. Plates could be exposed for long periods of time, and when later developed, the silver metal retained a record of the stars and galaxies whose light had fallen on them. The advantages were tremendous—long time exposures, like the heroic early galaxy spectra taken by Slipher over many hours, accumulated light for much longer than a human eye, which sums up light for less than a second. And the record was comprehensive and permanent, so Hubble could go back month after month to photograph M31 and then compare the plates, searching over the whole image for stars that had changed their brightness—the cepheid variables that set the distances to nearby galaxies.

The great recent technical change of modern astronomy has been to shift away from these messy, but simple and cheap, analog chemical imaging devices where light plus darkroom voodoo makes dark dots on a glass plate. Now we have complicated and expensive digital imaging. Light falls on small wafers of silicon carefully held deep inside elaborate cryogenic bottles where ancient photons from distant stars liberate electrons that can be measured with a delicate amplifier and recorded digitally in a computer.

Why is this better? It is better because photographic emulsions detect only about 1 percent of the light photons that fall on a plate. Light from a distant supernova travels across 7 billion light-years of intergalactic space, traverses Earth's atmosphere, bounces off the primary mirror of a big telescope and into the camera. In Hubble's time, 99 percent of that light was lost right there, being absorbed in the photographic emulsion without making a grain to be developed. What a waste! Silicon CCD (charge-coupled device) detectors, sophisticated siblings of those in digital cameras, detect almost 100 percent of the light. So old telescopes with modern detectors are nearly 100 times more efficient than when they were built.
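The payoff of higher detection efficiency is a shorter exposure (or a fainter target) for the same recorded signal. A minimal sketch, with the photon rate invented and the efficiencies taken from the text, using 90 percent to stand in for "almost 100 percent":

```python
# To collect the same number of detected photons, exposure time scales
# inversely with the detector's quantum efficiency.
QE_PLATE = 0.01  # photographic emulsion: about 1 percent (from the text)
QE_CCD = 0.90    # modern CCD: nearly all the light (from the text)

def exposure_seconds(target_detections, photon_rate, qe):
    return target_detections / (photon_rate * qe)

rate = 10.0  # photons per second reaching the detector (made up)
t_plate = exposure_seconds(1000, rate, QE_PLATE)
t_ccd = exposure_seconds(1000, rate, QE_CCD)
print(f"plate: {t_plate:.0f} s, CCD: {t_ccd:.0f} s, "
      f"ratio ~{t_plate / t_ccd:.0f}x")  # nearly 100 times faster
```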

What Time Is It?
