Our small brains decode the messages encrypted in ancient light to build an orderly picture of the universe that matches the observations and obeys the local laws of physics. A correct scientific idea had better agree with the physical and astronomical facts as we know them. But because our current knowledge is incomplete, it's not smart to impose too strict a censorship on ideas. Common sense isn't always the best guide because the real universe is more bizarre than anyone dares to imagine. On the other hand, ideas are not useful just because they are wild. They must match the facts. The cosmological constant has always been a wild idea. As invented by Einstein in 1917, it was used to account for a static universe. In the 1930s, this wild idea was discarded by most astronomers because it was not needed to match the observed fact of an expanding universe. But now we have a broader concept of what Λ might be: we think of it as a dark vacuum energy with negative pressure. After 70 years of excluding Λ, new facts not only permit, but require something like the cosmological constant.
What are those facts? Since the 1930s, cosmic expansion has been a fact, though obtaining precise and accurate measurements of the present rate of cosmic expansion has provided astronomers with decades of difficult and contentious work. Since 1965, the cosmic microwave background radiation's remnant glow has been a firm fact that any physical picture for the early days of the universe must match. The evidence glimpsed in the 1930s from the furious zooming about of galaxies in clusters shows that galaxies have much more mass than meets the eye: galaxies are trapped in invisible pits of dark matter. The evidence from helium and from deuterium, the heavy isotope of hydrogen, has become another fact that any picture must match. Deuterium measurements place such a low ceiling on the density of baryons that confidence in the picture of the freeze-out from a hot Big Bang has led to the strange view that most of the matter in the universe is not anything we know from the periodic table of the elements and is definitely not the stuff we are made of.
Then we have some astronomical facts. These are often inferences based on a long chain of measurement and reasoning. Because these facts result from such a complex set of observations and ideas, it is hard to know exactly what measuring errors and systematic errors lurk behind the digits. The way to find out is to make measurements by a variety of methods—if they disagree, you can stage a debate so proponents can make arguments about which method has the biggest errors, but when they agree, then you may be getting close to the truth. With those cautions, it is reasonable to say we know that the oldest stars in our own galaxy are about 12 ± 1 billion years old. Also, we observe a value for the present rate of cosmic expansion, the Hubble constant, of about 70 ± 7 kilometers per second per megaparsec. And, when we measure the mass associated with clusters of galaxies, we find the cosmic density, expressed as a fraction of the critical density, gives Ωm = 0.3 ± 0.1. These facts give the background for observing cosmic acceleration, which we observe directly by measuring the apparent brightness of supernovae at large redshifts.
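These numbers can be checked against one another. The Hubble time, 1/H0, sets a rough scale for the age of the universe, so it had better not come out younger than the oldest stars. A minimal sketch of the unit conversion (illustrative only; the exact expansion age depends on the cosmological model):

```python
KM_PER_MPC = 3.086e19        # kilometers in one megaparsec
SEC_PER_GYR = 3.156e16       # seconds in a billion years

def hubble_time_gyr(h0):
    """Hubble time 1/H0 in billions of years, for H0 in km/s/Mpc."""
    return (KM_PER_MPC / h0) / SEC_PER_GYR

# For H0 = 70 km/s/Mpc the Hubble time is about 14 billion years,
# comfortably older than the 12 +/- 1 billion-year oldest stars.
print(f"1/H0 = {hubble_time_gyr(70.0):.1f} billion years")
```

The 10 percent uncertainty in H0 translates directly into a 10 percent uncertainty in this timescale, which is why the concordance of cosmic ages is only as good as the Hubble constant itself.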
Einstein's theory of gravity, applied to the universe as a whole, lets us predict what we will see when we look deeply into the past. For the past 50 years, astronomers have been trying to test these predictions to find out what kind of universe we live in. Telescopes observe the distant past. An important observational test is to measure the change over cosmic time in the rate of cosmic expansion.
The wild card in this confrontation of theory with evidence is that the predictions are simple only if there is no cosmological constant. For the past fifty years, almost every discussion of these cosmological tests starts with a brief disclaimer—that the results apply for Λ = 0. Given the universal distaste for blundering, those less talented than Einstein have stayed well to windward of the cosmological constant. Only the convergence of powerful facts could convince a skeptical community that Λ really is necessary.
The 200-inch Hale Telescope at Palomar Mountain was put into action in the early 1950s. During the decades while it was the world's most powerful telescope, until it was supplanted by the 10-meter (400-inch) Keck telescope in 1993, hundreds of nights were assigned to the problem of determining the deceleration of the universe from observations. In 1961, Allan Sandage spelled out how these tests could be done. Despite heroic efforts, this observational program using the brightness of galaxies to map the history of cosmic expansion stalled—nobody produced credible evidence for changes over time in the expansion rate. But the seeds of success were sown. As astronomers slowly developed a base of knowledge about supernova explosions, we created the tools and techniques for measuring the acceleration of the universe.
In the past five years, as a result of improved instruments like the Keck and the Hubble Space Telescope, diligent accumulation of data on nearby supernovae, and a concerted effort by two international teams to measure supernovae halfway across the universe, we are beginning to paint a new, messy, and wild picture for the cosmos. It's an extravagant universe. To match all the evidence, we need a universe that has ordinary matter, glowing and dark; dark matter of at least three kinds: baryons, neutrinos, and weakly interacting massive particles (WIMPs); and a large dollop of dark energy whose negative pressure drove the inflation era and another, much longer-lived dark energy that drives cosmic acceleration now. You would be unwise to believe such a baroque mixture, which seems to violate common sense, Occam's razor, and the boundaries of good taste, except that there are lines of evidence, from direct measurement of the matter density, from the concordance of cosmic ages, and from the subtle watermark of manufacture observed in the background radiation, all of which converge on a view that the universe now has a preponderance of dark energy. Dark energy, which might be the cosmological constant, or something that changes with time, has moved from being a wild idea, not really fit for serious discussion, to an essential feature of our present view of the universe. How did this happen?
The first step in developing the evidence for the accelerating universe has been to develop a reliable ruler for measuring distances in the universe. Today's best tool is the explosion of a white dwarf star as a type Ia supernova. In the 1940s, Walter Baade, working at the Mount Wilson Observatory in Pasadena, began to compile measurements of supernova brightnesses. He and Fritz Zwicky had worked together to establish that supernovae were a genuine phenomenon, different from ordinary novae, in which the stupendous energy release signaled the death of a star. Zwicky used cobbled-up cameras, and then, after 1936, his new 18-inch Schmidt wide-field telescope at Palomar Mountain to find supernovae. Baade and another Mount Wilson astronomer, Rudolph Minkowski, took spectra of the supernovae at Mount Wilson. Their goal was to find out from empirical observation what supernovae were and then to puzzle out from those clues what their physical origin might be.
Since a supernova is as powerful as a few billion suns, Baade recognized that supernovae might be useful in measuring extragalactic distances. Just as Hubble had used cepheid variables to chart the distances to nearby galaxies, Baade reasoned that the supernovae might be useful yardsticks to intermediate distances, large enough to provide an independent calibration of the Hubble constant.
When Baade looked into this question in 1938, he found that supernovae were not super good as standard candles. The typical scatter in brightness was a sigma of about a factor of three. In 1938, supernovae were a coarse ruler for measuring cosmic distances. But today, they are the very best "standard candle" for cosmology. What changed?
First, Minkowski made a very important contribution, which has been elaborated in the past 60 years. He looked at the spectra of supernovae, which convey information about the chemical composition and the expansion speed of the stellar debris. The spectra of the original handful of supernovae were very strange compared to
Figure 8.1. Spectra of type I and type II supernovae. Type I supernovae do not have lines of hydrogen while type II supernovae have prominent hydrogen lines. Although this does not exhaust the possibilities, with type Ib (and type Ic) being introduced later, most supernova spectra we observe are of these two general types. Courtesy of Tom Matheson, Harvard-Smithsonian Center for Astrophysics.
any ordinary star, but similar from one event to another. As Minkowski put it in 1939, "the spectra of all supernovae are practically identical."1 But in 1940, Minkowski found a supernova that broke the mold: "the spectrum of this supernova is entirely different from that of any nova or supernova previously observed."2 SN 1940B had strong and easily identified lines of hydrogen. It was a supernova of a different kind. Minkowski's observation split supernovae into two classes, type I and type II.
Type I was the original type with the mysterious, but uniform, spectrum. The prototype was SN 1937C, an especially bright object in a nearby galaxy for which Minkowski obtained spectra out to 339 days after maximum light. Even if you didn't understand the origin of the spectrum, if it was the same as SN 1937C, then it was a type I supernova. Type II was the type with hydrogen lines. By sorting out the supernova types, Minkowski put us on the track to understanding that there is more than one way to explode a star, and to using spectra to sharpen up supernova samples. If you toss out the type II supernovae, the ones that remain are more similar to one another, and better standard candles. It's a little like trying to determine the average height of sixth-grade boys. You do a much better job if you make sure there aren't any girls mixed in because the girls are much taller at that age!
Now an empirical method is a good thing, and Minkowski's description of the spectra was clear enough for others to identify supernova spectra in the same way. What was not clear, at first, was the physical origin of supernova explosions. An empirical method that you don't understand is not as good as one that has a foundation.
By the 1960s, using the principles of nuclear physics, Willy Fowler and Fred Hoyle shed some light on the origin of type I and type II supernovae. They traced the history of nuclear burning in stars of different masses. Low-mass stars, up to about 8 times the mass of the sun, end up as white dwarfs, with a core that is made of carbon and oxygen, or in the case of the most massive progenitors, oxygen, neon, and magnesium. The white dwarf does not ignite because the star is held up by quantum mechanical effects. White dwarf stars are potential thermonuclear bombs because they have unburned fuel, but, like a stick of dynamite, they are harmless unless detonated. Hoyle and Fowler identified the type I supernovae as the nuclear explosion of a white dwarf, an event that might be precipitated by added mass from a binary companion. This provided a theoretical underpinning to uniformity—since there is a fixed upper mass limit to white dwarfs of 1.4 solar masses as worked out by Chandrasekhar, a uniform energy output might come from explosions of identical stars at that maximum mass.
The life history of more massive stars is different because they fuse carbon and oxygen without detonating. They burn oxygen into sulfur and silicon and, eventually, fuse all the way to iron. Then, at the nadir of nuclear binding, they collapse. Hoyle and Fowler weren't too clear on the details, but they surmised that these collapsing events inside a star with hydrogen on the outside, with an immense release of gravitational energy, could produce the type II supernovae that Minkowski had identified.
In 1970, as a skinny, red-headed kid of 21, I arrived as a graduate student at Caltech. I was assigned to Bev Oke, one of the astronomy faculty, for a research job to supplement my National Science Foundation fellowship. Oke, a friendly, modest, red-headed Canadian, had applied advances in light detectors to the job of measuring spectra at the 200-inch Hale telescope. When I showed up in his office on the second floor of Robinson Lab, he asked blandly, "Well, what do you want to do?"
I didn't really know, but I knew enough to avoid three areas of astronomy that I thought were really dull. One was the measurement of parallax, which demands more patience and precision than I possess. Another was studying dust, which is messy stuff, whose properties are exceptionally hard to measure and interpret. And the third was spectral classification, which has an empirical quality of making fine distinctions that resembles philately. Curiously, the study of supernovae has drawn me into each of these areas and each of them has been essential to building up the picture of the accelerating universe.
As a senior at Harvard, I had enjoyed working on ultraviolet emission from the sun with Bob Noyes. Bob had been a graduate student at Caltech a decade earlier and he encouraged me to go to Pasadena. He wrote a letter of recommendation. I don't know if he was honestly ignorant of my slightly erratic academic record, or whether he explored the outer bounds of puffery, but the letter worked and, much to my surprise, I was admitted to the astronomy program at Caltech. Hal Zirin, a Caltech professor who studied the sun (known as Captain Corona to the students), much later told me that he had lobbied hard for a graduate student who might work on the sun. Me. Well, in the end, I didn't, but it worked out all right. Bev Oke was giving me a chance to use data from the world's largest telescope. I just wanted to avoid parallax, dust, and spectral classification.
Remembering the fun I'd had working on the Crab Nebula, a supernova remnant, I said, "I'd be interested in working on supernovae." The legend was that Caltech professors had so much telescope time they would take data and then put it away like fine wine until the moment for its analysis ripened. In a quintessential Caltech moment, Oke opened a drawer in his desk, and pulled out a fistful of Kodak yellow cardboard jackets containing spectra of supernovae. "Here," he said, "see what you can do with these." I had no idea what to do with them, but I wasn't going to admit that on the first day of graduate school.
In that bundle there were spectra of type I supernovae and spectra of type II supernovae recorded on photographic plates. Oke had also invented a new instrument, the multichannel spectrophotometer (the "multichannel"), which made simultaneous quantitative measurements of the light from an object at 32 different wavelengths. This was a big step up from earlier instruments that could make a similar measurement at just one wavelength, though a long step from today's instruments that make 1000 such measurements of 100 objects in a single observation. With the world's best telescope and the world's best instrument, we would have to be dull indeed not to do something useful.
The first bunch of data Oke handed me included a set of observations of SN 1970G, a type II supernova in the nearby galaxy M101. He also had several observations of type II supernovae with the multichannel. The 200-inch users, including Chip Arp, Maarten Schmidt, Leonard Searle, Wal Sargent, and Jim Gunn, following the good examples of Baade and Minkowski, cooperated to get good coverage of the changing spectra of supernovae during the weeks when an object was bright. In fact, their motivation was a little stronger than altruism—there was a sense of noblesse oblige. Supernovae had been understood first in Pasadena, studied best in Pasadena, and it was natural for people in Pasadena with the world's best instrument to contribute to this topic, which seemed important in its own right, if not yet useful to cosmology. And now I was holding a fistful of supernova spectra. I had an obligation to understand them, even if I had no idea how to proceed.
I took all this grist down to my office in Robinson Lab. As a beginning student, I was placed in the second sub-basement of the building, where all the offices had numbers that began with 00, amusing the James Bond fans, though the only thing you were likely to kill was yourself. With work. To get to 0013, I had to go past another of the sub-basement suites, where there was a very strange and forbidding old man, wearing an eye patch as he worked away on a plate-measuring machine. He looked like a pirate. It was Fritz Zwicky.
Zwicky, the astrophysical swashbuckler who named the supernovae and the dark matter, charted the galaxy clusters, and launched the first interplanetary ball bearing. Zwicky, who claimed his "Morphological Method" was the greatest contribution to human thought since Pascal. Zwicky, at age 72, a terrifying spectacle for a fledgling graduate student who maybe ought to be studying the sun instead of Zwicky's own subject, supernovae. Fritz was wearing an eye patch to help look through the single eyepiece of a measuring machine where he was grinding away compiling his great catalog of galaxies and clusters of galaxies. He was tall and gaunt. His speech was as intimidating as his looks.
At that time, my wife was a substitute teacher. She would get calls before 6 a.m., telling her to become Miss Jones, third grade teacher at the Burbank School by 7:15. Awakened by these academic alarms, I would get up and walk over to Robinson Lab. Arriving before 7 a.m. is unusual in any academic setting, but at an astronomy department, the night owls usually showed up around noon and worked until midnight (I guess—how would I know?). But no matter how early I arrived, Zwicky was already there.
He began to talk to me briefly each day. He usually launched into bitter vituperation in a spicy Swiss-German accent, aimed at the current staff, including my advisor, Bev Oke.
"Those spherical bastards threw me off the 200 goddam-inch telescope!" he fumed. "Made up a special rule. No observing after the age of 70! Grrrr, them I could crush!"
A spherical bastard was "a bastard any way you looked at it." Or sometimes the injustice was more widespread.
"In 1933, I told those no-good spherical bastards that super-novae make the neutron stars. Now they find these damn pulsars and nobody gives me the credit."
Or "Quasars? Quasars? Maarten Schmidt and his goddamn quasars. They are objects Hades, by the Morphological Method predicted!!"
These set-piece speeches blasting the Caltech faculty were shocking, subversive, and wickedly amusing at first. There was a large but finite number of them. They became familiar, then tedious, then a little embarrassing. Zwicky used these packaged diatribes as "questions" after a colloquium talk on any topic. So after a talk on the magnetic fields of white dwarfs, or galaxy dynamics revealing the dark matter, or the chemical composition of extragalactic gas clouds, we would once again learn of the injustice of quasar nomenclature, eliciting inward (and sometimes outward) groans in the audience.
Sometimes Zwicky would give me advice:
"Always get here before the Americans" (advice I could not possibly adhere to!).
Sometimes he would pose conundrums:
"Do you know how to get the 200-inch to give diffraction-limited images?"
I had to admit that I did not. As I understood it, the telescope's imaging was limited by the blurring effects of temperature inhomogeneities in the Earth's atmosphere. The atmospheric limit was about 50 times worse than the theoretical limit given by the mirror's size and the wavelength of light. It seemed like a sensible answer to me, suitable for the Ph.D. oral exam I was preparing to take.
"Hah!" Zwicky's face contorted with scorn. "Hah! You're just like the rest of those low flying shit-eaters! No, No, No! You fly a jet over the dome at the speed of sound! Then you use the shock wave like a knife edge. Those bastards never let me do it!"
I nodded, having only the vaguest idea what this enraged man was shouting about, but hoping to get to my office for a few hours of quiet work. I had to catch up with the night owls.
One morning Fritz seemed to be in orbit.
"Never mind the Bolsheviks and their so-called Sputnik. I, Fritz Zwicky, launched the first interplanetary probe!"
I was too amazed to inquire further. But one day, years later, I had an hour to kill in Alamogordo, New Mexico. The choices are limited. I recommend the New Mexico Museum of Space History. Upstairs beyond the gift shop selling inedible "astronaut ice cream," on the wall of the International Space Hall of Fame, there was a bronze plaque of Fritz Zwicky. Just like the one in Cooperstown of Ted Williams. Fritz had been telling the truth! An Aerobee rocket launched at White Sands, NM on the night of 15 October 1957 carried a shaped explosive charge in its nose. After ascending 53 miles in 91 seconds, the explosive was detonated, blasting out luminous pellets at more than 9 miles per second, fast enough not just to orbit Earth, but to travel indefinitely out into the solar system. Fritz was not making this up.3
Even though Zwicky had written the book on supernova classification, I never told him I was working on supernovae—it seemed too dangerous. And he was too wrapped up in his own sense of injustice to bother asking. I don't think he ever asked my name.
But with Fritz in the next room, I felt some weight of history leaning on me. I learned to sort the SN I from the SN II. Like everybody else in the previous 40 years, I couldn't identify most of the absorption lines in the type I spectra, so I put those aside. The type II supernova spectra were more promising because even a beginner could understand what was going on. It was hydrogen, after all, that made the type II spectrum. I used the hydrogen lines to try to understand how the mass was distributed in the atmosphere of the exploding star. This might give a clue to the state of the star when it blew up. That seemed worth doing.
I was making some progress in understanding the atmospheres of type II supernovae when Bev Oke was invited to a winter workshop on supernovae at the Kitt Peak National Observatory in Tucson in February 1972. He suggested to the organizers that they should invite me, too. In his own quiet way, Bev was a very good advisor. His sharp sense of smell for a good scientific opportunity always put a student in a position to succeed, but Bev would rarely tell you what to do next. Sink? Swim? That part was up to you. But he'd take you to the beach.
I was delighted to go to Tucson where many of the hotshots in the field would be present. It was a great chance for a rookie to meet the All-Stars. Jerry Ostriker, the brilliant Princeton theorist, was there, full of new ideas about neutron stars, and Stirling Colgate, the wild-man physicist from Los Alamos who knew how to blow things up, and Craig Wheeler, already one of the best at connecting supernovae with the stars that make them. Our host, Leo Goldberg, had been the Harvard College Observatory Director when I was there, and was now the Director of Kitt Peak, no longer deploring paucity, but allocating plenty, and, completing his liberation from the Harvard faculty, no longer wearing a tie! Rudolph Minkowski was there, a living legend from Mount Wilson days, a pioneer of supernova studies, looking a little like a gray walrus with a brushy moustache, sagely puffing on a pipe.
Goldberg presided over a conference dinner in downtown Tucson. For entertainment, some guy from Livermore did magic tricks with a piece of rope, cutting it, but revealing it to be whole. Scientists, like everybody else only more so, don't believe in magic; we believe in evidence and reason, so the conflict between the evidence of our eyes and our faith in reason made us admire his deception twice as much. Or maybe it was the wine.
As the party broke up, I joined Craig Wheeler and Jerry Ostriker to walk the mile or so back to the University of Arizona campus. As we were approaching the campus, near the geometrically significant address of Euclid and University, a group of students cruising by in a 1965 Mustang found three astronomers oddly provocative. Perhaps it was Jerry's enthusiastic reply to their jeers. I think he said, "Free Angela Davis." Anyway, they stopped the car, carefully put down their six-packs of Lucky Lager, and rambled over to confront us. Craig had the collar of his Oxford-cloth shirt ripped, and Jerry had his wire-rimmed glasses broken again ("my optometrist will be cross with me") while I was wrestling with a pretty strong guy. He probably did not know I had been the 137-pound runner-up in the Harvard freshman intramurals, but I didn't feel compelled to inform him that my body was a deadly weapon. I was ahead on points, and executed a neat take-down, but something felt funny when my shoulder hit the sidewalk. I learned in an instant that cement is stiffer than a wrestling mat. Then I hit him in the fist with my lip, and they all fled, fearing dry cleaning bills.
The next morning, with my arm in a sling, I talked about the atmospheres of type II supernovae on the sunwashed patio at the Kitt Peak offices. There's something about a separated shoulder that takes the zip out of a presentation. Maybe it was the painkiller, or perhaps the inability to gesture vigorously. I started out by reviewing the data on hydrogen lines in type II supernovae. I showed how the data we had for SN 1970G indicated that as time passed, the velocity decreased. This did not mean the gas was slowing down—it meant we were seeing deeper into the star, where the velocities were lower. It was a way to reconstruct the mass distribution on the outside of the exploding star. Minkowski, 77 years old, was sitting in the front row, puffing on his pipe. He quickly grew impatient with this introductory material, put down his pipe, and growled in heavily German-accented English, "Ve know all dis." It was not a good start.
A more useful suggestion came back in Pasadena from Leonard Searle, one of the staff astronomers at the Carnegie Observatories (and later its director). Genial Leonard had cooperated in getting data for SN 1970G, and he noticed that the multichannel data from the supernova photosphere (the surface where light escapes) defined a beautiful continuum—just like the blackbody spectrum from any opaque object. Wouldn't it be possible, Leonard asked, to use the information from the hydrogen lines, which gave the velocity, together with the multichannel scans, which could give a temperature, to work out the size of the supernova photosphere at several times and compute the distance to M101? Leonard's suggestion was to use supernova data alone to find the distance to the galaxy in which the supernova had exploded. I worked this out, using the data Oke and others had gathered at Palomar. Though Leonard Searle was right in principle, the problem was a little more complicated than it seemed at first. Another Caltech graduate student, John Kwan (now an astronomy professor at the University of Massachusetts), contributed ideas and worked out theoretical issues where I got stuck. We computed distances to M101 and NGC 1058 that were completely independent of all the intermediate steps in the extragalactic distance scale. Since the redshifts for those galaxies were well known, and part of the overall cosmic expansion, we felt justified in computing the ratio of velocity to distance, the Hubble constant. For this work, we found a value of the Hubble constant, H0, of 60 ± 15 kilometers per second per megaparsec.4
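The arithmetic behind Searle's idea is simple enough to sketch. The numbers below are hypothetical stand-ins, not the actual SN 1970G measurements: the hydrogen lines give the photosphere's expansion velocity, the time since explosion converts that velocity to a radius, the continuum temperature gives a blackbody luminosity, and the observed flux then fixes the distance. The real calculation required corrections this sketch omits (a scattering supernova atmosphere is not a perfect blackbody):

```python
import math

SIGMA = 5.670e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
M_PER_MPC = 3.086e22    # meters in one megaparsec

def epm_distance_mpc(v_phot, t_since_explosion, temp, flux):
    """Expanding photosphere method, in cartoon form.

    v_phot in m/s, t_since_explosion in s, temp in K, flux in W/m^2.
    """
    radius = v_phot * t_since_explosion                 # R = v * t
    luminosity = 4 * math.pi * radius ** 2 * SIGMA * temp ** 4
    distance_m = math.sqrt(luminosity / (4 * math.pi * flux))
    return distance_m / M_PER_MPC

# Hypothetical inputs: 5,000 km/s from the hydrogen lines, 30 days
# after explosion, a 6000 K continuum, and an invented observed flux.
d = epm_distance_mpc(5.0e6, 30 * 86400, 6000.0, 2.65e-13)
h0 = 500.0 / d    # with an (invented) recession velocity of 500 km/s
print(f"distance = {d:.1f} Mpc, H0 = {h0:.0f} km/s/Mpc")
```

The point of the method is visible in the code: nothing in it refers to cepheids or any other rung of the distance ladder, only to quantities measured from the supernova itself.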
At the same time, Allan Sandage, up on Santa Barbara Street, and Sandage's Swiss colleague from Basel, Gustav Tammann, had been working on distances to the very same galaxies, M101 and NGC 1058, using empirical methods that calibrated the properties of galaxies. Those two galaxies were too distant for 1970s technology to detect the individual cepheids. Sandage and Tammann were embroiled in a vigorous debate about the Hubble constant with Gerard de Vaucouleurs of the University of Texas. In the 1970s, de Vaucouleurs maintained that the evidence favored a high value of
H0 around 80 or 90, while Sandage and Tammann stoutly maintained that 55 was the right answer. Each group claimed a precision that ruled out the answer given by the other. John Kwan and I had stepped into an arena already soaked with bad blood by heavyweight gladiators. At first, they were glad to see us. Tammann sent me a nice note, congratulating us on getting the right answer.
While the universe doesn't care what we think, we do. And Allan Sandage thought that our distances based on the expanding photospheres of type II supernovae were close enough to his to be a pretty good result and evidence against the misguided Parisian in a ten-gallon hat. So he regarded us as possible allies in resisting the falsehoods being issued from Austin. My own view was less dogmatic—I had no stake in the outcome, we were just trying to measure a number and that's the one we got. In the long run, I had confidence we'd find out what was going on. Then we could move on to error and confusion on a new set of questions.
Sandage's view seemed much more emotional—perhaps as Hubble's only student, and the world's leading practitioner of practical cosmology, he felt responsible for the Hubble constant and the Hubble time coming out right and making sense. Much later, in 1994, Ron Eastman, Brian Schmidt, and I used a larger set of data and the expanding photosphere method (EPM) to find H0 = 73 ± 8 kilometers per second per megaparsec, which was 2 sigma away from 55. Not so close. Sandage took a personal view of the Hubble constant—if you disagreed with him, you must be wrong, and possibly malicious. And if you changed from agreement to disagreement, you must be treacherous or stupid or both. At that time, I was the department chair at Harvard, and we invited Sandage to come to Cambridge to give a talk about his work on the Hubble constant. Sandage wrote back, declining. He said his mother had taught him not to talk to the village idiot.
The expanding photosphere method was a parallax—number 1 on my list of things not to do, and it also led to a confrontation with interstellar dust, number 2 on my list of things to avoid. Dust between the stars has been a bugaboo for astronomy for a century. Correct understanding of the size and shape of the Milky Way was hindered for decades until people worked out the effects of obscuring matter. One important clue to the presence of dust is that it absorbs blue light more effectively than red light. The signature of interstellar dust is "reddening." This resembles the effect you see at sunset, where the setting sun looks dimmer and much redder than the noonday sun, because the light traverses a longer path through the atmosphere and the atmosphere scatters and absorbs the sun's blue light, making the sun look red. When an astronomer sees a familiar type of object, but its color is unusually red, the first thought is that dust is responsible. Could dust be a problem for the EPM distances? (See figure 4.1 in the color insert, which shows reddening in the direction of the center of the Milky Way.)
Dust doesn't make much difference to the distances derived from the expanding photospheres. Dust in our galaxy or in the galaxy where the supernova (formerly) resided absorbs light. This makes the supernova appear dimmer, so, other things being equal, you would mistakenly assign it a larger distance than the real one. However, since the dust removes more blue light than red light, it also makes the supernova appear redder. If the supernova's light is reddened, you would mistakenly assign the supernova a cooler temperature than it actually has, since cooler objects emit redder light. In the arithmetic of the EPM, this red color makes you think the supernova is closer than it really is. The two effects very nearly balance, so that the error you make because the supernova is dim is corrected by the error you make because of the change in color. By good fortune, dust doesn't create a big systematic error for type II supernova distances found through the expanding photosphere method. But the lesson was to think carefully about dust, or you might make a systematic error so large (and avoidable) someone else might call it a mistake.
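The near-cancellation can be seen in a toy calculation. The sketch below is a monochromatic blackbody model, not an actual EPM pipeline; the temperature, radius, distance, reddening, and band wavelengths are all illustrative assumptions (the extinction ratios A_B ≈ 4.1 E(B−V) and A_V ≈ 3.1 E(B−V) are the standard Milky Way values). It dims and reddens a model photosphere, then re-derives the distance the way EPM does, from the observed flux and the color temperature:

```python
import numpy as np

H, C, K = 6.626e-34, 2.998e8, 1.381e-23  # Planck, light speed, Boltzmann (SI)

def planck(wav, T):
    """Blackbody spectral radiance at wavelength wav (m), temperature T (K)."""
    return 2*H*C**2 / wav**5 / (np.exp(H*C / (wav*K*T)) - 1)

# Hypothetical photosphere: numbers chosen only for illustration.
T_true, R, d_true = 8000.0, 1e13, 3.1e23      # K, m, m (~10 Mpc)
B_WAV, V_WAV = 440e-9, 550e-9                 # B and V effective wavelengths

theta2 = (R / d_true)**2                      # true angular size squared
fB = theta2 * np.pi * planck(B_WAV, T_true)   # fluxes at Earth without dust
fV = theta2 * np.pi * planck(V_WAV, T_true)

# Dust with E(B-V) = 0.3 mag and standard ratios A_B ~ 4.1E, A_V ~ 3.1E:
EBV = 0.3
fB_obs = fB * 10**(-0.4 * 4.1 * EBV)          # dimmed more in the blue...
fV_obs = fV * 10**(-0.4 * 3.1 * EBV)          # ...than in the red: "reddening"

# Step 1: infer a (too cool) temperature from the reddened B/V flux ratio.
Ts = np.linspace(3000, 12000, 20000)
T_inf = Ts[np.argmin(np.abs(planck(B_WAV, Ts) / planck(V_WAV, Ts)
                            - fB_obs / fV_obs))]

# Step 2: infer the angular size from the dimmed V flux and that temperature,
# then the distance, EPM-style: d = R / theta.
d_inf = R / np.sqrt(fV_obs / (np.pi * planck(V_WAV, T_inf)))

# For comparison: accounting for dimming alone (no color correction)
# would overestimate the distance badly.
d_dim = R / np.sqrt(fV_obs / (np.pi * planck(V_WAV, T_true)))

print(T_inf, d_inf / d_true, d_dim / d_true)
```

In this toy model the dimming alone would inflate the distance by roughly 50 percent, but the spuriously cool inferred temperature pulls the estimate back down, leaving a much smaller residual error—the balancing act described above.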
In May of 1972, Charlie Kowal was using the 18-inch Schmidt at Palomar to search for supernovae. Zwicky's old telescope was fine for this work, and Charlie made a regular patrol of nearby galaxies where the wide field of the little Schmidt made it the best tool for the job. Tipping the telescope as far to the south as was prudent, Charlie exposed a film at the Centaurus group of galaxies, centered on NGC 5236, a big fat spiral galaxy with evidence of star formation and a history of producing supernovae. At the same time, he got an image of the insignificant little neighboring galaxy NGC 5253 for free.
When he developed that film, he placed it on top of an older film on a light box, aligning the two so that every dot that was in both epochs appeared double. Scanning the film by eye, one dot jumped out at him from the thousands on the film. It was a plump, solo dot—present in one film, but not the other. Separating the films, he saw it was tonight's film with the new object. Charlie had discovered another supernova. It was his job to discover supernovae, but that didn't make it less fun. And this was a really good one.
This was supernova 1972E in the galaxy NGC 5253. It was the brightest supernova in 35 years, since SN 1937C, the one studied so well by Minkowski at Mount Wilson. Discovered at Palomar, SN 1972E was studied thoroughly at Palomar, using the multichannel scanner on the 200-inch, where it took only a few minutes to get a fabulous spectrum. What's more, there was a new telescope at Palomar, a 60-inch telescope that had been more-or-less finished, but not yet scheduled for observations. Since the multichannel was not going to be mounted on the 200-inch every night in May, Bev Oke thought it would be a good idea for somebody to go up to Palomar for a couple of weeks to make observations of SN 1972E on the new 60-inch. Even though this was a single-channel scanner, 32 times slower, on the 60-inch, with 10 times less collecting area, it would be good to get data every night. Was there a graduate student interested in supernovae and looking for a thesis project who wanted to do this? I raised my hand.
Bev Oke drove me up the mountain in his gray MGB hatchback. He was a careful driver, but he enjoyed the curves up Palomar Mountain more than I did. When we got to the observatory, one of the technicians saw two redheaded guys whose age differed by about 20 years getting out of Oke's car.
"Is that your son?" the electroniker asked Oke.
"Nope," Oke explained.
SN 1972E was a type Ia supernova, very similar to SN 1937C studied so carefully by Minkowski 35 years earlier. But now we had a beautiful set of modern digital data that covered the whole range from the ultraviolet to the near infrared. The 60-inch observations I was making were 300 times slower than the observations Oke obtained at the 200-inch Big Eye. One minute of observation at the 200-inch collected as much information as 5 hours at the 60-inch. But in May 1972, SN 1972E was bright enough that I could get good data in a few hours. The 200-inch was overkill.
From Palomar, this supernova in the constellation Centaurus was scraping the southern horizon. At southern latitudes, in Chile, Pat Osmer was also observing SN 1972E. Pat had finished his Ph.D. at Caltech a few years earlier and was on the staff at Cerro Tololo Inter-American Observatory (CTIO). Pat was making observations very similar to mine with the Cerro Tololo 60-inch telescope at that excellent site. Even though SN 1972E was far in the south for us, our Palomar data set was more complete than Pat's for May when the supernova was bright, and, as the supernova faded in June and July, the speed advantage of the 200-inch made a huge difference. We compiled the best-ever record of the complex and mysterious spectrum of a type I supernova. That summer, Pat dropped by to show us his supernova spectra. They were good. Then we unfurled our massive set of observations. They were very good. Pat grew quiet and a little glum. This is the way they liked it in the old days at Caltech—the Big Eye blew the competition away.
When Subrahmanyan Chandrasekhar visited Caltech, he courteously took an hour to go to lunch at the Athenaeum, Caltech's Faculty Club, with the graduate students. A cerebral, slender man, Chandrasekhar was a formidable figure in theoretical astrophysics, whose career started at Cambridge with debates with Eddington and who became a legend at the University of Chicago.
"Why," he politely asked the assembled group of six graduate st udents as they enjoyed their free lunch, ''have you chosen to st udy at Caltech?" When nobody responded for about 30 milliseconds, I spoke up.
"Oh," I said, "that's easy. Caltech has the 200-inch."
He looked at me skeptically. "Really? You chose to come here because of a machine? How odd. I should have thought the faculty would matter."
In 1973, the International Astronomical Union had its once-every-three-years meeting in Sydney, Australia. Bev Oke was invited to give a review talk about supernovae to the whole General Assembly, since everybody wanted to see what we'd been doing with SN 1972E. He didn't want to go, but he suggested that I would be a good substitute. This was another good chance to meet the pros, only this time from all over the world, and from all fields. By good fortune and being at the right place at the right time (the right distance from NGC 5253 for the light, which had been traveling for 12 million years, to arrive at Earth in May 1972, just as I was looking for a thesis project), I was standing on the stage in front of 1500 astronomers, pretending to be an authority on supernovae.
But the most interesting aspect of studying SN 1972E came later. More than a year after the explosion, the supernova had faded so much that only the 200-inch could take spectra of it. Jim Gunn, then a young professor at Caltech, and Bev Oke integrated for hours to get the last observations. As a supernova expands, it eventually turns transparent, and you can see in toward the center of the explosion. Type I supernovae rise to a maximum brightness and then fade, quickly in the first month, then more slowly. After about two months, the brightness of the supernova tracks the radioactive decay rate of ⁵⁶Co, the isotope of cobalt that has 27 protons and 29 neutrons.
The theoretical idea is that a SN I is the thermonuclear detonation of a white dwarf. This implies that elements near iron in the periodic table (like cobalt) are produced in the explosion. An exploding white dwarf should blast out about 0.6 solar masses of elements near iron. When you consider that present-day gas in our galaxy has only about one iron atom for every 30,000 atoms of hydrogen, the sudden addition of some 10⁵⁵ iron nuclei from a type Ia supernova is a very important source of iron for the galaxy.
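Converting the ejected iron-peak mass into a count of nuclei is quick arithmetic; a sketch using standard constants (0.6 solar masses, mass number 56):

```python
M_SUN = 1.989e30      # solar mass, kg
M_U   = 1.66054e-27   # atomic mass unit, kg

# ~0.6 solar masses of iron-peak material, mostly nuclei with mass number 56
n_nuclei = 0.6 * M_SUN / (56 * M_U)
print(f"{n_nuclei:.1e}")   # about 1.3e55 nuclei
```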
This is a good story, but we'd like to check the details of the prediction against the observations to see if it is true (or, more precisely, to see if it is false). Nuclear physics theory predicts that, in the conditions that prevail deep inside an exploding white dwarf, the most likely iron-peak product is ⁵⁶Ni, the isotope of nickel that has 28 protons and 28 neutrons. This is radioactive, with a half-life of 6 days. As nickel decays, it emits energy that helps make the supernova glow. When we catch a supernova rising toward maximum light, which takes about 20 days, or fading in the month after the peak, most of the energy comes from this radioactive decay. The decay product of ⁵⁶Ni is ⁵⁶Co, which has a 77-day half-life. So the long, slow decline in brightness that characterizes SN I is, in theory, due to the subsequent decay of cobalt into stable iron. Is this right?
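The chain ⁵⁶Ni → ⁵⁶Co → ⁵⁶Fe, and the claim that the late decline tracks the cobalt half-life, can be sketched with the standard two-step (Bateman) decay solution. This is a schematic model, not a supernova light-curve code: it assumes every decay's energy is trapped and ignores the different energy released per nickel versus cobalt decay.

```python
import numpy as np

# Half-lives as in the text: 56Ni (~6 days) -> 56Co (~77 days) -> stable 56Fe.
LAM_NI = np.log(2) / 6.1    # decay constants in 1/day
LAM_CO = np.log(2) / 77.3   # (commonly quoted half-life values)

def n_co(t):
    """Bateman solution: 56Co present at day t, starting from 1 unit of 56Ni."""
    return LAM_NI / (LAM_CO - LAM_NI) * (np.exp(-LAM_NI*t) - np.exp(-LAM_CO*t))

def decay_power(t):
    """Decays per day (arbitrary units), assuming all the energy is trapped."""
    return LAM_NI * np.exp(-LAM_NI*t) + LAM_CO * n_co(t)

# Late-time decline rate in magnitudes per day, between day 150 and day 250,
# when essentially all the 56Ni is gone and only 56Co decay powers the glow:
mag = lambda t: -2.5 * np.log10(decay_power(t))
slope = (mag(250) - mag(150)) / 100.0

# Pure 56Co decay predicts 2.5*log10(2)/77.3, about 0.01 mag/day.
print(slope)
```

The computed slope matches the pure-cobalt value to several decimal places, which is the sense in which the late light curve "tracks" the 77-day half-life.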
Observations at early times showed that iron was accumulating at the expense of cobalt, and observations at late times also help test the idea. First of all, the spectra that Gunn and Oke obtained with the multichannel showed that the light curve continued to decline as predicted for at least 700 days. This made it plausible that the energy released from cobalt turning into iron was responsible. Even more telling was the spectrum, which showed four broad peaks. What were they? If we were seeing iron from the core of the explosion, heated by radioactive decay, then it seemed plausible that the spectrum at late times ought to be made up of emission lines of iron.
I had just received an HP-45 calculator for my twenty-fourth birthday, so I merrily sat down to compute what the spectrum of iron would look like under these conditions. I did a pretty crude job, but sometimes good enough is good enough. After one afternoon, it was clear that when you added up the emission from all the lines of iron atoms that were missing one electron, there was a good match with three of the four bumps in the late-time spectrum of SN 1972E. Bev Oke suggested looking at the contribution from iron that was missing two electrons, but I couldn't find a good compilation of the atomic data, so I wrote up the paper with the results we already had. I should have listened to my advisor. Tim Axelrod, a graduate student at Santa Cruz working with supernova wizard Stan Woosley, did the calculation right, including other forms of iron, and showed that the feature I could not account for was indeed due to iron stripped of two electrons.
I also should have included Jim Gunn on the list of authors for this paper—he had contributed many hours of his precious time on the 200-inch to these heroic observations. When I finally woke up to this gaffe a few years later, I sheepishly said to Jim that we should have made him an author of the late-time spectrum paper. Though he never said anything at the time, or in the intervening years, he hadn't forgotten. Jim smiled a bit and said, "Yes, Robert, you should have." Two lessons learned: listen to your advisor, and give credit where it is due. Observers have a choice of how to use their time and it is reasonable for people to be included in the published results even if the data itself is their only contribution. The most recent paper one of my graduate students wrote on type I supernova light curves had (because he listened to me!) 42 authors, each of whom had contributed some data.
The spectrum of SN 1972E was very similar to SN 1937C, the classic type I prototype that Minkowski had observed. The new data helped build the legend that all type I supernovae are the same. If their spectra were the same, and they all came from white dwarfs of the same mass, perhaps it would be a good idea to revisit their use as standard candles for measuring distances in the universe. Charlie Kowal, the Palomar supernova searcher, had compiled the data in 1968. His result was better than Baade's, but not by much. Supernovae of type I bounced around the inverse-square line with a scatter of around 70 percent, so assuming that SN I were all the same would lead to errors of about 35 percent in the distance of each host galaxy. This was better than Baade had found, and mildly encouraging for measuring cosmic expansion, but not good enough for measuring cosmic deceleration.
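The factor-of-two relation between luminosity scatter and distance error comes straight from the inverse-square law: flux falls as L/d², so an inferred distance scales as the square root of the luminosity, and a fractional luminosity error maps to roughly half that fractional distance error (the linearized version of which gives the 35 percent above). A quick check with illustrative numbers:

```python
import math

def frac_distance_error(frac_lum_error):
    """Fractional distance error implied by a fractional luminosity error
    for a candle assumed standard: flux = L/(4 pi d^2), so d scales as
    sqrt(L), and the distance error is roughly half the luminosity error."""
    return math.sqrt(1.0 + frac_lum_error) - 1.0

print(frac_distance_error(0.70))    # about 0.30: 70% in luminosity -> ~30% in distance
print(frac_distance_error(-0.50))   # about -0.29
```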
At the same time, Sandage, and independently Oke and Gunn, were trying to perfect the use of giant elliptical galaxies as distance indicators, to push the Hubble diagram out to distances that would reveal cosmic deceleration. In the early 1970s that seemed like a more promising path than using supernovae. The giant elliptical galaxies were brighter than supernovae by a factor of 30, and though they were extended, fuzzy objects, Gunn had devised an exceptionally clever way to deal with those complexities in the data that he and Oke were gathering with the multichannel.
A few years later, this massive effort to measure cosmic deceleration from the Hubble diagram for giant elliptical galaxies began to lose traction. The problem wasn't the measurements, difficult as they were; the problem was the galaxies. Galaxies are made of stars, and stars change over time. If there were high-mass, fast-evolving stars in a galaxy, you might expect the galaxy to be somewhat brighter when it was young. As time goes by, the massive stars would become supernovae, then wink out. On the other hand, galaxies are collections of stars that seem to form in groups and clusters. Though the stars don't collide, the galaxies can interact, and even swallow one another. Galactic cannibalism was probably most important for the big bright elliptical galaxies that people were using as standard candles. If a galaxy had grown over time, then it would have been dimmer in the past. Which was more important, stellar evolution that made galaxies brighter in the past or cannibalism that made them dimmer? Nobody knew, and the uncertainty in the properties of the galaxies was larger than the expected effects due to cosmic deceleration. By the 1980s, it was clear that another path needed to be found to crack this problem. Some people turned to supernovae.