Figure 9.2. The good effect of using the light curve shape. The top panel shows a Hubble diagram of the redshift versus the distance (in astronomers' units). If you assumed all type Ia supernovae were identical and you judged the distance from the apparent brightness, you would get the Hubble diagram in the upper panel. When a supernova is intrinsically dim, this approach mistakenly assigns it an extra-large distance. The lower panel shows the Hubble diagram after correcting for light curve shape and reddening using the MLCS. The improvement is dramatic: the 1-sigma error drops from about 15 percent to 7 percent in the distance. This means each measurement becomes four times as useful. Courtesy of Adam Riess, Harvard-Smithsonian Center for Astrophysics.
To me, this vindicated our long-term strategy of concentrating on learning the properties of nearby supernovae before attacking the cosmological problem. Even though we'd been paddling around in the shallow end of the pool, while the SCP had been diving in the deep end, learning how to find distant supernovae, now we were ready to swim the English Channel. In fact, if you believed the errors from the MLCS, you would get a strong hint about Ω from just one supernova. The difference in apparent brightness at z = 0.5 between an Ω = 1 world and an Ω = 0 world was 25 percent. Our uncertainty, if we had light-curve measurements as good as the ones from the Calan/Tololo sample or the CfA data, was just 15 percent. Of course, we couldn't expect the data on faint and distant objects to be quite as good, and you wouldn't dare to draw a definitive cosmological conclusion from only one object, no matter what your sigma said, but it meant that reasonable data on a handful of objects at redshift of 0.5 could show the fate of the universe. That seemed worth doing.
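The back-of-the-envelope logic here can be made explicit. A minimal sketch, using only the numbers quoted in the text (a 25 percent brightness difference between the two cosmologies at z = 0.5, and a 15 percent measurement error per supernova), shows why one supernova is only a hint while a handful becomes a measurement: the error on the mean shrinks as the square root of the number of objects. The function name is my own illustration, not anything from the team's software.

```python
import math

def sigmas_of_separation(signal_pct=25.0, error_pct=15.0, n_sne=1):
    """How many standard deviations separate an Omega = 1 universe from an
    Omega = 0 universe, given the brightness difference between them and the
    per-supernova error. The error on the mean shrinks as sqrt(N)."""
    return signal_pct / (error_pct / math.sqrt(n_sne))

print(round(sigmas_of_separation(), 1))         # one supernova: 1.7, a mere hint
print(round(sigmas_of_separation(n_sne=9), 1))  # nine supernovae: 5.0, a real measurement
```

This is the arithmetic behind "reasonable data on a handful of objects at redshift of 0.5 could show the fate of the universe."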
The LBL team had a handful of supernovae by 1994, but I was convinced they hadn't yet learned anything about cosmology. The SCP data of SN1992bi were, unfortunately, taken in only one filter. With data in one filter there was no way to tell whether dust had dimmed the supernova. So there was no way to separate the effect of dust from the effect of cosmology. You simply could not say how much the acceleration or deceleration of the universe had affected that supernova's brightness because they hadn't gathered the essential data. We had developed the tools to make that measurement right, and we knew exactly what needed to be done.
At this point in 1994, Brian Schmidt returned from his trip to Chile. He and Nick Suntzeff had been talking. There was enough progress on the Calan/Tololo project to see that they were also going to solve the linked problems of supernova brightness and absorption by dust using measurements in several filters. And Nick, being Nick, wasn't convinced the photometry of the Supernova Cosmology Project was up to his own standards of excellence. If the CfA and Tololo worked together with our friends in Europe as the "high-z supernova search team," we surely could do this cosmological problem correctly. The letter "z" is the astronomer's shorthand symbol for "redshift," so "high-z" meant we were searching for very distant supernovae that could tell us something about cosmic deceleration.
Except for one thing. We hadn't found any supernovae at the distances where deceleration would show up. The Calan/Tololo search was photographic—that was yesterday's technology and couldn't be extended to higher redshift. LBL had been working on the problem of automated detection of supernovae since 1986. They had invested many years in building the software for their present system. Their team included highly experienced experimental physicists who were wizards at separating signal from noise in vast data sets. Was it realistic to think we could catch up?
"I think it will take a month," Brian said.
He quickly sketched how we could combine some software packages astronomers used every day to align the new data with the old and subtract it to show only the objects that had changed. We didn't have to reinvent the wheel as the LBL team had done: we could pick up some old wheels at the swap meet.
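The core of such an image-differencing scheme is simple to state: subtract a template image from a new image of the same field and look for anything that brightened. The sketch below is a hypothetical illustration in Python with NumPy, not the team's actual software; it assumes the two images are already registered and on the same flux scale, and a real pipeline would also match the point-spread functions of the two epochs, which is the genuinely hard part.

```python
import numpy as np

def find_new_objects(old, new, threshold=5.0):
    """Difference-imaging sketch: subtract a template image from a new epoch
    and flag pixels that brightened significantly.
    Assumes the images are already aligned and flux-matched."""
    diff = new - old
    noise = np.std(diff)                          # crude global noise estimate
    return np.argwhere(diff > threshold * noise)  # pixel coords of candidates

# Toy example: a flat field with one pixel that brightened between epochs.
old = np.zeros((32, 32))
new = old.copy()
new[10, 20] = 100.0                  # a new "supernova"
print(find_new_objects(old, new))    # reports the changed pixel at (10, 20)
```

The point of the anecdote survives in the sketch: the alignment, subtraction, and thresholding steps were all things existing astronomical software could already do, which is why Brian could estimate a month rather than years.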
We applied for time in the first half of 1995 to search for supernovae at Cerro Tololo, using the 4-meter Blanco telescope. We convinced the Time Allocation Committee that our methods were good enough to find distant supernovae, and the enterprise was worthwhile. We were assigned three observing runs of two nights each in the dark runs of February and March, when the moon was down and faint galaxies can be seen. Bruno Leibundgut and Jason Spyromilio applied for time at the European Southern Observatory (ESO) to follow up the flood of distant supernovae we were anticipating. Except ESO has a different calendar for assigning time and we missed the deadline to apply for the first quarter. We were disappointed to be assigned time starting in the second quarter of 1995 to get spectra and light curves at ESO.
Although we had convinced the Time Allocation Committee, Mother Nature had not read our persuasive prose. The first two runs at Tololo were not good and we did not find a single supernova. On the last night of the last run, 30 March 1995, Mark Phillips finally struck gold: a good candidate for a type Ia supernova. While Brian's software had some bugs ("please call them features"), it worked. Bruno Leibundgut carried the finding charts up the Pan-American Highway to ESO's La Silla observatory, where he and Jason used the New Technology Telescope to get images and spectra on 2 April. Because they had missed the filing deadline, their observing time had been pushed back just enough to save the whole enterprise from a terrific flop. The spectrum from ESO showed this was a genuine type Ia at redshift z = 0.479, the highest yet observed for a supernova. We announced SN 1995K in IAU Circular 6160. By the skin of our teeth, we were in the game.
But our high-z supernova search was far behind the LBL team. Taking into account the observing runs we had scheduled and the time it takes to completely process and calibrate the data, I figured it would be the middle of 1997 before we would have any results worth talking about. In June 1996, Princeton University celebrated its 250th birthday. Part of the self-congratulatory fun was a meeting called "Critical Dialogs in Cosmology." Princeton University has played a central role in the development of astronomy and of physics and of the combination of the two into modern astrophysics. And, for those without very sharp knowledge of institutional geography, the formidable presence of the Institute for Advanced Study, where Einstein worked, and where John Bahcall has built a temple of excellence for postdoctoral scholars in astrophysics, blurs into the luminous aura of the university. Plus they have a mutant race of black squirrels on campus, so it's always worth taking Exit 9 at East Brunswick on the New Jersey Turnpike.*
One of the arenas for "critical dialog" was the status of cosmological dark matter. Was Ω = 1? The meeting organizers opted for the dialectic: they decided to have a debate. As always in science, debates, polls, and opinion are less important than data. One good measurement is worth a thousand metaphors, as a nail is worth a thousand paperclips. But when a subject is murky, with conflicting claims that can't all be true, a debate can at least illuminate our ignorance.
The preponderance of evidence, based on the motions of galaxies in clusters and similar measurements of the mass of clusters, favored a low value of Ω. A good bet was that Ω is 0.3 ± 0.1. That's 7 sigma from Ω = 1. On the other hand, theoretical elegance favors Ω = 1, and when the data are not conclusive, esthetics have some weight. To have a debate, somebody has to take each side. Having no data, I got to moderate. We heard the conventional view that Ω = 0.3, based on the usual evidence from mass associated with galaxies. On the other hand, in the summer of 1996, there was some observational evidence presented for the view that data, not just theory, favored Ω = 1. Avishai Dekel, a former Israeli tank commander, argued forcefully, as tank commanders will, that his method of measuring galaxy motions and inferring the mass that caused those motions pointed in the direction of Ω = 1. I then turned to Saul Perlmutter, who presented the preliminary results of the Supernova Cosmology Project. Saul showed a Hubble diagram with seven supernovae at redshifts where cosmological deceleration would be important. If Ω = 1, then the distance to those redshifts would be a little smaller than otherwise and the supernovae would appear a little brighter. And, according to Saul, that's what the first bit of supernova data indicated: a decelerating universe with Ω not yet well determined, but in best agreement with Ω = 1.
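The "7 sigma" claim is just the gap between the measured value and the theoretically favored one, expressed in units of the quoted error. A one-line check, with the variable names my own:

```python
omega_measured, sigma = 0.3, 0.1   # measured Omega and its 1-sigma error
omega_theory = 1.0                 # the theoretically elegant value
gap_in_sigma = (omega_theory - omega_measured) / sigma
print(round(gap_in_sigma))         # 7
```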
At the coffee break, people asked me what I thought. Since I had nothing useful to say, I was polite. I said these were tough measurements: photometry is deceptively difficult, type Ia supernovae are not all alike, and you need a way to deal with dust. Maybe this wasn't the last word. Our high-z team was also working hard and would make an independent measurement. That's what I said. What I thought was, "Maybe we're too late."