
Although the principal target of the Calan/Tololo search was SN Ia for the Hubble constant and cosmology, you don't know what type of supernova you have discovered until you take its spectrum. Along with the type Ia's they were looking for, like fishermen with a broad net, the Calan/Tololo patrol was also trawling up some SN II. My student, Brian Schmidt, visited CTIO in 1991 and instigated an active collaboration that helped him complete his thesis, using some of the type II supernovae from the Calan/Tololo search. Brian's plan was to gather up the SN II observations, which were piling up on the floor in La Serena as a by-product of the Calan/Tololo search. If Brian could measure the supernova colors and the speed of the atmospheric expansion from the light curves and spectra, he could use them to get a better value of the Hubble constant from the expanding photosphere method, using some of the theory that Ron Eastman was developing.

In 1993 Brian Schmidt finished his Ph.D. at Harvard. Although we usually liked to push the fledglings out of the nest, Brian was so extraordinary he won one of the competitive postdoc jobs at the Center for Astrophysics. This gave him the chance to step out as an independent worker and to double his salary without moving. Brian decided to visit CTIO, and talked with Nick Suntzeff and with Mark Phillips about what should come next in studying supernovae.

Meanwhile, a serious effort to find and study supernovae had been developing at Berkeley since 1986. The combination of the Berkeley Astronomy Department with Alex Filippenko, the Berkeley Physics Department with Rich Muller, and the Lawrence Berkeley Laboratory (LBL), including Carl Pennypacker and later Saul Perlmutter, had been working on various aspects of supernova science. Muller, a brilliantly inventive physicist, decided to turn the process of finding supernovae from a craft into an industry.

Years earlier, in New Mexico, Stirling Colgate had cobbled together a supernova search telescope from a surplus Nike missile turret and the primitive computers of the 1970s. He built an automated system that could point at galaxies one after another without human intervention, taking an image in a few seconds. Computer software would then examine the image, and sound an alarm when a new object was detected. But Stirling Colgate was not quite the Leif Ericsson of supernova searching. Stirling was so far ahead of his time in so many technical areas that he never got all the pieces working together long enough to find even one supernova. He never got to Vinland.

Rich Muller knew that technology had evolved, and, after being rebuffed by the Air Force in a proposal to use their tracking telescopes at Kwajalein atoll to look for supernovae in their classified data stream, he inspired the effort to get Berkeley's 30-inch telescope east of the Berkeley Hills operating in the way that Colgate had envisioned.*

After some agony, it began to work, and the Berkeley Automatic Supernova Search Team began to find supernovae in 1986. What was especially good about this approach was that you could keep careful records of the galaxies searched and use that information to figure out the rates of supernovae in various galaxy types. Best of all, if you adjusted the observing cadence of the search, you could maximize your chances of finding supernovae on the way up, before they reached their maximum brightness. Getting the supernova search telescope working took technical innovation, but building up results on rates took patience and dedication to the subject.

Rich Muller's brain was too effervescent to plod. Also at Berkeley, physicist Luis Alvarez and his geologist son Walter were beginning to piece together evidence that the Earth had been bombarded by asteroids. About 65 million years ago, one of these killer rocks had whacked into the Yucatan, shrouding the Earth in dust, making life stressful, perhaps to the point of extinction, for the dinosaurs.6 Further investigation of the cratering history of the Earth suggested that episodes of bombardment were periodic, recurring roughly every 26 million years. One hypothesis was that the sun had a distant companion—a dim star 160 times farther from the sun than Pluto, which slowly made its way around an elliptical orbit. Every 26 million years, according to this idea, there would be a rain of doom as the Nemesis star nudged rocks in the outer solar system into orbits that would bombard the Earth. This idea was so interesting to Muller that the automated telescope was partly diverted from supernovae and the fate of the universe into searching for Nemesis and the fate of life on Earth. This isn't the choice I would have made, but you can see that an Earthling might be interested. Muller didn't find Nemesis, though it may yet be lurking out there. Or perhaps there is some other cause for the periodic bombardment. Or perhaps the geological evidence for periodicity is not as strong as it seemed at first.

In any case, the idea that supernovae were interesting and possibly a route to learning something about the fate of the universe remained alive. LBL had working software that could find a new supernova in an image of a galaxy and had shown that this system could work on individual galaxy images with a small telescope. It wasn't that great a leap to think that similar software could work on an image that contained many galaxies from a large telescope, as the Danes were doing at the European Southern Observatory. LBL worked out a deal with the Anglo-Australian Telescope to build a big, very fast CCD camera that could get the data for this program, installing it on that 4-meter telescope in exchange for time to use it to hunt for supernovae. The optical design was very daring. But the instrument never worked satisfactorily, and that LBL effort never reported a supernova.

In 1989, UC Berkeley won a national competition sponsored by the National Science Foundation to fund a new science center to address the question of dark matter in the universe. The Center for Particle Astrophysics was ably led by Bernard Sadoulet, formerly a lieutenant of Carlo Rubbia at CERN, the European accelerator laboratory near Geneva. The idea of the center was to learn about dark matter in a large number of ways. Their artfully designed T-shirt said, "If it isn't dark, it doesn't matter." Sadoulet himself would take the direct approach, building laboratory detectors to see if dark matter particles were drifting through the room. Another group would look for the signature of dark matter in the microwave background. Theorists would weave all of this together into a coherent story for the evolution of a dark matter universe. And supernovae would be used to measure the amount of dark matter by detecting cosmic deceleration. If Ω = 1, then at redshift 0.5 the supernovae should appear 25% brighter than otherwise. The Supernova Cosmology Project (SCP) was going to make that measurement. LBL was experienced in supernova detection software, had capabilities in advanced instrumentation, and, as experimental physicists, understood the analysis of subtle data sets. They would lead the way, with help from Alex Filippenko in the Berkeley astronomy department, who joined the project in 1993.
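For a sense of where a number like that comes from, here is a minimal sketch (mine, not the SCP's) using the standard textbook luminosity-distance formulas, and taking "otherwise" to mean an empty universe:

```python
import math

# Luminosity distances in units of c/H0 (standard textbook formulas).

def d_lum_omega1(z):
    """Einstein-de Sitter universe (Omega = 1, matter only)."""
    return 2.0 * (1.0 + z) * (1.0 - 1.0 / math.sqrt(1.0 + z))

def d_lum_empty(z):
    """Empty, freely coasting universe (Omega = 0)."""
    return 0.5 * z * (2.0 + z)

z = 0.5
flux_ratio = (d_lum_empty(z) / d_lum_omega1(z)) ** 2  # flux falls as 1/d^2
print(f"At z = {z}, Omega = 1 makes a supernova appear "
      f"{100 * (flux_ratio - 1):.0f}% brighter than an empty universe would")
```

These exact formulas give close to 30 percent; the first-order approximation in the deceleration parameter, common in papers of that era, gives roughly the 25 percent quoted here.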

To stimulate this enterprise, in 1989 they organized a symposium in Berkeley to bring together all of these strands in modern astrophysics. I gave a talk on "Attacking H₀ and Ω with Supernovae." Despite this bellicose title, my conclusion, based on the Danish work, was timid: "these pioneering observations point out the possibility of making progress on the cosmological problem from diligent observation." In fact, I thought the scatter among SN Ia was so large that, since the number of supernovae you need increases as the square of the scatter, you would need so much diligent observation that we should build a special 4-meter telescope with supernovae and Ω in mind, at a cost of $10 million. The cheaper path would be to understand supernovae better.

When the National Science Foundation established the Center for Particle Astrophysics, Bernard Sadoulet asked me to serve on their External Advisory Board, which was supposed to help evaluate the many activities of the Center and advise him on choices he had to make. The supernova team was having trouble. After the dead end at the Anglo-Australian Telescope, they did not have a working camera on a telescope where they had guaranteed access. They were going to have to compete with the rest of the astronomical community for time at Kitt Peak or Cerro Tololo. But their credibility in that community was not the best after the 1987A pulsar report, the diversion of the supernova search to Nemesis, and the camera's failure in Australia. Although they had invested serious effort in the supernova-finding software, they hadn't yet found any distant supernovae, so time-allocation committees were reluctant to give them scarce telescope time to carry out a search. If they didn't search, they weren't going to find any supernovae. To get out of this catch-22, Bernard convened an outside committee.

That group proposed putting Perlmutter in charge. Although he was quite junior, Saul was very determined, had good judgment about what was most important, and made a forceful spokesman for the project. Maybe he could convince people to give them the telescope time they needed. They also proposed more money and a program to acquire large CCD detectors to put in a camera on a British telescope in the Canary Islands, in exchange for guaranteed time. Though the outcome was good for them, the SCP didn't like undergoing all these reviews.

While the LBL crew was struggling with all this, I would breeze in periodically for the External Advisory Board meeting. As I recall, I emphasized three things. One was that photometry was hard, and they should not underestimate the difficulty of making accurate measurements of faint objects. Another was that there was growing evidence that SN Ia were not all alike, and they should pay close attention to this work. And finally, there was a history to this subject, and the lesson of history was to watch out for dust. If they didn't make measurements to determine reddening, there would be problems later with the interpretation. Alex Filippenko, from the Berkeley astronomy department, gave them similar warnings. Nobody at LBL really wanted to hear all these cautions—they had their hands full figuring out how to find distant supernovae. I realized just how unwelcome these suggestions were when SCP member Gerson Goldhaber later described this period by saying, "Bob Kirshner was pooh-poohing our research every step of the way; he said the approach would never work."7

Finally, the Berkeley team got a break. In 1992, they found SN 1992bi using the 2.5-meter Isaac Newton Telescope in the Canary Islands. Because of this discovery, they were successful with the Kitt Peak time allocation system and won time to search with the observatory's standard CCD camera at the 4-meter. By 1994, they had six objects. Saul proved to be a master at getting other people to observe their discoveries. He would track you down in the control room of a telescope anywhere in the world, impress on you how important his work was, and try to convince you that it was more important to observe his supernova tonight than your own program. It was a tough sell, but Saul was relentless. People might roll their eyes, but they would take the data he wanted.

Of course, I had been on the other end of these exchanges myself many times, diplomatically hoping to get an observer to take some unscheduled data of a particularly interesting object. Supernovae are different from most astronomical objects. Most objects will be there next year, so if you don't do them tonight, you can do them next year. But with supernovae, if you don't act now, the chance will pass, and you will lose them forever. It adds drama to observing. The quid pro quo, most effectively instituted at CTIO, but which I had also implemented at the CfA, was first to get the authority to butt in from the Time Allocation Committee and then to give credit to everybody who contributed data—including them as authors of the resulting scientific publication.

In response to his phone calls, I observed Saul's objects myself, getting a spectrum of SN 1994G at the MMT in Arizona. At that point this was the best spectrum ever taken of a high-redshift supernova. I shared my data with the SCP. I was surprised when they presented it at the next advisory board meeting I attended as "a spectrum we have obtained."

In August 1993 the LBL team submitted their first scientific result for publication in The Astrophysical Journal Letters, a description of their work on SN 1992bi, a supernova in a galaxy at redshift 0.46. Publication in a reputable journal is the moment when a scientific team gets credit for their work. It is important, even in a world where electronic preprints and meeting abstracts are lesser forms of telling the world what you've done. In astronomy, as in most academic fields, the editors of a journal send a paper to a "competent referee" who is supposed to read it carefully, offer comments or suggestions for improvement, and advise the editor whether the paper is suitable for publication in that journal. Referee's reports in astronomy are usually anonymous to avoid retribution for frankness. A typical referee's report might point out omissions ("this paper contains too few references to the work of the anonymous referee"), errors ("the statement at the end of the paragraph is wrong—a standard candle appears dimmer in an empty universe"), as well as offering a judgment ("this paper is both novel and correct. Unfortunately, the parts that are correct are not novel and those that are novel are not correct").

The Astrophysical Journal Letters is a U.S. journal with high standards—to get in, an article needs to be short (four pages, maximum) and very interesting. Since I wasn't an author and knew something about supernovae, the editor sent this paper to me. At first, I was delighted. After all, this is a paper I would read carefully in any case. Then I read it and was not so delighted. It was short and interesting, but a reader couldn't tell if it was right. It seemed to minimize three things. That photometry is hard. That SN Ia are not all alike. And what about dust? Because of the way they observed this object, the SCP did not have any information on the color of this supernova, so they had no way to say anything about the effects of dust, which could easily be as big as the effects of cosmology. Maybe the supernova was much brighter due to a decelerating universe, but this was balanced out by dust absorption. There was no way to tell. Since the true brightness of the supernova was the central point of the paper, I thought they had a real problem.

What to do? On one hand, you owe the journal a frank appraisal (especially if the journal editor has his office four hallways away!); on the other, you hate to make life harder for people who are busting a gut to do something important. I sent a very detailed report, recommending a lot of changes before publication. The authors revised the text, but I still wasn't convinced they had dealt with the central issues. Maybe it wasn't possible in the four-page format of The Astrophysical Journal Letters, and they should consider writing the War and Peace-length version for another journal. Authors don't have to accept the verdict of a single referee, who might be pig-headed. They can ask for another. Which in this case, they did. Journal editors figure the chance of getting two village idiots in a row is small. The second referee wrote a long report, generally concurring with mine and suggesting a major change in the paper's emphasis. Then the first referee gets to see what the second referee said. Neither of us recommended publishing the paper in its present form.

The editors, sensibly, err on the side of caution, not willing to publish something until people more-or-less agree and somebody says, "this should be published." The authors can revise their paper to take the referees' comments into account. All of this back-and-forth takes time. Their paper, with more modest claims about cosmology, appeared in the 20 February 1995 issue of The Astrophysical Journal Letters.

While the Supernova Cosmology Project was getting underway at LBL, the Calan/Tololo team had begun to crack open the problem of what to do about the big difference in the brightnesses of SN Ia like SN 1991T and SN 1991bg. Supernovae are not all alike, but there is a way to deal with the differences. Mario Hamuy was the lead author of a paper from the Calan/Tololo team that took the nugget of an idea from Mark Phillips and turned it into a real solution to this puzzle. Mario's paper showed that Mark was right: the slowly declining supernovae are the bright ones and the fast decliners are the dim ones. If you measure how fast a type Ia supernova fades after it reaches maximum light, you learn whether it is on high beams or low. If you know that, you won't foolishly assign it the wrong distance.

This result was very important for the program of using supernovae to measure cosmic deceleration. Instead of a big range of brightness that causes big distance errors, the use of the light-curve shape for SN Ia decreased the error in distance for a single measurement to about 7 percent. This moved the problem of measuring cosmic deceleration from a major undertaking requiring its own 4-meter telescope to a reasonable observing program that could probably be done by a determined group with existing facilities in the span of a single graduate student's thesis.
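In outline, the method amounts to a one-parameter correction to the standard-candle assumption. The sketch below is illustrative, not the published Calan/Tololo calibration; the slope and zero point are invented placeholders:

```python
# Illustrative decline-rate calibration. The numbers are placeholders,
# not the published Calan/Tololo fit.
SLOPE = 0.8          # hypothetical: mags of dimming per unit of dm15
M_FIDUCIAL = -19.3   # hypothetical peak absolute magnitude at dm15 = 1.1

def peak_absolute_magnitude(dm15):
    """Infer true peak brightness from dm15, the decline in magnitudes
    over the first 15 days after maximum light. Slow decliners (small
    dm15) are intrinsically brighter: high beams, not low."""
    return M_FIDUCIAL + SLOPE * (dm15 - 1.1)

def distance_modulus(m_peak_observed, dm15):
    """mu = m - M, which translates directly into a distance."""
    return m_peak_observed - peak_absolute_magnitude(dm15)

# A fast decliner is assigned a dimmer true brightness, and therefore
# a smaller distance, than the naive standard-candle assumption gives.
print(distance_modulus(m_peak_observed=18.0, dm15=1.7))
```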

At the same time, the Cerro Tololo team let us at the CfA see some of their light curves. Combined with our own data from Arizona, we then had a good set of light curves to look at the connection between the decline rate and the true brightness. Graduate student Adam Riess, with mathematical inspiration from Bill Press, another professor at Harvard, and astronomical advice from me, developed an alternative way to use the light curves to determine the intrinsic brightness of supernovae. Adam started from Bruno Leibundgut's template light curve, then examined how the light curves of brighter or dimmer supernovae differed from the template. It was a neat piece of work that gave results as good as the CTIO group's method. These methods also gave quantitative estimates of just how good the distance measurement was for each supernova. Knowing your errors is very helpful in knowing how far to trust the conclusions. Some objects are observed many times and the light curve is great; others have spotty data due to weather, failure to twist the observer's arm, or other causes. It matters whether you know the distance to a supernova well or poorly when you are trying to measure cosmic deceleration. The "light-curve shape" method (LCS—which we thought was droll in the year of the baseball strike, when there was no League Championship Series) told us the sigma: how trustworthy each distance measurement was and how much it could add to a measurement of the cosmic deceleration.
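Schematically (this is my reconstruction of the idea, not Adam's actual code), the fit is a small weighted least-squares problem: model each light curve as the template plus a brightness parameter times a correction shape, and the covariance of the fit hands you the sigma for free:

```python
import numpy as np

def fit_light_curve(mags, mag_errs, template, correction):
    """Fit observed magnitudes, sampled at the same epochs as the
    template, to:  m(t) = offset + template(t) + delta * correction(t).
    'offset' absorbs the distance; 'delta' measures how the event
    deviates from the fiducial template (brighter or dimmer).
    Returns best-fit parameters and their 1-sigma uncertainties."""
    A = np.column_stack([np.ones_like(template), correction])
    w = 1.0 / mag_errs ** 2
    # Weighted normal equations: (A^T W A) x = A^T W (m - template)
    ATA = A.T @ (w[:, None] * A)
    ATb = A.T @ (w * (mags - template))
    x = np.linalg.solve(ATA, ATb)
    sigmas = np.sqrt(np.diag(np.linalg.inv(ATA)))
    return x, sigmas  # (offset, delta), (sigma_offset, sigma_delta)
```

A well-sampled light curve with small error bars yields a small sigma on the offset; a spotty one yields a large sigma, which is exactly the honesty about errors described here.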

Adam didn't stop there. I was worried about dust. If supernovae nearby were found more easily in dusty regions than the supernovae in distant galaxies, then you might end up with local supernovae being dimmed, distant supernovae appearing brighter, and a false signal for cosmic deceleration that was due only to failure to account for absorption by dust. How embarrassing would that be?

Adam and I noticed a very troubling property of the data for local supernovae. If you took the data for most of the supernovae from the time of Baade and Zwicky, and assumed they had the same intrinsic color at maximum light, then you could use the observed color to estimate how much they had been affected by dust. For example, if you knew the real color of a supernova was blue, but you measured yellow or red, you would know that wicked dust was between you and the supernova, dimming it and making it appear redder. The trouble with this simple picture was that when you took a sample of data and corrected in this way, instead of decreasing the scatter, the scatter of the points got bigger. This is Nature's way of telling you that you've done something stupid and that, instead of correcting for reddening, you are somehow making things worse.

The solution was not so complex. What if, instead of assuming that all supernovae had the same color, you assumed that the color might depend on the brightness? After all, the light curves of the bright ones declined more slowly and the spectrum was a little different, so why couldn't the colors be different, too? The bright ones might be blue and the dim ones red. In fact, if the spectrum differences were caused by temperature differences (which is usually the case), then you'd expect blue color, high temperature, and an extra-bright supernova like SN 1991T to show a spectrum that was a little different from a red, cool, dim one like SN 1991bg. Adam was able to solve separately for the light-curve shape as observed in one filter (which tells you the true brightness and the intrinsic color) and the measured color using another filter, which tells you how much the supernova light has been reddened by dust. And if you made measurements through more filters, each sampling the light in a different color, you learned still more about the true distance of the supernova.
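A toy simulation shows both halves of the story: why the naive one-color correction made the scatter worse, and why solving for brightness and reddening separately fixes it. Every number here is invented for illustration; none of it is a fit to real supernova data:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000
R = 4.0                             # assumed: dust dims ~4 mags per mag of reddening
delta = rng.normal(0.0, 0.2, n)     # luminosity deviation in mags; positive = dim
color_true = 0.8 * delta            # assumed slope: dim supernovae are intrinsically redder
E_dust = rng.exponential(0.05, n)   # dust reddening, always >= 0
color_obs = color_true + E_dust + rng.normal(0.0, 0.02, n)  # plus measurement noise

M = -19.3 + delta                   # true peak absolute magnitude
m = M + R * E_dust                  # observed brightness (distance modulus removed)

# Naive: assume every supernova has the same intrinsic color (zero),
# so all observed color, even the intrinsic part, is blamed on dust.
m_naive = m - R * color_obs
# Shape-aware: the light-curve shape gives delta, hence the intrinsic
# color, so only the excess color is treated as reddening.
m_aware = m - R * (color_obs - 0.8 * delta) - delta

print("scatter, no correction :", round(float(np.std(m)), 3))
print("scatter, naive color   :", round(float(np.std(m_naive)), 3))
print("scatter, shape-aware   :", round(float(np.std(m_aware)), 3))
```

With these invented numbers, the naive correction inflates the scatter (the bright-blue correlation gets misread as negative reddening, and color noise is amplified by R), while the shape-aware version pulls it well below the uncorrected value.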

With this in mind, all the new data we had been taking in the CfA sample and all the new data from Cerro Tololo were observations taken in several colors, ranging from the ultraviolet to blue to green to red out into the infrared, where the CCD detectors work but your eyes don't. We called the new and improved method MLCS (for "multicolor light-curve shape"). The Calan/Tololo crew developed an independent method that also allowed a measurement of both the distance and the dust absorption. Both groups had figured out how to use the light-curve information to make type Ia supernovae into the best distance-measuring tools for cosmology.

The results were excellent. When we used MLCS on the data for nearby SN Ia, we could reduce the scatter from about 40% (if you assumed they were identical standard candles) down to less than 15% by using information about the light-curve shape and the color to see which were bright and which were dim. The methods developed by Mark Phillips and his collaborators worked equally well. Since random Gaussian errors get driven down by the square root of the number of measurements, the number of supernovae you need to see the difference between an Ω = 1 universe and an Ω = 0 universe depends on the square of the error associated with each data point. Reducing the scatter in brightness for a sample of SN Ia from 40% to under 15%, about a factor of 3, meant you could make the cosmological measurement nine times faster!
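The arithmetic behind "nine times faster" is just the statistics of averaging: the error of a mean falls as one over the square root of the number of objects, so the number of supernovae needed for a fixed precision grows as the square of the per-object scatter. With the round numbers above:

```python
# Error of a mean scales as sigma / sqrt(N), so for a fixed target
# precision the required N scales as sigma squared.
scatter_before = 0.40  # ~40% brightness scatter as naive standard candles
scatter_after = 0.15   # ~15% after light-curve-shape and color corrections
speedup = (scatter_before / scatter_after) ** 2
print(f"speedup: {speedup:.1f}x")  # ~7x with these inputs; rounding the
# improvement to "a factor of 3" in scatter gives the factor of 9 quoted.
```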
