$$\sigma^2 = \frac{1}{n}\sum_{i=1}^{n}(x_i - m)^2 \tag{6.5}$$

where m is the mean, n is the number of independent measurements of x, and the x_i are the individual measured values. In this case, m is the true mean, a value that can be obtained only with an infinite amount of data.

In practice the average value x_av of the n measured numbers may be the best approximation of m that is available. In this case, since m is not independently obtained, the divisor n in (5) is replaced with n − 1,

$$\sigma^2 = \frac{1}{n-1}\sum_{i=1}^{n}(x_i - x_{\mathrm{av}})^2 \qquad \text{(Practical variance)} \tag{6.6}$$

The two expressions become equal for large n. The variance can be evaluated for any given experimental distribution, whether it is random or not. One simply substitutes the raw data points into (5) or (6).
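As a concrete illustration, the short Python sketch below evaluates both forms of the variance for a set of raw data points; the measurement values are invented for this example:

```python
import numpy as np

# Invented repeated measurements of the same quantity.
x = np.array([98.0, 103.0, 97.0, 105.0, 99.0, 101.0])

# Eq. (5): variance about a known true mean m (assumed here to be known).
m_true = 100.0
var_true_mean = np.sum((x - m_true) ** 2) / len(x)

# Eq. (6): practical variance about the sample average, with divisor n - 1.
x_av = x.mean()
var_practical = np.sum((x - x_av) ** 2) / (len(x) - 1)

print(var_true_mean, var_practical)
print(np.var(x, ddof=1))  # NumPy equivalent of eq. (6)
```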

The variance (5) of a theoretical distribution such as the Poisson distribution (1) can also be calculated. For this case, it is useful to rewrite (5) in terms of the probability P_x used in the theoretical expressions. The summation would include subsets of terms wherein there are n_j occurrences of the same value x_j. The sum of such a subset, divided by n, can be written as (x_j − m)² n_j/n. The overall summation (5) can thus be rewritten as a sum of such terms; that is, the summation will be over x_j, or simply x, rather than the trial number i. Each term of (5) will then contain the quotient n_j/n, which is the probability P_x that the value x occurs. Thus (5) becomes

$$\sigma^2 = \sum_{x}(x - m)^2 P_x \tag{6.7}$$
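The regrouping can be verified numerically; in this sketch (with made-up integer counts), the sum over trials in (5) and the sum over distinct values weighted by P_x = n_j/n in (7) give identical results:

```python
import numpy as np

# Made-up counts in which several values repeat, to show the regrouping.
data = np.array([2, 3, 3, 4, 4, 4, 5, 5, 6])
m = data.mean()  # sample average used as the estimate of the mean

# Sum over trials i, as in (5) (divisor n, for comparison):
var_direct = np.sum((data - m) ** 2) / len(data)

# Sum over distinct values x, weighted by P_x = n_j / n, as in (7):
values, counts = np.unique(data, return_counts=True)
P_x = counts / len(data)
var_grouped = np.sum((values - m) ** 2 * P_x)

print(var_direct, var_grouped)  # identical by construction
```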

The variance of the Poisson distribution is obtained from substitution of (1) into (7). Subsequent (difficult) evaluation yields

$$\sigma^2 = \sum_{x}(x - m)^2\,\frac{m^x e^{-m}}{x!} = m \qquad \text{(Variance of Poisson distribution)} \tag{6.8}$$

The standard deviation σ of the Poisson distribution is simply the square root of the mean number of occurrences(!),

$$\sigma = m^{1/2} \tag{6.9}$$
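Equation (8) can be checked numerically by summing the series directly. A minimal sketch follows, with m = 100 chosen arbitrarily; the Poisson term is evaluated in logarithmic form to avoid overflow in m^x and x!:

```python
import math

m = 100.0  # arbitrary Poisson mean

# Eq. (8): sum (x - m)^2 * m^x e^{-m} / x! over enough terms to converge.
# lgamma(x + 1) = ln(x!), so each term is computed in log form first.
var = sum((x - m) ** 2 * math.exp(x * math.log(m) - m - math.lgamma(x + 1))
          for x in range(1000))

print(var)             # ~100.0, i.e. equal to m, as in (8)
print(math.sqrt(var))  # sigma = m**0.5 = 10, as in (9)
```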

If 100 photons are expected to arrive at a pixel of a CCD during an exposure of 1 s, based on the average rate of many prior trials, the standard deviation for a single measurement is σ = √100 = 10. In subsequent measurements, the values will typically fluctuate by ±10, and occasionally by as much as ±30, about the 100-count mean.

The uncertainty σ relative to the mean value in this case is σ/m = 10/100, or 10%. It is "a 10% measurement". Now, at this same rate, in 100 s, one expects 10 000 counts. In this case, the standard deviation for a single measurement is √10 000 = 100, and the relative uncertainty is much less, 100/10 000, or 1%. The accumulation of more counts leads to larger absolute fluctuations, but the relative or fractional uncertainty is reduced. The latter observation determines the average rate, namely 100 counts/s (if previously unknown), to ~1%. Longer observations determine rates more precisely.
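A brief simulation reproduces these numbers; it assumes only NumPy's Poisson generator, and the number of simulated exposures is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(42)

# Expected counts for the 1-s and 100-s exposures of the example.
for mean_counts in (100, 10_000):
    counts = rng.poisson(mean_counts, size=100_000)  # simulated exposures
    sigma = counts.std()
    print(mean_counts, sigma, sigma / mean_counts)
# sigma comes out ~10 and ~100; the fractional uncertainty
# drops from ~10% to ~1% as the counts accumulate.
```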

If only a single measurement is made with m = 100 and only 90 counts are detected, the true value is not known, but it probably lies within 10-20 counts of the measured value. One often bases the "error" on the measured, not the true, number, in this case σ = √90 ≈ 9.5. Here one is adopting the measured value as an approximation of the true mean.

The variance of the normal distribution is obtained through substitution of (3) into the integral form of (7),

$$\sigma^2 = \int_{-\infty}^{\infty}(x - m)^2 P(x)\,dx = \sigma_w^2 \qquad \text{(Variance, normal distribution)} \tag{6.10}$$

The standard deviation σ of the normal distribution is thus simply σ_w, the width parameter of the distribution (3). This is not accidental; the constant in the exponential is adjusted to make this so. In practice, the symbol σ is used in (3) without subscript, and it is called the standard deviation, which, of course, is what it is. Hereafter we do so also.
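Equation (10) can also be checked numerically. The sketch below assumes the standard form of the normal distribution for (3) and integrates (x − m)²P(x) on a fine grid, for arbitrary m and σ_w:

```python
import numpy as np

m, sigma_w = 0.0, 2.5  # arbitrary mean and width parameter of (3)

# Fine grid spanning +-10 sigma_w; a simple Riemann sum suffices here.
x = np.linspace(m - 10 * sigma_w, m + 10 * sigma_w, 100_001)
dx = x[1] - x[0]
P = np.exp(-((x - m) ** 2) / (2 * sigma_w**2)) / (sigma_w * np.sqrt(2 * np.pi))

variance = np.sum((x - m) ** 2 * P) * dx  # integral form of (7)
print(variance, sigma_w**2)               # both ~6.25, confirming (10)
```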

Measurement significance

As noted above, if one sets σ = m^(1/2) and expects a large number of events, the Poisson distribution is approximated by the symmetric normal distribution. In this case, one can invoke the normal-distribution probabilities in Table 2; e.g., the probability of an excursion beyond ±3σ is 0.27%. Thus, if a pixel of a CCD is expected to record 100 photons during a given exposure time, the standard deviation will be σ = 10, and the probability of lying outside the 70-130 range will be only 0.0027.

If one measures 130 photons in this experiment, one might ask if the source really brightened or if this is simply a statistical fluctuation. There is only one chance in 1/0.0027 = 370 that this excursion (or a greater one) would happen solely from statistical fluctuations. One might then be inclined to suspect the source had actually brightened, but it would be better to obtain more data to convert this into a 4σ or 5σ result, the latter with a probability of being a statistical fluctuation of only 6 × 10⁻⁷. Additional data can greatly improve one's confidence that the effect is real, if it is. Otherwise, the statistical significance would most likely decrease.
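The quoted tail probabilities follow from the complementary error function; a minimal sketch:

```python
import math

def two_sided_tail(n_sigma: float) -> float:
    """Probability that a normal variate lies more than n_sigma
    from the mean, on either side."""
    return math.erfc(n_sigma / math.sqrt(2))

print(two_sided_tail(3))  # ~2.7e-3, i.e. 0.27%, one chance in ~370
print(two_sided_tail(5))  # ~5.7e-7, the ~6e-7 quoted above
```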

It is sometimes tempting to report such a result as real if a repeat measurement is difficult or impossible, e.g., if it would require an entire week on a big telescope or if the source were in a rare flaring state. The temptation is even greater if the indicated result is of sufficient importance to win one great fame. Because of this, one must always try to repeat the measurement, or at least take great care to understand all the factors that went into the probability calculation that makes such an event seem real. One can also earn fame as a fool by overinterpreting data.

Statistical traps

Statistical arguments can seem convincing even if they are wrong. Here we mention two rather common traps in such arguments. The first is to overlook the effect of repeated measurements, or multiple trials. Assume that one makes many measurements and finds a 5σ effect in one of them. The probability of a statistical fluctuation this large in one given trial is only 6 × 10⁻⁷. But if the measurement consisted of an examination of each of the 4 million pixels of a CCD chip exposed to the sky, one has 4 million trials, each of which could produce the 5σ effect.

In this case, the expected number of such fluctuations, the expectation value, among the 4 million pixels is (6 × 10⁻⁷) × (4 × 10⁶) = 2.4. Thus one would not be surprised to find several statistical fluctuations of this magnitude on the CCD exposure. One must conclude, therefore, that the 5σ effect could easily be a statistical fluctuation and is unlikely to represent a real celestial source.
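The arithmetic, together with the probability that at least one such fluctuation appears somewhere on the chip, in a short sketch:

```python
p = 6e-7                # probability of a >5-sigma fluctuation in one pixel
n_pixels = 4_000_000    # trials: every pixel of the CCD

expected = p * n_pixels                   # expectation value: 2.4
p_at_least_one = 1 - (1 - p) ** n_pixels  # ~0.91: almost a certainty
print(expected, p_at_least_one)
```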

A related error is to calculate a probability after examining some data. This a posteriori (after the fact) probability calculation can easily overlook other unconscious trials. For example, if on my birthday I happened to see a license plate with my birth-date numerals on it, I would be amazed. I could calculate that in my state there are ~10⁶ plates and that I see only about 100 per day. Thus the probability of seeing it accidentally on this day is only 10⁻⁴, one chance in 10 000. I might be tempted to conclude that there was a real psychic phenomenon at work here, namely that the owner of the car is drawn subconsciously to my locale.

My misconception here is that I did not take into account that there are many different plate numbers that are significant to me, such as my birth-date numbers reversed or reordered, the birth dates of my wife, daughters, or other relatives, my bank PIN, or important constants in physics, and so on. These amount to other experiments or trials that I should include in my calculation. If there are 100 such numbers, the probability would drop to 1 in 100, and that is not so unusual. Once about every 100 days, I might well see a number that amazes me, based only on the statistics of random events, not on psychic phenomena. Similar issues can arise in astronomy.

This error is particularly insidious because one can never know after the fact what events might have appealed to one as unusual. The proper thing to do is to define a priori (before the fact) the questions you will ask of your data, that is, before you take them. If a set of data shows an interesting effect, a posteriori, one should test the hypothesis that this effect is real by taking additional data. But sometimes one cannot, because telescope time is unavailable or because one may have already used all the examples the sky has to offer. Again, great care is required in reporting results.

Background

Data are often contaminated by background events. For example, if one is counting x rays from a neutron-star binary system in a proportional counter with a simple tubular collimator, the background will consist of two major components: (i) counts due to cosmic ray particles that fail to trigger anticoincidence logic, and (ii) counts from the diffuse glow of x rays from the sky, known as the diffuse x-ray background. Commonly, two measurements will be made, one with the astrophysical source in the field of view and one with the telescope (or collimator) offset from the source so it measures only background. If the telescope/detector produces a sky image, one can measure intensities on and off the source in a single exposure. We start with a brief discussion of the propagation of errors.

Propagation of errors

The discussion above has pertained to the nature of errors on measured quantities. After the data are taken, one invariably manipulates them to obtain other quantities. For example, one may divide the accumulated number of counts by the accumulation time to find the rate of photon arrivals. Or, one might subtract the background to get the true source counts. What are the errors on the calculated quantities? The four basic arithmetical operations cover most practical cases.

A simple non-statistical argument can be used to give some insight into the process. Assume the measurement errors on a quantity never exceed some fixed value. Consider measurements of a length x and a length y, each accurate to 1 mm (maximum). The maximum possible error in the sum would thus be 2 mm. Restating this, let z be the sum, z = x + y. If dx and dy are the deviations (errors) in these quantities, the deviation in z is its differential, dz = dx + dy. The maximum possible deviation would be |dz|_max = |dx|_max + |dy|_max. The maximum error is thus the sum of the individual maximum errors. The subtraction z = x − y would yield the same result, namely that the individual errors are summed.

A similar argument shows that the fractional error of a product is the sum of the individual fractional errors. Let z = xy to find that dz = x dy + y dx, or |dz/z|_max = |dx/x|_max + |dy/y|_max. The same result follows for the quotient of two variables.

The above argument assumes that an excursion in one variable will be matched by an excursion in the other in the direction that maximizes the error. In fact, the measurements of x and y would most likely be uncorrelated; a maximum positive excursion in x would most likely not be associated with a maximum excursion in y. The errors in the calculated product or sum are thus, on average, less than the extreme values found above.

In this more realistic case, the individual variables x and y vary independently of one another about their respective means with normal distributions characterized by standard deviations σ_x and σ_y. It can be shown (not here) that in a summation or subtraction, the variance σ_z² of the sum or difference is the individual variances of x and y added in quadrature,

$$\sigma_z^2 = \sigma_x^2 + \sigma_y^2 \qquad \text{(Error in a sum or difference)} \tag{6.11}$$

For example, if σ_x = σ_y, the standard deviation in the sum (or difference) would be σ_z = √2 σ_x instead of the 2σ_x given by the simplified argument. If σ_x ≫ σ_y, the error in x dominates the overall error even more strongly than under the former assumptions, giving σ_z ≈ σ_x. Similarly, the fractional variance of a product or quotient is the sum of the fractional variances of the individual variables, i.e., the fractional standard deviations added in quadrature,

$$\left(\frac{\sigma_z}{z}\right)^2 = \left(\frac{\sigma_x}{x}\right)^2 + \left(\frac{\sigma_y}{y}\right)^2 \qquad \text{(Error in a product or quotient)} \tag{6.12}$$

In each case, the "maximum value" derivation above gave the correct relation, except for the addition in quadrature. Remember that one adds absolute errors (in quadrature) when adding or subtracting measured quantities and adds fractional errors (in quadrature) when multiplying or dividing such quantities.
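A Monte Carlo check of (11) and (12), with arbitrary example values for the means and standard deviations:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1_000_000  # number of simulated measurement pairs

# x and y measured independently with normal errors sigma_x and sigma_y.
x0, y0, sigma_x, sigma_y = 50.0, 30.0, 3.0, 4.0
x = rng.normal(x0, sigma_x, n)
y = rng.normal(y0, sigma_y, n)

# Sum: absolute errors add in quadrature, eq. (11).
print((x + y).std(), np.hypot(sigma_x, sigma_y))  # both ~5.0

# Product: fractional errors add in quadrature, eq. (12).
frac_pred = np.hypot(sigma_x / x0, sigma_y / y0)
print((x * y).std() / (x0 * y0), frac_pred)       # both ~0.146
```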

Background subtraction

The subtraction of background makes use of these tools. Let the expected number of source counts detected in a given time interval Δt be S, and let the number of expected background counts in the same or an equivalent time interval be B. The on-source measurement will thus yield S + B counts, and an off-source measurement of the same duration will yield B counts. The desired signal S is simply the difference of the two measured quantities,

$$S = (S + B) - (B) \qquad \text{(Signal counts; equal exposures)} \tag{6.13}$$

The measured quantities S + B and B will exhibit fluctuations which propagate through the subtraction process to yield a net error on the calculated S.

The two measurements of B and S + B are quite independent of one another; different photons and different background counts are involved. Thus the fluctuations in the two rates will be uncorrelated. The variance of the difference S is thus, from (11),

$$\sigma_S^2 = \sigma_{S+B}^2 + \sigma_B^2 \tag{6.14}$$

where the two variances on the right refer to the two measured quantities, (S + B) and (B), respectively. The two standard deviations σ_{S+B} and σ_B are obtained from the Poisson distribution, which gives a standard deviation of √m. Thus we have σ_{S+B} = √(S + B) and σ_B = √B, where we approximate the mean values m with the measured values. Thus (14) becomes

$$\sigma_S^2 = S + B + B = S + 2B \tag{6.15}$$

If σ_S is much smaller than S, the measurement is of high quality, and vice versa. If S were 3 times σ_S, the result would be called a "3σ result". If the result is less significant than 3σ, one should question whether the source was detected at all. This significance can be described with the ratio S/σ_S.
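These relations reduce to a small helper function; the counts below are invented for illustration:

```python
import math

def significance(on_counts: float, off_counts: float) -> float:
    """Detection significance S / sigma_S for equal on- and off-source
    exposures, following eqs. (13)-(15)."""
    S = on_counts - off_counts                   # eq. (13)
    sigma_S = math.sqrt(on_counts + off_counts)  # = sqrt(S + 2B), eq. (15)
    return S / sigma_S

print(significance(1300, 1000))  # ~6.3 sigma: a solid detection
print(significance(1060, 1000))  # ~1.3 sigma: detection questionable
```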
