## Significance

$$\text{Significance} \equiv \frac{S}{\sigma_s} = \frac{S}{\sqrt{S + 2B}} \qquad \text{(Number of standard deviations)} \tag{6.16}$$

where S and B are the accumulated source and background counts in some fixed time. This expression is also known as the signal-to-noise (S/N) ratio.
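As a numeric sketch, this significance can be computed directly; here $\sigma_s = \sqrt{S + 2B}$ follows from adding the Poisson errors of the on-source ($S + B$) and off-source ($B$) counts in quadrature. The counts below are hypothetical:

```python
import math

def significance(S, B):
    """Significance (S/N) of S accumulated source counts against
    B background counts: S / sqrt(S + 2B)."""
    return S / math.sqrt(S + 2 * B)

# e.g. 100 net source counts on a background of 50 counts:
print(round(significance(100, 50), 2))  # 7.07
```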

The intensity of a source is best represented by the source event rate r_s (counts/s), which is independent of the data accumulation time. In our case, with equal on-source and off-source accumulation times Δt,

$$r_s = \frac{S}{\Delta t} \qquad \text{(Source count rate)} \tag{6.17}$$

According to (12), division of S by the precisely known Δt does not change the fractional error from its initial value σ_s/S. We therefore divide σ_s by Δt to maintain this ratio for the standard deviation of the derived source rate, σ_rs = σ_s/Δt. If Δt is not precisely known, one would have to factor its uncertainty into σ_rs.
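As a check of the rate and its error, a short sketch (the counts and exposure are hypothetical values):

```python
import math

def source_rate(S, B, dt):
    """Source rate r_s = S/dt and its standard deviation
    sigma_rs = sigma_s/dt, with sigma_s = sqrt(S + 2B)."""
    sigma_s = math.sqrt(S + 2 * B)
    return S / dt, sigma_s / dt

# Hypothetical: 400 net source counts over a 100-count background,
# each accumulated for 1000 s:
rate, err = source_rate(400, 100, 1000.0)
print(rate, round(err, 4))  # 0.4 0.0245
```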

If the on-source counts S + B and the background counts B were accumulated over different time intervals, which is often the case, one could follow our logic leading to (16) to find an expression for S/N in terms of the rates r_s and r_b (counts/s) and the accumulation times t_{s+b} and t_b.

### Low and high background limits

Two limiting cases of (16) may be given, one wherein the background counts are much less than the source counts, and vice versa. We assume again that the accumulation time Δt is the same for both on-source and off-source measurements.

The low-background (B ≪ S) case gives

$$\frac{S}{\sigma_s} \approx \frac{S}{\sqrt{S}} = \sqrt{S} = \sqrt{r_s \Delta t} \qquad \text{(Background negligible; } B \ll S\text{)} \tag{6.18}$$

which shows that the significance increases as the square root of the number of counts. We also express the counts in terms of the rate and the duration of the measurement and find that the significance increases as (Δt)^{1/2}. If one has a 2σ result that looks tantalizing, but is hardly convincing at only 2σ, one could take more data. To increase the significance to 5σ, one would have to increase the duration Δt by a factor of (5/2)² = 6.25.
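The (Δt)^{1/2} scaling implies a simple rule for planning longer exposures; a minimal sketch with hypothetical significance values:

```python
def time_factor(sig_now, sig_target):
    """Since significance grows as sqrt(dt), reaching sig_target from
    sig_now requires lengthening the exposure by (sig_target/sig_now)**2."""
    return (sig_target / sig_now) ** 2

# Going from a 2-sigma hint to a 5-sigma detection:
print(time_factor(2, 5))  # 6.25
```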

The significance for the high-background case (B ≫ S) is, again from (16),

$$\frac{S}{\sigma_s} \approx \frac{S}{\sqrt{2B}} = \frac{r_s \Delta t}{\sqrt{2 r_b \Delta t}} = r_s \sqrt{\frac{\Delta t}{2 r_b}} \qquad (B \gg S) \tag{6.19}$$

The background counts are also expressed in terms of the background rate r_b and the duration of the measurement Δt. Again the significance increases as the square root of the duration of the observation. It can take a lot of observing time to increase the significance by a substantial amount.
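Substituting S = r_s Δt and B = r_b Δt into S/√(2B) gives the background-dominated significance; a sketch with assumed rates:

```python
import math

def significance_high_bg(r_s, r_b, dt):
    """Background-dominated significance: S / sqrt(2B) with
    S = r_s * dt and B = r_b * dt, i.e. r_s * sqrt(dt / (2 * r_b))."""
    return r_s * math.sqrt(dt / (2 * r_b))

# Hypothetical rates: doubling the exposure gains only sqrt(2):
print(round(significance_high_bg(1.0, 100.0, 3600.0), 2))  # 4.24
```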

Let us compare the S/N ratios of our two hypothetical detectors, one of high background and the other of low background, but otherwise similar. Expose both to a source yielding the same signal rate r_s. As above, in (19), r_b is the background rate of the high-B detector, so that by definition r_b ≫ r_s. The background rate of the low-B detector does not enter our approximations; see (18). From (18) and (19),

$$\frac{(S/\sigma_s)_{B \gg S}}{(S/\sigma_s)_{B \ll S}} = \sqrt{\frac{r_s}{2 r_b}} \qquad \text{(Comparative sensitivities)} \tag{6.20}$$

Since r_s ≪ r_b, the expression tells us that the significance is much less in the high-background case than in the low-background case, again for similar exposures. This agrees with one's common sense: a source should be detected with higher significance in the absence of background counts.

Bright and faint source observations Focusing instruments are essentially low-background systems. The detection of only 3 x-ray photons during an observation in one resolution element of the focal plane can be highly significant because the background in any given resolution element is so low. If the expected background in that element is only 0.1 counts during the observation, the probability of this background giving rise to the 3 x rays in that element is only 1.5 × 10⁻⁴ according to the Poisson formula (1). Thus the 3 x rays would be a highly significant detection. Such instruments are the only way to detect the distant faint sources in the cosmos. This discussion assumes that only this one resolution element is of interest, say because a known source in another frequency band is located there.
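The quoted probability can be checked by summing the Poisson tail; a minimal sketch, with the expected background of 0.1 counts taken from the text:

```python
import math

def poisson_prob_at_least(k, mu):
    """P(N >= k) for a Poisson distribution with mean mu."""
    return 1.0 - sum(math.exp(-mu) * mu**n / math.factorial(n)
                     for n in range(k))

# Chance that an expected background of 0.1 counts produces
# 3 or more events in the resolution element (~1.5e-4):
print(poisson_prob_at_least(3, 0.1))
```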

How does the significance of a detection in a given time depend on source intensity, i.e., on the rate r_s? When the source intensity dominates the background, as in focusing instruments, the statistical noise arises from the source itself. When the source intensity increases, so does the statistical noise. Thus the significance in the weak-background case grows rather slowly with source intensity according to (18), specifically as √r_s. When the background dominates, the significance (19) scales linearly as r_s. A source with twice the intensity (rate) yields a measurement with twice the significance (in the same time) because the statistical noise depends only on the background rate, which does not change as r_s increases.
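The two scalings can be seen side by side; a sketch in which the exposure dt and background rate r_b are assumed values, not taken from the text:

```python
import math

dt, r_b = 1000.0, 50.0  # hypothetical exposure (s) and background rate

def sig_low_bg(r_s):
    """Background-negligible case, (18): grows as sqrt(r_s)."""
    return math.sqrt(r_s * dt)

def sig_high_bg(r_s):
    """Background-dominated case, (19): grows linearly in r_s."""
    return r_s * math.sqrt(dt / (2 * r_b))

for r_s in (1.0, 2.0, 4.0):
    print(r_s, round(sig_low_bg(r_s), 1), round(sig_high_bg(r_s), 1))
```

Quadrupling r_s doubles the low-background significance but quadruples the high-background one.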

This comparison is illustrated in (20), which is valid as long as the source in the high-background detector is still weaker than the background in that detector (r_s ≲ r_b). As r_s/r_b increases, but while it is still well below unity, the sensitivity of the high-background detector moves toward the sensitivity of the low-background detector; the advantage of the low-background detector decreases as the source brightens. When finally the source becomes so bright in the high-background detector that it exceeds its high background, the weak-background limit (18) applies to both detectors. Then the sensitivities of both detectors become identical, given that they are similar in all other respects.

In reality, such detectors are not similar. Other factors such as different effective areas at different energies control the relative sensitivities. For example, non-focusing (high-background) x-ray detectors, such as proportional counters with mechanical collimators, can be constructed to have a large effective area at energies up to 60 keV and higher. In contrast, focusing (low background) systems reach only to ~8 keV, and the practical effective areas are typically much less, especially at the higher x-ray energies. Thus, for bright sources, a high-background large-area system can yield a higher significance (S/N) in a given time than can a low-background system.

The Rossi X-ray Timing Explorer makes use of large collecting areas to measure the temporal variability of bright x-ray sources on time intervals of a millisecond or less, thus probing the motions of matter in the near vicinity of neutron stars and black holes. This advantage in timing accuracy comes at the cost of not having the high angular and spectral resolution that are possible with focusing x-ray missions.

### Comparison to theory

The result of the data processing above is that one ends up with a series of numbers, each of which has its own uncertainty, y_i ± σ_i. This uncertainty σ_i is generally not the square root of y_i, which would be the case only if the process obeyed Poisson statistics and if the numbers were not further processed, for example by division by the exposure time to get a rate.

Finding parameters and checking hypotheses Often one will want to compare the data to some theoretical expectation in order to (i) derive some parameter or parameters in the theory or (ii) check whether the data are consistent with the theory (or hypothesis). For example, one might measure how many cars come down the street every hour in order to find the average rate, but also to test whether the rate is indeed constant.

Suppose we make two measurements, and count 20 cars the first hour and 30 the second. We could conclude that the average rate was 25 cars per hour. What is the error on this result? Our raw unprocessed numbers obey Poisson statistics, so the errors are the square roots, N_1 = 20 ± 4.47 and N_2 = 30 ± 5.48. To obtain the average we added the two numbers and divided by 2. Propagating the errors according to (11) and dividing by 2, we find the average rate to be 25 ± 3.54. This is our best estimate of the true value. The uncertainty is the standard deviation of the average. It is somewhat uncertain because it is derived from (11), which is based on a normal distribution. Note that there is a significant probability that the true value could be 2 standard deviations (or more) from 25, namely as high as 32.1, or even 35.6 at 3 standard deviations, and correspondingly as low as 17.9 or 14.4 on the low side.
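The arithmetic of this example can be reproduced directly; a minimal sketch of the error propagation:

```python
import math

n1, n2 = 20, 30                        # cars counted in each hour
s1, s2 = math.sqrt(n1), math.sqrt(n2)  # Poisson errors: 4.47 and 5.48

avg = (n1 + n2) / 2
avg_err = math.sqrt(s1**2 + s2**2) / 2  # add in quadrature, then /2
print(avg, round(avg_err, 2))  # 25.0 3.54
```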

Can we also argue that the rate in this experiment increased from one hour to the next? It depends on the sizes of the error bars. To evaluate this, calculate the difference of the two numbers, which is expected to be zero if the underlying rate did not change. The difference and its calculated uncertainty, from (11), are N_2 − N_1 = 10 ± 7.07. Is this result consistent with zero? Recalling that successive measurements can fluctuate by more than one standard deviation, it is actually quite consistent with zero. The measured value is only 10/7.07 = 1.4 standard deviations from zero.

What is the probability that a true value of zero could fluctuate to a value as high as 10 or as low as −10? From Table 2, the probability of a 1.4 standard deviation fluctuation is 16%; one in six sets of measurements would show such a fluctuation. Our data thus cannot exclude the constant-rate hypothesis. If we had found a probability significantly less than 1%, we would have seriously questioned the constant-rate hypothesis and concluded with some confidence that the rate actually increased.
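The 16% figure can be reproduced from the two-sided Gaussian tail probability; a sketch using the complementary error function:

```python
import math

def two_sided_prob(n_sigma):
    """Probability that a Gaussian variable deviates by more than
    n_sigma standard deviations in either direction."""
    return math.erfc(n_sigma / math.sqrt(2))

# A 1.4-sigma fluctuation in either direction:
print(round(two_sided_prob(1.4), 2))  # 0.16
```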

With more data points and with more complicated theoretical functions, one can ask the same questions: what are the parameters of the theory that best fit the data, and are the data consistent with the theory? There are formal ways to address these.


Figure 6.9. Least squares fits. The solid line is a by-eye fit to the data points in an effort to minimize χ² given in (21), namely the sum of the squares of the deviations, the latter being in units of the standard deviations. The dashed lines are fits that would have larger values of χ² and thus are less good, or terrible, fits.

For the former we introduce the least squares fit and for the latter the closely related chi square test.

### Least squares fit

Comparison of data to theory can be carried out with a procedure known as the least squares fit. Consider the data points and theoretical curves in Fig. 9. Each data point is taken at some value x_i and has a value and uncertainty y_i ± σ_i, indicated with vertical error bars. At each position x_i, calculate the deviation y_{obs,i} − y_{th,i} of the observed data point y_{obs,i} from the value y_{th,i} of a theoretical curve at that point. Then write this deviation in units of the standard deviation σ_i of that data point and square the result. This yields a value that is always positive regardless of whether the deviation is positive or negative. Then sum over all data points to obtain the quantity called chi squared, χ²,

$$\chi^2 \equiv \sum_{i=1}^{N} \left( \frac{y_{\text{obs},i} - y_{\text{th},i}}{\sigma_i} \right)^2 \qquad \text{(chi square definition)} \tag{6.21}$$
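The chi-squared sum translates directly into code; a minimal sketch with hypothetical data points and model values:

```python
def chi_squared(y_obs, y_th, sigma):
    """Chi squared: sum of squared deviations, each measured in
    units of that data point's standard deviation."""
    return sum(((o - t) / s) ** 2
               for o, t, s in zip(y_obs, y_th, sigma))

# Three hypothetical points compared to a theoretical curve:
y_obs = [1.2, 2.1, 2.9]
y_th  = [1.0, 2.0, 3.0]
sigma = [0.2, 0.2, 0.2]
print(round(chi_squared(y_obs, y_th, sigma), 2))  # 1.5
```

A good fit yields a χ² comparable to the number of data points minus the number of fitted parameters.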
