problem because they are easily recognized by elementary tests, so that the affected data may be removed at an early stage. In contrast to transmission problems, operator errors are frequently the most difficult to recognize because they do not occur in any regular pattern, and normally no indication exists within the data stream itself as to which data were altered manually at some stage of the data transmission process.

The detectability of hardware or software malfunctions depends on the type of malfunction. The best method for identifying subtle malfunctions (i.e., biases which shift output values by a small amount) is the use of independent, redundant attitude hardware and processing techniques. Non-nominal operating conditions may also produce subtle errors that are difficult to detect. For example, spacecraft in synchronous orbits may have Earth horizon sensors which have been thoroughly analyzed and tested for normal mission conditions, but which are essentially untested for conditions arising during attitude maneuvers or transfer from low Earth orbit to synchronous altitude. (See, for example, the "pagoda effect" described in Section 9.4.) Each of these possible sources of bad data should be considered in preparation for mission support.
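The redundant-hardware approach above can be sketched as a simple consistency check between two independent measurements of the same quantity; a persistent residual suggests a bias in one sensor. This is only an illustrative sketch: the function name, sensor readings, and the 0.5-degree tolerance are assumptions, not values from the text.

```python
def cross_check(primary_deg, redundant_deg, tolerance_deg=0.5):
    """Compare two independent attitude measurements of the same angle.

    Returns (consistent, residual): consistent is False when the two
    sensors disagree by more than the tolerance, which may indicate a
    subtle bias in one of them.
    """
    residual = abs(primary_deg - redundant_deg)
    return residual <= tolerance_deg, residual

# A small constant bias in one sensor appears as a persistent residual
# between the two measurement streams.
ok, residual = cross_check(41.2, 41.9)
```

In practice such a check would be applied frame by frame, and a bias would be distinguished from random noise by the residual remaining on one side of zero over many frames.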

9.1 Validation of Discrete Telemetry Data

Validation of discrete telemetry data consists of checking individual data items. The two principal methods of validation are (1) checking the actual value of data items, such as quality flags and sensor identification numbers, to determine if associated data are valid, and (2) checking that values of selected data items fall within specified limits.
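The two checks can be illustrated with a short sketch that validates one telemetry frame. The field names, flag values, sensor identification numbers, and limits here are illustrative assumptions, not part of any particular telemetry format.

```python
VALID_SENSOR_IDS = {1, 2, 3}        # assumed set of known sensor ID numbers
SUN_ANGLE_LIMITS = (0.0, 180.0)     # assumed physically meaningful range, deg

def validate_frame(frame):
    """Return a list of validation failures for one telemetry frame."""
    errors = []
    # (1) Check actual values of discrete data items.
    if frame.get("quality_flag") != 0:
        errors.append("quality flag indicates bad data")
    if frame.get("sensor_id") not in VALID_SENSOR_IDS:
        errors.append("unrecognized sensor identification number")
    # (2) Check that selected data items fall within specified limits.
    lo, hi = SUN_ANGLE_LIMITS
    if not (lo <= frame.get("sun_angle", lo) <= hi):
        errors.append("sun angle outside specified limits")
    return errors

good = {"quality_flag": 0, "sensor_id": 2, "sun_angle": 35.0}
bad = {"quality_flag": 1, "sensor_id": 9, "sun_angle": 250.0}
```

A frame that fails either kind of check would be flagged or removed before attitude processing, as described above.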

In describing errors in raw telemetry, it is pertinent to distinguish between systematic and random errors. Systematic errors, or those which occur over a non-negligible segment of telemetry data, often are more troublesome to detect and
