$$\hat{p}_i = \sum_{k=-N}^{N} a_k\, p_{i+k} \qquad (9\text{-}1)$$

Choice of the range, $N$, and weights, $a_k$, permits the selective attenuation of high-frequency components in the data and significantly alters the statistical characteristics of the data.* Removal of high-frequency noise highlights the actual frequency characteristics of the data. The digital filter, Eq. (9-1), is derived from analog filters used for electronic signal processing.
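As an illustration, the filter of Eq. (9-1) takes only a few lines to implement. The sketch below is a minimal Python example, assuming uniform sample spacing and weights $a_k$ that sum to one; the end points are handled by holding the nearest fully smoothed value, one of the options noted in the footnote. The function name is illustrative.

```python
import numpy as np

def smooth(p, a):
    """Symmetric digital filter: p_hat[i] = sum over k=-N..N of a[N+k]*p[i+k].

    p -- 1-D array of measurements
    a -- 2N+1 filter weights (a[N] is the central weight)
    """
    N = (len(a) - 1) // 2
    p = np.asarray(p, dtype=float)
    # Correlation of p with the weights; 'same' keeps the original length.
    p_hat = np.convolve(p, np.asarray(a)[::-1], mode="same")
    # End points (fewer than N neighbors) are treated separately: hold the
    # nearest fully smoothed value, as suggested in the footnote.
    if N > 0:
        p_hat[:N] = p_hat[N]
        p_hat[-N:] = p_hat[-N - 1]
    return p_hat

# Uniform weights give a (2N+1)-point moving average; here N = 2.
print(smooth([1.0, 2.0, 1.5, 3.0, 2.5, 2.0, 1.0, 0.5], np.full(5, 0.2)))
```

Nonuniform weights (e.g., triangular or Gaussian tapers) attenuate high frequencies more gradually than the uniform moving average shown here.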

Curve fitting is a technique which assumes a functional form for the data over a time interval and computes a set of coefficients which represent the data over the given interval. The function selected to fit the data may be (1) a linear combination of orthogonal polynomials, or (2) a nonlinear combination of functions chosen to represent the probable characteristics of the data. Curve-fitting techniques treat the data asymmetrically because end points tend to influence the coefficients less than midrange points; thus, they are most appropriate for batch processing (the end points are the most important for recursive or real-time processing). Data represented by coefficients are difficult to treat statistically and should not be used for many types of subsequent processing. Curve fitting is most frequently used for data display and interpolation.

Sifting is a technique which subdivides an interval into discrete bins and replaces the data in each bin, independent of other bins, with a randomly or systematically selected data point within the bin. Preaveraging is similar to sifting except that the data within the bin are replaced with the arithmetic mean. Either sifting or preaveraging must be combined with another method for data validation, such as a comparison with data from adjacent bins. Sifting and preaveraging are most appropriate for reducing the quantity of data to be processed. They are the preferred choices for preprocessing data which are subsequently used in a differential corrector or any algorithm that depends on the statistical characteristics of the data. Note that any data preprocessing method alters or destroys some statistical properties inherent in the measurement, including the systematic process employed by the spacecraft to insert sensor data into the telemetry stream. The advantage of data sifting (and, to a lesser extent, preaveraging) is that it is a less destructive method of preprocessing, and sifted data more closely conform to the requirements of the attitude determination algorithms described in Chapter 13.
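The sketch below illustrates both techniques in Python, assuming uniformly spaced time bins; the bin width and the random selection rule are illustrative choices rather than values prescribed by the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def bin_indices(t, bin_width):
    """Assign each sample to a time bin of the given width."""
    return ((t - t[0]) // bin_width).astype(int)

def preaverage(t, y, bin_width):
    """Replace the data in each bin with the arithmetic mean."""
    b = bin_indices(t, bin_width)
    labels = np.unique(b)
    t_out = np.array([t[b == k].mean() for k in labels])
    y_out = np.array([y[b == k].mean() for k in labels])
    return t_out, y_out

def sift(t, y, bin_width):
    """Replace the data in each bin with one randomly selected raw point."""
    b = bin_indices(t, bin_width)
    idx = np.array([rng.choice(np.flatnonzero(b == k)) for k in np.unique(b)])
    return t[idx], y[idx]

t = np.linspace(0.0, 10.0, 200, endpoint=False)
y = np.sin(t) + 0.1 * rng.standard_normal(t.size)
t_avg, y_avg = preaverage(t, y, bin_width=1.0)  # 200 points -> 10 bin means
t_sel, y_sel = sift(t, y, bin_width=1.0)        # 200 points -> 10 raw points
```

Because sifting retains actual measurements, the selected points keep the noise statistics of the raw data, which is why sifted data conform more closely to the requirements of the estimation algorithms of Chapter 13.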

The guidelines presented above should not replace the careful consideration of the processing requirements and the characteristics of each data type before selecting a preprocessing method. The most important data characteristics are the implicit or explicit frequencies. Smoothing is most useful for measurement frequencies, $\omega_m$, such that $\omega_m = 2\pi/\Delta t \gg \omega_s$, where $\Delta t$ is the interval between telemetered data points

*End points, e.g., $p_i$ for $i < N+1$, must be treated separately. One approach is to assume $\hat{p}_i = \hat{p}_{N+1}$ for $i < N+1$.

and $\omega_s$ is any real frequency associated with the data which is to be retained. Dominant low frequencies of interest are related to the orbital rate, which is $\omega_o \approx 2\pi/100\ \mathrm{minutes} \approx 10^{-3}$ rad per sec for near-Earth spacecraft. The orbital rate affects the thermal profile and solar and aerodynamic torques. The dominant gravity-gradient frequency for a pencil-shaped spacecraft is $2\omega_o$, and for a polar orbit the magnetic torque frequency is approximately $2\omega_o$. High frequencies of interest are related to the spin period, onboard control, flexible components, and rastering instruments and are typically 0.1 to 50 rad per sec, which is also the frequency range of telemetered data. Thus, telemetry data rates are often a limiting factor in the extraction of high-frequency information.
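For reference, the quoted orbital rate follows directly from a typical 100-minute near-Earth period:

$$\omega_o = \frac{2\pi}{100 \times 60\ \mathrm{s}} \approx 1.05 \times 10^{-3}\ \mathrm{rad/sec}$$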

To summarize, the tradeoff between preprocessing sensor data before attitude determination and postprocessing computed attitudes must be established for each spacecraft. In general, it is better to preprocess only for the purpose of data validation and to postprocess to reduce random (or high-frequency) noise, primarily because attitudes have a simpler time dependence than sensor data and preprocessing may destroy important statistical properties used in some attitude determination algorithms. For postprocessing, curve fitting may use low-order polynomials or well-established functional forms.

Curve Fitting. Curve-fitting techniques require a data model which may be either purely phenomenological, such as a linear combination of orthogonal functions, or a nonlinear function chosen to approximate the assumed dynamic characteristics of the data. Fitting techniques, as described in Section 13.4, may be either sequential or batch. A sequential method (see subroutine RECUR in Section 20.3) has been used successfully on the AE mission to postprocess computed nadir angles with a nonlinear model of the following form [Grell, 1976]:

$$y(t) = A_1 \sin(\omega_1 t + \phi_1) + A_2 \sin(\omega_2 t + \phi_2) \qquad (9\text{-}2)$$

The state parameters, $A_1$, $A_2$, $\omega_1$, $\omega_2$, $\phi_1$, and $\phi_2$, are updated sequentially with the covariance matrix controlled to track or smooth the measurements to allow for large model deficiencies. Curve fitting was used on AE to validate computed nadir angles and extract approximate nutation and coning frequencies, phases, and amplitudes. Nonlinear models, such as Eq. (9-2), generally require special techniques to obtain an initial estimate of the model parameters. For AE, a frequency analysis based on a fast Fourier transform [Gold, 1969] was used to obtain $\omega_1$ and $\omega_2$.
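A short Python sketch of such an FFT-based frequency analysis is given below. The sample spacing, amplitudes, frequencies, and noise level are illustrative rather than AE values, and NumPy's FFT stands in for the algorithm of Gold [1969]; the two strongest spectral peaks provide starting values for $\omega_1$ and $\omega_2$ in Eq. (9-2).

```python
import numpy as np

# Synthetic "nadir angle" data with two sinusoids (e.g., nutation + coning).
dt = 0.5                                   # sample spacing, sec (illustrative)
t = np.arange(0.0, 512 * dt, dt)
rng = np.random.default_rng(1)
y = 2.0 * np.sin(0.8 * t + 0.3) + 0.5 * np.sin(2.1 * t + 1.0)
y += 0.1 * rng.standard_normal(t.size)

# Windowed FFT; the Hann window reduces spectral leakage between the peaks.
Y = np.abs(np.fft.rfft(y * np.hanning(t.size)))
freqs = 2.0 * np.pi * np.fft.rfftfreq(t.size, d=dt)   # rad/sec

# Initial estimates of omega_1, omega_2: the two strongest local maxima.
interior = np.flatnonzero((Y[1:-1] > Y[:-2]) & (Y[1:-1] > Y[2:])) + 1
top_two = interior[np.argsort(Y[interior])[-2:]]
print(np.sort(freqs[top_two]))             # approximately [0.8, 2.1] rad/sec
```

The estimates are limited by the FFT bin spacing, $2\pi/(n\,\Delta t)$; they serve only to start the sequential estimator, which then refines the frequencies.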

Linear models are preferred for curve fitting because of the ease of solution. Power series, spherical harmonics, and Chebyshev polynomials are used frequently, although any set of orthogonal polynomials may be used. Care must be taken to ensure that the correct degree of the representation is selected. If $n$ data points are to be fitted with a representation of degree $r$, clearly $r$ must be less than $n$. However, if $r$ is either too small or too large for a given $n$, a poor compromise between minimizing truncation error and reducing random noise will be obtained.

One procedure to select the degree of the representation automatically is to monitor the goodness-of-fit or chi-squared function,

$$\chi^2(r) = \sum_{i=1}^{n} \frac{1}{\sigma_i^2}\left[f_i - \sum_{k=0}^{r} C_k\, g_k(t_i)\right]^2 \qquad (9\text{-}3)$$

where $g_k(t_i)$ is the $k$th basis polynomial evaluated at the $i$th value of the independent variable, $f_i$ is the measured data, $\sigma_i$ is the standard deviation of $f_i$, and the parameters $C_k$ are selected by a linear least-squares algorithm to minimize $\chi^2(r)$. $\chi^2(r)$ decreases rapidly with increasing degree. The degree may be chosen to be the lowest such that either absolute or relative convergence is obtained; i.e.,

$$\chi^2(r) < \epsilon_1 \approx 10 \qquad \text{or} \qquad \left|\frac{\chi^2(r) - \chi^2(r-1)}{\chi^2(r)}\right| < \epsilon_2 \approx 0.1$$

Assuming the model is adequate, $\chi^2$ should range from 1 to 10 for a correct $r$; $\chi^2 < 1$ is indicative of too high a degree, $r$, or an overestimate of the standard deviations, $\sigma_i$.
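A minimal Python sketch of this degree-selection loop is shown below, using an ordinary power-series basis and NumPy's weighted least squares; the thresholds follow the $\epsilon_1 \approx 10$ and $\epsilon_2 \approx 0.1$ values quoted above, and the function name is illustrative.

```python
import numpy as np
from numpy.polynomial import polynomial as P

def select_degree(t, f, sigma, r_max=10, eps1=10.0, eps2=0.1):
    """Increase the fit degree r until chi-squared converges.

    NumPy's polyfit weights multiply the residuals, so passing 1/sigma
    makes it minimize chi2 = sum((f_i - fit_i)**2 / sigma_i**2).
    """
    chi2_prev = None
    for r in range(1, r_max + 1):
        C = P.polyfit(t, f, deg=r, w=1.0 / sigma)
        chi2 = np.sum(((f - P.polyval(t, C)) / sigma) ** 2)
        if chi2 < eps1:                      # absolute convergence
            return r, C
        if chi2_prev is not None and abs((chi2 - chi2_prev) / chi2) < eps2:
            return r, C                      # relative convergence
        chi2_prev = chi2
    return r_max, C
```

An orthogonal basis such as Chebyshev polynomials (next paragraph) gives better-conditioned normal equations than the power series used here, but the selection logic is the same.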

As an example of curve fitting, we consider the use of Chebyshev polynomials. We wish to smooth the data, $f_i$, measured at discrete times, $t_i$. Let $g_k(x)$ be a sequence of orthogonal polynomials defined on $x \in [-1, 1]$. As described in Section 13.4, the problem is to determine the coefficients, $C_k$, to minimize the quantity

$$\sum_{i=1}^{n} w_i \left[f_i - \sum_{k=0}^{r} C_k\, g_k(x_i)\right]^2 \qquad (9\text{-}4)$$

where

$$x_i = \frac{2t_i - (t_{\max} + t_{\min})}{t_{\max} - t_{\min}} \qquad (9\text{-}5)$$

and where the weight of the $i$th measurement is $w_i \equiv 1/\sigma_i^2$ and $t_{\max}$ and $t_{\min}$ are the maximum and minimum values of $t_i$, the independent variable. The mapping function, Eq. (9-5), limits the range of the independent variable to that permitted for the orthogonal polynomials, which satisfy the relation

$$\int_{-1}^{1} \frac{g_j(x)\, g_k(x)}{\sqrt{1-x^2}}\, dx = 0, \qquad j \neq k$$
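In practice, the mapping and the weighted fit can be carried out directly with NumPy's Chebyshev routines, as in the minimal sketch below; chebfit minimizes the weighted residuals of Eq. (9-4) when given weights $1/\sigma_i$, and the data and variable names are illustrative.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def chebyshev_smooth(t, f, sigma, degree):
    """Weighted Chebyshev fit with t mapped onto [-1, 1] by Eq. (9-5)."""
    x = (2.0 * t - (t.max() + t.min())) / (t.max() - t.min())  # Eq. (9-5)
    coef = C.chebfit(x, f, deg=degree, w=1.0 / sigma)          # the C_k
    return coef, C.chebval(x, coef)                            # fit at the t_i

t = np.linspace(100.0, 200.0, 50)            # arbitrary measurement times
rng = np.random.default_rng(2)
f = 5.0 + 0.02 * (t - 150.0) ** 2 + rng.normal(0.0, 0.5, t.size)
coef, f_smooth = chebyshev_smooth(t, f, sigma=np.full(t.size, 0.5), degree=2)
```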
