
Substitution of Eq. (12-57) into (12-55) and considerable matrix algebra [Keat, 1977] yields the following convenient form for the modified loss function:

J'(q) = qᵀKq    (12-60)

where the (4×4) matrix K is

K = [ S − σI   Z ]
    [   Zᵀ    σ ]    (12-61)

and the intermediate (3×3) matrices B and S, the vector Z, and the scalar σ are given by

B ≡ Σᵢ Wᵢ Vᵢᵀ,    S ≡ B + Bᵀ,
Z ≡ (B₂₃ − B₃₂, B₃₁ − B₁₃, B₁₂ − B₂₁)ᵀ,    σ ≡ tr B    (12-62)
The extrema of J', subject to the normalization constraint qᵀq = 1, can be found by the method of Lagrange multipliers [Hildebrand, 1964]. We define a new function

g(q) = qᵀKq − λqᵀq    (12-63)

where λ is the Lagrange multiplier. g(q) is maximized without constraint, and λ is chosen to satisfy the normalization constraint. Differentiating Eq. (12-63) with respect to qᵀ and setting the result equal to zero, we obtain the eigenvector equation (see Appendix C)

Kq = λq    (12-64)
Thus, the quaternion which parameterizes the optimal attitude matrix, in the sense of Eq. (12-52), is an eigenvector of K. Substitution of Eq. (12-64) into (12-60) gives

J' = qᵀKq = λqᵀq = λ    (12-65)
Hence, J' is a maximum if the eigenvector corresponding to the largest eigenvalue is chosen. It can be shown that if at least two of the vectors Wᵢ are not collinear, the eigenvalues of K are distinct [Keat, 1977] and therefore this procedure yields an unambiguous quaternion or, equivalently, three-axis attitude. Any convenient means, e.g., use of the subroutine EIGRS [IMSL, 1975], may be used to find the eigenvectors of K.
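As a sketch of the procedure above in Python (the function and variable names are ours; numpy's symmetric eigensolver stands in for EIGRS, and the observation weights aᵢ are carried explicitly rather than folded into the vectors):

```python
import numpy as np

def davenport_q_method(W, V, a):
    """Sketch of the q-method: W and V are lists of unit observation and
    reference vectors, a their weights.  Returns the quaternion
    (q1, q2, q3, q4), scalar part last, maximizing J' = q^T K q."""
    # B = sum_i a_i W_i V_i^T; S, Z, and sigma as in Eq. (12-62)
    B = sum(ai * np.outer(Wi, Vi) for ai, Wi, Vi in zip(a, W, V))
    S = B + B.T
    Z = np.array([B[1, 2] - B[2, 1],
                  B[2, 0] - B[0, 2],
                  B[0, 1] - B[1, 0]])
    sigma = np.trace(B)
    # Assemble the 4x4 matrix K of Eq. (12-61)
    K = np.empty((4, 4))
    K[:3, :3] = S - sigma * np.eye(3)
    K[:3, 3] = Z
    K[3, :3] = Z
    K[3, 3] = sigma
    # K is symmetric; eigh returns eigenvalues in ascending order, so
    # the last column is the eigenvector of the largest eigenvalue.
    _, vecs = np.linalg.eigh(K)
    return vecs[:, -1]
```

Note that the eigenvector is defined only up to sign, so q and −q are both returned as valid (and physically identical) attitudes.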

A major disadvantage of the method is that it requires constructing vector measurements, which is not always possible, and weighting the entire vector. Alternative optimal methods which avoid these disadvantages are described in Chapter 13. Variations on the q-method which avoid the necessity for computing eigenvectors are described by Shuster [1978a, 1978b].

12.3 Covariance Analysis

Gerald M. Lerner

Covariance analysis or the analysis of variance is a general statistical procedure for studying the relationship between errors in measurements and errors in quantities derived from the measurements. In this section, we first discuss covariance analysis for an arbitrary set of variables and then discuss the interpretation of the covariance parameters for three-axis attitude. For a more extended discussion of covariance analysis, see, for example, Bryson and Ho [1969] or Bevington [1969]. For geometrical procedures for analyzing single-axis attitude error, see Section 11.3.

We define the mean, ξ, and variance, v, of the random variable x by

ξ ≡ E(x),    v ≡ E[(x − ξ)²]    (12-66)

where E denotes the expectation value or ensemble average. The variance is simply the mean square deviation, δx = x − ξ, of x from the value x = ξ. The root-mean-square (rms) deviation, or standard deviation, σ, is defined by

σ = √v    (12-67)

The covariance of two variables x₁ and x₂, with means ξ₁ and ξ₂ and standard deviations σ₁ and σ₂, is defined by

C₁₂ ≡ E[(x₁ − ξ₁)(x₂ − ξ₂)]    (12-68)

and is a measure of the interdependence or correlation between the two variables. The correlation coefficient of x₁ and x₂ is the normalized covariance

ρ₁₂ ≡ C₁₂/(σ₁σ₂)    (12-69)

which satisfies the inequality

|ρ₁₂| ≤ 1    (12-70)

For independent variables, C₁₂ = ρ₁₂ = 0, and for totally correlated variables (e.g., x₁ = 7x₂), |ρ₁₂| = 1. Covariance analysis relates the presumably known variance and covariance in one set of variables (e.g., measurement errors) to a second set of variables (e.g., computed attitude errors).
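These definitions are easy to check numerically. The sketch below (sample size and the factor of 7 are arbitrary choices) estimates the covariance and correlation coefficient of a totally correlated pair:

```python
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.normal(size=100_000)
x2 = 7.0 * x1                    # totally correlated, as in the example above

# Sample covariance matrix: C[0, 1] estimates C12
C = np.cov(x1, x2)
# Correlation coefficient: normalized covariance, |rho| <= 1
rho = C[0, 1] / np.sqrt(C[0, 0] * C[1, 1])
```

For this linear relation the estimated ρ₁₂ equals 1 to machine precision; replacing x2 with an independent sample would drive it toward 0.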

We assume that the n computed quantities, xᵢ, are functions of the m measurements, yⱼ, with m ≥ n. Thus,

xᵢ = xᵢ(y₁, y₂, …, y_m),    i = 1, 2, …, n    (12-71)

or, in vector notation, x = x(y). In Chapter 11, geometrical techniques were applied to the special case n = m = 2 and x = (α, δ)ᵀ. Here, we are primarily concerned with the case where n = 3, m ≥ 3, and x consists of three attitude angles; however, other interpretations and higher dimensions are consistent with the formal development. If m > 3, then the problem is overdetermined and the functional form of Eq. (12-71) is not unique. In this case, we assume that a unique function has been chosen.

If the measurement errors, δyⱼ, are sufficiently small and x is differentiable, the error in x may be estimated by using a first-order Taylor series:*

δx ≈ H δy    (12-72)

where H is the n×m matrix of partial derivatives with the elements Hᵢⱼ = ∂xᵢ/∂yⱼ. The expectation value of the outer product of δx with δxᵀ is

E(δx δxᵀ) = E(H δy δyᵀ Hᵀ) = H E(δy δyᵀ) Hᵀ    (12-73)

which may be rewritten in matrix notation as

Pc = H Pm Hᵀ    (12-74)

where the elements of the covariance matrix, Pc, and the measurement covariance matrix, Pm, are defined by

(Pc)ᵢⱼ ≡ E(δxᵢ δxⱼ),    (Pm)ᵢⱼ ≡ E(δyᵢ δyⱼ)    (12-75)
Thus, the diagonal elements of the n×n symmetric covariance matrix, Pc, give the variance of the errors in the computed components of x and the off-diagonal elements give the covariance between the components of x. Similarly, the elements of the m×m matrix Pm give the variance and covariance of the measurement errors in y.

Equation (12-74) provides the link between the (presumably) known variance and correlation in the measurements, and the desired variance and correlation in the computed quantities. Different algorithms for obtaining x from y, when m > n, will, in general, yield different solutions, different partial derivatives and, consequently, a different computed covariance matrix. Thus, an algorithm, x(y), might be chosen to avoid undesirable error correlations.
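A minimal numerical sketch of Eq. (12-74), assuming a hypothetical linear model x = Hy (so that H is exact) with n = 2 computed quantities and m = 3 measurements; the matrices are invented for illustration:

```python
import numpy as np

# Hypothetical n x m matrix of partial derivatives H_ij = dx_i/dy_j
H = np.array([[1.0, 0.5,  0.0],
              [0.0, 1.0, -0.5]])

# Measurement covariance Pm: uncorrelated measurements, sigma = 0.1 each
Pm = (0.1 ** 2) * np.eye(3)

# Eq. (12-74): propagate the measurement covariance to the computed
# quantities.  Diagonal of Pc gives variances; off-diagonal, covariances.
Pc = H @ Pm @ H.T
```

Even though the measurements here are uncorrelated, Pc acquires a nonzero off-diagonal term because both components of x depend on the second measurement; this is the kind of algorithm-induced correlation the paragraph above refers to.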

Equation (12-74) relates the variance and covariance in the measurements to the variance and covariance in the computed quantities without implying anything further about the distributions of the errors in either x or y. However, three specific cases are often used in attitude analysis.

1. If the distribution of errors in y is Gaussian or normal, then the distribution of errors in x is also Gaussian.

*In general, we are not free to pick some appropriately small region about the solution in which the first-order Taylor series is valid. It must be valid over the range of solutions corresponding to the range in measurement errors. Thus, second-order effects, which are ignored here, may become important when using realistic estimates of the measurement errors.

2. If the measurement accuracy is determined by quantization, i.e., buckets or steps which return a single discrete value for the measurement or group of measurements, then the variance in the measurement is S²/12, where S is the step size. If all of the measurements are limited by quantization, then the probability density of the attitude is a step function (i.e., uniform within a particular region and 0 outside that region; see Section 11.3).
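The S²/12 figure can be checked by direct simulation: the quantization error is uniform over one step of width S, and the variance of a uniform distribution of width S is S²/12. A sketch, with an arbitrary step size and sample count:

```python
import numpy as np

rng = np.random.default_rng(1)
S = 0.25                           # step size (arbitrary choice)

# True values, spread over many quantization steps
y = rng.uniform(0.0, 10.0, size=1_000_000)
# Quantized measurement: nearest multiple of the step size S
y_quant = np.round(y / S) * S
# Quantization error is uniform on [-S/2, S/2)
err = y_quant - y

# Sample variance should approach S**2 / 12
quant_var = err.var()
```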

3. If there are a large number of uncorrelated measurements, then the Central Limit Theorem (see, for example, Korn and Korn [1973]) can be used to infer the distribution of errors in x, irrespective of the form of the distribution of the measurement errors. The theorem states that if the m random variables yⱼ are uncorrelated with mean ξⱼ and variance vⱼ, then as m → ∞, the distribution of the sum

x = m⁻¹ Σⱼ₌₁ᵐ yⱼ

is asymptotically Gaussian [Bevington, 1969] with mean

ξ = m⁻¹ Σⱼ₌₁ᵐ ξⱼ

and standard deviation

σ = m⁻¹ (Σⱼ₌₁ᵐ vⱼ)^(1/2)
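A Monte Carlo sketch of the theorem, averaging m uniformly distributed (decidedly non-Gaussian) variables; all parameters here are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)
m = 48                              # number of uncorrelated measurements

# Each y_j is uniform on [0, 1): mean 1/2, variance 1/12 (non-Gaussian).
# Form the average m^-1 * sum_j y_j for many independent trials.
averages = rng.uniform(size=(200_000, m)).mean(axis=1)

# Central Limit Theorem predictions for the average:
expected_mean = 0.5
expected_std = np.sqrt(m / 12.0) / m   # m^-1 * (sum of variances)^(1/2)
```

A histogram of `averages` would already look closely Gaussian at m = 48, even though each individual yⱼ is uniform.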
