
where $\sigma_i$ $(i = 1, 2, \dots, n)$ is the uncertainty in the $i$th observation.

An important variation of the loss function given by Eq. (13-26) penalizes any deviation from the a priori estimate in proportion to the inverse of the uncertainty in that estimate; that is,

$$J = \tfrac{1}{2}\,\Delta\mathbf{y}^T W\, \Delta\mathbf{y} + \tfrac{1}{2}\,[\mathbf{x}^0 - \bar{\mathbf{x}}^0]^T S_0\, [\mathbf{x}^0 - \bar{\mathbf{x}}^0] \quad (13\text{-}29)$$

where $S_0$ is the $(m \times m)$ state weight matrix and $\bar{\mathbf{x}}^0$ is the a priori estimate of the epoch state, $\mathbf{x}^0$. If the elements of $S_0$ are zero, no weight is assigned to the a priori state estimate, and Eq. (13-29) is equivalent to Eq. (13-26). Commonly, $S_0$ has the diagonal form

$$S_0 = \mathrm{diag}\!\left(1/\sigma_{x_1}^2,\; 1/\sigma_{x_2}^2,\; \dots,\; 1/\sigma_{x_m}^2\right)$$

where $\sigma_{x_k}$ $(k = 1, 2, \dots, m)$ is the uncertainty in the a priori estimate, $\bar{\mathbf{x}}^0$. The use of $S_0$ is especially valuable when lack of observability is a problem. This occurs when a change in one or more state parameters causes little change in the observations, i.e., when the observations do not contain enough information to completely specify the state. (The problem of state observability is discussed from a practical point of view in Chapter 14.)
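As a concrete illustration, the following is a minimal sketch of how the loss function of Eq. (13-29) might be evaluated numerically. The observation model `g`, the weight matrices, and all array shapes are assumptions made for this example, not part of the original text.

```python
import numpy as np

def loss(x0, x0_bar, y, g, W, S0):
    """Evaluate the batch loss function of Eq. (13-29).

    x0     : (m,) trial epoch state
    x0_bar : (m,) a priori estimate of the epoch state
    y      : (n,) observation vector
    g      : callable mapping an epoch state to the (n,) modeled observations
    W      : (n, n) observation weight matrix, e.g., diag(1/sigma_i**2)
    S0     : (m, m) state weight matrix, e.g., diag(1/sigma_xk**2)
    """
    dy = y - g(x0)       # observation residuals
    dx = x0 - x0_bar     # deviation from the a priori estimate
    return 0.5 * dy @ W @ dy + 0.5 * dx @ S0 @ dx
```

With $S_0 = 0$, the second term vanishes and the function reduces to the loss of Eq. (13-26), matching the remark above.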

The loss function given in Eq. (13-29) is particularly useful in the later discussion of sequential estimators. Other criteria for goodness of fit are discussed by Hamming [1962].

Locating the Loss Function Minimum. For $J$ to be a minimum with respect to $\mathbf{x}^0$, $\partial J/\partial \mathbf{x}^0$ must be zero. Therefore, the value of $\mathbf{x}^0$ which minimizes $J$ is a root of the equation

$$\frac{\partial J}{\partial \mathbf{x}^0} = -G^T W [\mathbf{y} - \mathbf{g}] - S_0 [\bar{\mathbf{x}}^0 - \mathbf{x}^0] = \mathbf{0} \quad (13\text{-}30)$$

where $G \equiv \partial \mathbf{g}/\partial \mathbf{x}^0$ is the $(n \times m)$ matrix of partial derivatives of the modeled observations with respect to the epoch state.

Values for $\partial g_i/\partial \mathbf{x}$ are normally computed analytically from the observation model. Values for $\partial g_i/\partial \mathbf{x}^0$ are then calculated from

$$\frac{\partial g_i}{\partial \mathbf{x}^0} = \left.\frac{\partial g_i}{\partial \mathbf{x}}\right|_{t_i} D(t_i, t_0) \quad (13\text{-}31)$$

where $D(t_i, t_0)$ is the $(m \times m)$ state transition matrix consisting of the partial derivatives of the state at $t_i$ with respect to the state at the epoch time, $t_0$; that is,

$$D(t_i, t_0) = \frac{\partial \mathbf{x}(t_i)}{\partial \mathbf{x}^0}, \qquad D_{jk} = \frac{\partial x_j(t_i)}{\partial x_k(t_0)}$$

The elements of $D$ may be calculated either numerically or analytically, depending on the functional form of $\mathbf{h}(\mathbf{x}^0, t)$. If $\mathbf{x}$ is assumed to be constant, then $D(t_i, t_0)$ is the identity matrix, and $\partial g_i/\partial \mathbf{x}^0 = \partial g_i/\partial \mathbf{x}$.
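The numerical route can be sketched with central differences, as below. The propagation function `propagate`, standing in for $\mathbf{h}(\mathbf{x}^0, t)$, and the step size are assumptions for this example.

```python
import numpy as np

def transition_matrix(propagate, x0, t, eps=1e-6):
    """Estimate D(t, t0) = d x(t) / d x0 by central differences.

    propagate : callable (x0, t) -> (m,) state at time t; stands in for h(x0, t)
    x0        : (m,) epoch state about which to differentiate
    """
    m = x0.size
    D = np.zeros((m, m))
    for k in range(m):
        dx = np.zeros(m)
        dx[k] = eps
        # Column k is the sensitivity of x(t) to the k'th epoch-state component
        D[:, k] = (propagate(x0 + dx, t) - propagate(x0 - dx, t)) / (2.0 * eps)
    return D

# Chain rule of Eq. (13-31):
#   dg_dx0 = dg_dx_at_ti @ transition_matrix(propagate, x0, t_i)
```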

The most common method of solving Eq. (13-30) is to linearize $\mathbf{g}$ about a reference state vector, $\mathbf{x}_R^0$, and expand each element of $\mathbf{g}$ in a Taylor series about $\mathbf{x}_R^0$. Note that $\mathbf{x}_R^0$ may be different from $\bar{\mathbf{x}}^0$. If higher-order terms are truncated, this yields for each element of $\mathbf{g}$:

$$g_i \approx g_i(\mathbf{x}_R^0) + \left.\frac{\partial g_i}{\partial \mathbf{x}^0}\right|_{\mathbf{x}_R^0} [\mathbf{x}^0 - \mathbf{x}_R^0]$$

In general, each element of $\mathbf{g}$ could be evaluated at a different reference vector. Expressing the above equation in vector form gives

$$\mathbf{g} = \mathbf{g}_R + G_R [\mathbf{x}^0 - \mathbf{x}_R^0] \quad (13\text{-}32)$$

if the same reference vector is used for each element of $\mathbf{g}$. (The possibility of using distinct reference vectors for different elements of $\mathbf{g}$ will be useful in the later development of a sequential least-squares algorithm.) The subscript $R$ signifies evaluation at $\mathbf{x}^0 = \mathbf{x}_R^0$.

Substituting Eq. (13-32) into Eq. (13-30) yields

$$G_R^T W [\mathbf{y} - \mathbf{g}_R - G_R(\mathbf{x}^0 - \mathbf{x}_R^0)] + S_0 [\bar{\mathbf{x}}^0 - \mathbf{x}^0] = \mathbf{0} \quad (13\text{-}33)$$

We now solve this equation for $\mathbf{x}^0$ and denote the result by $\hat{\mathbf{x}}^0$:

$$\hat{\mathbf{x}}^0 = \mathbf{x}_R^0 + [S_0 + G_R^T W G_R]^{-1} [G_R^T W (\mathbf{y} - \mathbf{g}_R) + S_0 (\bar{\mathbf{x}}^0 - \mathbf{x}_R^0)] \quad (13\text{-}34)$$

If $\mathbf{x}_R^0 = \bar{\mathbf{x}}^0$ and $\mathbf{g}$ is a linear function of $\mathbf{x}^0$, then this equation provides the best estimate of $\mathbf{x}^0$. If $\mathbf{g}$ is nonlinear, Eq. (13-34) will not correct $\mathbf{x}_R^0$ exactly unless $\mathbf{x}_R^0$ is already very close to the optimum value.
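A minimal sketch of the single correction step of Eq. (13-34) follows; the names and shapes continue the assumptions of the earlier examples, with `G_ref` the $(n \times m)$ Jacobian of $\mathbf{g}$ at the reference vector.

```python
import numpy as np

def correction_step(x_ref, x0_bar, y, g_ref, G_ref, W, S0):
    """One correction of Eq. (13-34); returns the updated estimate.

    x_ref : (m,) reference vector about which g was linearized
    g_ref : (n,) modeled observations evaluated at x_ref
    G_ref : (n, m) Jacobian dg/dx0 evaluated at x_ref
    """
    normal = S0 + G_ref.T @ W @ G_ref                        # normal matrix
    rhs = G_ref.T @ W @ (y - g_ref) + S0 @ (x0_bar - x_ref)
    # Solve the linear system rather than forming an explicit inverse
    return x_ref + np.linalg.solve(normal, rhs)
```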

If the correction determined from Eq. (13-34) is not small, then an iterative procedure is usually necessary. In this case, $\mathbf{g}$ is first linearized about the a priori estimate, which is then corrected to become $\mathbf{x}_1^0$, as follows:

$$\mathbf{x}_1^0 = \bar{\mathbf{x}}^0 + [S_0 + G_0^T W G_0]^{-1} G_0^T W (\mathbf{y} - \mathbf{g}_0) \quad (13\text{-}35)$$

where $\mathbf{g}_0$ and $G_0$ are evaluated at $\bar{\mathbf{x}}^0$; the term $S_0(\bar{\mathbf{x}}^0 - \mathbf{x}_R^0)$ of Eq. (13-34) vanishes here because $\mathbf{x}_R^0 = \bar{\mathbf{x}}^0$.

The corrected value, $\mathbf{x}_1^0$, then replaces $\bar{\mathbf{x}}^0$ as the reference for the linearization of $\mathbf{g}$ in the next iteration. The $(k+1)$st estimate for $\mathbf{x}^0$ is derived from

$$\mathbf{x}_{k+1}^0 = \mathbf{x}_k^0 + [S_0 + G_k^T W G_k]^{-1} [G_k^T W (\mathbf{y} - \mathbf{g}_k) + S_0 (\bar{\mathbf{x}}^0 - \mathbf{x}_k^0)] \quad (13\text{-}36)$$

These iterations continue until the differential correction (i.e., the difference between $\mathbf{x}_{k+1}^0$ and $\mathbf{x}_k^0$) approaches zero and/or until the loss function no longer decreases. At this point, $\mathbf{x}_{k+1}^0$ has converged to its optimum value. If the estimator fails to converge, a new a priori estimate should be attempted. If this is not successful, improved mathematical modeling, additional data, or higher quality data may be necessary. A block diagram of the batch least-squares algorithm is shown in Fig. 13-2.
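The full iteration of Eq. (13-36), with the two stopping tests just described, might be sketched as follows. The observation model `g`, its Jacobian `G`, the tolerance, and the iteration cap are all assumptions for this example.

```python
import numpy as np

def batch_least_squares(g, G, x0_bar, y, W, S0, max_iter=20, tol=1e-8):
    """Iterate Eq. (13-36), starting from the a priori estimate x0_bar.

    g : callable x -> (n,) modeled observations
    G : callable x -> (n, m) Jacobian dg/dx0
    """
    def loss(x):  # Eq. (13-29)
        dy, dx = y - g(x), x - x0_bar
        return 0.5 * dy @ W @ dy + 0.5 * dx @ S0 @ dx

    x_k = x0_bar.copy()        # the first reference is the a priori estimate
    J_k = loss(x_k)
    for _ in range(max_iter):
        G_k = G(x_k)
        rhs = G_k.T @ W @ (y - g(x_k)) + S0 @ (x0_bar - x_k)
        dx = np.linalg.solve(S0 + G_k.T @ W @ G_k, rhs)  # differential correction
        x_k = x_k + dx
        J_next = loss(x_k)
        # Stop when the correction is negligible or J no longer decreases
        if np.linalg.norm(dx) < tol or J_next >= J_k:
            break
        J_k = J_next
    return x_k
```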

Statistical Information. For a converged solution, several statistical quantities are useful. The $(m \times m)$ error covariance matrix is given by*

$$P \equiv E(\boldsymbol{\epsilon}\boldsymbol{\epsilon}^T) = [S_0 + G^T W G]^{-1} \quad (13\text{-}37)$$

assuming that $E(\boldsymbol{\epsilon}) = \mathbf{0}$, where $\boldsymbol{\epsilon} \equiv \hat{\mathbf{x}}^0 - \mathbf{x}^0$ is the estimation error vector and $E$ denotes the expected value. Provided the estimation process has converged, uncertainties in the estimated state parameters may be calculated from the diagonal elements of $P$ by

$$\sigma_{x_k} = \sqrt{P_{kk}} \quad (13\text{-}38)$$

These uncertainties are realistic error estimates only if the observations are uncorrelated and contain only random errors; it is also assumed that the mathematical models characterizing state propagation and the relationship of the observations to the state are known with sufficient accuracy. As discussed in Chapter 14, these assumptions are seldom fulfilled completely in practice. To account for this problem, Bryson and Ho [1968] recommend modifying the uncertainty to be

$$\sigma_{x_k}^2 = \frac{2 J_0 P_{kk}}{n + m} \quad (13\text{-}39)$$

where $J_0$ is the loss function evaluated at the final estimate of $\mathbf{x}^0$, and $E(J) = \tfrac{1}{2}(n + m)$ is the expected value of $J$ for $n$ observations and $m$ state vector elements (see Eq. (13-29)).
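A short sketch of Eqs. (13-37) through (13-39), continuing the assumed names from the previous examples:

```python
import numpy as np

def covariance_statistics(G_hat, W, S0, J0, n, m):
    """Error covariance (13-37), uncertainties (13-38), and the
    Bryson-Ho scaled uncertainties (13-39).

    G_hat : (n, m) Jacobian dg/dx0 evaluated at the converged estimate
    J0    : loss function value at the converged estimate
    """
    P = np.linalg.inv(S0 + G_hat.T @ W @ G_hat)              # Eq. (13-37)
    sigma = np.sqrt(np.diag(P))                              # Eq. (13-38)
    sigma_scaled = np.sqrt(2.0 * J0 * np.diag(P) / (n + m))  # Eq. (13-39)
    return P, sigma, sigma_scaled
```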

The off-diagonal elements of $P$ represent the interdependence, or correlation, among errors in the state parameters. The correlation coefficient,

$$C_{jk} = \frac{P_{jk}}{\sqrt{P_{jj} P_{kk}}}$$

measures the correlation between the $j$th and $k$th state parameters.

*The properties and physical significance of the error covariance matrix are more fully described in Section 12.3. Eqs. (13-37) and (12-74) are equivalent if $S_0 = 0$ and if the components of the observation vector are uncorrelated so that $W$ is diagonal.

Correlation coefficients range from $-1$ to $+1$; either extreme value indicates that the two parameters are completely dependent, and one may be eliminated from the estimation process.
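For completeness, a short computation of the full correlation matrix from the covariance $P$ of the previous sketch:

```python
import numpy as np

def correlation_matrix(P):
    """C[j, k] = P[j, k] / sqrt(P[j, j] * P[k, k])."""
    s = np.sqrt(np.diag(P))
    return P / np.outer(s, s)
```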

Another useful quantity is the weighted root-mean-square (rms) residual, given by

$$\mathrm{rms} = \left[\frac{\Delta\mathbf{y}^T W \,\Delta\mathbf{y}}{n}\right]^{1/2}$$
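A corresponding sketch, using the $n$-normalized form given above with the residuals evaluated at the converged estimate:

```python
import numpy as np

def weighted_rms(y, g_hat, W):
    """Weighted rms residual: sqrt(dy^T W dy / n)."""
    dy = y - g_hat
    return np.sqrt(dy @ W @ dy / dy.size)
```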
