$$B = \begin{bmatrix} 0 & W \end{bmatrix}$$

where W is upper triangular.

As pointed out by Thornton (1976), a lower triangular factor W may be obtained from Eqs. (5.8.6) through (5.8.12) if the indices i and j are reordered so that j = 1, ..., n and i = j + 1, ..., k. At the conclusion of this algorithm, B has the form

where W is lower triangular.

The previous algorithms yield W as either upper or lower triangular. For the $UDU^T$ factorization we need U and D in order to perform the time update for P.

The following algorithm employs a generalized Gram-Schmidt orthogonalization to preserve numerical accuracy; it was developed by Thornton and Bierman (1975) and also appears in Maybeck (1979). The measurement update for P at the kth stage is given by

$$\tilde Y_{k+1} = \begin{bmatrix} \Phi\, U_k & \Gamma \end{bmatrix}, \qquad \tilde D_{k+1} = \begin{bmatrix} D_k & 0 \\ 0 & Q_k \end{bmatrix}.$$

Then it can be seen that $\tilde Y_{k+1}\,\tilde D_{k+1}\,\tilde Y_{k+1}^T$ satisfies Eq. (5.8.1). We may now apply the following algorithm to obtain $\bar U$ and $\bar D$, the time-updated values of U and D. Let

where each vector $a_i$ is of dimension $n + m$. Again, n is the dimension of the state vector and m is the dimension of the process noise vector.

$$a_j \leftarrow a_j - \bar U_{ji}\, a_i,$$

where $\leftarrow$ denotes replacement; that is, write over the old variable to reduce storage requirements. For the final iteration, $i = 1$ and only $c_1$ and $\bar D_{11}$ are computed.
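As a concrete illustration, the modified weighted Gram-Schmidt reduction described above can be sketched in a few lines of Python. This is a hedged sketch, not the text's exact algorithm: the function name `mwgs_time_update`, the 0-based indexing, and the sample matrices are our own assumptions. The rows of W play the role of the vectors $a_i$, and they are overwritten in place exactly as described above to reduce storage.

```python
def mwgs_time_update(W, Dtil):
    """Weighted Gram-Schmidt reduction (Thornton/Bierman style, sketch).
    W: n x (n+m) matrix whose rows are the vectors a_i.
    Dtil: the n+m diagonal weights.
    Returns unit upper triangular Ubar and a list Dbar such that
    Ubar diag(Dbar) Ubar^T = W diag(Dtil) W^T.
    Rows of W are overwritten, mirroring the in-place storage scheme."""
    n, N = len(W), len(W[0])
    Ubar = [[float(i == j) for j in range(n)] for i in range(n)]
    Dbar = [0.0] * n
    for j in range(n - 1, -1, -1):
        c = [Dtil[k] * W[j][k] for k in range(N)]          # c = D~ a_j
        Dbar[j] = sum(W[j][k] * c[k] for k in range(N))    # weighted norm of a_j
        dvec = [ck / Dbar[j] for ck in c]
        for i in range(j):
            Ubar[i][j] = sum(W[i][k] * dvec[k] for k in range(N))
            # a_i <- a_i - Ubar_ij a_j : write over the old row to save storage
            W[i] = [W[i][k] - Ubar[i][j] * W[j][k] for k in range(N)]
    return Ubar, Dbar

# Usage: a 2-state, 1-process-noise example; both sides of
# Ubar diag(Dbar) Ubar^T = W diag(Dtil) W^T can be compared numerically.
W = [[1.0, 0.5, 0.2],
     [0.0, 1.0, 0.3]]
Dtil = [2.0, 1.0, 0.5]
P_ref = [[sum(W[i][k] * Dtil[k] * W[j][k] for k in range(3))
          for j in range(2)] for i in range(2)]
Ubar, Dbar = mwgs_time_update(W, Dtil)
P_fac = [[sum(Ubar[i][k] * Dbar[k] * Ubar[j][k] for k in range(2))
          for j in range(2)] for i in range(2)]
```

Because the weighted inner products are accumulated rather than differenced, this reduction avoids the loss of positivity that a direct formation of $W \tilde D W^T$ can suffer.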

### 5.9 CONTINUOUS STATE ERROR COVARIANCE PROPAGATION

In this section a method that allows the integration of the continuous state error covariance differential equations in square root form is described. The derivation follows the approach used in Tapley and Choe (1976) and Tapley and Peters (1980), but the results are based on the $P = UDU^T$ decomposition. This algorithm can be combined with a triangular measurement update algorithm to obtain a complete square root estimation algorithm for which square root operations are avoided. In addition, the effects of state process noise are included without approximation.

The differential equation for propagation of the state error covariance matrix can be expressed as

$$\dot P(t) = A(t)\,P(t) + P(t)\,A^T(t) + Q(t), \tag{5.9.1}$$

where P(t) is the a priori state error covariance matrix, A(t) is the n x n linearized dynamics matrix, and Q(t) is the process noise covariance matrix. Each of the matrices in Eq. (5.9.1) is time dependent in the general case. However, for simplicity, the time dependence will not be noted specifically in the following discussion.
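For later comparison with the factored integration developed below, Eq. (5.9.1) can also be integrated directly. The following is a minimal sketch; the fixed-step RK4 integrator and the test matrices are our own illustrative choices, not part of the text.

```python
def pdot(P, A, Q):
    """Right-hand side of Eq. (5.9.1): Pdot = A P + P A^T + Q.
    For symmetric P, P A^T = (A P)^T, which is used below."""
    n = len(P)
    AP = [[sum(A[i][k] * P[k][j] for k in range(n)) for j in range(n)]
          for i in range(n)]
    return [[AP[i][j] + AP[j][i] + Q[i][j] for j in range(n)] for i in range(n)]

def rk4_step(P, A, Q, h):
    """One fixed-step RK4 integration step for the matrix ODE (5.9.1)."""
    n = len(P)
    add = lambda X, Y, s: [[X[i][j] + s * Y[i][j] for j in range(n)]
                           for i in range(n)]
    k1 = pdot(P, A, Q)
    k2 = pdot(add(P, k1, h / 2), A, Q)
    k3 = pdot(add(P, k2, h / 2), A, Q)
    k4 = pdot(add(P, k3, h), A, Q)
    return [[P[i][j] + h / 6 * (k1[i][j] + 2 * k2[i][j] + 2 * k3[i][j] + k4[i][j])
             for j in range(n)] for i in range(n)]

# Usage: with A = 0 the exact solution is simply P(t) = P(0) + Q t,
# which gives an easy check of the stepper.
A = [[0.0, 0.0], [0.0, 0.0]]
Q = [[0.0, 0.0], [0.0, 0.1]]
P = [[1.0, 0.0], [0.0, 1.0]]
for _ in range(10):
    P = rk4_step(P, A, Q, 0.1)   # integrate from t = 0 to t = 1
```

Direct integration of (5.9.1) preserves symmetry only up to roundoff; the factored form derived next avoids that drift by construction.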

If the following definitions are used,

$$P(t) = U(t)\,D(t)\,U^T(t), \qquad \bar Q = Q/2, \tag{5.9.2}$$

and if the first part of Eq. (5.9.2) is differentiated with respect to time and substituted into Eq. (5.9.1), the results can be rearranged to form

Noting that the first term of Eq. (5.9.3) is the transpose of the second term, and making the following definition,

one obtains

Relation (5.9.5) requires that C(t) be either the null matrix or, more generally, skew symmetric.

Equation (5.9.4) can be simplified by selectively carrying out the multiplication of the $Q\,U^{-T}$ term by $U^{-T}$ to yield, after the terms are rearranged,

Equation (5.9.6) defines the differential equations for U and D to the degree of uncertainty in C(t). Since the unknown matrix C(t) is skew symmetric, there exist $n(n-1)/2$ unknown scalar quantities in Eq. (5.9.6). The problem considered here is one of specifying the elements of C(t) so that U is maintained in triangular form during the integration of Eq. (5.9.6). (The derivation pursued here assumes that U is lower triangular and D is diagonal, although an algorithm for an upper triangular U can be obtained as easily.) The following definitions are made to facilitate the solution to the problem posed:

With these definitions, Eq. (5.9.6) is expressed as

Since $U$ and $\dot U$ in Eq. (5.9.6) are lower triangular, and since from Eq. (5.9.5) C(t) is skew symmetric, several observations can be made regarding Eq. (5.9.8). There are $n(n-1)/2$ unknown elements in C. The products $\dot U D$ and $U \dot D$ are lower triangular, contributing $n(n+1)/2$ unknowns. Therefore, the $n \times n$ system of equations in Eq. (5.9.8) has $n(n-1)/2 + n(n+1)/2 = n^2$ unknowns, which can be determined uniquely.

An expansion of Eq. (5.9.8) into matrix elements indicates the method of solution

(Eq. (5.9.9): the element-by-element display of Eq. (5.9.8), with the full arrays of elements $M_{11}, \ldots, M_{nn}$ and $C_{11}, \ldots, C_{nn}$ written out.)

In Eq. (5.9.9), $\bar Q$ is assumed to be a diagonal matrix with elements $\bar q_{ii} = q_{ii}/2$ $(i = 1, \ldots, n)$. (This assumption can be generalized to allow other nonzero terms in the $\bar Q$ matrix with only a slight increase in algebraic complexity.) Each row of the upper triangular portion of the C matrix in Eq. (5.9.9) is determined as the product of the corresponding row of the M matrix with the appropriate column of the U matrix. After an upper triangular row of C is determined, the condition from Eq. (5.9.5) that $C_{ij} = -C_{ji}$ $(i = 1, \ldots, n;\ j = 1, \ldots, i-1)$ is invoked to evaluate the corresponding lower triangular column of C. Then, a column of the lower triangular elements of M can be evaluated. Once the elements of the M matrix are determined, the next row of the upper triangular C elements can be computed, along with a column of the $\dot U$ and $\dot D$ elements. This process is repeated until all $\dot U$ and $\dot D$ values are determined. The implementation of this approach proceeds as follows. From Eqs. (5.9.6) and (5.9.7) one can write

The expansion of Eq. (5.9.10) in summation notation gives, for the $(i, j)$ element with $i \ge j$,

$$M_{ij} + T_{ij} = \dot U_{ij}\, d_{jj} + \tfrac{1}{2}\, U_{ij}\, \dot d_{jj}. \tag{5.9.12}$$

For $i = j$, $U_{ij} = 1$ and $\dot U_{ij} = 0$. Therefore, Eq. (5.9.12) becomes

$$\dot d_{ii} = 2\,(M_{ii} + T_{ii}) \qquad (i = 1, \ldots, n). \tag{5.9.13}$$

For $i > j$, Eq. (5.9.12) is rearranged to obtain the differential equation

$$\dot U_{ij} = \left(M_{ij} + T_{ij} - U_{ij}\,\dot d_{jj}/2\right)/d_{jj} \qquad (i = 1, \ldots, n;\ j = 1, \ldots, i-1). \tag{5.9.14}$$

Equations (5.9.13) and (5.9.14) are the forms of the differential equations to be employed in the derivative routine of a numerical integrator. The elements of $T_{ij}$ and $M_{ij}$ are computed as defined in Eq. (5.9.7). The pertinent equations can be combined to obtain the following algorithm.

### 5.9.1 Triangular Square Root Algorithm

### Measurement Update

If the Extended Sequential Filter algorithm (for which the reference trajectory is updated at each observation point) is adopted, the measurement update algorithm for the $UDU^T$ factorization has the following form. Using the observation $Y_{k+1} = G(X_{k+1}, t_{k+1})$, calculate

Set $\alpha_{n+1} = R_{k+1}$ (where $R_{k+1}$ is the variance of the measurement noise at the $(k+1)$st observation epoch) and calculate

Calculate the updated diagonal covariance elements $d_i$ $(i = 1, \ldots, n)$. (5.9.19)

$$U_{ij} = \bar U_{ij} - B_{ij}\,p_j \qquad (i = 2, \ldots, n;\ j = 1, \ldots, i-1). \tag{5.9.23}$$

Compute the observation residual

$$y_{k+1} = Y_{k+1} - G(X_{k+1}, t_{k+1}). \tag{5.9.24}$$

Calculate the gain and update the state.
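The scalar measurement update above follows the same pattern as Bierman's rank-one $UDU^T$ update. The following is a hedged Python sketch of that standard algorithm, not a transcription of Eqs. (5.9.15)-(5.9.26): the variable names `f`, `v`, `alpha`, `b` and the unit upper triangular convention are our assumptions. Its result can be checked against the conventional update $P^+ = \bar P - \bar P h^T (h \bar P h^T + R)^{-1} h \bar P$.

```python
def ud_meas_update(U, d, h, R):
    """Bierman-style rank-one measurement update (sketch) for one scalar
    observation: h is the 1 x n mapping row, R the noise variance,
    U unit upper triangular, d the diagonal of D.  Returns updated
    copies of U and d plus the gain K."""
    n = len(d)
    f = [sum(U[k][j] * h[k] for k in range(j + 1)) for j in range(n)]  # f = U^T h^T
    v = [d[j] * f[j] for j in range(n)]                                # v = D f
    U = [row[:] for row in U]      # work on copies; inputs stay the a priori factors
    d = d[:]
    alpha = R + v[0] * f[0]
    d[0] = d[0] * R / alpha
    b = [0.0] * n
    b[0] = v[0]
    for j in range(1, n):
        alpha_prev = alpha
        alpha += v[j] * f[j]
        lam = -f[j] / alpha_prev
        for i in range(j):
            Uij = U[i][j]
            U[i][j] = Uij + b[i] * lam     # update column j of U
            b[i] += Uij * v[j]             # accumulate the unscaled gain
        b[j] = v[j]
        d[j] = d[j] * alpha_prev / alpha
    K = [bi / alpha for bi in b]           # gain = b / (h P h^T + R)
    return U, d, K

# Usage: compare with the conventional covariance update for a 2-state case.
U0 = [[1.0, 0.5], [0.0, 1.0]]
d0 = [2.0, 1.0]
h = [1.0, 0.0]
R = 1.0
P0 = [[sum(U0[i][k] * d0[k] * U0[j][k] for k in range(2))
       for j in range(2)] for i in range(2)]
hP = [sum(h[k] * P0[k][j] for k in range(2)) for j in range(2)]
s = sum(hP[k] * h[k] for k in range(2)) + R
P_conv = [[P0[i][j] - hP[i] * hP[j] / s for j in range(2)] for i in range(2)]
U1, d1, K = ud_meas_update(U0, d0, h, R)
P_fac = [[sum(U1[i][k] * d1[k] * U1[j][k] for k in range(2))
          for j in range(2)] for i in range(2)]
```

The factored update never forms P explicitly, so the updated covariance stays symmetric and positive by construction.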

### Propagation

Given the elements of the square root state error covariance in lower triangular $U D U^T$ form, $\bar Q = Q/2$, and $A(t)$, the differential equations for $\dot U_{ij}$ and $\dot d_{ii}$ can be computed as follows:

$$C_{ij} = \sum_{k=1}^{i} M_{ik}\,U_{jk} - \sum_{k=1}^{j} T_{ik}\,U_{jk} \qquad (i = 1, \ldots, n;\ j = i+1, \ldots, n) \tag{5.9.28}$$

$$M_{ij} = -C_{ji} - \sum_{k=1}^{j-1} M_{ik}\,U_{jk} \qquad (i = 1, \ldots, n;\ j = 1, \ldots, i-1) \tag{5.9.30}$$

$$\dot d_{ii} = 2\,(M_{ii} + T_{ii}) \qquad (i = 1, \ldots, n) \tag{5.9.31}$$

$$\dot U_{ij} = \left(M_{ij} + T_{ij} - U_{ij}\,\dot d_{jj}/2\right)/d_{jj} \qquad (i = 1, \ldots, n;\ j = 1, \ldots, i-1) \tag{5.9.32}$$

The algorithm summarized in Eqs. (5.9.15) through (5.9.32) defines a complete sequential estimation algorithm in which the covariance matrices P and $\bar P$ are replaced by the factors (U, D) and ($\bar U$, $\bar D$), respectively. The algorithm given here assumes that only a single scalar observation is processed at each observation epoch; however, the algorithm is applicable to the case of multiple observations at a given epoch if the observation errors are uncorrelated. To obtain the predicted state, the additional integration of the system of nonlinear differential equations, $\dot X = F(X, t)$, is required. At the initial point, the starting conditions $X_k = \hat X_k$ are used.
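Starting either form of the filter requires the a priori covariance in factored form. The following is a minimal pure-Python sketch of a $UDU^T$ factorization, shown for unit upper triangular U (as in Section 5.8); the lower triangular variant used in this section is analogous. The function name and sample matrix are illustrative assumptions.

```python
def ud_factor(P):
    """Factor a symmetric positive-definite P as P = U D U^T,
    with U unit upper triangular and D diagonal (returned as a list).
    Columns are processed from right to left, Cholesky-style."""
    n = len(P)
    U = [[0.0] * n for _ in range(n)]
    d = [0.0] * n
    for j in range(n - 1, -1, -1):
        d[j] = P[j][j] - sum(d[k] * U[j][k] ** 2 for k in range(j + 1, n))
        U[j][j] = 1.0
        for i in range(j):
            U[i][j] = (P[i][j] - sum(d[k] * U[i][k] * U[j][k]
                                     for k in range(j + 1, n))) / d[j]
    return U, d

# Usage: factor a small covariance and reconstruct it.
P = [[4.0, 2.0, 1.0],
     [2.0, 3.0, 0.5],
     [1.0, 0.5, 2.0]]
U, d = ud_factor(P)
P_rec = [[sum(U[i][k] * d[k] * U[j][k] for k in range(3))
          for j in range(3)] for i in range(3)]
```

Because U has a unit diagonal, no square roots are required, which is precisely the property the factored algorithms above exploit.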

### 5.10 THE SQUARE ROOT INFORMATION FILTER

In Sections 5.7, 5.8, and 5.9, attention has been directed to the problem of developing estimation algorithms based on the square root of the covariance matrix. Both measurement incorporation and time propagation algorithms were considered. In this section, we consider algorithms derived from the information equations, referred to as the normal equations in Chapter 4. Specifically, we consider algorithms that deal with factoring the information matrix. In general, such an algorithm is referred to as a square root information filter, or SRIF.

We will first consider the case where the state vector, x, is independent of time; that is, a constant. Assume that a priori information $[\bar x, \bar\Lambda]$ is given, where $\bar x$ is the a priori value of x and $\bar\Lambda$ is the a priori information matrix, $\bar\Lambda = \bar P^{-1}$. The a priori information can be written in the form of a data equation by noting that

$$\bar x = x + n, \tag{5.10.1}$$

where n is the error in $\bar x$ and is assumed to have the following characteristics:

$$E[n] = 0, \qquad E[n\,n^T] = \bar P. \tag{5.10.2}$$

Factoring the information matrix yields

$$\bar\Lambda = \bar R^T \bar R. \tag{5.10.3}$$

Multiplying Eq. (5.10.1) by $\bar R$ yields

$$\bar R\,\bar x = \bar R\,x + \bar R\,n. \tag{5.10.4}$$

Define

$$\bar b = \bar R\,\bar x \quad \text{and} \quad \bar n = \bar R\,n. \tag{5.10.5}$$

Then Eq. (5.10.4) assumes the standard form of the data equation

$$\bar b = \bar R\,x + \bar n. \tag{5.10.6}$$

Note that the error $\bar n$ still has zero mean but now has unit variance, since $E[\bar n\,\bar n^T] = \bar R\,E[n\,n^T]\,\bar R^T = \bar R\,\bar P\,\bar R^T = I$.
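The whitening step can be illustrated numerically: factor the information matrix with a Cholesky decomposition and verify that the transformed noise covariance $\bar R\,\bar P\,\bar R^T$ is the identity. This is a hedged sketch; the helper name `chol_lower` and the 2 x 2 example are our assumptions.

```python
def chol_lower(A):
    """Plain Cholesky factorization A = L L^T with L lower triangular;
    Rbar = L^T then gives the upper triangular factor A = Rbar^T Rbar."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = A[i][j] - sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = s / L[j][j] if j < i else s ** 0.5
    return L

# Usage: a priori covariance Pbar and its inverse, the information matrix.
Pbar = [[2.0, 0.4], [0.4, 1.0]]
det = Pbar[0][0] * Pbar[1][1] - Pbar[0][1] * Pbar[1][0]
Lam = [[Pbar[1][1] / det, -Pbar[0][1] / det],
       [-Pbar[1][0] / det, Pbar[0][0] / det]]
L = chol_lower(Lam)
Rbar = [[L[j][i] for j in range(2)] for i in range(2)]   # Rbar = L^T
# Whitened noise covariance: Rbar Pbar Rbar^T should be the identity.
RP = [[sum(Rbar[i][k] * Pbar[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]
I2 = [[sum(RP[i][k] * Rbar[j][k] for k in range(2)) for j in range(2)]
      for i in range(2)]
```

With the prior whitened this way, it can be stacked with (similarly whitened) observation equations and solved as one least squares problem.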
