$$ p\left(\mathbf{x}_k \mid \mathbf{Z}_k\right) = \frac{p\left(\mathbf{z}_k \mid \mathbf{x}_k\right)\, p\left(\mathbf{x}_k \mid \mathbf{Z}_{k-1}\right)}{p\left(\mathbf{z}_k \mid \mathbf{Z}_{k-1}\right)} \tag{36.15} $$
As a final note, we observe that the normalizing term in the denominator, known as the evidence, can be expressed in a more explicit form by de-marginalizing over the state vector, yielding the complete update relationship:

$$ p\left(\mathbf{x}_k \mid \mathbf{Z}_k\right) = \frac{p\left(\mathbf{z}_k \mid \mathbf{x}_k\right)\, p\left(\mathbf{x}_k \mid \mathbf{Z}_{k-1}\right)}{\int p\left(\mathbf{z}_k \mid \mathbf{x}_k\right)\, p\left(\mathbf{x}_k \mid \mathbf{Z}_{k-1}\right)\, d\mathbf{x}_k} \tag{36.16} $$
Thus, we have presented the mathematical form of both the propagation (Eq. 36.12) and update (Eq. 36.16) actions on the pdf representing the state random vector.
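To make the update operation concrete, the following sketch performs a single measurement update on a discretized scalar state, computing the evidence by summing (de-marginalizing) over the state grid. The Gaussian prior, the likelihood variance, and the measurement value are illustrative assumptions, not values from the text.

```python
import numpy as np

# Discretized scalar state axis (illustrative range and resolution)
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]

def gauss(u, mean, var):
    """Scalar Gaussian density evaluated at u."""
    return np.exp(-0.5 * (u - mean) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

prior = gauss(x, 1.0, 4.0)        # p(x_k | Z_{k-1}): assumed N(1, 4)
likelihood = gauss(2.5, x, 1.0)   # p(z_k | x_k) as a function of x, with z_k = 2.5

# Evidence by de-marginalizing over the state: ∫ p(z|x) p(x) dx
evidence = np.sum(likelihood * prior) * dx

# Bayes update: posterior is the normalized product of likelihood and prior
posterior = likelihood * prior / evidence

print(np.sum(posterior) * dx)     # integrates to 1: a proper density
```

Because the evidence normalizes the product of likelihood and prior, the resulting grid values integrate to one, forming a proper discrete approximation of the posterior pdf.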
For a specific class of problems (e.g. linear Gaussian systems), the above equations can be solved in closed form. In this case, the generalized process model (Eq. 36.5) simplifies to

$$ \mathbf{x}_k = \boldsymbol{\Phi}_{k-1}\, \mathbf{x}_{k-1} + \mathbf{w}_{k-1} \tag{36.17} $$

where $\boldsymbol{\Phi}_{k-1}$ is the state transition matrix and $\mathbf{w}_{k-1}$ is a zero-mean, white Gaussian sequence with covariance $\mathbf{Q}_{k-1}$. Similarly, the generalized observation model simplifies to the linear form

$$ \mathbf{z}_k = \mathbf{H}_k \mathbf{x}_k + \mathbf{v}_k \tag{36.18} $$

where $\mathbf{H}_k$ is the observation influence matrix at time k, and $\mathbf{v}_k$ is a zero-mean, white Gaussian sequence with covariance $\mathbf{R}_k$.
Thus, both the a priori and posterior pdfs can be represented as the following Gaussian densities, respectively:

$$ p\left(\mathbf{x}_k \mid \mathbf{Z}_{k-1}\right) = \mathcal{N}\left(\hat{\mathbf{x}}_k^-, \mathbf{P}_k^-\right) \tag{36.19} $$

$$ p\left(\mathbf{x}_k \mid \mathbf{Z}_k\right) = \mathcal{N}\left(\hat{\mathbf{x}}_k^+, \mathbf{P}_k^+\right) \tag{36.20} $$

where the a priori mean and covariance are propagated forward from the previous epoch via

$$ \hat{\mathbf{x}}_k^- = \boldsymbol{\Phi}_{k-1}\, \hat{\mathbf{x}}_{k-1}^+ \tag{36.21} $$

$$ \mathbf{P}_k^- = \boldsymbol{\Phi}_{k-1}\, \mathbf{P}_{k-1}^+\, \boldsymbol{\Phi}_{k-1}^T + \mathbf{Q}_{k-1} \tag{36.22} $$
Furthermore, substituting the linear observation model (Eq. 36.18) into our update relationship (Eq. 36.16) results in the linear Kalman filter update equations:

$$ \hat{\mathbf{x}}_k^+ = \hat{\mathbf{x}}_k^- + \mathbf{K}_k\left(\mathbf{z}_k - \mathbf{H}_k \hat{\mathbf{x}}_k^-\right) \tag{36.23} $$

$$ \mathbf{P}_k^+ = \left(\mathbf{I} - \mathbf{K}_k \mathbf{H}_k\right)\mathbf{P}_k^- \tag{36.24} $$

where $\mathbf{z}_k$ is the realized measurement observation, and $\mathbf{K}_k$ is the Kalman gain at time k:

$$ \mathbf{K}_k = \mathbf{P}_k^- \mathbf{H}_k^T \mathbf{S}_k^{-1} \tag{36.25} $$

and $\mathbf{S}_k$ is the residual covariance matrix, given by

$$ \mathbf{S}_k = \mathbf{H}_k \mathbf{P}_k^- \mathbf{H}_k^T + \mathbf{R}_k \tag{36.26} $$
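A minimal numerical sketch of one propagation/update cycle, assuming an illustrative scalar constant-state system (the matrices and noise levels below are assumptions, not values from the text):

```python
import numpy as np

# Illustrative scalar linear Gaussian system
Phi = np.array([[1.0]])      # state transition matrix
Q   = np.array([[0.01]])     # process noise covariance
H   = np.array([[1.0]])      # observation influence matrix
R   = np.array([[0.25]])     # measurement noise covariance

x_post = np.array([[0.0]])   # posterior state estimate at epoch k-1
P_post = np.array([[1.0]])   # posterior covariance at epoch k-1

z = np.array([[1.2]])        # realized measurement at epoch k (assumed)

# Propagation (Eqs. 36.21-36.22)
x_prior = Phi @ x_post
P_prior = Phi @ P_post @ Phi.T + Q

# Update (Eqs. 36.23-36.26)
S = H @ P_prior @ H.T + R                 # residual covariance
K = P_prior @ H.T @ np.linalg.inv(S)      # Kalman gain
x_post = x_prior + K @ (z - H @ x_prior)  # updated state estimate
P_post = (np.eye(1) - K @ H) @ P_prior    # updated covariance

print(x_post, P_post)
```

Note that the updated covariance is smaller than the a priori covariance, reflecting the information gained from the measurement, and the updated estimate lies between the prediction and the observation.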
In many cases, systems can be accurately represented by linear Gaussian models. Unfortunately, a number of systems exist for which these models are not adequate. This motivates the development of algorithms that attempt to solve the recursive estimation equations for broader classes of problems.
In the next section, we will present the fundamental concepts that will be used to derive various recursive nonlinear estimators.
36.3 Nonlinear Filtering Concepts
In the previous section (Section 36.2.1), we developed the generalized theory for recursive estimation problems. The theory is based on the fundamental need to determine the pdf of the state vector at an epoch of interest, conditioned on the observations up to, and including, the current epoch. Complete knowledge of the conditional state pdf represents the maximum possible knowledge of the system. This is, in fact, achievable for linear Gaussian systems, as the pdf can be completely described by a mean and covariance.
36.3.1 Effects of Nonlinear Operations on Random Processes – Breaking Up with Gauss
Consider a Gaussian random vector $\mathbf{x}$ with mean $\bar{\mathbf{x}}$ and covariance $\mathbf{P}_{xx}$.
Next consider a linear transformation from $\mathbf{x}$ to $\mathbf{y}$ which is governed by the transformation matrix $\mathbf{H}$. The resulting equation for $\mathbf{y}$ is given by

$$ \mathbf{y} = \mathbf{H}\mathbf{x} \tag{36.27} $$

In this case, the transformed random vector, $\mathbf{y}$, can be shown to be a Gaussian random vector with mean and covariance

$$ \bar{\mathbf{y}} = \mathbf{H}\bar{\mathbf{x}} \tag{36.28} $$

$$ \mathbf{P}_{yy} = \mathbf{H}\mathbf{P}_{xx}\mathbf{H}^T \tag{36.29} $$
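This property can be checked numerically by pushing Gaussian samples through a linear map and comparing the sample statistics against Eqs. 36.28 and 36.29; the particular transformation matrix, mean, and covariance below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed Gaussian input statistics
x_mean = np.array([1.0, -2.0])
P_xx = np.array([[2.0, 0.5],
                 [0.5, 1.0]])

# Assumed linear transformation matrix
H = np.array([[1.0, 2.0],
              [0.0, 3.0]])

# Draw samples of x and apply y = H x to each sample
samples = rng.multivariate_normal(x_mean, P_xx, size=200_000)
y = samples @ H.T

y_mean_mc = y.mean(axis=0)          # should approach H x_mean    (Eq. 36.28)
P_yy_mc = np.cov(y, rowvar=False)   # should approach H P_xx H^T  (Eq. 36.29)

print(y_mean_mc, H @ x_mean)
print(P_yy_mc, H @ P_xx @ H.T)
```

The Monte Carlo statistics converge to the analytical values as the sample count grows, consistent with the Gaussian density being closed under linear transformations.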
This preservation of Gaussian nature when transformed via linear operations is an important property of Gaussian densities that makes the linear Kalman filter relatively simple to implement.
Now consider a generalized nonlinear transformation

$$ \mathbf{y} = \mathbf{f}(\mathbf{x}) \tag{36.30} $$
In this case, the density of y can become difficult to calculate exactly. While we will address this issue in more detail later in the chapter, in general the resulting density function is non-Gaussian, which limits the performance of the linear Kalman filter algorithm. Nonlinear estimators attempt to maintain a higher-fidelity estimate of the overall density function as it evolves over time.
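A quick way to see the loss of Gaussianity is to push Gaussian samples through a nonlinear function and examine the result; the quadratic transformation here is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(1)

# Gaussian input samples (assumed mean 1, unit variance)
x = rng.normal(loc=1.0, scale=1.0, size=500_000)

# Nonlinear transformation y = f(x); here an assumed quadratic map
y = x ** 2

# Sample skewness of y: a Gaussian density has zero skewness
skew = np.mean((y - y.mean()) ** 3) / np.std(y) ** 3
print(skew)
```

The large positive sample skewness shows that y is strongly asymmetric, and hence no longer Gaussian, even though the input x was Gaussian.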
In the next section, we present our first class