Observation influence matrix: The observation influence matrix at time k is given by Hk. Note that the time index may be omitted when it is clear from context.
Measurement noise vector and covariance: The measurement noise vector at time k is represented by vk. The measurement noise covariance is represented by Rk.
Probability density function: Probability density functions are expressed as p(·).
36.2 Linear Estimation Foundations
The goal of any estimator is to estimate one (or more) parameters of interest based on a model of the system, observations from sensors, or both. Because the parameters are, by definition, random vectors, they can be completely characterized by their associated probability density function (pdf). If we define our parameter vector and observation vector at time k as xk and zk, respectively, the overarching objective of a recursive estimator is to estimate the pdf of all of the previous state vector epochs, conditioned on all observations received up to the current epoch. Mathematically, this is expressed as the following pdf:

$$p\left(\mathbb{X}_k \mid \mathbb{Z}_k\right) \tag{36.1}$$

where

$$\mathbb{X}_k \triangleq \{x_0, x_1, \ldots, x_k\} \tag{36.2}$$

and

$$\mathbb{Z}_k \triangleq \{z_1, z_2, \ldots, z_k\}. \tag{36.3}$$
While this is the most general case, it should be noted that most online algorithms are concerned only with the conditional state estimate at the current epoch. For this situation, Eq. 36.1 reduces to

$$p\left(x_k \mid \mathbb{Z}_k\right). \tag{36.4}$$
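As a concrete illustration of the conditional state estimate, the sketch below maintains p(xk | ℤk) recursively for a scalar random-walk state observed in additive Gaussian noise. Under these (assumed, purely illustrative) linear Gaussian conditions the conditional pdf stays Gaussian, so it is fully described by its mean and variance; all numeric values are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed illustrative model: x_k = x_{k-1} + w_{k-1},  z_k = x_k + v_k,
# with w ~ N(0, q) and v ~ N(0, r). Under these Gaussian assumptions,
# p(x_k | Z_k) is Gaussian and is carried by its mean and variance.
q, r = 0.01, 0.25                   # process / measurement noise variances
x_true = 0.0                        # simulated true state
mean, var = 0.0, 1.0                # prior p(x_0): N(0, 1)

for k in range(1, 51):
    x_true += rng.normal(0.0, np.sqrt(q))     # simulate the system
    z = x_true + rng.normal(0.0, np.sqrt(r))  # observation z_k

    var += q                        # propagate: p(x_k | Z_{k-1})
    gain = var / (var + r)          # weight of the new observation
    mean += gain * (z - mean)       # update: mean of p(x_k | Z_k)
    var *= (1.0 - gain)             # update: variance of p(x_k | Z_k)

print(f"estimate {mean:+.3f}, truth {x_true:+.3f}, variance {var:.4f}")
```

Note how the variance of p(xk | ℤk) settles well below the prior variance: each observation sharpens the conditional pdf relative to propagation alone.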
In the next section, we present the typical recursive estimation framework, which will serve as the foundation for the nonlinear recursive estimation strategies developed later in the chapter.
36.2.1 Typical Recursive Estimation Framework
In a typical recursive estimation framework, the system is represented using a process model and one (or more) observation models. The process model represents the internal dynamics of the system and can be expressed as a nonlinear, stochastic difference equation of the form

$$x_k = f\left(x_{k-1}, w_{k-1}\right) \tag{36.5}$$
where xk is the state vector at time k ∈ ℕ, and wk−1 is the process noise random vector at time k − 1. External observations regarding the system state are represented by an observation model. The generalized observation model is a function of both the system state and a random vector representing the observation errors:

$$z_k = h\left(x_k, v_k\right) \tag{36.6}$$
In the above equation, zk is the observation at time k, and vk is the random observation error vector at time k. The objective of the recursive estimator is to estimate the posterior pdf of the state vector, conditioned on the observations

$$p\left(x_k \mid \mathbb{Z}_k\right) \tag{36.7}$$
where ℤk is the collection of observations up to, and including, time k. This is accomplished by performing two types of transformations on the state pdf: propagation and updates. The result is a filter cycle given by

$$\cdots \; p\left(x_{k-1} \mid \mathbb{Z}_{k-1}\right) \xrightarrow{\text{propagate}} p\left(x_k \mid \mathbb{Z}_{k-1}\right) \xrightarrow{\text{update}} p\left(x_k \mid \mathbb{Z}_k\right) \xrightarrow{\text{propagate}} \cdots \tag{36.8}$$
Note the introduction of the a priori pdf given by

$$p\left(x_k \mid \mathbb{Z}_{k-1}\right). \tag{36.9}$$
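The propagate/update cycle can also be realized directly on a discretized state space. The following sketch is a minimal point-mass (grid) illustration: propagation becomes a sum of the transition pdf against the current posterior over grid points, and the update becomes a pointwise multiplication by the measurement likelihood followed by renormalization. The scalar random-walk model, noise variances, and observation values are all assumed for the example.

```python
import numpy as np

# Discretized state space for a point-mass (grid) realization of the
# propagate/update cycle; the scalar models below are purely illustrative.
grid = np.linspace(-5.0, 5.0, 401)
dx = grid[1] - grid[0]

def gauss(u, s2):
    """Gaussian density with variance s2 evaluated at residual u."""
    return np.exp(-0.5 * u**2 / s2) / np.sqrt(2.0 * np.pi * s2)

q, r = 0.1, 0.5                 # process / measurement noise variances
p = gauss(grid - 0.0, 1.0)      # prior p(x_0): N(0, 1)

# Transition pdf p(x_k | x_{k-1}) for the assumed model x_k = x_{k-1} + w_{k-1}
trans = gauss(grid[:, None] - grid[None, :], q)

for z in [0.8, 1.1, 0.9]:       # a few illustrative observations z_k
    # Propagate: integrate the transition pdf against the prior posterior
    p = trans @ p * dx
    # Update: multiply by the likelihood p(z_k | x_k), then renormalize
    p *= gauss(z - grid, r)
    p /= np.sum(p) * dx

mean = np.sum(grid * p) * dx    # posterior mean of p(x_k | Z_k)
print(f"posterior mean = {mean:.3f}")
```

Grid methods like this are exact up to discretization error but scale poorly with state dimension, which motivates the parametric and sample-based approximations discussed in the remainder of the chapter.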
Further examination of the propagation and update cycle in Eq. 36.8 provides insights into how our system knowledge and observations are incorporated into our understanding of the state vector. To begin, we consider the propagation step from epoch k − 1 to k. Time propagation begins with the posterior pdf p(xk−1 | ℤk−1). The process model defined in Eq. 36.5 is used to define the transition pdf p(xk | xk−1), which can then be used to calculate the a priori pdf at time k via the Chapman–Kolmogorov equation [2]:

$$p\left(x_k \mid \mathbb{Z}_{k-1}\right) = \int p\left(x_k \mid x_{k-1}, \mathbb{Z}_{k-1}\right) p\left(x_{k-1} \mid \mathbb{Z}_{k-1}\right) \, dx_{k-1} \tag{36.10}$$
Examination of the process model (Eq. 36.5) shows that the propagated state vector is a first-order Gauss–Markov random process and is dependent only on the previous state vector and the process noise vector. As a result, we can express the transition probability, which is independent of the prior observations, as

$$p\left(x_k \mid x_{k-1}, \mathbb{Z}_{k-1}\right) = p\left(x_k \mid x_{k-1}\right). \tag{36.11}$$
Substituting Eq. 36.11 into Eq. 36.10 results in the propagation relationship

$$p\left(x_k \mid \mathbb{Z}_{k-1}\right) = \int p\left(x_k \mid x_{k-1}\right) p\left(x_{k-1} \mid \mathbb{Z}_{k-1}\right) \, dx_{k-1} \tag{36.12}$$
An observation at time k can be incorporated by considering the posterior pdf p(xk | ℤk), which, given the definition of our observation sequence in Eq. 36.3, can be expressed equivalently as

$$p\left(x_k \mid \mathbb{Z}_k\right) = p\left(x_k \mid z_k, \mathbb{Z}_{k-1}\right). \tag{36.13}$$
Applying Bayes’ rule to Eq. 36.13 yields

$$p\left(x_k \mid z_k, \mathbb{Z}_{k-1}\right) = \frac{p\left(z_k \mid x_k, \mathbb{Z}_{k-1}\right) p\left(x_k \mid \mathbb{Z}_{k-1}\right)}{p\left(z_k \mid \mathbb{Z}_{k-1}\right)}. \tag{36.14}$$
Examining the form of the observation model in Eq. 36.6 shows that zk