Now consider the case where the function in the integrand, g(x), can be expressed as the product

$$g(x) = f(x)\,p(x)$$

where p(x) is a probability density function; thus, p(x) ≥ 0 and ∫p(x)dx = 1. If N independent samples, x^{[i]}, can be drawn in accordance with p(·), then the integral can be estimated as the sample mean of the transformed particles:

$$\hat{I} = \frac{1}{N}\sum_{i=1}^{N} f\left(x^{[i]}\right) \qquad (36.104)$$
The resulting estimate is unbiased and, most importantly, its error scales as the reciprocal of the square root of N. This is an important result, as it indicates that the error is independent of the dimensionality of the state, as long as the particles are properly sampled from the distribution of x. This is an important distinction from the grid filter, which requires a number of grid points that grows geometrically with the dimension of the state vector [6].
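The sample-mean estimator and its 1/√N error behavior can be sketched as follows. The density p (standard normal) and the function f(x) = x² are illustrative choices, not from the text; for this pair the true integral E_p[x²] equals 1.

```python
import numpy as np

# Sketch of the sample-mean estimator in Eq. 36.104: estimate
# I = E_p[f(x)] by averaging f over particles drawn from p(x).
# Here p is a standard normal and f(x) = x**2 (illustrative), so I = 1.
rng = np.random.default_rng(0)

def mc_estimate(f, n):
    x = rng.standard_normal(n)   # x[i] ~ p(.)
    return f(x).mean()           # (1/N) * sum_i f(x[i])

f = lambda x: x**2
# The absolute error shrinks roughly as 1/sqrt(N), independent of
# the dimension of the state, as the text notes.
for n in (100, 10_000, 1_000_000):
    print(n, abs(mc_estimate(f, n) - 1.0))
```

Rerunning with different seeds shows the same trend; only the rate, not the guarantee of a particular error, is implied by the 1/√N scaling.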
Unfortunately, it is not always possible to generate samples from arbitrary density functions. This motivates additional development of the concept known as importance sampling.
To further our discussion of importance sampling, it is convenient to introduce the concept of a proposal density, chosen to resemble (and provide support over) the true density of x, while retaining the ability to generate samples. An illustration of a proposal‐density sampling approach is shown in Figure 36.16.
Given a random vector with true density p(x) and particles sampled from a proposal density, q(x), Eq. 36.103 can be rewritten as

$$I = \int f(x)\,p(x)\,dx = \int f(x)\,\frac{p(x)}{q(x)}\,q(x)\,dx \qquad (36.105)$$
Figure 36.16 Proposal sampling illustration. In this example, the particles are generated using the proposal density (q) and subsequently weighted to represent the desired density (π).
The resulting estimate of the integral, assuming N independent particles sampled from q(·), is given by

$$\hat{I} = \frac{1}{N}\sum_{i=1}^{N} f\left(x^{[i]}\right)\frac{p\left(x^{[i]}\right)}{q\left(x^{[i]}\right)}, \qquad x^{[i]} \sim q(\cdot) \qquad (36.106)$$
where the ratio between the true density and the proposal density can be expressed as particle importance weights:

$$\tilde{w}^{[i]} = \frac{p\left(x^{[i]}\right)}{q\left(x^{[i]}\right)} \qquad (36.108)$$

Substituting Eq. 36.108 into Eq. 36.106 yields

$$\hat{I} = \frac{1}{N}\sum_{i=1}^{N} \tilde{w}^{[i]}\,f\left(x^{[i]}\right) \qquad (36.109)$$
Finally, if the collection of particle weights is normalized via

$$w^{[i]} = \frac{\tilde{w}^{[i]}}{\sum_{j=1}^{N} \tilde{w}^{[j]}} \qquad (36.110)$$

then Eq. 36.109 becomes

$$\hat{I} = \sum_{i=1}^{N} w^{[i]}\,f\left(x^{[i]}\right) \qquad (36.111)$$

Because the weights now sum to unity, the density p(x) need only be known up to a multiplicative constant,
which we will exploit to develop a recursive estimator.
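The self-normalized importance sampling estimator can be sketched as follows. The choices of true density p (standard normal), proposal q (zero-mean normal with σ = 2), and f(x) = x² are illustrative, not from the text; the target integral E_p[x²] is again 1.

```python
import numpy as np

# Sketch of Eqs. 36.106-36.111: self-normalized importance sampling.
# True density p: standard normal. Proposal q: N(0, 2^2), which is
# easy to sample and covers the support of p (illustrative choices).
rng = np.random.default_rng(1)

def gauss_pdf(x, sigma):
    return np.exp(-0.5 * (x / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

N = 200_000
x = 2.0 * rng.standard_normal(N)                  # x[i] ~ q(.)
w_tilde = gauss_pdf(x, 1.0) / gauss_pdf(x, 2.0)   # unnormalized weights p/q
w = w_tilde / w_tilde.sum()                       # normalized weights
I_hat = np.sum(w * x**2)                          # weighted-sum estimate of E_p[x**2]
print(I_hat)
```

Note that the proposal must place probability mass wherever p does; a proposal narrower than p would make the weight ratio unbounded in the tails and the estimate unreliable.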
36.3.8 Sequential Importance Sampling Recursive Estimator
In this section, we leverage the previously presented concept of importance sampling to derive the basis for a recursive nonlinear estimator using Monte Carlo integration [4]. This type of filter is generally referred to as a recursive particle filter.
Consider the following general system model:
$$x_k = f\left(x_{k-1}, w_{k-1}\right) \qquad (36.112)$$

$$z_k = h\left(x_k, v_k\right) \qquad (36.113)$$
where x_k is the state vector at time k, f(·, ·) is the process model function, w_{k−1} is the process noise vector at time k − 1, h(·, ·) is the observation function, z_k is the measurement vector, and v_k is the measurement noise vector at time k. The noise vectors are assumed to be independent of each other and over time, with known density functions. Note that Gaussian densities are not required or assumed.
Assume we begin with a known posterior density, p(x_{k−1} | ℤ_{k−1}). If N samples are drawn from an associated proposal density,

$$x_{k-1}^{[i]} \sim q\left(x_{k-1} \mid \mathbb{Z}_{k-1}\right), \qquad i = 1, \ldots, N \qquad (36.114)$$

with normalized weights given by

$$w_{k-1}^{[i]} = \kappa\,\frac{p\left(x_{k-1}^{[i]} \mid \mathbb{Z}_{k-1}\right)}{q\left(x_{k-1}^{[i]} \mid \mathbb{Z}_{k-1}\right)} \qquad (36.115)$$

where κ is the normalization factor required such that the sum of the weights is unity, then the posterior density function is represented by the collection of particles and weights

$$\left\{ x_{k-1}^{[i]},\, w_{k-1}^{[i]} \right\}_{i=1}^{N} \qquad (36.116)$$

or, equivalently,

$$p\left(x_{k-1} \mid \mathbb{Z}_{k-1}\right) \approx \sum_{i=1}^{N} w_{k-1}^{[i]}\,\delta\left(x_{k-1} - x_{k-1}^{[i]}\right) \qquad (36.117)$$
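Any moment of a posterior represented this way is a weighted sum over the particle set. A minimal sketch, with illustrative particle locations and uniform unnormalized weights standing in for a real posterior:

```python
import numpy as np

# Sketch of using a weighted-particle posterior {x[i], w[i]}:
# expectations reduce to weighted sums. The particle locations
# (drawn from N(3, 1)) and uniform weights are illustrative.
rng = np.random.default_rng(2)

N = 1000
x = rng.normal(3.0, 1.0, size=N)     # particle locations x[i]
w_tilde = np.ones(N)                 # unnormalized weights
w = w_tilde / w_tilde.sum()          # normalization (the role of kappa)

mean = np.sum(w * x)                 # posterior mean estimate
var = np.sum(w * (x - mean) ** 2)    # posterior variance estimate
```

With nonuniform weights the same two lines still apply, which is what makes the delta-mixture representation convenient in practice.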
Our goal is to estimate the posterior density at time k, p(x_k | ℤ_k), by incorporating the statistical process model and the observation at time k. The density function of interest can be written as

$$p\left(x_k \mid \mathbb{Z}_k\right) = \frac{p\left(z_k \mid x_k\right)\,p\left(x_k \mid \mathbb{Z}_{k-1}\right)}{p\left(z_k \mid \mathbb{Z}_{k-1}\right)} \qquad (36.118)$$
Assuming our proposal density can be factored as

$$q\left(x_{0:k} \mid \mathbb{Z}_k\right) = q\left(x_k \mid x_{0:k-1}, \mathbb{Z}_k\right)\,q\left(x_{0:k-1} \mid \mathbb{Z}_{k-1}\right) \qquad (36.119)$$

$$= q\left(x_k \mid x_{k-1}, z_k\right)\,q\left(x_{0:k-1} \mid \mathbb{Z}_{k-1}\right) \qquad (36.120)$$

the posterior particle locations can be sampled from

$$x_k^{[i]} \sim q\left(x_k \mid x_{k-1}^{[i]}, z_k\right) \qquad (36.121)$$
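A minimal sketch of this sampling step, assuming the common "bootstrap" choice in which the process model itself serves as the proposal, q(x_k | x_{k−1}, z_k) = p(x_k | x_{k−1}): each particle is simply propagated through f with its own sampled process noise. The scalar model f(x, w) = 0.9x + w below is illustrative, not from the text.

```python
import numpy as np

# Sketch of the particle propagation in Eq. 36.121 under the bootstrap
# proposal q(x_k | x_{k-1}, z_k) = p(x_k | x_{k-1}). The scalar process
# model x_k = 0.9*x_{k-1} + w_{k-1}, w ~ N(0, 0.5^2), is illustrative.
rng = np.random.default_rng(3)

def propagate(particles, sigma_w=0.5):
    w_noise = rng.normal(0.0, sigma_w, size=particles.shape)  # w[k-1] samples
    return 0.9 * particles + w_noise                          # x_k = f(x_{k-1}, w_{k-1})

particles_km1 = rng.normal(0.0, 1.0, size=500)   # x_{k-1}[i] from the prior posterior
particles_k = propagate(particles_km1)           # x_k[i] ~ q(x_k | x_{k-1}[i], z_k)
```

Each propagated particle carries its previous weight forward until the measurement z_k is used to update the weights, which is where the derivation continues.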