alt="left-parenthesis bold x Subscript i Baseline comma p left-parenthesis bold x Subscript i Baseline right-parenthesis right-parenthesis"/> may change the shape of the PDF curve significantly but it does not affect the value of the summation or integral in (2.95) or (2.96), because summation and integration can be calculated in any order. Since upper H is not affected by local changes in the PDF curve, it can be considered as a global measure of the behavior of the corresponding PDF [27].

      Definition 2.4 Joint entropy is defined for a pair of random vectors based on their joint distribution as:

      (2.98) $H(\mathbf{X}, \mathbf{Y}) = \mathbb{E}\left[\log \frac{1}{p(\mathbf{x}, \mathbf{y})}\right].$

      Definition 2.5 Conditional entropy is defined as the entropy of a random variable (state vector) conditional on the knowledge of another random variable (measurement vector):

      (2.99) $H(\mathbf{X} \mid \mathbf{Y}) = H(\mathbf{X}, \mathbf{Y}) - H(\mathbf{Y}).$

       It can also be expressed as:

      (2.100) $H(\mathbf{X} \mid \mathbf{Y}) = \mathbb{E}\left[\log \frac{1}{p(\mathbf{x} \mid \mathbf{y})}\right].$
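      As a minimal sketch, assuming a hypothetical discrete joint PMF for the pair $(\mathbf{X}, \mathbf{Y})$, the following Python snippet evaluates the joint entropy (2.98) and the conditional entropy via (2.100), and verifies that they satisfy the decomposition (2.99):

import numpy as np

def H(pmf):
    """Entropy E[log(1/p)] of a discrete PMF given as an array, in nats."""
    pmf = np.asarray(pmf, dtype=float).ravel()
    pmf = pmf[pmf > 0]
    return float(np.sum(pmf * np.log(1.0 / pmf)))

# Hypothetical joint PMF p(x, y): rows index x, columns index y.
p_xy = np.array([[0.30, 0.10],
                 [0.05, 0.25],
                 [0.10, 0.20]])

p_y = p_xy.sum(axis=0)                 # marginal p(y)
H_xy = H(p_xy)                         # joint entropy, (2.98)
H_y = H(p_y)

# Conditional entropy via (2.100): expectation of log(1/p(x|y)) under p(x, y).
p_x_given_y = p_xy / p_y               # p(x|y) = p(x, y) / p(y), column-wise
H_x_given_y = float(np.sum(p_xy * np.log(1.0 / p_x_given_y)))

print(np.isclose(H_x_given_y, H_xy - H_y))   # True, matching (2.99)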

      Definition 2.6 Mutual information between two random variables is a measure of the amount of information that one contains about the other. It can also be interpreted as the reduction in the uncertainty about one random variable due to knowledge about the other one. Mathematically, it is defined as:

      (2.101) $I(\mathbf{X}; \mathbf{Y}) = H(\mathbf{X}) - H(\mathbf{X} \mid \mathbf{Y}).$

      Substituting for $H(\mathbf{X} \mid \mathbf{Y})$ from (2.99) into the aforementioned equation, we will have:

      (2.102) $I(\mathbf{X}; \mathbf{Y}) = H(\mathbf{X}) + H(\mathbf{Y}) - H(\mathbf{X}, \mathbf{Y}).$

      Therefore, mutual information is symmetric with respect to $\mathbf{X}$ and $\mathbf{Y}$. It can also be viewed as a measure of dependence between the two random vectors. Mutual information is nonnegative; it is equal to zero if and only if $\mathbf{X}$ and $\mathbf{Y}$ are independent. The notion of observability for stochastic systems can be defined based on the concept of mutual information.

      Definition 2.7 (Stochastic observability) The random vector $\mathbf{X}$ (state) is unobservable from the random vector $\mathbf{Y}$ (measurement), if they are independent or, equivalently, $I(\mathbf{X}; \mathbf{Y}) = 0$. Otherwise, $\mathbf{X}$ is observable from $\mathbf{Y}$.
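      As a minimal sketch, using the same discrete entropy helper as above on hypothetical joint PMFs, the following Python snippet computes the mutual information through (2.102) and illustrates the two cases of Definition 2.7: a dependent pair gives $I(\mathbf{X}; \mathbf{Y}) > 0$ (observable), while an independent pair gives $I(\mathbf{X}; \mathbf{Y}) = 0$ (unobservable):

import numpy as np

def H(pmf):
    pmf = np.asarray(pmf, dtype=float).ravel()
    pmf = pmf[pmf > 0]
    return float(np.sum(pmf * np.log(1.0 / pmf)))

def mutual_information(p_xy):
    """I(X;Y) = H(X) + H(Y) - H(X,Y), computed from a joint PMF as in (2.102)."""
    p_x = p_xy.sum(axis=1)
    p_y = p_xy.sum(axis=0)
    return H(p_x) + H(p_y) - H(p_xy)

# Dependent pair (hypothetical): knowing Y reduces the uncertainty about X.
p_dep = np.array([[0.30, 0.10],
                  [0.05, 0.25],
                  [0.10, 0.20]])
print(mutual_information(p_dep) > 0.0)             # True: X observable from Y

# Independent pair: p(x, y) = p(x) p(y), hence I(X;Y) = 0.
p_ind = np.outer([0.5, 0.3, 0.2], [0.6, 0.4])
print(np.isclose(mutual_information(p_ind), 0.0))  # True: X unobservable from Y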

      Instead of considering the notion of observability as a yes/no question, it will be helpful in practice to pose the question of how observable a system may be [29]. Knowing the answer to this question, we can select the best set of variables, which can be directly measured, as outputs to improve observability [30]. With this in mind and building on Section 2.7, mutual information can be used as a measure for the degree of observability [31].

      An alternative approach, which aims at providing insight into the observability of the system of interest in filtering applications, uses the eigenvalues of the estimation error covariance matrix. The largest eigenvalue of the covariance matrix is the variance of the state, or of a function of the states, that is poorly observable. Hence, its corresponding eigenvector provides the direction of poor observability. Conversely, states or functions of states that are highly observable are associated with smaller eigenvalues, and their corresponding eigenvectors provide the directions of good observability [30].
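      As a minimal sketch, assuming a hypothetical estimation error covariance matrix P (the numbers below are illustrative only), the following Python snippet ranks directions of the state space by observability using the eigendecomposition described above:

import numpy as np

# Hypothetical symmetric positive-definite estimation error covariance matrix.
P = np.array([[4.0, 0.5, 0.0],
              [0.5, 0.2, 0.1],
              [0.0, 0.1, 0.1]])

eigvals, eigvecs = np.linalg.eigh(P)   # eigenvalues in ascending order

# Smallest error variance -> direction of good observability.
print("good observability direction:", eigvecs[:, 0], "variance:", eigvals[0])
# Largest error variance -> direction of poor observability.
print("poor observability direction:", eigvecs[:, -1], "variance:", eigvals[-1])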

      Returning to the information-theoretic approach, the degree of observability can be quantified by normalizing the mutual information between the state and the measurement:

      (2.103) $\rho(\mathbf{X}, \mathbf{Y}) = \frac{I(\mathbf{X}; \mathbf{Y})}{\max\left(H(\mathbf{X}), H(\mathbf{Y})\right)},$

      which is a time-dependent non-decreasing function that varies between 0 and 1. Before starting the measurement process, $H(\mathbf{X} \mid \mathbf{Y}) = H(\mathbf{X})$ and therefore $I(\mathbf{X}; \mathbf{Y}) = 0$, which makes $\rho(\mathbf{X}, \mathbf{Y}) = 0$. As more measurements become available, $H(\mathbf{X} \mid \mathbf{Y})$ may reduce and therefore $I(\mathbf{X}; \mathbf{Y})$ may increase, which leads to the growth of $\rho(\mathbf{X}, \mathbf{Y})$ up to 1 [33].
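      As a minimal sketch, assuming hypothetical discrete joint PMFs of the state and the measurement, the following Python snippet evaluates the degree of observability (2.103) for an uninformative measurement (independent pair, $\rho = 0$) and for a measurement that reveals the state exactly ($\rho = 1$):

import numpy as np

def H(pmf):
    pmf = np.asarray(pmf, dtype=float).ravel()
    pmf = pmf[pmf > 0]
    return float(np.sum(pmf * np.log(1.0 / pmf)))

def degree_of_observability(p_xy):
    """rho(X, Y) = I(X;Y) / max(H(X), H(Y)), as in (2.103)."""
    H_x = H(p_xy.sum(axis=1))
    H_y = H(p_xy.sum(axis=0))
    I_xy = H_x + H_y - H(p_xy)          # mutual information, (2.102)
    return I_xy / max(H_x, H_y)

# Independent state and measurement: no information gained, rho = 0.
p_ind = np.outer([0.5, 0.3, 0.2], [0.6, 0.4])
# Measurement determines the state exactly: I(X;Y) = H(X) = H(Y), rho = 1.
p_det = np.diag([0.5, 0.3, 0.2])

print(degree_of_observability(p_ind))   # approximately 0
print(degree_of_observability(p_det))   # 1.0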

      Observability can be