Industrial Data Analytics for Diagnosis and Prognosis. Yong Chen.

      The covariance matrix of Z = CX is

\[
\mathrm{cov}(\mathbf{Z}) = \mathrm{cov}(\mathbf{C}\mathbf{X}) = \mathbf{C}\boldsymbol{\Sigma}\mathbf{C}^T. \tag{3.2}
\]

      The similarity of (3.2) and (2.10) is apparent. When C is a row vector cT = (c1, c2, …, cp), CX = cTX = c1X1 + … + cpXp and

\[
E(\mathbf{c}^T\mathbf{X}) = \mathbf{c}^T\boldsymbol{\mu}, \qquad \mathrm{var}(\mathbf{c}^T\mathbf{X}) = \mathbf{c}^T\boldsymbol{\Sigma}\mathbf{c},
\]

      where μ and Σ are the mean vector and covariance matrix of X.
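
      As a quick numerical check of these identities (a sketch of mine, not from the text), the following Python snippet draws a large sample of X, forms Z = CX, and compares the sample covariance of Z with CΣCT; the particular μ, Σ, and C below are arbitrary choices for illustration.

import numpy as np

rng = np.random.default_rng(0)

# Arbitrary illustrative mean vector, covariance matrix, and transformation matrix C
mu = np.array([1.0, -2.0, 0.5])
Sigma = np.array([[2.0, 0.5, 0.3],
                  [0.5, 1.0, 0.2],
                  [0.3, 0.2, 1.5]])
C = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, -1.0]])

# Draw observations of X and form Z = CX for each one (rows are observations)
X = rng.multivariate_normal(mu, Sigma, size=200_000)
Z = X @ C.T

# The sample covariance of Z should be close to C Sigma C^T
print(np.cov(Z, rowvar=False))
print(C @ Sigma @ C.T)

# Row-vector special case: var(c^T X) = c^T Sigma c
c = np.array([0.5, -1.0, 2.0])
print(np.var(X @ c), c @ Sigma @ c)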

      Let X1 and X2 denote two subvectors of X, i.e., \(\mathbf{X} = \begin{pmatrix} \mathbf{X}_1 \\ \mathbf{X}_2 \end{pmatrix}\). The mean vector and the covariance matrix of X can be partitioned as

\[
\boldsymbol{\mu} = \begin{pmatrix} \boldsymbol{\mu}_1 \\ \boldsymbol{\mu}_2 \end{pmatrix}, \qquad
\boldsymbol{\Sigma} = \begin{pmatrix} \boldsymbol{\Sigma}_{11} & \boldsymbol{\Sigma}_{12} \\ \boldsymbol{\Sigma}_{21} & \boldsymbol{\Sigma}_{22} \end{pmatrix},
\]

      where Σ11 = cov(X1) and Σ22 = cov(X2). The matrix Σ12 contains the covariances between the components of X1 and the components of X2. By the symmetry of Σ, we have \(\boldsymbol{\Sigma}_{21} = \boldsymbol{\Sigma}_{12}^T\).
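
      As an illustration of this block structure (not from the text), the submatrices can be pulled out of a covariance matrix by array slicing; the 4 × 4 matrix below and the split after the second component are arbitrary choices.

import numpy as np

# Hypothetical 4x4 covariance matrix, partitioned with X1 = (X_1, X_2) and X2 = (X_3, X_4)
Sigma = np.array([[4.0, 1.0, 0.5, 0.2],
                  [1.0, 3.0, 0.3, 0.1],
                  [0.5, 0.3, 2.0, 0.7],
                  [0.2, 0.1, 0.7, 1.0]])
k = 2  # dimension of the first subvector X1

Sigma11 = Sigma[:k, :k]   # cov(X1)
Sigma22 = Sigma[k:, k:]   # cov(X2)
Sigma12 = Sigma[:k, k:]   # covariances between components of X1 and components of X2
Sigma21 = Sigma[k:, :k]

# Symmetry of Sigma implies Sigma21 = Sigma12^T
print(np.allclose(Sigma21, Sigma12.T))   # True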

      The normal distribution is the most commonly used distribution for continuous random variables. Many statistical models and inference methods are based on the univariate or multivariate normal distribution. One advantage of the normal distribution is its mathematical tractability. More importantly, the normal distribution turns out to be a good approximation to the “true” population distribution for many sample statistics and real-world data because of the central limit theorem, which says that the sum of a large number of independent observations from a common population with finite mean and variance approximately follows a normal distribution.
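
      A small simulation (my own illustration, not part of the text) makes this concrete: standardized sums of independent draws from a clearly non-normal distribution, here the exponential, have quantiles close to those of the standard normal.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

n = 200        # number of observations in each sum
reps = 50_000  # number of simulated sums

# Sums of n independent Exponential(1) observations (mean 1, variance 1),
# standardized to have mean 0 and variance 1
samples = rng.exponential(scale=1.0, size=(reps, n))
standardized_sums = (samples.sum(axis=1) - n) / np.sqrt(n)

# Empirical quantiles of the standardized sums vs. standard normal quantiles
probs = [0.05, 0.25, 0.50, 0.75, 0.95]
print(np.quantile(standardized_sums, probs))
print(stats.norm.ppf(probs))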

      Recall that a univariate random variable X with mean μ and variance σ2 is normally distributed, which is denoted by X ∼ N(μ, σ2), if it has the probability density function

\[
f(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\left\{-\frac{(x-\mu)^2}{2\sigma^2}\right\}, \qquad -\infty < x < \infty.
\]

      The multivariate normal distribution is an extension of the univariate normal distribution. If a p-dimensional random vector X follows a multivariate normal distribution with mean vector μ and covariance matrix Σ, the probability density function of X has the form

\[
f(\mathbf{x}) = \frac{1}{(2\pi)^{p/2}|\boldsymbol{\Sigma}|^{1/2}}\exp\left\{-\frac{1}{2}(\mathbf{x}-\boldsymbol{\mu})^T\boldsymbol{\Sigma}^{-1}(\mathbf{x}-\boldsymbol{\mu})\right\}. \tag{3.8}
\]

      We denote the p-dimensional normal distribution by Np(μ, Σ).
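
      To connect the density formula to code (a sketch of mine, not the book's), (3.8) can be evaluated directly and cross-checked against scipy.stats.multivariate_normal; the parameter values below are arbitrary.

import numpy as np
from scipy.stats import multivariate_normal

def mvn_pdf(x, mu, Sigma):
    """Evaluate the N_p(mu, Sigma) density at x using the formula in (3.8)."""
    p = len(mu)
    diff = x - mu
    quad = diff @ np.linalg.solve(Sigma, diff)  # (x - mu)^T Sigma^{-1} (x - mu)
    norm_const = (2 * np.pi) ** (p / 2) * np.sqrt(np.linalg.det(Sigma))
    return np.exp(-0.5 * quad) / norm_const

# Hypothetical parameters for a 2-dimensional example
mu = np.array([0.0, 1.0])
Sigma = np.array([[2.0, 0.8],
                  [0.8, 1.0]])
x = np.array([0.5, 0.5])

print(mvn_pdf(x, mu, Sigma))
print(multivariate_normal(mean=mu, cov=Sigma).pdf(x))  # the two values should agree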

      From (3.8), the density of a p-dimensional normal distribution depends on x through the term \((\mathbf{x}-\boldsymbol{\mu})^T\boldsymbol{\Sigma}^{-1}(\mathbf{x}-\boldsymbol{\mu})\), which is the squared distance from x to μ standardized by the covariance matrix. It is then clear that the set of x values yielding a constant height for the density forms an ellipsoid. The set of points with the same density height is called a contour. The constant probability density contour of a p-dimensional normal distribution is

\[
\left\{\mathbf{x} \mid (\mathbf{x}-\boldsymbol{\mu})^T\boldsymbol{\Sigma}^{-1}(\mathbf{x}-\boldsymbol{\mu}) = c^2\right\},
\]

which is the surface of an ellipsoid centered at μ.

      Example 3.1: Consider a bivariate (p = 2) normally distributed random vector X = (X1 X2)T. Suppose the mean vector is μ = (0 0)T and the covariance matrix is

\[
\boldsymbol{\Sigma} = \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix}.
\]

      So the variance of each variable is equal to one and the covariance matrix coincides with the correlation matrix. The inverse of the covariance matrix is

\[
\boldsymbol{\Sigma}^{-1} = \frac{1}{1-\rho^2}\begin{pmatrix} 1 & -\rho \\ -\rho & 1 \end{pmatrix}.
\]
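
      To tie Example 3.1 back to the contour discussion (a sketch of mine, not part of the text), the code below builds Σ for one illustrative value of ρ, verifies the closed-form inverse numerically, and evaluates the standardized squared distance that is constant along each elliptical contour.

import numpy as np

rho = 0.6  # an arbitrary correlation value for illustration
Sigma = np.array([[1.0, rho],
                  [rho, 1.0]])

# Closed-form inverse of the 2x2 correlation matrix
Sigma_inv = np.array([[1.0, -rho],
                      [-rho, 1.0]]) / (1.0 - rho ** 2)
print(np.allclose(np.linalg.inv(Sigma), Sigma_inv))  # True

# Standardized squared distance (x - mu)^T Sigma^{-1} (x - mu) at a few points;
# points with equal values lie on the same density contour
mu = np.zeros(2)
for x in [np.array([1.0, 1.0]), np.array([1.0, -1.0])]:
    d2 = (x - mu) @ Sigma_inv @ (x - mu)
    print(x, d2)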