      2 The variance of the errors is constant ($\sigma^2$ for all $i$). That is, it cannot be true that the strength of the model is greater for some parts of the population (smaller $\sigma^2$) and less for other parts (larger $\sigma^2$). This assumption of constant variance is called homoscedasticity, and its violation (nonconstant variance) is called heteroscedasticity. A violation of this assumption means that the least squares estimates are not as efficient as they could be in estimating the true parameters, and better estimates are available. More importantly, it also results in poorly calibrated confidence and (especially) prediction intervals. Simple R checks of this assumption and the two below are sketched after this list.

      3 The errors are uncorrelated with each other. That is, it cannot be true that knowing that the model underpredicts (for example) for one particular observation says anything at all about what it does for any other observation. This violation most often occurs in data that are ordered in time (time series data), where errors that are near each other in time are often similar to each other (such time‐related correlation is called autocorrelation). Violation of this assumption means that the least squares estimates are not as efficient as they could be in estimating the true parameters, and more importantly, its presence can lead to very misleading assessments of the strength of the regression.

      4 The errors are normally distributed. This is needed if we want to construct any confidence or prediction intervals, or hypothesis tests, which we usually do. If this assumption is violated, hypothesis tests and confidence and prediction intervals can be very misleading.
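      The following is a minimal sketch of simple graphical and numerical checks of assumptions 2 through 4. The data here are simulated stand-ins, not from the text, and the Durbin-Watson test assumes the add-on package lmtest is available.

# Hypothetical simulated data; in practice use your own data frame
set.seed(1)
dat <- data.frame(x = runif(100, 0, 10))
dat$y <- 2 + 0.5 * dat$x + rnorm(100)
fit <- lm(y ~ x, data = dat)

# Assumption 2 (constant variance): residuals vs fitted values
# should form a band of roughly constant width, with no fanning
plot(fitted(fit), residuals(fit),
     xlab = "Fitted values", ylab = "Residuals")
abline(h = 0, lty = 2)

# Assumption 3 (uncorrelated errors): Durbin-Watson test for
# autocorrelation, mainly relevant for time-ordered data
if (requireNamespace("lmtest", quietly = TRUE)) {
    print(lmtest::dwtest(fit))
}

# Assumption 4 (normality): normal quantile-quantile plot of residuals
qqnorm(residuals(fit))
qqline(residuals(fit))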

      1.3.1 INTERPRETING REGRESSION COEFFICIENTS

      The least squares regression coefficients have very specific meanings. They are often misinterpreted, so it is important to be clear on what they mean (and do not mean). Consider first the intercept, $\hat{\beta}_0$.

      $\hat{\beta}_0$: The estimated expected value of the target variable when the predictors are all equal to zero.

      Note that this might not have any physical interpretation, since a zero value for the predictor(s) might be impossible, or might never come close to occurring in the observed data. In that situation, it is pointless to try to interpret this value. If all of the predictors are centered to have zero mean, then $\hat{\beta}_0$ necessarily equals $\bar{y}$, the sample mean of the target values. Note that if there is any particular value for each predictor that is meaningful in some sense, then if each variable is centered around its particular value, the intercept is an estimate of $E(y)$ when the predictors all have those meaningful values.
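      A brief R sketch of this fact, using simulated data (the variable names and values are illustrative only): with mean-centered predictors the fitted intercept reproduces the sample mean of the target.

set.seed(2)
x1 <- rnorm(50, mean = 20)
x2 <- rnorm(50, mean = 5)
y  <- 3 + 2 * x1 - 1.5 * x2 + rnorm(50)

# Center each predictor at its sample mean before fitting
fit_centered <- lm(y ~ I(x1 - mean(x1)) + I(x2 - mean(x2)))

coef(fit_centered)[1]  # the fitted intercept ...
mean(y)                # ... equals the sample mean of y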

      The estimated coefficient for the $j$th predictor ($\hat{\beta}_j$, $j = 1, \ldots, p$) is interpreted in the following way:

      $\hat{\beta}_j$: The estimated expected change in the target variable associated with a one unit change in the $j$th predicting variable, holding all else in the model fixed.

      There are several noteworthy aspects to this interpretation. First, note the word associated — we cannot say that a change in the target variable is caused by a change in the predictor, only that they are associated with each other. That is, correlation does not imply causation.

      Another key point is the phrase “holding all else in the model fixed,” the implications of which are often ignored. Consider the following hypothetical example. A random sample of college students at a particular university is taken in order to understand the relationship between college grade point average (GPA) and other variables. A model is built with college GPA as a function of high school GPA and the standardized Scholastic Aptitude Test (SAT), with resultant least squares fit of the form

$$\widehat{GPA} = \hat{\beta}_0 + \hat{\beta}_1 \times HSGPA + \hat{\beta}_2 \times SAT.$$

      The coefficient $\hat{\beta}_1$ is then the estimated expected change in college GPA associated with a one point increase in high school GPA, holding SAT score fixed.
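      As a numerical illustration of this conditional interpretation (simulated data again; the predictors x1 and x2 are generic stand-ins, not the GPA variables above), the difference in predictions for a one unit change in one predictor, with the other held fixed, is exactly the fitted coefficient:

set.seed(3)
x1 <- rnorm(100)
x2 <- rnorm(100)
y  <- 1 + 2 * x1 + 3 * x2 + rnorm(100)
fit <- lm(y ~ x1 + x2)

# Increase x1 by one unit while holding x2 fixed at zero
pred_lo <- predict(fit, data.frame(x1 = 0, x2 = 0))
pred_hi <- predict(fit, data.frame(x1 = 1, x2 = 0))
pred_hi - pred_lo  # identical to coef(fit)["x1"]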

      Another common use of multiple regression that depends on this conditional interpretation of the coefficients is to explicitly include “control” variables in a model in order to try to account for their effect statistically. This is particularly important in observational data (data that are not the result of a designed experiment), since in that case, the effects of other variables cannot be ignored as a result of random assignment in the experiment. For observational data it is not possible to physically intervene in the experiment to “hold other variables fixed,” but the multiple regression framework effectively allows this to be done statistically.
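      The following sketch (simulated observational data; the confounder z and its effect sizes are invented for illustration) shows what “statistically holding fixed” buys: the coefficient of x is distorted when a correlated control variable is omitted, and recovers its true value of 1.0 once the control is included.

set.seed(4)
z <- rnorm(200)              # control variable (confounder)
x <- 0.8 * z + rnorm(200)    # predictor of interest, correlated with z
y <- 1 + 1.0 * x + 2.0 * z + rnorm(200)

coef(lm(y ~ x))["x"]      # omitting z: estimate absorbs part of z's effect
coef(lm(y ~ x + z))["x"]  # controlling for z: close to the true 1.0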

      Having said this, we must recognize that in many situations, it is impossible from a practical point of view to change one predictor while holding all else fixed. Thus, while we would like to interpret a coefficient as accounting for the presence of other predictors in a physical sense, it is important (when dealing with observational data in particular) to remember that linear regression is at best only an approximation to the actual underlying random process.

      1.3.2 MEASURING THE STRENGTH OF THE REGRESSION RELATIONSHIP

      The least squares estimates possess an important property:

$$\sum_{i=1}^{n} (y_i - \bar{y})^2 = \sum_{i=1}^{n} (\hat{y}_i - \bar{y})^2 + \sum_{i=1}^{n} (y_i - \hat{y}_i)^2.$$

      That is, the variability in the target variable (the left side, the corrected total sum of squares) splits into the variability accounted for by the regression (the regression sum of squares) and the variability left over after doing the regression (the residual sum of squares). This immediately suggests the usefulness of $R^2$ as a measure of the strength of the regression relationship, where

$$R^2 = \frac{\sum_{i} (\hat{y}_i - \bar{y})^2}{\sum_{i} (y_i - \bar{y})^2} = 1 - \frac{\sum_{i} (y_i - \hat{y}_i)^2}{\sum_{i} (y_i - \bar{y})^2}.$$

      The $R^2$ value (also called the coefficient of determination) estimates the population proportion of variability in $y$ accounted for by the best linear combination of the predictors.
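      A short R verification of this decomposition on simulated data (a sketch only; the model here is arbitrary): $R^2$ computed from the sums of squares matches the value that summary() reports.

set.seed(5)
x <- rnorm(100)
y <- 1 + 2 * x + rnorm(100)
fit <- lm(y ~ x)

ss_total    <- sum((y - mean(y))^2)            # corrected total SS
ss_regress  <- sum((fitted(fit) - mean(y))^2)  # regression SS
ss_residual <- sum(residuals(fit)^2)           # residual SS

ss_regress / ss_total       # R-squared from the decomposition
1 - ss_residual / ss_total  # equivalent form
summary(fit)$r.squared      # matches lm()'s reported value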