
       State null hypothesis: NH: E(Y) = c, where c is the claimed value of the population mean.

       State alternative hypothesis: AH: E(Y) ≠ c.

       Calculate test statistic: t = (mY − c)/(sY/√n) = 2.40, where mY is the sample mean, sY is the sample standard deviation, and n = 30 is the sample size.

       Set significance level: 5%.

       Look up t-table:
       – Critical value: The 97.5th percentile of the t-distribution with 29 degrees of freedom is 2.045 (from Table C.1); the rejection region is therefore any t-statistic greater than 2.045 or less than −2.045 (we need the 97.5th percentile in this case because this is a two-tail test, so we need half the significance level in each tail).
       – p-value: The area to the right of the t-statistic (2.40) for the t-distribution with 29 degrees of freedom is less than 0.025 but greater than 0.01 (since from Table C.1 the 97.5th percentile of this t-distribution is 2.045 and the 99th percentile is 2.462); thus, the upper-tail area is between 0.01 and 0.025, and the two-tail p-value is twice as big as this, that is, between 0.02 and 0.05.

       Make decision:
       – Since the t-statistic of 2.40 falls in the rejection region, we reject the null hypothesis in favor of the alternative.
       – Since the p-value is between 0.02 and 0.05, it must be less than the significance level (0.05), so we reject the null hypothesis in favor of the alternative.

       Interpret in the context of the situation: The 30 sample sale prices suggest that a population mean equal to the hypothesized value seems implausible; the sample data favor a value different from this (at a significance level of 5%).
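The table-lookup steps above can be reproduced in software. The following is a minimal sketch using SciPy's t-distribution functions for the observed t-statistic of 2.40 with 29 degrees of freedom; SciPy itself is an assumption here, not something the text prescribes.

```python
# Sketch of the two-tail t-test computations above: critical value and
# p-value for t = 2.40 with 29 degrees of freedom (n = 30 sale prices).
from scipy import stats

t_stat = 2.40
df = 29           # n - 1
alpha = 0.05

# Two-tail test: half of alpha goes in each tail, so use the 97.5th percentile.
critical = stats.t.ppf(1 - alpha / 2, df)    # approximately 2.045

# Two-tail p-value: twice the upper-tail area beyond the observed t-statistic.
p_value = 2 * stats.t.sf(t_stat, df)

print(f"critical value = {critical:.3f}")
print(f"p-value = {p_value:.4f}")

# Both routes lead to the same decision.
reject = abs(t_stat) > critical
assert reject == (p_value < alpha)
```

Both the rejection-region comparison and the p-value comparison give the same decision, matching the two bullets under "Make decision" above.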

      

      1.6.3 Hypothesis test errors

      When we introduced significance levels in Section 1.6.1, we saw that the person conducting the hypothesis test gets to choose this value. We now explore this notion a little more fully.

      Whenever we conduct a hypothesis test, either we reject the null hypothesis in favor of the alternative or we do not reject the null hypothesis. “Not rejecting” a null hypothesis is not quite the same as “accepting” it. All we can say in such a situation is that we do not have enough evidence to reject the null—recall the legal analogy where defendants are not found “innocent” but rather are found “not guilty.” Anyway, regardless of the precise terminology we use, we hope to reject the null when it really is false and to “fail to reject it” when it really is true. Anything else will result in a hypothesis test error. There are two types of error that can occur, as illustrated in the following table:

Hypothesis test errors

                        Decision
                        Do not reject NH     Reject NH in favor of AH
Reality   NH true       Correct decision     Type 1 error
          NH false      Type 2 error         Correct decision
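The meaning of the significance level can be seen in the table above: when the null hypothesis really is true, a test conducted at a 5% significance level commits a Type 1 error about 5% of the time. The following simulation is an illustrative sketch (not from the text); the population parameters and random seed are arbitrary choices made for the example.

```python
# Simulate many samples from a population whose true mean really equals the
# null value, and count how often a 5% two-tail t-test (wrongly) rejects.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, df, alpha = 30, 29, 0.05
critical = stats.t.ppf(1 - alpha / 2, df)

trials = 10_000
# Null value is 0 here, so each sample comes from a mean-0 population.
samples = rng.normal(loc=0.0, scale=1.0, size=(trials, n))
t_stats = samples.mean(axis=1) / (samples.std(axis=1, ddof=1) / np.sqrt(n))

# Rejecting a true null hypothesis is a Type 1 error.
type1_rate = np.mean(np.abs(t_stats) > critical)
print(f"Type 1 error rate ≈ {type1_rate:.3f}")   # close to alpha = 0.05
```

The estimated Type 1 error rate lands near 0.05, which is exactly what choosing a 5% significance level means.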

      So far, we have focused on estimating a univariate population mean, E(Y), and quantifying our uncertainty about the estimate via confidence intervals or hypothesis tests. In this section, we consider a different problem, that of “prediction.” In particular, rather than estimating the mean of a population of Y-values based on a sample, Y1, Y2, …, Yn, consider predicting an individual Y-value picked at random from the population.

      Intuitively, this sounds like a more difficult problem. Imagine that rather than just estimating the mean sale price of single-family homes in the housing market based on our sample of 30 homes, we have to predict the sale price of an individual single-family home that has just come onto the market. Presumably, we will be less certain about our prediction than we were about our estimate of the population mean (since it seems likely that we could be further from the truth with our prediction than when we estimated the mean—for example, there is a chance that the new home could be a real bargain or totally overpriced). Statistically speaking, Figure 1.5 illustrates this “extra uncertainty” that arises with prediction—the population distribution of data values, Y (more relevant to prediction problems), is much more variable than the sampling distribution of sample means, mY (more relevant to mean estimation problems).

      We can tackle prediction problems with a similar process to that of using a confidence interval to tackle estimating a population mean. In particular, we can calculate a prediction interval of the form “point estimate ± uncertainty” or “(point estimate − uncertainty, point estimate + uncertainty).” The point estimate is the same one that we used for estimating the population mean, that is, the observed sample mean, mY. This is because mY is an unbiased estimate of the population mean, E(Y), and we assume that the individual Y-value we are predicting is a member of this population. As discussed in the preceding paragraph, however, the “uncertainty” is larger for prediction intervals than for confidence intervals. To see how much larger, we need to return to the notion of a model that we introduced in Section 1.2.
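To preview how much larger the prediction uncertainty is, here is a sketch contrasting the two interval half-widths under the standard normal-model formulas (CI: mY ± t* sY/√n; PI: mY ± t* sY √(1 + 1/n)). The text derives the prediction formula later via the model of Section 1.2, so the formula here, the sample numbers, and the seed are all assumptions made for illustration.

```python
# Contrast a 95% confidence interval for the population mean with a 95%
# prediction interval for an individual value, using n = 30 simulated
# "sale prices" (hypothetical numbers, not the book's data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sample = rng.normal(loc=280.0, scale=50.0, size=30)

n = sample.size
m = sample.mean()            # point estimate, same for both intervals
s = sample.std(ddof=1)
t_crit = stats.t.ppf(0.975, n - 1)

ci_half = t_crit * s / np.sqrt(n)             # CI: m +/- t* s/sqrt(n)
pi_half = t_crit * s * np.sqrt(1 + 1 / n)     # PI: m +/- t* s sqrt(1 + 1/n)

print(f"95% CI: ({m - ci_half:.1f}, {m + ci_half:.1f})")
print(f"95% PI: ({m - pi_half:.1f}, {m + pi_half:.1f})")
```

Note that the two intervals share the same point estimate and critical value; the prediction interval is wider only because of the extra 1 under the square root, which captures the variability of an individual Y-value on top of the variability of the sample mean.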