Economic Evaluation in Education. Henry M. Levin. ISBN 9781483381824.
and well-trained teachers. In an environment of low student achievement and resource scarcity, determining the cost-effectiveness (CE) of school investments becomes particularly important. How can the limited funds available to the school system be spent in order to maximize the academic achievement of students?

      The following table shows the results from a CE analysis by Harbison and Hanushek (1992). First, the range of possible educational interventions is specified; these are shown in the first column. The first category is infrastructure: the provision of potable water, basic school furniture (e.g., desks), additional school facilities (e.g., school offices), and a combination of all of these (“hardware”). The second category, material inputs, includes two interventions: (1) student textbooks and writing materials and (2) their combination (“software”). The teacher category includes two separate in-service teacher training programs (curso de qualificação and Logos II), either 3 or 4 years of additional formal schooling, and an increase in teacher salaries.

      Table 2. Costs, Effects, and Cost-Effectiveness Ratios for Primary School Investments

      Source: Adapted from Harbison and Hanushek (1992, Table C6-1).

      Notes: The original table presents effectiveness-cost ratios, rather than the CE ratios presented in this table. For an explanation of the difference, see Chapter 8 of this volume. NE: no evidence of positive effect. NA: not applicable. Adjusted to 2015 dollars.

      The second step is to determine the costs of each intervention. To derive these costs as an annual per-student amount, the authors used the “ingredients” method. The ingredients of each intervention, such as materials and personnel time, were exhaustively listed and priced out; the costs of durable inputs, such as infrastructure, were annualized. As shown in the second column of the table, the cost per student varied across the interventions, with more intensive investments (e.g., hardware and software) being progressively more costly within their category. Costs per student are low because the study site is a poor area, the interventions date from the 1980s, and the exchange rate used to translate costs into dollars was low.
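      The annualization of durable inputs can be sketched as a standard amortization calculation. The following is a minimal illustration, not the authors' actual figures: the replacement cost, interest rate, lifetime, and class size below are all hypothetical.

```python
def annualized_cost(replacement_cost, interest_rate, lifetime_years):
    """Annual cost of a durable input (e.g., furniture or a building):
    the constant yearly payment that amortizes its replacement cost
    over its useful lifetime at the given interest (discount) rate."""
    r, n = interest_rate, lifetime_years
    return replacement_cost * (r * (1 + r) ** n) / ((1 + r) ** n - 1)

# Hypothetical example: a $1,000 set of school furniture
# lasting 10 years, evaluated at a 5% interest rate
per_year = annualized_cost(1000, 0.05, 10)   # about $129.50 per year
per_student = per_year / 30                  # assuming a class of 30 students
```

      Dividing the annualized cost by the number of students served yields the annual per-student cost that appears in a CE table.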

      The third step is to estimate the effectiveness of each intervention. Here, the measure of effectiveness is a test of Portuguese language achievement among second graders. To estimate the incremental effectiveness of each intervention, the authors use nonexperimental regression analyses. Notably, the interventions vary significantly in effectiveness: hardware, school facilities, and textbooks are the most effective at increasing test scores, while some interventions have no statistically significant impact on achievement.

      The final step is to combine the data on costs and effectiveness by calculating a CE ratio. The ratio indicates the cost required to attain a 1-point increase in achievement. The final column of the table shows which interventions are the most cost-effective—that is, yield achievement for the least amount of resources. Clearly, we should be most interested in investing in those interventions that exhibit the lowest cost per unit of effect.

      A simple examination of the CE ratios shows that material inputs have the lowest CE ratios. By providing more textbooks and writing materials, policymakers can attain 1 point of effectiveness at a cost of only $0.62 to $0.88 per student. In contrast, increasing teacher salaries costs $15.47 per unit of effect; it requires more than 20 times as many resources to obtain the same gain in learning as textbooks.
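      The arithmetic behind such comparisons is simply cost divided by effect. A minimal sketch using the CE ratios quoted above (the assignment of $0.62 and $0.88 to the two material-input interventions is an assumed reading of the text):

```python
# CE ratios quoted in the text: dollars per 1-point achievement gain
ce_ratios = {
    "textbooks": 0.62,            # assumed mapping of the $0.62 figure
    "writing materials": 0.88,    # assumed mapping of the $0.88 figure
    "teacher salaries": 15.47,
}

# Rank interventions from most to least cost-effective
# (the lowest CE ratio buys a point of achievement most cheaply)
ranked = sorted(ce_ratios, key=ce_ratios.get)

# Relative resource requirement: salaries vs. textbooks
multiple = ce_ratios["teacher salaries"] / ce_ratios["textbooks"]
```

      Here `multiple` comes out at roughly 25, consistent with the claim of "more than 20 times" above.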

      How would our decisions have been different if costs had been excluded from the analysis? We might have been tempted to invest heavily in school facilities and hardware, which exhibit the highest effectiveness. But they are also among the most costly inputs and, consequently, somewhat less cost-effective. Unsurprisingly, the most effective interventions may be too costly to justify their use.

      Source: Adapted from Harbison and Hanushek (1992).

      There is no presumption that the most effective intervention is also the most cost-effective. There are certainly cases where highly effective interventions are so costly to implement that they no longer appear viable or justifiable, as well as cases where interventions with very modest effects are worthwhile because of their low cost. Yet, without an analysis of costs linked to effects, it would be impossible to know this.

      The CE approach has a number of strengths. Most important is that it merely requires combining cost data with the effectiveness data that are observed from an educational evaluation to create a CE comparison. Further, it lends itself well to an evaluation of alternatives that are being considered for accomplishing a particular educational goal. Its one major disadvantage is that one can compare the CE ratios only among alternatives with a similar goal. One cannot compare alternatives with different goals (e.g., reading vs. mathematics or high school completion vs. health), nor can one make an overall determination of whether a program is worthwhile in an absolute sense. That is, we can state whether a given alternative is relatively more cost-effective than other alternatives, but we cannot state whether its total benefits exceed its total costs. That can be ascertained only through BC analysis.

      1.3.3. Cost-Utility Analysis

      CU analysis is a close cousin of CE analysis. It refers to the evaluation of alternatives according to a comparison of their costs and their utility (a term that is often interpreted as value or satisfaction to an individual or group). Unlike CE analysis, which relies upon a single measure of effectiveness (e.g., a test score, the number of dropouts averted), CU analysis uses information on a range of outcomes to assess overall satisfaction. These outcomes are then weighted based on the decisionmaker’s preferences—that is, how much each outcome contributes to total utility. Data on preferences can be derived in many ways, either through highly subjective estimates by the researcher or through more rigorous methods designed to carefully elicit opinions as to the value of each outcome. Once overall measures of utility have been obtained, however, we proceed in the same way as in CE analysis: We choose the interventions that provide a given level of utility at the lowest cost or those that provide the greatest amount of utility for a given cost. In short, CU analysis is CE analysis with the outcome weighted by stakeholders’ perceptions or preferences.

      We can apply CU analysis to a simple example of alternative reading programs whose outcomes are not valued equally by the decisionmaker. One reading intervention raises test scores by 0.6 standard deviations, and another raises test scores by 0.4 standard deviations. Suppose a statewide achievement policy specifies that a test score gain of 0.2 meets accountability standards for the school; then, although test score gains beyond that threshold are desirable, they are not valued in the same way as gains up to an effect size of 0.2. If we assume gains beyond 0.2 are worth half as much as gains below 0.2, then the utility of intervention one is 0.4 (= 0.2 + (0.6 – 0.2)/2) and of intervention two is 0.3 (= 0.2 + (0.4 – 0.2)/2). Now, with respective costs of $400 and $200, the second intervention is preferred on a CU basis: The first intervention is twice as costly but only one-third more valuable. Of course, we could imagine utility weights that would overturn this conclusion.
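      The piecewise weighting in this example (full value up to the 0.2 threshold, half value beyond it) can be reproduced directly. The figures below are the ones given in the text; only the function name is invented here.

```python
def utility(effect_size, threshold=0.2, weight_above=0.5):
    """Weighted utility of a test score gain: gains up to the
    accountability threshold count at full value; gains beyond it
    count at the reduced weight (half value in this example)."""
    return min(effect_size, threshold) + weight_above * max(effect_size - threshold, 0)

u1, u2 = utility(0.6), utility(0.4)    # 0.4 and 0.3, as in the text
cu1, cu2 = 400 / u1, 200 / u2          # cost per unit of utility
# cu2 < cu1, so the cheaper intervention is preferred on a CU basis
```

      The CU ratios work out to $1,000 and about $667 per unit of utility, so the second intervention wins, exactly as the text concludes.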

      CU analysis is in one sense an extension of CE analysis in that it requires that the preferences of the decisionmakers be explicitly incorporated into the research. The classic example of a utility-based measure is the quality-adjusted life year (QALY) used by health sciences researchers (Drummond et al., 2009; Neumann, Thorat, Shi, Saret, & Cohen, 2015). Unfortunately, the challenge of CU analysis is finding valid ways to determine the values of outcomes in order to weight these preferences relative to costs, and this often requires separate modeling exercises of substantial complexity. The simple reading example in the previous paragraph was made easier because there was only one outcome—test scores; when there are multiple outcomes that need to be combined, the utility calculations become more difficult and subjective.

      1.3.4. Benefit-Cost Analysis

      BC analysis is an analytical tool that compares policies or interventions based on the difference between their costs and a monetized measure of their effects