Embedded Formative Assessment. Dylan Wiliam.
Publisher: Ingram. ISBN: 9781945349232.
the potential impact of educational technology, such as computers, on classrooms. While there is no shortage of boosters for the potential of computers to transform education, reliable evidence of their impact on student achievement is rather hard to find. The history of computers in education is perhaps best summarized by the title Oversold and Underused (Cuban, 2002). This is not to say that computers have no place in education; some computer programs can be effective at teaching challenging subject matter. One good example is Carnegie Learning’s Cognitive Tutor: Algebra I, developed over a period of twenty years at Carnegie Mellon University (Ritter, Anderson, Koedinger, & Corbett, 2007). The program has a very specific focus—teaching procedural aspects of ninth-grade algebra—and therefore should be used only for two or three hours per week, but it is more effective than many teachers at teaching this particular content (Pane, Griffin, McCaffrey, & Karam, 2014; Ritter et al., 2007). However, such examples are rare, and computers have failed to revolutionize our classrooms in the way leaders predicted (Bulman & Fairlie, 2016). As Heinz Wolff once said, “The future is further away than you think” (as cited in Wolff & Jackson, 1983).

      Attention has become focused on the potential of interactive whiteboards. In the hands of expert practitioners, these are stunning pieces of educational technology, but as tools for improving educational outcomes at scale, they appear to be very limited. We know this from an experiment that took place in London. England’s secretary of state for education, Charles Clarke, was so taken with the interactive whiteboard that he established a fund that doubled the number of interactive whiteboards in London schools. The net impact on student achievement was zero (Moss et al., 2007). But, say the technology boosters, you should provide professional development to go with the technology. This may be so, but if interactive whiteboards are only effective when teachers are given a certain number of hours of professional development, then surely it is right to ask whether the same number of hours of professional development could be more usefully, and less expensively, used in another way.

      As a final example of an effort to produce substantial improvement in student achievement at scale, it is instructive to consider the impact of teachers’ aides in England. One large-scale evaluation of the impact of support staff on student achievement found that teachers’ aides actually lowered the performance of the students they intended to help (Blatchford et al., 2009), largely because in many schools, aides were routinely tasked to help the students with the most profound learning needs—a task for which they were not well suited. Of course, this does not mean that the use of teachers’ aides cannot increase student achievement. Evidence from North Carolina suggests that teachers’ aides can be cost-effective—particularly for minority students—if they are well managed and assigned suitable classroom roles (Clotfelter, Hemelt, & Ladd, 2016). However, all this means is that two or three teachers’ aides can be as effective as a regular teacher. Where qualified teachers are in short supply, the deployment of teachers’ aides may be a useful short-term measure. However, it is unlikely to have much impact on overall student achievement.

      The reform efforts discussed here, and the history of a host of other reform efforts, show that improving education at scale is clearly more difficult than we often imagine. Why have we pursued such ineffective policies for so long? Much of the answer lies in the fact that we have been looking in the wrong place for answers.

      Economists have known about the importance of education for economic growth for years, and this knowledge has led to surges of interest in studies of school effectiveness. Some schools appeared to get consistently good test results, while others seemed to get consistently poor results. The main thrust of the first generation of school effectiveness research, which began in the 1970s, was to understand the characteristics of the most effective schools. Perhaps if we understood that, we could reproduce the same effect in other schools.

      Unfortunately, things are rarely that simple. Trying to emulate the characteristics of today’s most effective schools would lead to the following three measures.

      1. First, get rid of the boys. All over the developed world, girls are outperforming boys, even in traditionally male-dominated subjects such as mathematics and science (OECD, 2016). The more girls you have in your school, the better you are going to look.

      2. Second, become a parochial school. Again, all over the world, parochial schools tend to get better results than other schools, although this appears to be largely because parochial schools tend to be more socially selective than public schools (see, for example, Cullinane, Hillary, Andrade, & McNamara, 2017).

      3. Third, and most important, move your school into a nice, leafy, suburban area. This will produce three immediate benefits. First, it will bring you much higher-achieving students. Second, parents will better support their students, whether this is in terms of supporting the school and its mission or paying for private tuition. Third, the school will have more money—potentially lots more. Some American schools receive more than $40,000 per student per year, compared with others that receive less than $5,000 per student per year (National Public Radio, 2016).

      In case it wasn’t obvious, these are not, of course, serious suggestions. Girls’ schools, parochial schools, and schools in affluent areas get better test scores primarily because of who goes there, rather than how good the schools are, as researchers pointed out in the second generation of school effectiveness studies (see, for example, Thrupp, 1999). These researchers showed that most of the differences between school scores are due to the differences in students attending those schools rather than any differences in the quality of the schools themselves. The OECD data (Programme for International Student Assessment, 2010) are helpful in quantifying this. The PISA data show that 74 percent of the variation in the achievement of fifteen-year-olds in the United States is within schools, which means that 26 percent of the variation in student achievement is between schools (that is, some schools get better test scores than others). However, around two-thirds of the between-school variation is caused by differences in the students attending those schools. This, in turn, means that only 8 percent of the variability in student achievement is attributable to the school—or, in reverse, that 92 percent of the variability in achievement is not attributable to the school (PISA, 2010). What this means in practice is that if fifteen out of a class of thirty students achieve proficiency in an average school, then seventeen out of thirty would do so in a “good” school (one standard deviation above the mean, or one of the best one-third of all schools) and thirteen out of thirty would do so in a “bad” school (one standard deviation below the mean). While these differences are no doubt important to the four students in the middle who are affected, they are, in my experience, much smaller than people imagine. It seems that Basil Bernstein (1970) was right, therefore, when he said that “education cannot compensate for society” and that we should be realistic about what schools can, and cannot, do (as cited in Thrupp, 1999).
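The arithmetic behind the “only 8 percent” figure can be checked in a few lines. The numbers below are the ones quoted in the passage, and treating the school-attributable share as a simple product is a deliberate simplification of the underlying PISA variance decomposition:

```python
# Back-of-the-envelope check of the variance figures quoted above.
# All inputs come from the passage (PISA, 2010).
within_school = 0.74                    # variation within schools
between_school = 1 - within_school      # 0.26, variation between schools
intake_share = 2 / 3                    # between-school variation due to student intake

school_attributable = between_school * (1 - intake_share)
print(f"{school_attributable:.1%}")     # prints "8.7%", close to the quoted 8 percent
```

The remaining third of the between-school variation, about 8.7 percent of the total, is what is left to attribute to the school itself, which is why the text can round it to 8 percent.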

      However, as higher-quality data sets have become available, we have been able—in the third generation of school effectiveness studies—to dig a little deeper. In particular, where a data set allows us to compare a student’s achievement at the beginning of the year and at the end of the year, we can estimate the school’s value added (the difference between what a student knew when she arrived at a school and what she knew when she left). It turns out that as long as you go to school (and that’s important), it doesn’t matter very much which school you go to, but it matters very much which classroom you’re in.
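To make the idea of value added concrete, here is a minimal sketch: regress end-of-year scores on beginning-of-year scores, then average each school’s residuals. The data, school labels, and the use of a single ordinary-least-squares fit are all invented for illustration; real value-added models are considerably more elaborate:

```python
# Hypothetical value-added sketch with made-up data for two schools.
from statistics import mean

students = [
    # (school, score_at_start_of_year, score_at_end_of_year) -- invented numbers
    ("A", 40, 55), ("A", 60, 70), ("A", 50, 66),
    ("B", 45, 52), ("B", 65, 68), ("B", 55, 61),
]

xs = [start for _, start, _ in students]
ys = [end for _, _, end in students]
x_bar, y_bar = mean(xs), mean(ys)

# Ordinary least-squares fit: predicted end score = intercept + slope * start score
slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
         / sum((x - x_bar) ** 2 for x in xs))
intercept = y_bar - slope * x_bar

# A school's "value added" is the average gap between its students' actual
# and predicted end-of-year scores.
value_added = {}
for school in ("A", "B"):
    residuals = [end - (intercept + slope * start)
                 for sch, start, end in students if sch == school]
    value_added[school] = mean(residuals)
```

With these invented numbers, school A’s students end the year above their predicted scores and school B’s below, even though both schools enroll students with the same spread of starting scores; that separation of intake from progress is the point of the value-added approach.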

      In the United States, the classroom effect appears to be at least four times the size of the school effect (PISA, 2007), which, predictably, has generated a lot of interest in what might be causing these differences. It turns out that these substantial differences between how much students learn in different classes have little to do with class size, how the teacher groups the students for instruction, or even the presence of between-class grouping practices (for example, tracking). The most critical difference is simply the quality of the teacher.

      Parents have always understood how important having a good teacher is to their children’s progress, but only since the mid-1990s have we been able to quantify exactly how much of a difference teacher quality makes.