OLPC was founded by Nicholas Negroponte, the chairman emeritus of the Media Laboratory at the Massachusetts Institute of Technology (MIT) and current member of the board of directors for Motorola (OLPC, 2013d). Since its inception, the OLPC initiative has also partnered with Google, eBay, and Citigroup (OLPC, 2013c) and gained widespread recognition through organizations such as TED (Technology, Entertainment, Design), which promotes “ideas worth spreading” (TED, 2013) through a set of global conferences and online videos.
According to Negroponte (2006), OLPC teachers reported declines in truancy to almost zero, declines in discipline problems, increases in student engagement, and practically universal attendance at parent-teacher conferences. These findings notwithstanding, this information on the effects of OLPC is only anecdotal. Negroponte himself explicitly stated that he considers research on the effectiveness of OLPC to be unnecessary:
This is not something you have to test. The days of pilot projects are over. When people say, “Well, we’d like to do three or four thousand in our country to see how it works,” [we say,] “Go to the back of the line, and someone else will do it, and then when you figure out that this works, you can join, as well.” (Negroponte, 2006)
The first randomized research evaluations of the program were carried out in Peru, OLPC’s largest deployment country, with over 8,300 participating schools. Julián Cristia, Pablo Ibarrarán, Santiago Cueto, Ana Santiago, and Eugenio Severín (2012) found that while “the intervention generated a substantial increase in computer use both at school and at home” and “positive impacts on cognitive skills and competences related to computer use,” the results “indicate limited effects on academic achievement” (p. 21). Furthermore, they stated:
To improve learning in Math and Language, there is a need for high-quality instruction. From previous studies, this does not seem the norm in public schools in Peru, where much rote learning takes place (Cueto et al., 2006; Cueto, Ramírez, and León, 2006). Hence, our suggestion is to combine the provision of laptops with a pedagogical model targeted toward increased academic achievement by students. Our results suggest that computers by themselves, at least as initially delivered by the OLPC program, do not increase achievement in curricular areas. (p. 21)
Other studies on one-to-one laptop use have produced mixed results, as well. According to Colleen Gillard (2011), Maine’s $35 million laptop program for middle school students sometimes “floundered” in its implementation as a result of “poor execution, tepid leadership, or inadequate teacher training” (p. 84). Program director Bette Manchester said, “People don’t realize it’s more about teaching and learning than technology” (as cited in Gillard, 2011, pp. 84–85).
Likewise, a study by Shapley and her colleagues (2011) suggested that “large-scale one-to-one laptop programs are difficult to implement, and, as a result, programs may produce either very small or no improvements in test scores” (p. 312). They added, “If improved standardized test scores is the primary justification for investments in one-to-one laptop programs, then results probably will be disappointing” (p. 312). However, Shapley and her colleagues found other positive effects of one-to-one laptop programs that are congruent with the benefits discovered by Cristia and his colleagues (2012) in Peru. For example, they pointed out, “Individual laptops and digital resources allowed middle school students to develop greater technical proficiency and reduced their disciplinary problems in classes.… Especially noteworthy was the positive immersion effect on students from lower socioeconomic backgrounds” (Shapley et al., 2011, p. 310). Mark Windschitl and Kurt Sahl (2002) also reported positive effects; students “uniformly acknowledged a sense of pride in having their own computers” and reported that they were “more organized because most of their schoolwork was stored on the laptops” (p. 201). In sum, while one-to-one laptop programs may increase students’ technological skills and technological confidence, effective teaching is required to produce meaningful gains in student achievement.
The Internet
The Internet is considered the most efficient system for distributing new reading, writing, and communication tools in the history of civilization (Lankshear & Knobel, 2006). Between 2000 and 2012, global Internet usage increased by 566 percent, and in 2013, over 2.4 billion people, or 34 percent of the world’s population, had access to the Internet (Internet World Stats, 2013). In the United States, nearly all public schools have access to the Internet (Snyder & Dillow, 2012). Most educational research on the Internet is focused on two areas: (1) distance learning and (2) blended learning.
Distance Learning
In distance learning, students do not attend a physical school but instead take lessons remotely over the Internet or via broadcast technologies. The broader category of distance learning encompasses earlier technologies such as correspondence courses, educational television, and videoconferencing. A number of meta-analyses have reported relatively small effect sizes for distance learning (defined in this broader sense) versus regular classroom learning, prompting debate over the efficacy of distance learning (see Bernard et al., 2004; Cavanaugh, Gillan, Kromrey, Hess, & Blomeyer, 2004; Jenks & Springer, 2002; Zhao, Lei, Yan, Lai, & Tan, 2005). For example, Hattie (2009) reported an average effect size for distance education of only 0.09 (equivalent to a 4 percentile point gain).
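The percentile-point equivalents quoted here and throughout this section follow from the standard normal distribution: an effect size (Cohen’s d) is interpreted as the expected shift, in standard deviations, for an average student, and the percentile gain is the area under the normal curve between the 50th percentile and that shifted point. A minimal sketch of the conversion, using Python’s standard library (the function name is illustrative, not drawn from any of the cited studies):

```python
from statistics import NormalDist

def percentile_gain(effect_size: float) -> float:
    """Percentile-point gain for a student at the 50th percentile of the
    control group, assuming normally distributed achievement scores."""
    return (NormalDist().cdf(effect_size) - 0.5) * 100

# Effect sizes reported in this section:
print(round(percentile_gain(0.09)))  # distance education -> 4
print(round(percentile_gain(0.18)))  # web-based learning -> 7
print(round(percentile_gain(0.49)))  # digital literacy tools -> 19
```

Running this reproduces the percentile figures cited from Hattie (2009) and Pearson, Ferdig, Blomeyer, and Moran (2005), which is how such translations are conventionally computed.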
However, a report by the U.S. Department of Education’s Office of Planning, Evaluation and Policy Development (Means, Toyama, Murphy, Bakia, & Jones, 2010) called for a fresh perspective on distance learning, since its manifestations are changing rapidly:
The question of the relative efficacy of online and face-to-face instruction needs to be revisited, however, in light of today’s online learning applications, which can take advantage of a wide range of Web resources, including not only multimedia but also Web-based applications and new collaboration technologies. These forms of online learning are a far cry from the televised broadcasts and videoconferencing that characterized earlier generations of distance education. (p. xi)
The sentiment expressed by Means and her colleagues—that manifestations of distance learning are changing—is echoed by others (see Bernard et al., 2004; Jenks & Springer, 2002; Sitzmann, Kraiger, Stewart, & Wisher, 2006; Tallent-Runnels et al., 2006; Waxman et al., 2003; C. Zirkle, 2003).
Means and her colleagues’ (2010) optimistic perspective likely reflects the fact that their meta-analysis focused on web-based instruction, a relatively new format for distance learning. Hattie (2009) reported an average effect size of 0.18 for web-based learning (equivalent to a 7 percentile point gain). A meta-analysis by Traci Sitzmann, Kurt Kraiger, David Stewart, and Robert Wisher (2006) found that web-based learning was most effective for declarative knowledge (understanding of facts, details, principles, and generalizations), as opposed to procedural knowledge (strategies and processes).
Blended Learning
Instruction that combines online and face-to-face elements is known as blended learning, or hybrid learning (Means et al., 2010; Schulte, 2011). The aforementioned meta-analysis by Means and her colleagues (2010) compared blended learning to face-to-face instruction and reported an average effect size of 0.35 in favor of the blended approach. Similarly, Sitzmann and her colleagues (2006) examined the effects of web-based instruction as a blended supplement to classroom instruction and reported a mean effect size of 0.34 for declarative knowledge and 0.52 for procedural knowledge. David Pearson, Richard Ferdig, Robert Blomeyer, and Juan Moran (2005) examined the impact of digital literacy tools on middle school students through a synthesis of studies published between 1988 and 2005. Most of the digital literacy tools they examined were used in a blended approach. They found an overall effect size of 0.49, indicating a 19 percentile point gain. Finally, Hattie (2009) reported a number of specific uses of digital media that indicated a blended approach. For example, he reported an average effect size of 0.52 for interactive video, an average effect size of 0.22 for audiovisual methods (including television, film, video, and slides), and an average effect size of 0.33 for simulations. As with the research on computer use, this research seems to indicate that using