School district administrators and principals are inundated with salesmen peddling computers and software programs. Many claim that scientific research proves their wares work. Can they be believed? The researchers at the Abdul Latif Jameel Poverty Action Lab (J-PAL), an organization inside the economics department of the Massachusetts Institute of Technology, scoured academic journals, the internet and evaluation databases and found only 113 studies on using technology in schools that were scientifically rigorous.
“Education technology is an area where innovation has outpaced rigorous research,” said Vincent Quan, who runs the North American education unit at J-PAL. “We wanted to find all the studies and distill the main lessons so that decision makers can decide which programs to scale up and invest in.”
To meet J-PAL’s high standards, the study either had to be a randomized controlled trial, in which students were randomly selected to try a technology and studied alongside students who didn’t try it, or it had to be a “regression discontinuity design,” in which students with statistically similar test scores were studied, but those just below a test-score threshold tried the technology, for example, and those just above it didn’t. Both types of studies are expensive and typically take two years or more to conduct — time and money that ed-tech entrepreneurs usually don’t have. Technology can become obsolete by the time the results come out.
J-PAL threw out some other quasi-experimental studies because it couldn’t rule out that the students trying the technology were already higher achieving or more motivated to learn than the ones who didn’t try it. J-PAL also didn’t include observational studies, in which results were based on teacher surveys of how much they thought their students had learned; those tend to show much more positive results. That doesn’t mean the technologies don’t work, it just means they haven’t been rigorously tested.
The end product is an August 2017 working paper, “Education Technology: An Evidence-Based Review,” distributed by the National Bureau of Economic Research, with clear tables on which technologies improve learning and which don’t. I talked with Quan and another co-author of the paper, Maya Escueta, a graduate student at Teachers College, Columbia University (The Hechinger Report is an independently funded unit of Teachers College). Three big themes emerged.
1. Computers and internet access alone don’t boost learning
Handing out laptops, providing high-speed internet access or buying most other kinds of hardware doesn’t on its own boost academic outcomes. The research shows that student achievement doesn’t rise when kids are using computers more, and it sometimes decreases. The J-PAL researchers did find that students who have computers use them more, and become more adept at clicking and typing. It remains an open question whether tech-savvy students will be better workers in the future, even if they’re not better students now.
While hardware alone isn’t making kids smarter, students need computers and the internet to use educational software. And some of that does work.
2. Some math software shows promise
Exactly 29 software studies met J-PAL’s standards and 20 of those showed at least some measure of learning improvement (see table 2, pp. 30-37). The ones that tended to show positive results were mostly in one subject: math. Rising to the top were math programs such as SimCalc and ASSISTments. One popular program, DreamBox, showed small gains for students, as well. Only one piece of software that taught reading, Intelligent Tutoring for the Structure Strategy (ITSS), showed promise, suggesting that it is possible to create good educational software outside of math, but it’s a lot harder.
One commonality of the software that seems to work is that it somehow “personalizes” instruction. Sometimes students start with a pre-test so the computer can determine what they don’t know and then send each student the right lessons, or a series of worksheet problems, to help fill in the gaps. Other times, the computer ascertains a student’s gaps as he works through problems and makes mistakes, giving personalized feedback. Teachers also get data reports to help pinpoint where students are struggling.
The J-PAL paper noted that ASSISTments improved seventh-graders’ math scores in Maine when students spent only 10 minutes a night on the software, three or four times a week, as homework. Teachers didn’t have to change their existing lesson plans or textbooks to incorporate it. Other education software, by contrast, often imposes its own curriculum, or requires teachers to make major changes to the way they teach.
“We had the guts to expose ourselves” to randomized controlled trials, said Neil Heffernan, the inventor of ASSISTments and a professor at Worcester Polytechnic Institute in Massachusetts.
Heffernan’s business approach is unusual in the education technology industry. He financed the development of his software with federal funds and gives it away free to schools. The federal government, through the U.S. Department of Education’s Institute for Education Sciences, also foots the bill for the studies to see if his software improves math education.
Heffernan now has grants to test whether the Maine results can be replicated in North Carolina, and whether teachers can be trained online to use the system across the country, especially in more urban settings. But the results of these studies won’t be known for another four years, he said.
The more a piece of software is studied, the more likely it is to get mixed results. The most studied software was Cognitive Tutor, a complete math teaching curriculum that asks students to spend 40 percent of their class time on the computer. The J-PAL researchers found nine rigorous studies of it. Some were positive. Some showed no benefits. A 2017 study in eight U.S. states found no benefit in the first year of implementation, but good outcomes in the second year, when teachers reduced the number of recommended activities done during the non-computerized part of class.
3. Cheap can be effective
Low-cost technological interventions, particularly text message reminders, were surprisingly effective with students and parents. “It’s not necessarily the most expensive or complicated technologies that make a difference,” J-PAL’s Quan told me. “Even text messages can have a measurable impact on academic outcomes. It’s not flashy. Sometimes you don’t need all that flash and gimmick.”
One example is a study in San Francisco where texts reminded mothers to read to their preschoolers. That boosted children’s literacy scores. “We see that texts work when it’s a helpful nudge for something that they want to do,” Quan’s colleague, Escueta, said. “You can’t change attitudes with texting.”