Figuring out whether a piece of educational technology actually works isn’t easy. Most software companies are too cash-strapped and time-pressed to conduct experiments that can generate high-quality evidence outside a laboratory. And the depressing truth is that when researchers take the time to conduct a proper randomized control trial in classrooms, most innovations don’t show positive results for students. Meanwhile, schools are buying loads of software based on incomplete, indecipherable or low-quality evidence. Given the state of ed tech research, one can’t blame schools for not knowing which products they should be buying.
A government-funded foundation in the United Kingdom is trying a new approach, something it calls “ed tech testbeds.” The idea is to bring software developers, schools and researchers together to test new products that are already in classrooms and generate scientific evidence faster than a typical multi-year randomized control trial.
“Ed tech often fails to live up to expectations,” said Joysy John, director of education at Nesta, formerly known as the National Endowment for Science, Technology and the Arts, a U.K.-based foundation that aims to foster innovation. “What we’re trying to do with the testbed is test products in a school environment and create a rapid feedback loop to understand if they are working or not and why.”
John delivered a presentation on Nesta’s ed tech testbeds at a roundtable on Nov. 21, 2019, during the World Innovation Summit for Education (WISE) in Doha, Qatar. (I attended the roundtable as a guest of the Qatar Foundation, a charitable organization of the Qatari royal family, which organized the WISE conference.)
Nesta is planning 12 ed tech trials during the 2020-21 year, with three each quarter. It is currently selecting which wares to test from among the companies that have applied to participate. To be selected, a product must already be used by at least 25 schools in the U.K., and it should be aimed at reducing teachers’ workloads, a U.K. government priority. (Nesta was originally funded by the U.K. National Lottery.)
More than 300 schools — 329 to be exact — have volunteered to be guinea pigs to test products. One might wonder why teachers would volunteer to take on the additional task of helping companies evaluate software if they’re already burdened with too much work. Nesta is luring schools with financial compensation to cover staff time along with free software and teacher training.
The kinds of time-saving technology under consideration include software to automatically grade student essays; technology to help teachers communicate more efficiently with parents; computerized assessments to evaluate student progress throughout the year; and scheduling software.
One of Nesta’s inspirations is the iZone inside New York City’s Department of Education, where small groups of educators have test-driven new products for 12 weeks. But Nesta’s testbeds will also include researchers from the University of Durham to evaluate the 12 trials. The researchers will document how easy it is for educators to use the software and what problems schools are having with implementation, John explained. They will also set up control groups to compare metrics with schools that aren’t using the software.
The notion that we could perhaps learn in just a few short months whether a piece of software is improving student learning is both seductive and seemingly unrealistic. After all, change happens slowly in education. It can take months or years to train teachers to teach something differently and years more to see if those pedagogical changes lead to better outcomes for students.
On the other hand, quick feedback loops address a unique problem in evaluating software: short development cycles. By the time a traditional multi-year trial is conducted, the software version studied is often long out of date and we don’t know how the current version fares in classrooms.
John says that Nesta isn’t trying to replace randomized control trials with quick testbeds. “It’s in no way a substitution,” said John. “We’re trying to help organizations progress on the journey of evidence to get them closer to a randomized control trial. We hope by working closely with companies and researchers and schools that we can change mindsets about what evidence is of what works and why it is important.”
Many details of the testbeds are still to be sorted out. It’s unclear how schools will be matched with software, how much time schools will have for training and implementation, how many months each piece of software will be tested for and which metrics will be measured. John said she plans to adjust and tweak how the testbeds are designed as they go along. “It’ll be a test of the testbeds,” John said.
I’ll write more in this space when the results are in to let you know how the testbed experiment went and what was learned. But that won’t be until late 2021 at the soonest. In the world of education, even rapid feedback loops are slow.
This story about ed tech testbeds was written by Jill Barshay and produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education.