Editor’s note: This story led off this week’s Future of Learning newsletter, which is delivered free to subscribers’ inboxes every Tuesday with trends and top stories about education innovation.
What makes educational technology initiatives succeed or fail? Hint: It’s not just the technology.
A new nonprofit, formed out of the Jefferson Education Accelerator at the University of Virginia Curry School of Education, is embarking on an ambitious project to find the answer. The plan is to map out exactly what contributes to success and failure. And it’ll take tens of thousands of educators to bring a very blurry landscape into focus.
Right now, when a school is considering purchasing a new educational technology, whether it’s a device or a program, officials tend to research its track record: Did it work in other schools? But without knowing a lot about those schools, teachers and administrators can’t possibly make the best decisions, argues Bart Epstein, CEO of the Jefferson Education Accelerator, an ed tech evaluation and support venture at the university. Even if technology companies wanted to be helpful by steering prospective clients only toward the products that would serve them best, Epstein said, they don’t have the information to do that, either.
Epstein will transition from leading the JEA to leading the JEX, or Jefferson Education Exchange, this year. The new nonprofit’s research project will get underway with $1 million in seed funding from the Strada Education Network and support from the Curry School of Education. Robert Pianta, dean of the Curry School, will serve as chairman of the Jefferson Education Exchange board, whose members include leaders in government, philanthropy, education and research.
The goal is to build on expertise already honed by education associations, nonprofits that support schools, and others.
What works in ed tech isn’t just a matter of student demographics or a school’s geographic location, although those factors do matter. Epstein and his team have already identified more than a dozen variables that contribute to the success or failure of educational technology initiatives. Among them:
- Teacher agency: Did teachers play a role in selecting the product?
- Student access to technology and internet outside of the classroom
- The number of ed tech products being implemented at the same time in the same school
- The quality of professional development offered to help teachers learn to use new products
- Whether a school fosters a culture of experimentation
- Whether the school first introduced the new product in a pilot
- The track record of previous ed tech initiatives
- Whether the school has made a single- or multi-year commitment to the new product
- Whether any educators had prior experience with the product before implementation
At its core, the JEX project aims to help schools and districts identify useful peer groups.
“A school in suburban Chicago may have much more in common with a school in Virginia when it comes to education technology than another school in the Chicago area,” Epstein said. “The two schools may have a brand-new principal, who is supervising a highly experienced ed tech director, who has a track record of implementing things well in a place where there is a great culture of experimentation, but the bandwidth is fairly low and there’s a lot of money for [professional development] but not a great track record of doing it well.”
The Jefferson Education Exchange plans to offer stipends to tens of thousands of educators over the next couple of years to gather in-depth information about what is and isn’t going well in their own schools’ technology implementations. Researchers will ask about the variables they’ve already identified as important and invite educators to suggest additional ones that affect implementation. With several responses from a single school, Epstein expects to get a good picture of what made a certain ed tech initiative succeed or fail there. Then schools with similar characteristics — based on the long list of variables — can determine whether a product is likely to be a good fit for them.
Epstein said that within two years, the organization should have strong data to show which variables stand out as having the greatest impact. Even if researchers can’t identify a secret sauce that guarantees successful implementation, they may be able to say which variables virtually guarantee failure. That would also be helpful.
“We’ll have negative correlation data long before we have the ideal recipe for success for these individual products,” Epstein said. “And that’s OK.”
This story was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education.