As part of the federal recovery effort to boost the economy after the 2008 recession, the U.S. Education Department suddenly had a big pot of money to give away to “innovations” in education. Since then, more than $1.5 billion has been spent on almost 200 ideas because Congress continued to appropriate funds even after the recession ended. Big chunks went to building new KIPP charter schools and training thousands of new Teach for America recruits to become teachers. Other funds made it possible for lesser-known programs in reading, writing, math and science instruction to reach classrooms around the country. Many of the grant projects involved technology, sometimes delivering lessons or material over the internet. One “innovation” was to help teachers select good apps for their students. Another was a novel way to evaluate teachers.
To obtain the grants, recipients had to determine whether their ideas were effective by tracking test scores. Results are in for the first wave of 67 programs, representing roughly $700 million of the innovation grants, and they don’t look promising.
Only 12 of the 67 innovations, or 18 percent, were found to have any positive impact on student achievement, according to a report published earlier in 2018. Some of these positive impacts were tiny, but as long as the students who received the “innovative treatment” posted larger test score gains than a comparison group of students who were taught as usual, it counted.
“It’s only a handful,” said Barbara Goodson, a researcher at Abt Associates Inc., a research and consulting firm that was hired to analyze the results of the Investing in Innovation (i3) Fund for the Department of Education. “It’s discouraging to everybody. We are desperate to find what works. Here was a program that was supposed to identify promising models. People are disappointed that we didn’t come up with 20 new models.”
“That’s the dirty secret of all of education research,” Goodson added. “It is really hard to change student achievement. We have rarely been able to do it. It’s harder than anybody thinks.” She cited a prior 2013 study that also found that when education reforms were put to rigorous scientific tests with control groups and random assignment, 90 percent of them failed to show positive effects.
Why is innovation so hard in education?
To Goodson, who has specialized in early childhood education research for 40 years, the problem is that learning is ultimately about changing human behavior and that is always difficult for adults and children. And so many other things — like nutrition, sleep, safety and relationships at home — affect learning. “We’ve known for the longest time that economic background characteristics swamp any education intervention,” she said. “We’re starting out with only being able to make a small difference in how people do. The lever of education is only operating on a small slice of the pie.”
In some cases, the current measures of effectiveness, generally standardized assessments, may be too broad to capture the targets of these innovations, Goodson said. For example, a phonics program might help some kids read more fluently. But the ability to read more fluently might only be indirectly captured in a reading test that’s focused on comprehension and vocabulary. An intervention aimed at soft skills, such as the ability to persist and try again, can’t be measured at all on these conventional tests.
Many interventions target kids who are several grade levels behind. A seventh-grade math test might not pick up on how a student progressed through two years’ worth of math, from third-grade multiplication of single digits to fifth-grade addition of fractions. Instead, the test might suggest a minuscule academic improvement because the student flubbed most of the seventh-grade questions on solving for x and graphing equations.
A more sensitive yardstick for measuring innovation would require creating and administering more tests to students. That’s a hard sell to principals, teachers and families who may already feel that there’s too much testing in schools.
Saro Mohammed, a partner at the Learning Accelerator, a nonprofit organization that supports using technology to tailor instruction to each child, says that it’s sometimes hard to prove an innovation works because of unintended consequences when schools try something new. For example, if a school increases the amount of time that children read independently to try to boost reading achievement, it might shorten the amount of time that students work together collaboratively or engage in group discussion.
“Your reading outcomes may turn out to be the same [as the control group], but it’s not because independent reading doesn’t work,” Mohammed said. “It’s because you inadvertently changed something else. Education is super complex. There are lots of moving pieces.”
Mohammed said the study results are not all bad. Only one of the 67 programs produced negative results, meaning that kids in the intervention ended up worse off than learning as usual. Most studies ended up producing “null” results and she said that means “we’re not doing worse than business as usual. In trying these new things, we’re not doing harm on the academic side.”
Mohammed also pointed out that learning improvements are slow and incremental. It can take longer than even the three-to-five-year time horizon that the innovation grants allowed.
Eighteen of the studies had to be thrown out because of problems with the data or the study design. In some cases, too many students who tried the innovation were ignored in the final figures. When you exclude kids with disabilities, for example, that can skew the results upward. Too many of the early-stage innovations weren’t tried on enough students to produce statistically significant results. That means even when the students in the intervention produced larger test score gains than those in a comparison control group, the researchers still had to call it a “null” result if the odds of reproducing such a positive result were no better than flipping a coin. (One of the reasons that many small education studies cannot be replicated is because they were lucky flukes in the first place.) In more recent grant making, Goodson says the small studies have been “powered up” so that the results will be statistically useful. (They’re now called Education Innovation and Research grants.)
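The “flipping a coin” point above is about statistical power: a study with too few students will usually fail to reach significance even when the intervention genuinely helps, so it gets recorded as “null.” A minimal simulation sketches why (the group sizes and the modest 0.2-standard-deviation effect are illustrative assumptions, not figures from the i3 evaluations):

```python
import math
import random
import statistics

def detect_rate(n_per_group, effect_sd=0.2, trials=2000, seed=1):
    """Fraction of simulated studies in which a two-sample z-test
    detects a real effect of size `effect_sd` at the .05 level --
    i.e., the study's statistical power."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # Control scores ~ N(0, 1); treated scores shifted up by the true effect.
        control = [rng.gauss(0.0, 1.0) for _ in range(n_per_group)]
        treated = [rng.gauss(effect_sd, 1.0) for _ in range(n_per_group)]
        se = math.sqrt(statistics.variance(control) / n_per_group +
                       statistics.variance(treated) / n_per_group)
        z = (statistics.mean(treated) - statistics.mean(control)) / se
        if z > 1.96:  # conventional .05 significance threshold
            hits += 1
    return hits / trials

small = detect_rate(30)    # a small pilot: 30 students per group
large = detect_rate(500)   # a "powered up" study: 500 per group
```

In this sketch the small pilot detects the real effect only a small fraction of the time, while the larger sample detects it in most simulated studies, which is why most of those small pilots would have been scored as “null” no matter how well the intervention actually worked.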
This grant program was also a first test of using rigorous scientific evidence as a way of issuing grants in education. Proven concepts received the largest grants, $25 million to $50 million. Ideas with the least evidence received less than $5 million to help them build an evidence base. Ideas in between might get $15 million. Among the 48 least proven ideas, only four were found to increase student achievement, a success rate of just 8 percent. (Links to all the publicly available evaluations for each program are here. Appendix D of the report lists the academic results for each program.)
But programs in the highest tier were supposed to have a proven track record, and only two of the four — the KIPP charter school network and Reading Recovery — generated stronger test scores.
Michael Hansen, director of the Brown Center on Education Policy at the Brookings Institution, characterized the results as “discouraging” but cautioned that high failure rates are not a reason to give up on educational innovation. “This is the nature of R&D,” he said. “If we stop giving out grants, then we stop innovating.”
This story about innovation in education was written by Jill Barshay and produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education.
My experience in the field of education as a parent, teacher, consultant and community member leads me to wonder what systems are in place to expose educators to the pedagogy of teaching and learning in these models to innovate in districts and schools. The systems need to be in place for gaining strategies to improve achievement for all students, but also for the educators. So, we attend workshops and professional development. Then what? Is there a well-oiled system in place to support teachers in their experiences? Who helps them review what their experiences entail and how to grow and fix the failures or gaps? Who supports the classroom teachers in Professional Learning Communities? Who mentors? Monitors? I have seen talk about improvements in what is done in some schools, and it ends with professional development. We need support systems that enable teachers to unpack their experiences and develop more strategies to improve their teaching and their students’ learning in their classrooms. It has been all talk in some school districts, with a lack of systems in place to implement and evaluate what is happening with teachers, who need to be nurtured for survival and success in the classroom.
We already know what causes a change in educational attainment: one-to-one tutoring.
But the other problem with all these interventions is that they are starting from a place of failure because 9 out of 10 American kids do not get enough exercise.
As a teacher and as Pastoral Head I know that any intervention, academic or therapeutic, is a non-starter unless the child is involved in some type of exercise programme.
I taught in America for 10 years and have taught in South Africa for 20. The contrast between the pupils is stark. Compulsory sport/exercise is part of the holistic education offered at many schools in South Africa. The activity is not reserved just for those who make the school team; all pupils participate.
Once pupils are active, there is more chance that other interventions will have the desired effect. They sleep better, produce endorphins, serotonin, dopamine, etc. They lose weight and feel better about their bodies. They learn the soft skills of grit, perseverance and teamwork in a natural setting and not in an artificial academic environment.
The researchers can try any intervention they want, but until the kids are moving, involved and active they are banging their heads against an interventionist brick wall and wasting billions of dollars.
Students being active physically will not change their reading comprehension, nor will it address any academic need apart from effective teaching in the classroom. Of course, I agree that interactive and active learning strategies should be incorporated to involve all students in their learning of new knowledge and skills so they can unpack their own learning. Just as teachers need to practice new skills and knowledge in ways that will enhance their pedagogy of teaching and learning while increasing students’ academic progress in all areas, students learn by applying, questioning, interacting with others and determining why they have or have not learned and applied new knowledge and skills. Teachers must be able to unpack their experiences in the classroom with peers, supervisors and students so they can redo their lessons and succeed in the process of growing as educators who can set up an environment for learning with equity for all students. Give teachers what they need to practice and eventually succeed: support, time and strategies to learn from their successful and unsuccessful experiences. Improvement doesn’t end with attendance at a workshop or professional development session.