As the parent of children who attended public schools, and as an educator who has been a classroom teacher, a school leader, and now a teacher-educator and director of a teacher-education program, I welcome the Obama administration’s efforts to ensure that educator preparation programs support their graduates in doing the absolute best for the children entrusted to their care.
How the administration does this, however, can be helpful or harmful, depending on the kind of information used to hold programs accountable and on what is done with that information once it is collected. Examples of helpful data that can be used for accountability purposes include:
- Surveys of graduates and their employers about how well-prepared the graduates are for the many different aspects of teaching – this allows faculty to reflect on their strengths and weaknesses and adjust their programs accordingly.
- Tracking where education-school graduates go and how long they remain in the field of teaching – this could offer insight into how prepared graduates are for the field, as research to date indicates that more poorly prepared teachers drop out more quickly.
- Statistics about how many teacher education candidates pass performance assessments (used for certification or program completion) that demonstrate how well they can actually teach – this information opens our eyes to new directions for instruction and needs in the classroom. Several such performance assessments have recently been developed and are being used in many states to license beginning teachers – much like the bar exam in law and the medical licensing exam for physicians. Just as passing rates on these tests are reported for professional schools in law and medicine, they could be reported for schools of education, as well.
Data that can be harmful, however, are data that don’t reflect the actual work of teachers and/or programs and that are used punitively rather than for improvement. An example of this kind of accountability practice that is not only unhelpful but also harmful is the Obama administration’s proposal to withhold TEACH grants from students in particular universities on the basis of test scores of students who are taught by their graduates.
The idea of evaluating teacher preparation programs using test scores of students taught by the graduates of those programs, referred to as “value-added measures,” or VAM, is fraught with problems, not only for evaluating programs but also for evaluating individual teachers. The so-called value-added metrics have been found to be both highly unstable – shifting dramatically from year to year, based in large part on whom the teachers teach – and biased against particular groups of teachers, like those who teach new English learners, special education students, and even gifted and talented students who have already hit the ceiling on the grade-level tests (and therefore cannot show growth on those tests). The National Research Council and several research organizations have published warnings against this kind of use, as have many individual researchers, including Professor Ed Haertel, chairman of the Board on Testing and Assessment of the National Academy of Sciences.
Withholding TEACH grants from universities based on this flawed measure will be a disincentive for schools of education to prepare special education teachers, bilingual teachers, teachers for children in high-need communities, and others who are likely to net lower VAM scores. Just think what would happen if medical schools were judged based on the number of patients who died in the care of their graduates. What institution would withstand the pressures from such a policy to prepare researchers and caregivers for the poor, the elderly, or those suffering from as-yet incurable illnesses?
Genuine accountability should be aimed at gathering information that can be used to improve existing practices. Capacity-building, not punishment, should be the guiding principle of these policies. Many of us who work in teacher education have spent years developing ways to collect and use data to strengthen our work.
Of particular note is the move toward performance assessment (the edTPA – currently used for state certification or program completion in 34 states and over 500 institutions of higher ed), which offers evidence about the direct impact of what we do – how well school of education graduates are able to plan, instruct, and assess – so that they will be ready to teach when they enter the classroom and be prepared to assume responsibility for students’ lives.
The data we receive from this assessment help us reflect on and adjust our programs in order to strengthen the effectiveness of our graduates. This kind of accountability practice leads to better outcomes and supports our collective ongoing learning.
The network of teacher preparation institutions engaging in this work is nationwide and growing. We would welcome the opportunity to share what we’ve learned with the Obama Administration.
Beverly Falk, Ed.D., is professor and director of Graduate Programs in Early Childhood Education, City College of New York.