Mark Berends
New research led by the University of Notre Dame’s Center for Research on Educational Opportunity (CREO) points a way forward to improve certain teacher performance evaluation systems.
These systems focus on one question: to what degree did teachers add value, that is, did their students grow and achieve more than expected, as measured by test score gains?
According to a U.S. Department of Education announcement of the study, the scholars’ report “provides new information on the degree to which value-added estimates of teachers differ by the assessment used to measure their students’ achievement growth.”
The research team was led by David Stuit of Basis Policy Research, an independent research firm. Other key participants were distinguished sociologist Mark Berends, director of CREO within Notre Dame's Institute for Educational Initiatives; CREO graduate student Megan Austin; and R. Dean Gerdeman of the American Institutes for Research.
The researchers compared value-added estimates of teacher effectiveness derived from a state criterion-referenced test and from a norm-referenced test over the academic years 2005-06 through 2010-11. Data were drawn from reading and math assessments in grades four and five in 46 Indiana schools. The state test was the Indiana Statewide Testing for Educational Progress Plus (ISTEP+); the norm-referenced test was the Measures of Academic Progress (MAP).
Results of the study showed a "moderate relationship" between the teacher value-added estimates from the two assessments, along with important variability among the estimates that calls for further research. But the new report finds that one can reduce the likelihood of misjudging teacher performance by looking at student test results in a particular way: focusing on confidence intervals, ranges that express the statistical uncertainty around each teacher's value-added estimate.
“The findings indicate that incorporating confidence intervals for value-added estimates reduces the likelihood that teachers’ performance will be misclassified based on measurement error,” according to the U.S. Department of Education’s Institute of Education Sciences.
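To make the idea concrete, here is a minimal sketch, not drawn from the study itself and using made-up value-added estimates and standard errors, of how an approximate 95 percent confidence interval can prevent a teacher whose estimate is statistically indistinguishable from average from being labeled above or below average on the point estimate alone (Python):

# Illustrative sketch only; the estimates and standard errors below are hypothetical,
# not the study's data or model.
teachers = [
    ("A", 0.15, 0.05),   # clearly above average
    ("B", 0.06, 0.08),   # above-average point estimate, but uncertain
    ("C", -0.12, 0.04),  # clearly below average
    ("D", -0.03, 0.07),  # below-average point estimate, but uncertain
]

Z = 1.96  # multiplier for an approximate 95% confidence interval

for name, estimate, se in teachers:
    lower, upper = estimate - Z * se, estimate + Z * se
    point_label = "above average" if estimate > 0 else "below average"
    if lower > 0:
        ci_label = "above average"
    elif upper < 0:
        ci_label = "below average"
    else:
        ci_label = "not distinguishable from average"
    print(f"Teacher {name}: point estimate says {point_label}; "
          f"95% CI [{lower:.2f}, {upper:.2f}] says {ci_label}")

In this hypothetical example, teachers B and D would be classified as above or below average by their point estimates alone, but their confidence intervals include zero, so the interval-based approach withholds that judgment, which is the sense in which confidence intervals reduce misclassification due to measurement error.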
This January report, “Comparing Estimates of Teacher Value-Added Based on Criterion- and Norm-Referenced Tests,” can be found at the Institute of Education Sciences website.
Contact: Mark Berends, IEI faculty fellow, Notre Dame Department of Sociology and CREO, mark.berends.3@nd.edu
Originally published by William Schmitt at iei.nd.edu on Jan. 25, 2014.