A major study by researchers from Harvard and Columbia universities on the long-term effects of teachers on the lives of their students has generated a great deal of interest recently, including a detailed article in the New York Times. The research team tracked 2.5 million students from grades 4-8 to age 28 to determine the impact of teachers with high value-added scores on the likelihood that students would go on to college, pursue successful careers, and achieve improved socio-economic status. Among their more contentious findings, researchers state, “Replacing a teacher whose [value-added] is in the bottom 5% with an average teacher would increase the present value of students’ lifetime income by more than $250,000 for the average classroom” in their sample. Further, as demonstrated by the graphic below, the arrival of teachers with high value-added scores correlated with substantial gains in student test scores during the study’s timeframe.

[Figure: Teacher Value-Added Graph]

Observers have noted, however, that this study does not in itself resolve many outstanding concerns about the use of value-added data in tenure, promotion, and compensation decisions for teachers. Perhaps foremost among them, a January 15 piece by Michael Winerip of the Times pointed out that much of the data on which the study is based pre-date the era of high-stakes testing inaugurated by the now ten-year-old No Child Left Behind Act.

The release of the Harvard-Columbia study coincided with that of an Issues Analysis Report from TNTP (formerly known as The New Teacher Project). This report aims to simplify the findings from a larger review of the Measures of Effective Teaching (MET) project funded in six urban districts by the Bill & Melinda Gates Foundation. Drawing from a MET research project involving 3,000 teachers across participating districts, including Memphis, TNTP ultimately concludes, “Evaluations should not rely on value-added scores alone, because no single measure can tell the full story of a teacher’s performance. But including value-added data makes results significantly more accurate over time, not less.” The MET findings suggest some role for student survey results in evaluation models, as well. According to the TNTP brief, MET researchers “found that evaluations were most accurate when they combined value-added data with rigorous classroom observations and surveys of student perceptions.” Evaluations must also recognize that value-added scores are best viewed as a trend line over time, and avoid basing critical decisions on one-year fluctuations in results.

Findings from both the Harvard-Columbia and TNTP publications carry great significance for Tennessee educators and policymakers as the state implements new models of teacher evaluation. Value-added data are incorporated into teacher assessments in subject areas for which they are available, and rubrics remain under development to reflect student learning gains in evaluations of teachers for whom these data are not yet available. Under this approach, classroom observations represent half of a teacher’s evaluation, reflecting the need for multiple measures in creating an accurate, fair assessment system.

One other interesting note from the Harvard-Columbia study: Researchers found only modest gains from paying bonuses to teachers with high value-added scores, who would likely have remained in the profession without this added incentive. According to their findings, “Replacing low [value-added] teachers may therefore be a more cost effective strategy to increase teacher quality in the short run than paying bonuses to retain high [value-added] teachers,” although increasing overall salaries could make teaching a more attractive profession to higher-quality candidates in the long run.