While there seems to be consensus that student data can be a valuable tool in improving performance, the jury is still out on the most effective methodologies for using it. Year-end results from three different urban school districts offer directional insight into which approaches seem to be producing the most immediate gains.
In school year 2010-2011, Maryland’s Prince George’s County Public Schools (PGCPS) posted impressive student performance gains and made progress in closing the achievement gap. In a prepared statement, superintendent Bill Hite attributed the improvements to a “collaborative planning and data inquiry model” that teachers are using to tie “data to the improvement of instruction to review, reteach, and assess.”
In the same period, Chicago Public Schools (CPS) saw its most dramatic one-year elementary student performance gains in over a decade. Catalyst Chicago, an independent publication that covers reform efforts in Chicago’s public schools, noted that “there was not a specific reform effort, but a premium was put on data-driven instruction.”
DC Public Schools (DCPS), on the other hand, generated no discernible increase in student achievement—for the second year in a row—despite massive investment in an innovative teacher evaluation system that relies heavily on test data to rate teachers.
At the risk of oversimplification, it’s worth noting how the two districts that saw performance gains emphasized the use of data differently than the flat-lining DC schools did:
In the districts making gains, a great deal of systemwide effort went into supporting educators with the training and tools they needed to make informed decisions about instruction and student intervention strategies. For instance, between 2009 and 2011, Chicago public school leaders rolled out a systematic method of analyzing data and making instructional adjustments from the boardroom to the classroom. CPS leaders also made significant efforts to support instructional leadership and teacher teams by providing data, training, toolkits, and, in some cases, customized coaching to educators.
Likewise, in Prince George’s County Public Schools, teacher teams in each school participated in weekly collaborative planning sessions focused on analyzing all types of student data (formative test scores, student work, etc.) to determine what was working and what was not. Cohorts of principals met with their assistant superintendents regularly to examine school performance trends, share best practices, and develop plans for improvement.
In DC Public Schools, by contrast, the emphasis has been almost exclusively on using data to hold teachers accountable for student results, rather than on teaching educators to use data to diagnose student performance and adjust their instructional strategies accordingly. To be fair, DCPS is now starting to put much more emphasis on using evaluation data to identify the supports teachers need, but that has not been the primary focus of the last two years.
While far from scientific, these examples suggest that an emphasis on supporting educators in using data to inform and adjust instruction generates more immediate results than using data primarily to hold them accountable. As we examine new frameworks for evaluating performance, we should also ensure we’re identifying and including strategies for helping educators make meaningful use of the data we’re collecting.