Here in the education research/reform/policy community, we really like data. Really like it. After all, how can you evaluate the success of a program without data? To assess our programs, we need to be able to measure the effect of our actions and reforms on students. This is largely what we mean when we refer to research-driven or research-proven strategies. Measurement of this kind is the gold standard in program evaluation and policy making, and as such it has been an explicit requirement of many of this administration's federal grant programs. The importance of using data to evaluate programs' and schools' success was emphasized in 2001's No Child Left Behind (NCLB), with its standardized tests assessing students' achievement of grade-level skills.
But today, as we’ve been doing some research on “turning around” low-performing schools, I’m reminded that data-driven analysis is frequently limited.
I was reminded of this while reading a report by the Communities for Excellent Public Schools assessing the federal Education Department’s prescriptive turnaround strategies. The report features two case studies, one of which is Christopher Columbus High School in New York City. Columbus High has long been overcrowded because of school closings across the city that have shifted large numbers of high-needs students into the South Bronx school. In 2004, for example, Columbus was at 180% capacity. The school is now also being closed so that the district can qualify for a federal School Improvement Grant (SIG). The report states:
In its justification for its decision to close Columbus, the Department of Education points to absolute measures such as four year graduation rates and student test scores. But being “data driven” does not tell the whole story, or provide a context for those statistics. … It makes much more of the four year graduation rates, but fails to note that Columbus sticks by its high needs students as long as it takes and graduates large numbers in five, six and seven years.
This reminds me of something a local advocate for special needs students once told me. She observed that in New Orleans’ choice landscape, individual schools’ principals have significant freedom in the extent to which they welcome and encourage enrollment of the most at-risk students, including those with special needs or a history of disruptive behaviors. Consequently, the principals and schools that embrace these students are at a competitive disadvantage. Their students enter with lower baseline skills and more barriers to learning, and that will likely be reflected in the data. The data, then, will create the impression that these are “failing” schools with ineffective principals and teachers, rather than reflecting a welcoming and open school leadership. By extension, we have few indicators to tell us whether a site with a low School Performance Score is underperforming because of ineffectual staff and programs or because its leadership is willing to take on the toughest cases. How do we distinguish one from the other, rewarding the latter while giving the former the greatest opportunity to improve?
These are issues that those of us in the research and analysis field must be aware of. Data is an invaluable and necessary element, allowing us to analyze, assess, and compare education landscapes. But our current indicators are imprecise and imperfect; perhaps we should be careful not to rely too heavily on data alone.