Last month, Secretary DeVos announced an initiative to bolster the efficacy of the Civil Rights Data Collection by leveraging the capacity of the National Center for Education Statistics.
It was perhaps a first: A Trump administration policy initiative greeted without outrage from the education advocacy community. After all, everyone in education policy and journalism agrees: more data is always a good thing.
But is it? Or has “data” done more harm than good for American education?
Catholic University professor Jerry Muller penned a compact but compelling book last year, The Tyranny of Metrics, to sound the alarm against “the unintended consequences of trying to substitute standardized measures of performance for personal judgment based on experience.”
His book on the deformities of “data-driven decision-making” ought to be read and heeded by education policymakers and school district leaders across the country.
One fallacy Muller lays bare is the uncritical assumption that data recorded from the system reflects primarily on processes occurring within the system. Muller points out that during debates about American health-care policy, advocates often point to American obesity, diabetes, and other markers of ill-health as evidence that our medical system is inferior to European counterparts.
Maybe in part. But much more of that is undoubtedly attributable to the fact that we eat way more Doritos than our friends across the Atlantic.
This fallacy is ever more on display in education policy conversations, especially those around disaggregated data that reveals racial disparities. Policymakers jump to the conclusion that these disparities are caused or, at minimum, “perpetuated” by our schools.
Maybe in part. But much more of that is undoubtedly attributable to deeply rooted inequities outside of the schoolhouse.
Assuming that observed statistical disparities can largely be attributed to the institution recording the data is particularly problematic given that policymakers frequently don’t have the faintest clue what the data actually mean.
I will never forget a private discussion I had with a former Obama Department of Education assistant secretary: when I tried discussing the academic literature on school disciplinary disparities, he replied: “studies schmuddies. I’ll stick to the facts.” By “facts” he meant aggregate data, of which he didn’t care to develop a deeper understanding.
Worse still, many policymakers assume that the identification of a statistical disparity provides sufficient evidence that they know how to fix it. The logic chain of much of what passes for thoughtful conversation about education policy goes roughly as follows: (1) I see X bad thing in the data; (2) I believe that Y policy will solve X problem; (3) If you disagree with me, you are “against data.”
In a “data-driven” system, Muller notes, the idea of “accountability” ceases, by a linguistic sleight of hand, to mean being responsible for one’s actions and comes to mean demonstrating success through standardized measurement. Hence, we have seen stronger “accountability” leading to vastly more irresponsible actions on the part of school and district administrators.
Graduation rates go up! (Never mind that they go up because standards fall through the floor.)
Suspension rates go down! (Never mind that they go down because more misbehavior is tolerated.)
The least heeded admonition amongst the “evidence-based” policy crowd is Campbell’s Law: “The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social process it is intended to monitor.”
Data can be useful as a supplement to prudential human decision-making. But, too frequently, “data” becomes a substitute. A guide rather than a reference. And unfortunately, this is not a bug, but a feature written into the DNA of our schools by policymakers.
Muller notes that the impulse for “more data” is distrust. We don’t trust that those within an institution will do the right and proper thing based on their unmonitored judgment.
The problem is that “accountability” can corrupt judgment, making individuals less trustworthy. And when journalists uncover fresh causes for distrust, policymakers respond by calling for still more data-driven decision-making.
Therefore, Muller astutely quips, “metric fixation, which aspires to be a science, too often resembles faith.”
Unfortunately, it is a faith without salvation. Only a cycle of condemnation and counter-productive penances.
Perhaps, if we want to genuinely improve the quality of American education, what we really need is less data.