Recently, I stumbled on this piece in the Chronicle by Jerry Muller. It made my blood boil. In it, the author argues that, in the world of education, we are fixated on quantitative indicators of performance, and that this fixation has led us to miss (or forget) some important truths about education and the transmission of knowledge. I wholeheartedly disagree, because the author is conflating two things.
We need to measure things! Measurements are crucial to our understanding of causal relations and outcomes. Like Diane Coyle, I am a big fan of a “dashboard” of indicators to get an idea of what is broadly happening. However, I agree with the author that statistics very often lose their meaning. And that happens when we start targeting them!
Once we know that a variable has become a target, we act in ways that increase that variable. As soon as it is selected, we modify our behavior to hit the fixed target, and the variable loses some of its meaning. This is Goodhart’s law: “when a measure becomes a target, it ceases to be a good measure” (note: it also looks a lot like the Lucas critique).
Although Goodhart made this point in the context of monetary policy, it applies to any sphere of policy – including education. When an education department decides that a given metric is the one it cares about (e.g. completion rates, minority admissions, average grade point, completion times, balanced curricula, professor-to-pupil ratios, etc.), it induces a change in behavior that alters the significance carried by that variable. This is not an original point. Just go to Google Scholar and type “Goodhart’s law and education” and you end up with papers such as these two (here and here) that make exactly the point I am making here.
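The mechanism can be illustrated with a toy simulation (my own illustration, not from Muller or the cited papers): a metric that is an honest, noisy signal of underlying quality stops tracking quality once institutions divert effort into gaming the metric directly. The numbers below (effort shares, noise levels) are arbitrary assumptions chosen only to make the effect visible.

```python
import random
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(0)
n = 1000
# Latent teaching quality of n hypothetical institutions.
quality = [random.gauss(0, 1) for _ in range(n)]

# Before targeting: the metric is quality plus measurement noise,
# so it is a useful (if imperfect) signal.
before = [q + random.gauss(0, 0.5) for q in quality]

# After targeting: institutions divert effort into gaming, which
# inflates the metric without improving quality; the diverted
# effort also weakens quality's contribution to the metric.
gaming = [random.gauss(2, 1) for _ in range(n)]
after = [0.3 * q + g + random.gauss(0, 0.5) for q, g in zip(quality, gaming)]

print("correlation with quality, before targeting:", pearson(quality, before))
print("correlation with quality, after targeting: ", pearson(quality, after))
```

Under these assumptions the correlation between the metric and true quality collapses once the metric is targeted, which is exactly Goodhart’s point: the statistic was informative only so long as nobody had an incentive to manipulate it.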
In his Chronicle piece, Muller actually makes note of this without realizing how important it is. He notes that “what the advocates of greater accountability metrics overlook is how the increasing cost of college is due in part to the expanding cadres of administrators, many of whom are required to comply with government mandates” (emphasis mine).
The problem he is complaining about is not metrics per se, but rather the effects of having policy-makers designate one metric as the relevant one. This is a problem of selection, not of measurement. If statistics are collected without the intent of serving as a benchmark for the attribution of funds or special privileges (i.e. if there are no incentives to change the behavior that affects the reporting of a particular statistic), then there is no problem.
I understand that complaining about a “tyranny of metrics” is fashionable, but in this case the fashion looks like crocs with white socks (and I really hate crocs).