Evidence-based policy needs theory

This imaginary scenario is based on an example from my paper with Baljinder Virk, Stella Mascarenhas-Keyes and Nancy Cartwright: ‘Randomized Controlled Trials: How Can We Know “What Works”?’ 

A research group of practically-minded military engineers are trying to work out how to effectively destroy enemy fortifications with a cannon. They are going to be operating in the field in varied circumstances so they want an approach that has as much general validity as possible. They understand the basic premise of pointing and firing the cannon in the direction of the fortifications. But they find that the cannon ball often fails to hit their targets. They have some idea that varying the vertical angle of the cannon seems to make a difference. So they decide to test fire the cannon in many different cases.

As rigorous empiricists, the research group runs many trial shots with the cannon raised, and also many control shots with the cannon in its ‘treatment as usual’ lower position. They find that raising the cannon often matters. In several of these trials, they find that raising the cannon produces a statistically significant increase in the number of balls that destroy the fortifications. Occasionally, they find the opposite: the control balls perform better than the treatment balls. Sometimes they find that both groups work, or don’t work, about the same. The results are inconsistent, but on average they find that raised cannons hit fortifications a little more often.

A physicist approaches the research group and explains that rather than just varying the angle at which the cannon is pointed in various contexts, she can estimate much more precisely where the cannon should be aimed using the principle of compound motion, with some adjustment for wind and air resistance. All the research group need to do is specify the distance to the target and she can produce a trajectory that will hit it. The problem with the physicist’s explanation is that it includes reference to abstract concepts like parabolas, and trigonometric functions like sine and cosine. The research group want to know what works. Her theory does not say whether you should raise or lower the angle of the cannon as a matter of policy. The actual decision depends on the context. They want an answer about what to do, and they would prefer not to get caught up testing physics theories about ultimately unobservable entities while discovering the answer.
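To make the physicist’s offer concrete, here is a minimal sketch of the kind of calculation she has in mind, in the simplest vacuum approximation (the muzzle speed and target distances are my own illustrative numbers; her actual method would add the corrections for wind and air resistance she mentions):

```python
import math

def launch_angle(distance_m, muzzle_speed_ms, g=9.81):
    """Low-trajectory elevation angle (radians) to hit a target at distance_m
    on level ground, ignoring wind and air resistance (vacuum approximation)."""
    x = g * distance_m / muzzle_speed_ms ** 2  # from the range formula d = v**2 * sin(2*theta) / g
    if x > 1:
        raise ValueError("target is out of range at this muzzle speed")
    return 0.5 * math.asin(x)

# Hypothetical muzzle speed of 250 m/s; targets at 500 m and 2,000 m.
for d in (500, 2000):
    print(f"{d:>5} m -> elevate to about {math.degrees(launch_angle(d, 250.0)):.1f} degrees")
```

The point of the sketch is simply that the ‘right’ angle falls out of the theory once the distance is specified, which is exactly why ‘raise or lower the cannon’ has no context-free answer.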

Eventually the research group write up their findings, concluding that firing the cannon at a higher angle can be an effective ‘intervention’ but that whether it is effective depends a great deal on particular contexts. So they suggest that artillery officers will have to bear that in mind when trying to knock down fortifications in the field, but that they should definitely consider raising the cannon if they aren’t hitting the target. In the appendix, they mention the controversial theory of compound motion as a possible explanation for the wide variation in the treatment effect that should, perhaps, be explored in future studies.

This is an uncharitable caricature of contemporary evidence-based policy (for a more aggressive one see ‘Parachute use to prevent death and major trauma related to gravitational challenge: systematic review of randomised controlled trials’). Mechanics has well-understood, repeatedly confirmed theories that command consensus among scientists and engineers. The military have no problem learning and applying this theory. Social policy, by contrast, has no theories that come close to that level of consistency. Given the lack of theoretical consensus, it might seem more reasonable to test out practical interventions instead and try to generalize from empirical discoveries. The point of this example is that, without theory, empirical researchers struggle to make any serious progress even with comparatively simple problems. The fact that theorizing is difficult or controversial in a particular domain does not make it any less essential a part of the research enterprise.

***

Also relevant: Dylan Wiliam’s quip from this video (around 9:25): ‘a statistician knows that someone with one foot in a bucket of freezing water and the other foot in a bucket of boiling water is not, on average, comfortable.’

Pete Boettke’s discussion of economic theory as an essential lens through which one looks to make the world clearer.

The minimum-wage-induced spur of technological innovation ought not be praised

In a recent article at Reason.com, Christian Britschgi argues that “Government-mandated price hikes do a lot of things. Spurring technological innovation is not one of them”. This is in response to the self-serve kiosks in fast-food restaurants that seem to have appeared everywhere following increases in the minimum wage.

In essence, his argument is that minimum wages do not induce technological innovation. That is an empirical question. I am willing to consider that this is not the most significant of the adjustment margins to large changes in the minimum wage, but the work of Andrew Seltzer on the minimum wage during the Great Depression in the United States suggests that, at the very least, it ought not be discarded. Britschgi does not provide such evidence; he merely cites anecdotal pieces of support. Not that anecdotes are bad, but those cited come from the kiosk industry – hardly a neutral source.

That said, this is not what I find objectionable in the article. It is the implicit presupposition contained within it: that technological innovation is good.

No, technological innovation is not necessarily good. Firms can use two inputs (capital and labor) and, given prices and return rates, there is an optimal allocation of both. If you change the relative prices of each, you change the optimal allocation. However, absent the regulated price change, the production decisions are optimal. With the regulated price change, the production decisions are the best available under the constraint of working within a suboptimal framework. Thus, you are inducing a rate of technological innovation which is too fast relative to the optimal rate.
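To make the relative-price logic explicit, here is a minimal sketch under a standard textbook Cobb-Douglas assumption (the functional form and all the numbers are mine, chosen purely for illustration): the cost-minimizing capital-labor ratio depends only on the ratio of the wage to the rental rate of capital, so a mandated wage increase mechanically pushes firms toward more capital per worker even though nothing about the underlying technology or preferences has changed.

```python
def optimal_capital_labor_ratio(wage, rental_rate, alpha=0.3):
    """Cost-minimizing K/L for a Cobb-Douglas technology Y = K**alpha * L**(1 - alpha).
    Equating the marginal rate of technical substitution to w/r gives
    K/L = (alpha / (1 - alpha)) * (w / r)."""
    return (alpha / (1 - alpha)) * (wage / rental_rate)

# Illustrative numbers only: rental rate of capital held fixed, wage pushed up 15%.
r = 1.0
for w in (10.0, 11.5):
    print(f"wage = {w:5.2f} -> optimal K/L = {optimal_capital_labor_ratio(w, r):.2f}")
```

The second ratio is ‘optimal’ only conditional on the regulated wage; relative to the undistorted prices it embodies too much capital per worker, which is the sense in which the induced substitution and innovation are too fast.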

You may think that this is a little Luddite of me to say, but it is not. It is a complement to the idea that there is “skill-biased” technological change (see notably this article by Daron Acemoglu and this one by Bekman et al.). If the regulated wage change affects a particular segment of the labor force (say, the unskilled portion – e.g. those working in fast-food restaurants), it changes the optimal quantity of that labor to hire. Sure, it bumps up demand for certain types of workers (e.g. machine designers and repairmen), but it is still suboptimal. One should not presuppose that technological change is, ipso facto, good. What matters is the “optimal” rate of change. In this case, one can argue that the minimum wage (if pushed up too high) induces a rate of technological change that is too fast and works to the disfavor of unskilled workers.

As such, yes, the artificial spurring of technological change should not be deemed desirable!

On “strawmanning” some people and inequality

For some years now, I have been interested in the topic of inequality. One of the angles that I have pursued is a purely empirical one in which I attempt to improve measurements. This angle has yielded two papers (one of which is still in progress while the other is still in want of a home) that reconsider the shape of the U-curve of income inequality in the United States since circa 1900.

The other angle that I have pursued is more theoretical and grows out of the work of Gordon Tullock on income redistribution. That line of research makes a simple point: some inequalities are, in normative terms, worrisome while others are not. The income inequality stemming from the career choices of a Benedictine monk and a hedge fund banker is not worrisome. The income inequality stemming from being a prisoner of one’s birth or from rent-seekers shaping rules in their favor is worrisome. Moreover, some interventions meant to remedy inequalities might actually make things worse in the long run (some articles even find that taxing income for the sake of redistribution may increase inequality if certain conditions are present – see here). I have two articles on this (one forthcoming, the other already published) and a paper still in progress (with Rosolino Candela), but they are merely an extension of the work of the aforementioned Gordon Tullock and of other economists like Randall Holcombe, William Watson and Vito Tanzi. After all, the point that a “first, do no harm” policy toward inequality might be more productive is not novel (all that it needs is a deep exploration and a robust exposition).

Notice that there is an implicit assumption in this line of research: inequality is a topic worth studying. This is why I am annoyed by statements like those that Gabriel Zucman made to ProMarket. When asked if he was getting pushback for his research on inequality (which is novel and very important), Zucman answers the following:

Of course, yes. I get pushback, let’s say not as much on the substance oftentimes as on the approach. Some people in economics feel that economics should be only about efficiency, and that talking about distributional issues and inequality is not what economists should be doing, that it’s something that politicians should be doing.

This is “strawmanning”. There is no economist who thinks inequality is not a worthwhile topic. Literally none. True, economists’ interest in the topic may have waned for some years, but it never became a secondary topic. Major articles were published in major journals throughout the 1990s (which is often identified as a low point in the literature) – most of them groundbreaking enough to propel the topic forward a mere decade later. This should not be surprising given the heavy ideological and normative ramifications of studying inequality. The topic is so important to all social sciences that no one disregards it. As such, who are these “some people” that Zucman alludes to?

I assume that “some people” are strawman substitutes for those who, while agreeing that inequality is an important topic, disagree with the policy prescriptions and the normative implications that Zucman draws from his work. The group most “hostile” to the arguments of Zucman (and others such as Piketty, Saez, Atkinson and Stiglitz) is the one that stems from the public choice tradition. Yet economists in the public choice tradition probably give distributional issues a more central role in their research than Zucman does. They care about institutional arrangements and the rules of the game in determining outcomes. The very concept of rent-seeking, so essential to public choice theory, relates to how distributional coalitions can emerge to shape the rules of the game so as to redistribute wealth from X to Y in ways that are socially counterproductive. As such, rent-seeking is essentially a concept about distributional issues, and one intimately related to efficiency.

The argument Zucman makes to bolster his own claim is one of the reasons why I am cynical about the times we live in. It denotes a certain tribalism that demonizes the “other side” in order to avoid engaging with it. That tribalism, I believe (but I may be wrong), is more prevalent than in the not-so-distant past. Strawmanning only makes the problem worse.

Low-Quality Publications and Academic Competition

In the last few days, the economics blogosphere (and twitterverse) has been discussing this paper in the Journal of Economic Psychology. Simply put, the article argues that economists discount “bad journals” so that a researcher with ten articles in low-ranked and mid-ranked journals will be valued less than a researcher with two or three articles in highly-ranked journals.

Some economists (see notably Jared Rubin here) made insightful comments about this article. However, there is one comment by Trevon Logan that gives me a chance to make a point that I have been mulling over for some time. As I do not want to paraphrase Trevon, here is the part of his comment that interests me:

many of us (note: I assume he refers to economists) simply do not read and therefore outsource our scholarly opinions of others to editors and referees who are an extraordinarily homogeneous and biased bunch

There are two interrelated components to this comment. The first is that economists tend to avoid reading about minute details. The second is that economists tend to delegate this task to gatekeepers of knowledge – in this case, the editors of top journals. Why do economists act this way? More precisely, what are the incentives to act this way? After all, as Adam Smith once remarked, the professors at Edinburgh and Oxford were of equal skill, but the former produced the best seminars in Europe because their incomes depended on registrations and tuition while the latter relied on long-established endowments. Same skills, different incentives, different outcomes.

My answer is this: the competition that existed in the field of economics in the 1960s-1980s has disappeared. In “those” days, the top universities such as Princeton, Harvard, MIT and Yale were a more or less homogeneous group in terms of their core economics. Let’s call those the “incumbents”. They faced strong challenges from UCLA, Chicago, Virginia and Minnesota. These challengers attacked the core principles of what was seen as the orthodoxy in antitrust (see the works of Harold Demsetz, Armen Alchian, Henry Manne), macroeconomics (the Lucas critique, the islands model, New Classical economics), political economy (see the works of James Buchanan, Gordon Tullock, Elinor Ostrom, Albert Breton, Charles Plott) and microeconomics (Ronald Coase). These challenges forced the discipline to incorporate many of the challengers’ insights into the literature. The best example would be the New Keynesian synthesis formulated by Mankiw in response to the works of people like Ed Prescott and Robert Lucas. In those days, “top” economists had to respond to articles published in “lower-ranked” journals such as Economic Inquiry, the Journal of Law and Economics and Public Choice (all of which rose in standing because they were bringing competition – consider that Ronald Coase published most of his great pieces in the JL&E).

In that game, economists were checking one another and imposing discipline upon each other. More importantly, to paraphrase Gordon Tullock in his Organization of Inquiry, their curiosity was subjected to social guidance generated from within the community:

He (the economist) is normally interested in the approval of his peers and hence will usually consciously shape his research into a project which will pique other scientists’ curiosity as well as his own.

Is there such a game today? If in 1980 one could easily answer “Chicago” to the question “which economics department challenges Harvard in terms of research questions and answers?”, things are not so clear today. As research needs to happen within a network where the marginal benefits may increase with size (up to a point), where are the competing networks in economics?

And that is my point: absent this competition (well, I should not say absent – it is more precise to speak of weaker competition), there is little incentive to read, to explore other fields for insights, or to accept challenges. It is far more reasonable, in such a case, to divest oneself of that burden and delegate the task to editors. This only reinforces the problem, as the gatekeepers get to limit the chances that a viable competing network will emerge.

So, when Trevon rightfully bemoans the situation, I answer that maybe it is time we consider why we act this way: the incentives have numbed our critical minds.

Lunchtime Links

  1. Interview with a secessionist
  2. Ducking questions about capitalism
  3. The perverse seductiveness of Fernando Pessoa
  4. “Yet in this simple task, a doffer in the USA doffed 6 times as much per hour as an adult Indian doffer.”
  5. Conflicted thoughts on women in medicine
  6. The Devil You Know vs The Market For Lemons (car problems)

On Financial Repression and ♀ Labor Supply after 1945

I just came back from the Economic History Association meeting in San Jose. There are so many papers worth mentioning (and many got my brain going; see notably the work of Nuno Palma on monetary neutrality after the “discovery” of the New World). However, the thing that really had me thinking was the panel featuring Barry Eichengreen and Carmen Reinhart (whose remarks were an early echo of the keynote speech by Michael Bordo).

Here’s why: Barry Eichengreen seemed uncomfortable with the current state of affairs regarding financial regulation and pointed out that the postwar period was marked by rapid growth and strong financial regulation. Then, Reinhart and Bordo emphasized the role of financial repression in depressing growth – notably in the period praised by Eichengreen. I have priors that make me more favorable to the Reinhart-Bordo position, but I can’t really deny the point made by Eichengreen.

This had me thinking for some time during and after the talks. Both positions are hard to contest, but they are mutually exclusive. True, it is possible that growth was strong in spite of financial repression, but one can argue that by creating some stability, regulations actually improved growth in a way that outweighed the negative effects of repression. But could there be another explanation?

Elsewhere on this blog, I have pointed out that I am not convinced that the Thirty Glorious were that “Glorious”. In line with my Unified Growth Theory inclinations (don’t put me in that camp, but don’t exclude me from it either; I am still cautious on this), I believe that we need to account for demographic factors that foil long-term comparisons. For example, in a paper on Canadian economic growth, I pointed out that growth from 1870 to today is much more modest once we divide output by household-size population rather than overall population (see the blog post here that highlights my paper). Later, I pointed out the ideas behind another paper (which I am still writing and for which I need more data, notably to replicate something like this paper) regarding the role of the unmeasured household economy. There, I argued that the shift of women from the household to the market leads us to over-measure the actual increase in output. After all, to arrive at the net value of increased labor force participation, one must deduct the value of foregone output in the household – something we know little about in spite of the work of people like Valerie Ramey.
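A deliberately stylized arithmetic sketch of the point (all numbers are invented for illustration, not estimates from my papers): if part of the postwar rise in measured market output is simply household production being relabeled as market production, then market GDP growth overstates the growth of total output.

```python
# Invented numbers for illustration only.
market_start, household_start = 100.0, 40.0   # measured GDP and unmeasured household output
market_end, household_end = 180.0, 20.0       # women shift hours from household to market work

measured_growth = market_end / market_start - 1
total_growth = (market_end + household_end) / (market_start + household_start) - 1

print(f"measured (market-only) growth: {measured_growth:.0%}")   # 80%
print(f"growth including household output: {total_growth:.0%}")  # roughly 43%
```

The gap between the two figures is the over-measurement described above; putting an actual number on it requires estimates of household output, which is precisely the data I am still missing.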

Both these factors suggest the need for corrections based on demographic changes to better reflect actual living standards. These demographic changes were most pronounced in the 1945-1975 era – the era of rapid growth highlighted by Eichengreen and of financial repression highlighted by Reinhart and Bordo. If these changes were most momentous in that period, it is fair to say that the measurement errors they induce are also largest in that era.

So, simply put, could it be that these were not years of rapid growth but rather years of modest growth whose pace was overestimated? If so, that would put the clash of ideas between Bordo-Reinhart and Eichengreen in a different light – albeit one more favorable to the former than the latter.

But hey, this is me speculating about where research could be oriented to guide some deeply relevant policy questions.

Minimum Wages: Where to Look for Evidence (A Reply)

Yesterday, here at Notes on Liberty, Nicolas Cachanosky blogged about the minimum wage. His point was fairly simple: criticisms of certain research designs that use limited samples can be economically irrelevant.

For context, he was blogging about one of the criticisms made of the Seattle minimum wage study produced by researchers at the University of Washington, namely that the sample was limited to “small” employers. This criticism, Nicolas argues, is irrelevant since the researchers were looking at those most likely to be heavily affected by the minimum wage increase: it is among the least efficient firms that the effects will be most heavily concentrated. In other words, what is the point of looking at Costco or Walmart, which are more likely to survive than Uncle Joe’s store? That is Nicolas’ point in defense of the study.

I disagree with Nicolas here, and this is because I agree with him (I know, it sounds confusing, but bear with me).

The reason is simple: firms react differently to the same shock. Costs are costs, productivity is productivity, but the constraints are never exactly the same. For example, if I am a small employer and the minimum wage is increased 15%, why would I fire one of my two employees to adjust? If that were my reaction, I would sacrifice roughly half of my output to offset a 15% increase in wages, which make up the majority but not the totality of my costs. Using that margin of adjustment would make little sense given the constraint of my firm’s size. I would be more tempted to cut hours, cut benefits, cut quality, substitute between workers, or raise prices (depending on the elasticity of demand for my services). However, if I am a large firm of 10,000 employees, sacking one worker is an easy margin to adjust on since I am not constrained as much as the small firm. In that situation, a large firm might be tempted to adjust on that margin rather than cut quality or raise prices. Basically, firms respond to higher labor costs (not accompanied by greater productivity) in different ways.
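A back-of-the-envelope sketch of that asymmetry (the numbers are invented, and output is crudely assumed proportional to headcount): for a two-person shop, cutting one worker sacrifices a huge share of output to offset a 15 percent increase in only part of total costs, whereas for a very large firm headcount is a fine-grained margin of adjustment.

```python
def output_share_lost(workers_cut, workforce):
    """Share of output lost if output is roughly proportional to labor
    (a deliberately crude simplification for illustration)."""
    return workers_cut / workforce

wage_hike = 0.15  # the hypothetical 15% minimum wage increase discussed above

for workforce in (2, 10_000):
    lost = output_share_lost(1, workforce)
    print(f"{workforce:>6} workers: firing one sacrifices {lost:.2%} of output "
          f"to offset a {wage_hike:.0%} rise in part of total costs")
```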

By concentrating on small firms, the authors of the Seattle study were concentrating on a group that had, probably, a more homogeneous set of constraints and responses. In their case, they were looking at hours worked. Had they blended in larger firms, they would have mixed in firms less likely to adjust by compressing hours and more likely to adjust by compressing the workforce.

This is why the UW study is so interesting in terms of research design: it focused like a laser on one adjustment channel in the group most likely to respond through that channel. If one reads the paper attentively, it is clear that this is the aim of the authors – to better document this element of the minimum wage literature. If one seeks to measure exhaustively the costs of the policy, one would need a much wider research design to reflect the wide array of adjustments available to employers (and workers).

In short, Nicolas is right that research designs matter, but he is wrong about the nature of the criticism of the UW study: it is really an instance of pro-minimum-wage-hike pundits putting the hockey puck into their own net!