Midweek Reader: The Drug War, the Opioid Crisis, and the Moral Hazard of Overdose Treatment

Today, I’m reviving an old series I attempted to start last year that never came to fruition: the Midweek Reader, a micro-blogging series in which I link to related stories that together provide deeper insight into an issue. This week, we’re looking at the relationship between the opioid crisis and the drug war, and at the academic debate around a controversial paper finding moral hazard in policies that try to increase access to Naloxone.

  • At Harper’s Magazine, Brian Gladstone has a fantastic long-form piece looking into how attempts to crack down on opioid addiction by targeting prescription pain medications have left many patients behind, and questioning the mainstream narrative that the rise of opioids was driven primarily by pain prescriptions. A slice:

    Yet even the most basic elements of this disaster remain unclear. For while it’s true that the past three decades saw a staggering upsurge in the prescribing of opioid medication, this trend peaked in 2010 and has been declining since: high-dose prescriptions fell by 41 percent between 2010 and 2015. The question, then, is why overdose deaths continue to skyrocket, rising 37 percent over the same period — and whether restricting access to regulated drugs is actually pushing people toward more lethal, unregulated ones, such as fentanyl, heroin, and carfentanil, a synthetic opioid 10,000 times stronger than morphine.

  • Similarly, at the Cato Institute, Jeffrey A. Singer has a good piece exploring the relationship between America’s War on Drugs and the rise of opioid addiction. He concludes:

    Meanwhile, President Trump and most state and local policymakers remain stuck on the misguided notion that the way to stem the overdose rate is to clamp down on the number and dose of opioids that doctors can prescribe to their patients in pain, and to curtail opioid production by the nation’s pharmaceutical manufacturers. And while patients are made to suffer needlessly as doctors, fearing a visit from a DEA agent, are cutting them off from relief, the overdose rate continues to climb.

  • At Vox, philosopher Brendan de Kenessey of Harvard has a piece exploring the philosophy of the self and of rational choice, arguing that it is wrong to treat drug addiction as a moral failure. A slice:

    We tend to view addiction as a moral failure because we are in the grip of a simple but misleading answer to one of the oldest questions of philosophy: Do people always do what they think is best? In other words, do our actions always reflect our beliefs and values? When someone with addiction chooses to take drugs, does this show us what she truly cares about — or might something more complicated be going on?

  • An econometrics working paper released earlier this month by Jennifer L. Doleac of the University of Virginia and Anita Mukherjee of the University of Wisconsin, which sparked spirited discussion, investigates the link between opioid use and laws increasing access to Naloxone. They find that the laws increased measures of opioid use but did not reduce mortality, which they theorize is because Naloxone creates moral hazard for addicts by reducing the potential costs of an overdose. However, they conclude:

    Our findings do not necessarily imply that we should stop making Naloxone available to individuals suffering from opioid addiction, or those who are at risk of overdose. They do imply that the public health community should acknowledge and prepare for the behavioral effects we find here. Our results show that broad Naloxone access may be limited in its ability to reduce the epidemic’s death toll because not only does it not address the root causes of addiction, but it may exacerbate them. Looking forward, our results suggest that Naloxone’s effects may depend on the availability of local drug treatment: when treatment is available to people who need help overcoming their addiction, broad Naloxone access results in more beneficial effects. Increasing access to drug treatment, then, might be a necessary complement to Naloxone access in curbing the opioid overdose epidemic.

  • Alex Gertner, a PhD candidate at UNC-Chapel Hill, published a criticism of Doleac and Mukherjee at Vox, pointing out that the link in their data between Naloxone and opioid-related hospital visits is not necessarily due to a causal story involving moral hazard:

    The authors find that naloxone access laws lead to more opioid-related emergency department visits, the premise being that naloxone access laws increase opioid overdoses. But there’s a far more likely explanation: People are generally instructed to seek medical care for overdose after receiving naloxone.

    Overdose is a general term to describe experiencing the toxic effects of drugs. People can overdose, and often do, without either dying or seeking medical attention. If people who would otherwise overdose without medical attention are instead using naloxone and going to emergency rooms, that’s a good thing.

  • The widest-ranging and most thorough critique of Doleac and Mukherjee comes from Frank, Pollack, and Humphreys at Health Affairs. They argue that the original authors (1) assume a more immediate effect of changes in Naloxone laws than is probably warranted, and (2) ignore a variety of confounding variables, such as Medicaid expansion. They conclude:

    We believe the best interpretation of Doleac and Mukherjee’s findings is that their main treatment variable—naloxone laws—thus far have had little impact on naloxone use or nonmedical opioid use during the period studied. This disappointing pattern commands attention and follow-up from both public health practitioners and public health researchers.

On the point of quantifying in general and quantifying for policy purposes

Recently, I stumbled on this piece in the Chronicle by Jerry Muller. It made my blood boil. In it, the author argues that, in the world of education, we are fixated on quantitative indicators of performance, and that this fixation has led us to miss (or forget) some important truths about education and the transmission of knowledge. I wholeheartedly disagree, because the author is conflating two things.

We need to measure things! Measurements are crucial to our understanding of causal relations and outcomes. Like Diane Coyle, I am a big fan of the “dashboard” of indicators to get an idea of what is broadly happening. However, I agree with the author that very often statistics lose their meaning entirely. And that happens when we start targeting them!

Once a variable becomes a target, we act in ways that increase it. As soon as it is selected, we modify our behavior to hit the fixed target, and the variable loses some of its meaning. This is known as Goodhart’s law, whereby “when a measure becomes a target, it ceases to be a good measure” (note: it also looks a lot like the Lucas critique).

Although Goodhart made this point in the context of monetary policy, it applies to any sphere of policy, including education. When an education department decides that a given metric is the one it cares about (e.g. completion rates, minority admissions, average grade point, completion times, balanced curriculum, ratio of professors to pupils, etc.), it induces a change in behavior that alters the significance carried by that variable. This is not an original point. Just go to Google Scholar and type “Goodhart’s law and education” and you end up with papers such as these two (here and here) that make exactly the point I am making here.
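A toy simulation makes the mechanism concrete. Everything here is hypothetical and purely for illustration: once the metric becomes the target, it can be moved without any change in the underlying quality it was supposed to track.

```python
# Toy illustration of Goodhart's law (all numbers hypothetical).
# Before targeting: the completion rate is a passive measurement that
# tracks teaching quality. After targeting: schools can also raise
# completion by easing standards, which does nothing for quality.

def completion_rate(quality, standards_easing):
    # Completion responds both to real quality AND to easier grading.
    return min(1.0, 0.5 + 0.4 * quality + 0.3 * standards_easing)

quality = 0.5  # underlying quality never changes in this example

# Phase 1: the metric is merely observed; no incentive to ease standards.
observed = completion_rate(quality, standards_easing=0.0)

# Phase 2: funding now depends on the metric; easing standards is the
# cheapest way to move it, so the metric rises while quality stands still.
gamed = completion_rate(quality, standards_easing=1.0)

print(f"metric before targeting: {observed:.2f}")  # 0.70
print(f"metric after targeting:  {gamed:.2f}")     # 1.00
```

The point of the sketch is only that the measured variable responds to the gaming channel as well as to the real one, so its meaning changes the moment it is selected as a target.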

In his Chronicle piece, Muller actually makes note of this without realizing how important it is. He notes that “what the advocates of greater accountability metrics overlook is how the increasing cost of college is due in part to the expanding cadres of administrators, many of whom are required to comply with government mandates” (emphasis mine).

The problem he is complaining about is not metrics per se, but rather the effects of having policy-makers decide which metric is relevant. This is a problem of selection, not of measurement. If statistics are collected without the intent to serve as a benchmark for the attribution of funds or special privileges (i.e. if there are no incentives to change the behavior that affects the reporting of a particular statistic), then there is no problem.

I understand that complaining about a “tyranny of metrics” is fashionable, but in that case the fashion looks like crocs (and I really hate crocs) with white socks.

Minimum Wages: Where to Look for Evidence (A Reply)

Yesterday, here at Notes on Liberty, Nicolas Cachanosky blogged about the minimum wage. His point was fairly simple: criticisms of certain research designs that use limited samples can be economically irrelevant.

To put you in context: he was blogging about one of the criticisms leveled at the Seattle minimum wage study produced by researchers at the University of Washington, namely that the sample was limited to “small” employers. This criticism, Nicolas argues, is irrelevant, since the researchers were looking at those most likely to be heavily affected by the minimum wage increase: the effects will be concentrated among the least efficient firms. In other words, what is the point of looking at Costco or Walmart, which are more likely to survive than Uncle Joe’s store? Such is Nicolas’ point in defense of the study.

I disagree with Nicolas here, and it is because I agree with him (I know, it sounds confusing, but bear with me).

The reason is simple: firms react differently to the same shock. Costs are costs and productivity is productivity, but the constraints are never exactly the same. For example, if I am a small employer with two workers and the minimum wage increases 15%, why would I fire one of them to adjust? I would be sacrificing half my output to offset a 15% increase in wages, which make up most, but not all, of my costs. Using that margin of adjustment would be senseless given the constraint of my firm’s size. I would be more tempted to cut hours, cut benefits, cut quality, substitute between workers, or raise prices (depending on the elasticity of demand for my services). However, if I am a large firm of 10,000 employees, sacking one worker is an easy margin of adjustment, since I am not constrained as much as the small firm. In that situation, a large firm might adjust on that margin rather than cut quality or raise prices. Basically, firms respond to higher labor costs (not accompanied by greater productivity) in different ways.
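The asymmetry can be put in back-of-the-envelope form. The numbers below are hypothetical, and output is assumed roughly proportional to headcount just to make the contrast stark:

```python
# Toy comparison of adjustment margins (all numbers hypothetical).
# A 15% minimum-wage hike raises labor costs for both firms, but the
# cost of the "fire one worker" margin differs wildly with firm size.

wage_hike = 0.15  # proportional increase in the wage bill

for name, workers in [("small firm", 2), ("large firm", 10_000)]:
    # Assume output is roughly proportional to headcount.
    output_lost_by_firing_one = 1 / workers
    print(f"{name}: labor cost up {wage_hike:.0%}, "
          f"firing one worker sacrifices {output_lost_by_firing_one:.2%} of output")
```

For the two-worker firm, dismissing one worker costs 50% of output to offset a 15% cost shock, so it reaches for other margins (hours, benefits, quality, prices); for the 10,000-worker firm, the same action costs a hundredth of a percent of output, making headcount the easy margin.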

By concentrating on small firms, the authors of the Seattle study were concentrating on a group with a relatively homogeneous set of constraints and responses. In their case, they were looking at hours worked. Had they blended in the larger firms, they would have picked up firms that adjust less by compressing hours and more by shrinking their workforce.

This is why the UW study is so interesting in terms of research design: it focused like a laser on one adjustment channel in the group most likely to respond in that manner. If one reads the paper attentively, it is clear that this was the authors’ aim: to better document this element of the minimum wage literature. If one seeks to exhaustively measure the costs of the policy, one would need a much wider research design to reflect the wide array of adjustments available to employers (and workers).

In short, Nicolas is right that research designs matter, but he is wrong in that this criticism of the UW study is really an instance of pro-minimum-wage-hike pundits putting the hockey puck into their own net!

On Borjas, Data and More Data

I see my craft as an economic historian as a dual mission. The first is to answer historical questions by using economic theory (and, in the process, to enliven economic theory through the use of history). The second relates to my obsessive-compulsive nature, which can be observed in how much attention and care I give to getting the data right. My co-authors have often observed me “freaking out” over a possible improvement in data quality, or being plagued by doubts over whether or not I had gone “one assumption too far” (a pun on A Bridge Too Far). Sometimes, I wish more economists would follow my historian-like freakouts over data quality. Why?

Because of this!

In that paper, Michael Clemens (whom I secretly admire — not so secretly now that I have written it on a blog) criticizes the recent paper by George Borjas showing a negative effect of immigration on the wages of workers without a high school degree. Using the famous Mariel boatlift of 1980, Clemens shows that, at the same time as the boatlift, the US Census Bureau moved to add more black workers without high school degrees to its survey data. This previously underrepresented group surged in importance within the sample. Since that group had lower wages than the average of the wider group of workers without high school degrees, a composition effect caused measured wages to fall (in appearance only). A composition effect of this kind is a bias producing an artificial drop in wages, and it drove the results produced by Borjas (and cast unwarranted doubt on the conclusion of David Card’s original paper, to which Borjas was replying).
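The mechanism is easy to see in a toy calculation. All numbers here are hypothetical, not drawn from the Census data: a weighted average falls when the sampling weight of a lower-wage subgroup rises, even though no individual wage changes.

```python
# Toy illustration of a composition effect (all numbers hypothetical).
# No individual's wage changes, yet the measured group average falls
# once a previously underrepresented low-wage subgroup is sampled
# more heavily.

def average_wage(groups):
    # groups: list of (sample_share, mean_wage) pairs; shares sum to 1.
    return sum(share * wage for share, wage in groups)

# Before: the survey underrepresents the lower-wage subgroup.
before = average_wage([(0.9, 10.0), (0.1, 7.0)])

# After: the subgroup's survey share surges; every wage is identical,
# but the measured average drops.
after = average_wage([(0.7, 10.0), (0.3, 7.0)])

print(f"measured average before: {before:.2f}")  # 9.70
print(f"measured average after:  {after:.2f}")   # 9.10
```

An apparent 6% wage decline appears out of nothing but a change in who gets surveyed, which is exactly why details of the data source matter so much.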

This is a cautionary tale about the limits of econometrics. After all, a regression is only as good as the data it uses and its suitability to the question it seeks to answer. Sometimes, simple Ordinary Least Squares are excellent tools: when the question is broad and/or the data is excellent, OLS can be sufficient for a viable answer. However, the narrower the question (e.g. is there an effect of immigration only on unskilled, low-education workers?), the better the method has to be. The problem is that better methods often require better data as well, and to obtain the latter, one must know the details of a data source. This is why I am nuts over data accuracy. Even small things matter in these cases, like a shift in the representation of blacks in survey data. Otherwise, you end up with your results being reversed by very minor changes (see this paper in the Journal of Economic Methodology for examples).

This is why I freak out over data. Maybe I can make two suggestions about sharing my freak-outs.

The first is to prefer a ratio skewed toward data quality over advanced methods (i.e. simple methods, excellent data). This reduces the chances of being criticized for relying on weak assumptions. The second is to take a leaf out of the historians’ book. While historians are often averse to advanced statistical techniques (I remember a case where I had to explain panel data regressions to historians, which ended terribly for me), they are very respectful of data sources. I have seen historians nurture datasets for years before being willing to present them. When published, these datasets generally stand up to scrutiny because of the extensive wealth of detail compiled.

That’s it, folks.


On doing economic history

I admit to being a happy man. While I am in general a smiling sort of fellow, I was delightfully giggling with joy upon hearing that another economic historian (and a fellow Canadian from the LSE to boot), Dave Donaldson, won the John Bates Clark medal. I dare say it was about time. Nonetheless, I think it is time to talk to economists about how to do economic history (and why more should do it). Basically, I argue that the necessities of the trade require a longer period of maturation and a considerable amount of hard work. Yet, once the economic historian arrives at maturity, he produces long-lasting research which (in the words of Douglass North) uses history to bring theory to life.

Economic History is the Application of all Fields of Economics

Economics is a deductive science through which axiomatic statements about human behavior are derived. For example, stating that the demand curve is downward-sloping is an axiomatic statement. No economist ever needed to measure quantities and prices to say that if the price increases, all else being equal, the quantity will drop. As such, economic theory needs to be internally consistent (i.e. not argue that higher prices mean both smaller and greater quantities of goods consumed all else being equal).

However, the application of these axiomatic statements depends largely on the question asked. For example, I am currently doing work on the 19th-century Canadian institution of seigneurial tenure. In that work, I question the role that seigneurial tenure played in hindering economic development. In the existing literature, the general argument is that the seigneurs (i.e. the landlords) hindered development by taxing (as per their legal rights) a large share of net agricultural output. This prevented the accumulation of savings which, in times of imperfect capital markets, were needed to finance investments in capital-intensive agriculture. That literature invokes one corpus of axiomatic statements, those of capital theory. For my part, I argue that the system, because of a series of monopoly rights, was actually a monopsony system through which the landlords restrained their demand for labor on the non-farm labor market and depressed wages. My argument invokes the corpus of axioms related to industrial organization and monopsony theory. Both explanations are internally consistent (there are no self-contradictions). Yet one must be more relevant to the question of whether the institution hindered growth, and one must square better with the observed facts.
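To see what the monopsony axioms predict, here is a minimal numerical sketch. The linear supply curve and all parameter values are hypothetical, not drawn from the Canadian data: a single buyer of labor facing upward-sloping labor supply hires fewer workers at a lower wage than a competitive market would.

```python
# Minimal monopsony sketch (hypothetical linear parameters).
# Labor supply: w(L) = a + b*L; each worker produces value v.
# A competitive market hires until w(L) = v. A monopsonist equates
# marginal factor cost (a + 2*b*L, since raising employment raises
# the wage paid to everyone) with v, hiring fewer workers at a
# lower wage -- the mechanism invoked for the seigneurs.

a, b, v = 2.0, 1.0, 10.0

# Competitive benchmark: hire until the wage equals worker value.
L_comp = (v - a) / b
w_comp = a + b * L_comp

# Monopsony: hire until marginal factor cost equals worker value.
L_mono = (v - a) / (2 * b)
w_mono = a + b * L_mono

print(f"competitive: L = {L_comp:.1f}, w = {w_comp:.1f}")  # L = 8.0, w = 10.0
print(f"monopsony:   L = {L_mono:.1f}, w = {w_mono:.1f}")  # L = 4.0, w = 6.0
```

Under these assumed numbers, employment is halved and the wage is pushed well below the worker’s value: lower non-farm wages without any tax on farm output, which is why the two internally consistent theories have different observable implications.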

And that is economic history properly done. It tries to answer which theory is relevant to the question asked. The purpose of economic history is thus to find which theories matter the most.

Take the case of asymmetric information. The seminal work of Akerlof on the market for lemons laid out a consistent theory, but subsequent waves of research (notably my favorite, here, by Eric Bond) have shown that the stylized predictions of this theory rarely materialize. Why? Because the theory of signaling suggests that individuals will find ways to invest in a “signal” to solve the problem. These are two competing theories (signaling versus asymmetric information), and one seems to win out over the other. An economic historian tries to sort out what mattered to a particular event.

Now, take these last few paragraphs and replace the words “economic historian” with “economist”. I believe that no economist would disagree with the definition of the economist’s tasks that I offered. So why would an economic historian be different? Everything that has happened is history, and every question about it must be answered by sifting for the theories relevant to the event studied (under the constraint that the theory be consistent). Every economist is an economic historian.

As such, the economic historian/economist must use the advanced tools of econometrics: synthetic controls, instrumental variables, proper identification strategies, vector auto-regressions, cointegration, variance analysis, and everything else you can think of. He needs them to answer the questions he poses. The only difference is that the economic historian looks further back in the past.

The problem with this systematic approach is the effort required of practitioners. There is a need to understand, intuitively, a wide body of literature on price theory, statistical theory and tools, accounting (for understanding national accounts), and political economy. This takes many years of training, and I can take my own case as an example. I force myself to read one scientific article outside my main fields of interest every week in order to create a mental repository of theoretical insights I can exploit. Since I entered university in 2006, I have forced myself to read theoretical books on the margin of my comfort zone. For example, University Economics by Allen and Alchian was one of my favorite discoveries, as it introduced me to the UCLA approach to price theory and changed my way of understanding firms and the decisions they make. Reading some works on Keynesian theory (I will confess that I have never been able to finish the General Theory) made me more respectful of some core insights of that body of literature. In the process of reading all this, I created lists of theoretical key points the way one would accumulate kitchen equipment.

This takes a lot of time, patience, and modesty about one’s accumulated stock of knowledge. But these theories never meant anything to me without application to deeper questions. After all, debating the theory of price stickiness without actually asking whether it mattered is akin to debating with theologians about the gender of angels (I vote that they are genderless, and since they are fictitious, I don’t give a flying hoot’nanny). This is because I really buy into the claim made by Douglass North that theory is brought to life by history (and that history is explained by theory).

On the Practice of Economic History

So, how do we practice economic history? The first step is to find questions that matter. The second is to invest time in collecting the inputs of production.

While accumulating theoretical insights, I also made lists of historical questions that were still debated. Basically, I have kept lists of research questions since I was an undergraduate student (not kidding here), and I keep every item on the list until I am satisfied with my answer and/or the question has been convincingly resolved.

One of my criteria for selecting a question is that it must relate to an issue that is relevant to understanding why certain societies are where they are now. For example, I have been delving into the issue of the agricultural crisis in Canada during the early decades of the 19th century. Why? Because most historians attribute (wrongly, in my opinion) a key role to this crisis in the creation of the Canadian confederation, the migration of the French-Canadians to the United States, and the politics of Canada to this day. Another debate I have been involved in relates to the Quiet Revolution in Québec (see my book here), which is argued to be a watershed moment in the history of the province. According to many, it marked a breaking point when Quebec caught up dramatically with the rest of Canada (I disagreed and proposed that it actually slowed a rapid convergence already underway in the decade and a half that preceded it). I picked the question because the moment is central to all political narratives presently existing in Quebec, and every politician utters the words “Quiet Revolution” when given the chance.

In both cases, the questions mattered to understanding what Canada was and what it has become. I used theory to sort out what mattered and what did not. As such, I used theory to explain history, and in the process I brought theory to life in a way that was relevant to readers (I hope). The key point is to use theory and history together to bring both to life! That is the craft of the economic historian.

The other difficulty (on top of selecting questions and understanding the theories that may be relevant) for the economic historian is the time-consuming nature of data collection. Economic historians are basically monks (and in my case, I have both the shape and the haircut of Friar Tuck) who patiently collect and assemble new data for research. This is a high fixed cost of entering the trade. In my case, I spent two years in a religious congregation (literally, with religious officials) collecting prices, wages, piece rates, and farm data to create a wide empirical portrait of the Canadian economy. It was a long and arduous process.

However, thanks to the lists of questions I had assembled by reading theory and history, I could see the many avenues of research that assembling the data would open. Armed with some knowledge of what I could do, the data I collected suggested still other questions I could ask. Once I had finished my data collection (18 months), I had assembled a roadmap of twenty-something papers answering a wide array of questions on Canadian economic history: Was there an agricultural crisis? Were French-Canadians the inefficient farmers they were portrayed to be? Why did the British tolerate Catholic and French institutions when they conquered French Canada? Did seigneurial tenure explain the poverty of French Canada? Did the conquest of Canada matter to future growth? What was the role of free banking in stimulating growth in Canada? And so on.

It is necessary for the economic historian to collect a ton of data and assemble a large base of theoretical knowledge to guide the data toward relevant questions. For those reasons, the economic historian takes longer to mature. It simply takes more time. Yet, once the maturation is over (I feel that mine is far from over, to be honest), you get scholars like Joel Mokyr, Deirdre McCloskey, Robert Fogel, Douglass North, Barry Weingast, Sheilagh Ogilvie, and Ronald Coase (yes, I consider Coase an economic historian, but that is for another post) who are able to produce work on a wide-ranging set of topics with great depth and understanding.

Conclusion

The craft of the economic historian is one that requires a long period of apprenticeship (there is an inside joke here, sorry about that). It requires heavy investment in theoretical understanding beyond the main field of interest, complemented with a diligent accumulation of potential research questions to guide the efforts at data collection. Yet, in the end, it generates research that is likely to resonate with the wider public and to impact our understanding of theory. History brings theory to life indeed!

What Good Are Mathematical Models for the Science of Economics?

This question comes from co-blogger Warren Gibson’s piece in Econ Journal Watch titled “The Mathematical Romance: An Engineer’s View of Mathematical Economics”:

Mathematics can be very alluring. Professional mathematicians speak frequently of “beauty” and “elegance” in their work. Some say that the central mystery of our universe is its governance by universal mathematical laws. Practitioners of applied math likewise feel special satisfaction when a well-crafted simulation successfully predicts real-world physical behavior.

But while the mathematicians, some of them at least, are explicit about doing math for its own sake, engineers are hired to produce results and economists should be, too. It’s fine if a few specialists labor at the outer mathematical edge of these fields, but the real needs and real satisfactions are to be found in applications.

Western civilization has brought us an explosion of human welfare: prosperity, longevity, education, the arts, and so on. We very much need the wisdom that economists can offer us to help understand and sustain this remarkable record. What good are engineers’ accomplishments in crash simulations if the benefits are denied to the world by trade barriers, stifling regulation, congested highways, or bogus global warming restrictions? What can mathematical economics contribute to such vital issues?

You can read the whole thing here (it’s also in the newly renovated ‘recommendations’ section). Highly, uh, recommended! Mathematics is an important aspect of economics, I think, but Dr. Gibson and other Austrian critics focus more on the models that economists create than on all of the wonderful data that can be quantified for calculation purposes.

I may be wrong, and I hope that my co-bloggers or others will correct me if I am, but either way, Dr. Gibson’s critique of the mathematical models employed in the economics profession deserves another good look.