Libertarianism and Neoliberalism – A difference that matters?

I recently saw a thoroughgoing Twitter conversation between Caleb Brown, whom most of you presumably know from the Cato Daily Podcast, and the Neoliberal Project, an American project founded to promote the ideas of neoliberalism, about the differences between libertarianism and neoliberalism. For those who follow the debate, it is nothing new that the core of this contention goes way beyond an etymological dimension – it concerns one of the most crucial topics in liberal scholarship: the relationship between government and free markets.

Arbitrary categories?

I can understand the aim to further structure the liberal movement into subcategories which represent different types of liberalism. Furthermore, I often use these different subcategories myself to distance my political ideology from liberal schools I do not associate with, such as paleo-libertarianism or anarcho-capitalism. However, I do not see such a distinct line between neoliberalism and libertarianism in practice.

As described by Caleb Brown (and agreed to by the Neoliberal Project), neoliberalism wants to aim the wealth generated by markets at specific social goals using some government mechanism, whilst libertarianism focuses on letting the wealth created by free markets flow where it pleases, so to speak. In my opinion, the “difference” between these schools is rather a spectrum of trust in government measures, with libertarianism on one side and neoliberalism on the other.

I’ve often reached the same point in discussions with fellow liberals:

Neoliberal: I agree that free markets are the most efficient tool to create wealth. They are just not very good at distributing it. By implementing policy X, we could help to correct market failure Y.

Libertarian: Yeah, I agree with you. Markets do not distribute wealth efficiently. However, the government has also done a poor job trying to alleviate the effects of market failures, especially when we look at case Z… (Of course, libertarians bring forth arguments other than public choice, but it is a suitable example.)

After reaching this point, advocating for governmental measures to fix market failures often becomes a moral and personal objective. My favourite example is emissions trading. I am deeply intrigued by the theoretical foundation of the Coase theorem and how market participants can still find a Pareto-efficient equilibrium just by negotiating. Based on this theoretical framework, I would love to see a global market for carbon emissions trading.
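To make the intuition concrete, here is a minimal numerical sketch of Coasean bargaining. The functions and numbers are entirely my own illustration (not drawn from any real market): a polluter's marginal profit falls with each unit of emissions while a neighbour's marginal damage rises, and bargaining under zero transaction costs should settle on the quantity where the two cross, regardless of who holds the legal right.

```python
# A minimal Coase theorem sketch -- illustrative numbers only.
# A factory profits from each unit of emissions; a neighbour is harmed.
# With zero transaction costs, bargaining reaches the quantity that
# maximises joint surplus, whoever holds the initial right.

def marginal_profit(q):
    """Factory's profit from the q-th unit of emissions (declining)."""
    return 10 - 2 * q

def marginal_damage(q):
    """Neighbour's damage from the q-th unit of emissions (rising)."""
    return 1 + q

# Emit as long as a unit creates at least as much profit as damage:
q = 0
while marginal_profit(q + 1) >= marginal_damage(q + 1):
    q += 1

print(f"Pareto-efficient emissions: {q} units")
# If the factory holds the right, the neighbour pays it to cut back to q;
# if the neighbour holds the right, the factory pays for permission to
# emit up to q. The allocation is identical -- only the side payments
# (the distribution of the surplus) differ.
```

A cap-and-trade scheme is, in effect, an attempt to create the missing property right so that this kind of bargaining can happen at scale.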

However, various mistakes were made when emission allowances were implemented. First, there were far too many allowances on the market, which caused the price to drop dangerously low. Additionally, important markets such as air and ship transportation were initially left out. All in all, a policy buttressed by solid theory had a more than rough start due to poor implementation.

At this point, neoliberals and libertarians diverge in their responses. A libertarian sees another failure of the government to implement a well-intended policy, whereas a neoliberal sees a generally good policy that just needs some further improvement. In such cases, the line between neoliberals and libertarians becomes very thin. And from my point of view, we make further decisions based on our trust in the government as well as on our subjective moral relation to the topic.

I have too often seen government fail at tasks that should be left almost entirely to free markets (e.g. industrial policy). However, I have also seen the same government struggling to find an adequate response to climate change. Nevertheless, I believe that officials should carry on with their endeavours to counteract climate change, whereas they should stay out of industrial policy.

Furthermore, in the recent past there has been a tremendous number of libertarian policy proposals that remodel the role of government in a free society: A libertarian case for mandatory vaccination? Alright. A libertarian case for UBI? Not bad. A libertarian case for a border wall? I am not so sure about that one.

Although these examples may define libertarianism in their own contexts, the general message remains clear to me: libertarians are prone to support governmental measures if they rank the value of a specific end higher than the risk of a failed policy. Since an article like this is not the right framework for gathering enough data to prove my point empirically, I rely on the conjecture that the core question of where the government must intervene is heavily driven by subjective moral judgements.

Summary

Neoliberals and Libertarians diverge on the issue of government involvement in the economy. That’s fine.

Governmental policies often do not fully reach their intended goals. That’s also fine.

The distinction between neoliberals and libertarians is largely a matter of how much trust one puts in the government’s ability to cope with problems. Neither school should place too much weight on this distinction, since it is an incredibly subjective one.

Evidence-based policy needs theory

This imaginary scenario is based on an example from my paper with Baljinder Virk, Stella Mascarenhas-Keyes and Nancy Cartwright: ‘Randomized Controlled Trials: How Can We Know “What Works”?’ 

A research group of practically-minded military engineers are trying to work out how to effectively destroy enemy fortifications with a cannon. They are going to be operating in the field in varied circumstances, so they want an approach that has as much general validity as possible. They understand the basic premise of pointing and firing the cannon in the direction of the fortifications. But they find that their cannon balls often fail to hit their targets. They have some idea that varying the vertical angle of the cannon seems to make a difference. So they decide to test fire the cannon in many different cases.

As rigorous empiricists, the research group runs many trial shots with the cannon raised, and also many control shots with the cannon in its ‘treatment as usual’ lower position. They find that raising the cannon often matters. In several of these trials, they find that raising the cannon produces a statistically significant increase in the number of balls that destroy the fortifications. Occasionally, they find the opposite: the control balls perform better than the treatment balls. Sometimes they find that both groups work, or don’t work, about the same. The results are inconsistent, but on average they find that raised cannons hit fortifications a little more often.

A physicist approaches the research group and explains that rather than just trying to vary the height the cannon is pointed in various contexts, she can estimate much more precisely where the cannon should be aimed using the principle of compound motion with some adjustment for wind and air resistance. All the research group need to do is specify the distance to the target and she can produce a trajectory that will hit it. The problem with the physicist’s explanation is that it includes reference to abstract concepts like parabolas, and trigonometric functions like sine and cosine. The research group want to know what works. Her theory does not say whether you should raise or lower the angle of the cannon as a matter of policy. The actual decision depends on the context. They want an answer about what to do, and they would prefer not to get caught up testing physics theories about ultimately unobservable entities while discovering the answer.
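For the curious, here is roughly the calculation the physicist is offering, sketched in the simplest vacuum case. Wind and air resistance, which she also adjusts for, are omitted, and the muzzle speed is an assumption of mine, so treat it as a toy: the point is only that theory turns "raise or lower?" into a precise answer that depends on the distance to the target.

```python
# The physicist's offer, in its simplest (vacuum) form: the range of a
# projectile fired at speed v and elevation theta is
#     R = v**2 * sin(2 * theta) / g,
# so for a target at distance R the required elevation is
#     theta = 0.5 * asin(g * R / v**2).
# Air resistance and wind (which the physicist would correct for) are
# ignored here, and the muzzle speed is a made-up figure.
import math

def elevation_degrees(distance_m, muzzle_speed_ms, g=9.81):
    x = g * distance_m / muzzle_speed_ms ** 2
    if x > 1:
        raise ValueError("target out of range at this muzzle speed")
    return math.degrees(0.5 * math.asin(x))

# A fortification 2 km away, with a (hypothetical) 300 m/s muzzle speed:
print(f"aim at {elevation_degrees(2000, 300):.1f} degrees")  # ~6.3 degrees
```

Whether to "raise or lower the cannon" simply drops out of the formula once the context (the distance) is specified, which is exactly what the trial-based write-up cannot deliver.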

Eventually the research group write up their findings, concluding that firing the cannon pointed with a higher angle can be an effective ‘intervention’ but that whether it does or not depends a great deal on particular contexts. So they suggest that artillery officers will have to bear that in mind when trying to knock down fortifications in the field; but that they should definitely consider raising the cannon if they aren’t hitting the target. In the appendix, they mention the controversial theory of compound motion as a possible explanation for the wide variation in the effectiveness of the treatment effect that should, perhaps, be explored in future studies.

This is an uncharitable caricature of contemporary evidence-based policy (for a more aggressive one, see ‘Parachute use to prevent death and major trauma related to gravitational challenge: systematic review of randomised controlled trials’). Ballistics has well-understood, repeatedly confirmed theories that command consensus among scientists and engineers. The military have no problem learning and applying this theory. Social policy, by contrast, has no theories that come close to that level of consistency. Given the lack of theoretical consensus, it might seem more reasonable to test out practical interventions instead and try to generalize from empirical discoveries. The point of this example is that without theory, empirical researchers struggle to make any serious progress even with comparatively simple problems. The fact that theorizing is difficult or controversial in a particular domain does not make it any less essential a part of the research enterprise.

***

Also relevant: Dylan Wiliam’s quip from this video (around 9:25): ‘a statistician knows that someone with one foot in a bucket of freezing water and the other foot in a bucket of boiling water is not, on average, comfortable.’

Pete Boettke’s discussion of economic theory as an essential lens through which one looks to make the world clearer.

The Impossible Trinity of Liberal Democracy

In the first part of my series on democracy published a few years ago, I made a distinction between four senses in which the term “democracy” is used. To briefly recap, they were: a) a term of empty political praise for policies which partisans like, b) an institutional decision-making process emphasizing the primacy of majoritarian opinion, c) a generic term for the type of procedures which have been prevalent in the west, and d) an overarching term for the ethical commitments of liberals. In that series, I focused on the tension between b) and d), mostly ignoring a) and c). (For present purposes, my highly speculative musings on anarchism are irrelevant.)

In a recent episode of the Ezra Klein Show (which I highly recommend), Harvard political theorist Yascha Mounk and Ezra Klein discussed Mounk’s book The People vs. Democracy: Why Our Freedom Is in Danger and How To Save It and debated how pessimistic we should be about the prospects for the future of American democracy. I don’t really wish to comment on whether we should be pessimistic or not, but I want to make a further distinction that clarifies some of the disagreements and points towards a deeper issue in the workings of democratic institutions. I will argue that democracy consists of a liberal, a majoritarian, and a procedural dimension, and that these dimensions are not reconcilable for very long.

Mounk makes a similar distinction to the one I made between democratic majoritarianism and liberalism as a reason to be pessimistic. Klein tended to push back, focusing on the ways in which modern American political culture is far more ethically liberal than it has ever been, as seen through the decline in racism since the middle of the twentieth century and decline in homophobia since the 1990s. Mounk, however, emphasized how respect for procedure in the American political process has declined during the Trump Era, as evidenced by Trump’s disrespect for the political independence of courts and agencies like the Department of Justice.

However, throughout Klein’s and Mounk’s debate, it became clear that there was another distinction which needed to be made explicitly, and one which I have tended to heavily under-emphasize in my own thinking on the feasibility of democracy. It seems to me there are at least three dimensions by which to judge the functioning of democracies which are important to distinguish:

  1. Majoritarianism—the extent to which a democracy is sensitive to majority public opinion. Democracy, in this dimension, is simply the tendency to translate majority opinion to public policy, as Mounk puts it.
  2. Liberalism—this refers to the ethical content towards which democracies in the west try to strive. This is the extent to which citizens are justly treated as moral equals in society; whether minority religious freedoms are respected, racial and ethnic minorities are allowed equal participation in society (economically and politically), and the extent to which general principles of liberal justice (however they may be interpreted) are enacted.
  3. Legal proceduralism—the extent to which political leaders and citizens respect the political independence of certain procedures. This dimension heavily emphasizes the liberal belief in the rule of law and the primacy of process. This can include law enforcement agencies such as the Department of Justice or the FBI, courts, and respect for the outcomes of elections even when partisan opponents are victorious.

It seems that there are reasons why one would want a democracy to retain all three features. Majoritarianism could be desirable to ensure stability, avoiding populist revolutions and uprisings, and perhaps because one thinks it is just for government to be accountable to citizens. Liberalism, clearly, is desirable to ensure that the society is just. Proceduralism is desirable to maintain the stability of the society given that people have deep political and philosophical disagreements.

Klein and Mounk’s debate, considering this explicit triadic distinction, can be (crudely) seen as Mounk initially emphasizing the tension between majoritarianism and liberalism in modern democracies. Klein pushes back saying that we are more liberal today than we’ve ever been, and perhaps the current majoritarian populist turn towards Trump should be put in context of other far more illiberal majoritarian populist impulses in the past. Mounk’s response seems to be that there’s also been a decline in respect for legal procedure in modern American politics, opening a danger for the instability of American democracy and a possible rise of authoritarianism.

First, it seems to me that both Mounk and Klein overemphasize respect for procedure in the past. As Robert Hasnas has argued, it has never been the case that anyone treats the law as independent simply because “the law is not a body of determinate rules that can be objectively and impersonally applied by judges” and therefore “what the law prescribes is necessarily determined by the normative predispositions of the one who is interpreting it.” There is always an ethical, and even a partisan political, dimension to how one applies procedure. In American history, this can be seen in the ways that courts have very clearly interpreted law in motivated ways to justify a partisan, often illiberal, political view, as in Bowers v. Hardwick. There has always been a tendency for procedures to be applied in partisan ways, from the McCarthyite House Un-American Activities Committee to the FBI’s persecution of civil rights leaders. Indeed, as Hasnas argues, the idea that procedures and laws can be entirely normatively and politically independent is a myth.

It is true, however, that Mounk does present reason to believe that populism makes disrespect for these procedures explicit. Perhaps one can say that while procedural independence is, in a pure sense, a myth, it is a constructive myth that maintains stability. When people believe that elections are not independent, and when Trump disrespects the independence of courts and of the justice system, those institutions can disintegrate into nothing but a Carl Schmitt-style, zero-sum war for power that undermines the stability of political institutions.

On the other hand, it seems worth emphasizing that there is often a tension between respect for procedure and the ethics of liberalism. Klein points out that there was broad respect for legal procedure throughout American history that heavily undermined ethical liberalism, such as southerners who filibustered anti-lynching laws. Indeed, the justification for things such as the fugitive slave law was respect for the political independence of the legal right to property in slaves. All the examples of procedure being applied in politically biased and illiberal ways given moments ago support this point. There is nothing in the notion that legal and electoral procedures are respected that guarantees the procedures in place will respect liberal principles of justice.

I remain agnostic as to whether we should be more pessimistic about the prospects for democracy in America today than at any other point in American history. However, at the very least, this debate reveals an impossible trinity, akin to the impossible trinity in monetary policy, between these three dimensions of democracy. If you hold majority opinion as primary, that includes populist urges to undermine the rule of law. Further, enough ink has been spilled on the tensions between majoritarianism and liberalism or effective policy. If you hold respect for procedure as primary, that includes the continuation of procedures which are discriminatory and unjust, as well as procedures which restrict and undermine majority opinion. If you hold the justice of liberalism as primary, that will generate a tendency for morally virtuous liberals to want to undermine inequitable, unjust procedures and electoral outcomes, and to want to restrict the ability of majorities to undermine minority rights.

The best a conventional democrat can do, it seems to me, is to pick two. A heavily majoritarian democracy where procedures are respected, which seems to be the dominant practice in American political history, is unlikely to be very ethically liberal. An ethically liberal and highly procedural government, something like a theoretically possible but practically unfeasible liberal dictator or perhaps a technocratic epistocracy (for which Jason Brennan argues), is a possible option but might be unstable if majorities see it as illegitimate or ethically unpalatable to procedural democrats. An ethically liberal but majoritarian democracy seems unworkable, given the dangers of populism to undermine minority rights and the rational ignorance and irrationality of voters. This option also seems to be what most western democracies are currently trending towards, which rightly worries Mounk since it is also likely to be extremely unstable. But if there’s a lesson to be learned from the injustice of American history and the rise of populism in the west it’s that choosing all three is not likely to be feasible over the long term.

The Dictator’s Handbook

I recently pointed you towards a book that has turned out to be a compelling and interesting read.

At the end of the day, it’s a straightforward application of public choice theory and evolutionary thinking to questions of power. Easy-to-understand theory is bundled with data and anecdotes* to elucidate the incentives facing dictators, democrats, executives, and public administrators. The differences between them are not discrete: they all face the same basic problem of compelling others’ behavior while facing some threat of replacement.

Nobody rules alone, so staying in power means keeping the right people happy and/or afraid. All leaders are constrained by their underlings. These underlings are necessary to get anything done, but they’re also potential rivals. For Bueno de Mesquita and Smith, the crucial facts of a political order are a) how big a coalition (how many underlings) the ruler is beholden to and b) how replaceable the members of that coalition are.

The difference between liberal and illiberal orders boils down to differences in those two parameters. In democracies, with a larger coalition and less replaceable coalition members, rulers behave better.
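Here is a toy sketch of that logic. To be clear, these are my own made-up numbers and functional forms, not Bueno de Mesquita and Smith's formal model: a leader splits a budget between private rewards for coalition members and non-rival public goods, and simply picks whatever split keeps a coalition member best off.

```python
# Toy selectorate logic -- my own illustrative numbers, not the authors'
# formal model. A leader divides a budget between private rewards
# (split among coalition members) and public goods (non-rival: every
# citizen benefits from each unit spent).

BUDGET = 100.0
PUBLIC_VALUE_PER_UNIT = 0.01  # assumed per-citizen value of one unit
                              # of public-goods spending

def member_payoff(public_share, coalition_size):
    private = BUDGET * (1 - public_share) / coalition_size
    public = BUDGET * public_share * PUBLIC_VALUE_PER_UNIT
    return private + public

def best_public_share(coalition_size):
    # The leader chooses the split that maximises a member's payoff,
    # since that is what buys loyalty.
    return max((p / 100 for p in range(101)),
               key=lambda s: member_payoff(s, coalition_size))

for w in (5, 50, 500):
    s = best_public_share(w)
    print(f"coalition of {w:>3}: {s:.0%} of the budget goes to public goods")
# With a small coalition, private rewards are cheap per member, so the
# leader buys loyalty directly (kleptocracy). Once the coalition is big
# enough, private rewards are spread too thin and public goods become
# the cheaper way to keep supporters happy.
```

Replaceability enters as the second dial: the more easily a member can be swapped out, the less the leader has to pay to keep them loyal.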

 

I got a Calculus of Consent flavor from The Dictator’s Handbook. At the end of the day, collective decision making will reflect some version of “the will of the people… who matter.” But when we ask how many people matter, we run into C of C thinking. Calling for bigger coalitions is another way of calling for a move toward an effective unanimity rule (at least at the constitutional stage).

In C of C the question of the optimal voting rule (majority vs. super majority vs. unanimity) boils down to a tradeoff between the costs of organizing and the costs of externalities imposed by the ruling coalition. On the graph below (from C of C) we’re comparing organization costs (J) against externality costs (I) (the net costs of the winning coalition’s inefficient policies). The idea is that a unanimity rule would prevent tyranny of the majority (i.e. I is downward sloping), but that doesn’t mean unanimity is the optimal voting rule.

[Figure 18 from The Calculus of Consent: organization costs (J) and externality costs (I) plotted against the number of individuals required to agree.]

But instead of asking “what’s efficient?” let’s think about what we can afford out of society’s production, then ask who makes what decisions. In a loose sense, we can think of a horizontal line on the graph above representing our level of wealth. If we** aren’t wealthy enough to organize, then the elites rule and maximize rent extraction. We can’t get far up J, so whichever coalition is able to rule imposes external costs at a high level on I.

But I‘s height is a function of rent extraction. Rulers face the classic conundrum of whether to take a smaller piece of a larger pie.
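A toy version of that tradeoff is easy to write down. The functional forms below are mine, purely for illustration (Buchanan and Tullock draw the curves without committing to equations): J rises steeply as the required share of agreement approaches unanimity, I falls as fewer outsiders are left to impose costs on, and the optimal rule minimizes their sum.

```python
# Toy Buchanan-Tullock calculation -- my own functional forms, chosen
# only to mimic the shapes of the curves in Figure 18.
# n = share of the group whose agreement a decision requires.

def organization_cost(n):   # J: explodes as n approaches unanimity
    return 0.1 / (1.001 - n)

def externality_cost(n):    # I: falls as the required coalition grows
    return (1 - n) ** 2

shares = [i / 100 for i in range(101)]
optimum = min(shares, key=lambda n: organization_cost(n) + externality_cost(n))
print(f"cost-minimising rule: agreement of {optimum:.0%} of the group")
# With these shapes the minimum lands between simple majority and
# unanimity: unanimity drives I to zero but sends J through the roof.
# Lowering the "wealth line" (what we can afford to spend on organizing)
# truncates the feasible region and pushes the attainable rule toward
# smaller, more extractive coalitions.
```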

The book confirms what we already know: when one group can make decisions about what other groups can or must do, expect a negative-sum game. But by throwing in evolutionary thinking, it sheds light on why we see neither an inexorable march of progress nor universal tyranny and misery.

As you travel back in time, people (on average) tend to look more ignorant, cruel, and superstitious. The “default state” of humanity is poverty and ignorance. The key to understanding economics is realizing that we’ve bootstrapped ourselves out of that position and we aren’t done yet.

The Dictator’s Handbook helped me realize that I’d been forgetting that the “default state” of political power is rule by force. The liberalization we’ve seen over the last 500 years has been just the first part of a bootstrapping process.

Understanding the starting point makes it clear that more inclusive systems use ideas, institutions, capital, and technology to abstract upward to more complex levels. Something like martial honor scales up the exercise of power from the tribe (who can The Chief beat up) to the fiefdom (now the Chief has sub-chiefs). Ideology and identity can tie fiefdoms into nation-states (now we’ve got a king and nobility). Wealth plus new ideologies create more inclusive and democratic political orders (now we’ve got a president and political parties). But each stage is built on the foundation set before. We stand on the shoulders of giants, but those giants were propped up by the non-giants around them.

Our world was built by backwards savages. The good news is that we can use the flimsier parts of the social structure we inherited as scaffolding for something better (while maintaining the really good stuff). What exactly this means is the tricky question. Which rules, traditions, organizations, and processes are worth keeping? How do we maintain those? How/when do we replace the rest? And what does “we” even mean?

Changing the world involves uncertainty. There are complex interrelations between every part of reality. And the knowledge society needs is scattered through many different minds. To make society better, we need buy-in from our neighbors (nobody rules alone). And we need to realize that the force we exert will be countered not by an equal and opposite force, but by some plural, imperfectly identifiable, maybe-but-probably-not-equal, and only-mostly-opposite forces. There are complex and constantly shifting balances between different coalitions vying for power and the non-coalitions that might suddenly spring into action if conditions are right. Understanding the forces at play helps us see the constraints on political change.

And there’s good news: it is possible to create a ruling coalition that is more inclusive. The conditions have to be right. But at least some of those conditions are malleable. If we can sell people on the right ideas, we can push the world in the right direction. But we have to work at it, because there are plenty of people pitching ideas that will concentrate power and create illiberal outcomes.


*I listened to the audiobook, so I’m basically unable to vouch for the data analysis. Everything they said matched the arguments they were making, but without seeing it laid out on the page I couldn’t tell you whether what they left out was reasonable.

**Whatever that means…

Adam Smith: a historical detective?


Adrian Blau at King’s College London has an on-going project of making methods in political theory more useful, transparent and instructive, especially for students interested in historical scholarship.

I found his methods lecture, which he gave to Master’s students and went on to publish as ‘History of political thought as detective work’, particularly helpful for formulating my approach to political theory. The advantage of Blau’s advice is that it avoids pairing technique with theory. You can be a Marxist, a Straussian, a contextualist, anything or nothing, and still apply Blau’s technique.

Blau suggests that we adopt the persona of a detective when trying to understand the meaning of historical texts. That is, we should acknowledge

  • uncertainty associated with our claims
  • that facts of the matter will almost certainly be under-determined by the available evidence
  • that conflicting evidence probably exists for any interesting question
  • that interpreting any piece of evidence through any exclusive theoretical lens is likely to lead us to error

To make more compelling inferences in the face of these challenges, we can use techniques of triangulation (using independent sources of evidence together). This could include arguing for an interpretation of a thinker’s argument based on a close reading of their text, while showing that other people in the thinker’s social milieu deployed language in a similar way (contextual), and also showing how helpful that argument was for achieving a political end that was salient in that time and place (motivation).


What makes robust political economy different?


I encountered what would later become important elements of Mark Pennington’s book Robust Political Economy in two articles that he wrote on the limits of deliberative democracy, and on the relative merits of market processes for social and ethical discovery, as well as in a short book Mark wrote with John Meadowcroft, Rescuing Social Capital from Social Democracy. This research program inspired me to start my doctorate and pursue an academic career. Why did I find robust political economy so compelling? I think it is because it chimed with my experience of encountering the limits of neo-classical formal models, which I recount in my chapter, ‘Why be robust?’, in a new book, Interdisciplinary Studies of the Market Order.

While doing my master’s degree in 2009, I took a methodology course in rational choice theory at Nuffield College’s Centre for Experimental Social Science. As part of our first class, we were taken to a brand-new, gleaming behavioural economics laboratory to play a repeated prisoners’ dilemma game. The system randomly paired anonymous members of the class to play against each other. We were told the objective of the game was to maximise our individual scores.

Thinking that there were clear gains to make from co-operation and plenty of opportunities to punish a defector over the course of repeated interactions, I attempted to co-operate on the first round. My partner defected. I defected a couple of times subsequently to show I was not a sucker. Then I tried co-operating once more. My partner defected every single time in the repeated series.

At the end of the game, we were de-anonymised and it turned out, unsurprisingly, that I had the lowest score in the class. My partner had the second lowest. I asked her why she engaged in an evidently sub-optimal strategy. She explained: ‘I didn’t think we were playing to get the most points. I was just trying to beat you!’

The lesson I took away from this was not that formal models were wrong. Game theoretic models, like the prisoners’ dilemma, are compelling and productive analytical tools in social science, clarifying the core of many challenges to collective action. The prisoners’ dilemma illustrates how given certain situations, or rules of the game, self-interested agents will be stymied from reaching optimal or mutually beneficial outcomes. But this experience suggested something more complex and embedded was going on even in relatively simple social interactions.

The laboratory situation replicated the formal prisoners’ dilemma model as closely as possible with explicit rules, quantified ‘objective’ (though admittedly, in this case, low-value) payoffs, and a situation designed to isolate players as if they were prisoners in different cells. Yet even in these carefully controlled circumstances, it turns out that the situation is subject to multiple interpretations and understandings.

Whatever the textual explanation accompanying the game, the score on the screen could mean something different to the various players. The payoffs for the representative agents in the game were not the same as the payoffs in the minds of the human players. In a sense, my partner and I were unwittingly playing different games (although I lost under either set of rules!).
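The point is easy to see if you write the two objectives down side by side. Here is a sketch (using the textbook payoff numbers, not the lab's actual values): score each cell of the one-shot game once by absolute points and once by the point difference my partner was apparently maximising.

```python
# The "same" game under two objectives -- textbook payoff numbers,
# not the values from the Nuffield lab.
PAYOFFS = {  # (my move, their move) -> (my points, their points)
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def absolute(mine, theirs):  # the objective we were told: maximise points
    return mine

def relative(mine, theirs):  # my partner's objective: beat the opponent
    return mine - theirs

for objective in (absolute, relative):
    print(f"objective: {objective.__name__}")
    for me in ("C", "D"):
        for them in ("C", "D"):
            mine, theirs = PAYOFFS[(me, them)]
            print(f"  I play {me}, they play {them}: {objective(mine, theirs)}")
# Under the relative objective, defection strictly dominates (5 > 0
# against a cooperator, 0 > -5 against a defector) and cooperation can
# never produce a positive margin. "Always defect" was the rational
# strategy for the game she was actually playing -- even repeated.
```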

When we engage with the social world, it is not only the case that our interests may not align with other people’s. Social interaction is open-ended. We do not know all the possible moves in the game, and we do not know much about the preference sets of everyone else who is playing. Indeed, neither they nor we know what a ‘complete’ set of preferences and payoffs would look like, even our own. We can map out a few options and likely outcomes through reflection and experience, but even then we may face outcomes we do not anticipate. As Peter Boettke explains: ‘we strive not only to pursue our ends with a judicious selection of the means, but also to discover what ends that we hope to pursue.’

In addition, the rules of the game themselves are not merely exogenous impositions on us as agents. They are constituted inter-subjectively by the practices, beliefs and values of the actors that are also participants in the social game. As agents, we do not merely participate in the social world. We also engage in its creation through personal lifestyle experimentation, cultural innovation, and establishing shared rules and structures. The social world thus presents inherent uncertainty and change that cannot be captured in a formal model that assumes fixed rules of the game and the given knowledge of the players.

It is these two ideas, both borrowed from the Austrian notion of catallaxy, that makes robust political economy distinct. First, neither our individual ends, nor means of attaining them, are given prior to participation in a collective process of trial and error. Second, the rules that structure how we interact are themselves not given but subject to a spontaneous, evolutionary process of trial and error.

I try to set out these ideas in a recent symposium in Critical Review on Mark Pennington’s book, and in ‘Why be robust?’ in Interdisciplinary Studies of the Market Order, edited by Peter Boettke, Chris Coyne and Virgil Storr. The symposium article is available on open access, and a working paper version of my chapter is available at the Classical Liberal Institute website.

Foucault’s biopolitics seems like it’s just a subtle form of nationalism

I’ve been slowly making my way through Michel Foucault’s The Birth of Biopolitics, largely on the strength of Barry’s recommendation (see also this fiery debate between Barry and Jacques), and a couple of things have already stood out to me. 1) Foucault, lecturing in 1978-79, is about 20 years behind Hayek’s 1960 book The Constitution of Liberty in terms of formulating interesting, relevant political theory, and roughly 35 years behind Hayek’s The Road to Serfdom (1944) in terms of expressing doubts over the expanding role of the state in the lives of citizens.

2) The whole series of lectures seems like a clever plea for French nationalism. Foucault is very ardent about identifying “neo-liberalism” in two different models, a German one and an American one, and continually makes references to the importation, or lack thereof, of these models into other societies.

Maybe I’m just reading too deeply into his words.

Or maybe Foucault isn’t trying to make a clever case for French nationalism, and is instead trying to undercut the case for a more liberal world order but – because nothing else has worked as well as liberalism, or even come close – he cannot help but rely upon nationalist sentiments to make his anti-liberal case and he just doesn’t realize what he’s doing.

These two thoughts are just my raw reactions to what is an excellent book if you’re into political theory and Cold War scholarship. I’ll be blogging my thoughts on the book in the coming weeks, so stay tuned!