Is persecution the purpose?


Last week, Rebecca Tuvel, an Assistant Professor of Philosophy, had her recent article in Hypatia, ‘In Defence of Transracialism’, denounced in an open letter signed by several professional scholars (among others). They accused her of harming the transgender community by comparing them with the currently more marginalized identity of transracialism. William Rein, on this blog, and Jason Brennan at Bleeding Heart Libertarians, have written valuable defences of Tuvel’s right to conduct academic research in this area even if some find it offensive.

Events have moved fast. The associate editors initially seemed to cave in to pressure and denounced the article they had only just published. The main editor, Sally Scholz, has since disagreed with the associate editors. Critically, Tuvel’s colleagues at Rhodes College have given her their support, so it looks like the line for academic freedom might be holding in this case. Without wishing to engage too much in the hermeneutics of suspicion, I think there are grounds to doubt the depth of the critics’ attitudes. I base this on my reading of Judith Butler, who is one of the signatories to the open letter arguing for Tuvel’s article to be retracted.

Seven years ago, I managed to read Judith Butler’s Gender Trouble. Although there are many variations in the movement, Butler is a central figure in the post-structuralist, non-gender-essentialist feminism that inspires much of the contemporary ‘social justice’ movement. When I got past Butler’s famously difficult prose, I found a great many ideas I agreed with. I wrote up a brief piece comparing Butler’s concerns about violently enforced gender conformity with classical liberal approaches to personal autonomy. I also identified some problems.

First, Butler’s critique of the natural sciences seems to completely miss the mark. Butler associates gender essentialism with the study of genetics, when, in fact, genetics has done more than almost anything else to explore the contingency and variation of biological sexual expression in nature. The same applies to race and ethnicity.

Second, and more importantly, Butler insists that there is no underlying authentic gender or sexual identity. All identities are ultimately constituted by power relations and juridical discourses. You find this argument repeated among social justice proponents who insist that all forms of identity are products of ‘social construction’ rather than ever being based on natural facts. As a result, all personal identity claims are only ever historical and strategic. They are attempts to disrupt power relations in order to liberate and empower the subaltern and oppressed, albeit temporarily.

I don’t think this is entirely true as a matter of fact, but let’s accept it for now as roughly true. This means that transracialism itself might become, or could already be, another example of the strategic disruption of contemporary juridical discourses, this time about race and ethnicity. The same people currently denouncing Tuvel could very easily insist on the acknowledgement of transracial identity in five or ten years’ time, and denounce those who hold their current views. From their own position, which explicitly rejects any ultimate restrictions on identity formation, we have no warrant to know otherwise.

In this sense, Tuvel might not be ‘wrong’ at all, just slightly ahead of the social justice curve. And her critics wouldn’t actually be changing their minds, just changing their strategies. Meanwhile, people who actually take their identities seriously should be wary of their academic ‘allies’. They can quickly re-orientate their attitude such that a previously oppressed identity comes to be re-configured as an oppressive and exclusionary construct.

If all claims in this area are strategic rather than factual, as Butler claims, then why try to damage a philosopher’s career over it? Why provoke an academic journal almost to self-destruct? Rather than working out which ideas to denounce, we should critique the strategy of denouncement (or calling out) itself. In that vein, much as I disagree wholly with the stance of its editorial board, I think calling into question Hypatia’s status as an academic journal is premature.

The Protestant Reformation and freedom of conscience

This year we celebrate 500 years of the Protestant Reformation. On October 31, 1517, the then Augustinian monk, priest, and teacher Martin Luther nailed to the door of a church in Wittenberg, Germany, a document with 95 theses on salvation – that is, basically the way people are led by the Christian God to Heaven. Luther was scandalized by the sale of indulgences by the Roman Catholic Church, believing that this practice did not correspond to biblical teaching. Luther understood that salvation was given by faith alone. The Catholic Church understood that salvation was a combination of faith and works.

The practice of nailing a document to the door of the church was not uncommon, and Luther’s intention was to hold an academic debate on the subject. However, Luther’s ideas found many sympathizers, and a widespread Protestant movement within the Roman Catholic Church was quickly initiated. Over the years, other leaders such as Ulrich Zwingli and John Calvin joined Luther. However, the main leaders of the Roman Catholic Church did not agree with the Reformers’ point of view, and so the Christian church in the West was divided into several groups: Lutherans, Anglicans, Reformed, Anabaptists, later followed by Methodists, Pentecostals, and many others. In short, the Christian church in the West has never been the same.

The Protestant Reformation was obviously a movement of great importance in world religious history. I also believe that few would disagree with its importance in the broader context of history, especially Western history. To mention just one example, Max Weber’s thesis that Protestantism (especially Calvinism, and more precisely Puritanism) was a key factor in the development of what he called modern capitalism is widely accepted, or at least enthusiastically debated. But I would like to briefly address here another impact of the Protestant Reformation on world history: the development of freedom of conscience.

Simply put, but I believe without oversimplifying: after the fall of the Roman Empire and until the 16th century, Europe knew only one religion – Christianity – in only one variety – Roman Catholic Christianity. It is true that much of the paganism of the barbarians survived through the centuries, that Muslims occupied parts of Europe (mainly the Iberian Peninsula), and that other varieties of Christianity were practiced in parts of Europe (mainly Russia and Greece). But beyond that, the history of Christianity was a tale of an ever-increasing concentration of political and ecclesiastical power in Rome, as well as an ever-deepening entanglement of priests, bishops, kings, and nobles. In short, Rome became increasingly central, and the distinction between church and state became increasingly difficult to observe in practice. One of the legacies of the Protestant Reformation was precisely the debate about the relationship between church and state. With a multiplicity of churches and strengthening nationalisms, the model of a unified Christendom was never possible again.

Of course, this loss of unity in Christendom can cause melancholy and nostalgia among some, especially Roman Catholics. But one of its gains was the growth of the individual’s space in the world. This was not a sudden process, but slowly and surely it became clear that religious convictions could no longer be imposed on individuals. Especially in England, where the Anglican Church stood midway between Rome and Wittenberg (or Rome and Geneva), many groups emerged on the margins of the state church: Presbyterians, Baptists, Congregationalists, Quakers, and so on. These groups accepted the cost of being treated as second-class citizens in order to maintain their personal convictions. Something similar can be said of Roman Catholics in England, who began to live on the fringes of society. The new relationship between church and state in England was a point of discussion for many of the most important political philosophers of modernity: Thomas Hobbes, John Locke, Edmund Burke, and others. To disregard this aspect is to lose sight of one of the most important points of the debate in which these thinkers were involved.

The Westminster Confession of Faith, one of the most important documents produced in the period of the Protestant Reformation, has a chapter entitled “Of Christian Liberty, and Liberty of Conscience.” Of course, there are issues in this chapter that may sound very strange to those who are not Christians or who are not involved in Christian churches. However, one point is immediately understandable to all: being a Christian is a matter of inner conviction. No one can be compelled to be a Christian. At best, such compulsion would produce only external adherence; inner adherence could never be satisfactorily verified.

Some time after the classical Reformation period, a new religious renewal movement occurred in England with the birth of Methodism. Its main leaders, John Wesley and George Whitefield, disagreed about salvation in a way not so different from what had previously occurred between Luther and the Roman Catholic Church. This time, however, there was no excommunication, inquisition, or war. Wesley simply told Whitefield, “Let’s agree to disagree.”

Agreeing to disagree is one of the great legacies of the Protestant Reformation. May we always try to convince each other by force of argument, not by force of arms. And may each one have the right to decide for themselves, with freedom of conscience, which seems the best way forward.

Auftragstaktik: Decentralization in military command

Many 20th-century theorists who advocated central planning and control (from Gaetano Mosca to Carl Landauer, and hearkening back to Plato’s Republic) drew a direct analogy between economic control and military command, envisioning a perfectly functioning state in which citizens mimic the hard work and obedience of soldiers. This analogy did not remain theoretical: the regimes of Mussolini, Hitler, and Lenin all attempted to model economies along military principles. [Note: this is related to William James’ persuasion tactic of “The Moral Equivalent of War,” which many leaders have since used to garner public support for government intervention in economic crises, from the Great Depression to the energy crisis to the 2012 State of the Union – though one matches the organizing methods of war to central planning and the other matches the moral commitment of war to intervention. But I digress.] The underlying argument for “central economic planning along military principles” was that the actions of citizens would be more efficient and harmonious under the direction of a scientific, educated hierarchy with highly centralized decision-making than if they were allowed to do whatever they wanted. Wouldn’t an army, these theorists argued, completely fall apart and be unable to function coherently if it did not have rigid hierarchies, discipline, and central decision-making? Do we want our economy to be the peacetime equivalent of an undisciplined throng (I’m looking at you, Zulus at Rorke’s Drift) while our enemies gain organizational superiority (as the Brits had at Rorke’s Drift)? While economists would probably point out the many problems with the analogy (the different goals of the two systems, the principled benefits of individual liberty, etc.), I would like to put these valid concerns aside for a moment and take the question at face value. Do military principles support the idea that individual decision-making is inferior to central control?
Historical evidence from Alexander the Great to the US Marine Corps suggests a major counter to this assertion, in the form of Auftragstaktik.


Auftragstaktik was developed as a military doctrine by the Prussians following their losses to Napoleon, when they realized they needed a systematic way to overcome brilliant commanders. The idea that developed, the brainchild of Helmuth von Moltke, was that the traditional use of strict military hierarchy and central strategic control might not be as effective as giving well-trained officers at the front only the general, mission-based strategic goals that truly necessitated central involvement; those officers would then have the flexibility and independence to make tactical decisions without consulting central commanders (or paperwork). Auftragstaktik largely lay dormant during World War I, but it burst onto the scene as the method of command that allowed (along with the integration of infantry with tanks and other military technology) the swift success of the German blitzkrieg in World War II. This showed a stark difference in outcome between German and Allied command strategies, with the French expecting a defensive war and the Brits adhering faithfully and destructively to the centralized model. The Americans, when they saw that most bold tactical maneuvers happened without or even against orders, and that commanders other than Patton generally met with slow progress, adopted the Auftragstaktik model. [Notably, this also gave the Germans greater adaptiveness when their generals died–should I make a bad analogy to Schumpeter’s creative destruction?] These methods may not even seem foreign to modern soldiers or veterans, as the doctrine is still actively promoted by the US Marine Corps.

All of this is well known to modern military historians and leaders: John Nelson makes a strong case for its ongoing utility, and the suggestion has also been made that its principles of decentralization, adaptability, independence, and lack of paperwork would probably be useful in reforming non-military bureaucracy. It has already been used and advocated in business, and its allowance for creativity, innovation, and reactiveness to ongoing complications gives new companies an advantage over ossified and bureaucratic ones (I am reminded of the last chapter of Parkinson’s Law, which roughly states that once an organization has purpose-built rather than adapted buildings, it has become useless). However, I want to throw in my two cents by examining pre-Prussian applications of Auftragstaktik, in part to show that the advantages of decentralization are not limited to certain contexts, and in part because they give valuable insight into the impact of social structures on military ability and vice versa.

Historical Examples

Alexander the Great: Alexander was not just given exemplary training by his father; he also inherited an impressive military machine. The Macedonians had been honed by the conquest of neighboring Illyria, Thrace, and Paeonia, and by the addition of Thessalian cavalry and Greek allies in the Sacred Wars. However, as a UNC ancient historian found, the most notable innovations of the Macedonians were their new siege technologies (which allowed a swifter war–one could say, a blitzkrieg–compared to earlier invasions of Persia) and their officer corps. This officer corps, made up of the king’s “companions,” was well trained in combined-arms hoplite and cavalry maneuvers, and during multiple portions of his campaign (especially in Anatolia and Bactria) its members operated as leaders of independent units that could cover a great deal more territory than one army. In set battles, the Macedonians showed a high degree of maneuverability, with oblique advances, effective use of reserves, and well-timed cavalry strikes into gaps in enemy formations, all of which depended on the delegation of tactical decision-making. This contrasted with the Persians, who followed standards into battle without organized ranks and files, and the Greek hoplites, whose phalanx depended mostly on cohesion and group action and therefore lacked flexibility. [Also, fun fact, the Macedonians had the only army in recorded history in which bodies of troops were identified systematically by the name of their leader. This promoted camaraderie and likely indicates that, long-term, the soldiers became used to the tactical independence and decision-making of that individual. Imagine dozens of Rogers’ Rangers.]

The Roman legion: As with any great empire, the Macedonians spread their military innovations, but then ossified in technique over the next 150 years. When the Romans first faced a major Hellenistic general, Pyrrhus, they had already developed the principles of the system that would defeat the Macedonian army: the legion. In the early Roman legion, two centuries were combined into a maniple, and maniples were grouped into cohorts, allowing for detachment and independent command of differing group sizes. Crucially, centurions maintained discipline and the flexible but coordinated Roman formations, and military tribunes were given tactical control of groups both during and between battles. The flexibility of the Roman maniples was shown at the Battle of Cynoscephalae, in which the Macedonian phalanx–which had frontal superiority through its use of the sarissa and its cohesion, but little maneuverability–became disorganized on rough ground and was cut to pieces on one flank by the more mobile and individually capable Roman legionaries. This (as many battles in the Macedonian and Syrian Wars also proved) showed the value of flexibility and individual action in a disciplined force. But where was the Auftragstaktik? At Cynoscephalae, after defeating one flank, the Romans on that flank dispersed to loot the Macedonian camp. In antiquity, this generally resulted in those troops becoming ineffective as a fighting force, and many a battle was lost because of pre-emptive looting. However, in this case, an unnamed tribune–to whom the duty of tactical decisions had been delegated–reorganized these looters and brought them to attack the rear of the other Macedonian flank, which had been winning. This resulted in a crushing victory and contributed to the Roman conquest of Greece.
Decentralized control was also a hallmark of Julius Caesar himself, who frequently sent several cohorts on independent campaigns in Gaul under subordinates such as Titus Labienus, allowing him to conquer the much more numerous Gauls through local superiority, lack of Gallic unity, and organization. Also, at the climactic Battle of Alesia, Caesar used small, mobile reserve units with a great deal of tactical independence to hold over 20 km of wooden walls against a huge besieging force.

The Vikings: I do not mean to generalize about Vikings (who could be of many nations–the term just means “raider”), since they did not have a united culture; but in their very diversity of method and origin, they demonstrate the effectiveness of individualism and decentralization. Despite being organized mostly in ship-crews led by jarls, with central leadership only when won by force or chosen by necessity, Scandinavian longboatmen and warriors exerted their power from Svalbard to Constantinople to Sicily to Iceland and North America from the 8th to 12th centuries. The social organization of Scandinavia may have been the most free in recorded history (in terms of the individual will to do whatever one wants–including, unfortunately, slaughter, but also including some surprisingly progressive rights for women to make their own decisions), and this was on display in the famous invasion of the Great Heathen Army. With as few as 3,500 farmer-raiders and 100 longboats to start, the legendary sons of Ragnar Lothbrok and the Danish invaders, with jarls as the major decision-makers in both strategic and tactical matters for their crews, won a series of impressive battles over 20 years (described in fascinating, if historical-fiction, detail in the wonderful book series and now TV series The Last Kingdom), almost never matching the number of combatants of their opponents, and took over half of England. The terror and military might associated with the Vikings in the memories of Western historians is a product of the completely decentralized, nearly anarchic methods of Scandinavian raiders.

The Mongols: You should be sensing a trend here: cultures that fostered lifelong training and discipline (and expertise in siege engineering, which seems to have correlated with the tactics I describe, as the Macedonians, Romans, and Mongols were each the most advanced siege engineers of their respective eras) tended to place more trust in well-trained subordinates. This brought them great military success and also makes them excellent examples of proto-Auftragstaktik. The Mongols not only had similar mission-oriented commands and tactical independence, but two other aspects of their military also made them highly effective over an enormous territory: their favored style of horse-archer skirmishing gave them natural flexibility, and their clan organization allowed for many independently operating forces stretching from Poland to Egypt to Manchuria. The Mongols, like the Romans, demonstrate how a force can have training and discipline without sacrificing the advantages of tactical independence–discipline and centralization should never be conflated!

The Americans in the French and Indian War and the Revolutionary War: Though this is certainly a more limited example, several units performed far better than others among the Continentals. The aforementioned Rogers’ Rangers operated as a semi-autonomous attachment to regular forces during the French and Indian War, and were known for their mobility, individual experience and ability, and tactical independence in long-range, mission-oriented reconnaissance and ambushes. This use of savvy, experienced woodsmen in a semi-autonomous role was so effective that the ranger corps was expanded, and similar tactical independence, decentralized command, and maneuverability were championed by the Green Mountain Boys, the heroes of Ticonderoga. Morgan’s Rifles used similar experience and semi-autonomous flexibility to help win the crucial battles of Saratoga and Cowpens, which allowed the nascent Continental resistance to survive and thrive in the North outside of coastal cities and to capture much of the South, respectively. The forces of Francis Marion also used proto-guerrilla tactics with decentralized command and outperformed the regulars of Horatio Gates. Given the string of unsuccessful set-piece battles fought by General Washington and his more conventional subordinates, the Continentals depended on irregulars and unconventional warfare to survive and gain victories outside of major ports. These victories (especially Saratoga and Cowpens) cut off the British from the interior and forced them into stationary posts in a few cities–notably Yorktown–where Washington and the French could besiege them into submission. This may be comparable to the Spanish and Portuguese in the Peninsular War, but I know less about their organization, so I will leave the connection between Auftragstaktik and early guerrilla warfare to a better-informed commenter.

These examples hopefully bolster the empirical support for the idea that military success has often been based, at least in part, on radically decentralizing tactical control and trusting individual front-line commanders to make mission-oriented decisions more effectively than a bureaucracy could. There are certainly many more (feel free to suggest examples in the comments), but these are my favorites and probably the most influential. This evidence should encourage a healthy skepticism toward arguments for central control that rest on the supposed efficiency or effectiveness of military central planning. Given the development of new military technologies and methods of campaign (especially guerrilla and “lone wolf” attacks, which show a great deal of decentralized decision-making) and the increasing tendency since 2008 to revert toward ideas of central economic planning, we are likely to get a lot of new evidence about both sides of this fascinating analogy.

Where is the line between sympathy and paternalism?

In higher-ed news two types of terrifying stories come up pretty frequently: free speech stories, and Title IX stories. You’d think these stories would only be relevant to academics and students, but they’re not. These issues are certainly very important for those of us who hang out in ivory towers. But those towers shape the debate–and unquestioned assumptions–that determine real world policy in board rooms and capitols. This is especially true in a world where a bachelor’s degree is the new GED.

The free speech stories have gotten boring because they all take the following form: group A doesn’t want to let group B talk about opinion b so they act like a bunch of jackasses. Usually this takes place at a school for rich kids. Usually those kids are majoring in something that will give them no marketable skills.

The Title IX stories are Kafkaesque tales where a well-intentioned policy (create a system to protect people in colleges from sexism and sexual aggression) turns into a kangaroo court that allows terrible people to ruin other people’s lives. (I hasten to add, I’m sure Title IX offices do plenty of legitimately great work.)

A great article in the Chronicle gives an inside look at one of these tribunals. For the most part it’s chilling. Peter Ludlow had been accused of sexual assault, but the claims weren’t terribly credible. As far as I can tell (based only on this article) he did some things that should raise some eyebrows, but nothing genuinely against any rules. Nonetheless, the accusations were a potential PR and liability problem for the school so he had to go, regardless of justice.

The glimmer of hope comes with the testimony of Jessica Wilson. She managed to shake them out of their foregone conclusion and got them to consider that women above the age of consent can be active participants in their own lives instead of victims waiting to happen. Yes, bad things happen to women, but that’s not enough to jump to the conclusion that all women are victims and all men are aggressors.

The big question at the root of these types of stories is how much responsibility we ought to take for our lives.

Free speech: Should I be held responsible for saying insensitive (or unpatriotic) things? Who would enforce that obligation? Should I be held responsible for dealing with the insensitive things other people might say? Or should I even be allowed to hear what other people might say, since I can’t take responsibility for evaluating it “critically” and coming to the right conclusion?

Title IX: Should women be responsible for their own protection, or is that akin to blaming the victim? We’ve gone from trying to create an environment where everyone can contribute to taking away agency. In doing so, we’ve also created a powerful mechanism that can be abused. This is bad because of the harm it does to the falsely accused, but it also has the potential to delegitimize the claims of genuine victims and to fracture society. But our forebears weren’t exactly saints when it came to treating each other justly.

Where is the line between helping a group and infantilizing them?

At either end of a spectrum I imagine caricature versions of a teenage libertarian (“your problems are your own, suck it up while I shout dumb things at you”) and a social justice warrior (“it’s everyone else’s fault! Let’s occupy!”). Let’s call those end points Atomistic Responsibility and Social Responsibility. More sarcastically, we could call them Robot and Common Pool Responsibility. Nobody is actually at these extreme ends (I hope), but some people get close.

Either one seems ridiculous to anyone who doesn’t already subscribe to that view, but both have a kernel of truth. Fair or not, you have to take responsibility for your life. But we’re all indelibly shaped by our environment.

Schools have historically adopted a policy towards the atomistic end, but have been trending in the other direction. I don’t think this is universally bad, but I think those values cannot properly coexist within a single organization.

We can imagine some hypothetical proper point on the Responsibility Spectrum, but without a way to objectively measure virtue, the position of that point–the line between sympathy and paternalism–is an open question. We need debate to better position and re-position that line. I would argue that Western societies have been doing a pretty good job of moving that line in the right direction over the last 100 years (although I disagree with many of the ways our predecessors have chosen to enforce that line).

But here’s the thing: we can’t move in the right direction without getting real-time feedback from our environments. Without variation in the data, we can’t draw any conclusions. What we need more than a proper split of responsibility, is a range of possibilities being constantly tinkered with and explored.

We need a diversity of approaches. This is why freedom of speech and freedom of association are so essential. In order to get this diversity, we need federalism and polycentricity–stop trying to impose order from the top down on a grand scale (“think globally, act locally”), and let order be created from the bottom up. Let our organizations–businesses, churches, civic associations, local governments and special districts–adapt to their circumstances and the wishes of their stakeholders.

Benefiting from this diversity requires open minds and epistemic humility. We stand on the shore of a vast mysterious ocean. We’ve waded a short distance into the water and learned a lot, but there’s infinitely more to learn!

(Sidenote: Looking for that Carl Sagan quote, I came across this gem:

People are not stupid. They believe things for reasons. The last way for skeptics to get the attention of bright, curious, intelligent people is to belittle or condescend or to show arrogance toward their beliefs.

That about sums up my approach to discussing these sorts of issues. We’d all do better to occasionally give our opponents the benefit of the doubt and see what we can learn from them. Being a purist is a great way to structure your thought, but empathy for our opponents is how we make our theories strong.)

Does business success make a good statesman?

Gary Becker made the distinction between two types of on-the-job training: general and specific. The former consists of skills with wide applicability, which enable the worker to perform different kinds of jobs satisfactorily: to keep one’s commitments, to arrive at work on time, to avoid disruptive behavior, etc. All of these are moral traits that raise the productivity of the worker whatever his occupation may be. Specific on-the-job training, on the other hand, concerns only the peculiarities of a given job: to know how many spoons of sugar your boss likes in his coffee, or which of your employees is best qualified to deal with the public. The knowledge provided by on-the-job training is incorporated into the worker; it travels with him when he moves from one company to another. Therefore, while general on-the-job training increases the worker’s productivity in every other job he gets, he profits little from the specific kind.

Of course, whether on-the-job training is general or specific is relative to each profession and industry. For example, a psychiatrist who works for a general hospital gets specific training in the concrete dynamics of its internal organization. If he later moves to a position in another hospital, his experience dealing with the internal politics of such institutions will count as general on-the-job training. If he instead goes freelance, that experience will be of little use to his career. Nevertheless, even if the said psychiatrist switches from working for a big general hospital to working on his own, he will carry with him valuable general on-the-job training: how to look after his patients, how to deal with their relatives, etc.

So, to what extent will the on-the-job training gained by a successful businessman enable him to be a good statesman? To the same degree that it would enable a successful lawyer, a successful sportsman, or a successful writer. Every successful person carries with him a set of personal traits that are very useful in almost every field of human endeavor: self-confidence, work ethic, constancy, and so on. If you lack any of them, you could hardly be a good politician, just as you could rarely achieve anything in any other field. But these qualities are the typical examples of general on-the-job training, and what we are asking here is whether the specific on-the-job training of a successful businessman gives him a relative advantage as a politician, or at least a better chance of being a good one.

The problem is that there is no such thing as an a priori successful businessman. We can state that a doctor, an engineer, or a biologist needs certain qualifications to be a competent professional. But the performance of a businessman depends on a multiplicity of variables that prevents us from elucidating which traits would lead him to success.

Medicine, physics, and biology deal with “simple phenomena”. The limits to knowledge in such disciplines are relative to the state of investigation in those fields (see F. A. Hayek, “The Theory of Complex Phenomena”). The more those professionals study and the more they work, the better trained they will be.

On the other hand, the law and the market economy are cases of “complex phenomena” (see F. A. Hayek, Law, Legislation and Liberty). Since the limits to our knowledge of such phenomena are absolute, a discovery process of trial and error applied to concrete cases is the only way to weather such uncertainty. The judge states the solution the law provides to a concrete controversy, but the lawmaker can state what the law says only in general and abstract terms. In the same sense, the personal strategy of a businessman is successful only under certain circumstances.

So, how does the market economy survive its own complexity? The market does not need wise businessmen, but lots of purposeful ones, eager to thrive by following their stubborn vision of the business. Most of them will be wrong in their perception of the market and will subsequently fail. A few others will prosper, since their plans meet, perhaps by chance, the changing demands of the market. Thus, the personal traits that led a successful businessman to prosperity were not universal, but merely the right ones for the specific time at which he carried out his plans.

Having said that, would a purposeful and stubborn politician be a good choice for government? After all, Niccolò Machiavelli pointed out that initiative was the main virtue of the prince. A good statesman, then, would be the one who successfully handles the changing opportunities of life and politics. Notwithstanding, The Prince was, as Quentin Skinner showed, a parody: opportunistic behaviour is no good for the accomplishment of public duties and the protection of civil liberties.

Nevertheless, there is still a convincing argument for the businessman as a prospective statesman. If he has to deal with the system of checks and balances (the Congress and the courts), the law will act as the selection process of the market does. Every time a decision based on expediency collides with fundamental liberties, the latter must withstand the former: a sort of natural selection of political decisions.

Quite obvious, but not so trite. For a stubborn and purposeful politician not to become a menace to individual and public liberties, his initiative must not venture into constitutional design. No bypasses, no exceptions, not even reforms of the legal restraints on public authority must be allowed, even in the name of emergency, especially since most emergencies are themselves brought about by measures based on expediency.

What makes robust political economy different?


I encountered what would later become important elements of Mark Pennington’s book Robust Political Economy in two articles that he wrote on the limits of deliberative democracy and the relative merits of market processes for social and ethical discovery, as well as in a short book Mark wrote with John Meadowcroft, Rescuing Social Capital from Social Democracy. This research program inspired me to start my doctorate and pursue an academic career. Why did I find robust political economy so compelling? I think it is because it chimed with my experience of encountering the limits of neo-classical formal models, which I recount in my chapter, ‘Why be robust?’, of a new book, Interdisciplinary Studies of the Market Order.

While doing my master’s degree in 2009, I took a methodology course in rational choice theory at Nuffield College’s Center for Experimental Social Science. As part of our first class we were taken to a brand new, gleaming behavioural economics laboratory to play a repeated prisoners’ dilemma game. The system randomly paired anonymous members of the class to play against each other. We were told the objective of the game was to maximise our individual scores.

Thinking that there were clear gains to make from co-operation and plenty of opportunities to punish a defector over the course of repeated interactions, I attempted to co-operate on the first round. My partner defected. I defected a couple of times subsequently to show I was not a sucker. Then I tried co-operating once more. My partner defected every single time in the repeated series.

At the end of the game, we were de-anonymised and it turned out, unsurprisingly, that I had the lowest score in the class. My partner had the second lowest. I asked her why she engaged in an evidently sub-optimal strategy. She explained: ‘I didn’t think we were playing to get the most points. I was just trying to beat you!’

The lesson I took away from this was not that formal models were wrong. Game theoretic models, like the prisoners’ dilemma, are compelling and productive analytical tools in social science, clarifying the core of many challenges to collective action. The prisoners’ dilemma illustrates how given certain situations, or rules of the game, self-interested agents will be stymied from reaching optimal or mutually beneficial outcomes. But this experience suggested something more complex and embedded was going on even in relatively simple social interactions.
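The core of the dilemma is easy to sketch in code. Below is a minimal, self-contained Python simulation of a repeated game like the one described above; the payoff numbers are the standard textbook values, and the forgiving strategy is my own rough stand-in for the behaviour in the anecdote, not the lab’s actual setup:

```python
# Illustrative repeated prisoners' dilemma. Payoffs are the standard
# textbook values (assumed for illustration): temptation 5, reward 3,
# punishment 1, sucker 0.

PAYOFFS = {  # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def play(strategy_a, strategy_b, rounds=20):
    """Play a repeated game; each strategy sees the opponent's past moves."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)
        move_b = strategy_b(history_a)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

def always_defect(opponent_history):
    return "D"

def mostly_cooperate(opponent_history):
    # Cooperate by default; retaliate only after two consecutive defections
    # (a rough stand-in for the forgiving play described in the anecdote).
    if opponent_history[-2:] == ["D", "D"]:
        return "D"
    return "C"

coop_score, defect_score = play(mostly_cooperate, always_defect)
print(coop_score, defect_score)  # prints "18 28"
```

Both players end up far below the 60 points each would have earned from mutual cooperation over 20 rounds, which is exactly the collective-action failure the model is meant to clarify.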

The laboratory situation replicated the formal prisoners’ dilemma model as closely as possible with explicit rules, quantified ‘objective’ (though admittedly, in this case, low-value) payoffs, and a situation designed to isolate players as if they were prisoners in different cells. Yet even in these carefully controlled circumstances, it turns out that the situation is subject to multiple interpretations and understandings.

Whatever the textual explanation accompanying the game, the score on the screen could mean something different to the various players. The payoffs for the representative agents in the game were not the same as the payoffs in the minds of the human players. In a sense, my partner and I were unwittingly playing different games (although I lost under either set of rules!).

When we engage with the social world, it is not only the case that our interests may not align with those of other people. Social interaction is open-ended. We do not know all the possible moves in the game, and we do not know much about the preference set of everyone else who is playing. Indeed, neither they nor we know what a ‘complete’ set of preferences and payoffs would look like, even of our own. We can map out a few options and likely outcomes through reflection and experience, but even then we may face outcomes we do not anticipate. As Peter Boettke explains: ‘we strive not only to pursue our ends with a judicious selection of the means, but also to discover what ends that we hope to pursue.’

In addition, the rules of the game themselves are not merely exogenous impositions on us as agents. They are constituted inter-subjectively by the practices, beliefs and values of the actors that are also participants in the social game. As agents, we do not merely participate in the social world. We also engage in its creation through personal lifestyle experimentation, cultural innovation, and establishing shared rules and structures. The social world thus presents inherent uncertainty and change that cannot be captured in a formal model that assumes fixed rules of the game and the given knowledge of the players.

It is these two ideas, both borrowed from the Austrian notion of catallaxy, that make robust political economy distinct. First, neither our individual ends, nor the means of attaining them, are given prior to participation in a collective process of trial and error. Second, the rules that structure how we interact are themselves not given but subject to a spontaneous, evolutionary process of trial and error.

I try to set out these ideas in a recent symposium in Critical Review on Mark Pennington’s book, and in ‘Why be robust?’ in Interdisciplinary Studies of the Market Order, edited by Peter Boettke, Chris Coyne and Virgil Storr. The symposium article is available on open access, and a working paper version of my chapter is available at the Classical Liberal Institute website.

Can the Median Voter Theorem explain political polarization?

When I began dipping my toes into game theory and rational choice theory, like many others, I learned about the Median Voter Theorem (MVT). This theorem is essentially Hotelling’s Law applied to voting: two competing politicians, on any given issue, will adopt views close to the median of the spectrum of views on that issue, in order to maximize the number of votes they receive. Any movement toward either extreme, so the theory goes, would allow the opponent to gain the votes of centrists by moving in the same direction, but not as far, effectively gaining all voters on the other extreme AND the centrists. According to MVT, the most successful politicians should, if rational choice theory can be said to apply to elections, represent (if not hold) the views closest to those of the median voter, who should be relatively “centrist” even if extremist voters outnumber centrists.
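The convergence logic is simple enough to check in a few lines of code. Here is an illustrative sketch (the voter positions are invented numbers on a 0-to-1 issue spectrum) showing that a candidate standing at the median beats a rival standing anywhere else:

```python
import statistics

# Toy electorate on a 0-to-1 issue spectrum; each voter backs the nearer
# candidate. The positions below are assumptions for illustration.
voters = [0.1, 0.2, 0.35, 0.5, 0.5, 0.6, 0.7, 0.85, 0.9]

def votes_for(a, b):
    """Vote counts for candidates at positions a and b (equidistant voters abstain)."""
    score_a = sum(1 for v in voters if abs(v - a) < abs(v - b))
    score_b = sum(1 for v in voters if abs(v - b) < abs(v - a))
    return score_a, score_b

median = statistics.median(voters)  # 0.5

# A candidate at the median defeats a rival positioned anywhere else:
for rival in [0.0, 0.25, 0.75, 1.0]:
    a, b = votes_for(median, rival)
    assert a > b
```

Note that with a polarized, barbell-shaped electorate the same code still works, but the median voter it converges on need not be a centrist, which is the first caveat discussed below.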

This is, rather dramatically, not the case. History and current events offer a plethora of examples: a brief look at the makeup of the US government implies that centrist voices (and especially centrist voters) are outnumbered and drowned out. If MVT has any effect at all, why is increasing political polarization such a hot topic?

What are the possible explanations for this? Is MVT fundamentally wrong in its core idea, and is voting in fact not possible to model in rational choice theory? I do not think so, so here are several ideas, some old and one novel, about why the application of MVT does not lead to centrist politicians winning most elections in practice:

  • Voter preferences are polarized. If voter opinions are not only not normally distributed, but are in fact gathered at two poles with no centrist voters, MVT may actually function, but it would predict that centrists would lose a lot (because the median voter would be at one pole or the other). The idea that voters resemble a barbell graph more than a normal distribution may be very salient, because many issues are dominated by extreme views. However, the voting population is demonstrably more centrist on some issues, so this cannot fully explain the difference between reality and MVT.
  • Third parties spoil things. This hearkens back not only to Arrow’s Impossibility Theorem, but also to the influence of Ross Perot and Ralph Nader on presidential elections. Third party participation does not spoil MVT, because it still fundamentally follows the idea of rational choice theory about elections, but it complicates two-candidate models.
    • Further note: multi-party systems are vulnerable to extremists. The possibility of invasion by extreme views is higher in multi-party systems, because if multiple centrist parties compete for median voters, extreme groups gain power disproportionate to their constituency.
  • Primaries spoil things. This is an old idea in US politics: in primaries, you have to run to the extremes, and then in the general election, run to the center. The very logic behind this idea, and any empirical evidence of it, supports the viability of MVT, but it does introduce complications for any model that fails to account for the two different elections many politicians must win. We must also factor in that politicians incur a negative utility from abandoning past positions (especially recent ones): they risk seeming inauthentic or losing their base.
  • The Electoral College spoils things. Outside of US presidential elections, “swing states” may not have the same power, but in the US, candidates would be “rational” to pursue votes in swing states much more assiduously than votes in states that are nearly sure bets. This is only narrowly applicable, but the campaign focus on swing states indicates that candidates do at least campaign rationally, in that they seek votes strategically and not indiscriminately.
  • Voter turnout skews MVT. In elections without mandatory voting, those who are less committed to the issues at stake—who are more likely to be centrists—tend to vote less than those who care greatly one way or another. Therefore, voting may be more about mobilizing one’s base than appealing to centrists. (Note: this is not an endorsement of mandatory voting, which can allow more zero-sum games in politics and the enforcement of which seems worse than the problem).
  • People do not vote based on a rational weighing of stances. This would be a troubling conclusion, as it suggests that people may align with parties and candidates based on motives other than matching candidate views to their own. While this is certainly true to some extent, MVT can still predict the overall “centrism” of politicians. And even if many people weigh stances irrationally, that does not mean the aggregate utility of favoring centrist positions would not still be positive.
  • Voting is multidimensional. Since single candidates represent dozens if not hundreds of salient issues in a single election, voters are forced to compromise on some issues in order to win on others. This multidimensionality is not a counterargument to MVT, but an extreme complication, because “median” politicians could lose based on voter preferences not within an issue, but between issues. That is, a politician can gain voters by weighting each voter’s stance by how likely that issue is to change the voter’s vote. This can be observed in the fact that referenda tend to follow MVT in a more straightforward fashion than elections. It is also a possible explanation for the rise of certain coalitions: even if constituents have deep disagreements on many issues, they end up aligned behind the same candidate based on inter-issue preferences. This may have been a major motivation for federalization and the separation of powers: different decisions are constrained to certain elections, allowing voters to communicate on each individual issue more clearly because of the reduced multidimensionality. The game theory on multidimensional voting is well developed, but still has some complications that have not been specifically argued:
    • Complication: Special interest compared to general interest. Multidimensional voting allows politicians to promise concentrated special-interest benefits to voters on certain issues rather than appeal to the general interest across many issues. The implications of this have been shown in game form, especially regarding special-interest influence on enforcement. In this, public choice theory supports the idea that governments tend to favor dispersed costs and concentrated gains.
    • Complication: Single-issue voters. This is a subset of the above, or at least related to it. In multidimensional elections, voters obviously have to weigh their issue preferences. However, if a voter decides that he will choose a candidate so long as the candidate agrees with him on a single issue, then on that issue the single-issue voter has a hugely disproportionate influence on the candidate’s position. The reason: if the candidate runs to the center on that single issue, he risks his opponent capturing the votes of all single-issue voters on that side by running slightly more to the extreme. The Nash equilibrium of such a situation would depend on how many single-issue voters there are at either extreme (I assume here that single-issue voters are not centrists, based on the examples given below, but I am open to argument) and how much of the non-single-issue constituency is ceded by focusing on single-issue voters, but it is distinctly possible that issue preferencing, especially the power of the ultimatum implied in single-issue voting, makes it rational for politicians to run to the extremes. Interesting examples of this phenomenon include background checks for guns, defense spending, and possibly marijuana legalization. (Please note that this does not constitute an endorsement of these ideas; whether the majority is correct is a different question from whether the majority idea is enacted by politicians.)
      • The single-issue voter idea continues to fascinate me. Most of all, it fascinates me how little (apart from some basic preferencing models in the literature) I can find that either theorizes or empirically examines the specific influence of single-issue voters and voter preferencing. I hope it is out there, and I just can’t find it (so send it to me if you have found one!). But if not, is anyone out there a public choice theorist who wants to help me figure this out?

Thanks for sticking with a long read, and please give me feedback if you have any examples of this phenomenon or another angle on MVT!