Rent-Seeking Rebels of 1776

Since yesterday was Independence Day, I thought I should share a recent piece of research. A few months ago, I completed a working paper, now accepted as a book chapter, on public choice theory insights for American economic history (which I have discussed before). That paper argued that the American Revolutionary War that led to independence partly resulted from a string of rent-seeking actions (disclaimer: the title of the blog post was chosen to attract attention).

The first element of that string is that the American colonists were given a relatively high level of autonomy over their own affairs. However, that autonomy did not come with full financial responsibility. In fact, the American colonists were still net beneficiaries of imperial finance. As the long period of peace that lasted from 1713 to 1740 ended, the British started to spend increasingly large sums on the defense of the colonies. By subsidizing defense, the British were in effect inciting the colonists to take aggressive measures that might benefit them (i.e., raid instead of trade). Indeed, the benefits of any land seized through conflict would largely fall in the colonists' lap while the British ended up with the bill.

The second element is the French colony of Acadia (in modern-day Nova Scotia and New Brunswick). I say “French,” but it wasn’t really under French rule. Until 1713, it was nominally under French rule, but the colony of a few thousand was in effect a “stateless” society, since the reach of the French state was non-existent (most of the colonial administration in French North America took place in the colony of Quebec). In any case, the French government cared very little for that colony. After 1713, it became a British colony, but again the rule was nominal, and the British tolerated a conditional oath of loyalty (basically an oath of neutrality, speaking to the limited ability of the crown to enforce its desires in the colony). However, it was probably one of the most prosperous colonies of the French crown and one where – and this is admitted by historians – the colonists were on the friendliest of terms with the natives. Complex trading networks emerged which allowed the Acadians to acquire land rights from the native tribes in exchange for agricultural goods harvested thanks to sophisticated irrigation systems. These lands were incredibly rich, and they caught the attention of American colonists who wanted to expel the French colonists who, to top it off, were friendly with the natives. This led to a drive to actually deport them. When deportation occurred in 1755 (half the French population was deported), the lands were largely seized by American and British settlers in Nova Scotia. They got all the benefits. However, the crown paid the military expenses (which were considerable), and the deportation was carried out against the wishes of the imperial government as an initiative of the local governments of Massachusetts and Nova Scotia. This was clearly a rent-seeking action.

The third link is that, in England, the governing coalitions included government creditors who had strong incentives to control government spending, especially given the constraints imposed by debt-financing the intermittent wars with the French. These creditors saw the combination of local autonomy and the lack of financial responsibility for that autonomy as a call to centralize management of the empire and avoid such problems in the future. This drive towards centralization was a key factor, according to historians like J.P. Greene, in the initiation of the revolution. It was also a result of rent-seeking on the part of actors in England to protect their own interests.

As such, the history of the American Revolution must rely in part on a public choice contribution in the form of rent-seeking, which paints the revolution in a different (and less glorious) light.

The Deleted Clause of the Declaration of Independence

As a tribute to the great events that occurred 241 years ago, I wanted to recognize the importance of the unity of purpose behind supporting liberty in all of its forms. While the Declaration of Independence is an unequivocal statement of natural rights and the virtues of liberty, it also came close to bringing another vital aspect of liberty to the forefront of public attention. As has been addressed in multiple fascinating podcasts (Joe Janes, Robert Olwell), a censure of slavery and of George III’s connection to the slave trade appeared in the first draft of the Declaration.

Thomas Jefferson, a man criticized for the inherent contradiction between his high morals and his active participation in slavery, was a major contributor to the popularization of classical liberal principles. Many have pointed to his hypocrisy: he owned over 180 slaves, fathered children with them, and did not free them in his will (because of his debts). Even given his personal slaveholding, Jefferson made his moral stance on slavery quite clear through his famous efforts toward ending the transatlantic slave trade, which exemplify early steps in securing the abolition of the repugnant institution of chattel slavery in America and in applying classical liberal principles to all humans. However, abolition might have come far sooner, avoiding decades of appalling misery and its long-reaching effects, if his (hypocritical but principled) position had been adopted from the day of the USA’s first taste of political freedom.

This is the text of the deleted Declaration of Independence clause:

“He has waged cruel war against human nature itself, violating its most sacred rights of life and liberty in the persons of a distant people who never offended him, captivating and carrying them into slavery in another hemisphere or to incur miserable death in their transportation thither. This piratical warfare, the opprobrium of infidel powers, is the warfare of the Christian King of Great Britain. Determined to keep open a market where Men should be bought and sold, he has prostituted his negative for suppressing every legislative attempt to prohibit or restrain this execrable commerce. And that this assemblage of horrors might want no fact of distinguished die, he is now exciting those very people to rise in arms among us, and to purchase that liberty of which he has deprived them, by murdering the people on whom he has obtruded them: thus paying off former crimes committed against the Liberties of one people, with crimes which he urges them to commit against the lives of another.”

The Second Continental Congress, based on the hardline votes of South Carolina and the desire to avoid alienating potential sympathizers in England, slaveholding patriots, and the harbor cities of the North that were complicit in the slave trade, dropped this vital statement of principle.

The removal of the anti-slavery clause of the Declaration was not the only time Jefferson’s efforts might have led to an early end of the “peculiar institution.” Economist and cultural historian Thomas Sowell notes that Jefferson’s 1784 anti-slavery bill, which had the votes to pass but failed because of a single ill legislator’s absence from the floor, would have barred the expansion of slavery to any newly admitted states years before the Constitution’s infamous three-fifths compromise. One wonders whether America would have seen a secessionist movement or Civil War, and how the economies of states from Alabama and Florida to Texas would have developed without slave labor, which in some states and counties constituted the majority of the population.

These ideas form a core moral principle for most Americans today, but they are not hypothetical or irrelevant to modern debates about liberty. Though America and the broader Western world have brought the slavery debate to an end, the larger world has not; though every country has officially made enslavement a crime (true only since 2007), many within the highest levels of government aid and abet the practice. Thirty million individuals around the world suffer under the same types of chattel slavery seen millennia ago, including in nominal US allies in the Middle East. The debates between the pursuit of non-intervention as a form of freedom and the defense of the liberty of others as a form of freedom have been consistently important since the 1800s (or arguably earlier), and I think it is vital that these discussions continue in the public forum. I hope that this 4th of July reminds us that liberty is not just a distant concept, but a set of values that requires constant support, intellectual nurturing, and pursuit.

For more underrecognized history surrounding the founding of America, see my Before the Fourth series!

Adam Smith on the character of the American rebels

They are very weak who flatter themselves that, in the state to which things have come, our colonies will be easily conquered by force alone. The persons who now govern the resolutions of what they call their continental congress, feel in themselves at this moment a degree of importance which, perhaps, the greatest subjects in Europe scarce feel. From shopkeepers, tradesmen, and attornies, they are become statesmen and legislators, and are employed in contriving a new form of government for an extensive empire, which, they flatter themselves, will become, and which, indeed, seems very likely to become, one of the greatest and most formidable that ever was in the world. Five hundred different people, perhaps, who in different ways act immediately under the continental congress; and five hundred thousand, perhaps, who act under those five hundred, all feel in the same manner a proportionable rise in their own importance. Almost every individual of the governing party in America fills, at present in his own fancy, a station superior, not only to what he had ever filled before, but to what he had ever expected to fill; and unless some new object of ambition is presented either to him or to his leaders, if he has the ordinary spirit of a man, he will die in defence of that station.

Found here. Today, many people, especially libertarians in the US, celebrate an act of secession from an overbearing empire, but that isn’t really what happened. The colonies wanted more representation in parliament, not independence. London wouldn’t listen. Adam Smith wrote on this, too, in the same book.

Smith and, frankly, the American rebels were all federalists as opposed to nationalists. The American rebels wanted to remain part of the United Kingdom because they were British subjects and they were culturally British. Even the non-British subjects of the American colonies felt a loyalty towards London that they did not have for their former homelands in Europe. Smith, for his part, favored keeping the colonies partly because losing them would be expensive, but also, I am guessing, because his Scottish background showed him that being an equal part of a larger whole was beneficial for everyone involved. But London wouldn’t listen. As a result, war happened, and London lost a huge, valuable chunk of its realm to hardheadedness.

I am currently reading a book on post-war France. It’s by an American historian at New York University. It’s very good. Paris had a large overseas empire in Africa, Asia, Oceania, and the Caribbean. France’s imperial subjects wanted to remain part of the empire, but they wanted equal representation in parliament. They wanted to send senators, representatives, and judges to Europe, and they wanted senators, representatives, and judges from Europe to govern in their territories. They wanted political equality – isonomia – to be the ideological underpinning of a new French republic. Alas, what the world got instead was “decolonization”: a nightmare of nationalism, ethnic cleansing, coups, autocracy, and poverty through protectionism. I’m still in the process of reading the book. Its goal is to explain why this happened. I’ll keep you updated.

Small states, secession, and decentralization – three prescriptions that lay libertarians (who are still much smarter than conservatives and “liberals”) argue are essential for peace and prosperity – are worthless without some major qualifications. Interconnectedness matters. Political representation matters. What’s more, interconnectedness and political representation in a larger body politic are often better for individual liberty than smallness, secession, and so-called decentralization. Equality matters, but not in the ways that we typically assume.

Here’s more on Adam Smith at NOL. Happy Fourth of July, from Texas.

The Behavioural Economics of the “Liberty & Responsibility” Pairing

The marketing of Liberty comes wrapped in the formula “Liberty + Responsibility.” It is a sort of “you have the right to do what you please, BUT you have to be responsible for your choices.” That is correct: costs and benefits bring rationality to our decisions, and the lack of accountability brings about tragedy-of-the-commons outcomes. In a world where everyone is accountable for his choices, the ideal of liberty as absence of arbitrary coercion will be delivered by the resulting structure of rational individual decisions limiting our will.

The pairing of Liberty and Responsibility is right BUT unattractive. First of all, the formula is not actually “Liberty + Responsibility,” but “Liberty as Absence of Coercion – What Responsibility Takes Away.” The latter is still right: Responsibility transforms negative liberty as “absence of coercion” into “absence of arbitrary coercion.” The problem remains a matter of the marketing of ideas.

David Hume is a strong candidate for the title of “First Behavioural Economist,” since he stated that it is more unpleasant for a man to hold the unfulfilled promise of a good than never to have had either the good or the promise of it. The latter might be experienced as a desire, while the former is experienced as a dispossession. The pairing “Liberty – Responsibility” dishes out the same kind of deception.

It is like someone who tells you: “do what you want, enjoy 150% of Liberty”; and suddenly warns you: “but wait! You know there’s no such thing as a free lunch; if you are 150% free, someone will be 50% your slave. Give that illegitimate 50% of freedom back!” And he will be – again – right: being responsible makes everybody 100% free. Right – albeit disappointing.

Perhaps we should restate the formula the other way around: “Being 100% responsible for your choices gives you the right to claim 100% of your freedom.” Only a few will be interested in being more than 100% responsible for anything. But if it happens that someone is expected to deal alone with his own needs, at least he will be entitled to claim the right to his full autonomy.

The formula of “Responsibility + Liberty” is associated with the evolutionist notion of liberties, which means rights to be conquered one by one. Being responsible and then free means that Liberty is not an unearned income to be neutrally taxed. It is not a “state of nature” to be given up in exchange for civilization, but a project to grow, a goal, a raison d’être.

Putting Responsibility first and then Liberty determines a curious outcome: you are consciously free to choose the amount of freedom you are really willing to enjoy. Markets and hierarchies are, then, not antagonistic terms, but freely consented structures of cooperation. Moreover, what we trade are not goods, not even rights to goods, but parcels of our sphere of autonomy.

The importance of understanding causal pathways: the case of affirmative action.

Let us put aside the question of whether affirmative action is a desirable goal. Instead I wish to ponder how to implement affirmative action, given that it will be implemented in some form regardless.

The logic of most affirmative action programs is that a vulnerable community X has outcomes Y significantly below the average. For the sake of example, let us say that X is Cherokees and Y is the number of professional baseball players from that ethno-racial group.

Y = f(X) 

A public policy analyst who simply noted the underrepresentation of Cherokees in the MLB, without digging deeper into the causal pathway, might propose quotas requiring teams to field a certain share of Cherokee players. Such a proposal would be a bad one. It would be bad because it could lead to privileged Cherokees gaining spots in the MLB at the expense of less privileged individuals from other ethno-racial groups.

A better public policy analysis would note that Cherokees are less likely to enter professional baseball because they are malnourished (Z). This analyst, recognizing the causal pathway, may instead propose a program be implemented to deal with malnourished individuals regardless of their ethno-racial identity.

Y = f(X); X = f(Z) 

Most affirmative action programs that I have come across are of the former type. They recognize that ethno-racial group X is performing poorly in outcome Y, and propose action without acknowledging Z. We need more programs that are designed with Z in mind. (A toy simulation below illustrates the difference.)
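To make the distinction concrete, here is a minimal sketch in Python. Every number in it (group share, nutrition gap, sample sizes) is invented purely for illustration; the only point is the causal structure above, in which the group gap in Y runs entirely through Z.

```python
# Toy model of the causal pathway described above: the group gap in
# outcomes (Y) runs entirely through malnutrition (Z). Every number
# here is an invented illustration, not data.
import random

random.seed(0)

def make_person():
    group_x = random.random() < 0.2                   # X: group membership
    z = random.gauss(-1.0 if group_x else 0.0, 1.0)   # Z: nutrition, worse on average in X
    y = z + random.gauss(0.0, 1.0)                    # Y: ability depends on Z, not on X directly
    return {"x": group_x, "z": z, "y": y}

pop = [make_person() for _ in range(100_000)]

# Policy 1: a quota on X picks the best performers within the group --
# i.e., its best-nourished, most privileged members.
quota = sorted((p for p in pop if p["x"]), key=lambda p: p["y"], reverse=True)[:100]
print("mean Z of quota picks:", sum(p["z"] for p in quota) / len(quota))

# Policy 2: targeting Z directly reaches the malnourished of every group,
# including most (but not only) members of X.
aid = sorted(pop, key=lambda p: p["z"])[:100]
print("share of X among aid recipients:", sum(p["x"] for p in aid) / len(aid))
```

Under these assumed numbers, the quota mostly benefits the well-nourished members of X, while the Z-targeted program reaches the genuinely vulnerable regardless of group.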

I do not say any of this because I am an upper-class white male who resents others receiving affirmative action. To the contrary. I have benefited from this type of affirmative action several times in my life. On paper I am a gold mine for a human resources worker looking to fulfill diversity quotas: I am an undocumented Hispanic of Black-Jewish descent who was raised in a low-income household. I am not, however, vulnerable. I come from a low-income household, but my Z is not low. Not really.

Despite my demographic group, I am not malnourished. I could stand to lose weight, but I am not unhealthy. I attended a state university, but my undergraduate education is comparable to that of someone who attended a public ivy. My intelligence is on the right side of the bell curve. Absent affirmative action I am confident I would achieve entry into the middle class.

Nor am I a rarity among beneficiaries. My observation is that many beneficiaries of affirmative action programs are not low on Z and would achieve success on their own if left alone. Affirmative action programs are often constructed in such a way that someone low on Z could not navigate their application process. It may seem egalitarian to require applicants to submit course transcripts, write essays, or present letters of recommendation. However, these seemingly simple tasks require a level of Z that the truly underprivileged do not have.

Good public policy analysis requires us to understand the causal pathway of why X groups do not achieve success at rates similar to other groups. We must design programs that target undernourishment instead of simply targeting Cherokees. If we fail to do so, we may have more Cherokees playing for the Dodgers, but we will have failed to solve the deeper problem.

Note that I say vulnerable as opposed to ‘minority’ in the above passage. This is to acknowledge that many so-called minority groups are nothing of the sort. Hispanics, Blacks, and Asians form majorities in various parts of the southwest, the south, and the Pacific (e.g., Hawaii). Women likewise are not a minority, but are often covered by affirmative action programs. Jews are in many instances minorities, but in contemporary life are far from underrepresented in society’s top professions. This distinction may seem too obvious to be worth making, but it is not. Both sides of the political spectrum forget that the ultimate goal of affirmative action is to aid vulnerable individuals. Double emphasis on individuals.

The Old Deluder Satan Act: Literacy, Religion, and Prosperity

So, my brother (Keith Kallmes, graduate of the University of Minnesota in economics and history) and I have decided to start podcasting some of our ideas. The topics we hope to discuss range from ancient coinage to modern medical ethics, but with a general background of economic history. I have posted here our first episode, on the Old Deluder Satan Act. This early American legislation, passed by the Massachusetts Bay colonists, displays some of the key values that we posit as causes of New England’s principal role in the Industrial Revolution. The episode:

We hope you enjoy this 20-minute discussion of the history of literacy, religion, and prosperity, and we are also happy to get feedback, episode suggestions, and further discussion in the comments below. Lastly, we have included links to some of the sources cited in the podcast.


Sources:

The Legacy of Literacy: Continuity and Contradictions in Western Culture, by Harvey Graff

Roman literacy evidence based on inscriptions discussed by Dennis Kehoe and Benjamin Kelly

Mark Koyama’s argument

European literacy rates

The Agricultural Revolution and the Industrial Revolution: England, 1500-1912, by Gregory Clark

Abstract of Becker and Woessmann’s “Was Weber Wrong?”

New England literacy rates

(Also worth a quick look: the history of English Protestantism, the Puritans, the Green Revolution, and Weber’s influence, as well as an alternative argument for the cause of increased literacy)

Immigration and States’ Rights

Bryan Caplan (arguing the affirmative) and Christopher Wellman recently debated whether immigration is a human right.

Wellman won the debate according to audience votes, but I think his argument was significantly weaker. He made confused arguments that, on second thought, lend credence to Caplan’s position. But through hand-waving he transitioned to “and therefore states’ rights!” I am far from convinced that states’ rights are valid, but I do want to explore an interesting issue he raised: the moral weight of collective phenomena.

Markets generate economic information more intelligently than any individual participant. Competition and collaboration in cultural spaces generate more and better art than any individual on their own. Society is the outcome of individual choices, but the collective is something apart from those individuals.

We have various collectives (e.g. cultural regions, markets, local communities, families, national identities, sports fandom, science, etc.), many of which are special. They provide club goods (sometimes club bads), and require the support of their members. These networks exhibit emergent properties–the whole is more than the sum of its parts.

So surely those members should have some say in the management of the collective?

This is where Wellman went off track. Yes, these collectives are important. Yes, they require some form of governance. But that doesn’t unambiguously imply involvement of government.

Consider an excellent example Wellman gives: families. Families are an essential part of the structure of society and one we are each deeply familiar with. If there’s a collective entity with moral weight, surely it’s the family.

Wellman posed the hypothetical around the 32:45 mark: what if he returned home and found that his wife had unilaterally adopted a new child? Clearly this is freedom of association run amok! But the example doesn’t imply the need for state involvement; it implies the need for couples therapy! If he and his wife together decide to adopt, then the question remains, “why should the government have a say in this?” Currently it does, which means that whatever the median voter is cool with is acceptable, even if that means preventing this adoption that clearly doesn’t affect them. That seems untenable unless we have strong evidence that adoptions tend to create large negative spillovers.

The moral weight of a family doesn’t imply either state involvement or democratic decision making. Members can be added to a family through birth or marriage. The decision is made by the one or two individuals most directly involved (perhaps with some role for other family members). And those decisions are made non-coercively. Parents may intervene to prevent teenage Romeos and Juliets from getting married, but adults are basically allowed to make their own decisions.

I’m guessing here, but I’d bet that 90% of people would agree that the way we do freedom of association in families is basically the right way to do things.

Polycentrism!

The scope of a family does not fit neatly into the boxes drawn on a map, nor do most other collective phenomena. Red Sox Nation isn’t just Boston. Regional cultures overlap. Languages cross borders.

We want the collective decision making institutions to reflect the area of spill-overs. Decisions affecting a family should be made within the family. I shouldn’t be directly involved in decisions about how to provide local public services in San Diego. Global spillovers justify global decision making, but local spillovers don’t.

When it comes to immigration, we have to ask:

  1. What collectives will they affect? (certain labor markets, local communities)
  2. Are they likely to create large negative spillovers?
  3. What is the current form of institutions governing those collectives?

There are high stakes for many potential immigrants (especially those coming from places typical Americans are most afraid of), so we should probably go a step further: if there’s a solution to some potential spillover problem that isn’t significantly more costly than immigration restrictions, we should feel obliged to use that solution. For example, it should be easier to come here to live and work than it is to get welfare benefits (although getting that policy to work raises a host of other questions).

Rights imply action

Let’s agree on this: there are collective phenomena that are special. We want to take care of these phenomena which means figuring out the appropriate form of governance for each case.

Wellman gives another family example that blows his own argument out of the water: what if he was put in an arranged marriage? This would deny him important scope for self-determination. And therefore (he argues) states, being important collective phenomena, have a right to self-determination.

How did the audience not notice this?! Immigration restrictions deny me choice over who to voluntarily associate with and so deny me scope for self-determination.

Even if it feels weird from a rational-individualist perspective, there is something special about (e.g.) a country. But that doesn’t mean we should abandon methodological individualism. We know that only individuals make choices, even if they make those choices for the sake of collectives. A collective can have moral weight but still lack the ability to choose. To my mind, this kills the idea of states’ rights (as in “right to do x” or “right to self-determination”) in general.

What we’re left with is the original question: how do we manage the collective? What decisions do we make collectively, and what do we decide piecemeal?

For many (most?) collectives, including the most important ones, we allow freedom of (dis)association and leave the state out of it. Wellman did not answer the question of “why should immigration be different?” I suspect there are strong arguments to be made, but the closest I heard in this debate is that we can think of this as a question of governance, and that government sometimes provides governance.

As Wellman points out (around the 30:00 mark) there is (sometimes) a tension between rules favoring individual freedom and rules requiring collective decision making. There are plenty of examples of scenarios where we uncontroversially prefer to limit some individual rights–we do this automatically with negative rights by denying you the freedom to murder in support of your right to life.

It’s not clear to me that the expected effects of immigrants are widespread enough to justify as sweeping a policy as “only the following people are allowed in these particular thousands of square miles.” For immigration (but not access to the welfare state), the presumption of liberty seems the way to go.

tl;dr: We have various collective goods that are special (e.g. the “character” of a community). This calls for some form of governance to allow the individuals directly involved to manage collective goods. This frequently calls for constraints on individual freedoms for the benefit of the community, but that doesn’t mean that the special collective identity of a country justifies a presumption of closed borders.

The debate over whether the nation state is violating human rights by restricting immigration (with caveats made for “obviously” reasonable restrictions like keeping out known murderers) is not closed by pointing out that there is a collective good associated with the nation state. States can be special without having states’ rights.

A Right is Not an Obligation

Precision of language in matters of science is important. Speaking recently with some fellow libertarians, we got into an argument about the nature of rights. My position: A right does not obligate anyone to do anything. Their position: Rights are the same thing as obligations.

My response: But if a right is the same thing as an obligation, why use two different words? Doesn’t it make more sense to distinguish them?

So here are the definitions I’m working with. A right is what is “just” or “moral”, as those words are normally defined. I have a right to choose which restaurant I want to eat at.

An obligation is what one is compelled to do by a third party. I am obligated to sell my car to Alice at a previously agreed-on price, or else Bob will come and take my car away from me using any means necessary.

Let’s think through an example. Under a strict interpretation of libertarianism, a mother with a starving child does not have the right to steal bread from a baker. But if she does steal the bread, then what? Do the libertarian police instantly swoop down from Heaven and give the baker his bread back?

Consider the baker. The baker indeed has a right to keep his bread. But he is under no obligation to get his bread back should it get stolen. The baker could take pity on the mother and let her go. Or he could calculate that the cost of having one loaf stolen is too low to justify expending resources to get it back.

Let’s analyze now the bedrock of libertarianism, the nonaggression principle (NAP). There are several formulations. Here’s one: “no one has a right to initiate force against someone else’s person or property.” Here’s a more detailed version, from Walter Block: “It shall be legal for anyone to do anything he wants, provided only that he not initiate (or threaten) violence against the person or legitimately owned property of another.”

A natural question to ask is, what happens if someone does violate the NAP? One common answer is that the victim of the aggression then has a right to use force to defend himself. But note again, the right does not imply an obligation. Just because someone initiates force against you, does not obligate you or anyone else to respond. Pacifism is consistent with libertarianism.

Consider another example. Due to a strange series of coincidences, you find yourself lost in the woods in the middle of a winter storm. You come across an unoccupied cabin that’s obviously used as a summer vacation home. You break in, and help yourself to some canned beans and shelter, and wait out the storm before going for help.

Did you have a right to break into the cabin? Under some strict interpretations of libertarianism, no. But even if this is true, all it means is that the owners of the cabin have the right, but not obligation, to use force to seek damages from you after the fact. (They also had the right to fortify their cabin in such a way that you would have been prevented from ever entering.) But they may never exercise that right; you could ask for forgiveness and they might grant it.

Furthermore, under a pacifist anarcho-capitalist order, the owners might not even use force when seeking compensation. They might just ask politely; and if they don’t like your excuses, they’ll simply leave a negative review with a private credit agency (making it harder for you to get loans, jobs, etc.).

The nonaggression principle, insofar as it is strictly about rights (and not obligations), is about justice. It is not about compelling people to do anything. Hence, I propose a new formulation of the NAP: using force to defend yourself from initiations of force can be consistent with justice.

This formulation makes clear that using force is a choice. Initiating force does not obligate anyone to do anything. “Excessive force” may be a possible injustice.

In short, justice does not require force.

Paradoxical Geniuses: “Let us burn the ships”

In 1519, Hernán Cortés landed 500 men in 11 ships on the coast of the Yucatan, knowing that he was openly disobeying the governor of Cuba and that he was facing unknown numbers of potential enemies in an unknown situation. Regardless of the moral implications, what happened next was strategically extraordinary: he and his men formed a local alliance, and despite having to beat a desperate retreat on La Noche Triste, they conquered the second largest empire in the New World. As the expeditionary force landed, Cortés made a tactically irrational decision: he scuttled all but one of his ships. In doing so, he hamstrung his own maneuverability, scouting, and communication and supply lines, but he gained one incredible advantage: the complete commitment of his men to the mission, for as Cortés himself said, “If we are going home, we are going in our foes’ ships.” This strategic choice highlights the difference between logic and economists’ concept of “rationality,” in that illogical destruction of one’s own powerful and expensive tools creates a credible commitment that can overcome a serious problem in warfare, that of desertion or cowardice. While Cortés certainly increased the risk to his own life and that of his men, the powerful psychology of being trapped by necessity brought out the very best of the fighting spirit in his men, leading to his dramatic victory.

This episode is certainly not unique in the history of warfare; such measures were not only enacted by leaders as a method of ensuring commitment but actually underlay the seemingly crazy (or at least overly risky) cultural practices of several ancient groups. The pervasiveness of these psychological strategies shows that, whether each case was a genius decision or an accident of history, they conferred a substantial advantage on their practitioners. (If you are interested in how rational choices are revealed in the history of warfare, please also feel free to read about hostage exchanges and ransoming practices in an earlier blog post!) I have collected some of the most interesting examples that I know of, but the following is certainly not an exhaustive list and I encourage other episodes to be mentioned in the comments:

  • Julian the Apostate
    • Julian the Apostate is most famous for his attempt to reverse Constantine the Great’s Christianization of the Roman Empire, but he was also an ambitious general whose audacity gained him an incredible victory over Germanic invaders against steep odds. He wanted to reverse the stagnation of Roman interests on the Eastern front, where the Sasanian empire had been challenging the Roman army since the mid-3rd century. Having gathered an overwhelming force, he marched to the Euphrates river and took ships from there toward the Sasanian capital, while the Sasanians used scorched-earth tactics to slow his advance. When Julian found the capital (Ctesiphon) undefended, he worried that his men would want to loot the capital and return homeward, continuing the status quo of raiding and retreating. To prevent this, in a move much like that of Cortés, he set fire to his ships and forced his men to press on. In his case, this did not end with stunning victory; Julian overextended his front, was killed, and lost the campaign. Julian’s death shows the very real risks involved in this bold strategy.
  • Julius Caesar
    • Julian may have taken his cue from a vaunted Roman historical figure. Dramatized perfectly by HBO, the great Roman general and statesman Julius Caesar made a huge gamble by taking on the might of the Roman Senate. Despite being heavily outnumbered (over 2 to 1 on foot and as much as 5 to 1 in cavalry), Caesar committed to a decisive battle against his rival Pompey in Greece. While Pompey’s troops had the option of retreating, Caesar relied on the fact that his legionaries had their backs to the Mediterranean, effectively trapping them and giving them no opportunity to rout. While Caesar also tactically out-thought Pompey (he used cunning deployment of reserves to stymie a cavalry charge and break Pompey’s left flank), the key to his victory was that Pompey’s numerically superior force ran first; Pompey met his grisly end shortly thereafter in Egypt, and Caesar went on to gain power over all of Rome.
  • Teutones
    • The impact of the Teutones on Roman cultural memory proved so enduring that “Teutonic” is used today to refer to Germanic peoples, despite the fact that the Teutones themselves were of unknown linguistic origin (they could very well have been Celtic). The Teutones and their allies, the Cimbri, smashed better-trained and better-equipped Roman armies multiple times in a row; later Roman authors said they were possessed by the Furor Teutonicus, as they seemed to possess an irrational lack of fear, never fleeing before the enemy. Like many Celtic and Germanic peoples of Northern Europe, the Teutones exhibited a peculiar cultural practice to give an incentive to their men in battle: all of the tribe’s women, children, and supplies were drawn up on wagons behind the men before battles, where the women would take up axes to kill any man who attempted to flee. In doing so, they solved the collective action problem which plagued ancient armies, in which a few men running could quickly turn into a rout. If you ran, not only would you die, but your wife and children would as well, and this psychological edge allowed a roving tribe to place the powerful Roman empire in jeopardy for a decade.
  • The Persian emperors
    • The earliest recorded example of paradoxical risk as a battle custom is the Persian imperial practice of bringing the women, children, and treasure of the emperor and noble families to the war-camp. This seems like a needless and reckless risk, as it would turn a defeat into a disaster in the loss of family and fortune. However, this case is comparable to that of the Teutones, in that it demonstrated the credible commitment of the emperor and nobles to victory, and used this raising of the stakes to incentivize bravery. While the Persians did conquer much of the known world under the nearly mythical leadership of Cyrus the Great, this strategy backfired for the last Achaemenid Persian emperor: when Darius III confronted Alexander the Great at Issus, Alexander’s crack hypaspist troops routed Darius’ flank as well as Darius himself! The imperial family and a great hoard of silver fell into Alexander’s hands, and he would go on to conquer the entirety of the Persian empire.

These examples show the diverse cultural and personal applications of rational choice theory and psychological warfare that typified some of the most successful military leaders and societies. As the Roman military writer Vegetius stated, “an adversary is more hurt by desertion than slaughter.” Creating unity of purpose is by no means an easy task, and balancing the threat of death in frontline combat against the threat of death during a rout was a problem that plagued leaders from the earliest recorded histories forward. (In ancient Greek battles, there were few casualties on the line of battle; the majority of casualties took place during flight from the battlefield. This made the game-theoretical choice for each soldier an interesting balance: he might die on the line but would live if ONLY he ran away, yet he faced a much higher risk of death if a critical mass of troops ran away – a toy version of this game is sketched below, and perhaps it will be fodder for a future post.) This was a salient and even vital issue for leaders to overcome, and despite the high risks that led to the fall of both Julian and Darius, forcing credible commitment to battle is a fascinating strategy with good historical support for its success. The modern implications of credible commitment problems range from wedding rings to climate accords, but very few modern practices utilize the “illogical rationality” of intentionally destroying secondary options. I continue to wonder what genius, or what society, will come up with a novel application of this concept, and I look forward to seeing the results.
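To see why a commander might rationally burn his own ships, here is a minimal sketch of the desertion game in Python. The payoff numbers are invented for illustration; what matters is the structure: each soldier individually prefers to run, but mass flight triggers a rout that is deadlier for everyone.

```python
# A toy threshold model of the desertion problem. All payoffs are
# invented for illustration, not drawn from any historical source.

def death_risk(runs: bool, share_running: float) -> float:
    """Probability of death for one soldier, given his own choice
    and the share of the army that runs."""
    rout = share_running > 0.3           # the line collapses past a threshold
    if rout:
        return 0.6 if runs else 0.9      # a rout is deadly, worst for those who stand
    return 0.05 if runs else 0.2         # a lone deserter escapes; the line holds

# Whatever the rest of the army does, running is individually safer...
for share in (0.0, 0.5):
    print(f"share running={share}: run -> {death_risk(True, share)}, "
          f"stand -> {death_risk(False, share)}")

# ...so everyone runs, the line collapses, and every soldier faces a 0.6
# risk of death. Scuttling the ships deletes the "run" option entirely:
# the threshold is never crossed, and each soldier's risk falls to 0.2.
```

Removing the retreat option makes every individual worse off in choice but better off in outcome, which is exactly the "illogical rationality" of Cortés, Julian, and the Teutones.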

P.S.–thanks to Keith Kallmes for the idea for this article and for helping to write it. Truly, it is his economic background that leads to many of these historical questions about rational choice and human ingenuity in the face of adversity.

Could the DUP push UK Conservatives towards a ‘Norway Option’?

Last year, Britain voted to leave the European Union under a banner of anti-immigration and protectionism. Since then, both social democrats and classical liberals have been waiting to catch a break. Ever the optimist, I hope they may have just got one, from an unlikely source: the Democratic Unionist Party (DUP). This Northern Ireland-based Protestant party is usually at the margins of national British politics. Thanks to the outcome of the latest general election, it may be in a position to push the British Conservatives towards a more trade- and immigration-friendly Brexit.

In April, Prime Minister (for now) Theresa May called a snap election. She didn’t need to face the electorate until 2020, but decided to gamble, thinking that she would increase her working majority of Conservative MPs. Instead, as we discovered yesterday after the polls closed, she did the opposite, reducing the slim majority that David Cameron won in 2015 to a mere plurality. This was against one of the most radically left-wing opponents in decades, Jeremy Corbyn.

This was a dismal failure for the Conservatives, but the result is a relatively good sign for liberals. I feared that Theresa May’s conservative-tinged anti-market, anti-human rights, authoritarian corporatism was exactly what centrist voters would prefer. It turns out that Cameron’s more liberal conservatism actually won more seats. Not only is an outward-looking liberalism correct; de-emphasizing it turns out not to be a popular move after all.

Without a majority, the Conservatives need to form a coalition or come to an informal agreement with another party. This seems all but impossible with Labour, the Scottish Nationalists, or the Liberal Democrats, who have all campaigned heavily against the Conservatives and disagree on key issues, such as whether Britain should leave the European Union at all. This leaves the DUP.

In terms of ideology, the DUP is far to the right of most British Conservatives. Their opposition to gay marriage, abortion, and occasional support for teaching creationism, means that they have more in common with some Republican Christian groups in the United States than the secular mainstream in the rest of the United Kingdom. Historically, at least, they have links with pro-unionist paramilitaries that have terrorized Irish Catholic separatists.

There is, however, one way in which the DUP are comparatively moderate. While content with the UK leaving the European Union, they want to keep the land border between Northern Ireland and the Republic of Ireland (an EU member) open. Closing it would reduce critical cross-border trade with an economically dynamic neighbor and re-ignite violent tensions between the Protestant and Catholic communities in Northern Ireland.

How could this be achieved? Leaving the EU while keeping a relatively open trading and immigration relationship is similar to the so-called Norway Option. Norway is within the single market but can exempt itself from many parts of EU law. In return, it has no direct representation in EU institutions. If the EU could accept such an arrangement, then the DUP may be able to make Conservatives commit to it.

Of course, the DUP will extract other perks from their senior coalition partners as part of any deal. But their social policy preferences are so far to the right of people in England, Wales, and Scotland that these will hopefully have to take the form of fiscal subsidies to their home region (economically damaging, but at least avoiding the infringement of civil liberties).

It might seem paradoxical that an extreme party may have a moderating influence on overall policy. However, social choice theory suggests that democratic processes do not aggregate voter, or legislator, preferences in a straightforward way. Because preferences exist along multiple dimensions, they are neither additive nor linear. This can produce perverse and chaotic outcomes, but it can also generate valuable bargains between otherwise opposed parties. In this case, one right-wing party produces an authoritarian Brexit. But two right-wing parties could equal a more liberal outcome.
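As a toy illustration of that non-additivity, here is a short Python example with invented preferences over three Brexit outcomes (a hard Brexit, a Norway-style deal, and remaining). Nothing here models the actual parties; the factions and rankings are made up so that pairwise majority voting cycles, which is exactly why the order of bargaining can determine the outcome.

```python
# Invented preferences for three factions over three Brexit outcomes --
# purely illustrative, not a model of the actual parties.
from itertools import permutations

options = ["hard brexit", "norway option", "remain"]

factions = {
    "tory_hardliners": ["hard brexit", "norway option", "remain"],
    "dup":             ["norway option", "remain", "hard brexit"],  # open border first
    "third_bloc":      ["remain", "hard brexit", "norway option"],  # contrived to close the cycle
}

def prefers(faction: str, a: str, b: str) -> bool:
    """True if the faction ranks option a above option b."""
    ranking = factions[faction]
    return ranking.index(a) < ranking.index(b)

# Pairwise majority votes: each pair has a 2-to-1 winner, yet the winners
# form a cycle (hard > norway > remain > hard), so no option beats all others.
for a, b in permutations(options, 2):
    votes = sum(prefers(f, a, b) for f in factions)
    if votes >= 2:
        print(f"{a} beats {b}, {votes} factions to {3 - votes}")
```

With a cycle like this, which coalition forms first decides the policy, so adding a second right-wing party really can shift the equilibrium toward a more open Brexit.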

That’s the theory. Has something like this ever happened in practice? Arguably, Canada is an outstanding example of how a minority party with many internally illiberal policy preferences produces liberal outcomes (see the fascinating Vaubel, 2009, p.25 for the argument). There, the need to placate the separatist movement in Quebec involved leaving more powers to the provinces in general, thus keeping Canada as a whole much more decentralized than Anglo-Canadian preferences alone could have assured. Will the DUP do the same for Britain? We can but hope.

Pittsburgh, not Paris: What’s a libertarian response?

A lot has been said about Trump pulling the US out of the Paris Accords. Leftists have been apoplectic, foaming at the mouth even. Conservatives are baffled, if they have anything to say at all. What should libertarians think?

Libertarians in the United Kingdom, States, and Provinces are generally unilateralists (not isolationists), whereas libertarians in Europe, South Africa, and Latin America are generally multilateralists. I’m of the opinion that American libertarians are wholly wrong to claim that their foreign policy is libertarian. It’s not libertarian at all. Unilateralism is combative rather than cooperative and relies on nationalism rather than internationalism to make its arguments.

Multilateralism forces factions to come to a consensus, thus slowing down government action at the international level, while also forcing factions to interact with each other in a diplomatic manner at that same international level. Unilateralism allows states to do whatever they want, regardless of what others may think. Now let me remind you of what libertarianism stands for: peace, prosperity, and freedom through mutually beneficial exchange and agreed-upon rules that can be changed provided they go through the proper channels (legislation, judiciary, executive). (Am I wrong here?)

Which sounds more libertarian to you?

Now that we have issues of doctrine out of the way, what’s really interesting to note is the Left’s inability to see what Trump is actually doing: wagging the dog. Trump’s term as executive is not going well (surprise, surprise). And so, he does a mean-spirited thing that he hopes will distract.

Here’s how I see the Paris Accords (chime in if you disagree):

  • They (it?) have not, and will not – ever – accomplish anything in regard to climate change, but
  • because of this, they are also wholly non-threatening. It’s just a bunch of countries getting together, in good faith, to solve a problem (real or imagined)

Some hardline factions on the conservative wing in the US didn’t like that the Paris Accords are essentially glorified intern conventions, and some factions on the American Left absolutely revere green initiatives (even if they’re no good at greening anything other than lobbyists’ pocketbooks), so Trump pulled the plug.

#covfefe

The death of reason

“In so far as their only recourse to that world is through what they see and do, we may want to say that after a revolution scientists are responding to a different world.”

Thomas Kuhn, The Structure of Scientific Revolutions p. 111

I can remember arguing with my cousin right after Michael Brown was shot. “It’s still unclear what happened,” I said, “based solely on testimony” — at that point, we were still waiting on the federal autopsy report by the Department of Justice. He said that in the video, you can clearly see Brown, back to the officer and with his hands up, as he is shot up to eight times.

My cousin doesn’t like police. I’m more ambivalent, but I’ve studied criminal justice for a few years now, and I thought that if both of us watched this video (no such video actually existed), it was probably I who would have the more nuanced grasp of what happened. So I said: “Well, I will look up this video, try and get a less biased take and get back to you.” He replied, sarcastically, “You can’t watch it without bias. We all have biases.”

And that seems to be the sentiment of the times: bias encompasses the human experience, it subsumes all judgments and perceptions. Biases are so rampant, in fact, that no objective analysis is possible. These biases may be cognitive, like confirmation bias, emotional fallacies or that phenomenon of constructive memory; or inductive, like selectivity or ignoring base probability; or, as has been common to think, ingrained into experience itself.

The thing about biases is that they are open to psychological evaluation. There are precedents for eliminating them. For instance, one common explanation of racism is that familiarity breeds acceptance and unfamiliarity breeds intolerance (as Reason points out, people further from fracking sites have more negative opinions of the practice than people closer to them). So to curb racism (a sort of bias), children should interact with people outside of their singular ethnic group. More clinical methodology seeks to transform mental functions that are automatic into controlled ones, and thereby enter reflective measures into perception, reducing bias. Apart from these, there is that ancient Greek practice of reasoning, wherein patterns and evidence are used to generate logical conclusions.

If it were true that human bias is all-encompassing, and essentially insurmountable, the whole concept of critical thinking goes out the window. Not only do we lose the critical-rationalist, Popperian mode of discovery, but also Socratic dialectic, as essentially “higher truths” disappear from human lexicon.

The belief that biases are intrinsic to human judgment ignores psychological or philosophical methods to counter prejudice because it posits that objectivity itself is impossible. This viewpoint has been associated with “postmodern” schools of philosophy, such as those Dr. Rosi commented on (e.g., those of Derrida, Lacan, Foucault, Butler), although it’s worth pointing out that the analytic tradition, with its origins in Frege, Russell and Moore represents a far greater break from the previous, modern tradition of Descartes and Kant, and often reached similar conclusions as the Continentals.

Although theorists of the “postmodern” clique produced diverse claims about knowledge, society, and politics, the most famous figures are nearly always associated with or incorporated into the political left. To make a useful simplification of viewpoints: it would seem that progressives have generally accepted Butlerian non-essentialism about gender and Foucauldian terminology (discourse and institutions). Derrida’s poststructuralist critique noted dichotomies and also claimed that the philosophical search for Logos has been patriarchal, almost neoreactionary. (The month before Donald Trump’s victory, searches for the word patriarchy hit an all-time high on Google.) It is not a far-right conspiracy that European philosophers with strange theories have influenced and sought to influence American society; it is patent in the new political language.

Some people think of the postmodernists as all social constructivists, holding the theory that many of the categories and identifications we use in the world are social constructs without a human-independent nature (e.g., not natural kinds). Disciplines like anthropology and sociology have long since dipped their toes, and the broader academic community, too, accepts that things like gender and race are social constructs. But the ideas can and do go further: “facts” themselves are open to interpretation on this view; to even assert a “fact” is just to affirm power of some sort. This worldview subsequently degrades the status of science into an extended apparatus for confirmation bias, filling out the details of a committed ideology rather than providing us with new facts about the world. There can be no objectivity outside of a worldview.

Even though philosophy took a naturalistic turn with the philosopher W. V. O. Quine, seeing itself as integrating with and working alongside science, the criticisms of science as an establishment that emerged in the 1950s and 60s (and earlier) often disturbed its unique epistemic privilege in society: ideas that theory is underdetermined by evidence, that scientific progress is nonrational, that unconfirmed auxiliary hypotheses are required to conduct experiments and form theories, and that social norms play a large role in the process of justification all damaged the mythos of science as an exemplar of human rationality.

But once we have dismantled Science, what do we do next? Some critics have held up Nazi German eugenics and phrenology as examples of the damage that science can do to society (never mind that we now consider them pseudoscience). Yet Lysenkoism and the history of astronomy and cosmology indicate that suppressing scientific discovery can be just as deleterious. The Austrian physicist and philosopher Paul Feyerabend instead wanted a free society — one where science had equal power with older, more spiritual forms of knowledge. He thought the model of rational science exemplified by Sir Karl Popper was inapplicable to the real machinery of scientific discovery, and that the only methodological rule we could impose on science was: “anything goes.”

Feyerabend’s views are almost a caricature of postmodernism, although he denied the label “relativist,” opting instead for philosophical Dadaist. In his pluralism, there is no hierarchy of knowledge, and state power can even be introduced when necessary to break up scientific monopoly. Feyerabend, contra scientists like Richard Dawkins, thought that science was like an organized religion and therefore supported a separation of church and state as well as a separation of state and science. Here is a move forward for a society that has started distrusting the scientific method… but if this is what we should do post-science, it’s still unclear how to proceed. There are still queries for anyone who loathes the hegemony of science in the Western world.

For example, how does the investigation of crimes proceed without strict adherence to the latest scientific protocol? Presumably, Feyerabend didn’t want to privatize law enforcement, but science and the state are very intricately connected. In 2005, Congress authorized the National Academy of Sciences to form a committee and conduct a comprehensive study on contemporary forensic science to identify community needs, evaluating laboratory executives, medical examiners, coroners, anthropologists, entomologists, odontologists, and various legal experts. Forensic science — scientific procedure applied to the field of law — exists for two practical goals: exoneration and prosecution. However, the Forensic Science Committee revealed that severe issues riddle forensics (e.g., bite mark analysis), and in its list of recommendations the top priority is establishing an independent federal entity to devise consistent standards and enforce regular practice.

For top scientists, this sort of centralized authority seems necessary to produce reliable work, and it entirely disagrees with Feyerabend’s emphasis on methodological pluralism. Barack Obama formed the National Commission on Forensic Science in 2013 to further investigate problems in the field, and only recently Attorney General Jeff Sessions said the Department of Justice will not renew the commission. It’s unclear now what forensic science will do to resolve its ongoing problems, but what is clear is that the American court system would fall apart without the possibility of appealing to scientific consensus (especially forensics), and that the only foreseeable way to solve the existing issues is through stricter methodology. (Just as with McDonald’s, there are enforced standards so that the product is consistent wherever one orders.) More on this later.

So it doesn’t seem to be in the interest of things like due process to abandon science or completely separate it from state power. (It does, however, make sense to move forensic laboratories out from under direct administrative control, as the NAS report notes in Recommendation 4, though specifically to reduce bias.) In a culture where science is viewed as irrational, Eurocentric, ad hoc, and polluted with ideological motivations — or where Reason itself is seen as a hegemonic, imperial device to suppress different cultures — not only do we not know what to do, but when we try to act we lose elements of our civilization that everyone agrees are valuable.

Although Aristotle separated pathos, ethos, and logos (adding that each informs the others), later philosophers like Feyerabend thought of reason as a sort of “practice,” with a history and connotations like any other human activity, falling far short of the sublime. One could no more justify reason outside its European cosmology than the sacrificial rituals of the Aztecs outside theirs. To communicate across paradigms, participants have to understand each other on a deep level, even becoming entirely new persons. When debates happen, they must happen on a principle of mutual respect and curiosity.

From this one can detect a bold argument for tolerance. Indeed, Feyerabend was heavily influenced by John Stuart Mill’s On Liberty. Maybe, in a world disillusioned with scientism and objective standards, the next cultural move is multilateral acceptance of, and tolerance for, each other’s ideas.

This has not been the result of postmodern revelations, though. The 2016 election featured the victory of one psychopath over another, from two camps utterly consumed with vitriol for each other. Between Bernie Sanders, Donald Trump, and Hillary Clinton, Americans drifted toward radicalization as the only establishment candidate seemed to offer the same noxious, warmongering mess of the previous few decades of administration. Politics has only polarized further since the inauguration. The alt-right, a nearly perfect symbol of cultural intolerance, is regularly in the mainstream news. Trump acolytes physically brawl with black-bloc Antifa in the same city that hosted the 1960s Free Speech Movement. It seems worst at universities: analytic feminist philosophers asked for the retraction of a controversial paper, seemingly without reading it, and professors have gotten involved in student disputes, at Berkeley and more recently at Evergreen. The names each side uses to attack the other (“fascist,” most prominently) — sometimes accurate, usually not — reveal a political divide in which groups increasingly refuse to argue their own side and prefer silencing their opposition.

There is no longer a tolerant left or a tolerant right in the mainstream. We are witnessing only shades of authoritarianism, eager to destroy each other. And what is obvious is that the theories and tools of the postmodernists (post-structuralism, social constructivism, deconstruction, critical theory, relativism) are as useful for reactionary praxis as for their usual role in left-wing circles. As Casey Williams wrote in the New York Times: “Trump’s playbook should be familiar to any student of critical theory and philosophy. It often feels like Trump has stolen our ideas and weaponized them.” The idea of the “post-truth” world originated in postmodern academia. It is the monster turning against Doctor Frankenstein.

Moral (cultural) relativism in particular promises only the rejection of our shared humanity. It paralyzes our judgment on female genital mutilation, flogging, stoning, human and animal sacrifice, honor killings, caste, and the underground sex trade. The afterbirth of Protagoras, cruelly resurrected once again, does not promise trials at Nuremberg, where the Allied powers appealed to something above and beyond written law to exact judgment on mass murderers. It does not promise justice for the ethnic cleansers of Srebrenica, as the United Nations is helpless to impose a tribunal from outside Bosnia-Herzegovina. Today, this moral pessimism laughs at the phrase “humanitarian crisis” and at Western efforts to change the material conditions of fleeing Iraqis, Afghans, Libyans, Syrians, Venezuelans, North Koreans…

In the absence of universal morality, and with the introduction of subjective reality, the vacuum will be filled with something much more awful. And we should be afraid of this, because tolerance has not emerged as a replacement. When Harry Potter first encounters Voldemort face-to-scalp, the Dark Lord tells the boy, “There is no good and evil. There is only power… and those too weak to seek it.” With the breakdown of concrete moral categories, Feyerabend’s motto — anything goes — is perverted. Voldemort has been compared to Plato’s archetype of the tyrant from the Republic: “It will commit any foul murder, and there is no food it refuses to eat. In a word, it omits no act of folly or shamelessness” … “he is purged of self-discipline and is filled with self-imposed madness.”

Voldemort is the Platonic appetite in the same way he is the psychoanalytic id. Freud’s das Es is able to admit of contradictions, to violate Aristotle’s fundamental laws of logic. It is so base, so removed from the ordinary world of reason, that it follows rules of its own we would find utterly abhorrent or impossible. But it is not difficult to imagine that the murder of evidence-based reasoning would result in Death Eater politics. The ego is our rational faculty, adapted to deal with reality; with the death of reason, all that remains is vicious criticism and unfettered libertinism.

Plato predicts Voldemort with the image of the tyrant, and also with one of his primary interlocutors, Thrasymachus, when the sophist opens with “justice is nothing other than the advantage of the stronger.” The one thing Voldemort admires about The Boy Who Lived is his bravery, the one trait they share; it is missing in his Death Eaters. In the fourth novel the Dark Lord is cruel to his reunited followers for abandoning him and losing faith; their cowardice reveals the fundamental logic of his power: his disciples are not true devotees but opportunists, weak on their own merit and drawn like moths to every Avada Kedavra. Likewise, students flock to postmodern relativism to justify their own beliefs when the evidence is an obstacle.

Relativism gives us moral paralysis, letting in the darkness. Another possible move after relativism is supremacy. One look at Richard Spencer’s Twitter feed demonstrates the incorrigible tenet of the alt-right: the alleged incompatibility of cultures, ethnicities, and races, the claim that different groups of humans simply cannot get along together. The Final Solution is no longer about extermination but about segregated nationalism. Spencer’s audience is almost entirely men who loathe the current state of things, share far-reaching conspiracy theories, and despise globalism.

The left, too, creates conspiracies, imagining a bourgeois corporate conglomerate that enlists economists and brainwashes through history books to normalize capitalism; for this reason it despises globalism as well, saying it impoverishes other countries or destroys cultural autonomy. For the alt-right it is the Jews, and George Soros, who control us; for the burgeoning socialist left it is the elites, the one percent. Our minds are not free; fortunately, each side will happily supply Übermenschen, in the form of statesmen or critical theorists, to save us from our degeneracy or our false consciousness.

Without a commitment to reasoned debate, tribalism has continued the polarization and the erosion of humility. Each side also accepts science selectively, when it does not question its very justification. The privileged status that the “scientific method” maintains in polite society is denied when convenient; whether it is climate science, evolutionary psychology, sociology, genetics, biology, anatomy or, especially, economics, one side or the other rejects it outright, without studying the material enough to immerse itself in what could be promising knowledge (immersion that Feyerabend urged, and that the breakdown of rationality might even have encouraged). And ultimately, equal protection, one tenet of individualist thought that allows for multiplicity, is entirely rejected by both: we should be treated differently as humans, often because of the color of our skin.

Relativism and carelessness about standards and communication have given us supremacy and tribalism; they have divided rather than united. Voldemort’s chaotic violence is one possible outcome of rejecting reason as an institution, and it beckons to either political alliance. Are there any examples in Harry Potter of the alternative, Feyerabendian tolerance? Not quite. However, Hermione Granger serves as the Dark Lord’s foil, and gives us a model of reason that is not as archaic as the enemies of rationality would like to suggest. In Against Method (1975), Feyerabend compares different ways rationality has been interpreted alongside practice: an idealist way, in which reason “completely governs” research, and a naturalist way, in which reason is “completely determined by” research. Taking elements of each, he arrives at an intersection in which each can change the other, both “parts of a single dialectical process.”

“The suggestion can be illustrated by the relation between a map and the adventures of a person using it or by the relation between an artisan and his instruments. Originally maps were constructed as images of and guides to reality and so, presumably, was reason. But maps, like reason, contain idealizations (Hecataeus of Miletus, for example, imposed the general outlines of Anaximander’s cosmology on his account of the occupied world and represented continents by geometrical figures). The wanderer uses the map to find his way but he also corrects it as he proceeds, removing old idealizations and introducing new ones. Using the map no matter what will soon get him into trouble. But it is better to have maps than to proceed without them. In the same way, the example says, reason without the guidance of a practice will lead us astray while a practice is vastly improved by the addition of reason.” (p. 233)

Christopher Hitchens pointed out that Granger sounds like Bertrand Russell at times, as in this quote about the Resurrection Stone: “You can claim that anything is real if the only basis for believing in it is that nobody has proven it doesn’t exist.” Granger is often the embodiment of anemic analytic philosophy, the institution of order, a disciple of the Ministry of Magic. However, though initially law-abiding, she quickly learns with Potter and Weasley the pleasures of rule-breaking. From the first book onward she is constantly at odds with the de facto norms of the school, becoming more rebellious as time goes on. It is her levelheaded foundation, combined with her ability to transgress rules, that gives her an astute semi-deontological, semi-utilitarian calculus capable of saving the lives of her friends from the dark arts, and of helping to defeat the tyranny of Voldemort foretold by Socrates.

Granger presents a model of reason like Feyerabend’s map analogy. Pure reason gives us an outline of how to think about things, but it is not a static or complete blueprint; it must be fleshed out with experience, risk-taking, discovery, failure, loss, trauma, pleasure, offense, criticism, and occasional transgressions past the foreseeable limits. Adding these to our heuristics gives us a more diverse account of thinking about, and moving around in, the world.
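Read programmatically, the map analogy resembles what machine-learning people would call an online update loop: act on the current model, observe where it fails, correct it, and repeat. Here is a minimal sketch of that reading in Python; the terrain values, learning rate, and update rule are all invented for illustration, not anything Feyerabend himself specified:

    # A toy reading of Feyerabend's map analogy: the wanderer acts on the
    # map, observes where it misleads, and corrects it as he proceeds.
    # All numbers and the update rule are hypothetical illustrations.

    true_terrain = [0.0, 1.0, 3.0, 6.0, 10.0]   # the world as it actually is
    map_estimate = [0.0, 0.5, 1.0, 1.5, 2.0]    # an idealized "map" (pure reason)

    LEARNING_RATE = 0.5  # how strongly experience corrects the map

    for journey in range(20):                         # repeated wanderings
        for i, actual in enumerate(true_terrain):
            error = actual - map_estimate[i]          # where the map fails
            map_estimate[i] += LEARNING_RATE * error  # correct the idealization

    print([round(x, 2) for x in map_estimate])
    # The corrected map converges on the terrain, but only because the
    # wanderer started with a map at all: reason guides, practice corrects.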

When reason is increasingly seen as patriarchal, Western, and imperialist, the only thing consistently offered as a replacement is something like lived experience. Some form of this idea is at least a century old, going back to Husserl, and still modest by reason’s Greco-Roman standards. Yet lived experience has always been pivotal to reason; we only need to adjust our popular model. And we can see that we need not reject either one entirely. Another critique of reason says it is foolhardy, limiting, antiquated; this is a perversion of its abilities, and serves to justify the first criticism. We can see that there is room within reason for other pursuits and virtues, picked up along the way.

The emphasis on lived experience, which comes predominantly from the political left, is also antithetical to the cause of “social progress.” Those sympathetic to social theory, particularly the cultural leakage of the strong programme, are constantly torn between claiming (a) that science is irrational, and can thus be countered by lived experience (or whatnot), or (b) that science may be rational but reason itself is a tool of patriarchy and white supremacy and cannot be universal. (If you haven’t seen either of these claims very frequently, and think them a strawman, you have not been following university protests and editorials. Or radical Twitter: ex., ex., ex., ex.) Of course, as in Freud, this is an example of kettle logic: the signal of a very strong resistance. We see, though, that we need neither accept nor deny these claims and lose anything. Reason need not be stagnant nor all-pervasive, and indeed we’ve been critiquing its limits since 1781.

Outright denying the process of science — whether the model is conjectures and refutations or something less stale — ignores that there is no single uniform body of science. Denial also dismisses our most powerful tool for making difficult empirical decisions. Michael Brown’s death was instantly a political affair, with implications for broader social life, and the event has completely changed the face of American social issues. The first autopsy report, from St. Louis County, indicated that Brown was shot at close range in the hand during an encounter with Officer Darren Wilson. The second, independent report commissioned by the family concluded that the first shot had not in fact been at close range. After the two reports disagreed, the Department of Justice released its final investigation report, which determined that material in the hand wound was consistent with gun residue from an up-close encounter.

Prior to that report, the best evidence available about what happened in Ferguson, Missouri, on August 9, 2014, was the ground footage taken after the shooting and the testimony of the officer and of Ferguson residents at the scene. There are two ways to approach the incident: reason or lived experience. The latter route leads to ambiguities. Brown’s friend Dorian Johnson and another witness reported that Officer Wilson fired his weapon first at range, under no threat, then pursued Brown out of his vehicle, until Brown turned with his hands in the air to surrender. However, before the St. Louis grand jury, half a dozen (African-American) eyewitnesses corroborated Wilson’s account: that Brown did not have his hands raised and was moving toward Wilson. In which direction does “lived experience” tell us to go, then? A new moral maxim — the duty to believe people — will lead to no non-arbitrary conclusion. (And a duty to “always believe x,” where x is a closed group, e.g. victims, puts the cart before the horse.) It appears that, in a case like this, treating evidence as objective is the only solution.

Introducing ad hoc hypotheses (e.g., that the Justice Department and the county examiner are corrupt) shifts the approach into one that uses induction, and leaves lived experience behind (it also ignores how forensic anthropology is actually done). This is the introduction of, indeed, scientific standards. (By looking at incentives for lying, it might also employ findings from public choice theory, psychology, behavioral economics, etc.) So the personal-experience method creates unresolvable ambiguities, and presumably must eventually grant some allowance to scientific procedure.
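For the curious, the “objective evidence” route can be given a toy formal shape. The sketch below applies Bayes’ rule to two rival accounts of an incident; every prior and likelihood is a number I have made up for illustration, and the point is only that independent pieces of evidence move the posterior by a common rule, rather than by whose testimony one antecedently prefers:

    # Toy Bayesian update over two rival accounts of an incident.
    # All priors and likelihoods are invented for illustration only.

    def update(prior_a, likelihood_a, likelihood_b):
        """Posterior probability of account A after one piece of evidence."""
        joint_a = prior_a * likelihood_a
        joint_b = (1 - prior_a) * likelihood_b
        return joint_a / (joint_a + joint_b)

    # Start agnostic between account A and account B.
    p_a = 0.5

    # Each pair: P(evidence | A), P(evidence | B) -- hypothetical values.
    evidence = [
        (0.7, 0.4),  # several eyewitnesses corroborate A
        (0.6, 0.5),  # physical trace weakly favors A
        (0.8, 0.3),  # forensic report strongly favors A
    ]

    for like_a, like_b in evidence:
        p_a = update(p_a, like_a, like_b)
        print(round(p_a, 3))
    # The same rule applies no matter which witnesses one is inclined to
    # believe, and the order of the evidence does not change the final
    # posterior.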

If we don’t posit a baseline rationality — Hermione Granger pre-Hogwarts — our ability to critique things at all disappears. Utterly rejecting science and reason, denying objective analysis on the presumption of overriding biases, breaking down naïve universalism into naïve relativism — these are paths to paralysis on their own. More than that, they are hysterical symptoms, because they often create problems out of thin air. Recently, a philosopher and a mathematician submitted a hoax paper, Sokal-style, to a peer-reviewed gender studies journal in an attempt to demonstrate what they see as a problem “at the heart of academic fields like gender studies.” The idea was to write a nonsensical, postmodernish essay; if the journal accepted it, that would indicate the field is intellectually bankrupt. Andrew Smart at Psychology Today instead wrote of the prank: “In many ways this academic hoax validates many of postmodernism’s main arguments.” And although Smart makes some informed points about problems in scientific rigor as a whole, he doesn’t hint at what the validation of postmodernism would entail: should we abandon standards in journalism and scholarly integrity? Is the whole process of peer review functionally untenable? Should we start embracing papers written without any intention of making sense, looking for knowledge concealed below the surface of jargon? The paper, “The conceptual penis,” doesn’t necessarily condemn the whole of gender studies; but, against Smart’s reasoning, it does show that counterintuitive or highly heterodox theory is considered perfectly average there.

There were other attacks on the hoax, from Slate, Salon, and elsewhere. The criticisms, often valid for the particular essay, typically didn’t move the conversation far enough; there is much more to this discussion. A 2006 paper from the International Journal of Evidence Based Healthcare, “Deconstructing the evidence-based discourse in health sciences,” called the use of scientific evidence “fascist”; in the abstract the authors state their allegiance to the work of Deleuze and Guattari. Real Peer Review, a Twitter account that collects abstracts from scholarly articles, regularly features essays from departments of women’s and gender studies, including a recent one from a Ph.D. student wherein the author identifies as a hippopotamus. Sure, the recent hoax paper doesn’t really say anything, but it intensifies this much-needed debate. It brings out these two currents — reason and the rejection of reason — and demands a solution. And we know that lived experience will often be inconclusive.

Opening up lines of communication is one solution. A valid complaint is that gender studies seems too insulated, in a way that chemistry, for instance, is not. Critiquing a whole field does ask us to genuinely immerse ourselves in it first, and this is a step toward tolerance: a step past the death of reason and the denial of science, a step that requires opening the bubble.

The modern infatuation with human biases, like Feyerabend’s epistemological anarchism, upsets our faith in prevailing theories and in the idea that our policies and opinions should be guided by the latest discoveries from an anonymous laboratory. Putting politics first and assuming subjectivity is all-encompassing, we move past objective measures for comparing belief systems and theories. But isn’t the whole operation of modern science designed to work within our means? Kant’s system set limits on human rationality, and most science proceeds with an acceptance of fallibility. As Harvard cognitive scientist Steven Pinker says, “to understand the world, we must cultivate work-arounds for our cognitive limitations, including skepticism, open debate, formal precision, and empirical tests, often requiring feats of ingenuity.”

Pinker goes so far as to advocate scientism. Others need not; but we must understand an academic field before utterly rejecting it. We must think we can understand each other, and live with each other. We must think there is a baseline framework that allows permanent cross-cultural correspondence — a shared form of life by which a Ukrainian can interpret a Russian, and a Cuban an American. The rejection of the commensurability of Homo sapiens, championed by people like Richard Spencer as well as by some in identity politics, is a path to segregation and supremacy. We must reject Gorgias’ nihilism about communication, and the Presocratic relativism that traps our moral judgments in inert subjectivity. From one Weltanschauung to the next, our common humanity — which endures across class, ethnicity, sex, and gender — allows open debate across paradigms.

In the face of relativism, there is room for a nuanced middle ground between Pinker’s scientism and the rising anti-science, anti-reason philosophy; Paul Feyerabend has sketched out a basic blueprint. Rather than condemning reason as a Hellenic germ of Western cultural supremacy, we need only adjust the theoretical model to incorporate the “new America of knowledge” into our critical faculty. It is the raison d’être of philosophers to present complicated things in a more digestible form; to “put everything before us,” as Wittgenstein says. Hopefully, people can then reach their own conclusions, and embrace the communal human spirit as they do.

However, this may not be so convincing. It might be true that we have a competition of cosmologies: one that believes in reason and objectivity, and one that thinks reason is callow and all things are subjective. These two perspectives may well be incommensurable. If I try to defend reason, I must invariably appeal to reasons, and thus argue circularly. If I claim that “everything is subjective,” I make a universal statement and simultaneously contradict myself. Between begging the question and contradicting oneself, there is little indication of where to go. Perhaps we simply have to look at history, note the results of either course where it has been applied, and take that as a rhetorical indication of which path to choose.

A Short Note on “Net Neutrality” Regulation

Rick Weber has a good note lashing out against net neutrality regulation. The crux of his argument is that enforced net neutrality imposes serious costs on consumers, in the form of slower delivery of the content they actually want. But even if we set his argument aside, what if regulation isn’t necessary to preserve the benefits of net neutrality in the first place? (Indeed, net neutrality as proponents imagine it never really existed to begin with, and the issue has less to do with fast lanes than with the fact that content providers must go through a handful of ISPs.)

In fact, there is evidence that the “fast lane” model that net neutrality advocates imagine would happen in the absence of regulatory intervention is not actually profitable for ISPs to pursue, and has failed in the past. As Timothy Lee wrote for the Cato Institute back in 2008:

The fundamental difficulty with the “fast lane” strategy is that a network owner pursuing such a strategy would be effectively foregoing the enormous value of the unfiltered content and applications that come “for free” with unfiltered Internet access. The unfiltered internet already offers a breathtaking variety of innovative content and applications, and there is every reason to expect things to get even better as the available bandwidth continues to increase. Those ISPs that continue to provide their users with faster, unfiltered access to the Internet will be able to offer all of this content to their customers, enhancing the value of their pipe at no additional cost to themselves.

In contrast, ISPs that chose not to upgrade their customers’ Internet access but instead devote more bandwidth to a proprietary “walled garden” of affiliated content and applications will have to actively recruit each application or content provider that participates in the “fast lane” program. In fact, this is precisely the strategy that AOL undertook in the 1990s. AOL was initially a proprietary online service, charged by the hour, that allowed its users to access AOL-affiliated online content. Over time, AOL gradually made it easier for customers to access content on the Internet so that, by the end of the 1990s, it was viewed as an Internet Service Provider that happened to offer some proprietary applications and content as well. The fundamental problem requiring AOL to change was that content available on the Internet grew so rapidly that AOL (and other proprietary services like Compuserve) couldn’t keep up. AOL finally threw in the towel in 2006, announcing that the proprietary services that had once formed the core of its online offerings would become just another ad-supported website. A “walled garden/slow lane” strategy has already proven unprofitable in the marketplace. Regulations prohibiting such a business model would be surplusage.
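Lee’s economic point can be restated as a toy model: the open web compounds in value on its own, while a walled garden grows only as fast as the ISP can sign deals. A rough Python sketch, with every parameter invented purely for illustration:

    # Toy comparison of an ISP's two strategies, following Lee's argument.
    # All parameters are invented; this is an illustration, not an estimate.

    OPEN_CONTENT_GROWTH = 1.25   # the unfiltered web compounds on its own
    RECRUITED_PER_YEAR = 50      # deals a walled-garden ISP signs annually
    VALUE_PER_UNIT = 1.0         # subscriber value per unit of content

    open_content, walled_content = 1000.0, 1000.0

    for year in range(1, 11):
        open_content *= OPEN_CONTENT_GROWTH    # free-rides on the whole web
        walled_content += RECRUITED_PER_YEAR   # must recruit each provider
        print(year,
              round(open_content * VALUE_PER_UNIT),
              round(walled_content * VALUE_PER_UNIT))
    # Open access compounds at no cost to the ISP; the walled garden grows
    # linearly and bears the recruitment costs -- AOL's problem in the 1990s.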

It looks like Title II-style regulation might be a solution in search of a problem. Add to that the potential for ISPs and large companies to lobby regulators into erecting other barriers to entry against new competitors (as happened with telecommunications companies under Title II and with railroad companies under the Interstate Commerce Commission), plus the drawbacks of pure net neutrality that Rick pointed out, and it looks like a really bad policy indeed.

Vincent Geloso Interviewed for his Work on the War on Drugs

Regular readers of NOL know that fellow notewriter Vincent Geloso has done a lot of great work on the war on drugs. Dr. Geloso was recently on Students For Liberty’s podcast to discuss a paper he recently co-authored compiling data on the drug war’s effects on increased security costs, which he previewed a few months ago on NOL. The wide-ranging discussion covers his findings, the secondary economic costs of the war on drugs, the psychology of policing under the drug war, and how the drug war compares to Prohibition. Check out the discussion.

P.S. If you’re not already listening to SFL On Air, you should, and not just because I’m in charge of marketing for it.

What is the optimal investment in quantitative skills?

As I plan out my summer, I am debating how to allocate my time among skill investments. The general advice I have gotten is to build up my quantitative skills and pick up as much coding as possible. However, I am skeptical that I should invest too heavily in quantitative skills. For starters, there are diminishing returns, as the sketch below illustrates.
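Suppose, purely as an illustrative assumption, that quantitative skill grows with the square root of hours invested. The marginal value of later hours then collapses quickly:

    import math

    # Hypothetical concave returns to hours of quantitative training.
    def skill(hours):
        return math.sqrt(hours)

    for h in [10, 50, 100, 200, 400]:
        marginal = skill(h) - skill(h - 1)   # value of the h-th hour
        print(h, round(skill(h), 2), round(marginal, 4))
    # The 400th hour is worth a small fraction of the 10th: past some
    # point, time is better spent on skills with a steeper payoff curve.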

More importantly, though, artificial intelligence and computing power advance every day. When my older professors were trained, they had to use IBM punch cards to run simple regressions. Today my phone has several times that computing power, to say nothing of my PC. I would not be surprised if performing quantitative analysis were taken over entirely by AI within a decade or two. Even if it isn’t, the work will surely become easier and require minimal knowledge of what is happening under the hood. In that case I should invest more heavily in skills that cannot be done by AI.

I am thinking, for example, of research design or substantive knowledge of particular research areas. AI can beat humans at chess, but I can’t think of one that has written a half-decent history text.

Mind you, I cannot abandon learning a base level of quantitative knowledge. AI may take over in the next decade, but I will be on the job market and seeking tenure before then (hopefully!).