Worth a gander

  1. good update on the mayhem in the Middle East
  2. as good as that update is, though: Iraq, Saudi Arabia to reopen border crossings after 27 years
  3. great read on Russia’s Far East and Russia’s travel writing genre
  4. in Russia, Lutheranism (Protestantism) is considered a “traditional” religion (h/t NEO)
  5. how social is reason?

Make neo-Nazis flop off Broadway: public choice and Tina Fey’s sheetcaking


A week ago a white supremacist rally in Charlottesville protesting the removal of Confederate memorial statues turned fatally violent. Other protests were due to take place this weekend in multiple U.S. cities, including New York (now postponed). How should citizens and public authorities deal with this upsurge in violent neo-Nazi protest? I am with Tina Fey on this one: don't show up, have some cake, and encourage the NYPD to prevent violence.

Some on the left have tried, opportunistically and mistakenly, to associate Virginia school public choice scholarship with the far right. This is a sadly missed opportunity, because James Buchanan's theory of club goods helps explain how far-right street protests emerge and suggests how authorities might best subdue them. I draw on John Meadowcroft's and Elizabeth Morrow's analysis of the far-right English Defence League (EDL).


Lunchtime Links

  1. violence among foragers [pdf]
  2. building legal order in ancient Athens [pdf]
  3. why Congo persists [pdf]
  4. toward an old new paradigm in American international relations [pdf]

Worth a gander

  1. Zero hour for Generation X
  2. Confederate flags and Nazi swastikas together? That’s new.
  3. America at the end of all hypotheticals
  4. What’s left of libertarianism?
  5. Factual free market fairness
  6. Thinking about costs and benefits of immigration

Is Socialism Really Revolutionary?

A central feature of Karl Marx's thought is its teleological character: the world moves inexorably toward communism. It is not a question of choices. It is not a question of individual decisions. Communism is simply the direction in which the world is heading. Capitalism will collapse not because of some external force, but because of its own internal contradictions (centrally, the exploitation of the workers).

I don't know exactly what history classes are like in other countries, but throughout basically my entire academic trajectory I was bombarded with some version of Marxism. Particularly as far as my country was concerned, the question was not whether a socialist revolution would happen, but why it was taking so long! Looking at events in the past, the reading was as follows: the bourgeoisie overthrew the Old Regime in the French Revolution. At that time the bourgeoisie were revolutionaries (and therefore left-wing). However, having overthrown the monarchy and established a constitutional government, the bourgeoisie became advocates of the new order (and therefore reactionary, or right-wing). The socialists became the new revolutionaries, the new left, the new radicals.

This way of seeing history has a Hegelian background: there are no absolutes. History moves through a process of thesis, antithesis, and synthesis. History’s god is learning to be a god. I’ve written earlier here about how this kind of relativistic view does not stand on its own terms. Now I would like to say that this way of looking at history can be intellectually dishonest.

According to the historical view I was taught, there is no absolute left or right. One political group is always to the left or to the right of another, depending on how revolutionary or reactionary it is. Thus, the bourgeois were revolutionaries at one time, but today they are no longer. But what happens when the socialists come to power? Do they not themselves become reactionary, defenders of the status quo? According to everything they taught me, no. The revolution is permanent. My assessment is that at this point they are partly right: the revolution must be permanent.

Socialists cannot run the risk of becoming exactly what they fought in the first place. In practice, however, this is not how it goes: the socialists occupy the offices of the state and begin to defend those positions more than anything else. That's what I see in my country today. In practice, it is impossible to be revolutionary all the time, just as it is impossible to be relativistic in a consistent way. I have not yet met a person who, looking at a red light, said "but to me it's green, and all these other cars are just a narrative of patriarchal society."

Politics, unfortunately, is for the most part simply a search for power. Even the most idealistic groups need power to put their agendas into practice. And experience shows that once installed in power, many idealistic groups become pragmatic.

Socialism is not revolutionary. It is only a reaction against the real revolution: the capitalism defended by classical liberalism. Classical liberalism says: men are all equal, private property is inviolable, exchanges can only occur voluntarily, and no one can be forced to work against their will. Marxism responds: men are not all equal (they are divided into classes), private property is relative (if it is in the interest of the collective, I can take what was once yours), and you will work for our cause whether you want to or not. In short, Marxism is a return to the Old Regime.

When Should Intellectuals Be Held Accountable for Popular Misrepresentations of Their Theories?

Often an academic will articulate some very nuanced theory or ideological belief which arises out of the specialized discourse, and specialized background knowledge, of her discipline. It is not too surprising that when her theory gets reprinted in a newspaper by a non-specialist journalist, taken up by a politician to support a political agenda, or talked about on the street by a layman who doesn't possess specialized knowledge, the intellectual's theory will be poorly understood, misrepresented, and possibly used for purposes that are not only unjustified but the exact opposite of her intentions.

This happens all the time in every discipline. Any physicist who reads a YouTube thread about the theory of relativity, economist who opens a newspaper, biologist who reads the comments section of a Facebook post on GMOs, psychologist who hears jokes about Freud, or philosopher who sees almost any Twitter post about any complex world-historical thinker knows what I'm talking about. Typically, it is assumed that the popularizer or layperson who misunderstands such complex, nuanced academic theories must always be answerable to their most intellectually responsible, academic articulation. It is usually assumed that an intellectual theorist should never be concerned that her theories are being misunderstood by popular culture, and certainly that she shouldn't change a theory just because it is being misunderstood.

For many disciplines in many contexts, this seems to be true. The theory of relativity shouldn't be changed just because most people do not possess the technical knowledge to understand it and popularizers often oversimplify it. Just because people do not understand that climate change means more than rising temperatures doesn't mean it is not true. The fact that some young earth creationist thinks that the existence of monkeys disproves evolution doesn't mean an evolutionary biologist should care.

Further, it's not just the natural sciences to which this applies, but also the social sciences. Just because methodological individualism is often misunderstood as atomistic, reductive ethical individualism doesn't mean economists should abandon it, any more than people's various misunderstandings of statistical methods mean scientists should abandon those methods. Likewise, the fact that rational choice theory is misunderstood as meaning people only care about money, or that Hayek's business cycle theory is misunderstood as meaning only central banks can cause recessions, or that the Keynesian multiplier is misunderstood as meaning that any destruction is desirable stimulus because it increases GDP all the same, does not mean the economists who use these theories should abandon them on the strength of non-substantive criticisms rooted in oversimplified or straw-man misunderstandings.

On the other hand, there are other times when popular misunderstandings of some academic writings do seem to matter. Not just in the sense that a layperson's failure to understand science leads them to do unhealthy things, and therefore the layperson should be educated on what scientific theories actually say, but in the sense that popular misunderstandings point to some deficiency in the theory itself that the theorist should correct.

To take an example from intellectual history (which I'm admittedly somewhat simplifying): early in his career John Dewey advocated quasi-Hegelian comparisons of society to a "social organism." For example, in an 1888 essay he defended democracy because it "approaches most nearly the ideal of all social organization; that in which the individual and society are organic to each other." Though Dewey never meant such metaphors to undermine individuality or imply some form of authoritarian collectivism, he did want to emphasize the extent to which individuality was constituted by collective identifications and social conditions, and to use that as a normative ideological justification for democratic forms of government.

By 1939, after the rise of Bolshevism, fascism, and various other Hegelian-influenced forms of illiberal, collectivist, authoritarian government, he walked back such metaphors, saying this:

My contribution to the first series of essays in Living Philosophies put forward the idea of faith in the possibilities of experience at the heart of my own philosophy. In the course of that contribution, I said, “Individuals will always be the center and the consummation of experience, but what the individual actually is in his life-experience depends upon the nature and movement of associated life.” I have not changed my faith in experience nor my belief that individuality is its center and consummation. But there has been a change in emphasis. I should now wish to emphasize more than I formerly did that individuals are the final decisive factors of the nature and movement of associated life.

[…] The fundamental challenge compels all who believe in liberty and democracy to rethink the whole question of the relation of individual choice, belief, and action to institutions, and to reflect on the kind of social changes that will make individuals actually the centers and the possessors of worthwhile experience. In rethinking this issue in light of the rise of totalitarian states, I am led to emphasize the idea that only the voluntary initiative and voluntary cooperation of individuals can produce social institutions that will protect the liberties necessary for achieving development of genuine individuality.

In other words, Dewey recognized that such a political theory could be easily misunderstood and misapplied for bad uses. His response was to change his emphasis, and his use of social metaphors, to be more individualistic since he realized that his previous thoughts could be so easily misused.

To put a term to it, there are certain philosophical beliefs and social theories which are popularly maladaptive: that is, regardless of how nuanced and justifiable they are in the specialized discourse of some intellectual theorist, they will very often be manipulated and misused in popular discourse for other, nefarious purposes.

To take another example, some "white nationalist" and "race realist" quasi-intellectuals make huge efforts to disassociate themselves from explicitly, violently racist white supremacists. They claim that they don't really hate non-whites or want to hurt them or deprive them of rights, just that they take pride in their "white" culture and believe in (pseudo-)scientific theories which purport to show that non-whites are intellectually inferior. It does not surprise most people that, in practice, the distinction between a "peaceful" race realist and a violently racist white supremacist is extremely thin, and most would rightly conclude that this means there is something wrong with race realism and race-based nationalist ideologies, no matter how much superficially respectable academic spin is put on them, because they are so easily popularly maladaptive.

The question I want to ask is: how can we more explicitly tell when theorists should be held accountable for their popularly maladaptive theories? When does public misinterpretation of a somewhat specialized theory point to something wrong with that theory? In other words, when is the likelihood of a belief's popular maladaptivity truth-relevant? Here are a few examples where it's a pretty gray area:

  1. It is commonly claimed by communitarian critics of liberalism that liberalism reduces to an atomistic individualism that robs humanity of all its desire for community and family and reduces people to selfish market actors (one of the original uses of the term "neoliberal"). Liberals, such as Hayek and Judith Shklar, typically respond by saying that liberal individualism, properly understood, fully allows individuals to make choices relevant to such communal considerations. Communitarians sometimes counter that liberalism is so often publicly misunderstood in just this way, and that this shows there is something wrong with liberal individualism.
  2. It is claimed by critics of postmodernism and forms of neo-pragmatism that they imply some problematic form of relativism which makes it impossible to rationally adjudicate knowledge-claims. Neo-pragmatists and postmodernists respond that this misunderstands their views: the idea that our understanding of truth and knowledge isn't algorithmically answerable to correspondence doesn't make it irrational; postmodernism is about skepticism towards meta-narratives, not skepticism towards all rational knowledge itself; and (as Richard Bernstein argued) these perspectives often render hardcore relativism as incoherent as hardcore objectivism. The critic sometimes responds by citing examples of laypeople and low-level academics using these ideas to defend absurd scientific paradigms and relativistic-sounding theories of morality or epistemology, and argues that this should make us skeptical of postmodernism or neo-pragmatism.
  3. Critics of Marxism and socialism often point out that Marxism and socialism have repeatedly transformed into forms of authoritarianism, as in the Soviet Union or North Korea. Marxists and socialists respond by saying that all these communist leaders misused Marxist doctrine, that Marx doesn't really imply anything that would lead of necessity to authoritarianism, and that socialism can work in a democratic, freer context. The critic (such as Don Lavoie) will point out that the institutional incentive structure of socialism necessarily requires a sort of militarism due to the economic incentives faced by socialist governments, regardless of the pure intentions of the socialist theorist; in other words, such critics claim that socialism is inherently popularly maladaptive due to the incentives it creates. The socialist still thinks this isn't the case and that, regardless, the fact that socialism has turned authoritarian in the past was because it was in the hands of irresponsible revolutionaries/popularizers, which isn't relevant to socialism's truth.
  4. Defenders of the traditional social teachings of Christianity with respect to homosexuality claim there is nothing inherently homophobic about the idea that homosexual acts are a sin. In the spirit of "Love the sinner, hate the sin," they claim that being gay isn't a sin but homosexual acts are. Christians should show love and compassion for gay people, they say, while still condemning their sexual behavior. Secular and progressive Christian critics respond by pointing out how, in practice, Christians do often act very awfully towards gay people. They point out that it is very difficult for most Christians who believe homosexual acts are sinful to separate the "sin" from the "sinner" in practice, regardless of the intellectually pure intentions of their preacher, and that such a theological belief is often used to justify homophobic cruelty. Since you will know a faith by its fruits (a pretty Christian way of saying that popular maladaptivity is truth-relevant), we should be skeptical of traditional teachings on homosexuality. The traditionalist remains unconvinced that this matters.

It is important to distinguish between two questions: whether these beliefs are popularly maladaptive empirically (or, perhaps, just very likely to be), and whether the possibility of their being popularly maladaptive is relevant to their truth. For example, a liberal could respond to her communitarian critic by pointing to empirical evidence that individuals engaging in market exchange in liberal societies aren't selfish and uncaring about their communities, undermining the claim that liberal individualism is popularly maladaptive in the first place. But that response is different from saying that popular misunderstanding of liberal individualism is simply irrelevant to whether it is true.

We should also distinguish the question of whether beliefs are likely to be maladaptive from the question of whether their maladaptivity is truth-relevant. For example, it is conceivable that a popularized atheism would be extremely nihilistic even if careful existentialist atheists want to save us from nihilism. An atheist could respond by saying that this appears unlikely, since most non-intellectual atheists aren't really nihilists (which answers the former question), or by saying that people's misunderstanding of the ethical implications of God's non-existence is not relevant to the question of whether God exists (which answers the latter). For now, I am only concerned with when maladaptivity is truth-relevant.

There are a couple of responses which seem initially plausible but are unconvincing. One potential response is that positive scientific theories (such as evolution and monetary economics) do not need to worry about whether they are likely to be popularly maladaptive, but normative moral or philosophical theories (such as liberal individualism or theological moral teachings) do.

However, this distinction is confounded by the fact that scientists often make normative claims based on their theories, and popular interpretation still seems irrelevant there. For instance, it's not clear that a monetary economist who makes normative policy conclusions based on her theories should care if the layman does not understand how, for example, a Taylor rule, a nominal income target, or free banking would work. Further, there are philosophical theories for which popular maladaptivity doesn't seem to matter: Kantians shouldn't really fret if an introductory student doesn't grasp Kant's argument for the synthetic a priori, and analytic philosophers shouldn't care if most people don't understand Quine's objections to the analytic/synthetic distinction.

I’m unsure exactly how to answer this question, but it seems like answering it would clear up a lot of confusion in many disagreements.

The Counterfactual and the Factual

Historians often appear skeptical of counterfactual arguments. E.H. Carr argued that "a historian should never deal in speculation about what did not happen" (Carr, 1961, 127). Michael Oakeshott described counterfactual reasoning as "a monstrous incursion of science into the world of history" (quoted in Ferguson, 1999). More recently, Eric Foner is reported to have found "counterfactuals absurd. A historian's job is not to speculate about alternative universes … It's to figure out what happened and why" (cited in Parry, 2016, here).

Such skepticism is striking to the modern economic historian, who since Robert Fogel's work on the impact of the railroad on American economic growth has been trained to think explicitly in terms of counterfactuals. Far from being the absurdity Foner suggests, counterfactuals represent the gold standard in economic history today. Why? Because they are the sine qua non of causal analysis. As David Hume noted, a counterfactual is exactly what we invoke whenever we use the word "cause": "an object, followed by another, … where, if the first object had not been, the second never had existed" (Hume, 1748, Part II).

Hume’s reasoning can best be understood in the context of a controlled experiment. Suppose a group of randomly selected patients are treated with a new drug while another randomly selected group are assigned a placebo. If the treatment and control groups were ex ante indistinguishable, then the difference between the outcomes for these two groups is the causal effect of the drug. The outcome for the control group provides the relevant counterfactual which enables us to assess the effectiveness of the drug.
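
To make the logic concrete, here is a minimal simulation sketch in Python (my own illustration, with invented numbers, not data from any actual trial): because assignment is random, the placebo group stands in for the counterfactual, and the difference in group means recovers the drug's causal effect.

```python
# Minimal RCT simulation (illustrative only; all numbers invented).
import random

random.seed(42)
TRUE_EFFECT = 2.0  # the causal effect of the drug we hope to recover

def outcome(treated: bool) -> float:
    baseline = random.gauss(10, 3)  # patient-to-patient variation
    return baseline + (TRUE_EFFECT if treated else 0.0)

n = 5_000
treatment = [outcome(True) for _ in range(n)]   # drug group
control = [outcome(False) for _ in range(n)]    # placebo group: the counterfactual

# Random assignment makes the groups ex ante indistinguishable, so the
# difference in mean outcomes estimates the causal effect of the drug.
estimate = sum(treatment) / n - sum(control) / n
print(f"estimated effect: {estimate:.2f} (true effect: {TRUE_EFFECT})")
```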

The modern revival of economic history rests largely on the skill with which economic historians have been able to use econometric tools to replicate this style of experimental design with observational data. Such techniques enable economic historians to assess counterfactuals such as: how much did slavery contribute to Africa's underdevelopment? What was the impact of the Peruvian Mita? What were the effects of the Dust Bowl?

The rejection of the counterfactual approach by historians such as Foner seems to run deep and constitutes a major divide between historians and economic historians; it is therefore well worth exploring its source.


To begin with, let's set aside some of the reasons why historians have dismissed counterfactuals in the past. We need not, for instance, pay too much attention to the attachment of Marxists (like Carr) and Hegelian idealists (like Oakeshott) to teleological history. Of course, if history represents the unfolding of a dialectical process, then events that did not occur cannot, by definition, constitute the subject of historical analysis. Crude Marxism (and Hegelianism) is, I hope, still out of favor. But another reason why historians are skeptical of the counterfactual seems better grounded: historians' attachment to the factual.

Consider Niall Ferguson's edited volume Virtual History, which provides an excellent defense of counterfactual history. The counterfactuals considered by Ferguson and co., however, are largely in military or diplomatic history: what would have happened had the Nazis invaded Britain? And so on.

These counterfactuals are a useful way to think through a question. But their power typically depends on reversing a single decision or event: suppose Hitler doesn't issue his halt order in May 1940, or Edward Grey decides not to defend Belgian neutrality; what then? To be plausible, everything else has to be held constant. This means that counterfactuals in diplomatic and military history shed light on the short-term consequences of particular events. But the ceteris paribus assumption becomes harder to maintain as we consider events further removed from the initial counterfactual intervention. Thus, we have a reasonable idea of what Nazi rule of Britain in 1940 might have looked like — with the SS hunting down Jews, liberals, and intellectuals and restoring Edward VIII to the throne. But once we consider the outcomes of a Nazi-ruled Britain into the 1950s and 1960s, we have much less guidance. Lacking any documentary evidence of the intentions of Britain's Nazi rulers in the post-war era, we are left in the realm of historical fiction like Robert Harris' Fatherland or CJ Sansom's Dominion; there are simply too many degrees of freedom to conduct historical analysis. Counterfactuals become problematic once we run out of facts to discipline our analysis.

This is in fact a valid reason for historians to be skeptical of counterfactuals. The actual historical record has to serve as a constant constraint on historical writing. This goes back to Leopold von Ranke, the scholar responsible for history's emergence as an academic discipline in the 19th century. Ranke and his followers insisted on rigorous documentation and established the idea that the craft of the historian lay in the discovery, assembly, and analysis of primary sources. Ranke urged historians to focus on what actually happened; simply put: the facts, ma'am, just the facts. Many criticisms have been levied at Ranke in the intervening 150 years, and to jaded post-modern eyes this approach no doubt appears hopelessly naïve. But we should not dismiss Ranke's strictures too quickly given what happens when historians abandon them (here and here). What is important here is that the same Rankean strictures that helped form history as an academic discipline also rule out speculating about things that didn't happen. They instill in historians a natural skepticism of counterfactual, alternative history.

Moreover, while military history lends itself naturally to counterfactual analysis, other areas of history, such as social or economic history, where change is typically more gradual, appear less suitable. After all, how is one to assess a counterfactual as complex as the fate of slavery in the US South in the absence of the Civil War?


These are questions which benefit from counterfactual reasoning but which, unlike those of diplomatic, political, or military history, often require training in the social sciences to answer. For example, take a question that is of interest to historians of capitalism: would slavery have disappeared quickly without the Civil War?

From the 1950s to the 1970s, cliometric historians used economic theory to try to answer this. They employed economic models to assess the profitability of slavery and to infer the expectations of slave owners in the South (here). The main finding was that, contrary to the suppositions of historians (who at the time were often sympathetic to the southern cause), slavery was extremely profitable in 1860 and slaveholders foresaw the institution lasting indefinitely. In this case, counterfactual reasoning overturned the previous historical orthodoxy.

The economic importance of slavery to the American economy in the early nineteenth century is also a counterfactual question: implicitly, it asks what GDP would have been in the absence of slave-produced cotton. Here it is not only economic historians who are making counterfactual arguments. Foner championed Ed Baptist's book The Half Has Never Been Told. But in it, Baptist argued that almost 50% of GDP in 1836 was due to slavery, itself a counterfactual argument: he is arguing that, in the absence of slavery, the American economy would have been roughly half the size it was. This claim is certainly false, based as it is on double-counting. But the problem with Baptist's argument is not that he made a counterfactual claim; it is that he conducted the counterfactual analysis ineptly and that his estimates are riddled with errors (see here and here).
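
To see the double-counting problem in miniature, consider a toy example (my numbers, purely illustrative): adding up the gross value of cotton plus everything the cotton subsequently passes through counts the same dollars several times over, whereas the contribution to GDP is the sum of each stage's value added.

```python
# Toy illustration of double-counting (all figures invented).
raw_cotton = 100      # $m of cotton at the farm gate
shipped_cotton = 110  # $m: the same cotton plus $10m of freight and insurance
cloth = 160           # $m of textiles woven from that same shipped cotton

# Summing gross values counts the cotton three times over:
gross_sum = raw_cotton + shipped_cotton + cloth  # = 370

# GDP counts each stage's value added exactly once; the sum telescopes
# to the value of the final output:
value_added = (raw_cotton
               + (shipped_cotton - raw_cotton)   # freight's contribution
               + (cloth - shipped_cotton))       # weaving's contribution
print(gross_sum, value_added)  # 370 vs. 160: same economy, very different totals
```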

All of this sheds light on why counterfactuals are so often dismissed by historians. Part of it is an important and deeply shared sense that the counterfactual approach is ahistorical; part of it is unfamiliarity with the techniques involved. A natural lesson from the Baptist affair is that historians should become more familiar with the powerful tools social scientists have for assessing counterfactual questions. Taking counterfactuals seriously is a way to make progress on uncovering answers to important historical questions. But there is also a sense in which the historians' suspicion of counterfactuals may be justified.


There remain many questions where counterfactuals are not especially useful. The more complex the event, the harder it is to isolate the relevant counterfactual. Recently Bruno Gonçalves Rosi at Notes on Liberty suggested such a counterfactual: "no Protestant Reformation, no freedom of conscience as we know it today."

But in comparison to what we have considered thus far, this is a tricky counterfactual to assess. Suppose Bruno had said, "no Martin Luther, no freedom of conscience as we know it today." This would be easier to argue against, as one could simply note that absent Luther there would probably have been no Reformation starting in 1517, but at some point in the 1520s or 1530s it is likely that someone else would have taken Luther's place and overthrown the Catholic Church. But taking the entire Reformation as a single treatment and assessing its causal effect is much harder to do.

In particular, we have to assess two separate probabilities: (i) the probability of freedom of conscience emerging in Europe in the absence of the Reformation, P(FC | No Reformation); and (ii) the probability of freedom of conscience emerging in Europe in the presence of the Reformation, P(FC | Reformation). For Bruno's argument to hold we don't just need P(FC | R) > P(FC | NR), which is eminently plausible. We also need P(FC | NR) to equal zero. This seems implausible.
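
Stated compactly (my notation, abbreviating the probabilities defined above), the gap between what is plausible and what the claim needs is:

```latex
\[
\underbrace{P(\mathrm{FC} \mid \mathrm{R}) > P(\mathrm{FC} \mid \mathrm{NR})}_{\text{eminently plausible}}
\qquad \text{versus} \qquad
\underbrace{P(\mathrm{FC} \mid \mathrm{NR}) \approx 0}_{\text{what the claim actually requires}}
\]
```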

The problem becomes still more complex once one recognizes that the Protestant Reformation was itself the product of economic, social, political, and technological changes taking place in Europe. If our counterfactual analysis takes away the Reformation but leaves in place the factors that helped give rise to it (urbanization, the printing press, political fragmentation, corruption, etc.), then it is unclear what the counterfactual actually tells us. This problem can be illustrated by considering a causal diagram of the sort developed by Judea Pearl (2000).

Here we are interested in the effect of D (the Reformation) on Y (freedom of conscience). The problem is that if we observe a correlation between D and Y, we don’t know if it is causal. This is because of the presence of A, B, and F. Perhaps these can be controlled for. But there is also C. We can think of C as the printing press.

The printing press played a large role in the success of the Reformation (Rubin 2014). But it also stimulated urbanization and economic growth, and plausibly had an independent role in stimulating the developments that eventually gave rise to modern liberalism, the rule of law, and freedom of conscience. The endogeneity problem here seems intractable.
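
Since the diagram itself does not reproduce here, the structure described in the text can be sketched as a simple edge list. D, Y, and C are named above; reading A, B, and F as generic confounders of both D and Y is my gloss on the passage, not the original figure.

```python
# Edge list for the causal diagram described in the text (my reconstruction).
# D = Reformation, Y = freedom of conscience, C = printing press;
# A, B, F = other confounders (unnamed in the passage).
edges = [
    ("D", "Y"),              # the causal effect we want to estimate
    ("A", "D"), ("A", "Y"),  # confounders that can perhaps be controlled for
    ("B", "D"), ("B", "Y"),
    ("F", "D"), ("F", "Y"),
    ("C", "D"),              # the press helped the Reformation succeed
    ("C", "Y"),              # ...and independently fostered liberal outcomes
]

# Any variable with arrows into both D and Y opens a back-door path that
# biases a naive estimate of D -> Y.
back_doors = sorted({a for (a, b) in edges if b == "D" and (a, "Y") in edges})
print("back-door sources:", back_doors)  # ['A', 'B', 'C', 'F']
```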

Absent some way to control for all these potential confounders, we are unable to estimate the causal effects of the Protestant Reformation on something like freedom of conscience. In contrast to the purely economic questions considered above, we don’t have a good theoretical understanding of the emergence of religious freedom. Counterfactual reasoning only gets us so far.

Historians need economic history (and this means economic theory and econometrics). And economists need historians: to make sense of the complexity of the world, and for their expertise and skill in handling evidence.

BC’s weekend reads

  1. the Kurdish bourgeoisie is against separatism (kinda, sorta)
  2. Qatar waives visas for 80 nationalities amid Gulf boycott
  3. doesn’t Pakistan already suck? Isn’t that why this is happening in the first place?
  4. Similar moves are open to someone living in Pakistan. But those are different contexts than France or the US.
  5. I read this twice, very carefully, but am unconvinced (the use of stats is amateurish)
  6. The music was acid house, the drug: Ecstasy.
  7. The Plastic Pink Flamingo, in America [pdf]

Public choice and market failure: Jeffrey Friedman on Nancy MacLean


Jeffrey Friedman has a well-argued piece on interpreting public choice in the wake of Nancy MacLean's conspiratorial critique of one of its founding theorists, James Buchanan. While agreeing that MacLean is implausibly uncharitable in her interpretation of Buchanan, Friedman suggests that many of Buchanan's defenders are themselves in an untenable position. This is because public choice allows theorists to make uncharitable assumptions about political actors they have never met or observed. In this sense, MacLean is simply imputing her own preferred set of bad motives onto her political opponents. What is sauce for the goose is sauce for the gander.

I think Friedman’s arguments are a valid critique of the way that public choice is sometimes deployed in popular discourse. A lot of libertarian commentary assumes that those seeking political power are uniquely bad people, always having self-interest and self-aggrandisement as their true aim. Given that this anti-politics message is associated with getting worse political leaders who are becoming progressively less friendly to individual liberty, this approach to characterising politicians seems counterproductive. However, I don’t think Friedman’s position is such a good fit for Buchanan himself or most of those working in the scholarly public choice tradition.


The case of James Damore: defending the Google engineer who was fired over a ‘controversial memo’

Yesterday, news broke that Google had fired engineer James Damore for disseminating a memo which was meant as an invitation to open and honest discussion of Google's left bias. CEO Sundar Pichai said the engineer had violated Google's code of conduct by "advancing harmful gender stereotypes."

DasKapital calls Damore a ‘diversity hater’, and Metro News calls him ‘anti-women’. The Guardian calls the memo ‘sexist’ and shamelessly maintains that the memo argues for the “biological inferiority of his female colleagues, and how this made them less suitable for tech.”

Reading through the 10-page memo myself, I find the memo very reasonable and I stand behind it 100%. Like Damore, I believe that we should stop assuming that gender gaps imply sexism.

What is Damore arguing against?

Damore argues that “Google’s left bias has created a politically correct monoculture that maintains its hold by shaming dissenters into silence.”

The discriminatory practices that Google has instituted as a result of its left bias are:

  • Programs, mentoring, and classes only for people with a certain gender or race.
  • A high priority queue and special treatment for “diversity” candidates.
  • Hiring practices which can effectively lower the bar for “diversity” candidates by decreasing the false negative rate.
  • Reconsidering any set of people if it’s not “diverse” enough, but not showing that same scrutiny in the reverse direction (clear confirmation bias).
  • Setting org level OKRs for increased representation which can incentivize illegal discrimination.

In short, Damore argues that:

  • Google’s political bias has equated the freedom from offense with psychological safety, but shaming into silence is the antithesis of psychological safety.
  • This silencing has created an ideological echo chamber where some ideas are too sacred to be honestly discussed.
  • The lack of discussion fosters the most extreme and authoritarian elements of this ideology.
  • Extreme: all disparities in representation are due to oppression.
  • Authoritarian: we should discriminate to correct for this oppression.
  • Differences in distributions of traits between men and women may in part explain why we don’t have 50% representation of women in tech and leadership. Discrimination to reach equal representation is unfair, divisive, and bad for business.

What did Damore write about women?

Damore writes that women and men differ biologically, and that this results in differences in the personality traits they have, the preferences they hold, and the career choices they make.

On biological differences

Damore writes that men and women differ biologically in many ways and that not all differences are socially constructed:

I’m simply stating that the distribution of preferences and abilities of men and women differ in part due to biological causes and that these differences may explain why we don’t see equal representation of women in tech and leadership. Many of these differences are small and there’s significant overlap between men and women, so you can’t say anything about an individual given these population level distributions.

On personality differences

Damore writes that women, on average, have more openness directed towards feelings and aesthetics rather than ideas. This gives them a stronger interest in people rather than things, and explains in part why women relatively prefer jobs in social or artistic areas.

In addition, women express their extraversion as gregariousness and agreeableness rather than assertiveness.

Women, on average, also have higher anxiety and lower stress tolerance, which makes high-stress jobs less attractive to them.

Compared to men, women on average also look for more work-life balance.

Damore’s overall message

Damore explains his overall message as follows:

I hope it’s clear that I’m not saying that diversity is bad, that Google or society is 100% fair, that we shouldn’t try to correct for existing biases, or that minorities have the same experience of those in the majority. My larger point is that we have an intolerance for ideas and evidence that don’t fit a certain ideology. I’m also not saying that we should restrict people to certain gender roles; I’m advocating for quite the opposite: treat people as individuals, not as just another member of their group (tribalism).

Again, I think this is extremely reasonable. Unfortunately, in a world driven by irrational and zealous egalitarians, those who use logic and reason are easily labeled bigots.

Reference

Damore, J. (2017). Google's Ideological Echo Chamber.

Lunchtime Links

  1. My country, your colony | why the Holocaust in Europe?
  2. compliance and defiance to national integration in Africa [pdf] | on doing economic history
  3. ethnonationalism and nation-building in Siberia [pdf] | cosmopolitanism and nationalism
  4. political centralization and government accountability [pdf] | decentralization in military command
  5. unified China and divided Europe [pdf] | unilateralism is not isolationism

Why I'm No Longer a Christian: An Autobiographical/Philosophical/Therapeutic Explanation to Myself

Note: This was written about 18 months ago and posted on my now-defunct blog. I figure it might be worth reposting, mostly for posterity.

Throughout most of my youth, like the majority of middle-class Americans, I was raised as a Christian. As an argumentative and nerdy teenager, I dedicated much of my intellectual energy throughout adolescence to fervent apologetics for the Christian faith. In my eyes, I was trying to defend some deep, correspondent truth about the Lord. Today, I realize that was mostly youthful self-deception. I was trying to make beliefs I had committed to, epistemically and personally, because of my social situation work with my experiences of the modern world I was thrust into. There is nothing wrong with my attempts to find some reason to cling to my contingent religious beliefs, and there is nothing wrong with people who succeed in that endeavor, but it was wrong for me to think I was doing anything more than that—something like defending eternal truths I knew certainly through faith, and defending them dogmatically.

As the title of this post suggests, my quest to make my religious beliefs work was ultimately unsuccessful, or at least has been up to this point (I'm not arrogant enough to assume I've reached the end of my spiritual/religious journey). For a variety of personal and intellectual reasons, I have since become a sort of agnostic/atheist in the mold of Nietzsche, or more accurately James (not Dawkins). Most of the point of this post is to spell out, for my own therapeutic reasons, the philosophical and personal reasons why I hold the religious beliefs I now hold at the young age of twenty. This is ultimately a selfish post in that the target audience is myself, both present and future. Nonetheless, I hope you enjoy this autobiographical/religious/philosophical mind vomit. Please read it as you would a novel—albeit a poorly written one—and not as a philosophical or religious treatise.

Perhaps the best place to start is at the beginning of my childhood. But to understand that, I guess it's better to start with my mother's and father's upbringings. My mother came from an intensely religious Baptist household with a mother who, to be blunt, used religion as a manipulative tool to the point of abuse. If her children disobeyed her, it was obviously the influence of Satan. Of course, any popular culture throughout my mother's childhood was regarded as the work of Satan. I'll spare you the details, but the upshot is that this caused my mother some religious struggles that I inherited. My father came from a sincere though not devoted Catholic family. For much of my father's young adulthood and late adolescence, religion took a backseat. When my parents met, my father was an agnostic. He converted to Christianity by the time they married, but his religious beliefs were always more intimately personal and connected with his individual, private pursuit of happiness than anything else—a fact that has profoundly influenced the way I think about religion as a whole.

Though neither of my parents was at all interested in shoving religion down my throat, I kind of shoved it down my own throat as a child. I was surrounded by evangelical—for want of a better word—propaganda throughout my childhood, as we mostly attended non-denominational, moderately evangelical churches. My mother mostly sheltered me from my grandmother's abuses of religion, and she reacted to her mother's excesses appropriately by trying to center my religious upbringing on examples of God's love. However, her struggles with religion still had an impact on me, as she wavered between her adult commitment to an image of an all-loving deity and the remnants of her mother's conception of God as the angry, vengeful, jealous God of the Old Testament. She never really expressed the latter conception overtly, but it was implicit, just subtly enough for my young mind to notice, in the way some of the churches we chose in my youth expressed the Gospel.

At the age of seven, we moved from Michigan to the heart of the Bible Belt: Lynchburg, VA, home to one of the largest evangelical colleges in the world, Liberty University. Many of the churches we attended in Virginia had Liberty professors as youth leaders, ministers, and the like, so Jerry Falwell's Southern Baptist conception of God, which aligned closely with my grandmother's, was an influence on me through my early teenage years. Naturally, religion was closely linked with the political issues of the day. God blessed Bush's war in Iraq, homosexuality was an abhorrent sin, abortion was murder, and nonsense like that was fed to me. Of course, evolution was an atheist lie, and I remember watching creationist woo lectures with my mother while she was taking an online biology course from Liberty (she isn't a creationist, for the record, and her major was psychology, which Liberty taught well enough).

Though it certainly wasn't as extreme, some of the scenes in the documentary Jesus Camp are vaguely like experiences I had around this time. I was an odd kid who got interested in these serious "adult" issues at the age of nine, while most of my friends were watching cartoons, so I swallowed the stock evangelical stance hook, line, and sinker. But there was a tension between my mother's reservations about an angry God and her refusal (thanks to the influence of her mother) to push my religious beliefs in any direction, my father's general silence about religious issues unless the conversation got personal or political, and the strong evangelical rhetoric that the culture around me was spewing.

Around seventh grade, we moved from Virginia to another section of the Bible Belt: Tennessee. For my early high school years, my interest in evangelical apologetics mostly continued. However, religion mostly took a backseat to my political views. With the beginning of the recession, I became far more interested in economics: I wanted an explanation for why there were tents with homeless people living in them on that hill next to Lowe's. My intellectual journey in economics is a topic for another day, but generally, the political component of my religious views was slowly becoming less and less salient. I became more apathetic about social issues and more focused on economic issues.

It was around this time that I also became skeptical of the theologically justified nationalistic war-mongering fed to me by the Liberty crowd in Virginia. We lived near Ft. Campbell, and I had the displeasure of watching family after family of my friends ruined because their dad went to Afghanistan and didn't come back the same, or didn't come back at all. The whole idea of war just seemed cruel and almost unjustifiable to me; even though I would still spout the conservative line on it externally, I was internally torn. I would say I was beginning to subconsciously reject Christianity's own ontology of violence (apologies to Milbank).

It was also around this time, ninth grade, that I began reading the Old Testament more systematically. War is a common theme throughout the whole thing, and all I could think of as I read about the conquests of Israel, the slaying of the Amalekites, the book of Job, and the like were my personal experiences with my friends who were deeply affected by the wars in Iraq and Afghanistan. At this point, there was real skepticism and doubt about how God could justly wage war and commit cruel mass killings in Biblical times.

Around tenth grade, I became immaturely interested in philosophy. I'm ashamed to admit it today, but Ayn Rand was my gateway drug to what has been an obsession of mine ever since. I loved elements of Rand's ethics, her individualism, her intense humanism (which I still appreciate on some level), and of course her economics (which I also still appreciate, though it is oversimplified). But her polemics against religion and her simplistic epistemological opposition between faith and reason put me in an odd position. What was I, a committed evangelical Christian, to do with my affinities for Rand? Naturally, I should've turned to Aquinas, whose arguments for the existence of God and whose unification of faith and reason I can now appreciate. However, at the time, I instead had the misfortune of turning to Descartes, whose rationalism seemed to me to jibe with what I saw in Rand's epistemology (today, I definitely would not say that about Rand and Descartes at all, as Rand is far more Aristotelian; ah, the sophomoric follies of youth). Almost all of my subsequent intellectual journey with religion and philosophy could be considered a fairly radical reaction to the dogmas I had bought at this time.

I had fully bought perhaps the worst of Rand and Descartes. From Descartes I took his philosophical method and "proofs" of God, with all the messy metaphysical presumptions of mind-body dualism (though I might've implicitly made a greater separation between "mind" and "spirit" than Descartes would've), the correspondence theory of truth, the quest for certainty and spectator theory of knowledge, the ego theory of the self, and libertarian free will. From Rand I got the worst modernist presumptions she took from Descartes, what Bernstein calls the "Cartesian Anxiety" in her dogmatic demand for objectivism, as well as her idiosyncratic views on altruism (though I never really accepted ethical egoism, or believed she was really an ethical egoist). The flat, horribly written protagonists of Atlas Shrugged and The Fountainhead I took to be somehow emblematic of the Christian conception of God (don't ask me what in the hell I was thinking). Somehow (I couldn't explain it coherently then and cringe at it now), I had found a philosophical foundation of sorts for a capital-C Certain belief in the God of protestant Christianity and a watered-down Randian ethics. Around this time, I also took an AP European History class, and my studies of (and complete misreadings of) traditional Lutheranism and Catholicism reinforced my metaphysical libertarianism and Cartesian epistemological tendencies.

Around this time, my parents became dissatisfied with the aesthetics and teachings of evangelical non-denominational churches, and we started attending a run-of-the-mill, mainline PCUSA church my mom had discovered through charity programs she encountered as a social worker. I certainly didn't buy Presbyterianism's lingering affinities for the Calvinism inherited from Knox (such as the attempt to retain the language of predestination while affirming free will), but the far more politically moderate to apolitical sermons, as well as the focus on the God of the New Testament as opposed to my grandmother's God, were a refreshing change of pace from the evangelical dogmatism I had become accustomed to in Virginia. It fit my emerging Rand-influenced transition to political libertarianism well enough, and the old-church aesthetic and teaching methods fit well with the more philosophical outlook I had taken on religion.

In eleventh grade, we moved back to Michigan, to the absolute middle of nowhere. Virtually every single protestant church within a twenty-mile radius was either some sort of dogmatically evangelical non-denominational super-church where the populist, charismatic sermons were brought to you by Jesus, Inc., or an equally evangelical tiny rural church with a median age of 75 where the sermons were the somewhat incoherent and rabidly evangelical ramblings of an elderly white man. Our young, upper-middle-class family didn't fit into the former theologically or demographically, and certainly didn't fit into the latter theologically or aesthetically. After about a year of church-shopping, our family stopped going to church altogether.

Abstaining from church did not dull my religion at all. Sure, the ethical doubts I was having at the time and the epistemological doubts caused by my philosophical readings were working in the background, but in a sense, this was my most deeply religious time. I fished almost constantly all summer, since we lived on a river, and many of my thoughts while sitting with the line in the water revolved around religion or politics. When my thoughts turned religious, there was always a sense of romantic/transcendentalist (I was reading Thoreau, Emerson, and Whitman in school at the time) sublimity in nature that I could attribute to God. Fishing, romping around in the woods, hunting, and experiencing nature became the new church for me and a source of private enjoyment and self-creation (you can already see where my affinities for Rorty come from) in my late teens. Still, most of my intellectual energy was spent on political and economic interests, and by now I was a fully committed libertarian.

Subconsciously earlier in my teens, but very consciously by the time I moved to Michigan, I had begun to realize I was at the very least on the homosexual spectrum, quietly identifying as bisexual at the time. The homophobic religious rhetoric of other Christians got on my nerves, but in rural northern Michigan I was mostly insulated from it, and it never affected me too deeply. Since I assumed I was bi, it wasn't that huge a deal in terms of my identity even if homosexuality was a sin (which I doubted, though I couldn't explain why), so I never really thought too deeply about it. However, it did contribute further to my ethical doubts about Christianity: if God says homosexuality is a sin, and Christians are somehow justified in oppressing homosexuals, what does that say about God's cruelty? It became, very quietly, an anxiety akin to the anxieties about war I was having when I moved to Tennessee.

Though abstaining from church didn't cheapen my experience of religion, my exposure to my grandmother's angry God did. Up until that point, I had mostly been ignorant of her religious views because we lived so far away; but moving back to Michigan, as well as some health issues she had, thrust her religious fervor back into my—and my mother's—consciousness. The way she talked about religion and acted towards non-Christians reeked of the worst of I Samuel, Jonathan Edwards, John Calvin, and Jerry Falwell rolled into one. My skepticism about the potential cruelty of the Christian God, born of my experiences with war and homophobia, was really intensified by observing my maternal grandmother.

The year was 2013: I had just graduated from high school, just turned eighteen, and chosen my college. I had applied to a local state school as a backup (which I only considered because it came with a full-ride scholarship), to my father's alma mater, the University of Michigan, and to Hillsdale College. After the finances were taken care of (I'm fortunate enough to be a member of the upper-middle class), the real choice was between Michigan and Hillsdale. For better or for worse, I chose the latter.

My reasons for choosing Hillsdale were mostly based on misinformation about the college's mission. Sure, I knew it was overwhelmingly conservative and religious. But I thought there was far more of a libertarian bent to campus culture. The religious element was sold to me as completely consensual, not enforced by the college at all beyond a couple of vague comments about "Judeo-Christian values" in the mission statement. I wanted a small college full of intellectually impassioned students who were dedicated to, as the college mission statement said, "Pursuing Truth, Defending Liberty." The "defending liberty" part made me think the college was more libertarian, and the "pursuing truth" part made me assume it was as open-minded as a liberal arts education is supposed to be. I figured there'd probably be some issues around my budding homosexuality/bisexuality, but since it wasn't a huge deal at the time for me personally, and some students I'd talked to said it wasn't a big deal there, I thought I could handle it. Further, I suspected my major was going to be economics, and Hillsdale's economics department—housing Ludwig von Mises' library—is a dream come true (my opinion on this hasn't changed).

If I ever had problems understanding the concept of asymmetric information, the lies I was told as an incoming student to Hillsdale cleared them up. The Hillsdale I got was far more conservative than I could ever have imagined, and in a ridiculously dogmatic fashion. It quickly revealed itself to be not the shining example of classical liberal arts education I had hoped for, but little more than a propaganda mill for a particularly nasty brand of Straussian conservatism. The majority of the students were religious in the same sense as my grandmother. Though they would intellectually profess a different concept of God than my grandmother's simplistic, layman's Baptist understanding of God as an angry, jealous judge, the fruits of their faith showed little difference. My homosexual identity—by this point I'd abandoned the term "bisexual"—quickly became a focal point of my religious anxiety. Starting a few weeks into my freshman year, I began to fall into a deep depression, largely thanks to my treatment by these so-called "Christians," one that would cripple me for the next two years and whose after-shocks I am still dealing with as I write this.

Despite the personal issues I had with my peers at Hillsdale, the two years I spent there were hands-down the most intellectually exciting years of my life up to that point. My first semester, I took an Introduction to Philosophy class. My professor, James Stephens, turned out to be a former Princeton student who had Richard Rorty and Walter Kaufmann as mentors. His introductory class revolved first around ancient Greek philosophy, in particular Plato's Phaedo, then classical epistemology, particularly Descartes, Kant, and Hume, along with a lot of experimental philosophy readings from the likes of Stephen Stich and Joshua Knobe. The class primarily focused on issues in contemporary metaphysics which I had struggled with since I discovered Rand—like libertarian free will and theories of the self—as well as epistemological issues and metaphilosophical issues of method. Though only an intro class outside of my major, no class has changed my worldview quite as much as this one.

In addition to the in-class readings, I read philosophy prolifically and obsessively outside of class as a matter of personal interest. That semester I finished Stich's book The Fragmentation of Reason (which I wouldn't have understood without extensive talks with Dr. Stephens in office hours), worked through most of Kant's Critique of Pure Reason, basically re-learned Cartesianism, and read Hume's Treatise. By the end of the class, I had changed almost every element of my philosophical worldview. I went from a hardcore objectivist Cartesian to a fallibilist pragmatist (I had also read James, thanks to Stich's engagement with him), from a fire-breathing metaphysical libertarian to a squishy compatibilist, and from someone who had bought a simple referential view of language to a card-carrying Wittgensteinian (of the Investigations, that is).

Other classes I took that first semester also would have a large impact on me. In my "Western Heritage" class—Hillsdale's pretentious and propagandized term for what would usually be called something like "History of Western Thought: Ancient Times to 1600"—I essentially relearned all the theology I had poorly understood in my high school AP Euro class by reading church fathers and Catholic saints like Augustine, Tertullian, and, of course, Aquinas, as well as rereading the likes of Luther and Calvin. Additionally, I read Hayek's Constitution of Liberty in my first political economy class, which cemented my epistemological fallibilism and had the most profound intellectual influence on me of anything I have ever read (although I also read The Fatal Conceit for pleasure, which influenced me even more).

Early on that year, after reading Plato and Augustine, I became committed to some sort of Platonism, and for a second considered some sort of Eastern Orthodoxy. By this point, I was a political anarchist and saw the hierarchical, top-down control of Catholicism as too analogous to coercive statist bureaucracy. By contrast, the more communal structure of Orthodoxy, though still hierarchical, seemed more appealing. To paraphrase Richard Rorty on his own intellectual adolescence, I had desperately wanted to become one with God, a desperation I would later react to violently. I saw Plato's idea of the Forms and Augustine's incorporation of it into Christianity as a means to do that. But as I kept reading, particularly James, Hume, Kant, and Wittgenstein, the epistemological foundations of my Platonist metaphysical and theological stances crumbled. I became absolutely obsessed with the either-or propositions of the "Cartesian anxiety" and made a hobby of talking to my classmates in a Socratic fashion to show that they couldn't be epistemically Certain in the Cartesian sense, much to the chagrin of most of them. You could have played a drinking game of sorts during those conversations, taking a shot every time I said some variation of "How do you know that?", and probably given your child fetal alcohol syndrome, even if you weren't pregnant or were male.

In the second semester of my freshman year, my interests turned more explicitly to theological readings and topics. (Keep in mind, I was mostly focusing on economics and math in class; almost all of this was just stuff I did on the side. I didn't get out much in those days, largely due to the social anxiety caused by the homophobia of my classmates.) My fallibilist/pragmatist epistemic orientation, along with long conversations with a fellow heterodox Hillsdale student from an Evangelical background, got me very interested in "radical theology." That semester, John Caputo came to Hillsdale to discuss his book The Insistence of God. I attempted to read it at the time but was not well-versed enough in continental philosophy to really get what was going on in it. Nonetheless, my Jamesian orientation left me deeply fascinated by much of what Caputo was getting across.

My theological interests were twofold. The first was more of an epistemic question: how can we know God exists? My conclusion was that we can't, but that whether God exists or not is irrelevant—what matters is the impact belief in God has on our lives existentially and practically. This was the most I could glean from Caputo's premise "God doesn't exist, he insists" without understanding Derrida, Nietzsche, Hegel, and Foucault. I began calling myself terms like "agnostic Christian," "ignostic Christian," or "pragmatist Christian" to try to describe my religious views. This also led me to a thorough rejection of Biblical literalism and infallibility; I claimed the Bible was more a historical document on man's interaction with God, written from man's flawed perspective.

But now in the forefront were questions about Christianity's ethical orientation that had lingered at the back of my mind since my early teens: why did the Christian God seem so cruel to me? I had resolved most of this with my rejection of Biblical infallibility. Chances are, God didn't order the slaughter of the Amalekites, or Satan's torture of Job, or any of the other cruel acts in the Old Testament—the fallen humans who wrote the Bible misunderstood him. Chances are, most of the Old Testament laws on things like homosexuality were meant specifically for that historically contingent community and were not eternal moral laws, and the God of the New Testament, as revealed by Jesus, was the most accurate depiction of God in the Bible. Paul's prima facie screeds against homosexuality in the New Testament, when taken in context and hermeneutically analyzed, probably had nothing to do with homosexuality as we know it today (I found this sermon convincing on that note). God sent Jesus not as a substitute for punishment but to act as an exemplar of how to love and not be cruel to others. I could still defend the rationality of my religious faith on Jamesian grounds; I was quoting Varieties of Religious Experience and Pragmatism more than the Bible at that point. I also flirted with some more metaphysically robust theologies. Death of God theology seemed appealing based on the little I knew about Nietzsche, and process theology bore what struck me as a beautiful resemblance to Hayek's concept of spontaneous order. Even saying it now, much of that sounds convincing, and if I were to go back to Christianity, most of those beliefs would probably remain intact.

But still, there was this nagging doubt that the homophobic, anti-empathetic behavior of the Hillsdale "Christians" somehow revealed something rotten about Christianity as a whole. The fact that the church had committed so many atrocities in the past, from Constantine using it to justify war, to the Crusades, to the Spanish conquistadors, to the Salem witch trials, to the persecution of homosexuals and non-believers throughout history, still rubbed me the wrong way. Jesus' line about judging faith by its fruits became an incredibly important scripture for me, given my interest in William James. That scripture made me extremely skeptical of the argument that the actions of fallen humans do not reflect poorly on the Truth™ about the Christian God. What was the cash value of Christian belief if it so obviously seemed to lead to so much human cruelty, throughout history and towards me personally?

That summer and the next semester, two books, both written (coincidentally enough, as I had come across them independently) by my philosophy professor's mentors, once again revolutionized the way I looked at religion. The first was Richard Rorty's Philosophy and the Mirror of Nature; the second was Walter Kaufmann's Nietzsche: Philosopher, Psychologist, Antichrist, which I read in tandem with most of Nietzsche's best-known work (i.e., Beyond Good and Evil, Thus Spake Zarathustra, Genealogy of Morals, and, most relevant to this discussion, The Antichrist).

Rorty destroyed any last vestiges of the Cartesianism and Platonism I had clung to. His meta-philosophical critique of big-P Philosophy, which tries to stand as the ultimate judge of the knowledge claims of the various disciplines, completely floored me. His incorporation of Kuhnian philosophy of science and Gadamer's hermeneutics was highly relevant to my research interests in the methodology of economics. Most important for religion was his insistence (though more explicit in his later works, I noticed it fairly heavily in PMN) that we are only answerable to other humans. There is no world of the Forms to which we can appeal, no God to whom we are answerable, no metaphysical concepts we can rely on to call a statement true or false. The measure of a belief's truth is the extent to which it helps us cope with the world around us, the extent to which it helps us interact with our fellow human beings.

Nietzsche's concept of the Death of God haunted me, and now that I was beginning to read more continental philosophy, some of the concepts in Caputo that had flown over my head began to make sense. The Enlightenment project to ground knowledge had made God, at least for much of the intellectual class paying attention to the great philosophical debates, a forced option. No longer could we rely on the Big Other to ground all our values; we had to reevaluate our values and build a meaningful life for ourselves. Additionally, Nietzsche's two great criticisms of Christianity in The Antichrist stuck in my mind. The first, that Christianity inculcated a slave morality, a sort of ressentiment among the "lower" people, didn't quite ring true for me: Nietzsche's anti-egalitarian, and to be honest quite cruel, attitude seemed as bad as what I saw the Christians doing to me. But the second did stick: his view that Christianity's command to "store our treasures in heaven" took all the focus off this world, ignoring the pragmatic and practical results of our philosophical beliefs that had become so important to me thanks to Matthew 7:16 and William James, and focusing instead on our own selfish spiritual destiny. This criticism of Christianity's otherworldly focus on the afterlife rather than on the fruits of faith in this life posed a serious threat to my beliefs, and helped explain why the homophobic, anti-empathetic hatred I was experiencing from my classmates was causing me so much religious anxiety and cognitive dissonance.

(Note: Clearly, I'm violently oversimplifying and possibly misreading both Nietzsche and Rorty in the previous two paragraphs, but that's beside the point, as I'm more interested in what they made me think in the course of my intellectual development than in what they actually thought themselves.)

Still, through most of my sophomore year, I tried to resist atheism as best I could and cling to what I saw as salvageable in Christianity: the idea of universal Christian brotherhood and its potential to lead people to be kind to each other still seemed promising. Essentially, I wanted to salvage Jesus as a didactic exemplar of the moral values of empathy and kindness, if not in some metaphysical ideal of God, then at least in the narrative of Jesus's life and teaching. Ben Franklin's proto-pragmatic, yet still virtue-ethical, view of religion in his Autobiography lingered strongly in my mind during this phase. I still used the term "agnostic Christian" through most of that time and self-identified as a Christian, but in retrospect the term "Jesusist" probably better describes the way I was thinking.

I came to loathe (and still do) what Paul did to Christianity: turning Jesus' lessons into absolutist moral laws rather than parables on how to act more kindly toward others. See, for example, Paul's treatment of sexual ethics in 1 Corinthians. Paul represented the worst slave-morality tendencies Nietzsche ridiculed, taken to the extreme, and the way he acted in all his letters as if there were only one way—which happened to be what I saw as his very cruel way—to experience Jesus' truth in religious community vexed me. Additionally, I loathed Constantine for turning Christianity into a tool to justify governmental power and coercion, which it remained throughout the Holy Roman Empire and Enlightenment-era absolutism, and into the theocratic tendencies of modern social conservatism in America.

But the idea of an all-loving creator, if not a metaphysical guarantee of meaning and morality, sending his son/himself as an exemplar of what humanity can and should be was extremely attractive to me, and in many ways still is. I flirted with the Episcopalian and Unitarian Universalist churches, but something about their very limited concept of community rubbed me the wrong way (though I couldn't quite justify it or put my finger on it).

Clearly, my religious and philosophical orientation (not to mention my anarchist political convictions) put me at odds with Hillsdale orthodoxy. I started writing papers that were at times pretty critical of my professors' lectures (though I still managed to mostly get A's on them). These essays were particularly critical in my Constitution class (essentially a Jaffaite propaganda class) and my American Heritage class (essentially a history of American political thought, taught very well by a brilliant orthodox Catholic Hillsdale grad). I was writing editorials in the student paper subtly ridiculing Hillsdale's homophobia and xenophobia, and engaging in far too many Facebook debates on philosophy, politics, and religion that far too often got far too personal.

In addition, at the beginning of my sophomore year, I publicly came out as gay. With the Supreme Court decision coming up the following summer, Hillsdale's religiously inspired homophobia reached a fever pitch like never before. I could hardly go a day without hearing some homophobic slur or comment, and the newspaper was running articles—often written by professors—making flatly false claims about gay people (comparing homosexuality to incest, claiming that no society has ever had gay marriage, and the like). The fruit/cash value of Jesus' teachings was quite apparently not turning out to be the empathetic ethos I had hoped for; the rotten elements of the Old Testament God my grandmother emphasized, the Pauline perversions, and Constantine's statism were dominating the Christian ethos instead.

At the end of that academic year (culminating with this) I suffered a severe mental breakdown, largely due to Hillsdale's extreme homophobia. By the beginning of the next school year, I was completely dysfunctional academically, intellectually, and socially; I was apathetic about all the intellectual topics that had occupied my entire thinking life, completely jaded about the future, and overall extraordinarily depressed. I'll spare the dirty details, but by the end of the first month of my junior year, it became clear I could no longer go on at Hillsdale. I withdrew and transferred to the University of Michigan.

That pretty much takes me up to the present day. But coming out of that depression, I seriously picked back up the question of why Christianity—even the good I saw in Jesusism—no longer seemed true in the pragmatic sense. Why was this religion I had spent my whole life so committed to all of a sudden utterly lacking in cash value?

I found my answer in Rorty and Nietzsche one cold January day during a weekend trip to Ann Arbor with my boyfriend. I sat down in a wonderful artisan coffee shop set in a quaint little arcade tucked away in downtown Ann Arbor and re-read Rorty's Contingency, Irony, and Solidarity. Rorty's continued insistence that "cruelty is the worst thing you can do," even if he couldn't metaphysically or epistemically justify it, seemed to be a view I had held from the very beginning, when my doubts about the Christian faith started thanks to my experiences with the victims of war.

Now I can say that the reason I'm not a Christian—and the reason I think it would be a good idea if Christianity as a religion faded out as a public metanarrative (though not as the private source of joy and self-creation my dad exemplified)—is that Christianity rejects the idea that cruelty is the worst thing you can do. According to Christian orthodoxy (or, at least, the protestant sola fide sort I grew up with), you can be as outrageously, sadistically, egomaniacally cruel to another person as you want, and God will be perfectly fine with it if you believe in him. If Stalin were to "accept God into his heart"—whatever that means—his place in paradise for eternity would be assured, even with the blood of fifty million on his hands.

I have no problem with that per se; I agree with Nietzsche that retributive justice is little more than a thinly veiled excuse for revenge. Further, I agree with Aang from Avatar: The Last Airbender in saying "Revenge is like a two-headed rat viper: while you watch your enemy go down, you're being poisoned yourself." As an economist, I also can't help noticing that the whole idea of revenge embraces the sunk cost fallacy. I still regard radical forgiveness and grace as among the best lessons Christianity has to offer, even forgiveness for someone like Stalin.

What seems absurd is that while Stalin could conceivably get a pass, even the kindest, most genuinely empathetic, and outstanding human being will be eternally damned and punished by God simply for not believing. For the Christian, the worst thing you can do is not cruelty; the worst thing you can do is reject their final vocabulary. When coupled with Nietzsche's insight that Christianity is so focused on the afterlife that it ignores the pragmatic consequences of actions in this life, it is no wonder that Christianity has bred so much cruelty throughout history. Further, the idea that we are ultimately answerable to a metaphysical Big Other rather than to our fellow human beings (as Rorty would have it) seems to cheapen the importance of those human beings. The most important thing to Christians is God, not your fellow man.

Of course, the Christian apologist will remark that "True™" Christianity, properly understood, does not necessarily entail that conclusion. No true Scotsman aside, the point is well taken. Sure, the concept of Christian brotherhood teaches that since your fellow man is created in God's image, harming him is the same as harming God. Sure, Jesus does teach that the most important commandment is essentially in line with my anti-cruelty stance. Sure, different sects of Christianity have views of divinity more nuanced than the one I gave.

But, again, if we judge this faith by its fruits, if we empirically look at the cash value of this belief, if we look at the revealed preference of many if not most Christians, it aligns more with my characterization than I would like. Between the emphasis on the afterlife, the fundamentally anti-humanist (in a deep sense) ethical orientation, and the belief that cruelty is not the worst thing you can do, I see little cash value in Christianity and a whole lot of danger that it is apt—as it clearly has been, empirically—to be misused for sadistic purposes.

This is not to say Christianity is completely (pragmatically) false. I also agree with Rorty when he says the best way to reduce cruelty and advance human rights is through "sentimental education." The tale of Jesus, if understood the way we understand a wonderful work of literature—the way Rorty himself characterizes writers like Orwell—should live on. It may sound corny and blasphemous, but if "Christian" were simply the name of the Jesus "fandom," I'd definitely be a Christian. Nor do I think Christianity is something nobody should believe. The cash value of a belief depends on the myriad particular contingencies of an individual or social group, and those contingencies are not the same as mine. However, from my contingent position, I cannot in good faith have faith.

Perhaps it is a sad loss, perhaps it is a glorious intellectual and personal liberation, and perhaps it is something else. Only time will tell. Anyways, 6,325 words later I hope I have adequately explained to myself why I am not a Christian.

When to list working papers?

I spent the past weekend updating my CV and, in the process, spent more time than I should have looking at others' CVs for reference. The experience reminded me of two things: (1) I do not share others' infatuation with LaTeX, and (2) I despise how working papers are listed.

My primary concern with many CVs is that some people list working papers alongside peer-reviewed published papers. I cannot help but feel this is weaseling. It does not help when people also list "revise and resubmits" along with actual publications. An R&R is not a publication. By all means, it is a good sign that a paper will get published, but it is not a publication.

My second concern is that people list working papers but offer no link to a draft copy. In the absence of a readily accessible draft, how am I to know whether someone has a 'real' working paper or simply some regression results on a PowerPoint? I am especially irked when I contact an author asking for a draft of their working paper and am told that no such draft exists.

I'm still a graduate student, but if I may be humored, I think academia would benefit if it became the norm to list working papers (and R&Rs) in a separate section, and if uploading a draft to SSRN (or whatever your preferred repository is) were required.

Likewise, I think it best to list book reviews and other non-peer-reviewed materials separately. I was surprised the other day to find people who listed op-eds in local newspapers or blog posts under publications. Don't get me wrong – I think some blog posts (especially those on a certain site) are great reads! But peer-reviewed publications they are not.

Does this sound reasonable?

Mea Culpa: Israel and Palestine

So, I let myself be captured by Irfan's cultured, bright, well-spoken, and fact-studded critique. He is right, in the main. My short essay is loose on many facts. I did not know what I did not know. And where it's not completely wrong, it's often sloppy. So, for example, I shouldn't have said that Jews were not allowed on the mosque's esplanade. I should have said (and the damned thing is that I knew it) that they were not allowed to pray there – but then, what if a Jew takes a walk on the esplanade and prays inside his head, and what if, unbeknownst to him, his lips move a little? As they say in French: "Irfan m'a mené en bateau" ("Irfan took me for a ride"). At any rate, I will simply confess that nearly all my facts are wrong so I can recover my purpose, at last. Don't worry, I won't take much of your time. Here are a handful of real real facts and their obvious implications:

  • Palestinian Muslims (or a single one) assassinated two Israeli police officers a few weeks ago on the mosque's esplanade or near it.
  • The assassin or assassins used a gun or guns.
  • Israeli authorities – which exercise de facto control over the area – responded by setting up metal detectors at the entrances to the mosque's esplanade.
  • Metal detectors are useful to alert to the presence of most firearms and of some bombs.
  • Palestinian Muslims protested this measure in several ways, including with riots.
  • The people whose safety could have been improved by the existence of the metal detectors were both Israeli security forces in the area and the great many Palestinian Muslims who constitute the bulk of the visitors to the same area.
  • Thus, Palestinian Muslims protested – including with rioting – security measures that were likely to benefit them most (in terms of numbers).*

That is collective irrationality.

I suppose that Irfan, or another subtle defender of irrationality, will argue that the installation of metal detectors at those sites is another step in the Palestinians' loss of sovereignty over the Holy Places, hence the violent reaction. Sure thing! This defense implies that Palestinian Muslims have to be ready to be assassinated by other Palestinian Muslims in order to enforce a shred of Palestinian Muslim sovereignty over that small area.

That is insane.


*I ignore, of course, the idiotic view that Muslim terrorists could not possibly kill other Muslims at a sacred site of Islam. Muslims have been killing tens of thousands of Muslims, specifically, for the past twenty years. Some terrorists who called themselves Muslims chose to engage precisely in mass killing at Muslim religious sites such as mosques. And then there are Jewish terrorists, and even the occasional dangerous Christian zealot.

Freedom of Conscience and the Rule of Law

Of course, the concept of "freedom of conscience" was forged in Europe by Spinoza, Locke, Voltaire, John Stuart Mill, and many other philosophers. But freedom of conscience as an individual right belonging to the set of characteristics that defines the rule of law is an American innovation, one which later spread to Latin America and to the Old Continent.

This reflection comes from the dispute that has arisen in Notes On Liberty about the Protestant Reformation and freedom of conscience. My intention is not to mediate between Mark and Bruno, but to bring a new line of debate to the Consortium. What I would like to discuss is what determines which rights are to be protected by the rule of law. In this sense, might we regard a political regime that bans freedom of conscience as based on the rule of law? I am sure that no one would dare to do so. But would anyone dare to state that the unification of language in a given country hurts the rule of law? I am afraid that almost nobody would.

Nevertheless, this is a polemical question. For example, the current Catalan independence movement has the Catalan language as one of its main claims, so tracing the genealogy of the rights that constitute the concept of the rule of law is a meaningful task—and this is why the controversy over the Protestant Reformation and the origin of freedom of conscience at NOL is so interesting.

Before the Protestant Reformation, the theological, philosophical, scientific, and political language of Europe was unified in Latin. The languages used by the common people, on the other hand, were utterly fragmented: a multiplicity of dialects was spoken all over Europe. The Catholic Monarchs of Spain, for example, unified their kingdom under the same religion, but they did not touch the local dialects. A very similar situation could be found in the rest of Europe: kingdoms with one religion and several dialects.

There was a strong reason for this to be so. Bibles in the vernacular had existed in the Middle Ages, but the literacy rate was so low, and the dialects evolved and fragmented so quickly, that those translations soon became obsolete and incomprehensible. Since copying books was extremely costly (this was before the invention of the printing press), the best language in which to write books and constitutional documents was Latin.

The Evangelical movement, which emerged out of the Protestant Reformation, meant that the final authority in religion was no longer the Papacy but the biblical text. What changed was the coordination problem. Formerly, the reference was the local bishop, who was linked to the Bishop of Rome. (Although with the Counter-Reformation, in some cases, such as Spain, the bishops were appointed by the king, a privilege obtained in exchange for remaining loyal to the Pope.) In the Reformation countries, on the other hand, the text of the Bible as the final authority on theological matters demanded full command of an ability not so widespread until that moment: literacy.

It is well known that the Protestant Reformation and the invention of printing expanded the translation of the Bible into the vernacular. But it almost always goes unnoticed that at that time the concept of a national language hardly existed. In the Reformed countries, the consolidation of a national language was determined by the particular vernacular into which the Bible was translated.

Evidently, the spread of a common language among the subjects of a given kingdom brought great benefits to its governance, since the tendency was soon followed by the monarchies of France and Spain. The former extended Parisian French over the local patois, and in eighteenth-century Spain the Bourbon Reforms imposed Castilian as the national language. The absolute kings, each of whom had inherited a territory unified by a single religion, sowed the seeds of national states aggregated by a common language. Moreover, Catholicism became more dependent on absolute kings than on Rome—and that is why Bruno finds some Catholics arguing for the separation of Church and state.

Meanwhile, in the New World, the Thirteen Colonies were receiving European immigrants mostly motivated by the lack of religious tolerance in their respective countries of origin. The immigrants arrived carrying with them all kinds of variants of the Christian confessions and developed new and unexpected ones. All those religions and sects had a common reference: the King James Bible.

My thesis is that it was the substitution of language for religion as the factor of cohesion and mechanism of social control that made possible the development of freedom of conscience. Political power gave up policing what was inside the minds of its subjects in favor of a more economical device: language. Think what you wish, believe what you wish, read what you wish, write what you wish, say what you wish, as long as I understand what you do and you can understand what I mean.

Moreover, an official language became a tool of accountability and a means of knowing the rights and duties of an individual before the state. The Magna Carta (1215) was written in Medieval Latin, while the Virginia Declaration of Rights (1776) was written in English. Both documents were written in the language regarded as proper in their respective times. Nevertheless, which language is more convenient to the individual for the defense of his liberties is quite obvious.

Often, disputes over the genealogy of rights and institutions revolve around two poles: ideas and matter. I think it is high time to travel the common edge between them: unintended consequences, the "rural nomos," complex phenomena. In this sense, but only in this sense, tracing the genealogy – or, better, the "nomology" – of freedom of conscience as an unintended trait of the concept of the "rule of law" is worth our efforts.