When Should Intellectuals Be Held Accountable for Popular Misrepresentations of Their Theories?

Often an academic will articulate some very nuanced theory or ideological belief that arises out of the specialized discourse, and specialized background knowledge, of her discipline. It is not too surprising that when her theory gets reprinted in a newspaper by a non-specialist journalist, taken up by a politician to support a political agenda, or talked about on the street by a layman who doesn't possess that specialized knowledge, the theory will be poorly understood, misrepresented, and possibly used for purposes that are not only unjustified but the exact opposite of her intentions.

This happens all the time in every discipline. Any physicist who reads a YouTube thread about the theory of relativity, economist who opens a newspaper, biologist who reads the comments section of a Facebook post on GMOs, psychologist who hears jokes about Freud, or philosopher who sees almost any Twitter post about any complex world-historical thinker knows what I'm talking about. Typically, it is assumed that a popularizer or layperson who misunderstands such complex, nuanced academic theories must always be answerable to their most intellectually responsible, academic articulation. It is usually assumed that an intellectual theorist should never be concerned with the fact that her theories are being misunderstood by popular culture, and certainly that she shouldn't change a theory just because it is being misunderstood.

For many disciplines in many contexts, this seems to be true. The theory of relativity shouldn't be changed just because most people do not possess the technical knowledge to understand it and popularizers often oversimplify it. Just because people do not understand that climate change means more than rising temperatures doesn't mean it is not true. The fact that some young earth creationist thinks that the existence of monkeys disproves evolution doesn't mean an evolutionary biologist should care.

Further, it's not just the natural sciences to which this applies. Just because methodological individualism is often misunderstood as atomistic, reductive ethical individualism doesn't mean economists should abandon it, any more than people's various misunderstandings of statistical methods mean scientists should abandon those methods. Likewise, the fact that rational choice theory is misunderstood as meaning people only care about money, or that Hayek's business cycle theory is misunderstood as meaning only central banks can cause recessions, or that the Keynesian multiplier is misunderstood as meaning that any destructive stimulus is desirable because it increases GDP all the same, does not mean that economists who use these theories should abandon them on the strength of non-substantive criticisms aimed at straw-manned versions of them.

On the other hand, there are other times when it seems that popular misunderstandings of some academic writings do matter. Not just in the sense that a layperson's failure to understand science leads them to do unhealthy things, and therefore the layperson should be educated about what scientific theories actually say, but in the sense that popular misunderstandings point to some deficiency in the theory itself that the theorist should correct.

To take an example (which I’m admittedly somewhat simplifying) from intellectual history, early in his career John Dewey advocated quasi-Hegelian comparisons of society to a “social organism.” For example, in an 1888 essay he defended democracy because it “approaches most nearly the ideal of all social organization; that in which the individual and society are organic to each other.” Though Dewey never meant such metaphors to undermine individuality and imply some form of authoritarian collectivism, he did want to emphasize the extent to which individuality was constituted by collective identifications and social conditions and use that as a normative ideological justification for democratic forms of government.

By 1939, after the rise of Bolshevism, fascism, and various other forms of Hegelian-influenced illiberal, collectivist, authoritarian governments, he walked back such metaphors saying this:

My contribution to the first series of essays in Living Philosophies put forward the idea of faith in the possibilities of experience at the heart of my own philosophy. In the course of that contribution, I said, “Individuals will always be the center and the consummation of experience, but what the individual actually is in his life-experience depends upon the nature and movement of associated life.” I have not changed my faith in experience nor my belief that individuality is its center and consummation. But there has been a change in emphasis. I should now wish to emphasize more than I formerly did that individuals are the final decisive factors of the nature and movement of associated life.

[…] The fundamental challenge compels all who believe in liberty and democracy to rethink the whole question of the relation of individual choice, belief, and action to institutions, and to reflect on the kind of social changes that will make individuals actually the centers and the possessors of worthwhile experience. In rethinking this issue in light of the rise of totalitarian states, I am led to emphasize the idea that only the voluntary initiative and voluntary cooperation of individuals can produce social institutions that will protect the liberties necessary for achieving development of genuine individuality.

In other words, Dewey recognized that such a political theory could be easily misunderstood and misapplied for bad uses. His response was to change his emphasis, and his use of social metaphors, to be more individualistic since he realized that his previous thoughts could be so easily misused.

To put a term to it, there are certain philosophical beliefs and social theories that are popularly maladaptive: regardless of how nuanced and justifiable they are in the specialized discourse of some intellectual theorist, they will very often be manipulated and misused in popular discourse for nefarious purposes.

To take another example, some "white nationalist" and "race realist" quasi-intellectuals make huge efforts to disassociate themselves from explicitly, violently racist white supremacists. They claim that they don't really hate non-whites or want to hurt them or deprive them of rights, just that they take pride in their "white" culture and believe in (pseudo-)scientific theories which purport to show that non-whites are intellectually inferior. It is not very surprising, to most people, that in practice the distinction between a "peaceful" race realist and a violently racist white supremacist is extremely thin. Most would rightly conclude that this means there is something wrong with race realism and race-based nationalist ideologies, no matter how much superficially respectable academic spin is put on them, because they are so easily popularly maladaptive.

The question I want to ask is how we can more explicitly tell when theorists should be held accountable for their popularly maladaptive theories. When does it matter that public misinterpretation of a somewhat specialized theory points to something wrong with that theory? In other words, when is the likelihood of a belief's popular maladaptivity truth-relevant? Here are a few examples where it's a pretty gray area:

  1. It is commonly claimed by communitarian critics of liberalism that liberalism reduces to atomistic individualism that robs humanity of all its desire for community and family and reduces people to selfish market actors (one of the original uses of the term “neoliberal”). Liberals, such as Hayek and Judith Shklar, typically respond by saying that liberal individualism, properly understood, fully allows individuals to make choices relevant to such communal considerations. Communitarians sometimes respond by pointing out that liberalism is so often misunderstood publicly as such and say that this shows there is something wrong with liberal individualism.
  2. It is claimed by critics of postmodernism and forms of neo-pragmatism that they imply some problematic form of relativism which makes it impossible to rationally adjudicate knowledge-claims. Neo-pragmatists and postmodernists respond by pointing out that this misunderstands their beliefs: the idea that our understanding of truth and knowledge isn't algorithmically answerable to correspondence doesn't make it irrational; postmodernism is about skepticism towards meta-narratives, not skepticism towards all rational knowledge itself; and (as Richard Bernstein argued) these perspectives often render hardcore relativism as incoherent as hardcore objectivism. The critic sometimes responds by citing examples of lay people and low-level academics using these ideas to defend absurd scientific paradigms and relativistic-sounding theories, and by claiming that this should make us skeptical of postmodernism or neo-pragmatism.
  3. Critics of Marxism and socialism often point out that Marxism and socialism often transform into a form of authoritarianism, as in the Soviet Union or North Korea. Marxists and socialists respond by saying that all these communist leaders misused Marxist doctrine, that Marx doesn't really imply anything that would lead of necessity to authoritarianism, and that socialism can work in a democratic, freer context. The critic (such as Don Lavoie) will point out that the incentives socialist governments face lead of necessity to a sort of militarism, regardless of the pure intentions of the socialist theorist; in other words, they claim that socialism is inherently popularly maladaptive due to the incentives it creates. The socialist still thinks this isn't the case and that, regardless, socialism turned authoritarian in the past because it was in the hands of the wrong popularizers, which isn't relevant to socialism's truth.
  4. Defenders of traditional social teachings of Christianity with respect to homosexuality claim there is nothing inherently homophobic about the idea that homosexual acts are a sin. In the spirit of "Love the sinner, hate the sin," they claim that being gay isn't a sin but homosexual acts are, and that Christians should show love and compassion for gay people while still condemning their sexual behavior. Secular and progressive Christian critics respond by pointing out how, in practice, Christians do often act very awfully towards gay people. They point out that it is very difficult for most Christians who believe homosexual acts are sinful to separate the "sin" from the "sinner" in practice, regardless of the intellectually pure intentions of their preacher, and that such a theological belief is often used to justify homophobic cruelty. Since you will know a faith by its fruits (a pretty Christian way of saying that popular maladaptivity is truth-relevant), we should be skeptical of traditional teachings on homosexuality. The traditionalist remains unconvinced that it matters.

It is important to distinguish between two questions: whether these beliefs are empirically popularly maladaptive (or, perhaps, just very likely to be) and whether the possibility of their being popularly maladaptive is relevant to their truth. For example, a liberal could respond to her communitarian critic by pointing to empirical evidence that individuals engaging in market exchange in liberal societies aren't selfish and uncaring about their communities, undermining the claim that liberal individualism is popularly maladaptive in the first place. But that response is different from a liberal saying that the mere fact that her individualism has been misunderstood is not something she should care about.

We should also distinguish the question of whether beliefs are likely to be maladaptive from the question of whether their maladaptivity is truth-relevant. For example, it is conceivable that a popularized atheism would be extremely nihilistic even if careful atheists want to save us from nihilism. An atheist could respond by saying that this appears unlikely, since most non-intellectual atheists aren't really nihilists (which would answer the former question), or by saying that people's misunderstanding of the ethical implications of God's non-existence is not relevant to the question of whether God exists (which would answer the latter question). For now, I am only concerned with when maladaptivity is truth-relevant.

There are a couple of responses which seem initially plausible but are unconvincing. One potential response is that positive scientific theories (such as evolution and monetary economics) do not need to worry about whether they are likely to be popularly maladaptive, but normative moral or philosophical theories (such as liberal individualism or theological moral teachings) do.

However, this overlooks the fact that scientists do often make normative claims based on their theories, claims for which popular interpretation still seems irrelevant. For instance, it's not clear that a monetary economist who makes normative policy conclusions based on her theories should care if the layman does not understand how, for example, the Taylor Rule, Nominal Income Target, or Free Banking should work. Further, there are philosophical theories where popular maladaptivity doesn't seem to matter; for example, Kantians shouldn't really fret if an introductory student doesn't grasp Kant's argument for the synthetic a priori, and analytic philosophers shouldn't care if most people don't understand Quine's objections to the analytic/synthetic distinction.

I’m unsure exactly how to answer this question, but it seems like answering it would clear up a lot of confusion in many disagreements.

Why I'm No Longer a Christian: An Autobiographical/Philosophical/Therapeutic Explanation to Myself

Note: This was written about 18 months ago and posted on my now-defunct blog. I figure it might be worth reposting, mostly for posterity.

Throughout most of my youth, like the majority of middle-class Americans, I was raised as a Christian. As an argumentative and nerdy teenager, much of my intellectual energy throughout adolescence was dedicated to fervent apologetics of the Christian faith. In my eyes, I was trying to defend some deep, correspondent truth about the Lord; today I realize that was mostly youthful self-deception. I was trying to make the beliefs I had committed to, epistemically and personally, due to my social situation work with the experiences of the modern world I was thrust into. There is nothing wrong with my attempts to find Jamesean cash value in my contingent religious beliefs, and there is nothing wrong with people who succeed in that endeavor; but it was wrong for me to think I was doing something more than that—something like dogmatically defending eternal truths I knew certainly through faith.

As the title of this post suggests, my quest to make my religious beliefs work was ultimately unsuccessful, or at least it has been up until this point (I'm not arrogant enough to assume I've reached the end of my spiritual/religious journey). For a variety of personal and intellectual reasons, I have since become a sort of agnostic/atheist in the mold of Nietzsche, or more accurately James. Most of the point of this post is to spell out, for my own therapeutic reasons, the philosophical (I would venture to say, with James and Rorty, that philosophy is at its best when it is therapeutic) and personal reasons why I have the religious beliefs I have now at the young age of twenty. This is ultimately a selfish post in that the target audience is myself, both present and future. Nonetheless, I hope you enjoy this autobiographical/religious/philosophical mind vomit. Please read it as you would a novel—albeit a poorly written one—and not as a philosophical or religious treatise.

Perhaps the best place to start is at the beginning of my childhood. But to understand that I guess it's better to start with my mother and father's upbringing. My mother came from an intensely religious Baptist household with a mother who, to be blunt, used religion as a manipulative tool to the point of abuse. If her children disobeyed her, it was obviously the influence of Satan. Of course, any popular culture throughout my mother's childhood was regarded as the work of Satan. I'll spare you the details, but the upshot is that this caused my mother some religious struggles that I inherited. My father came from a sincere though not fervent Catholic family. For much of my father's young adulthood and late adolescence, religion took a backseat, and when my parents met my father was an agnostic. He converted to Christianity by the time they married, but his religious beliefs were always more intimately personal and connected with his individual, private pursuit of happiness than anything else—a fact that has profoundly influenced the way I think about religion as a whole.

Though neither of my parents were at all interested in shoving religion down my throat, I kind of shoved it down my own throat as a child. I was surrounded by evangelical—for want of a better word—propaganda, as we mostly attended non-denominational, moderately evangelical churches throughout my childhood. My mother mostly sheltered me from my grandmother's abuses of religion, and she reacted to her mother's excesses appropriately by trying to center my religious upbringing on examples of God's love. However, her struggles with religion still had an impact on me, as she wavered between her adult commitment to an image of an all-loving deity and the remnants of her mother's conception of God as the angry, vengeful, jealous God of the Old Testament. She never really expressed the latter conception overtly, but it was implicit, just subtly enough for my young mind to notice, in the way some of the churches we chose in my youth expressed the Gospel.

At the age of seven, we moved from Michigan to the heart of the Bible belt in Lynchburg, VA, home to one of the largest evangelical colleges in the world: Liberty University. Many of the churches we attended in Virginia had Liberty professors as youth leaders, ministers, and the like, so Jerry Falwell's Southern Baptist conception of God, which aligned closely with my grandmother's, was an influence on me through my early teenage years. Naturally, religion was closely linked with political issues of the day; God blessed Bush's war in Iraq, homosexuality was an abhorrent sin, abortion was murder, and the like were fed to me. Of course, evolution was an atheist lie, and I remember watching creationist woo lectures with my mother while she was taking an online biology course from Liberty (she isn't a creationist, for the record, and her major was psychology, which Liberty taught well).

Though it certainly wasn't as extreme, some of the scenes in the documentary Jesus Camp are vaguely like experiences I had around this time. I was an odd kid who got interested in these serious "adult" issues at the age of nine while most of my friends were watching cartoons, so I swallowed the evangelical stance hook, line, and sinker. But there was a tension between my mother's reservations about an angry God and her refusal, thanks to the influence of her own mother, to push my religious beliefs in any direction; my father's general silence about religious issues unless the conversation got personal or political; and the strong evangelical rhetoric that the culture around me was spewing.

Around seventh grade, we moved from Virginia to another section of the Bible-Belt, Tennessee. For my early high school years, my interest in evangelical apologetics mostly continued. However, religion mostly took a backseat to my political views. With the beginning of the recession, I became far more interested in economics: I wanted an explanation for why there were tents with homeless people living in them on that hill next to Lowe’s. My intellectual journey on economics is a topic for another day, but generally, the political component of my religious views was slowly becoming less and less salient. I became more apathetic about social issues and more focused on economic issues.

It was around this time I also became skeptical of the theologically-justified nationalistic war-mongering fed to me by the Liberty crowd in Virginia. We lived near Ft. Campbell, and I had the displeasure of watching family after family of my friends ruined because their dad went to Afghanistan and didn't come back the same, or didn't come back at all. The whole idea of war just seemed cruel and almost unjustifiable to me; even though I still would spout the conservative line on it externally, I was internally torn (perhaps I was writing esoterically?). I would say I was beginning to subconsciously reject Christianity's ontology of violence (apologies to Milbank).

It was also around this time, ninth grade, that I began more systematically reading the Old Testament. War is a common theme throughout the whole thing, and all I could think of as I read about the conquests of Israel, the slaying of the Amalekites, the book of Job, and the like were my personal experiences with my friends who were deeply affected by the wars in Iraq and Afghanistan. At this point, there was skepticism and doubt about how God could justly wage war and order cruel mass-killings in Biblical times.

Around tenth grade, I became immaturely interested in philosophy. I'm ashamed to admit it today, but Ayn Rand was my gateway drug to what has been an obsession of mine ever since. I loved elements of Rand's ethics, her individualism, her intense humanism (which I still appreciate on some level), and of course her economics (which I also still appreciate). But her polemics against religion and her simplistic epistemological opposition between faith and reason put me in an odd position. What was I, a committed evangelical Christian, to do with my affinities with Rand? Naturally, I should've turned to Aquinas, whose arguments for the existence of God and whose unification of faith and reason I can now appreciate; however, at the time, I instead had the misfortune of turning to Descartes, whose rationalism seemed to me to jibe with what I saw in Rand's epistemology (today, I definitely would not say that about Rand and Descartes at all, as Rand is far more Aristotelian; ah, the sophomoric follies of youth). Almost all of my subsequent intellectual journey with religion and philosophy could be considered a fairly radical reaction to the dogmas I had bought at this time.

I had fully bought perhaps the worst of Rand and Descartes. From Descartes I took his philosophical method and "proofs" of God, with all their messy metaphysical presumptions: mind-body dualism (though I might've implicitly made a greater separation between "mind" and "spirit" than Descartes would've), the correspondence theory of truth, the quest for certainty and the spectator theory of knowledge, the ego theory of the self, and libertarian free will. From Rand I got the worst modernist presumptions she took from Descartes, what Bernstein calls the "Cartesian Anxiety" in her dogmatic demand for objectivism, as well as her idiosyncratic views on altruism (though I never really accepted ethical egoism, or believed she was really an ethical egoist). The flat, horribly written protagonists of Atlas Shrugged and The Fountainhead I took to be somehow emblematic of the Christian conception of God (don't ask me what in the hell I was thinking). Somehow (I couldn't explain it coherently then and cringe at it now), I had found a philosophical foundation of sorts for a capital-C Certain belief in the God of protestant Christianity and a watered-down Randian ethics. Around this time, I also took an AP European History class, and my studies (and complete misreadings) of traditional Lutheranism and Catholicism reinforced my metaphysical libertarianism and Cartesian epistemological tendencies.

Around this time, my parents became dissatisfied with the aesthetic and teachings of evangelical non-denominational churches, and we started attending a run-of-the-mill, mainline PCUSA church my mom had discovered through charity programs she encountered as a social worker. I certainly didn’t buy Presbyterianism’s lingering affinities for Calvinism inherited from Knox (such as their attempt to retain the language of predestination while affirming Free Will), but the far more politically moderate to apolitical sermons, as well as focus on the God of the New Testament as opposed to my Grandmother’s God, was a refreshing change of pace from the evangelical dogmatism I had become accustomed to in Virginia. It fit my emerging Rand-influenced transition to political libertarianism well, and the old-church aesthetic and teaching methods fit well with the more philosophical outlook I had taken on religion.

In eleventh grade, we moved back to Michigan, in the absolute middle of nowhere. Virtually every single protestant church within a twenty-mile radius was either some sort of dogmatically evangelical nondenominational super-church where the populist, charismatic sermons were brought to you by Jesus, Inc.; or an equally evangelical tiny rural church with a median age of 75 where the sermons were the somewhat incoherent and rabidly evangelical ramblings of an elderly white man. Our young, upper-middle class family didn't fit into the former theologically or demographically, and certainly didn't fit into the latter theologically or aesthetically. After about a year of church-shopping, our family stopped going to church altogether.

Abstaining from church did not dull my religion at all. Sure, the ethical doubts I was having at the time and the epistemological doubts caused by my philosophical readings were working in the background, but in a sense, this was my most deeply religious time. I had taken up fishing almost constantly all summer since we lived on a river, and much of my thoughts while sitting with the line in the water revolved around religion or politics. When my thoughts turned religious, there was always a sense of romantic/transcendentalist (I was reading Thoreau, Emerson, and Whitman in school at the time) sublimity in nature that I could attribute to God. Fishing, romping around in the woods, hunting, and experiencing nature became the new church for me and was a source of private enjoyment and self-creation (you can already see where my affinities for Rorty come from) in my late teens. Still, most of my intellectual energy was spent on political and economic interests and by now I was a fully committed libertarian.

Subconsciously earlier in my teens, but very consciously by the time I moved to Michigan, I had begun to realize I was at the very least on the homosexual spectrum, quietly identifying as bisexual at the time. The homophobic religious rhetoric of other Christians got on my nerves, but in rural northern Michigan I was mostly insulated from it and it never affected me too deeply. Since I assumed I was bi, I had reasoned by that point that it wasn't that huge a deal in terms of my identity even if homosexuality was a sin (which I doubted it was, though I couldn't explain why), so I never really thought too deeply about it. However, it did contribute further to my ethical doubts about Christianity: if God says homosexuality is a sin, and Christians are somehow justified in oppressing homosexuality, what does that say about God's cruelty? It became, very quietly, an anxiety akin to the anxieties I was having about war when I moved to Tennessee.

Though abstaining from church didn't cheapen my experience of religion, my exposure to my grandmother's angry God did. Up until that point, I had mostly been ignorant of her religious views because we lived so far away; but moving back to Michigan, as well as some health issues she had, thrust her religious fervor back into my—and my mother's—consciousness. The way she talked about and acted towards non-Christians reeked of the worst of I Samuel, Jonathan Edwards, John Calvin, and Jerry Falwell rolled into one. My skepticism towards the potential cruelty of the Christian God, born of my experiences with war and homophobia, was really intensified by observing my maternal grandmother.

The year was 2013: I had just graduated from high school, just turned eighteen, and was choosing my college. I had applied to some local state school as a backup (which I only considered because it came with a full-ride scholarship), to my father's alma mater, the University of Michigan, and to Hillsdale College. After the finances were taken care of (I'm fortunate enough to be a member of the upper-middle class), the real choice was between Michigan and Hillsdale. For better or for worse, I chose the latter.

My reasons for choosing Hillsdale were mostly based on misinformation about the college’s mission. Sure, I knew it was overwhelmingly conservative and religious. But I thought there was far more of a libertarian bent to campus culture. The religious element was sold to me as completely consensual, not enforced by the college at all other than a couple vague comments about “Judeo-Christian values” in the mission statement. I wanted a small college full of intellectually impassioned students who were dedicated to, as the college mission statement said, “Pursuing Truth, Defending Liberty.” The “defending liberty” part made me think the college was more libertarian, and the “pursuing truth” part made me assume it was very open minded as a liberal arts education was supposed to be. I figured there’d probably be some issues about my budding homosexuality/bisexuality, but since it wasn’t a huge deal at the time for me personally, and some students I’d talked to said it wasn’t a big deal there, I thought I could handle it. Further, I suspected my major was going to be economics, and Hillsdale’s economics department—housing Ludwig von Mises’ library—is a dream come true (my opinion on this hasn’t changed).

If I ever had problems understanding the concept of asymmetric information, the lies I was told as an incoming student to Hillsdale cleared them up. The Hillsdale I got was far more conservative than I could ever have imagined, and in a ridiculously dogmatic fashion. It was quickly revealed to be not the shining example of classical liberal arts education I had hoped for, but instead little more than a propaganda mill for a particularly nasty brand of Straussian conservatism. The majority of the students were religious in the same sense as my grandmother; though they would intellectually profess a different concept of God than my grandmother's simplistic, layman Baptist understanding of God as an angry, jealous judge, the fruits of their faith showed little difference. My homosexual identity—by this point I'd abandoned the term "bisexual"—quickly became a focal point of my religious anxiety. Starting a few weeks into my freshman year, largely thanks to my treatment by these so-called "Christians," I began to fall into a deep depression that would cripple me for the next two years and whose after-shocks I am still dealing with as I write this.

Despite the personal issues I had with my peers at Hillsdale, the two years I spent there were hands-down the two most intellectually exciting years of my life. My first semester, I took an Introduction to Philosophy class. My professor, James Stephens, turned out to be a former Princeton student who had Richard Rorty and Walter Kaufmann as his PhD advisors. His introductory class revolved first around ancient Greek philosophy, in particular Plato's Phaedo, then classical epistemology, particularly Descartes, Kant, and Hume, and a lot of experimental philosophy readings from the likes of Stephen Stich and Joshua Knobe. The class primarily focused on issues in contemporary metaphysics which I had struggled with since I discovered Rand—like libertarian free will and theories of the self—as well as epistemological issues and metaphilosophical issues of method. Though only an intro class outside of my major, no class has changed my worldview quite as much as this one.

In addition to the in-class readings, I read philosophy prolifically and obsessively outside of class as a matter of personal interest. That semester I finished Stich's book The Fragmentation of Reason (which I wouldn't have understood without extensive talks with Dr. Stephens in office hours), worked through most of Kant's Critique of Pure Reason, basically re-learned Cartesianism, and read Hume's Treatise. By the end of the class, I had changed almost every element of my philosophical worldview; I went from a hardcore objectivist Cartesian to a fallibilist, postmodern pragmatist (I had also read James due to Stich's engagement with him), from a fire-breathing metaphysical libertarian to a squishy compatibilist, and from someone who had bought the Cartesian referential view of language to a card-carrying Wittgensteinian (of the Investigations, that is).

Other classes I took that first semester also had a large impact on me. In my "Western Heritage" class—Hillsdale's pretentious and propagandized term for what would usually be called something like "History of Western Thought: Ancient Times to 1600"—I essentially relearned all the theology I had poorly understood in my high school AP Euro class by reading church fathers and Catholic saints like Augustine, Tertullian, and, of course, Aquinas, as well as rereading the likes of Luther and Calvin. Additionally, and this would have the most profound intellectual influence on me of anything I have ever read, Hayek's Constitution of Liberty, which I read in my first political economy class, cemented my epistemological fallibilism (although I also read The Fatal Conceit for pleasure, which influenced me even more).

Early on that year, after reading Plato and Augustine, I became committed to some sort of Platonism, and for a second considered some sort of Eastern Orthodoxy. By this point, I was a political anarchist and saw the hierarchical and top-down control of Catholicism as too analogous to coercive statist bureaucracy, while the more communal structure of Orthodoxy, though still hierarchical, seemed more appealing. To paraphrase Richard Rorty on his own intellectual adolescence, I had desperately wanted to become one with God, a desperation I would later react to violently, and I saw Plato's Forms and Augustine's incorporation of them into Christianity as a means to do that. But as I kept reading, particularly James, Hume, Kant, and Wittgenstein, the epistemological foundations of my Platonist metaphysical and theological stances crumbled. I became absolutely obsessed with the either-or propositions of the "Cartesian anxiety" and made a hobby of talking with my classmates in a Socratic fashion to show that they couldn't be epistemically Certain in the Cartesian sense, much to their chagrin. You could've played a drinking game during those conversations, taking a shot every time I said some variation of "How do you know that?", and probably given your child fetal alcohol syndrome even if you weren't pregnant or were male.

In the second semester of my freshman year, my interests turned more explicitly to theological readings and topics. (Keep in mind, I was mostly focusing on economics and math in class; almost all of this was just stuff I did on the side. I didn't get out much in those days, largely due to the social anxiety caused by the homophobia of my classmates.) My fallibilist/pragmatist epistemic orientation, along with long conversations with a fellow heterodox Hillsdale student from an evangelical background, wound up getting me very interested in "radical theology." That semester, John Caputo came to Hillsdale to discuss his book The Insistence of God. I attempted to read it at the time but was not well-versed enough in continental philosophy to really get what was going on in it. Nonetheless, my Jamesean orientation left me deeply fascinated by much of what Caputo was getting across.

My theological interests were twofold. First, more of an epistemic question: how can we know God exists? My conclusion was that we can't, but that whether God exists or not is irrelevant—what matters is the impact belief in God has on our lives existentially and practically. This was the most I could glean out of Caputo's premise "God doesn't exist, he insists" without understanding Derrida, Nietzsche, Hegel, and Foucault. I began calling myself terms like "agnostic Christian," "ignostic Christian," or "pragmatist Christian" to try to describe my religious views. This also led me to a thorough rejection of Biblical literalism and infallibility; I claimed the Bible was more a historical document of man's interaction with God, written from man's flawed perspective.

But now in the forefront were questions of Christianity's ethical orientation that had lingered at the back of my mind since my early teens: why did the Christian God seem so cruel to me? I had resolved most of it with my rejection of Biblical infallibility. Chances are, God didn't order the slaughter of the Amalekites, or Satan's torture of Job, or any of the other cruel acts in the Old Testament—the fallen humans who wrote the Bible misunderstood it. Chances are, most of the Old Testament laws on things like homosexuality were meant specifically for that historically contingent community and were not eternal moral laws, and the God of the New Testament, as revealed by Jesus, was the most accurate depiction of God in the Bible. Paul's prima facie screeds against homosexuality in the New Testament, when taken in context and hermeneutically analyzed, probably had nothing to do with homosexuality as we know it today (I found this sermon convincing on that note). God sent Jesus not as a substitute for punishment but to act as an exemplar of how to love and not be cruel to others. I could still defend the rationality of my religious faith on Jamesean grounds; I was quoting The Varieties of Religious Experience and Pragmatism more than the Bible at that point. I also flirted with some metaphysical theologies, such as Death of God theology, which seemed appealing based on the little I knew about Nietzsche, and process theology, which to me bore a beautiful resemblance to Hayek's concept of spontaneous order. Even saying it now, much of that sounds convincing, and if I were to go back to Christianity, most of those beliefs would probably remain intact.

But still, there was this nagging doubt that the homophobic, anti-empathetic behavior of the Hillsdale "Christians" somehow revealed something rotten about Christianity as a whole; and the fact that the church had committed so many atrocities in the past, from Constantine using it to justify war, to the Crusades, to the Spanish conquistadors, to the Salem witch trials, to the persecution of homosexuals and non-believers throughout all of history, still rubbed me the wrong way. Jesus' line about judging faith by its fruits became an incredibly important scripture for me given my interest in William James. That scripture made me extremely skeptical of the argument that the actions of fallen humans do not reflect poorly on the Truth™ about the Christian God. What was the cash value of Christian belief if it seemed so obviously to lead to so much human cruelty throughout history and towards me personally?

That summer and the next semester, two books, both written, coincidentally enough, by my philosophy professor's PhD advisors (I had come across them independently), once again revolutionized the way I looked at religion. The first was Richard Rorty's Philosophy and the Mirror of Nature; the second was Walter Kaufmann's Nietzsche: Philosopher, Psychologist, Antichrist, which I read in tandem with most of Nietzsche's best-known work (i.e., Beyond Good and Evil, Thus Spake Zarathustra, On the Genealogy of Morals, and, most relevant to this discussion, The Antichrist).

Rorty destroyed any last vestiges of Cartesianism or Platonism I had clung to. His meta-philosophical critique of big-P Philosophy, which tries to stand as the ultimate judge of the knowledge claims of the various professions, completely floored me. His incorporation of Kuhnian philosophy of science and Gadamer's hermeneutics was highly relevant to my research interests in the methodology of economics. Most important for religion was his insistence, which, though more explicit in his later works, I noticed fairly heavily in PMN, that we are only answerable to other humans. There is no world of the Forms to which we can appeal, there is no God to whom we are answerable, there are no metaphysical concepts we can rely on to call a statement true or false; the measure of truth is the extent to which it helps us cope with the world around us, the extent to which it helps us interact with our fellow human beings.

Nietzsche's concept of the Death of God haunted me, and now that I was beginning to read more continental philosophy, some of the concepts in Caputo that had flown over my head began to make sense. The Enlightenment project to ground knowledge had made God, at least for much of the intellectual class who were paying attention to the great philosophical debates, a forced option. No longer could we rely on the Big Other to ground all our values; we had to reevaluate all our values and build a meaningful life for ourselves. Additionally, Nietzsche's two great criticisms of Christianity in The Antichrist stuck in my mind: that it led to the inculcation of a slave morality, a sort of resentment for the "lower people"; and that the idea that we should "store our treasures in heaven" took all the focus off of this world; it ignored all those pragmatic and practical results of our philosophical beliefs that had become so important to me thanks to Matthew 7:16 and William James, and instead focused on our own selfish spiritual destiny. The former critique didn't quite ring true with me, because Nietzsche's anti-egalitarian, and to be honest quite cruel, attitude seemed as bad as what I saw the Christians doing to me; but his criticism of Christianity's focus on the afterlife rather than on the fruits of faith in this life posed a serious threat to my beliefs, and helped explain why the anti-empathetic, homophobic hatred I was experiencing from my classmates was causing so much religious anxiety and cognitive dissonance.

(Note: Clearly, I’m violently oversimplifying and possibly misreading both Nietzsche and Rorty in the previous two paragraphs, but that’s beside the point as I’m more interested in what they made me think of in my intellectual development, not what they actually thought themselves.)

Still, through most of my sophomore year, I tried to resist atheism as best I could and cling to what I saw as salvageable in Christianity: the idea of universal Christian brotherhood and its potential to lead people to be kind to each other was still promising. Essentially, I still wanted to salvage Jesus as a didactic exemplar of moral values of empathy and kindness, if not in some metaphysical ideal of God, at least in the narrative of Jesus’s life and his teaching. Ben Franklin’s proto-pragmatic, yet still virtue ethical, view on religion in his Autobiography lingered in my mind very strongly during this phase. I still used the term “agnostic Christian” through most of that time and self-identified as a Christian, but retrospectively the term “Jesusist” probably better described the way I was thinking at that time.

I came to loathe (and still do) what Paul did to Christianity: turning Jesus' lessons into absolutist moral laws rather than parables on how to act more kindly to others (see, for example, Paul's treatment of sexual ethics in 1 Corinthians), taking the worst slave-morality tendencies Nietzsche ridiculed to the extreme, and acting throughout his letters as if there was only one way—which happened to be what I saw as his very cruel way—to experience Jesus' truth in religious community. Additionally, I loathed Constantine for turning Christianity into a tool to justify governmental power and coercion, which it remained throughout the reign of the Holy Roman Empire, Enlightenment-era absolutism, and into the modern theocratic tendencies of American social conservatism.

But the idea of an all-loving creator, if not a metaphysical guarantee of meaning and morality, sending his son/himself as an exemplar for what humanity can and should be was still extremely attractive to me, and in many ways still is. I flirted with the Episcopalian and Unitarian Universalist churches, but something about their very limited concept of community rubbed me the wrong way (I probably couldn't justify it or put my finger on it).

Clearly, my religious and philosophical orientation (not to mention my anarchist political convictions) put me at odds with Hillsdale orthodoxy. I started writing papers that were at times pretty critical of my professors' lectures (though I still managed to mostly get A's on them), particularly in my Constitution class (essentially a Jaffaite propaganda class) and my American Heritage class (essentially a history of American political thought, which was taught very well by a brilliant orthodox Catholic Hillsdale grad). I was writing editorials in the student paper subtly ridiculing Hillsdale's homophobia and xenophobia, and engaging in far too many Facebook debates on philosophy, politics, and religion that far too often got far too personal.

In addition, at the beginning of my sophomore year, I came out publicly as gay. With the Supreme Court decision coming up the following summer, Hillsdale's religiously-inspired homophobia had never reached such a fever pitch. I could hardly go a day without hearing some homophobic slur or comment, and the newspaper was running pieces—often written by professors—claiming flat-out false things about gay people (like comparing homosexuality to incest, saying that no society has ever had gay marriage, and the like). The fruit/cash value of Jesus' teachings was quite apparently not turning out to be the empathetic ethos I had hoped for; the rotten elements of the Old Testament God which my grandmother emphasized, the Pauline perversions, and Constantine's statism were instead dominating the Christian ethos.

At the end of that academic year (culminating with this) I suffered a severe mental breakdown largely due to Hillsdale’s extreme homophobia. By the beginning of the next school year, I was completely dysfunctional academically, intellectually, and socially; I was apathetic about all the intellectual topics I had spent my entire thinking life occupied with, completely jaded about the future, and overall extraordinarily depressed. I’ll spare the dirty details, but by the end of the first month of my Junior year, it became clear I could no longer go on at Hillsdale. I withdrew from Hillsdale, and transferred to the University of Michigan.

That pretty much takes me up to the present day. But coming out of that depression, I began to seriously pick back up the question of why Christianity—even the good I saw in Jesusism—no longer seemed true in the pragmatic sense. Why was this religion I had spent my whole life so committed to all of a sudden utterly lacking in cash value?

I found my answer in Rorty and Nietzsche one cold January day while I took a weekend trip to Ann Arbor with my boyfriend. I sat down at a wonderful artisan coffee shop set in a quaint little arcade tucked away in downtown Ann Arbor, and was re-reading Rorty’s Contingency, Irony, and Solidarity. Rorty’s continued insistence that “cruelty is the worst thing you can do,” even if he couldn’t metaphysically or epistemically justify it, seemed to be a view I had from the very beginning when my doubts about the Christian faith started thanks to my experiences with the victims of war.

Now, I can say that the reason I'm not a Christian—and the reason I think it would be a good idea if Christianity as a religion faded out as a public metanarrative (though certainly not as a private source of joy and self-creation, as my dad exemplified)—is that Christianity rejects the idea that cruelty is the worst thing you can do. According to Christian orthodoxy (or, at least, the protestant sola fide I grew up with), you can be as outrageously, sadistically, egomaniacally cruel to another person as you want, and God will be perfectly fine with it if you believe in him. If Stalin were to "accept God into his heart"—whatever that means—his place in paradise for eternity would be assured, even with the blood of fifty million on his hands.

I have no problem with that per se; I agree with Nietzsche that retributive justice is little more than a thinly veiled excuse for revenge. Further, I agree with Aang from Avatar: The Last Airbender in saying, "Revenge is like a two-headed rat viper: while you watch your enemy go down, you're being poisoned yourself." As an economist, I see the whole idea of revenge as embracing the sunk cost fallacy. I still regard radical forgiveness and grace as among the best lessons Christianity has to offer, even forgiveness for someone like Stalin.

What seems absurd is that while Stalin could conceivably get a pass, even the kindest, most genuinely empathetic, most outstanding human being will be eternally damned and punished by God simply for not believing. For the Christian, the worst thing you can do is not being cruel; the worst thing you can do is rejecting their final vocabulary. When coupled with Nietzsche's insight that Christianity is so focused on the afterlife that it ignores the pragmatic consequences of actions in this life, it is no wonder that Christianity has bred so much cruelty throughout history. Further, the idea that we are ultimately answerable to a metaphysical Big Other rather than to our fellow human beings (as Rorty would have it) seems to cheapen the importance of those fellow human beings. The most important thing to Christians is God, not your fellow man.

Of course, the Christian apologist will remark that "True™" Christianity, properly understood, does not necessarily entail that conclusion. No true Scotsman aside, the point is well taken. Sure, the concept of Christian brotherhood teaches that since your fellow man is created in God's image, harming him is the same as harming God. Sure, Jesus does teach that the most important commandment is essentially in line with my anti-cruelty. Sure, different sects of Christianity have views of divinity that are more nuanced than the one I gave.

But, again, if we judge this faith by its fruits, if we empirically look at the cash value of this belief, if we look at the revealed preference of many if not most Christians, it aligns more with my characterization than I would like. Between the emphasis on the afterlife, the fundamentally anti-humanist (in a deep sense) ontological orientation, and the belief that cruelty is not the worst thing you can do, I see little cash value to Christianity and a whole bunch of danger that it is highly apt—and clearly has been empirically—to be misused for sadistic purposes.

This is not to say Christianity is completely (pragmatically) false. I also agree with Rorty when he says the best way to reduce cruelty and advance human rights is through “sentimental education.” The tale of Jesus, if understood the way we understand a wonderful work of literature—like Rorty himself characterizes writers like Orwell—should live on. It may sound corny and blasphemous, but if “Christian” were simply the name of the Jesus “fandom,” I’d definitely be a Christian. I also certainly don’t think Christianity is something nobody should believe; the cash value of a belief is based on the myriad of particular contingencies of an individual or social group, and those contingencies are not uniform to my experience. However, from my contingent position, I cannot in good faith have faith.

Perhaps it is a sad loss, perhaps it is a glorious intellectual and personal liberation, and perhaps it is something else. Only time will tell. Anyways, 6,325 words later I hope I have adequately explained to myself why I am not a Christian.

In Search of Firmer Cosmopolitan Solidarity: The Need for a Sentimentalist Case for Open Borders

Most arguments for open borders are phrased in terms of universalized moral obligations to non-citizens. These obligations are usually phrased as "merely" negative (e.g., that Americans have a duty not to impede the movement of an impoverished Mexican worker or Syrian refugee seeking a better life) rather than positive (e.g., the first obligation does not imply that Americans have a duty to provide, for example, generous welfare benefits to immigrants and refugees), but they are nonetheless phrased as obligations owed to people in virtue of their rationality rather than their nationality.

Whether they be utilitarian, moral intuitionist, or deontological, these arguments assume that nation of origin isn't a "morally relevant" consideration for one's right to immigrate, and they implicitly rely on some other view of moral relevance as an alternative, trying to cement a purely moral solidarity that extends beyond national borders. They have in common an appeal to a common human capacity to have rights, stemming from something metaphysically essential to our common humanity.

Those arguments are all coherent and possibly valid, and they are the arguments that originally convinced me to support open borders. The only problem is that they are often very unconvincing to people skeptical of immigration, because they merely beg the question by assuming that nationality is irrelevant to moral obligation. As a critic of one of my older pieces on immigration observed, most immigration skeptics are implicitly tribalist nationalists, not philosophically consistent consequentialists or deontologists. They have little patience for theoretical and morally pure metaphysical arguments concluding any obligation, even a merely negative one, to immigrants. They view their obligations to those socially closer to them as a trump card (pardon the pun) over any morally universalized consideration. So long as they can identify with someone else as an American (or whatever their national identity may be), they view that person's considerations as relevant. If they cannot identify with someone else based on national identity, they do not view an immigrant's theorized rights or utility functions as relevant.

There are still several problems with this tribalist perspective. Given that nation-states are far from culturally homogeneous and that cultural affinity often transcends borders in important respects, why does one's ability to "identify" on the basis of tribal affiliation stop at a nation-state's borders? Further, there are many other affinities one may have with a foreigner that may be viewed as equally important, if not more important, to one's ability to "identify" with someone than national citizenship: they may be a fellow Catholic or Christian, a fellow fan of football, a fellow manufacturing worker, a fellow parent, and so on. Why is "fellow American" the most socially salient form of identification, the one that permits keeping a foreigner in a state of tyranny and poverty, rather than "fellow Christian" or any of the many other identifiers people find important?

However, these problems are not taken seriously by those who hold this perspective, because the tribalist outlook isn't about rational coherence; it is about non-rational sentimental feelings and particularized perspectives on historical affinities. And even if a skeptic of immigration takes those problems seriously, the morally pure, universalizing arguments are no more convincing to a tribalist.

I believe this gets at the heart of most objections Trump voters have to immigration. They might raise welfare costs, crime, lost native jobs, or fear of cultural collapse as post-hoc rationalizations for why they do not feel solidarity with immigrants, but the fact that they do not feel solidarity, due to their nationalist affinities, is at the root of these rationalizations. Thus when proponents of open borders raise objections, be it in the form of economic studies showing that these concerns are not consistent with the facts or by pointing out that these are also concerns for the native-born population and yet nobody proposes similar restrictions on citizens, they fall on deaf ears. Such concerns are irrelevant to the heart of anti-immigrant sentiment: a lack of solidarity with anyone who is not a native-born citizen.

In this essay, drawing on the sentimentalist ethics of David Hume and Richard Rorty’s perspective on liberal solidarity, I want to sketch a vision of universalized solidarity that could win over tribalists to the side of, if not purely open borders, at least more liberalized immigration restrictions and greater allowance for refugees. This is not so much a moral argument of the form most arguments for open borders have taken as a strategy to cultivate the sentiments of a (specifically American nationalist) tribalist to be more open to the concerns and sympathies of someone with whom they do not share a national origin. The main point is that we shouldn’t try to argue away people’s sincere, deeply held tribalist and nationalist emotions, but seek to redirect them in a way that does not lead to massive suffering for immigrants.

Rorty on Kantian Rationalist and Humean Sentimentalist Arguments for Universalized Human Rights

In an article called “Human Rights, Rationality, and Sentimentality,” the American pragmatist philosopher Richard Rorty discusses two strategies for expanding human rights culture to the third world. One, which he identifies with philosophers such as Plato and Kant, involves appealing to some faculty which all humans have in common—namely rationality—and declaring all other considerations, such as kinship, custom, religion, and (most importantly for present purposes) national origin, “morally irrelevant” to whether an individual has human rights and should be treated as such. These sorts of arguments, Rorty says, are the sort that try to use rigorous argumentation to answer the rational egoist’s question “Why should I be moral?” They can be traced from Plato’s discussion of the Ring of Gyges in the Republic through Enlightenment attempts to find an algorithmic, rational foundation for morality, such as the Kantian categorical imperative. This is the strategy, in varying forms, that most arguments in favor of open borders pursue.

The second strategy, which Rorty identifies with philosophers such as David Hume and Annette Baier, is to appeal to the sentiments of those who do not respect the rights of others. Rather than trying to answer “Why should I be moral?” in an abstract, philosophical sense, so that we have an a priori, algorithmic justification for treating others as equals, this view advocates answering the more immediate and relevant question “Why should I care about someone’s worth and well-being even if it appears to me that I have very little in common with them?” Rather than answering the former question with argumentation that appeals to our common rational faculties, we answer the latter by appealing to our sentimental attitudes, showing that we do have something else in common with that person.

Rorty favors the second, Humean approach for one simple reason: in practice, we are not dealing with rational egoists who substitute ruthless self-interest for altruistic moral values. We are dealing with irrational tribalists who substitute less-encompassing attitudes of solidarity for more-encompassing ones. They aren’t concerned with why they should be moral in the first place and what that means; they are concerned with how certain moral obligations extend to people with whom they find it difficult to emotionally identify. As Rorty says:

If one follows Baier’s advice one will not see it as the moral educator’s task to answer the rational egoist’s question “Why should I be moral?” but rather to answer the much more frequently posed question “Why should I care about a stranger, a person who is no kin to me, a person whose habits I find disgusting?” The traditional answer to the latter question is “Because kinship and custom are morally irrelevant, irrelevant to the obligations imposed by the recognition of membership in the same species.” This has never been very convincing since it begs the question at issue: whether mere species membership is, in fact, a sufficient surrogate for closer kinship. […]

A better sort of answer is the sort of long, sad, sentimental story which begins with “Because this is what it is like to be in her situation—to be far from home, among strangers,” or “Because she might become your daughter-in-law,” or “Because her mother would grieve for her.” Such stories, repeated and varied over the centuries, have induced us, the rich, safe and powerful people, to tolerate, and even to cherish, powerless people—people whose appearance or habits or beliefs at first seemed an insult to our own moral identity, our sense of the limits of permissible human variation.

If we agree with Hume that reason is the slave of the passions, or more accurately that reason is just one of many competing sentiments and passions, then it should come as no surprise that rational argumentation of the form found in most arguments for open borders is not very convincing to people for whom reason is not the ruling sentiment. How does one cultivate these other sentiments, if not through merely rational argumentation? Rorty comments throughout his political works that novels, poems, documentaries, and television programs—the genres which tell the sort of long, sad stories quoted above—have replaced sermons and Enlightenment-era treatises as the engine of moral progress since the end of the nineteenth century. Rational argumentation may convince an ideal-typical philosopher, but not many other people.

For Rorty, this sentimental ethics served two main purposes, only the second of which matters here. First, Rorty wanted to make his vision of a post-metaphysical, post-epistemological intellectual culture and a commonsensically nominalist and historicist popular culture compatible with the sort of ever-expanding human solidarity necessary for political liberalism; such a culture would find the algorithmic arguments for open borders I mentioned in the first half of this article unconvincing for more theoretical reasons than the mere presence of nationalist sentiment. Though that is an intellectual project with which I have strong affinities, one need not buy that vision for the purposes of this article—that of narrowly applying sentimental ethics to overcome nationalist objections to immigration.

The second purpose was to point out a better way to implement the liberal cultural norm prohibiting the public humiliation of powerless minorities. The paradigmatic cases to which Rorty says such a sentimental education applies are how Serbs viewed Muslims, how Nazis viewed Jews, or how white southern Confederates viewed African-American slaves. Though those are far more extreme cases, it is not a stretch to add to that list the way Trump voters view Muslim refugees or Mexican migrant workers.

A Rortian Case against Rortian (and Trumpian) Nationalism

Though Rorty was a through-and-through leftist and likely viewed most nationalist arguments for restricting immigration, and especially for keeping refugees in war zones, with scorn, there is one uncomfortable feature of his views for most radical proponents of immigration: his outlook leaves wide open the notion of nationalism as a valid perspective, unlike many of the other arguments offered.

Indeed, Rorty—from my very anarchist perspective—was at times uncomfortably nationalist. In Achieving Our Country he likens national pride to self-respect for an individual, saying that while too much national pride can lead to imperialism, “insufficient national pride makes energetic and effective debate about national policy unlikely.” He defended a vision of American national pride along the lines of Deweyan pragmatism and transcendentalist romanticism, of a nation of ever-expanding democratic vistas. Though radically different from the sort of national pride popular in right-wing xenophobic circles, it is a vision of national pride nonetheless, and as such it is not something with which I and many other advocates of open borders are sympathetic.

Further, and more relevant to our considerations, he viewed national identity as a tool for expanding the sort of liberal sentiments he wanted. As he wrote in Contingency, Irony, and Solidarity:

Consider, as a final example, the attitude of contemporary American liberals to the unending hopelessness and misery of the lives of the young blacks in American cities. Do we say these people must be helped because they are our fellow human beings? We may, but it is much more persuasive, morally as well as politically, to describe them as our fellow Americans—to insist that it is outrageous that an American should live without hope. The point of these examples is that our sense of solidarity is strongest when those with whom solidarity is expressed are thought of as “one of us,” where “us” means something smaller and more localized than the human race.

It is obvious why many critics of immigration restrictions would view this attitude as counterproductive. This type of description cannot be applied in many of the scenarios most relevant to questions of immigration. Liberalism, in the sense Rorty borrowed from Shklar (and also the sense which I think animates much of the interest in liberalized immigration policies), is an intense aversion to cruelty, concerned with ending cruelty as such. It wants to end cruelty whether it be the cruelty of the American government toward illegal immigrants or the suffering of native-born African-Americans as a result of centuries of cruelty by racists. This is surely something with which Rorty would agree, as he writes elsewhere in that same chapter:

[T]here is such a thing as moral progress and that progress is indeed in the direction of greater human solidarity. But that solidarity is not thought of as recognition of a core self, the human essence, in all human beings. Rather, it is thought of as the ability to see more and more traditional differences (of tribe, religion, race, customs, and the like) as unimportant when compared to the similarities with respect to pain and humiliation—the ability to think of people wildly different from ourselves in the range of ‘us.’

Surely, that moral progress doesn’t stop at the unimportant line of a national border. The problem is that appeals to national identity of the sort Rorty uses, or to mythologized national histories, do stop at the border.

Rorty is right that it is easier for people to feel a sense of solidarity with those with whom there are fewer traditional differences, and that no amount of appeal to metaphysical constructions of human rationality will fully eclipse that psychological fact. However, the problem with forms of solidarity built on national identity is that it is much easier for people to stop there. In modern pluralistic, cosmopolitan societies such as America, it is hard for someone to stop their sense of solidarity at religion, tribe, custom, and the like. This is because the minute they walk out the door of their home, the minute they arrive at their workplace, there is someone very close to them who does not fit that sense of solidarity yet for whom they would still feel some obligation, based simply on seeing the face of that person, on mere proximity.

Stopping the line at national identity is much easier, since many Americans, particularly those in the midwestern and southeastern states which gave Trump his presidency, rarely interact with non-nationals, while they are far more likely to interact with someone distant from them in other ways. While other forms of solidarity are unstable for most people because they are too localized, nationalism is stable because it is too general to be upset by the experience of others, yet not general enough to be compatible with liberalism. Moral progress, if we pursue Rorty’s explicitly nationalist project, will halt at the national borders, and his liberal project of ending cruelty will end with it. There is an inconsistency between Rorty’s liberalism and his belief in national pride.

Further, insisting “because they are American” leads people to ask what it means to “be American,” a question which can only be answered, even by Rorty in his description of American national pride, by contrast with what isn’t American (see his discussion of Europe in “American National Pride”). It makes it difficult to see suffering as the salient identifier for solidarity, and it makes other ‘traditional’ differences, those standing in the way of Rorty’s description of moral progress, seem more important than they should be. Indeed, this is exactly what we see in most xenophobic descriptions of foreigners as “not believing in American ideals.” Rorty’s very humble, liberalized version of national pride faces a serious danger of turning into the sort of toxic, illiberal nationalism we have seen in recent years.

Instead, we should substitute the description Rorty offers as motivating liberal help for African-Americans in the inner city, ‘because they are American,’ with the redescription Rorty uses elsewhere: ‘because they are suffering, and you too can suffer and have suffered in the past.’ This is a sentimental appeal which can apply to all who are suffering from cruelty, regardless of their national identity. It is more likely to make more and more other differences seem unimportant. As Rorty’s ideas on cultural identity politics imply, the goal should be to replace “identity”—including national identity—with empathy.

Thus, in commending Rorty’s sentimentalism to open borders advocates, I want to point out very clearly how it is both possible and necessary to separate appeals to solidarity and sentiment from nationalism in order to serve liberal ends. This means that the possibility of nationalist sentiments seeming acceptable on a non-rationalist ethics should not discourage those of us skeptical of nationalism from embracing and using sentimentalism’s concepts.

Sentimental Ethical Appeals and Liberalized Immigration

The application of this form of sentimental ethics for people who merely want to liberalize immigration should be obvious. Our first step needs to be to recognize that people’s tribalist sentiments aren’t going to be swayed by mere rationalist argumentation, which simply begs the question. Our second step needs to be to realize that what will ultimately convince them is not getting rid of people’s tribalist sentiments altogether, but redirecting them elsewhere. The goal should be to get people to see national identity as unimportant to those sentiments compared to other, more salient considerations, such as whether refugees and immigrants are suffering. The goal should be for nationalists to stop asking of immigrants “Are they going to be good Americans like me?” and start asking “Are they already people who, like me, have suffered?”

This does not mean that we stop making the good academic philosophical and economic arguments about how open migration could double global GDP and how rights should be recognized as not stopping at national identity—those are certainly convincing to the minority of us for whom tribalism isn’t an especially strong sentiment. However, it does mean we should also recognize the power of novels like Under the Feet of Jesus or images like the viral, graphic one of a Syrian refugee child who was the victim of a bombing which circulated last year. The knowledge that Anne Frank’s family was turned down by America for refugee status, the empathy for her family one gets from reading her diary, the fear that we are perpetuating that same cruelty today: these are far more convincing than appeals to Anne Frank’s natural rights in virtue of her rational faculties as a human being.

Appeals to our common humanity in terms of our “rational faculties” or “natural rights” or “utility functions” and the like are not nearly as convincing to people who aren’t philosophers or economists as appeals to the ability of people to suffer. Such an image and sentimental case is far more likely to cultivate a cosmopolitan solidarity than Lockean or Benthamite platitudes.

References:

Rorty, Richard. “American National Pride: Whitman and Dewey.” Achieving Our Country: Leftist Thought in Twentieth-Century America. Rpt. in The Rorty Reader. Ed. Christopher J. Voparil and Richard J. Bernstein. Malden: Wiley-Blackwell, 2010. 372-388. Print.

Rorty, Richard. “Human Rights, Rationality, and Sentimentality.” On Human Rights: The Oxford Amnesty Lectures. Rpt. in The Rorty Reader. Ed. Christopher J. Voparil and Richard J. Bernstein. Malden: Wiley-Blackwell, 2010. 352-372. Print.

Rorty, Richard. Contingency, Irony, and Solidarity. Cambridge: Cambridge University Press, 1999. Print.

 

The Deleted Clause of the Declaration of Independence

As a tribute to the great events that occurred 241 years ago, I wanted to recognize the importance of the unity of purpose behind supporting liberty in all of its forms. While an unequivocal statement of natural rights and the virtues of liberty, the Declaration of Independence also came close to bringing another vital aspect of liberty to the forefront of public attention. As has been addressed in multiple fascinating podcasts (Joe Janes, Robert Olwell), a censure of slavery and of George III’s connection to the slave trade appeared in the first draft of the Declaration.

Thomas Jefferson, a man often criticized for the inherent contradiction between his high morals and his active participation in slavery, was a major contributor to the popularizing of classical liberal principles. Many have pointed to his hypocrisy in that he owned over 180 slaves, fathered children with at least one of them, and did not free them in his will (because of his debts). Even given his personal slaveholding, Jefferson made his moral stance on slavery quite clear through his famous efforts toward ending the transatlantic slave trade, which exemplify early steps in securing the abolition of the repugnant institution of chattel slavery in America and applying classically liberal principles to all humans. However, abolition might have come far sooner, avoiding decades of appalling misery and its long-reaching effects, if his (hypocritical but principled) position had been adopted from the day of the USA’s first taste of political freedom.

This is the text of the deleted Declaration of Independence clause:

“He has waged cruel war against human nature itself, violating its most sacred rights of life and liberty in the persons of a distant people who never offended him, captivating and carrying them into slavery in another hemisphere or to incur miserable death in their transportation thither. This piratical warfare, the opprobrium of infidel powers, is the warfare of the Christian King of Great Britain. Determined to keep open a market where Men should be bought and sold, he has prostituted his negative for suppressing every legislative attempt to prohibit or restrain this execrable commerce. And that this assemblage of horrors might want no fact of distinguished die, he is now exciting those very people to rise in arms among us, and to purchase that liberty of which he has deprived them, by murdering the people on whom he has obtruded them: thus paying off former crimes committed against the Liberties of one people, with crimes which he urges them to commit against the lives of another.”

The Second Continental Congress, influenced by the hardline votes of South Carolina and the desire to avoid alienating potential sympathizers in England, slaveholding patriots, and the harbor cities of the North that were complicit in the slave trade, dropped this vital statement of principle.

The removal of the anti-slavery clause of the Declaration was not the only time Jefferson’s efforts might have led to an early end of the “peculiar institution.” Economist and cultural historian Thomas Sowell notes that Jefferson’s 1784 anti-slavery bill, which had the votes to pass but failed because a single ill legislator was absent from the floor, would have barred the expansion of slavery into any newly admitted states years before the Constitution’s infamous three-fifths compromise. One wonders whether America would have seen a secessionist movement or Civil War, and how the economies of states from Alabama and Florida to Texas would have developed without slave labor, which in some states and counties constituted the majority of the population.

These ideas form a core moral principle for most Americans today, but they are not hypothetical or irrelevant to modern debates about liberty. Though America and the broader Western world have brought the slavery debate to an end, the larger world has not; though every country has officially made enslavement a crime (true only since 2007), many within the highest levels of government aid and abet the practice. An estimated 30 million individuals around the world suffer under the same types of chattel slavery seen millennia ago, including in nominal US allies in the Middle East. The debates between the pursuit of non-intervention as a form of freedom and the defense of the liberty of others as a form of freedom have been consistently important since the 1800s (or arguably earlier), and I think it is vital that these discussions continue in the public forum. I hope that this 4th of July reminds us that liberty is not just a distant concept, but a set of values that requires constant support, intellectual nurturing, and pursuit.

A Right is Not an Obligation

Precision of language in matters of science is important. Speaking recently with some fellow libertarians, I got into an argument about the nature of rights. My position: A right does not obligate anyone to do anything. Their position: Rights are the same thing as obligations.

My response: But if a right is the same thing as an obligation, why use two different words? Doesn’t it make more sense to distinguish them?

So here are the definitions I’m working with. A right is what is “just” or “moral”, as those words are normally defined. I have a right to choose which restaurant I want to eat at.

An obligation is what one is compelled to do by a third party. I am obligated to sell my car to Alice at a previously agreed-upon price or else Bob will come and take my car away from me using any means necessary.

Let’s think through an example. Under a strict interpretation of libertarianism, a mother with a starving child does not have the right to steal bread from a baker. But if she does steal the bread, then what? Do the libertarian police instantly swoop down from Heaven and give the baker his bread back?

Consider the baker. The baker indeed has a right to keep his bread. But he is under no obligation to get his bread back should it be stolen. The baker could take pity on the mother and let her go. Or he could calculate that the cost of having one loaf stolen is too low to justify expending resources to get it back.

Let’s now analyze the bedrock of libertarianism, the nonaggression principle (NAP). There are several formulations. Here’s one: “no one has a right to initiate force against someone else’s person or property.” Here’s a more detailed version, from Walter Block: “It shall be legal for anyone to do anything he wants, provided only that he not initiate (or threaten) violence against the person or legitimately owned property of another.”

A natural question to ask is: what happens if someone does violate the NAP? One common answer is that the victim of the aggression then has a right to use force to defend himself. But note again, the right does not imply an obligation. Just because someone initiates force against you does not mean that you or anyone else is obligated to respond. Pacifism is consistent with libertarianism.

Consider another example. Due to a strange series of coincidences, you find yourself lost in the woods in the middle of a winter storm. You come across an unoccupied cabin that’s obviously used as a summer vacation home. You break in, and help yourself to some canned beans and shelter, and wait out the storm before going for help.

Did you have a right to break into the cabin? Under some strict interpretations of libertarianism, no. But even if this is true, all it means is that the owners of the cabin have the right, but not obligation, to use force to seek damages from you after the fact. (They also had the right to fortify their cabin in such a way that you would have been prevented from ever entering.) But they may never exercise that right; you could ask for forgiveness and they might grant it.

Furthermore, under a pacifist anarchocapitalist order, the owners might not even use force when seeking compensation. They might just ask politely; and if they don’t like your excuses, they’ll simply leave a negative review with a private credit agency (making it harder for you to get loans, jobs, etc.).

The nonaggression principle, insofar as it is strictly about rights (and not obligations), is about justice. It is not about compelling people to do anything. Hence, I propose a new formulation of the NAP: using force to defend yourself from initiations of force can be consistent with justice.

This formulation makes clear that using force is a choice. Initiating force does not obligate anyone to do anything. And “excessive force” in response may itself be an injustice.

In short, justice does not require force.

Highly recommended work on Ayn Rand

Most scholarship on Ayn Rand has been of mediocre quality, according to Gregory Salmieri, co-editor of A Companion to Ayn Rand, part of the series “Blackwell Companions to Philosophy.” The other co-editor of the volume is the late Allan Gotthelf, who died during its last preparatory stages.

The reasons for the poor scholarship are diverse. Of course, Rand herself is a large part of the explanation. She hardly ever participated in regular academic procedures, did not tolerate normal academic criticism of her work, and strictly limited the number of people who could authoritatively ‘explain’ her Objectivist philosophy to herself and Nathaniel Branden. Before her death she appointed Leonard Peikoff as her ‘literary heir.’ She inspired fierce combat against the outside world among her closest followers, especially when others wrote about Rand in a way not to their liking. The result was that just a small circle of admirers wrote about her ideas, often in a non-critical way.


On the other hand, the ‘rest of the academy’ basically ignored her views, despite her continued popularity (especially in the US), her influence, particularly through her novels, and her large sales, especially after the economic crisis of 2008. For sure, Objectivists remain a minority both inside and outside academia. Yet despite the strong disagreement with her ideas, it would still be normal to expect regular academic output by non-Randians on her work. Suffice it to point to the many obscure thinkers who have been elevated to the academic mainstream over the centuries. Yet Rand remains in the academic dark; the bias against her work is strong and influential. This said, a slight change is visible. Some major presses have published books on Rand in the past years, prime examples being Jennifer Burns, Goddess of the Market: Ayn Rand and the American Right (2009), and Anne C. Heller, Ayn Rand and the World She Made (2010). This volume is another case in point.

One of the strong points of A Companion to Ayn Rand is that the contributions meet all regular academic standards, despite the fact that the volume originates from the Randian inner circle. It offers proper explanation and analysis of her ideas and normal engagement with outside criticism. What little direct attack there is on the interpretations or alleged errors of others is left to the endnotes, albeit sometimes at length. Let us say, in friendly fashion, that it proves hard to get rid of old habits!

This should not detract from the extensive, detailed, clearly written, and plainly good quality of the 18 chapters in this companion, divided into 8 parts, covering overall context, ethics and human nature, society, the foundations of Objectivism, philosophers and their effects, art, and a coda on the hallmarks of Objectivism. The only disadvantage is the large number of references to her two main novels, The Fountainhead and Atlas Shrugged, which makes some acquaintance with these tomes almost a prerequisite for a great learning experience. Still, as a non-Randian doing work on her political ideas, I underline that this companion offers academically sound information and analysis about the full range of Rand’s ideas. So go read it if you are interested in this fascinating thinker.

The death of reason

“In so far as their only recourse to that world is through what they see and do, we may want to say that after a revolution scientists are responding to a different world.”

Thomas Kuhn, The Structure of Scientific Revolutions, p. 111

I can remember arguing with my cousin right after Michael Brown was shot. “It’s still unclear what happened,” I said, “based solely on testimony” — at that point, we were still waiting on the federal autopsy report by the Department of Justice. He said that in the video, you can clearly see Brown, back to the officer and with his hands up, as he is shot up to eight times.

My cousin doesn’t like police. I’m more ambivalent, but I’ve studied criminal justice for a few years now, and I thought that if both of us watched this video (no such video actually existed), it was probably I who would have the more nuanced grasp of what happened. So I said: “Well, I will look up this video, try and get a less biased take and get back to you.” He replied, sarcastically, “You can’t watch it without bias. We all have biases.”

And that seems to be the sentiment of the times: bias encompasses the human experience; it subsumes all judgments and perceptions. Biases are so rampant, in fact, that no objective analysis is possible. These biases may be cognitive, like confirmation bias, emotional fallacies, or the phenomenon of constructive memory; or inductive, like selectivity or ignoring base rates; or, as has been common to think, ingrained into experience itself.

The thing about biases is that they are open to psychological evaluation. There are precedents for eliminating them. For instance, one common explanation of racism is that familiarity breeds acceptance and unfamiliarity breeds intolerance (as Reason points out, people further from fracking sites have more negative opinions of the practice than people closer to them). So to curb racism (a sort of bias), children should interact with people outside of their singular ethnic group. More clinical methodologies seek to transform mental functions that are automatic into controlled ones, and thereby introduce reflective measures into perception, reducing bias. Apart from these, there is that ancient Greek practice of reasoning, wherein patterns and evidence are used to generate logical conclusions.

If it were true that human bias is all-encompassing, and essentially insurmountable, the whole concept of critical thinking would go out the window. Not only would we lose the critical-rationalist, Popperian mode of discovery, but also Socratic dialectic, as essentially “higher truths” disappear from the human lexicon.

The belief that biases are intrinsic to human judgment ignores psychological and philosophical methods to counter prejudice because it posits that objectivity itself is impossible. This viewpoint has been associated with “postmodern” schools of philosophy, such as those Dr. Rosi commented on (e.g., Derrida, Lacan, Foucault, Butler), although it’s worth pointing out that the analytic tradition, with its origins in Frege, Russell, and Moore, represents a far greater break from the previous, modern tradition of Descartes and Kant, and often reached conclusions similar to those of the Continentals.

Although theorists of the “postmodern” clique produced diverse claims about knowledge, society, and politics, the most famous figures are almost always associated with or incorporated into the political left. To make a useful simplification of viewpoints: it would seem that progressives have generally accepted Butlerian non-essentialism about gender and Foucauldian terminology (discourse and institutions). Derrida’s poststructuralist critique noted dichotomies and also claimed that the philosophical search for Logos has been patriarchal, almost neoreactionary. (The month before Donald Trump’s victory, searches for the word patriarchy reached an all-time high on Google.) It is not just a far-right conspiracy that European philosophers with strange theories have influenced and sought to influence American society; it is patent in the new political language.

Some people think of the postmodernists as all social constructivists, holding the theory that many of the categories and identifications we use in the world are social constructs without a human-independent nature (e.g., not natural kinds). Disciplines like anthropology and sociology have long since dipped their toes into these waters, and the broader academic community, too, accepts that things like gender and race are social constructs. But the ideas can and do go further: on this view, “facts” themselves are open to interpretation; to even assert a “fact” is just to affirm power of some sort. This worldview subsequently degrades the status of science into an extended apparatus for confirmation bias, filling out the details of a committed ideology rather than providing us with new facts about the world. There can be no objectivity outside of a worldview.

Even though philosophy took a naturalistic turn with the philosopher W. V. O. Quine, seeing itself as integrating with and working alongside science, the criticisms of science as an establishment that emerged in the 1950s and 60s (and earlier) often disturbed its unique epistemic privilege in society: ideas that theory is underdetermined by evidence, that scientific progress is nonrational, that unproven auxiliary hypotheses are required to conduct experiments and form theories, and that social norms play a large role in the process of justification all damaged the mythos of science as an exemplar of human rationality.

But once we have dismantled Science, what do we do next? Some critics have held up Nazi German eugenics and phrenology as examples of the damage that science can do to society (never mind that we now consider them pseudoscience). Yet Lysenkoism and the history of astronomy and cosmology indicate that suppressing scientific discovery can be deleterious too. The Austrian physicist and philosopher Paul Feyerabend instead wanted a free society — one where science had equal power with older, more spiritual forms of knowledge. He thought the model of rational science exemplified by Sir Karl Popper was inapplicable to the real machinery of scientific discovery, and that the only methodological rule we could impose on science was: “anything goes.”

Feyerabend’s views are almost a caricature of postmodernism, although he denied the label “relativist,” opting instead for philosophical Dadaist. In his pluralism, there is no hierarchy of knowledge, and state power can even be introduced when necessary to break up scientific monopoly. Feyerabend, contra scientists like Richard Dawkins, thought that science was like an organized religion and therefore supported a separation of church and state as well as a separation of state and science. Here is a move forward for a society that has started distrusting the scientific method… but if this is what we should do post-science, it’s still unclear how to proceed. There are still queries for anyone who loathes the hegemony of science in the Western world.

For example, how does the investigation of crimes proceed without strict adherence to the latest scientific protocol? Presumably, Feyerabend didn’t want to privatize law enforcement, but science and the state are very intricately connected. In 2005, Congress authorized the National Academy of Sciences to form a committee and conduct a comprehensive study on contemporary legal science to identify community needs, evaluating laboratory executives, medical examiners, coroners, anthropologists, entomologists, odontologists, and various legal experts. Forensic science — scientific procedure applied to the field of law — exists for two practical goals: exoneration and prosecution. However, the Forensic Science Committee revealed that severe issues riddle forensics (e.g., bite mark analysis), and in its list of recommendations the top priority is establishing an independent federal entity to devise consistent standards and enforce regular practice.

For top scientists, this sort of centralized authority seems necessary to produce reliable work, and it entirely disagrees with Feyerabend’s emphasis on methodological pluralism. Barack Obama formed the National Commission on Forensic Science in 2013 to further investigate problems in the field, and only recently Attorney General Jeff Sessions said the Department of Justice will not renew the commission. It’s unclear now what forensic science will do to resolve its ongoing problems, but what is clear is that the American court system would fall apart without the possibility of appealing to scientific consensus (especially forensics), and that the only foreseeable way to solve the existing issues is through stricter methodology. (Just as with McDonald’s, there are enforced standards so that the product is consistent wherever one orders.) More on this later.

So it doesn’t seem to be in the interest of things like due process to abandon science or completely separate it from state power. (It does, however, make sense to move forensic laboratories out from under direct administrative control, as the NAS report notes in Recommendation 4; this is specifically to reduce bias.) In a culture where science is viewed as irrational, Eurocentric, ad hoc, and polluted with ideological motivations — or where Reason itself is seen as a particular hegemonic, imperial device to suppress different cultures — not only do we not know what to do, but when we try to do things we lose elements of our civilization that everyone agrees are valuable.

Although Aristotle separated pathos, ethos, and logos (adding that all inform each other), later philosophers like Feyerabend thought of reason as a sort of “practice,” with a history and connotations like any other human activity, falling far short of the sublime. One could no more justify reason outside of its European cosmology than the sacrificial rituals of the Aztecs outside of theirs. To communicate across paradigms (Kuhn’s terminology), participants have to understand each other on a deep level, even becoming entirely new persons. When debates happen, they must happen on a principle of mutual respect and curiosity.

From this one can detect a bold argument for tolerance. Indeed, Feyerabend was heavily influenced by John Stuart Mill’s On Liberty. Maybe, in a world disillusioned with scientism and objective standards, the next cultural move is multilateral acceptance of, and tolerance for, each other’s ideas.

This has not been the result of postmodern revelations, though. The 2016 election featured the victory of one psychopath over another, from two camps utterly consumed with vitriol for each other. Between Bernie Sanders, Donald Trump, and Hillary Clinton, Americans drifted toward radicalization as the only establishment candidate seemed to offer the same noxious, warmongering mess of the previous few decades of administration. Politics has only polarized further since the inauguration. The alt-right, a nearly perfect symbol of cultural intolerance, is regular news for mainstream media. Trump acolytes physically brawl with black bloc Antifa in the same city as the 1960s Free Speech Movement. It seems to be worst at universities. Analytic feminist philosophers asked for the retraction of a controversial paper, seemingly without reading it. Professors even get involved in student disputes, at Berkeley and more recently Evergreen. The names each side uses to attack the other (“fascist,” most prominently) — sometimes accurate, usually not — display a political divide between groups that increasingly refuse to argue their own side and prefer silencing their opposition.

There is not a tolerant left or a tolerant right any longer, in the mainstream. We are witnessing only shades of authoritarianism, eager to destroy each other. And what is obvious is that the theories and tools of the postmodernists (post-structuralism, social constructivism, deconstruction, critical theory, relativism) are as useful for reactionary praxis as for their usual role in left-wing circles. Says Casey Williams in the New York Times: “Trump’s playbook should be familiar to any student of critical theory and philosophy. It often feels like Trump has stolen our ideas and weaponized them.” The idea of the “post-truth” world originated in postmodern academia. It is the monster turning against Doctor Frankenstein.

Moral (cultural) relativism in particular promises only the rejection of our shared humanity. It paralyzes our judgment on female genital mutilation, flogging, stoning, human and animal sacrifice, honor killing, caste, and the underground sex trade. The afterbirth of Protagoras, cruelly resurrected once again, does not promise trials at Nuremberg, where the Allied powers appealed to something above and beyond written law to exact judgment on mass murderers. It does not promise justice for the ethnic cleansers in Srebrenica, as the United Nations is helpless to impose a tribunal from outside Bosnia-Herzegovina. Today, this moral pessimism laughs at the phrase “humanitarian crisis,” and at Western efforts to change the material conditions of fleeing Iraqis, Afghans, Libyans, Syrians, Venezuelans, North Koreans…

In the absence of universal morality, and the introduction of subjective reality, the vacuum will be filled with something much more awful. And we should be afraid of this because tolerance has not emerged as a replacement. When Harry Potter first encounters Voldemort face-to-scalp, the Dark Lord tells the boy “There is no good and evil. There is only power… and those too weak to seek it.” With the breakdown of concrete moral categories, Feyerabend’s motto — anything goes — is perverted. Voldemort has been compared to Plato’s archetype of the tyrant from the Republic: “It will commit any foul murder, and there is no food it refuses to eat. In a word, it omits no act of folly or shamelessness” … “he is purged of self-discipline and is filled with self-imposed madness.”

Voldemort is the Platonic appetite in the same way he is the psychoanalytic id. Freud’s das Es is able to admit of contradictions, to violate Aristotle’s fundamental laws of logic. It is so base, and removed from the ordinary world of reason, that it follows its own rules we would find utterly abhorrent or impossible. But it is not difficult to imagine that the murder of evidence-based reasoning will result in Death Eater politics. The ego is our rational faculty, adapted to deal with reality; with the death of reason, all that exists is vicious criticism and unfettered libertinism.

Plato predicts Voldemort with the image of the tyrant, and also with one of his primary interlocutors, Thrasymachus, when the sophist opens with “justice is nothing other than the advantage of the stronger.” The one thing Voldemort admires about The Boy Who Lived is his bravery, the trait they share in common. This trait is missing in his Death Eaters. In the fourth novel the Dark Lord is cruel to his reunited followers for abandoning him and losing faith; their cowardice reveals the fundamental logic of his power: his disciples are not true devotees, but opportunists, weak on their own merit and drawn like moths to every Avada Kedavra. Likewise, students flock to postmodern relativism to justify their own beliefs when the evidence is an obstacle.

Relativism gives us moral paralysis, allowing in darkness. Another possible move after relativism is supremacy. One look at Richard Spencer’s Twitter demonstrates the incorrigible tenet of the alt-right: the alleged incompatibility of cultures, ethnicities, races: that different groups of humans simply cannot get along together. Die Endlösung, the Final Solution, is not about extermination anymore, but segregated nationalism. Spencer’s audience is almost entirely men who loathe the current state of things, who share far-reaching conspiracy theories, and who despise globalism.

The left, too, creates conspiracies, imagining a bourgeois corporate conglomerate that enlists economists and brainwashes through history books to normalize capitalism; for this reason they despise globalism as well, saying it impoverishes other countries or destroys cultural autonomy. For the alt-right, it is the Jews, and George Soros, who control us; for the burgeoning socialist left, it is the elites, the one-percent. Our minds are not free; fortunately, they will happily supply Übermenschen, in the form of statesmen or critical theorists, to save us from our degeneracy or our false consciousness.

Without a commitment to reasoned debate, tribalism has continued the polarization and the lack of humility. Each side also accepts science selectively, if it does not question its very justification. The privileged status that the “scientific method” maintains in polite society is denied when convenient; whether it’s climate science, evolutionary psychology, sociology, genetics, biology, anatomy or, especially, economics, one side or the other rejects it outright, without studying the material enough to immerse itself in what could be promising knowledge (as Feyerabend urged, and as the breakdown of rationality could have encouraged). And ultimately, equal protection, one tenet of individualist thought that allows for multiplicity, is entirely rejected by both: we should be treated differently as humans, often because of the color of our skin.

Relativism and carelessness about standards and communication have given us supremacy and tribalism. They have divided rather than united. Voldemort’s chaotic violence is one possible outcome of rejecting reason as an institution, and it beckons to either political alliance. Are there any examples in Harry Potter of the alternative, Feyerabendian tolerance? Not quite. However, Hermione Granger serves as the Dark Lord’s foil, and gives us a model of reason that is not as archaic as the enemies of rationality would like to suggest. In Against Method (1975), Feyerabend compares different ways rationality has been interpreted alongside practice: in an idealist way, in which reason “completely governs” research, or a naturalist way, in which reason is “completely determined by” research. Taking elements of each, he arrives at an intersection in which each can change the other, both “parts of a single dialectical process.”

“The suggestion can be illustrated by the relation between a map and the adventures of a person using it or by the relation between an artisan and his instruments. Originally maps were constructed as images of and guides to reality and so, presumably, was reason. But maps, like reason, contain idealizations (Hecataeus of Miletus, for example, imposed the general outlines of Anaximander’s cosmology on his account of the occupied world and represented continents by geometrical figures). The wanderer uses the map to find his way but he also corrects it as he proceeds, removing old idealizations and introducing new ones. Using the map no matter what will soon get him into trouble. But it is better to have maps than to proceed without them. In the same way, the example says, reason without the guidance of a practice will lead us astray while a practice is vastly improved by the addition of reason.” p. 233

Christopher Hitchens pointed out that Granger sounds like Bertrand Russell at times, as in this quote about the Resurrection Stone: “You can claim that anything is real if the only basis for believing in it is that nobody has proven it doesn’t exist.” Granger is often the embodiment of anemic analytic philosophy, the institution of order, a disciple for the Ministry of Magic. However, though initially law-abiding, she quickly learns with Potter and Weasley the pleasures of rule-breaking. From the first book onward, she is constantly at odds with the de facto norms of the school, becoming more rebellious as time goes on. It is her levelheaded foundation, combined with her ability to transgress rules, that gives her an astute semi-deontological, semi-utilitarian calculus capable of saving the lives of her friends from the dark arts, and helping to defeat the tyranny of Voldemort foretold by Socrates.

Granger presents a model of reason like Feyerabend’s map analogy. Although pure reason gives us an outline of how to think about things, it is not a static or complete blueprint, and it must be fleshed out with experience, risk-taking, discovery, failure, loss, trauma, pleasure, offense, criticism, and occasional transgressions past the foreseeable limits. Adding these addenda to our heuristics means that we explore a more diverse account of thinking about things and moving around in the world.

When reason is increasingly seen as patriarchal, Western, and imperialist, the only thing consistently offered as a replacement is something like lived experience. Some form of this idea is at least a century old, going back to Husserl, and still modest by reason’s Greco-Roman standards. Yet lived experience has always been pivotal to reason; we only need adjust our popular model. And we can see that we need not reject one or the other entirely. Another critique of reason says it is foolhardy, limiting, antiquated; this is a perversion of its abilities, and plays to justify the first criticism. We can see that there is room within reason for other pursuits and virtues, picked up along the way.

The emphasis on lived experience, which predominantly comes from the political left, is also antithetical to the cause of “social progress.” Those sympathetic to social theory, particularly the cultural leakage of the strong programme, are constantly torn between claiming (a) that science is irrational, and can thus be countered by lived experience (or whatnot), or (b) that science may be rational but reason itself is a tool of patriarchy and white supremacy and cannot be universal. (If you haven’t seen either of these claims very frequently, and think them a strawman, you have not been following university protests and editorials. Or radical Twitter: ex., ex., ex., ex.) Of course, as in Freud, this is an example of kettle-logic: the signal of a very strong resistance. We see, though, that we need not accept nor deny these claims and lose anything. Reason need not be stagnant nor all-pervasive, and indeed we’ve been critiquing its limits since 1781.

Outright denying the process of science — whether the model is conjectures and refutations or something less stale — ignores that there is no single uniform body of science. Denial also dismisses the most powerful tool we have for making difficult empirical decisions. Michael Brown’s death was instantly a political affair, with implications for broader social life. The event has completely changed the face of American social issues. The first autopsy report, from St. Louis County, indicated that Brown was shot at close range in the hand during an encounter with Officer Darren Wilson. A second, independent report commissioned by the family concluded the first shot had not in fact been at close range. After the disagreement with my cousin, the Department of Justice released the final investigation report, which determined that material in the hand wound was consistent with gun residue from an up-close encounter.

Prior to the report, the best evidence available as to what happened in Missouri on August 9, 2014, was the ground footage after the shooting and testimonies from the officer and Ferguson residents at the scene. There are two ways to approach the incident: reason or lived experience. The latter route will lead to ambiguities. Brown’s friend Dorian Johnson and another witness reported that Officer Wilson fired his weapon first at range, under no threat, then pursued Brown out of his vehicle, until Brown turned with his hands in the air to surrender. However, in the St. Louis grand jury half a dozen (African-American) eyewitnesses corroborated Wilson’s account: that Brown did not have his hands raised and was moving toward Wilson. In which direction does “lived experience” tell us to go, then? A new moral maxim — the duty to believe people — will lead to no non-arbitrary conclusion. (And a duty to “always believe x,” where x is a closed group, e.g. victims, will put the cart before the horse.) It appears that, in a case like this, treating evidence as objective is the only solution.

Introducing ad hoc hypotheses, e.g., that the Justice Department and the county examiner are corrupt, shifts the approach into one that uses induction, and leaves behind lived experience (and also ignores how forensic anthropology is actually done). This is the introduction of, indeed, scientific standards. (By looking at incentives for lying it might also employ findings from public choice theory, psychology, behavioral economics, etc.) So the personal experience method creates unresolvable ambiguities, and presumably will eventually grant some allowance to scientific procedure.

If we don’t posit a baseline rationality — Hermione Granger pre-Hogwarts — our ability to critique things at all disappears. Utterly rejecting science and reason, denying objective analysis in the presumption of overriding biases, breaking down naïve universalism into naïve relativism — these are paths to paralysis on their own. More than that, they are hysterical symptoms, because they often create problems out of thin air. Recently, a philosopher and a mathematician submitted a hoax paper, Sokal-style, to a peer-reviewed gender studies journal in an attempt to demonstrate what they see as a problem “at the heart of academic fields like gender studies.” The idea was to write a nonsensical, postmodernish essay; if the journal accepted it, that would indicate the field is intellectually bankrupt. Andrew Smart at Psychology Today instead wrote of the prank: “In many ways this academic hoax validates many of postmodernism’s main arguments.” And although Smart makes some informed points about problems in scientific rigor as a whole, he doesn’t hint at what the validation of postmodernism entails: should we abandon standards in journalism and scholarly integrity? Is the whole process of peer review functionally untenable? Should we start embracing papers written without any intention of making sense, to look at knowledge concealed below the surface of jargon? The paper, “The conceptual penis,” doesn’t necessarily condemn the whole of gender studies; but, against Smart’s reasoning, we do in fact know that counterintuitive or highly heterodox theory is considered perfectly average.

There were other attacks on the hoax, from Slate, Salon, and elsewhere. The criticisms, often valid for the particular essay, typically didn’t move the conversation far enough. There is much more to this discussion. A 2006 paper from the International Journal of Evidence Based Healthcare, “Deconstructing the evidence-based discourse in health sciences,” called the use of scientific evidence “fascist.” In the abstract the authors state their allegiance to the work of Deleuze and Guattari. Real Peer Review, a Twitter account that collects abstracts from scholarly articles, regularly features essays from departments of women’s and gender studies, including a recent one from a Ph.D. student wherein the author identifies as a hippopotamus. Sure, the recent hoax paper doesn’t really say anything, but it intensifies this much-needed debate. It brings out these two currents — reason and the rejection of reason — and demands a solution. And we know that lived experience is often going to be inconclusive.

Opening up lines of communication is a solution. One valid complaint is that gender studies seems too insulated, in a way in which chemistry, for instance, is not. Critiquing a whole field does ask us to genuinely immerse ourselves first, and this is a step toward tolerance: it is a step past the death of reason and the denial of science. It is a step that requires opening the bubble.

The modern infatuation with human biases, as well as Feyerabend’s epistemological anarchism, upsets our faith in prevailing theories and in the idea that our policies and opinions should be guided by the latest discoveries from an anonymous laboratory. Putting politics first and assuming subjectivity is all-encompassing, we move past objective measures for comparing belief systems and theories. However, isn’t the whole operation of modern science designed to work within our means? Kant’s system set limits on human rationality, and most science is aligned with an acceptance of fallibility. As Harvard cognitive scientist Steven Pinker says, “to understand the world, we must cultivate work-arounds for our cognitive limitations, including skepticism, open debate, formal precision, and empirical tests, often requiring feats of ingenuity.”

Pinker goes so far as to advocate for scientism. Others need not; but we must understand an academic field before utterly rejecting it. We must think we can understand each other, and live with each other. We must think there is a baseline framework that allows permanent cross-cultural correspondence — a shared form of life which means a Ukrainian can interpret a Russian and a Cuban an American. The rejection of Homo sapiens commensurability, championed by people like Richard Spencer and those in identity politics, is a path to segregation and supremacy. We must reject Gorgian nihilism about communication, and the Presocratic relativism that camps our moral judgments in inert subjectivity. From one Weltanschauung to the next, our common humanity — which endures class, ethnicity, sex, gender — allows open debate across paradigms.

In the face of relativism, there is room for a nuanced middle ground between Pinker’s scientism and the rising anti-science, anti-reason philosophy; Paul Feyerabend has sketched out a basic blueprint. Rather than condemning reason as a Hellenic germ of Western cultural supremacy, we need only adjust the theoretical model to incorporate the “new America of knowledge” into our critical faculty. It is the raison d’être of philosophers to present complicated things in a more digestible form; to “put everything before us,” as Wittgenstein says. Hopefully, people can reach their own conclusions, and embrace the communal human spirit as they do.

However, this may not be so convincing. It might be true that we have a competition of cosmologies: one that believes in reason and objectivity, one that thinks reason is callow and all things are subjective. These two perspectives may well be incommensurable. If I try to defend reason, I invariably must appeal to reasons, and thus argue circularly. If I try to claim “everything is subjective,” I make a universal statement, and simultaneously contradict myself. Between begging the question and contradicting oneself, there is not much indication of where to go. Perhaps we just have to look at history, note the results of either course when it has been applied, and take that as a rhetorical indication of which path to follow.