Why I’m No Longer a Christian: An Autobiographical/ Philosophical/ Therapeutic Explanation to Myself

Note: This was written about 18 months ago and posted on my now-defunct blog. I figure it might be worth reposting, mostly for posterity.

Throughout most of my youth, like the majority of middle-class Americans, I was raised as a Christian. As an argumentative and nerdy teenager, I dedicated much of my intellectual energy throughout adolescence to fervent apologetics for the Christian faith. In my eyes, I was defending some deep, correspondent truth about the Lord. Today, I realize that was mostly youthful self-deception. I was trying to reconcile beliefs I had committed to, epistemically and personally, because of my social situation with the experiences of the modern world into which I was thrust. There is nothing wrong with my attempts to find some reason to cling to my contingent religious beliefs, and there is nothing wrong with people who succeed in that endeavor, but it was wrong for me to think I was doing anything more than that: dogmatically defending eternal truths I believed I knew with certainty through faith.

As the title of this post suggests, my quest to make my religious beliefs work was ultimately unsuccessful, or at least has been up until this point (I’m not arrogant enough to assume I’ve reached the end of my spiritual/religious journey). For a variety of personal and intellectual reasons, I have since become a sort of agnostic/atheist in the mold of Nietzsche, or more accurately James (not Dawkins). Most of the point of this post is to spell out, for my own therapeutic reasons, the philosophical and personal paths that led to the religious beliefs I have now at the young age of twenty. To the readers: this is ultimately a selfish post in that the target audience is myself, both present and future. Nonetheless, I hope you enjoy this autobiographical/religious/philosophical mind vomit. Please read it as you would a novel—albeit a poorly written one—and not as a philosophical or religious treatise.

Perhaps the best place to start is at the beginning of my childhood. But to understand that, I guess it’s better to start with my mother and father’s upbringings. My mother came from an intensely religious Baptist household with a mother who, to be blunt, used religion as a manipulative tool to the point of abuse. If her children disobeyed her, it was obviously the influence of Satan. Of course, any popular culture throughout my mother’s childhood was regarded as the work of Satan. I’ll spare you the details, but the upshot is that this caused my mother some religious struggles that I inherited. My father came from a sincere though not devoted Catholic family. For much of my father’s late adolescence and young adulthood, religion took a backseat. When my parents met, my father was an agnostic. He converted to Christianity by the time they married, but his religious beliefs were always more intimately personal and connected with his individual, private pursuit of happiness than anything else—a fact that has profoundly influenced the way I think about religion as a whole.

Though neither of my parents were at all interested in shoving religion down my throat, I kind of shoved it down my own throat as a child. I was surrounded by evangelical—for want of a better word—propaganda, as we mostly attended non-denominational, moderately evangelical churches throughout my childhood. My mother mostly sheltered me from my grandmother’s abuses of religion, and she reacted appropriately to her own mother’s excesses by trying to center my religious upbringing on examples of God’s love. However, her struggles with religion still had an impact on me, as she wavered between her adult commitment to an image of an all-loving deity and the remnants of her mother’s conception of God as the angry, vengeful, jealous God of the Old Testament. She never explicitly expressed the latter conception, but it was implicit, just subtly enough for my young mind to notice, in the way some of the churches we chose in my youth expressed the Gospel.

At the age of seven, we moved from Michigan to the heart of the Bible Belt in Lynchburg, VA, home to one of the largest evangelical colleges in the world: Liberty University. Many of the churches we attended in Virginia had Liberty professors as youth leaders, ministers, and the like, so Jerry Falwell’s Southern Baptist conception of God, which aligned closely with my grandmother’s, was an influence on me through my early teenage years. Naturally, religion was closely linked with the political issues of the day. God blessed Bush’s war in Iraq, homosexuality was an abhorrent sin, abortion was murder, and nonsense like that was fed to me. Of course, evolution was an atheist lie, and I remember watching creationist woo lectures with my mother while she was taking an online biology course from Liberty (she isn’t a creationist, for the record, and her major was psychology, which Liberty taught well enough).

Though my experiences certainly weren’t as extreme, some of the scenes in the documentary Jesus Camp are vaguely like ones I had around this time. I was an odd kid who got interested in these serious “adult” issues at the age of nine, while most of my friends were watching cartoons, so I swallowed the stock evangelical stance hook, line, and sinker. But there was a contradiction between my mother’s reservations about an angry God and her refusal (thanks to the influence of her own mother) to push my religious beliefs in any direction, my father’s general silence about religious issues unless the conversation got personal or political, and the strong evangelical rhetoric the culture around me was spewing.

Around seventh grade, we moved from Virginia to another section of the Bible Belt: Tennessee. Through my early high school years, my interest in evangelical apologetics mostly continued. However, religion mostly took a backseat to my political views. With the beginning of the recession, I became far more interested in economics: I wanted an explanation for why there were tents with homeless people living in them on that hill next to Lowe’s. My intellectual journey in economics is a topic for another day, but generally, the political component of my religious views was slowly becoming less and less salient. I became more apathetic about social issues and more focused on economic issues.

It was around this time I also became skeptical of the theologically justified nationalistic war-mongering fed to me by the Liberty crowd in Virginia. We lived near Ft. Campbell, and I had the displeasure of watching family after family of my friends ruined because their dad went to Afghanistan and didn’t come back the same, or didn’t come back at all. The whole idea of war just seemed cruel and almost unjustifiable to me, and even though I still spouted the conservative line on it externally, I was internally torn. I would say I was beginning to subconsciously reject Christianity’s own ontology of violence (apologies to Milbank).

It was also around this time, ninth grade, that I began reading the Old Testament more systematically. War is a common theme throughout the whole thing, and all I could think of as I read about the conquests of Israel, the slaying of the Amalekites, the book of Job, and the like were my personal experiences with my friends who were deeply affected by the wars in Iraq and Afghanistan. At this point, I began to feel skepticism and doubt about how God could justly wage war and commit cruel mass killings in Biblical times.

Around tenth grade, I became immaturely interested in philosophy. I’m ashamed to admit it today, but Ayn Rand was my gateway drug to what has become an obsession of mine up to now. I loved elements of Rand’s ethics, her individualism, her intense humanism (which I still appreciate on some level), and of course her economics (which I also still appreciate, though it is oversimplified). But her polemics against religion and her simplistic epistemological opposition between faith and reason put me in an odd position. What was I, a committed evangelical Christian, to do with my affinities with Rand? Naturally, I should’ve turned to Aquinas, whose arguments for the existence of God and whose unification of faith and reason I can now appreciate. However, at the time, I instead had the misfortune of turning to Descartes, whose rationalism seemed to me to jibe with what I saw in Rand’s epistemology (today, I definitely would not say that about Rand and Descartes at all, as Rand is far more Aristotelian; ah, the sophomoric follies of youth). Almost all of my subsequent intellectual journey with religion and philosophy could be considered a fairly radical reaction to the dogmas I bought at this time.

I had fully bought perhaps the worst of Rand and Descartes. From Descartes I took his philosophical method and “proofs” of God, with all their messy metaphysical presumptions: mind-body dualism (though I might’ve implicitly made a greater separation between “mind” and “spirit” than Descartes would’ve), the correspondence theory of truth, the quest for certainty and spectator theory of knowledge, the ego theory of the self, and libertarian free will. From Rand I got the worst modernist presumptions she took from Descartes, what Bernstein calls the “Cartesian Anxiety,” in her dogmatic demand for objectivism, as well as her idiosyncratic views on altruism (though I never really accepted ethical egoism, or believed she was really an ethical egoist). The flat, horribly written protagonists of Atlas Shrugged and The Fountainhead I took to be somehow emblematic of the Christian conception of God (don’t ask me what in the hell I was thinking). Somehow (I couldn’t explain it coherently then, and I cringe at it now), I had found a philosophical foundation of sorts for a capital-C Certain belief in the God of protestant Christianity and a watered-down Randian ethics. Around this time, I also took an AP European History class, and my studies of (and complete misreadings of) traditional Lutheranism and Catholicism reinforced my metaphysical libertarianism and Cartesian epistemological tendencies.

Around this time, my parents became dissatisfied with the aesthetics and teachings of evangelical non-denominational churches, and we started attending a run-of-the-mill, mainline PCUSA church my mom had discovered through charity programs she encountered as a social worker. I certainly didn’t buy Presbyterianism’s lingering affinities for the Calvinism inherited from Knox (such as the attempt to retain the language of predestination while affirming free will), but the far more politically moderate to apolitical sermons, as well as the focus on the God of the New Testament as opposed to my grandmother’s God, were a refreshing change of pace from the evangelical dogmatism I had become accustomed to in Virginia. It fit my emerging Rand-influenced transition to political libertarianism well enough, and the old-church aesthetic and teaching methods fit well with the more philosophical outlook I had taken on religion.

In eleventh grade, we moved back to Michigan, to the absolute middle of nowhere. Virtually every protestant church within a twenty-mile radius was either some sort of dogmatically evangelical non-denominational super-church where the populist, charismatic sermons were brought to you by Jesus, Inc., or an equally evangelical tiny rural church with a median age of 75 where the sermons were the somewhat incoherent and rabidly evangelical ramblings of an elderly white man. Our young, upper-middle-class family didn’t fit into the former theologically or demographically, and certainly didn’t fit into the latter theologically or aesthetically. After about a year of church-shopping, our family stopped going to church altogether.

Abstaining from church did not dull my religion at all. Sure, the ethical doubts I was having at the time and the epistemological doubts caused by my philosophical readings were working in the background, but in a sense, this was my most deeply religious time. Since we lived on a river, I had taken up fishing almost constantly all summer, and much of my thinking while sitting with the line in the water revolved around religion or politics. When my thoughts turned religious, there was always a sense of romantic/transcendentalist (I was reading Thoreau, Emerson, and Whitman in school at the time) sublimity in nature that I could attribute to God. Fishing, romping around in the woods, hunting, and experiencing nature became the new church for me and a source of private enjoyment and self-creation (you can already see where my affinities for Rorty come from) in my late teens. Still, most of my intellectual energy was spent on political and economic interests, and by now I was a fully committed libertarian.

Subconsciously earlier in my teens, but very consciously by the time I moved to Michigan, I had begun to realize I was at the very least on the homosexual spectrum, quietly identifying as bisexual at the time. The homophobic religious rhetoric of other Christians got on my nerves, but in rural northern Michigan I was mostly insulated from it, and it never affected me too deeply. Since I assumed I was bi, it wasn’t that huge a deal in terms of my identity even if homosexuality was a sin (which I doubted, though I couldn’t explain why), so I never thought too deeply about it. However, it did contribute further to my ethical doubts about Christianity: if God says homosexuality is a sin, and Christians are somehow justified in oppressing homosexuals, what does that say about God’s cruelty? It became, very quietly, an anxiety akin to the ones I had about war when I moved to Tennessee.

Though abstaining from church didn’t cheapen my experience of religion, my exposure to my grandmother’s angry God did. Up until that point, I had mostly been ignorant of her religious views because we lived so far away; but moving back to Michigan, as well as some health issues she had, thrust her religious fervor back into my—and my mother’s—consciousness. The way she talked about religion and acted towards non-Christians reeked of the worst of I Samuel, Jonathan Edwards, John Calvin, and Jerry Falwell rolled into one. My skepticism about the potential cruelty of the Christian God, born of my experiences with war and homophobia, was really intensified by observing my maternal grandmother.

The year was 2013: I had just graduated from high school, just turned eighteen, and chosen my college. I had applied to a local state school as a backup (which I only considered because it came with a full-ride scholarship), my father’s alma mater, the University of Michigan, and Hillsdale College. After the finances were taken care of (I’m fortunate enough to be a member of the upper-middle class), the real choice was between Michigan and Hillsdale. For better or for worse, I chose the latter.

My reasons for choosing Hillsdale were mostly based on misinformation about the college’s mission. Sure, I knew it was overwhelmingly conservative and religious. But I thought there was far more of a libertarian bent to campus culture. The religious element was sold to me as completely consensual, not enforced by the college at all beyond a couple of vague comments about “Judeo-Christian values” in the mission statement. I wanted a small college full of intellectually impassioned students dedicated to, as the college mission statement said, “Pursuing Truth, Defending Liberty.” The “defending liberty” part made me think the college was more libertarian, and the “pursuing truth” part made me assume it was as open-minded as a liberal arts education is supposed to be. I figured there’d probably be some issues with my budding homosexuality/bisexuality, but since it wasn’t a huge deal at the time for me personally, and some students I’d talked to said it wasn’t a big deal there, I thought I could handle it. Further, I suspected my major was going to be economics, and Hillsdale’s economics department—housing Ludwig von Mises’ library—is a dream come true (my opinion on this hasn’t changed).

If I ever had trouble understanding the concept of asymmetric information, the lies I was told as an incoming Hillsdale student cleared it up. The Hillsdale I got was far more conservative than I could have ever imagined, and in a ridiculously dogmatic fashion. It quickly revealed itself to be not the shining example of classical liberal arts education I had hoped for, but little more than a propaganda mill for a particularly nasty brand of Straussian conservatism. The majority of the students were religious in the same sense as my grandmother. Though they would intellectually profess a different concept of God than my grandmother’s simplistic, layman’s Baptist understanding of God as an angry, jealous judge, the fruits of their faith showed little difference. My homosexual identity—by this point I’d abandoned the term “bisexual”—quickly became a focal point of my religious anxiety. A few weeks into my freshman year, I began to fall into a deep depression—largely thanks to my treatment by these so-called “Christians”—that would cripple me for the next two years and whose aftershocks I am still dealing with as I write this.

Despite the personal issues I had with my peers at Hillsdale, the two years I spent there were hands-down the two most intellectually exciting years of my life up to that point. My first semester, I took an Introduction to Philosophy class. My professor, James Stephens, turned out to be a former Princeton student who had Richard Rorty and Walter Kaufmann as mentors. His introductory class revolved first around ancient Greek philosophy, in particular Plato’s Phaedo, then classical epistemology, particularly Descartes, Kant, and Hume, and a lot of experimental philosophy readings from the likes of Stephen Stich and Joshua Knobe. The class primarily focused on issues in contemporary metaphysics I had struggled with since I discovered Rand—like libertarian free will and theories of the self—along with epistemological issues and metaphilosophical issues of method. Though it was only an intro class outside of my major, no class has changed my worldview quite as much.

In addition to the in-class readings, I read philosophy prolifically and obsessively outside of class as a matter of personal interest. That semester I finished Stich’s book The Fragmentation of Reason (which I wouldn’t have understood without extensive talks with Dr. Stephens in office hours), worked through most of Kant’s Critique of Pure Reason, basically re-learned Cartesianism, and read Hume’s Treatise. By the end of the class, I had changed almost every element of my philosophical worldview. I went from a hardcore objectivist Cartesian to a fallibilist pragmatist (I had also read James thanks to Stich’s engagement with him), from a fire-breathing metaphysical libertarian to a squishy compatibilist, and from someone who had bought a simple referential view of language to a card-carrying Wittgensteinian (of the Investigations, that is).

Other classes I took that first semester would also have a large impact on me. In my “Western Heritage” class—Hillsdale’s pretentious and propagandized term for what would usually be called something like “History of Western Thought: Ancient Times to 1600”—I essentially relearned all the theology I had poorly understood in my high school AP Euro class by reading church fathers and Catholic saints like Augustine, Tertullian, and, of course, Aquinas, as well as rereading the likes of Luther and Calvin. Additionally, and this would have the most profound intellectual influence on me of anything I have ever read, Hayek’s Constitution of Liberty, which I read in my first political economy class, cemented my epistemological fallibilism (although I also read The Fatal Conceit for pleasure, which influenced me even more).

Early on that year, after reading Plato and Augustine, I became committed to some sort of Platonism, and for a second considered some sort of Eastern Orthodoxy. By this point, I was a political anarchist and saw the hierarchical, top-down control of Catholicism as too analogous to coercive statist bureaucracy. By contrast, the more communal structure of Orthodoxy, though still hierarchical, seemed more appealing. To paraphrase Richard Rorty on his own intellectual adolescence, I had desperately wanted to become one with God, a desperation I would later react to violently. I saw Plato’s theory of the Forms and Augustine’s incorporation of it into Christianity as a means to do that. But as I kept reading, particularly James, Hume, Kant, and Wittgenstein, the epistemological foundations of my Platonist metaphysical and theological stances crumbled. I became absolutely obsessed with the either-or propositions of the “Cartesian anxiety” and made a hobby of talking with my classmates in a Socratic fashion to show them that they couldn’t be epistemically Certain in the Cartesian sense, much to their chagrin. You could’ve played a drinking game during those conversations, taking a shot every time I said some variation of “How do you know that?,” and probably given your child fetal alcohol syndrome, even if you weren’t pregnant or were male.

In the second semester of my freshman year, my interests turned more explicitly to theological readings and topics. (Keep in mind, I was mostly focusing on economics and math in class; almost all of this was just stuff I did on the side. I didn’t get out much in those days, largely due to the social anxiety caused by the homophobia of my classmates.) My fallibilist/pragmatist epistemic orientation, along with long conversations with a fellow heterodox Hillsdale student from an evangelical background, got me very interested in “radical theology.” That semester, John Caputo came to Hillsdale to discuss his book The Insistence of God. I attempted to read it at the time but was not well-versed enough in continental philosophy to really get what was going on in it. Nonetheless, my Jamesean orientation left me deeply fascinated by much of what Caputo was getting across.

My theological interests were twofold. The first was more of an epistemic question: how can we know God exists? My conclusion was that we can’t, but that whether God exists or not is irrelevant—what matters is the impact belief in God has on our lives existentially and practically. This was the most I could glean out of Caputo’s premise “God doesn’t exist, he insists” without understanding Derrida, Nietzsche, Hegel, and Foucault. I began calling myself things like “agnostic Christian,” “ignostic Christian,” or “pragmatist Christian” to try to describe my religious views. This also led me to a thorough rejection of Biblical literalism and infallibility; I claimed the Bible was more a historical document of man’s interaction with God, told from man’s flawed perspective.

But now in the forefront were questions about Christianity’s ethical orientation that had lingered at the back of my mind since my early teens: why did the Christian God seem so cruel to me? I had resolved most of this with my rejection of Biblical infallibility. Chances are, God didn’t order the slaughter of the Amalekites, or Satan’s torture of Job, or any of the other cruel acts in the Old Testament—the fallen humans who wrote the Bible misunderstood Him. Chances are, most of the Old Testament laws on things like homosexuality were meant specifically for that historically contingent community and were not eternal moral laws, and the God of the New Testament, as revealed by Jesus, was the most accurate depiction of God in the Bible. Paul’s prima facie screeds against homosexuality in the New Testament, when taken in context and hermeneutically analyzed, probably had nothing to do with homosexuality as we know it today (I found this sermon convincing on that note). God sent Jesus not as a substitute for punishment but to act as an exemplar of how to love and not be cruel to others. I could still defend the rationality of my religious faith on Jamesean grounds; I was quoting The Varieties of Religious Experience and Pragmatism more than the Bible at that point. I also flirted with some more metaphysically robust theologies. Death of God theology seemed appealing based on the little I knew about Nietzsche, and process theology to me bore a beautiful resemblance to Hayek’s concept of spontaneous order. Even saying it now, much of that sounds convincing, and if I were to go back to Christianity, most of those beliefs would probably remain intact.

But still, there was this nagging doubt that the homophobic, anti-empathetic behavior of the Hillsdale “Christians” somehow revealed something rotten about Christianity as a whole. The fact that the church had committed so many atrocities in the past, from Constantine using it to justify war, to the Crusades, to the Spanish conquistadors, to the Salem witch trials, to the persecution of homosexuals and non-believers throughout all of history, still rubbed me the wrong way. Jesus’ line about judging faith by its fruits became an incredibly important scripture for me, given my interest in William James. That scripture made me extremely skeptical of the argument that the actions of fallen humans do not reflect poorly on the Truth™ about the Christian God. What was the cash value of Christian belief if it seemed so obviously to lead to so much human cruelty throughout history and towards me personally?

That summer and the next semester, two books (both, coincidentally enough, written by my philosophy professor’s mentors, though I came across them independently) once again revolutionized the way I looked at religion. The first was Richard Rorty’s Philosophy and the Mirror of Nature; the second was Walter Kaufmann’s Nietzsche: Philosopher, Psychologist, Antichrist, which I read in tandem with most of Nietzsche’s best-known work (i.e., Beyond Good and Evil, Thus Spake Zarathustra, On the Genealogy of Morals, and, most relevant to this discussion, The Antichrist).

Rorty destroyed any last vestiges of Cartesianism or Platonism I had clung to. His meta-philosophical critique of big-P Philosophy, the kind that tries to stand as the ultimate judge of the knowledge claims of the various professions, completely floored me. His incorporation of Kuhnian philosophy of science and Gadamer’s hermeneutics was highly relevant to my research interests in the methodology of economics. Most important for religion was his insistence, more explicit in his later works though I noticed it fairly heavily in PMN, that we are answerable only to other humans. There is no world of the forms to which we can appeal, there is no God to whom we are answerable, and there are no metaphysical concepts we can rely on to call a statement true or false. The measure of truth is the extent to which it helps us cope with the world around us, the extent to which it helps us interact with our fellow human beings.

Nietzsche’s concept of the Death of God haunted me, and now that I was beginning to read more continental philosophy, some of the concepts in Caputo that had flown over my head began to make sense. The Enlightenment project to ground knowledge had made God, at least for much of the intellectual class paying attention to the great philosophical debates, a forced option. No longer could we rely on the Big Other to ground all our values; we had to reevaluate all our values and build a meaningful life for ourselves. Additionally, Nietzsche’s two great criticisms of Christianity in The Antichrist stuck in my mind. The first, that Christianity inculcates a slave morality, a sort of ressentiment, didn’t quite ring true to me, because Nietzsche’s anti-egalitarian and, to be honest, quite cruel attitude seemed as bad as what I saw the Christians doing to me. But the second did stick: his view that Christianity’s command to “store our treasures in heaven” takes all the focus off this world, ignoring the pragmatic and practical results of our beliefs that had become so important to me thanks to Matthew 7:16 and William James, and focusing instead on our own selfish spiritual destiny. That criticism of Christianity’s otherworldly focus on the afterlife rather than on the fruits of faith in this life posed a serious threat to my beliefs, and it helped explain why the anti-empathetic, homophobic hatred I was experiencing from my classmates was causing me so much religious anxiety and cognitive dissonance.

(Note: Clearly, I’m violently oversimplifying and possibly misreading both Nietzsche and Rorty in the previous two paragraphs, but that’s beside the point as I’m more interested in what they made me think of in my intellectual development, not what they actually thought themselves.)

Still, through most of my sophomore year, I tried to resist atheism as best I could and cling to what I saw as salvageable in Christianity: the idea of universal Christian brotherhood and its potential to lead people to be kind to each other still seemed promising. Essentially, I wanted to salvage Jesus as a didactic exemplar of the moral values of empathy and kindness, if not in some metaphysical ideal of God, at least in the narrative of Jesus’ life and teaching. Ben Franklin’s proto-pragmatic, yet still virtue-ethical, view of religion in his Autobiography lingered very strongly in my mind during this phase. I still used the term “agnostic Christian” through most of that time and self-identified as a Christian, but retrospectively the term “Jesusist” probably better describes the way I was thinking then.

I came to loathe (and still do) what Paul had done to Christianity: turning Jesus’ lessons into absolutist moral laws rather than parables on how to act more kindly toward others. See, for example, Paul’s treatment of sexual ethics in 1 Corinthians. Paul represented the worst slave-morality tendencies Nietzsche ridiculed, taken to the extreme, and the way he acted throughout his letters as if there were only one way—which happened to be what I saw as his very cruel way—to experience Jesus’ truth in religious community vexed me. Additionally, I loathed Constantine for turning Christianity into a tool to justify governmental power and coercion, which it remained throughout the reign of the Holy Roman Empire, Enlightenment-era absolutism, and into the modern theocratic tendencies of American social conservatism.

But the idea of an all-loving creator, if not a metaphysical guarantee of meaning and morality, sending his son/himself as an exemplar of what humanity can and should be was extremely—and in many ways still is—attractive to me. I flirted with the Episcopalian and Unitarian Universalist churches, but something about their very limited concept of community rubbed me the wrong way (I probably couldn’t have justified it or put my finger on it).

Clearly, my religious and philosophical orientation (not to mention my anarchist political convictions) put me at odds with Hillsdale orthodoxy. I started writing papers that were at times pretty critical of my professors’ lectures (though I still managed to mostly get A’s on them). These essays were particularly critical in my Constitution class (essentially a Jaffaite propaganda class) and my American Heritage class (essentially a history of American political thought, taught very well by a brilliant orthodox Catholic Hillsdale grad). I was writing editorials in the student paper subtly ridiculing Hillsdale’s homophobia and xenophobia, and engaging in far too many Facebook debates on philosophy, politics, and religion that far too often got far too personal.

In addition, at the beginning of my sophomore year, I came out publicly as gay. With the Supreme Court decision coming up the following summer, Hillsdale’s religiously inspired homophobia reached a fever pitch. I could hardly go a day without hearing some homophobic slur or comment, and the newspaper was running pieces—often written by professors—claiming flat-out false things about gay people (comparing homosexuality to incest, saying that no society has ever had gay marriage, and the like). The fruits/cash value of Jesus’ teachings were quite apparently not turning out to be the empathetic ethos I had hoped for; the rotten elements of the Old Testament God my grandmother emphasized, the Pauline perversions, and Constantine’s statism were dominating the Christian ethos instead.

At the end of that academic year (culminating with this), I suffered a severe mental breakdown, largely due to Hillsdale’s extreme homophobia. By the beginning of the next school year, I was completely dysfunctional academically, intellectually, and socially; I was apathetic about all the intellectual topics that had occupied my entire thinking life, completely jaded about the future, and overall extraordinarily depressed. I’ll spare the dirty details, but by the end of the first month of my junior year, it became clear I could no longer go on at Hillsdale. I withdrew and transferred to the University of Michigan.

That pretty much takes me up to the present day. But coming out of that depression, I began to seriously pick back up the question of why Christianity—even the good I saw in Jesusism—no longer seemed true in the pragmatic sense. Why was this religion I had spent my whole life so committed to all of a sudden utterly lacking in cash value?

I found my answer in Rorty and Nietzsche one cold January day during a weekend trip to Ann Arbor with my boyfriend. I sat down in a wonderful artisan coffee shop set in a quaint little arcade tucked away downtown and re-read Rorty’s Contingency, Irony, and Solidarity. Rorty’s continued insistence that “cruelty is the worst thing you can do,” even if he couldn’t metaphysically or epistemically justify it, seemed to be a view I had held from the very beginning, when my doubts about the Christian faith started thanks to my experiences with the victims of war.

Now, I can say that the reason I’m not a Christian—and the reason I think it would be a good idea if Christianity faded out as a public metanarrative (though not as a private source of joy and self-creation, as my dad exemplified)—is that Christianity rejects the idea that cruelty is the worst thing you can do. According to Christian orthodoxy (or, at least, the protestant sola fide sort I grew up with), you can be as outrageously, sadistically, egomaniacally cruel to another person as you want, and God will be perfectly fine with it if you believe in him. If Stalin were to “accept God into his heart”—whatever that means—his place in paradise for eternity would be assured, even with the blood of fifty million on his hands.

I have no problem with that per se; I agree with Nietzsche that retributive justice is little more than a thinly veiled excuse for revenge. Further, I agree with Aang from Avatar: The Last Airbender in saying “Revenge is like a two-headed rat viper: while you watch your enemy go down, you’re being poisoned yourself.” To my economist’s mind, the whole idea of revenge seems to embrace the sunk cost fallacy. I still regard radical forgiveness and grace as among the best lessons Christianity has to offer, even forgiveness for someone like Stalin.

What seems absurd is that while Stalin could conceivably get a pass, even the kindest, most genuinely empathetic, most outstanding human being will be eternally damned and punished by God simply for not believing. For the Christian, the worst thing you can do is not cruelty; the worst thing you can do is reject their final vocabulary. Coupled with Nietzsche’s insight that Christianity is so focused on the afterlife that it ignores the pragmatic consequences of actions in this life, it is no wonder that Christianity has bred so much cruelty throughout history. Further, the idea that we are ultimately answerable to a metaphysical Big Other rather than to our fellow human beings (as Rorty would have it) seems to cheapen the importance of those fellow human beings. The most important thing to Christians is God, not your fellow man.

Of course, the Christian apologist will remark that “True™” Christianity, properly understood, does not necessarily entail that conclusion. No true Scotsman aside, the point is well taken. Sure, the concept of Christian brotherhood teaches that since your fellow man is created in God’s image, harming him is the same as harming God. Sure, Jesus does teach that the most important commandment is essentially in line with my anti-cruelty stance. Sure, different sects of Christianity have views of divinity more nuanced than the one I gave.

But, again, if we judge this faith by its fruits, if we empirically look at the cash value of this belief, if we look at the revealed preferences of many if not most Christians, it aligns with my characterization more than I would like. Between the emphasis on the afterlife, the fundamentally anti-humanist (in a deep sense) ethical orientation, and the belief that cruelty is not the worst thing you can do, I see little cash value in Christianity and a whole lot of danger that it is highly apt—as it clearly, empirically, has been—to be misused for sadistic purposes.

This is not to say Christianity is completely (pragmatically) false. I also agree with Rorty when he says the best way to reduce cruelty and advance human rights is through “sentimental education.” The tale of Jesus, if understood the way we understand a wonderful work of literature—the way Rorty himself treats writers like Orwell—should live on. It may sound corny and blasphemous, but if “Christian” were simply the name of the Jesus “fandom,” I’d definitely be a Christian. I also certainly don’t think Christianity is something nobody should believe. The cash value of a belief is based on the myriad particular contingencies of an individual or social group, and those contingencies are not uniform with my experience. However, from my contingent position, I cannot in good faith have faith.

Perhaps it is a sad loss, perhaps it is a glorious intellectual and personal liberation, and perhaps it is something else. Only time will tell. Anyways, 6,325 words later, I hope I have adequately explained to myself why I am not a Christian.

Some Thoughts on Best of Enemies

I’ve been making an unnatural effort to stay abreast of American politics the last few months, and I’m reaching the end of my rope. A while back I added Best of Enemies: Buckley vs. Vidal to my queue, and now seemed like a good time to watch it. I don’t know much about either man, but based on some vague recollections of offhand comments by older professors, I expected I would be watching political discussion with class and/or depth.

I would not. 

While both men are eloquent and poetical, neither seemed to offer much more than insults for the other. Their bickering was entertaining. But it was not enlightening. They were a fancier version of a modern poli-tainment show.

The world’s definitely going to hell in a handbasket, but it always has been.

Update: I just finished the movie. The producers have a clear message: Buckley/Vidal was the beginning of the end. They end the documentary with clips of both men expressing skepticism about the wisdom of their now-famous 1968 debates.

Vidal (in the 10th debate):

I think these great debates are absolutely nonsense. The way they’re set up, there’s almost no interchange of ideas, very little, even, of personality. There’s also the terrible thing about this medium that hardly anyone listens. They sort of get an impression of somebody, and they think that they’ve figured out just what he’s like by seeing him on television. 

Buckley (in some other context):

Does television ruin America? There is an implicit conflict of interest between that which is highly viewable and that which is highly illuminating.


Update 2: I fixed some grammar and missing words after initially posting… Still figuring out the Android app for WordPress…

In Search of Firmer Cosmopolitan Solidarity: The Need for a Sentimentalist Case for Open Borders

Most arguments for open borders are phrased in terms of universalized moral obligations to non-citizens. These obligations are usually phrased as “merely” negative (e.g., that Americans have a duty not to impede the movement of an impoverished Mexican worker or Syrian refugee seeking a better life) rather than positive (e.g., the first obligation does not imply that Americans have a duty to provide generous welfare benefits to immigrants and refugees), but they are nonetheless phrased as obligations owed to people in virtue of their rationality rather than their nationality.

Whether utilitarian, moral intuitionist, or deontological, these arguments assume that nation of origin isn’t a “morally relevant” consideration for one’s right to immigrate, and they implicitly rely on some alternative view of moral relevance to try to cement a purely moral solidarity that extends beyond national borders. They have in common an appeal to a common human capacity for rights stemming from something metaphysically essential to our shared humanity.

Those arguments are all coherent and possibly valid, and they are even the arguments that originally convinced me to support open borders. The only problem is that they are often very unconvincing to people skeptical of immigration, because they beg the very question at issue: whether moral obligation is irrelevant with respect to nationality. As a critic of one of my older pieces on immigration observed, most immigration skeptics are implicitly tribalist nationalists, not philosophically consistent consequentialists or deontologists. They have little patience for theoretical and morally pure metaphysical arguments concluding any obligation, even a merely negative one, to immigrants. They view their obligations to those socially closer to them as a trump card (pardon the pun) over any morally universalized consideration. So long as they can identify with someone else as an American (or whatever their national identity may be), they view that person’s interests as relevant. If they cannot identify with someone based on national identity, they do not view an immigrant’s theorized rights or utility functions as relevant.

There are still several problems with this tribalist perspective. Given that nation-states are far from culturally homogeneous, and that cultural homogeneity often transcends borders in some important respects, why does one’s ability to “identify” on the basis of tribal affiliation stop at a nation-state’s borders? Further, there are many other affinities one may have with a foreigner that may be viewed as equally important, if not more important, to one’s ability to “identify” with someone than national citizenship. They may be a fellow Catholic or Christian, a fellow fan of football, a fellow manufacturing worker, a fellow parent, etc. Why is “fellow American” the most socially salient form of identification, one that licenses keeping a foreigner in a state of tyranny and poverty, but not “fellow Christian” or any of the many other identifiers people find important?

However, those who hold the tribalist outlook do not take these problems seriously, because that outlook isn’t about rational coherence; it is about non-rational sentimental feelings and particularized perspectives on historical affinities. And even if a skeptic of immigration takes those problems seriously, the morally pure and universalizing arguments are no more convincing to a tribalist.

I believe this gets at the heart of most objections Trump voters have to immigration. They might raise welfare costs, crime, lost native jobs, or fear of cultural collapse as post-hoc rationalizations for why they do not feel solidarity with immigrants, but the fact that they do not feel that solidarity, due to their nationalist affinities, is at the root of these rationalizations. Thus when proponents of open borders respond, whether with economic studies showing these concerns are inconsistent with the facts or by pointing out that the same concerns apply to the native-born population (and yet nobody proposes similar restrictions on citizens), their responses fall on deaf ears. Such concerns are irrelevant to the heart of anti-immigrant sentiment: a lack of solidarity with anyone who is not a native-born citizen.

In this essay, drawing on the sentimentalist ethics of David Hume and Richard Rorty’s perspective on liberal solidarity, I want to sketch a vision of universalized solidarity that could win tribalists over to the side of, if not purely open borders, at least more liberalized immigration policy and allowances for refugees. This is not so much a moral argument of the form most arguments for open borders take, but a strategy for cultivating the sentiments of a (specifically American nationalist) tribalist to be more open to the concerns and sympathies of someone with whom they do not share a national origin. The main idea is that we shouldn’t try to argue away people’s sincere, deeply held tribalist and nationalist emotions, but should seek to redirect them in a way that does not lead to massive suffering for immigrants.

Rorty on Kantian Rationalist and Humean Sentimentalist Arguments for Universalized Human Rights

In an article called “Human Rights, Rationality, and Sentimentality,” the American pragmatist philosopher Richard Rorty discusses two strategies for expanding human rights culture to the third world. One, which he identifies with philosophers such as Plato and Kant, involves appealing to some faculty all humans have in common—namely rationality—and declaring all other considerations, such as kinship, custom, religion, and (most importantly for present purposes) national origin, “morally irrelevant” to whether an individual has human rights and should be treated as such. These sorts of arguments, Rorty says, try to use rigorous argumentation to answer the rational egoist’s question “Why should I be moral?” They trace back to Plato’s discussion of the Ring of Gyges in the Republic, through Enlightenment attempts to find an algorithmic, rational foundation for morality, such as the Kantian categorical imperative. This is the strategy, in varying forms, that most arguments in favor of open borders pursue.

The second strategy, which Rorty identifies with philosophers such as David Hume and Annette Baier, is to appeal to the sentiments of those who do not respect the rights of others. Rather than trying to answer “Why should I be moral?” in an abstract, philosophical sense that gives us an a priori, algorithmic justification for treating others equally, this view advocates answering the more immediate and relevant question: “Why should I care about someone’s worth and well-being even if it appears to me that I have very little in common with them?” And rather than answering the former question with argumentation that appeals to our common rational faculties, it answers the latter by appealing to our sentiments, to the feeling that we do have something else in common with that person.

Rorty favors the second, Humean approach for one simple reason: in practice, we are not dealing with rational egoists who replace altruistic moral values with ruthless self-interest. We are dealing with irrational tribalists who replace more-encompassing attitudes of solidarity with less-encompassing ones. They aren’t concerned with why they should be moral in the first place and what that means; they are concerned with how certain moral obligations extend to people with whom they find it difficult to emotionally identify. As Rorty says:

If one follows Baier’s advice one will not see it as the moral educator’s task to answer the rational egoist’s question “Why should I be moral?” but rather to answer the much more frequently posed question “Why should I care about a stranger, a person who is no kin to me, a person whose habits I find disgusting?” The traditional answer to the latter question is “Because kinship and custom are morally irrelevant, irrelevant to the obligations imposed by the recognition of membership in the same species.” This has never been very convincing since it begs the question at issue: whether mere species membership is, in fact, a sufficient surrogate for closer kinship. […]

A better sort of answer is the sort of long, sad, sentimental story which begins with “Because this is what it is like to be in her situation—to be far from home, among strangers,” or “Because she might become your daughter-in-law,” or “Because her mother would grieve for her.” Such stories, repeated and varied over the centuries, have induced us, the rich, safe and powerful people, to tolerate, and even to cherish, powerless people—people whose appearance or habits or beliefs at first seemed an insult to our own moral identity, our sense of the limits of permissible human variation.

If we agree with Hume that reason is the slave of the passions, or more accurately that reason is just one of many competing sentiments and passions, then it should come as no surprise that rational argumentation of the form found in most arguments for open borders is not very convincing to people for whom reason is not the ruling sentiment. How does one cultivate these other sentiments, if not through merely rational argumentation? Rorty comments continually throughout his political works that novels, poems, documentaries, and television programs—the genres that tell the sort of long, sad stories quoted above—have replaced sermons and Enlightenment-era treatises as the engine of moral progress since the end of the nineteenth century. Rational argumentation may convince an ideal-typical philosopher, but not many other people.

For Rorty, this sentimental ethics had two main applications, the first of which is mostly irrelevant here and the second of which is directly relevant. First, Rorty wanted to make his vision of a post-metaphysical, post-epistemological intellectual culture, and a commonsensically nominalist and historicist popular culture, compatible with the sort of ever-expanding human solidarity necessary for political liberalism: a culture for which the algorithmic arguments for open borders I mentioned in the first half of this article would seem unconvincing for reasons more theoretical than the mere presence of nationalist sentiment. Though that is an intellectual project with which I have strong affinities, one need not buy that vision for the purposes of this article—that of narrowly applying sentimental ethics to overcome nationalist objections to immigration.

The second application was to point out a better way to implement the liberal cultural norms that prohibit the public humiliation of powerless minorities. The paradigmatic cases in which Rorty says such a sentimental education applies are how Serbians viewed Muslims, how Nazis viewed Jews, and how white southern Confederates viewed African-American slaves. Though those are far more extreme cases, it is not a stretch to add to that list the way Trump voters view Muslim refugees or Mexican migrant workers.

A Rortian Case against Rortian (and Trumpian) Nationalism

Though Rorty was a through-and-through leftist and likely viewed with scorn most nationalist arguments for restricting immigration, and especially for keeping refugees in war zones, there is one uncomfortable feature of his views for most radical proponents of immigration: they leave wide open the notion of nationalism as a valid perspective, unlike many of the other arguments offered.

Indeed, Rorty—from my very anarchist perspective—was at times uncomfortably nationalist. In Achieving Our Country he likens national pride to self-respect for an individual, saying that while too much national pride can lead to imperialism, “insufficient national pride makes energetic and effective debate about national policy unlikely.” He defended a vision of American national pride, along the lines of Deweyan pragmatism and transcendentalist romanticism, as a nation of ever-expanding democratic vistas. Though radically different from the sort of national pride popular in right-wing xenophobic circles, it is a vision of national pride nonetheless, and as such it is not something with which I and many other advocates of open borders are sympathetic.

Further, and more relevant to our considerations, is that he viewed national identity as a tool to expand the sort of liberal sentiments he wanted. As he wrote in Contingency, Irony, and Solidarity:

Consider, as a final example, the attitude of contemporary American liberals to the unending hopelessness and misery of the lives of the young blacks in American cities. Do we say these people must be helped because they are our fellow human beings? We may, but it is much more persuasive, morally as well as politically, to describe them as our fellow Americans—to insist that it is outrageous that an American should have to live without hope. The point of these examples is that our sense of solidarity is strongest when those with whom solidarity is expressed are thought of as “one of us,” where “us” means something smaller and more localized than the human race.

It is obvious why many critics of immigration restrictions would view this attitude as counterproductive. This type of description cannot be applied in many of the scenarios relevant to questions of immigration. Liberalism, in the sense Rorty borrowed from Shklar (and also the sense which I think animates much of the interest in liberalized immigration policies), as an intense aversion to cruelty, is concerned with ending cruelty as such. It wants to end cruelty whether it be the cruelty of the American government toward illegal immigrants or the suffering of native-born African-Americans as a result of centuries of cruelty by racists. This is surely something with which Rorty would agree, as he writes elsewhere in that same chapter:

[T]here is such a thing as moral progress and that progress is indeed in the direction of greater human solidarity. But that solidarity is not thought of as recognition of a core self, the human essence, in all human beings. Rather, it is thought of as the ability to see more and more traditional differences (of tribe, religion, race, customs, and the like) as unimportant when compared to the similarities with respect to pain and humiliation—the ability to think of people wildly different from ourselves as included in the range of “us.”

Surely, that moral progress doesn’t stop at the unimportant line of a national border. The problem is that appeals to national identity of the sort Rorty uses, or to mythologized national histories, do stop at the border.

Rorty is right that it is easier for people to feel a sense of solidarity with those with whom there are fewer traditional differences, and that no amount of appeal to metaphysical constructions of human rationality will fully eclipse that psychological fact. However, the problem with forms of solidarity built on national identity is that it is much easier for people to stop there. In modern pluralistic, cosmopolitan societies such as America, it is hard for someone to stop their sense of solidarity at religion, tribe, custom, and the like. This is because the minute they walk out the door of their home, the minute they arrive at their workplace, there is someone very close to them who does not fit that sense of solidarity, yet for whom they would still feel some obligation, based just on seeing that person’s face, on mere proximity.

Stopping the line at national identity is much easier, since many Americans, particularly those in the midwestern and southeastern states which gave Trump his presidency, rarely interact with non-nationals on a regular basis, while they are more likely to interact with someone who is distant from them in other ways. While other forms of solidarity are unstable for most people because they are too localized, nationalism is stable because it is too general to be upset by the experience of others while not general enough to be compatible with liberalism. Moral progress, if we pursue Rorty’s explicitly nationalist project, will halt at the national borders, and his liberal project of ending cruelty will end with it. There is an inconsistency between Rorty’s liberalism and his belief in national pride.

Further, insisting “because they are American” leads people to ask what it means to “be American,” a question which can only be answered, even by Rorty in his description of American national pride, by contrast with what isn’t American (see his discussion of Europe in “American National Pride”). It makes it difficult to see suffering as the salient identifier for solidarity, and makes the other ‘traditional’ differences standing in the way of Rorty’s description of moral progress seem more important than they should be. Indeed, this is exactly what we see with most xenophobic descriptions of foreigners as “not believing in American ideals.” Rorty’s very humble, liberalized version of national pride faces a serious danger of turning into the sort of toxic, illiberal nationalism we have seen in recent years.

Instead, we should substitute the description Rorty offers as motivating liberal help for African-Americans in the inner city, ‘because they are American,’ with the redescription Rorty uses elsewhere: ‘because they are suffering, and you too can suffer and have suffered in the past.’ This is a sentimental appeal which can apply to all who are suffering from cruelty, regardless of their national identity, and it is more likely to make more and more other differences seem unimportant. As Rorty’s ideas on cultural identity politics imply, the goal should be to replace “identity”—including national identity—with empathy.

Thus, in making an appeal to Rorty’s sentimentalism on behalf of open border advocates, I want to very clearly point out how it is both possible and necessary to separate appeals to solidarity and sentiment from nationalism in order to serve liberal ends. This means that the possibility of nationalist sentiments seeming acceptable to a non-rationalist form of ethics should not discourage those of us skeptical of nationalism from embracing and using its concepts.

Sentimental Ethical Appeals and Liberalized Immigration

The application of this form of sentimental ethics for people who want to liberalize immigration should be obvious. Our first step needs to be to recognize that people’s tribalist sentiments aren’t going to be swayed by mere rationalist argumentation, as that merely begs the question. Our second step needs to be to realize that what will ultimately convince them is not getting rid of people’s tribalist sentiments altogether, but redirecting them elsewhere. The goal should be to get people to see national identity as unimportant to those sentiments compared to other, more salient ones, such as whether refugees and immigrants are suffering. The goal should be for nationalists to stop asking questions of immigrants like “Are immigrants going to be good Americans like me?” and to start asking “Are they already people who, like me, have suffered?”

This does not mean that we stop making the types of good academic philosophical and economic arguments about how immigration could double global GDP and how rights should be recognized as not stopping at national identity—those are certainly convincing to the minority of us for whom tribalism isn’t an especially strong sentiment. However, it does mean we should also recognize the power of novels like Under the Feet of Jesus or images like the viral, graphic one of a Syrian refugee child who was the victim of a bombing which circulated last year. The knowledge that Anne Frank’s family was turned down by America for refugee status, the feelings of empathy for Frank’s family one gets from reading her diary, the fear that we are perpetuating that same cruelty today are far more convincing than appeals to Anne Frank’s natural rights in virtue of her rational faculties as a human being.

Appeals to our common humanity in terms of our “rational faculties” or “natural rights” or “utility functions” and the like are not nearly as convincing to people who aren’t philosophers or economists as appeals to the ability of people to suffer. Such images and sentimental cases are far more likely to cultivate a cosmopolitan solidarity than Lockean or Benthamite platitudes.

References:

Rorty, Richard. “American National Pride: Whitman and Dewey.” Achieving Our Country: Leftist Thought in Twentieth-Century America. Rpt. in The Rorty Reader. Ed. Christopher J. Voparil and Richard J. Bernstein. Malden: Wiley-Blackwell, 2010. 372-388. Print.

Rorty, Richard. “Human Rights, Rationality, and Sentimentality.” On Human Rights: The Oxford Amnesty Lectures. Rpt. in The Rorty Reader. Ed. Christopher J. Voparil and Richard J. Bernstein. Malden: Wiley-Blackwell, 2010. 352-372. Print.

Rorty, Richard. Contingency, Irony, and Solidarity. Cambridge: Cambridge University Press, 1999. Print.

 

Trump Jr.

Last school year I had to deal with a pair of students (Tweedledee and Tweedledum) whom I caught cheating on a take-home final. When confronted with the evidence, each insisted that it was the other’s fault, and that only the other student should face any consequences.

Bear in mind that if they complete their degrees, they will be in the top 30% of the population in terms of educational attainment. In today’s world, that basically means they’re among the best and brightest, they’re high status, and they’re “the future.” If we could measure status on a linear scale, getting a college degree still pushes you high up on that scale.

At the time I figured that they were at least towards the bottom of that top 30%. Certainly, I still hope they’ll grow out of it. Unfortunately, Draco Malfoy Jr.’s latest scandal shows that being bad at cheating isn’t the social hindrance we might have hoped for.

Related link: http://reason.com/blog/2017/07/13/how-trump-apologists-will-defend-the-ind

The Deleted Clause of the Declaration of Independence

As a tribute to the great events that occurred 241 years ago, I wanted to recognize the importance of the unity of purpose behind supporting liberty in all of its forms. While an unequivocal statement of natural rights and the virtues of liberty, the Declaration of Independence also came close to bringing another vital aspect of liberty to the forefront of public attention. As has been addressed in multiple fascinating podcasts (Joe Janes, Robert Olwell), a censure of slavery and of George III’s connection to the slave trade appeared in the first draft of the Declaration.

Thomas Jefferson, a man criticized for the inherent contradiction between his high morals and his active participation in slavery, was a major contributor to the popularization of classical liberal principles. Many have pointed to his hypocrisy in that he owned over 180 slaves, fathered children with them, and did not free them in his will (because of his debts). Even given his personal slaveholding, Jefferson made his moral stance on slavery quite clear through his famous efforts toward ending the transatlantic slave trade, which exemplify early steps in securing the abolition of the repugnant act of chattel slavery in America and in applying classically liberal principles to all humans. However, abolition might have come far sooner, avoiding decades of appalling misery and its long-reaching effects, if his (hypocritical but principled) position had been adopted from the day of the USA’s first taste of political freedom.

This is the text of the deleted Declaration of Independence clause:

“He has waged cruel war against human nature itself, violating its most sacred rights of life and liberty in the persons of a distant people who never offended him, captivating and carrying them into slavery in another hemisphere or to incur miserable death in their transportation thither. This piratical warfare, the opprobrium of infidel powers, is the warfare of the Christian King of Great Britain. Determined to keep open a market where Men should be bought and sold, he has prostituted his negative for suppressing every legislative attempt to prohibit or restrain this execrable commerce. And that this assemblage of horrors might want no fact of distinguished die, he is now exciting those very people to rise in arms among us, and to purchase that liberty of which he has deprived them, by murdering the people on whom he has obtruded them: thus paying off former crimes committed against the Liberties of one people, with crimes which he urges them to commit against the lives of another.”

The Second Continental Congress, based on hardline votes of South Carolina and the desire to avoid alienating potential sympathizers in England, slaveholding patriots, and the harbor cities of the North that were complicit in the slave trade, dropped this vital statement of principle.

The removal of the anti-slavery clause of the Declaration was not the only time Jefferson’s efforts might have led to the premature end of the “peculiar institution.” Economist and cultural historian Thomas Sowell notes that Jefferson’s 1784 anti-slavery bill, which had the votes to pass but failed because a single ill legislator was absent from the floor, would have ended the expansion of slavery to any newly admitted states years before the Constitution’s infamous three-fifths compromise. One wonders whether America would have seen a secessionist movement or Civil War, and how the economies of states from Alabama and Florida to Texas would have developed without slave labor, which in some states and counties constituted the majority of the population.

These ideas form a core moral principle for most Americans today, but they are not hypothetical or irrelevant to modern debates about liberty. Though America and the broader Western world have brought the slavery debate to an end, the larger world has not; though every country has officially made enslavement a crime (true only since 2007), many within the highest levels of government aid and abet the practice. Some 30 million individuals around the world suffer under the same types of chattel slavery seen millennia ago, including in nominal US allies in the Middle East. The debates between the pursuit of non-intervention as a form of freedom and the defense of the liberty of others as a form of freedom have been consistently important since the 1800s (or arguably earlier), and I think it is vital that these discussions continue in the public forum. I hope that this 4th of July reminds us that liberty is not just a distant concept, but a set of values that requires constant support, intellectual nurturing, and pursuit.

For more underrecognized history surrounding the founding of America, see my Before the Fourth series!

Mexicans in Mexico

I just spent another two weeks in Mexico, in Puerto Vallarta to be specific, a town pretty much invented by Liz Taylor and Richard Burton. (See the movie “Night of the Iguana.”) The more time I spend in Mexico, the more I like Mexicans. I may have to repeat myself here.

Mexican cities are clean because people sweep in front of their doors every morning without being told. Everybody there works or is trying hard to find work. Everybody is polite and friendly. One exception: an older taxi driver showed some discreet ill humor with me. I had mistakenly given him 15 cents (American) for a tip. That’s it. Every other interaction I had was gracious or better. (It’s true that my Spanish is good and that I was accompanied most of the time by my adorable 8-year-old granddaughter modeling a broad-brim straw hat.)

Every time I am in Mexico, I notice something new. This time, I was there during the summer vacation period, and Mexicans from the US were numerous and very visible. They come to Mexico to kiss old grandpa and grandma, in one case to get married, and, to a large extent, for a vacation like everyone else. They tend to be loud and are better dressed than the locals. They are brisk consumers who buy their children the best beach equipment and all the tours available, as new consumers often do. Many are garrulous and strike up a conversation with strangers easily. They know their place in the sun. I may be dreaming, but I think there is something distinctively American about them.

I also bumped into a surprisingly large number of “returnees for good,” including several who got stuck on the southern side of the border. Many more lived in the US (legally or not, we didn’t often talk about that), made their pile, took their savings, and deliberately started life anew in the old country. One bought two taxis, several built houses, another acquired a ranch where some of his less urbanized relatives live and make a living. He mentioned cows, of course, but also horses. There is a whole program of upward mobility in the simple word “horses.” Unless you have a dude ranch (unknown in Mexico, I think), horses are only for recreation. Manuel, back from short-order cooking in Los Angeles, can even afford to have his children ride. All those brief Mexican acquaintances speak well of the US; they are proud of their stay in this country, but they are happy to be back in Mexico for good. In 2009, my co-author Sergey Nikiforov and I had already argued that Mexicans, by and large, would rather live in Mexico. (“If Mexicans and Americans could cross the border freely.” [pdf])

Returnees play all kinds of bridge roles where their American experience is useful. The main “client relations specialist” in my hotel was a 23-year-old guy who had been brought up (illegally) in Colorado. Of course, his English is perfect. Soon, he will open his own business, I think.

I don’t want to give the impression that the returnees’ fate is merely to serve the needs of American tourists and visitors. It seems to me that, like many bilingual people who have lived in more than one country, they are naturally cosmopolitan types who are useful in many non-domestic business situations. (I have modest qualifications to pass judgment here because I taught international business at an elementary level for 25 years. I also worked as a consultant in that field for several years.)

The average literate Mexican is an avid student of Americana. With the help of returnee relatives, he may actually excel at it. Everyone below 30 in Mexico is studying English. I have said it before: in a few years, we will be begging them to come back.

There is surprisingly little talk about “the wall.” Mexicans have a sense of humor. Of course, I, myself, believe that Pres. Trump will succeed. He will build a solar electricity-producing wall, sell the electricity to Mexicans at low cost (thus making them pay for the wall), and they will thank him!

The Old Deluder Satan Act: Literacy, Religion, and Prosperity

So, my brother (Keith Kallmes, graduate of the University of Minnesota in economics and history) and I have decided to start podcasting some of our ideas. The topics we hope to discuss range from ancient coinage to modern medical ethics, but with a general background of economic history. I have posted here our first episode, on the Old Deluder Satan Act. This early American legislation, passed by the Massachusetts Bay colonists, displays some of the key values that we posit as causes of New England’s principal role in the Industrial Revolution. The episode:

We hope you enjoy this 20-minute discussion of the history of literacy, religion, and prosperity, and we are also happy to get feedback, episode suggestions, and further discussion in the comments below. Lastly, we have included links to some of the sources cited in the podcast.


Sources:

The Legacy of Literacy: Continuity and Contradictions in Western Culture, by Harvey Graff

Roman literacy evidence based on inscriptions discussed by Dennis Kehoe and Benjamin Kelly

Mark Koyama’s argument

European literacy rates

The Agricultural Revolution and the Industrial Revolution: England, 1500-1912, by Gregory Clark

Abstract of Becker and Woessmann’s “Was Weber Wrong?”

New England literacy rates

(Also worth a quick look: the history of English Protestantism, the Puritans, the Green Revolution, and Weber’s influence, as well as an alternative argument for the cause of increased literacy)

Paradoxical Geniuses: “Let us burn the ships”

In 1519, Hernán Cortés landed 500 men in 11 ships on the coast of the Yucatan, knowing that he was openly disobeying the governor of Cuba and that he faced unknown numbers of potential enemies in an unknown situation. Regardless of the moral implications, what happened next was strategically extraordinary: he and his men formed a local alliance and, despite having to beat a desperate retreat on La Noche Triste, conquered the second largest empire in the New World. As the expeditionary force landed, Cortés made a tactically irrational decision: he scuttled all but one of his ships. In doing so, he hamstrung his own maneuverability, scouting, communication, and supply lines, but he gained one incredible advantage: the complete commitment of his men to the mission, for as Cortés himself said, “If we are going home, we are going in our foes’ ships.” This strategic choice highlights the difference between logic and economists’ concept of “rationality”: the illogical destruction of one’s own powerful and expensive tools creates a credible commitment that can overcome a serious problem in warfare, that of desertion or cowardice. While Cortés certainly increased the risk to his own life and those of his men, the powerful psychology of being trapped by necessity brought out the very best of the fighting spirit in his men, leading to his dramatic victory.

This episode is certainly not unique in the history of warfare; such commitments were not only made by leaders as a method of ensuring loyalty, but actually underlay the seemingly crazy (or at least overly risky) cultural practices of several ancient groups. The pervasiveness of these psychological strategies shows that, whether each case was the result of a genius decision or an accident of history, they conferred a substantial advantage on their practitioners. (If you are interested in how rational choices are revealed in the history of warfare, please also feel free to read about hostage exchanges and ransoming practices in an earlier post!) I have collected some of the most interesting examples that I know of, but the following is certainly not an exhaustive list, and I encourage readers to mention other episodes in the comments:

  • Julian the Apostate
    • Julian the Apostate is most famous for his attempt to reverse Constantine the Great’s Christianization of the Roman Empire, but he was also an ambitious general whose audacity gained him an incredible victory over Germanic invaders against steep odds. He wanted to reverse the stagnation of Roman interests on the Eastern front, where the Sasanian empire had been challenging the Roman army since the mid-3rd century. Having gathered an overwhelming force, he marched to the Euphrates river and took ships from there to the Sasanian capital, while the Sasanians used scorched-earth tactics to slow his advance. When Julian found the capital (Ctesiphon) undefended, he worried that his men would want to loot it and return homeward, continuing the status quo of raiding and retreating. To prevent this, in a move much like that of Cortés, he set fire to his ships and forced his men to press on. In his case, this did not end with stunning victory; Julian overextended his front, was killed, and lost the campaign. Julian’s death shows the very real risks involved in this bold strategy.
  • Julius Caesar
    • Julian may have taken his cue from a vaunted Roman historical figure. Dramatized perfectly by HBO, the great Roman general and statesman Julius Caesar made a huge gamble by taking on the might of the Roman Senate. Despite being heavily outnumbered (over 2 to 1 on foot and as much as 5 to 1 in cavalry), Caesar committed to a decisive battle against his rival Pompey in Greece. While Pompey’s troops had the option of retreating, Caesar relied on the fact that his legionaries had their backs to the Mediterranean, effectively trapping them and giving them no opportunity to rout. While Caesar also tactically out-thought Pompey (he used a cunning deployment of reserves to stymie a cavalry charge and break Pompey’s left flank), the key to his victory was that Pompey’s numerically superior force ran first; Pompey met his grisly end shortly thereafter in Egypt, and Caesar went on to gain power over all of Rome.
  • Teutones
    • The impact of the Teutones on the Roman cultural memory proved so enduring that “Teutonic” is used today to refer to Germanic peoples, despite the fact that the Teutones themselves were of unknown linguistic origin (they could very well have been Celtic). The Teutones and their allies, the Cimbri, smashed better-trained and better-equipped Roman armies multiple times in a row; later Roman authors said they were possessed by the Furor Teutonicus, as they seemed to possess an irrational lack of fear, never fleeing before the enemy. Like many Celtic and Germanic peoples of Northern Europe, the Teutones exhibited a peculiar cultural practice that gave their men an incentive in battle: all of the tribe’s women, children, and supplies were drawn up on wagons behind the men before battles, where the women would take up axes to kill any man who attempted to flee. In doing so, they solved the collective action problem which plagued ancient armies, in which a few men running could quickly turn into a rout. If you ran, not only would you die, but your wife and children would as well, and this psychological edge allowed a roving tribe to place the powerful Roman empire in jeopardy for a decade.
  • The Persian emperors
    • The earliest recorded example of paradoxical risk as a battle custom is the Persian imperial practice of bringing the women, children, and treasure of the emperor and noble families to the war-camp. This seems like a needless and reckless risk, as it would turn a defeat into a disaster with the loss of family and fortune. However, this case is comparable to that of the Teutones, in that it demonstrated the credible commitment of the emperor and nobles to victory, and used this raising of the stakes to incentivize bravery. While the Persians did conquer much of the known world under the nearly mythical leadership of Cyrus the Great, the strategy backfired for the last Achaemenid Persian emperor: when Darius III confronted Alexander the Great at Issus, Alexander’s crack hypaspist troops routed Darius’ flank as well as Darius himself! The imperial family and a great hoard of silver fell into Alexander’s hands, and he would go on to conquer the entirety of the Persian empire.

These examples show the diversity of cultural and personal illustrations of rational choice theory and the psychological warfare that typified some of the most successful military leaders and societies. As the Roman military writer Vegetius stated, “an adversary is more hurt by desertion than slaughter.” Creating unity of purpose is by no means an easy task, and balancing the threat of death in frontline combat with the threat of death during a rout was a problem that plagued leaders from the earliest recorded histories onward. (In ancient Greek battles, there were few casualties on the line of battle; the majority of casualties took place during flight from the battlefield. This made the game-theoretical choice for each soldier an interesting balance: he might die on the line but would live if only he ran away, yet he faced a much higher risk of death if a critical mass of troops ran away. Perhaps this will be fodder for a future post?) This was a salient and even vital issue for leaders to overcome, and despite the high risks that led to the fall of both Julian and Darius, forcing credible commitment to battle is a fascinating strategy with good historical support for its success. The modern implications of credible commitment problems range from wedding rings to climate accords, but very few modern practices utilize the “illogical rationality” of intentionally destroying secondary options. I continue to wonder what genius, or what society, will come up with a novel application of this concept, and I look forward to seeing the results.
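To make the desertion game in that parenthetical concrete, here is a minimal sketch in Python; every survival probability is invented for illustration, not taken from any historical source. It shows why fleeing dominates standing for an individual soldier, why that makes the whole army worse off, and how a commitment device that makes desertion fatal flips each soldier’s best choice:

def survival_prob(action, flee_fraction, commitment=False):
    """Chance a single soldier survives the battle (toy numbers only).

    action: "stand" or "flee"
    flee_fraction: share of the army that runs
    commitment: True if fleeing has been made fatal (burned ships,
    axe-wielding relatives drawn up behind the line)
    """
    rout = flee_fraction > 0.3  # enough runners turn the battle into a rout
    if action == "flee":
        if commitment:
            return 0.0          # deserters are killed by their own side
        return 0.9 if not rout else 0.2  # lone runner escapes; mass rout is lethal
    return 0.7 if not rout else 0.1     # a held line is safer than a broken one

# Without commitment, fleeing beats standing whatever the rest of the army
# does (0.9 > 0.7 and 0.2 > 0.1), so everyone runs and all end up at 0.2:
for f in (0.0, 1.0):
    print(f, survival_prob("stand", f), survival_prob("flee", f))

# With commitment, standing (0.7) beats fleeing (0.0), the line holds,
# and every soldier is better off than in the rout equilibrium:
print(survival_prob("stand", 0.0, commitment=True))
print(survival_prob("flee", 0.0, commitment=True))

The point of the toy model is that the commitment device does not change the battle itself; it only removes the privately tempting option, which is exactly what the burned ships and the Teutones’ axe-wielding women accomplished.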

P.S. – Thanks to Keith Kallmes for the idea for this article and for helping to write it. Truly, it is his economic background that leads to many of these historical questions about rational choice and human ingenuity in the face of adversity.

The death of reason

“In so far as their only recourse to that world is through what they see and do, we may want to say that after a revolution scientists are responding to a different world.”

Thomas Kuhn, The Structure of Scientific Revolutions, p. 111

I can remember arguing with my cousin right after Michael Brown was shot. “It’s still unclear what happened,” I said, “based solely on testimony” — at that point, we were still waiting on the federal autopsy report by the Department of Justice. He said that in the video, you can clearly see Brown, back to the officer and with his hands up, as he is shot up to eight times.

My cousin doesn’t like police. I’m more ambivalent, but I’ve studied criminal justice for a few years now, and I thought that if both of us watched this video (no such video actually existed), it was probably I who would have the more nuanced grasp of what happened. So I said: “Well, I will look up this video, try and get a less biased take and get back to you.” He replied, sarcastically, “You can’t watch it without bias. We all have biases.”

And that seems to be the sentiment of the times: bias encompasses the human experience; it subsumes all judgments and perceptions. Biases are so rampant, in fact, that no objective analysis is possible. These biases may be cognitive, like confirmation bias, emotional fallacies, or the phenomenon of constructive memory; or inductive, like selectivity or ignoring base probability; or, as has been common to think, ingrained into experience itself.

The thing about biases is that they are open to psychological evaluation. There are precedents for eliminating them. For instance, one common explanation of racism is that familiarity breeds acceptance and unfamiliarity breeds intolerance (as Reason points out, people further from fracking sites have more negative opinions of the practice than people closer to them). So to curb racism (a sort of bias), children should interact with people outside their singular ethnic group. More clinical methodology seeks to transform mental functions from automatic to controlled, and thereby enter reflective measures into perception, reducing bias. Apart from these, there is that ancient Greek practice of reasoning, wherein patterns and evidence are used to generate logical conclusions.

If it were true that human bias is all-encompassing, and essentially insurmountable, the whole concept of critical thinking would go out the window. Not only would we lose the critical-rationalist, Popperian mode of discovery, but also Socratic dialectic, as essentially “higher truths” would disappear from the human lexicon.

The belief that biases are intrinsic to human judgment ignores psychological and philosophical methods to counter prejudice, because it posits that objectivity itself is impossible. This viewpoint has been associated with “postmodern” schools of philosophy, such as those Dr. Rosi commented on (e.g., those of Derrida, Lacan, Foucault, Butler), although it’s worth pointing out that the analytic tradition, with its origins in Frege, Russell, and Moore, represents a far greater break from the previous, modern tradition of Descartes and Kant, and often reached conclusions similar to those of the Continentals.

Although theorists of the “postmodern” clique produced diverse claims about knowledge, society, and politics, the most famous figures are almost always associated with or incorporated into the political left. To make a useful simplification of viewpoints: it would seem that progressives have generally accepted Butlerian non-essentialism about gender and Foucauldian terminology (discourse and institutions). Derrida’s poststructuralist critique noted dichotomies and also claimed that the philosophical search for Logos has been patriarchal, almost neoreactionary. (The month before Donald Trump’s victory, searches for the word “patriarchy” hit an all-time high on Google.) It is not a far-right conspiracy that European philosophers with strange theories have influenced and sought to influence American society; it is patent in the new political language.

Some people think of the postmodernists as all social constructivists, holding the theory that many of the categories and identifications we use in the world are social constructs without a human-independent nature (e.g., not natural kinds). Disciplines like anthropology and sociology have long since dipped their toes in these waters, and the broader academic community, too, relates that things like gender and race are social constructs. But the ideas can and do go further: “facts” themselves are open to interpretation on this view; to even assert a “fact” is just to affirm power of some sort. This worldview subsequently degrades the status of science into an extended apparatus for confirmation bias, filling out the details of a committed ideology rather than providing us with new facts about the world. There can be no objectivity outside of a worldview.

Even though philosophy took a naturalistic turn with the philosopher W. V. O. Quine, seeing itself as integrating with and working alongside science, the criticisms of science as an establishment that emerged in the 1950s and 60s (and earlier) often disturbed its unique epistemic privilege in society: ideas that theory is underdetermined by evidence, that scientific progress is nonrational, that unconfirmed auxiliary hypotheses are required to conduct experiments and form theories, and that social norms play a large role in the process of justification all damaged the mythos of science as an exemplar of human rationality.

But once we have dismantled Science, what do we do next? Some critics have held up Nazi German eugenics and phrenology as examples of the damage that science can do to society (never mind that we now consider them pseudoscience). Yet Lysenkoism and the history of astronomy and cosmology indicate that suppressing scientific discovery can be deleterious too. Austrian physicist and philosopher Paul Feyerabend instead wanted a free society — one where science had equal power with older, more spiritual forms of knowledge. He thought the model of rational science exemplified by Sir Karl Popper was inapplicable to the real machinery of scientific discovery, and that the only methodological rule we could impose on science was: “anything goes.”

Feyerabend’s views are almost a caricature of postmodernism, although he denied the label “relativist,” opting instead for “philosophical Dadaist.” In his pluralism, there is no hierarchy of knowledge, and state power can even be introduced when necessary to break up scientific monopoly. Feyerabend, contra scientists like Richard Dawkins, thought that science was like an organized religion and therefore supported a separation of state and science just as we have a separation of church and state. Here is a move forward for a society that has started distrusting the scientific method… but if this is what we should do post-science, it’s still unclear how to proceed. There are still queries for anyone who loathes the hegemony of science in the Western world.

For example, how does the investigation of crimes proceed without strict adherence to the latest scientific protocol? Presumably, Feyerabend didn’t want to privatize law enforcement, but science and the state are very intricately connected. In 2005, Congress authorized the National Academy of Sciences to form a committee and conduct a comprehensive study on contemporary forensic science to identify community needs, evaluating laboratory executives, medical examiners, coroners, anthropologists, entomologists, odontologists, and various legal experts. Forensic science — scientific procedure applied to the field of law — exists for two practical goals: exoneration and prosecution. However, the Forensic Science Committee revealed that severe issues riddle forensics (e.g., bite mark analysis), and in their list of recommendations the top priority is establishing an independent federal entity to devise consistent standards and enforce regular practice.

For top scientists, this sort of centralized authority seems necessary to produce reliable work, and it entirely disagrees with Feyerabend’s emphasis on methodological pluralism. Barack Obama formed the National Commission on Forensic Science in 2013 to further investigate problems in the field, and only recently Attorney General Jeff Sessions said the Department of Justice will not renew the commission. It’s unclear now what forensic science will do to resolve its ongoing problems, but what is clear is that the American court system would fall apart without the possibility of appealing to scientific consensus (especially in forensics), and that the only foreseeable way to solve the existing issues is through stricter methodology. (Just as with McDonald’s, there are enforced standards so that the product is consistent wherever one orders.) More on this later.

So it doesn’t seem to be in the interest of things like due process to abandon science or completely separate it from state power. (It does, however, make sense to move forensic laboratories out from under direct administrative control, as the NAS report notes in Recommendation 4. This is, however, specifically to reduce bias.) In a culture where science is viewed as irrational, Eurocentric, ad hoc, and polluted with ideological motivations — or where Reason itself is seen as a particular hegemonic, imperial device to suppress different cultures — not only do we not know what to do; when we try to do things, we lose elements of our civilization that everyone agrees are valuable.

Although Aristotle separated pathos, ethos, and logos (adding that all informed each other), later philosophers like Feyerabend thought of reason as a sort of “practice,” with a history and connotations like any other human activity, falling far short of the sublime. One could no more justify reason outside of its European cosmology than the sacrificial rituals of the Aztecs outside of theirs. To communicate across paradigms, participants have to understand each other on a deep level, even becoming entirely new persons. When debates happen, they must happen on a principle of mutual respect and curiosity.

From this one can detect a bold argument for tolerance. Indeed, Feyerabend was heavily influenced by John Stuart Mill’s On Liberty. Maybe, in a world disillusioned with scientism and objective standards, the next cultural move is multilateral acceptance and tolerance for each other’s ideas.

This has not been the result of postmodern revelations, though. The 2016 election featured the victory of one psychopath over another, from two camps utterly consumed with vitriol for each other. Between Bernie Sanders, Donald Trump, and Hillary Clinton, Americans drifted toward radicalization as the only establishment candidate seemed to offer the same noxious, warmongering mess of the previous few decades of administration. Politics has only polarized further since the inauguration. The alt-right, a nearly perfect symbol of cultural intolerance, is regular news for mainstream media. Trump acolytes physically brawl with black bloc Antifa in the same city as the 1960s Free Speech Movement. It seems to be the worst at universities. Analytic feminist philosophers asked for the retraction of a controversial paper, seemingly without reading it. Professors even get involved in student disputes, at Berkeley and more recently at Evergreen. The names each side uses to attack the other (“fascist,” most prominently) — sometimes accurate, usually not — display a political divide between groups that increasingly refuse to argue their own side and prefer silencing their opposition.

There is not a tolerant left or tolerant right any longer, in the mainstream. We are witnessing only shades of authoritarianism, eager to destroy each other. And what is obvious is that the theories and tools of the postmodernists (post-structuralism, social constructivism, deconstruction, critical theory, relativism) are as useful for reactionary praxis as they are in their usual role in left-wing circles. Says Casey Williams in the New York Times: “Trump’s playbook should be familiar to any student of critical theory and philosophy. It often feels like Trump has stolen our ideas and weaponized them.” The idea of the “post-truth” world originated in postmodern academia. It is the monster turning against Doctor Frankenstein.

Moral (cultural) relativism in particular promises only the rejection of our shared humanity. It paralyzes our judgment on female genital mutilation, flogging, stoning, human and animal sacrifice, honor killings, caste, and the underground sex trade. The afterbirth of Protagoras, cruelly resurrected once again, does not promise trials at Nuremberg, where the Allied powers appealed to something above and beyond written law to exact judgment on mass murderers. It does not promise justice for the ethnic cleansers in Srebrenica, as the United Nations is helpless to impose a tribunal from outside Bosnia-Herzegovina. Today, this moral pessimism laughs at the phrase “humanitarian crisis,” and at Western efforts to change the material conditions of fleeing Iraqis, Afghans, Libyans, Syrians, Venezuelans, North Koreans…

In the absence of universal morality, and the introduction of subjective reality, the vacuum will be filled with something much more awful. And we should be afraid of this because tolerance has not emerged as a replacement. When Harry Potter first encounters Voldemort face-to-scalp, the Dark Lord tells the boy “There is no good and evil. There is only power… and those too weak to seek it.” With the breakdown of concrete moral categories, Feyerabend’s motto — anything goes — is perverted. Voldemort has been compared to Plato’s archetype of the tyrant from the Republic: “It will commit any foul murder, and there is no food it refuses to eat. In a word, it omits no act of folly or shamelessness” … “he is purged of self-discipline and is filled with self-imposed madness.”

Voldemort is the Platonic appetite in the same way he is the psychoanalytic id. Freud’s das Es is able to admit of contradictions, to violate Aristotle’s fundamental laws of logic. It is so base, and removed from the ordinary world of reason, that it follows its own rules we would find utterly abhorrent or impossible. But it is not difficult to imagine that the murder of evidence-based reasoning will result in Death Eater politics. The ego is our rational faculty, adapted to deal with reality; with the death of reason, all that exists is vicious criticism and unfettered libertinism.

Plato predicts Voldemort with the image of the tyrant, and also with one of his primary interlocutors, Thrasymachus, when the sophist opens with “justice is nothing other than the advantage of the stronger.” The one thing Voldemort admires about The Boy Who Lived is his bravery, the trait they share in common. This trait is missing in his Death Eaters. In the fourth novel the Dark Lord is cruel to his reunited followers for abandoning him and losing faith; their cowardice reveals the fundamental logic of his power: his disciples are not true devotees, but opportunists, weak on their own merit and drawn like moths to every Avada Kedavra. Likewise, students flock to postmodern relativism to justify their own beliefs when the evidence is an obstacle.

Relativism gives us moral paralysis, allowing in darkness. Another possible move after relativism is supremacy. One look at Richard Spencer’s Twitter demonstrates the incorrigible tenet of the alt-right: the alleged incompatibility of cultures, ethnicities, and races, the idea that different groups of humans simply cannot get along together. The Final Solution is not about extermination anymore but segregated nationalism. Spencer’s audience is almost entirely men who loathe the current state of things, who share far-reaching conspiracy theories, and who despise globalism.

The left, too, imagines conspiracies: a bourgeois corporate conglomerate that enlists economists and brainwashes through history books to normalize capitalism; for this reason they despise globalism as well, saying it impoverishes other countries or destroys cultural autonomy. For the alt-right, it is the Jews, and George Soros, who control us; for the burgeoning socialist left, it is the elites, the one percent. Our minds are not free; fortunately, they will happily supply Übermenschen, in the form of statesmen or critical theorists, to save us from our degeneracy or our false consciousness.

Without the commitment to reasoned debate, tribalism has continued the polarization and the loss of humility. Each side also accepts science selectively, if they do not question its very justification. The privileged status that the “scientific method” maintains in polite society is denied when convenient; whether it is climate science, evolutionary psychology, sociology, genetics, biology, anatomy or, especially, economics, one side or the other rejects it outright, without studying the material enough to immerse oneself in what could be promising knowledge (as Feyerabend urged, and as the breakdown of rationality could have encouraged). And ultimately, equal protection, one tenet of individualist thought that allows for multiplicity, is entirely rejected by both: we should be treated differently as humans, often because of the color of our skin.

Relativism and carelessness about standards and communication have given us supremacy and tribalism. They have divided rather than united. Voldemort’s chaotic violence is one possible outcome of rejecting reason as an institution, and it beckons to either political camp. Are there any examples in Harry Potter of the alternative, Feyerabendian tolerance? Not quite. However, Hermione Granger serves as the Dark Lord’s foil, and gives us a model of reason that is not as archaic as the enemies of rationality would like to suggest. In Against Method (1975), Feyerabend compares different ways rationality has been interpreted alongside practice: in an idealist way, in which reason “completely governs” research, or a naturalist way, in which reason is “completely determined by” research. Taking elements of each, he arrives at an intersection in which one can change the other, both “parts of a single dialectical process.”

“The suggestion can be illustrated by the relation between a map and the adventures of a person using it or by the relation between an artisan and his instruments. Originally maps were constructed as images of and guides to reality and so, presumably, was reason. But maps, like reason, contain idealizations (Hecataeus of Miletus, for example, imposed the general outlines of Anaximander’s cosmology on his account of the occupied world and represented continents by geometrical figures). The wanderer uses the map to find his way but he also corrects it as he proceeds, removing old idealizations and introducing new ones. Using the map no matter what will soon get him into trouble. But it is better to have maps than to proceed without them. In the same way, the example says, reason without the guidance of a practice will lead us astray while a practice is vastly improved by the addition of reason.” p. 233

Christopher Hitchens pointed out that Granger sounds like Bertrand Russell at times, as in this quote about the Resurrection Stone: “You can claim that anything is real if the only basis for believing in it is that nobody has proven it doesn’t exist.” Granger is often the embodiment of anemic analytic philosophy, the institution of order, a disciple of the Ministry of Magic. However, though initially law-abiding, she quickly learns with Potter and Weasley the pleasures of rule-breaking. From the first book onward, she is constantly at odds with the de facto norms of the school, becoming more rebellious as time goes on. It is her levelheaded foundation, combined with her ability to transgress rules, that gives her an astute semi-deontological, semi-utilitarian calculus capable of saving the lives of her friends from the dark arts, and of helping to defeat the tyranny of Voldemort foretold by Socrates.

Granger presents a model of reason like Feyerabend’s map analogy. Although pure reason gives us an outline of how to think about things, it is not a static or complete blueprint, and it must be fleshed out with experience, risk-taking, discovery, failure, loss, trauma, pleasure, offense, criticism, and occasional transgressions past the foreseeable limits. Adding these elements to our heuristics means that we explore a more diverse account of thinking about things and moving around in the world.

When reason is increasingly seen as patriarchal, Western, and imperialist, the only thing consistently offered as a replacement is something like lived experience. Some form of this idea is at least a century old, going back to Husserl, and still modest by reason’s Greco-Roman standards. Yet lived experience has always been pivotal to reason; we only need adjust our popular model. And we can see that we need not reject one or the other entirely. Another critique of reason says it is foolhardy, limiting, antiquated; this is a perversion of its abilities, and plays to justify the first criticism. We can see that there is room within reason for other pursuits and virtues, picked up along the way.

The emphasis on lived experience, which predominantly comes from the political left, is also antithetical to the cause of “social progress.” Those sympathetic to social theory, particularly the cultural leakage of the strong programme, are constantly torn between claiming (a) that science is irrational, and can thus be countered by lived experience (or whatnot), or (b) that science may be rational but reason itself is a tool of patriarchy and white supremacy and cannot be universal. (If you haven’t seen either of these claims very frequently, and think them a strawman, you have not been following university protests and editorials. Or radical Twitter: ex., ex., ex., ex.) Of course, as in Freud, this is an example of kettle logic: the signal of a very strong resistance. We see, though, that we need not accept nor deny these claims and lose anything. Reason need not be stagnant nor all-pervasive, and indeed we’ve been critiquing its limits since 1781.

Outright denying the process of science — whether the model is conjectures and refutations or something less stale — ignores that there is no single uniform body of science. Denial also dismisses the most powerful tool for making difficult empirical decisions. Michael Brown’s death was instantly a political affair, with implications for broader social life. The event has completely changed the face of American social issues. The first autopsy report, from St. Louis County, indicated that Brown was shot at close range in the hand, during an encounter with Officer Darren Wilson. The second independent report commissioned by the family concluded the first shot had not in fact been at close range. After the disagreement with my cousin, the Department of Justice released the final investigation report, and determined that material in the hand wound was consistent with gun residue from an up-close encounter.

Prior to the report, the best evidence available as to what happened in Missouri on August 9, 2014, was the ground footage after the shooting and testimonies from the officer and Ferguson residents at the scene. There are two ways to approach the incident: reason or lived experience. The latter route leads to ambiguities. Brown’s friend Dorian Johnson and another witness reported that Officer Wilson fired his weapon first at range, under no threat, then pursued Brown out of his vehicle, until Brown turned with his hands in the air to surrender. However, before the St. Louis grand jury, half a dozen (African-American) eyewitnesses corroborated Wilson’s account: that Brown did not have his hands raised and was moving toward Wilson. In which direction does “lived experience” tell us to go, then? A new moral maxim — the duty to believe people — will lead to no non-arbitrary conclusion. (And a duty to “always believe x,” where x is a closed group, e.g. victims, will put the cart before the horse.) It appears that, in a case like this, treating evidence as objective is the only solution.

Introducing ad hoc hypotheses, e.g., that the Justice Department and the county examiner are corrupt, shifts the approach into one that uses induction, and leaves behind lived experience (and also ignores how forensic anthropology is actually done). This is the introduction of, indeed, scientific standards. (By looking at incentives for lying, it might also employ findings from public choice theory, psychology, behavioral economics, etc.) So the personal experience method creates unresolvable ambiguities, and presumably will eventually grant some allowance to scientific procedure.
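To make that last point concrete, here is a toy Bayesian update in Python. Every probability below is invented for illustration and drawn from nothing in the actual case; the sketch only shows why weak, conflicting testimony washes out while one reliable physical signal moves the conclusion decisively:

def posterior(prior, likelihood_if_true, likelihood_if_false):
    """P(hypothesis | evidence) via Bayes' rule for a binary hypothesis."""
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

# Hypothesis: the close-range-encounter account is correct (toy numbers).
p = 0.5                     # start agnostic between the two accounts
p = posterior(p, 0.6, 0.4)  # one witness supports it (weak: witnesses err)
p = posterior(p, 0.4, 0.6)  # another witness contradicts it
print(p)                    # back to 0.5 -- conflicting testimony cancels out
p = posterior(p, 0.9, 0.1)  # residue in the wound: a far more reliable signal
print(p)                    # 0.9 -- the physical evidence does the work

The value of “scientific standards” in this sketch is simply that the residue test has likelihoods far from 50/50, which no quantity of contradictory eyewitnesses can supply.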

If we don’t posit a baseline rationality — Hermione Granger pre-Hogwarts — our ability to critique things at all disappears. Utterly rejecting science and reason, denying objective analysis in the presumption of overriding biases, breaking down naïve universalism into naïve relativism — these are paths to paralysis on their own. More than that, they are hysterical symptoms, because they often create problems out of thin air. Recently, a philosopher and a mathematician submitted a hoax paper, Sokal-style, to a peer-reviewed gender studies journal in an attempt to demonstrate what they see as a problem “at the heart of academic fields like gender studies.” The idea was to write a nonsensical, postmodernish essay, and if the journal accepted it, that would indicate the field is intellectually bankrupt. Andrew Smart at Psychology Today instead wrote of the prank: “In many ways this academic hoax validates many of postmodernism’s main arguments.” And although Smart makes some informed points about problems in scientific rigor as a whole, he doesn’t hint at what the validation of postmodernism entails: should we abandon standards in journalism and scholarly integrity? Is the whole process of peer review functionally untenable? Should we start embracing papers written without any intention of making sense, to look at knowledge concealed below the surface of jargon? The paper, “The conceptual penis,” doesn’t necessarily condemn the whole of gender studies; but, against Smart’s reasoning, we do in fact know that counterintuitive or highly heterodox theory is considered perfectly average.

There were other attacks on the hoax, from Slate, Salon, and elsewhere. Criticisms, often valid for the particular essay, typically didn’t move the conversation far enough. There is much more to this discussion. A 2006 paper from the International Journal of Evidence Based Healthcare, “Deconstructing the evidence-based discourse in health sciences,” called the use of scientific evidence “fascist.” In the abstract the authors state their allegiance to the work of Deleuze and Guattari. Real Peer Review, a Twitter account that collects abstracts from scholarly articles, regularly features essays from departments of women and gender studies, including a recent one from a Ph.D. student wherein the author identifies as a hippopotamus. Sure, the recent hoax paper doesn’t really say anything, but it intensifies this much-needed debate. It brings out these two currents — reason and the rejection of reason — and demands a solution. And we know that lived experience is going to be often inconclusive.

Opening up lines of communication is a solution. One valid complaint is that gender studies seems too insulated, in a way in which chemistry, for instance, is not. Critiquing a whole field does ask us to genuinely immerse ourselves first, and this is a step toward tolerance: it is a step past the death of reason and the denial of science. It is a step that requires opening the bubble.

The modern infatuation with human biases, as well as Feyerabend’s epistemological anarchism, upsets our faith in prevailing theories and in the idea that our policies and opinions should be guided by the latest discoveries from an anonymous laboratory. Putting politics first and assuming subjectivity is all-encompassing, we move past objective measures to compare belief systems and theories. However, isn’t the whole operation of modern science designed to work within our means? The system built by Kant set limits on human rationality, and most science is aligned with an acceptance of fallibility. As Harvard cognitive scientist Steven Pinker says, “to understand the world, we must cultivate work-arounds for our cognitive limitations, including skepticism, open debate, formal precision, and empirical tests, often requiring feats of ingenuity.”

Pinker goes so far as to advocate for scientism. Others need not; but we must understand an academic field before utterly rejecting it. We must think we can understand each other, and live with each other. We must think there is a baseline framework that allows permanent cross-cultural correspondence — a shared form of life which means a Ukrainian can interpret a Russian and a Cuban an American. The rejection of Homo sapiens commensurability, championed by people like Richard Spencer and those in identity politics, is a path to segregation and supremacy. We must reject Gorgian nihilism about communication, and the Presocratic relativism that confines our moral judgments to inert subjectivity. From one Weltanschauung to the next, our common humanity — which endures class, ethnicity, sex, gender — allows open debate across paradigms.

In the face of relativism, there is room for a nuanced middle ground between Pinker’s scientism and the rising anti-science, anti-reason philosophy; Paul Feyerabend has sketched out a basic blueprint. Rather than condemning reason as a Hellenic germ of Western cultural supremacy, we need only adjust the theoretical model to incorporate the “new America of knowledge” into our critical faculty. It is the raison d’être of philosophers to present complicated things in a more digestible form; to “put everything before us,” as Wittgenstein says. Hopefully, people can reach their own conclusions, and embrace the communal human spirit as they do.

However, this may not be so convincing. It might be true that we have a competition of cosmologies: one that believes in reason and objectivity, one that thinks reason is callow and all things are subjective. These two perspectives may well be incommensurable. If I try to defend reason, I invariably must appeal to reasons, and thus argue circularly. If I try to claim “everything is subjective,” I make a universal statement, and simultaneously contradict myself. Between begging the question and contradicting oneself, there is not much indication of where to go. Perhaps we just have to look at history, note the results of either course when it has been applied, and treat that record as a rhetorical indication of which path to take.

Where is the optimal marriage market?

I have spent the past few weeks playing around with where the optimal marriage market is and thought NoL might want to offer their two cents.

At first my instinct was that a large city like New York or Tokyo would be best. If you have a larger market, your chances of finding the best mate should also increase. This assumes that transaction costs are minimal, though. I have no doubt that larger cities present the possibility of a better match being present in the dating pool.

However, it also means that the cost of sorting through the bad matches is higher. There is also the possibility that you have already met your best match, but turned them down in the false belief that someone better was out there. It’s hard enough to buy a car that we will use for only a few years, due to the lemon problem. Finding a spouse to spend decades with is infinitely harder.

In comparison, in a small town information about potential matches is relatively easy to find. If you’re from a small town and have known most people since their school days, you have better information about the type of person they are. What makes someone a fun date is not always the same thing that makes them a good spouse. You may be constrained in who you have in your market, but you can avoid lemons more easily.

Is the optimal market then a mid-sized city like Denver or Kansas City? Large enough to give you a large pool of potential matches, but small enough that you can sort through it with minimal costs? A rough simulation of this tradeoff is sketched below.
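As a toy illustration only, here is a minimal sketch in Python. All of the numbers are invented: match quality is drawn uniformly at random, each date carries a fixed cost, and the classic “observe the first n/e candidates, then take the next one who beats them” stopping rule stands in for how people actually search.

import math
import random

def expected_net_match(pool_size, cost_per_date=0.01, trials=5000):
    """Average quality of the chosen partner minus total search costs."""
    total = 0.0
    cutoff = max(1, round(pool_size / math.e))  # observe-then-commit threshold
    for _ in range(trials):
        pool = [random.random() for _ in range(pool_size)]
        benchmark = max(pool[:cutoff])
        chosen, dates = pool[-1], pool_size  # default: settle for the last one
        for i in range(cutoff, pool_size):
            if pool[i] > benchmark:          # first candidate beating the benchmark
                chosen, dates = pool[i], i + 1
                break
        total += chosen - cost_per_date * dates
    return total / trials

for n in (5, 20, 50, 200):
    print(n, round(expected_net_match(n), 3))

With any positive cost per date, the expected net payoff in this setup stops rising and eventually falls as the pool grows: the marginal candidate adds less to the best match found than the marginal dates he or she costs. That is the mid-sized-city intuition.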

P.S. A friend has pointed out that cities/towns with large student populations or military bases are double-edged swords for those looking to marry. On the one hand, they supply large numbers of dating-age youths. On the other hand, you would not want to marry a 19-year-old who is still figuring out what they want to major in.

Where is the line between sympathy and paternalism?

In higher-ed news, two types of terrifying stories come up pretty frequently: free speech stories, and Title IX stories. You’d think these stories would only be relevant to academics and students, but they’re not. These issues are certainly very important for those of us who hang out in ivory towers. But those towers shape the debate–and unquestioned assumptions–that determine real-world policy in boardrooms and capitols. This is especially true in a world where a bachelor’s degree is the new GED.

The free speech stories have gotten boring because they all take the following form: group A doesn’t want to let group B talk about opinion b so they act like a bunch of jackasses. Usually this takes place at a school for rich kids. Usually those kids are majoring in something that will give them no marketable skills.

The Title IX stories are Kafkaesque tales where a well-intentioned policy (create a system to protect people in colleges from sexism and sexual aggression) turns into a kangaroo court that allows terrible people to ruin other people’s lives. (I hasten to add, I’m sure Title IX offices do plenty of legitimately great work.)

A great article in the Chronicle gives an inside look at one of these tribunals. For the most part it’s chilling. Peter Ludlow had been accused of sexual assault, but the claims weren’t terribly credible. As far as I can tell (based only on this article) he did some things that should raise some eyebrows, but nothing genuinely against any rules. Nonetheless, the accusations were a potential PR and liability problem for the school so he had to go, regardless of justice.

The glimmer of hope comes with the testimony of Jessica Wilson. She managed to shake them out of their foregone conclusion and got them to consider that women above the age of consent can be active participants in their own lives instead of victims waiting to happen. Yes, bad things happen to women, but that’s not enough to jump to the conclusion that all women are victims and all men are aggressors.

The big question at the root of these types of stories is how much responsibility we ought to take for our lives.

Free speech: Should I be held responsible for saying insensitive (or unpatriotic) things? Who would enforce that obligation? Should I be held responsible for dealing with the insensitive things other people might say? Or should I even be allowed to hear what other people might say, given that I can’t take responsibility for evaluating it “critically” and coming to the right conclusion?

Title IX: Should women be responsible for their own protection, or is that akin to blaming the victim? We’ve gone from trying to create an environment where everyone can contribute to taking away agency. In doing so we’ve also created a powerful mechanism that can be abused. This is bad because of the harm it does to the falsely accused, but it also has the potential to delegitimize the claims of genuine victims and to fracture society. But our forebears weren’t exactly saints when it came to treating each other justly.

Where is the line between helping a group and infantilizing them?

At either end of a spectrum I imagine caricature versions of a teenage libertarian (“your problems are your own, suck it up while I shout dumb things at you”) and a social justice warrior (“it’s everyone else’s fault! Let’s occupy!”). Let’s call those end points Atomistic Responsibility and Social Responsibility. More sarcastically, we could call them Robot and Common Pool Responsibility. Nobody is actually at these extreme ends (I hope), but some people get close.

Either one seems ridiculous to anyone who doesn’t already subscribe to that view, but both have a kernel of truth. Fair or not, you have to take responsibility for your life. But we’re all indelibly shaped by our environment.

Schools have historically adopted a policy towards the atomistic end, but have been trending in the other direction. I don’t think this is universally bad, but I think those values cannot properly coexist within a single organization.

We can imagine some hypothetical proper point on the Responsibility Spectrum, but without a way to objectively measure virtue, the position of that point–the line between sympathy and paternalism–is an open question. We need debate to better position and re-position that line. I would argue that Western societies have been doing a pretty good job of moving that line in the right direction over the last 100 years (although I disagree with many of the ways our predecessors have chosen to enforce that line).

But here’s the thing: we can’t move in the right direction without getting real-time feedback from our environments. Without variation in the data, we can’t draw any conclusions. What we need, more than a proper split of responsibility, is a range of possibilities being constantly tinkered with and explored.

We need a diversity of approaches. This is why freedom of speech and freedom of association are so essential. In order to get this diversity, we need federalism and polycentricity–stop trying to impose order from the top down on a grand scale (“think globally, act locally”), and let order be created from the bottom up. Let our organizations–businesses, churches, civic associations, local governments and special districts–adapt to their circumstances and the wishes of their stakeholders.

Benefiting from this diversity requires open minds and epistemic humility. We stand on the shore of a vast mysterious ocean. We’ve waded a short distance into the water and learned a lot, but there’s infinitely more to learn!

(Sidenote: looking for that Carl Sagan quote, I came across this gem:

People are not stupid. They believe things for reasons. The last way for skeptics to get the attention of bright, curious, intelligent people is to belittle or condescend or to show arrogance toward their beliefs.

That about sums up my approach to discussing these sorts of issues. We’d all do better to occasionally give our opponents the benefit of the doubt and see what we can learn from them. Being a purist is a great way to structure your thought, but empathy for our opponents is how we make our theories strong.)

A short note on monarchical nostalgia

Kingship organizes everything around a high centre. Its legitimacy derives from divinity, not from populations, who, after all, are subjects, not citizens. In the modern conception, state sovereignty is fully, flatly, and evenly operative over each square centimetre of a legally demarcated territory. But in the older imagining, where states were defined by centres, borders were porous and indistinct, and sovereignties faded imperceptibly into one another. Hence, paradoxically enough, the ease with which pre-modern empires and kingdoms were able to sustain their rule over immensely heterogeneous, and often not even contiguous, populations for long periods of time. (19)

This passage, from Benedict Anderson’s much-cited book on nationalism (Imagined Communities), does a good job of summarizing what the world looked like politically prior to the Industrial Revolution. It does a less good job of summarizing what monarchy is, politically (see this or this), but does do a great job of explaining why monarchies were able to exert governance over populations that were linguistically, religiously, and ethnically diverse.

This passage does not explain, of course, why paleolibertarians are so enamored with monarchy, or why some non-paleo libertarians often write nostalgically about imperial pasts. (I doubt Anderson had intra-libertarian squabbles in mind when he wrote Imagined Communities.) Nonetheless, it is a great way to explore why libertarians have nostalgia for monarchy and empire.

Let’s start from the top, though. Libertarians don’t like nation-states because of nationalism, because of borders with taxes and restrictions on movement of goods and people, and because of the power that governments can exert over well-defined spaces of territory. So, instead of delving into the intricacies of why nation-states are around, some libertarians reach back to an older age, where “borders were porous and indistinct,” state sovereignty was not the end game of geopolitics, and governments had ways other than nationalist propaganda to bring diverse populations to heel. So on the surface, nationalism was non-existent, borders were open, and diverse groups of people lived together in relative harmony under one roof. What libertarian wouldn’t like that? Fred Foldvary’s post on restoring the Ottoman Empire is a good example of this kind of historical naivety. (Barry and Jacques have both written good rebuttals to this kind of wishful thinking.)

Historical naivety is one thing, but the arguments of so-called “anarcho-monarchists” are quite another. Arguing that monarchy is anarchy because monarchs don’t reign over a nation-state (instead they rule over the private property of the crown) is disingenuous at best, and nefarious at worst. Royal property and private property are two different things (“L’état, c’est moi”). This argument leads directly to the awful, embarrassing arguments of Hans-Hermann Hoppe and his acolytes, who have a bad habit of claiming that anarcho-monarchism is somehow libertarian. I’m going to skip over the specifics of their arguments (Zak has done great work on this topic, but in short Hoppeans claim that anarcho-monarchist societies would be able to physically remove undesirable people from their societies; “undesirables” mostly mean socialists, homosexuals, and non-Europeans), and instead point out that Hoppe and company are simply wrong about what a monarchy actually is.

Monarchies had porous borders, they constantly warred against their neighbors (sometimes for “interests of state”), and their populations were polyglot and illiterate. I haven’t spent any time reading Hoppe, so maybe I am treating him unfairly here and he is perhaps an advocate of a new type of monarchy; but as a student of Habermas, Hoppe presumably likes to use history as a guide for understanding and explaining the world around him. How on earth could he be so wrong about what monarchy actually is, unless he is being disingenuous about his whole anarcho-monarchist utopia?


On a completely unrelated note, Benedict Anderson’s book on nationalism is published by Verso Books, rather than a traditional academic press (such as Princeton University Press or University of California Press). Verso Books is a left-wing publishing house dedicated to radical critiques of everything non-leftist, so I find it a bit odd that Anderson’s book has come to be so well-cited in the academic literature on a number of topics. It’s a great book, don’t get me wrong, but I think its popularity, despite being an explicitly ideological book rather than an academic one, explains much of the strife currently happening on campuses across the West regarding freedom of speech and freedom of assembly.

Pride and Subsidies

Freakonomics had an episode on the dramatic impact of subsidies on the visual effects (VFX) industry. Long story short: 1) VFX companies operate on razor-thin margins, 2) the industry chases subsidies from competing local governments–Canada and London are currently important locations, 3) Californian politicians want to bring these jobs back to LA, but doing so would probably be a net burden.

(Let’s put aside the issue of the state of California trying to play central planner by effectively creating different tax rates for different industries. That’s a bad idea for reasons we can explore later.)

Putting yourself in the head of a Californian, something about the policy feels right (maybe not for the typical NOL reader, but probably for the median voter). I’m sure you could convince the median voter that these subsidies are a bad idea, economically. But even so, I’d be willing to bet that you’d still get significant support.

I’m confident that if you were to talk this issue over with a representative sample of California voters–or with voters in Y region facing a similar upheaval in X industry–you could convince them of the probable negative impact of such a policy and still see many voters at least weakly supporting it. Why? Because being able to point to a movie and say “that awesome explosion was made in my backyard” is worth some degree of sacrifice for these people.

Perhaps people want our government to give us something to be proud of (God knows they give us enough things to be ashamed of!). Perhaps people have some latent willingness to pay to be able to say that some high status industry is in their community/city/state/country.

We like pride, but it costs us. This puts us squarely in the domain of economics. How do we figure out how to make the trade-off between pride and the price we must pay for it? Some cases seem easy, at least in hindsight–the sacrifice of the civil rights movement was a small price to pay for the pride generated–but cases like the VFX industry aren’t so obvious, even though the stakes are still high.

I don’t think we’re likely to be able to figure out the bill. We can be proud of NASA, movies, the post office, and whatever else. But how much of the cost can we attribute to engaging in activities that make us proud? We get the same issue in markets. I have more than brand loyalty for Honda (the maker of my motorcycle); I’m also proud to associate with Honda as an innovative company with a history of liberating the world’s poor.

A clever statistician or economist could estimate some important facts about how people tend to make these trade-offs. Doing so could help us make better decisions, but it can’t ultimately replace our own judgment.
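To give a sense of what such an estimate might look like: below is a minimal sketch, with every number invented for illustration (the exponential distribution of “pride values,” the tax levels, the size of the electorate). The idea is simply that if we observed support for a subsidy at several different per-head costs–across towns or across years–we could back out roughly what the median voter is willing to pay for the pride good.

import random

random.seed(42)

def support_share(tax_per_head, voters=10000, mean_pride=30.0):
    # Each voter privately values the pride good; values are drawn from an
    # exponential distribution with a made-up mean of $30 per head.
    # A voter supports the subsidy if their pride value exceeds the tax.
    yes = sum(1 for _ in range(voters)
              if random.expovariate(1 / mean_pride) > tax_per_head)
    return yes / voters

# Where support crosses 50% is (roughly) the median voter's willingness
# to pay. For an exponential with mean 30 that is 30 * ln(2), about $21.
for tax in (5, 10, 20, 30, 50, 80):
    print(f"tax ${tax:>2}/head -> support {support_share(tax):.0%}")

A real study would have to untangle pride from the jobs and spillovers people also expect from a subsidy, but the bookkeeping logic is the same.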

Given the uncertainty we face, we really have to make a decision about whether to err on the side of over- or under-provision of pride goods–and this is true in a variety of settings.

I suspect that the “let 1000 flowers bloom” approach is the appropriate one here. We don’t want one Secretary of Pride deciding to err on the side of over-provision, with the result that a bunch of children die from preventable causes so that we can all feel proud about how cool the latest domestically produced Fast and Furious movie is going to be. On the other hand, it would be a tragedy if slavery were never ended because ending it would interrupt business as usual.

Markets, civil society, and government face different sorts of pros and cons with respect to how they might make these trade-offs. Arguing about them could create a new academic discipline at the intersection of ethics, economics, and sociology.

In all three spheres, there will be many very bad decisions made. But if you aren’t free to be wrong, you aren’t free. The question to ask is what sort of pride goods will tend to survive, and in which spheres?

What we can say for sure is that private, voluntary exchange and cooperation (free markets and civil society) at least allow us to choose our associations. And they require us to choose, and choose again on a regular basis. Which nation we belong to, by contrast, is mostly a matter of luck. Where we live tends not to change much. Voting with your feet is costly, so we should expect it to be that much harder to dismantle big mistakes. The political process routinely results in outcomes we’re ashamed of (about half of voters are ashamed of the results every presidential election!).

There aren’t markets in pride so it’s hard to know how the benefits compare to the costs. But we can (and do) exhibit pride in markets. We should probably do more of it. And perhaps we should also be more skeptical of government, even though we normally think of it as a provider of pride goods. On the margin, anyways, I think this is a good direction for most people to move. Be proud of your community because the people have whatever unique traits they do. Be proud of the brands you buy from for their contributions to the state of the art. Be proud of your local sports team.

Bruce Lee’s Application Of Taoist Philosophy In Jeet Kune Do


Bruce Lee was born on November 27, 1940 and died on July 20, 1973. Even though he was just 32 upon his death, he had achieved so much in his limited lifetime. He was recognized by Time magazine as one of the 100 most influential people of the 20th century.[1] He was a cha-cha champion in Hong Kong at age 18, a world-renowned martial artist, and a Chinese actor who was not only immensely popular in Asia, but who also made his breakthrough in Hollywood at a time when oriental actors were rarely accepted for lead roles. What is less known among the public is his keen interest in philosophy, a subject he studied at the University of Washington. About where his interest in philosophy came from, he wrote:

My majoring in philosophy was closely related to the pugnacity of my childhood. I often asked myself these questions: What comes after victory? Why do people value victory so much? What is ‘glory’? What kind of ‘victory’ is ‘glorious’?[2]

In one of my previous posts, I discussed the similarities between the libertarian concept of Spontaneous Order and the Taoist concept of the Tao. In this post I will discuss the application of Taoist philosophy in Jeet Kune Do (‘the way of the intercepting fist’), the martial art that Bruce Lee founded in his mid-20s. I will identify several Taoist aspects that form the philosophical foundation of Jeet Kune Do. First, however, I will give an anecdote from his wife, Linda Cadwell, on Bruce Lee’s initial motivation to develop Jeet Kune Do at all.

Bruce Lee’s initial motivation for Jeet Kune Do
Bruce Lee started teaching martial arts to Westerners in his newly founded Jun Fan Gung Fu Institute, a training gym in Oakland, California. Then by late 1964, Bruce Lee received a letter with the signatures of the most important elder Chinese martial arts masters in San Francisco who did not

look favourably on Bruce’s teaching martial art to Westerners, or actually to anyone who was not Chinese. So strongly did they harbour this historically bound belief, that a formal challenge was issued to Bruce, insisting that he participate in a confrontation, the result of which would decide whether he could continue to teach the ‘foreign devils’. (Cadwell, 1998, p. 8)

Without hesitation, Bruce Lee accepted the challenge. Linda Cadwell remembers the fight that followed as a pivotal point in Bruce Lee’s life:

Within moments of the initial clash, the Chinese gung fu man [Bruce Lee’s contender] had proceeded to run in a circle around the room, out a door that led to a small back room, then in through another door to the main room. He completed this circle several times, with Bruce in hot pursuit. Finally, Bruce brought the man to the floor, pinning him helplessly, and shouted (in Chinese), ‘Do you give up?’ After repeating this question two or three times, the man conceded, and the San Francisco party departed quickly. The entire fight lasted about three minutes, leaving James and me ecstatic that the decisive conquest was so quickly concluded. Not Bruce. Like it was yesterday, I remember Bruce sitting on the back steps of the gym, head in hands, despairing over his inability to finish off the opponent with efficient technique, and the failure of his stamina when he attempted to capture the running man. For what probably was the first time in his life, Bruce was winded and weakened. Instead of triumphing in his win, he was disappointed that his physical condition and gung fu training had not lived up to his expectations. This momentous event, then was the impetus for the evolution of Jeet Kune Do and the birth of his new training regime. (Cadwell, 1998, pp. 11-12)

Now that we know that Jeet Kune Do originated from Bruce Lee’s discontent with the physical condition he had achieved through traditional gung fu training, I will discuss how Bruce Lee strove for a new martial art that was superior to the already existent ones, and how this martial art is ultimately rooted in Taoist philosophy.

Jeet Kune Do as a way of life
Bruce Lee had, throughout his whole life, always been intrigued by the question of how to find his true potential, and how to express himself honestly. He wrote:

“Ever since I was a child I have had this instinctive urge for expansion and growth. To me, the function and duty of a quality human being is the sincere and honest development of one’s potential”.[3]

“When I look around, I always learn something, and that is to always be yourself, express yourself, to have faith in yourself. Do not go out and look for a successful personality and duplicate him. They always copy mannerism; they never start from the root of their being: that is, how can I be me?”[4]

Bruce Lee believed that the answers to both questions – how can I find my true potential and how can I be me so that I can express myself honestly – are ultimately related to one another.

1. Be one with the Tao; be formless like water, and be pliable
Bruce Lee believed that the person who is trained within a particular martial arts style and who clings to it indefinitely, or the person who is trained only within a particular philosophical doctrine, becomes self-delusional. He thought that the person who is incapable of exceeding his style or doctrine is stiff and narrow-minded. His narrow-mindedness makes him blind, unable to observe objectively and to see the truth. He is what Bruce Lee calls ‘the traditional man’. Bruce Lee wrote:

One can function freely and totally if he is ‘beyond system.’ The man who is really serious, with the urge to find out what truth is, has no style at all. He lives only in what is. (Bruce Lee, 1975, p. 17)

But in classical styles, system becomes more important than the man! The classical man functions with the pattern of a style! (Bruce Lee, 1975, p. 18)

How can there be methods and systems to arrive at something that is living? To that which is static, fixed, dead, there can be a way, a definite path, but not to that which is living. Do not reduce reality to a static thing and then invent methods to reach it. (Bruce Lee, 1975, p. 18)

Classical forms dull your creativity, condition and freeze your sense of freedom. You no longer ‘be,’ but merely ‘do,’ without sensitivity. (Bruce Lee, 1975, p. 19)

You cannot see a street fight in its totality, observing it from the viewpoint of a boxer, a kung-fu man, a karateka, a wrestler, a judo man and so forth. You can see clearly only when style does not interfere. You then see it without ‘like’ or ‘dislike;’ you simply see and what you see is the whole and not the partial. (Bruce Lee, 1975, p. 24)

He thought that committing himself to styles limits both his potential and his self-expression. This critique is, however, not limited to martial arts. He extended it to Confucianism, a philosophy he considered too rigid and too narrowly focused on set rules and traditions. According to Bruce Lee, if a man merely reveres and follows rules and mannerisms, he ceases being a human being and instead becomes a mechanical man, a product of mere tradition. The philosophy that perfectly fits Bruce Lee’s vision of a self-expressive and ‘style-less’ martial art is the epistemologically anarchistic Taoism. How can a person, according to Bruce Lee and Taoism, find his true potential and express himself honestly? The answer is to become formless, pliable, and forever adaptable, just like the Tao is formless, pliable, and forever in flux.

The Tao Te Ching states the following metaphor of life (flexibility and softness) and death (rigidity and hardness):

A man is born gentle and weak.
At his death he is hard and stiff.
Green plants are tender and filled with sap.
At their death they are withered and dry.
Therefore the stiff and unbending is the disciple of death.
The gentle and yielding is the disciple of life.
Thus an army without flexibility never wins a battle.
A tree that is unbending is easily broken.
The hard and strong will fall.
The soft and weak will overcome. (Tao Te Ching, Chapter 76)

Both Lao Tze and Bruce Lee took water as the ultimate metaphor for that which is flexible and soft. Bruce Lee maintains that in order to fulfil your true potential and express yourself honestly you should become like water: formless. To be like water means to be an objective observer, relaxed and flowing with life – to be one with the Tao.

In the Tao Te Ching one can find the following lines:

Under heaven nothing is more soft and yielding than water.
Yet for attacking the solid and strong, nothing is better;
It has no equal.
The weak can overcome the strong;
The supple can overcome the stiff. (Tao Te Ching, Chapter 78)

There is a story about Bruce Lee’s discovery of what it means to be like water and to be united with the Tao. I am not sure about the authenticity of the story, but I will share it nonetheless as it helps to illustrate the significance of being formless in combat or in life:

Bruce, at the age of seventeen, had been training in gung fu for four years with Sifu Yip Man, yet had reached an impasse. When engaged in sparring Bruce found that his body would become tense, his mind perturbed. Such instability worked against his goal of efficiency in combat.

Sifu Yip Man sensed his trouble, and approached him. ‘Lee,’ he said, ‘relax and calm your mind. Forget about yourself and follow the opponent’s movements. Let your mind, the basic reality, do the counter-movement without any interfering deliberation. Above all, learn the art of detachment.’

Bruce Lee believed he had the answer to his problem. He must relax! Yet there was a paradox: the effort in trying to relax was inconsistent with the effortlessness in relaxing, and Bruce found himself back in the same situation.

Again Sifu Yip Man came to Bruce and said, ‘Lee, preserve yourself by following the natural bends of things and don’t interfere. Remember never to assert yourself: never be in frontal opposition to any problem, but control it by swinging with it.’

Sifu Yip Man told Bruce to go home for a week and think about his words. Bruce spent many hours in meditation and practice, with nothing coming of it. Finally, Bruce decided to go sailing in a junk (boat). Bruce would have a great epiphany. ‘On the sea, I thought of all my past training and got mad at myself and punched the water. Right then at that moment, a thought suddenly struck me. Wasn’t this water the essence of gung fu? I struck it, but it did not suffer hurt. I then tried to grasp a handful of it but it was impossible. This water, the softest substance, could fit into any container. Although it seemed weak, it could penetrate the hardest substance. That was it! I wanted to be like the nature of water.

Therefore in order to control myself I must accept myself by going with, and not against, my nature. I lay on the boat and felt that I had united with Tao; I had become one with nature.’[5]

Bruce Lee emphasized the importance of ‘a style of no style’ so much that he later came to regret the name Jeet Kune Do, since a name implies limitations or specific parameters. Bruce Lee wanted it to resemble the Tao, nameless and of almost supernatural power. Chapter one of the Tao Te Ching states:

The Tao that can be told is not the eternal Tao.
The name that can be named is not the eternal name. (Tao Te Ching, Chapter 1)

See this video in which Bruce Lee asserts that we should be like water.

2. Break rules and conventions and have no way as your way
Jeet Kune Do does not limit itself to styles. It takes from other styles what is useful, discards what is useless, and adds what is uniquely our own. The slogan of the Jeet Kune Do logo reads two things: (a) take no way as your way, and (b) take no limitation as your limitation. As styles, rules, conventions, and mannerisms limit us, we should deconstruct and transcend them. Jeet Kune Do is therefore iconoclastic. Bruce Lee wrote:

Jeet Kune Do favors formlessness so that it can assume all forms and since Jeet Kune Do has no style, it can fit in with all styles. As a result, Jeet Kune Do utilizes all ways and is bound by none and, likewise, uses any techniques or means which serve its end. (Bruce Lee, 1975, p. 12)

What are the characteristics of a martial arts with no style? According to Bruce Lee, it becomes open-minded, non-traditional, simple, direct, and effective.

Bruce Lee contended that:

Jeet Kune Do does not beat around the bush. It does not take winding detours. It follows a straight line to the objective. Simplicity is the shortest distance between two points. (Bruce Lee, 1975, p. 12)

In Enter the Dragon, there is a scene in which an ostentatious man asks Bruce Lee what his style is. Bruce Lee answers: “You can call it the art of fighting without fighting”. Being challenged by the man to show this style, Bruce Lee cunningly proposes to take a boat to a nearby island where they can fight. When the man sets foot on the boat, Bruce Lee lets it drift away and pulls it on a line. The essence of the story is that (a) one should not be pretentious, as that is not honest self-expression, and (b) a fight should be won in the most direct and easiest manner, preferably without the use of violence.[6]

You can find the videoclip here.

Breaking with traditions and conventions also means that we should get rid of our past attachments. This is what Bruce Lee meant when he metaphorically said that we should ‘empty our cup’.

3. Empty your cup and learn the art of dying
To empty your cup means to get rid of your self-delusion so that you can look at the world from a new and refreshed perspective. In order to find your true potential and your nature, you should first be self-conscious. You should know what you want, what you desire, what your strengths and weaknesses are, your pride, your fears, your accomplishments, your ambitions – and eventually get rid of all of that, as these things maintain an ego that interferes with who you truly are: a fluid personality who cannot be narrowly defined by desires, fears, achievements, etc.

In the Tao Te Ching one can read:

Empty yourself of everything.
Let the mind become still.
The ten thousand things rise and fall while the Self watches their return. (Tao Te Ching, Chapter 16)

This is frightening for most of us, because it confronts us with our own prejudices; we may find that the traditions that have previously given us a sense of security are baseless. However, Bruce Lee did not only want us to break with the archaic; he also showed us an alternative – a way of creating new values and skills to supersede the old. In this respect, Bruce Lee’s view of how to progress in life is very much in line with the iconoclastic Nietzschean übermensch: we must first break with traditions and try to rise above our culture so that a higher being can emerge from our renewed self-creation. This is how I personally interpret Bruce Lee’s saying that we should learn the “art of dying”.

In a famous scene in Longstreet, Bruce Lee taught us not to make a plan of fighting; he told us to empty our mind and to be formless like water. The “art of dying” is the “art of being non-fixed” – the art of being a different person tomorrow than we are today by letting go of our past attachments, including our ambitions. I believe it is similar to the Nietzschean ideal of self-creation: continuously subjecting our current values to our personal judgements, breaking down ‘lower values’ and creating ‘higher values’. The art of dying is hence a metaphor for continuously breaking down our past selves, values, attachments, pride, and desires (dying) and creating our new selves (being reborn) so that we can continuously improve. The “art of dying” is therefore also the “art of self-forgetfulness”, a skill that is characteristic of the ‘child’ who is a self-propelling wheel in Nietzsche’s story of the ‘three metamorphoses’ from Thus Spoke Zarathustra.

See here the scene from Longstreet.

Bruce Lee wrote:

Empty your cup so that it may be filled; become devoid to gain totality. (Bruce Lee, 1975, p. 14)

Emptying our cup precedes our discovery of new truths or new values so that hopefully we can find ourselves and become our own standard. Bruce Lee told us not to despair when we cannot find solace within our past attachments as the creation of personal values is vastly more valuable.

See here a great explanation of ‘emptying our cup’.

The logical consequence of self-creation is that one becomes his own standard.

4. Become your own standard and accept life
According to Bruce Lee, we should not worry about what others think of us. He advised us not to look for a personality to duplicate, as that would be a betrayal of our selves – one might call this practice ‘other-expression’ instead of ‘self-expression’. Being our own standard also encompasses accepting disgrace and losses as much as accepting grace and victories. How else can we accept ourselves and fulfill our own potential?

The Tao Te Ching advises us the following:

Accept disgrace willingly.
Accept misfortune as the human condition.

What do you mean by “Accept disgrace willingly”?
Accept being unimportant.
Do not be concerned with loss or gain.
This is called “accepting disgrace willingly.”

What do you mean by “Accept misfortune as the human condition”?
Misfortune comes from having a body.
Without a body, how could there be misfortune?

Surrender yourself humbly; then you can be trusted to care for all things.
Love the world as your own self; then you can truly care for all things. (Tao Te Ching, Chapter 13)

5. Wei Wu Wei
Lastly, I would like to discuss another aspect of ‘having no way as your way’. To have ‘no way as your way’ is also Bruce Lee’s expression for following the Taoist doctrine of ‘wei wu wei’ (‘action without action’ or ‘effortless action’). Bruce Lee maintained that when a person is truly in control of himself, he experiences his action without consciously forcing it to happen. Self-consciousness is initially required for the understanding of ourselves, but to truly express ourselves through our actions we must move into a state where we act unconsciously. I think it is best compared with the English expression ‘being in a state of flow’. Bruce Lee said:

I’m moving and not moving at all. I’m like the moon underneath the waves that ever go on rolling and rocking. It is not, ‘I am doing this,’ but rather, an inner realization that ‘this is happening through me,’ or ‘it is doing this for me.’ The consciousness of self is the greatest hindrance to the proper execution of all physical action. (Bruce Lee, 1975, p. 7)

This idea is expressed as follows in the Tao Te Ching:

Tao abides in non-action (‘wu wei’),
Yet nothing is left undone. (Tao Te Ching, Chapter 37)

Footnotes
[1] See http://www.ranker.com/list/time-magazine-100-most-important-people-of-the-20th-century/theomanlenz?format=SLIDESHOW&page=55

[2] I do not remember where I have found this quote.

[3] Idem

[4] Idem

[5] From http://www.becoming.8m.net/bruce02.htm

[6] The scene is actually based on an old Japanese Samurai folk tale. The tale goes as follows:

“While travelling on a ferry, a young samurai began bullying and intimidating some of the other passengers, boasting of his fighting prowess and claiming to be the best in the country with a samurai sword. When the young warrior noticed how unmoved [Tsukahara] Bokuden [a legendary Japanese swordsman] was, he was enraged and not knowing who he was dealing with challenged the old master to a duel. Bokuden told him;

‘My art is different from yours. It consists not so much in defeating others but in not being defeated.’

He continued to inform him that his school was called The Mutekatsu Ryu meaning ‘to defeat an enemy without hands’. The young samurai saw this as cowardice and demanded satisfaction so he told the boats-man to stop at an island so they could do battle there.

However when he jumped into the shallow waters to make his way to the fight venue, Bokuden got hold of the boats-man’s pole and proceeded back to deeper waters minus a now irate young samurai. The wise old master laughed and shouted to his would be adversary; ‘Here is my no sword school!’” (See, http://www.historyoffighting.com/tsukahara-bokuden.php)

Bibliography
History Of Fighting. Retrieved from http://www.historyoffighting.com/tsukahara-bokuden.php

Lao Tze. Tao Te Ching. Retrieved from http://www.schrades.com/tao/taotext.cfm?TaoID=1

Lee, B. (1975). Tao Of Jeet Kune Do. Santa Clarita: Ohara Publications.

Little, J. (1998). Bruce Lee: The Art Of Expressing The Human Body. North Clarendon: Tuttle Publishing.

French Africa

This is a meandering essay; although it’s about history, it’s a bit personalized, for effect. In other words, it’s far from straightforwardly scholarly history, but I think it’s all or mostly true. Be patient: at one point it will become about the former French African colonial empire and the socio-cultural strata it deposited in France, strata that remain there to this day.

Acting Uncool

Often, in my dotage, I sneak a look at TV5, the French-language cable channel. Often too, I fall asleep on the couch while watching its usually – but not always – insipid programs. One day, a short documentary catches my attention. It’s about the sexual harassment of French women in public. It catches my attention because it’s not obvious to me what would pass for sexual harassment in France, I mean, this side of grabbing and such. So, it turns out that the makers of the documentary had placed a man with a hidden camera near a cafe on a street with a bad reputation. The street is near one of the main railroad stations in Paris, guaranteeing a two-way flow of commuters, including women, of course.

In the course of twenty minutes, the documentary displays about thirty episodes of “sexual harassment.” I am only a man, of course, and thus limited, and a skeptic, but the worst harassment I witness takes the form of annoying mouth noises that I am not talented enough to reproduce with words. Mostly, there are gauche invitations to have a cup of coffee. The documentary ends with the expected boring, trite lamentations, blah, blah. There is zero mention of a striking fact: All the harassers without exception sport a thick North African accent.

I say a “thick” accent to signify recent arrival in France. The accent normally erodes in a few years or months. I imagine the harassers were young immigrants from small villages in Algeria and Morocco trying artlessly to deal with the knowledge that they were now in a society where sex could theoretically be had outside of marriage and outside of prostitution. Some may have been merely lonely and naively hoping to make a French friend. Political correctness clashes with political correctness: Harassing women, even if only verbally, is terrible but mentioning that the harassers all proceed from Muslim countries is terrible too. So, make the documentary and shut up about the obvious!

This is not a very interesting story, of course; I know this. Would anyone expect probably poorly educated rural young men from sex-segregated societies to learn to be cool with women as they are stepping off the boat? It will take quite a while, at best. For some, it will never happen; they will remain uncool forever. Then, they will marry an immigrant woman from their area of origin. Again, it would be absurd to expect anything else. In the same vein, would it be reasonable to imagine that all those immigrants would quickly come to appreciate the importance of the separation of religion from governance (of “church and state”) when it’s anathema in Islam?

Is it possible that a few will never appreciate at all the beauty of such separation? Is it possible that their ignorance, or their hostility, will be passively transmitted to their offspring, together with pork avoidance, for example? Will (would) that transmission have a cumulative effect on French society? France contributed more than its share of apprentice terrorists to ISIS, even would-be war brides, even young women ready for the sexual jihad. The one thing may have little to do with the other. And, it’s true that a startling number of the above are converts from Christianity or, more likely, from atheism.

French people who are not racist, or even “Islamophobic” in any mechanistic sense, carry this sort of question on their minds all the time. Some French people who have been in France for a long time but have Muslim names become themselves attached to secularism (la laïcité). They also discreetly worry about the very same issue. Those who will actually talk about it appear more worried than their fellow citizens with names like mine, or like “Pierre Dupont.” This is all impressionistic, of course. There is no survey. For one thing, it’s illegal in France to gather data about ethnicity.

How did it come to this, you might wonder. Why are these guys in France at all, the ones acting uncool in every conceivable meaning of the word?

Quitting Algeria

In 1962, the French Republic and the Algerian nationalists of the Front de Libération Nationale (“FLN”) came to an agreement about Algerian independence. That was after 130 years of French colonization and eight years of brutal war, including war against civilians, from both sides. The colonization had been in depth, with hundreds of thousands of French settlers convincing themselves that Algeria was a kind of second France, resembling the original in every way. Except, that is, for the inconvenient prior presence of numerous exotically dressed people who were neither Christians nor free-thinkers. Except for the fact that many of the French settlers were newly minted poor immigrants from Spain and Italy.

At Independence, as a little sailor, I participated in the evacuation of a large number of French civilians from the country. I mean “French French.” By that time, and belatedly, the presumably Muslim population had been granted citizenship. Too little, too late. Probably in an effort to divide and conquer, the numerous (Arabic-speaking) Algerian Jews had all been granted citizenship in the 1880s. In the days of evacuation, the number of (old) French who wanted to leave was much greater than French authorities had planned for. An aircraft carrier – emptied of its planes – had to be used. It was a pathetic show, complete with broken, uncomprehending old grandmothers who had probably never set foot in France. There were no deluxe suitcases in sight, but there were used mattresses. Some factions within the FLN were threatening the French with death if they did not go immediately; others would have liked to keep them, or some of them. The death threats prevailed.

It was too bad that the French left in such large numbers. It made the transition to independence technically more difficult than it could have been. It gave the upper hand in Algeria to those who had the best guns rather than to those who could govern, or to the people. It was a pity for all concerned. The French refugees faced an uncertain and harsh future in France, for the most part. For the Algerians, many positions were left for a while without competent personnel, including a budding oil industry in the Sahara. There was a shortage of medical doctors for many years.

Make a mental note of this fact: The French French were not the only ones fleeing. They were accompanied by tens of thousands of families with Muslim names and whose native language was other than French. They were Algerians who had chosen the wrong side in the war of independence and who feared to be massacred in the new Algeria (correctly so, it turned out). Those joined the other hundreds of thousands who had been living in France for economic reasons beginning with WWI.

I think of those events as a double tragedy or a tragedy leading to a tragedy. The Algerian independence fighters who had prevailed by shedding quantities of their blood were definitely not (not) Islamists. In most respects, intellectually and otherwise, they were a lot like me at the time, moderate, democratic leftists. In fact, I once spent a moving three hours drinking coffee with a convalescing FLN soldier my age, in a third country. He and I had most things in common, including the French language. (More needs to be said about communities of language.)

The true Algerian revolutionaries were soon replaced in power in Algiers however by the professional soldiers of an army that had never really fought because it had been formed outside Algeria while partisan-style forces battled the French army. The military is still in power, fifty-five years later. I think of their regime as a classical but fairly moderate kind of fascism. It has bloodily fought Islamism to a standstill on Algerian soil so, everyone pretends to like them.

The Poor Politics of Colonialism

I went back to Algeria – as a tourist, a spear fisherman, believe it or not – six years after independence. I was warmly received and I liked the people there. They felt like cousins, the sort of cousins you played with in childhood but have not seen in adulthood. I think now, as I thought in 1962, that the nationalists were on the right side of the argument, but I miss Algeria nevertheless. It’s like a divorce that should not have happened if someone had been more reasonable. Even such a short time after the events, events I had lived through as an adult, it was difficult to comprehend what had gone wrong. It was difficult to find any trace of hatred for the French. A young man I wanted to thank for a favor done asked me to take him to a restaurant where he could eat Brie, made expensive by a tariff. (Do I have the talent to make up this anecdote?)

I blame the astonishing incompetence of a French political class that failed in the course of 130 years to invent a form of citizenship that would have accommodated a large and fast-growing Muslim population. At the time, it was widely argued that the Muslims insisted on being ruled by a mild form of Sharia insofar as their personal affairs, such as marriage and divorce, were concerned. Such an arrangement was incompatible with the strictly secular laws of the French Republic, of course, they were told. The Muslim numerical majority thus had to remain subjects, with only individual access to citizenship, more or less like any Finn or any Bulgarian. I don’t know if this was a genuine obstacle or an excuse for a simple case of yielding to the local French population who did not wish to live under Muslim rule, even if only for local affairs. In spite of their well-publicized humanitarian and liberal values, French parties of the left played a prominent part in colonization and in the attendant repression of native populations. The late Socialist Pres. Mitterrand, for example, was vigorously policing Algeria when he was a young politician (who had had one foot in the Resistance and one foot in Vichy, earlier, another story, of course).

A brief history of imperialism

After completing the military conquest of Algeria in 1847, which had been arduous, France soon developed a vague appetite for easy territorial gains overseas. The age-old British rival’s imperialism probably inspired the French. By WWI, France had placed under its control Algeria’s neighbors Tunisia and Morocco (the latter split with Spain), and the present countries of Mali, Niger, Mauritania, Senegal, Guinea, Cote d’Ivoire, Burkina Faso, Chad, Benin, the Central African Republic, Gabon, and the Congo (the small one, next to the Belgian Congo). During World War I, France also took Togo and the southern half of Cameroon from Germany. We must add Djibouti on the Red Sea and the large island of Madagascar in the Indian Ocean.

Most – but not all – of the population in the colonies was Muslim. Possibly close to half were native speakers of Arabic dialects. However, in North Africa, large minorities knew no Arabic but were speakers of several varieties of Tamazight (“Berber”). French colonial power did not fail to utilize this linguistic dichotomy, as you might expect. Be that as it may, at the close of WWII, you could travel straight south from Algiers on the Mediterranean to Pointe Noire (across the river Congo from Kinshasa, in the larger and better-known Belgian Congo) without ever leaving French control.

The possession of a colonial empire seems to have generated monopolistic profits for a few French people, the extraction of which was accompanied by routine atrocities in some parts. The horrors of French rule in the equatorial colonies where hevea – rubber trees – grew were documented by the great writer André Gide in his travel narrative Voyage au Congo. National possession of the empire gave the average French person much psychic income, I think. At least, it facilitated fantasizing – under the gray French skies – about palm trees and warm seas. And adventurous but skill-less young Frenchmen could always find jobs easily in the southern colonies, overseeing native (black) labor just for being white, French, and knowing the common language (French) well.

All the sub-Saharan African countries achieved independence peacefully in the late fifties or early sixties. Morocco and Tunisia had preceded them in 1956. Before that, in Sétif, Algeria, a peaceful demonstration against the French government was put down in 1945 in a massacre where thousands perished. In 1947, an attempted insurrection against French colonial power in faraway Madagascar was ended with another bloodbath. One concrete objection to colonialism is that it regularly places mediocre men in charge of the destinies of many others, some of whom are not mediocre. Those who gave the order to shoot in both Sétif and Madagascar were low-level public servants.

Compare

There is an intuitive tendency to view colonialism largely or completely in terms of the culture of the colonial power. This is probably wrong. What matters is the circumstances of the colonial acquisition and the use to which it was put. The contrasting cases of Algeria and Senegal are instructive in this respect.

Algeria was conquered militarily between 1830 and 1847 in a thoroughly ravaging war. Note that 1830 was only 15 years after the Waterloo defeat. The Napoleonic era’s stupendous French military victories (excepting Waterloo) were fresh in the collective consciousness. Plus, the political entity centered in Algiers had been far from a bucolic and peaceful place before the French conquest. Its economy relied heavily on piracy and various forms of slaving. It made a likely prey. No one or almost no one was going to miss it. (It’s a mystery why Thomas Jefferson ran out of breath before he got to that Barbary state.) Algeria always mattered because it seemed a likely colony of settlement. It became one, a good one, in spite of the existence of a large native population.

The balance of France’s African colonies – with the exception of Tunisia, which was wrested from nominal Ottoman rule by a brief military invasion – was acquired without much purposefulness and with little fighting. A large swath of land near the Equator was taken without a fight by an Italian adventurer, a naturalized Navy officer, a contemporary of Stanley. Brazza was usually accompanied only by a handful of native troops. Wherever he went, he cheekily raised the French flag and abolished slavery. The capital of the Congo bears his name to this day (indicating that he left a pretty good memory).

The smallish country of Senegal in western Africa is a special case of French colonization. French political presence there dates back to the 17th century, first in the form of slave trading posts. Later, the four main cities of Senegal were re-formed as French political municipalities. This, in the absence of a significant local French population. The inhabitants of those cities obtained French citizenship in 1792, that is, earlier than many inhabitants of France. They were eligible to vote and to be elected. French power over the countryside extended slowly from those four towns, meeting little resistance.

This special case matters because the assimilationist current in Senegal was strong before independence in 1960 and it continued after independence. Today, it’s difficult to find a Senegalese who does not speak good to excellent French. The unknown percentage who can write do it in French. Interestingly, the casual racism of the few French administrators and military personnel, plus a handful of businessmen, that guided their interactions with the natives was largely suspended when they dealt with the Senegalese. (Personally, I think labels matter, “citizen,” for example. Obviously, that’s another story.)

The narrative of the colonization of Senegal is fairly important because it shows one case where a Muslim country (95%) is explicitly friendly toward the West and well informed about it (via the French language). It is also politically stable and democratic although it is poor (GDP/capita of only about $2,600 around 2015). It’s a case of successful intellectual colonization. I have even personally heard English-speaking Africans accuse Senegalese intellectuals of the same sins of arrogance and obstinacy that usually stick to Paris Left Bank intellectuals. Something went right in Senegal.

By the time of WWII, much of public opinion – including the still-large officer class – was enamored with the notion of France as a great Muslim power.

Colonial strata within France

Every new acquisition of territory in Africa generated a new wave of emigrants to France: students, low-level civil servants climbing the bureaucratic ladder, and some laborers. Public school teachers of native extraction – a large number – would go to France for training through what was intended as a revolving door. There, some would find true love, marry and stay. Every loss of a colony did the same as every acquisition because – as I have mentioned – not everyone knows how to choose the right side in a conflict. Every war also brought Africans to France, as soldiers and as laborers both. Many won French citizenship and remained too. Over the twentieth century that African-originated population grew inside France because immigrants, mostly from rural areas, usually multiply faster than the more urban host population. All immigrants and all their children and all their grandchildren attended the Republic’s schools, or, more rarely, the few Catholic schools.

There was comparatively little true racism, racism by color. (Read the subtle observations of the black American writer Richard Wright, for example.) The existence on the soil of Metropolitan France of a long-assimilated black West Indian population may have contributed to denying conventional racism much traction. Despised cultural traits and a condition of economic inferiority on the one hand, and skin color on the other, just did not coincide well enough.

The relative rarity of color sentiment and its shallowness do not mean that the French were or are free of prejudice, of course. For more than a century, the worst jobs in the country were occupied by immigrants from North Africa, mostly Algeria. Those were people from deeply rural, primitive regions, literate in no language. For most of that period, they lived in ghettos, while their wives and children remained behind in a Maghreb that was always fairly near.

Those people were subject to systematically poor treatment. It was made much worse by the Algerian war of independence, which was fought partly in France, with numerous acts of terrorism. Until recently, French French people never knew enough about Islam, and they were too religiously indifferent, to call that prejudice “Islamophobic,” I think. What is now the largest political party in France, the Front National, used to be overtly anti-Muslim. Under new leadership, it has cleaned up its act in this respect, avowedly because that stance was doing it more electoral harm than good. It’s now against all immigration. In the current (2017) presidential campaign, some people with Muslim names have said publicly that they would vote for the Front. (They remain a curiosity, I am guessing.)

I am trying to be fair and descriptive here. Two relevant stories. When I was a teenager, I worked part-time in an expensive hotel in Paris. Luxury hotels are like theaters; they have a public stage and a backstage. There was a middle-aged guy who was the fix-everything man. He was knowledgeable and he had all the tools of most trades. His name was “Ahmed” backstage but it became magically “Jean” when he was in the public area. The great and luminous French movie star Isabelle Adjani (b. 1955) kept her half-Algerian origins in the closet for half of her career. To be fair, when she disclosed that she was the daughter of an Algerian Amazigh (a Muslim), a consensus quickly formed that her secrecy had been silly. It’s also possible that she feared the nude scenes in her movies would meet with dangerous disapproval from her father’s group of origin.

In the end, there is a large sub-population in France today that traces its ancestry to various parts of Africa: north, west, and central. By American standards, some are black, some are white. Many or most are citizens. Many are not but have a legal right to live in France by virtue of some international post-colonial agreement or other. Some almost have that right. Many – and still coming – don’t have any such right at all, but their cousin lives there. Their children all attend school. They all arrive knowing some French from the schooling in their countries of origin. Given the comparatively effective French school system, and given the unsmiling, generalized French contempt for multilingualism, they all end up “French” in some sense: knowing the French language well, familiar with the fundamentals of civics, well versed in basic French history.

Muslim identity

The only trait that consistently differentiates some, probably most, people of African origin from the rest of the French population is their presumed Muslim identity. (Notably, you almost never hear of people of African descent who are Christian, or of no religion at all.) Islam matters as a cultural fact, even irrespective of genuine religious sentiment, because it largely prevents mixing and, especially, intermarriage. Previous immigrants – from Poland, Germany, Italy, Spain, and, more recently, Portugal – all tended to marry French. Even more so did their daughters. Muslims from Africa mostly don’t, except that a few men marry non-Muslim women.

I say “presumed” Muslim identity because there is no rigorous way to estimate the current Muslim population in France. That, too, is forbidden. Going by names – which is often done – is sure to give bad results. It’s likely that most French people with a Muslim name are like the bulk of other French people: religiously indifferent. Hence, name counting inflates the number of Muslims in any meaningful sense. Still, there are many mosques in France and many recriminations about their being insufficient in number. There is a large, monumental, highly visible mosque near central Paris. It shelters the headquarters of the official national organization that represents the interests of French Muslims with the government. I don’t know how representative that representative organization currently is, of course.

People with Muslim first names and last names are everywhere in France, over the latitude and longitude of the territory, but also from the bottom – sweeping the streets of Paris – to the top of the socioeconomic pyramid. (A while ago, I was half in love with a French woman named Rachida Dati. She was a minister in Pres. Sarkozy’s cabinet. It did not work out!) The first French soldier to die in the NATO expedition in Bosnia was named El Hadji. The Paris cop the terrorists killed outside of Charlie Hebdo also had a Muslim name.

There are many other markers of long-term African presence in France. Here are some, pell-mell: the best couscous in the world. The North African Arabic word for “fast” is commonly used in French, including by people with 32 ancestors born in France. One of the many vocables for the male appendage in French, also one of the most commonly used, comes straight from Arabic. (Don’t count on me to satisfy your salacious curiosity; do your own research.) Paris is the world center for the promotion and recording of rich West African music. The same goes for most fiction and poetry in French, including a significant production from Africa. The strange, often baffling intellectual movement “la négritude” (“negro-ness,” I think) developed in France. The largest or second-largest collection of black African art in the world (after that of the British Museum, maybe) is in a Paris museum, etc.

Cultures

Those who know me, in person or through Notes On Liberty or Liberty Unbound, those who spend even a little time on my blog (factsmatter.worldpress.com) or on my FB page, will have heard me lamenting loudly the sterility of contemporary French culture. I cry torrents, especially over the impoverishment and the muddiness of current public French – the language, I mean, as spoken in France specifically.* For the past fifty years, the French have had precious little to show by way of visual arts or music, and much of their contemporary literature projects the very cold of the grave. Aided by endless government subsidies, the French make many mediocre movies whose slowness and technical imperfection pass for intellectual depth, especially among a certain category of Americans. (On this topic of government help to the French movie industry, you might read Delacroix and Bornon, “Can Protectionism Ever Be Respectable? A Skeptic’s Case for the Cultural Exception, with Special Reference to French Movies.” [pdf])

French public figures talk like teenagers and they generally don’t know how to finish a sentence. If a member of the French intelligentsia speaks to you about Iraq, for example – say, a journalist at prestigious Le Monde – you know no more about Iraq when he is finished than you did when he began; you may know less. It was not always like this. (And I will not insist that the decline of French culture and language is due to my emigration to the US at age 21, but the dates coincide pretty well.) Incidentally, the museums are still good; actually, the whole country of France is like an attractive museum with a superlative cafeteria attached. But I digress. This is all to let you know of a certain critical, pessimistic state of mind of mine.

Still, there are French cultural phenomena that continue to interest me. One is a “culture” TV show with a strong political component that’s tougher on politicians than anything we do in the US. (It’s called “On n’est pas couché.”) Another is a pure political show, also hard on the politicians interviewed there. (It’s called, simply, “L’Émission politique.”)

So, another time, I am watching French TV intently because there is a retrospective show on the anarchist-leaning singer/composer Georges Brassens, who died in 1981. Brassens is the closest thing France has – except for Edith Piaf – to a secular modern saint. He wrote elegant poems addressed to ordinary people that the intellectual elite also admired. He also set to music Victor Hugo and even the medieval poet François Villon. He sang it all with a distinctive stage presence.

That night, several current stars of French popular song have been gathered in one setting, each to sing one or more of Brassens’s songs. A man named “Slimane” takes one of the three or four most popular, most familiar of Brassens’s pieces and sings it in a deliberately Arabized manner. When he is finished, the eyes of several women singers sparkle. I am strongly moved myself. Slimane has given new life to a classic. No one will ever forget his hybrid rendition of the song.

This is yet another time: I am dozing on the couch (again) after the good French political show I mentioned above. The TV is still on, of course. Something stops me from falling right asleep; something drags me back to consciousness. This has never happened to me before. What’s waking me is the clarity of the language used by a youngish man being interviewed for one of those culture/literature shows that abound on French television.** The man to whom the voice belongs enunciates precisely; his words are well chosen without being precious; his grammar is impeccable; he finishes every one of the sentences he begins; he does not stutter. He speaks like a man who has thought about what he is speaking about.

Soon, I am alert enough to realize that the fine speaker of French is on the show to flog his newly published book. The book is about conversations he has had in his mind with the writer/philosopher Albert Camus. Now, Camus died in 1960 – by the look of it, before the current writer on Camus was born. Camus has a special place in the minds and hearts of several generations of a certain category of French men that used to include me. He is one of the fathers of popular “existentialism.” (I have to use the qualifier and the quote marks to avoid the predictable correction by pedants who will push quotes in German into my email to prove that Camus is in no way a real existentialist. WTF!) Camus received the Nobel in literature in 1957, but that’s not why we care about him. I cannot describe here in detail the particular category of French men who revere him, but here is a pointer: early in his fame, Camus broke up very publicly with his good buddy, the better-known Jean-Paul Sartre, because Sartre would not denounce Stalinism.

The young writer on TV is black. I am told he is a well-known rapper in France. His name is Abd al Malik. Anecdotal evidence about nothing, some will say. Will it influence me in the future, in spite of my good social science training? You bet. How can I avoid it? How can millions of French people ignore this kind of episode, irrespective of their views on immigration? That man’s short presentation was like a ray of sunshine in a uniformly dark forest. Why should they not let it impress them?

The story does not end here. Camus himself was a Frenchman from Algeria, obviously not a Muslim. He was born to a widowed, half-deaf, and illiterate Spanish immigrant woman who cleaned houses to support herself and Albert. The French are not so much confused about the legacies of their former colonial empire as they are faced with a confounding reality.


* French is well spoken in various places: in Senegal, first; in much of urban Morocco and Tunisia; and among the Haitian elite, of all places. Romanians and Lebanese also tend to speak a very classical French as a second language.

** I say this with a little bitterness because, as someone who is still practicing at being a commercially unsuccessful American writer, I strongly regret that we don’t have a plethora of such shows in the US of A.