Is the United States a patriarchy?

When someone — whether a layperson, like Jill Soloway or the writers at Buzzfeed, or an academic, like bell hooks — describes the United States as a “patriarchy,” it is unclear to me what they mean. Maybe this sounds irreverent, but women serve at every level of government (executive, judicial and legislative); at this point the presidency itself is the only office they have never occupied (and one they lost only by a close margin). Just this week in Georgia, a woman won a House seat in an election where her opponent was a well-financed white male. If we look at influential powers beyond government, women own some of the largest hundred-billion-dollar corporations in the West. Women make up the majority of teachers, arguably one of the most influential concentrations of quasi-political power in a democratic republic. As a voting bloc they have held considerable sway in every election since suffrage.

So, to understand what someone means when they insist that the States is still a patriarchy, it might be appropriate to ask: Would the United States still be a patriarchy if Hillary Clinton had won the election? 

It’s a yes or no question. There are two possible responses. 

If the answer is “No, the United States would no longer have been a patriarchy,” then we’ve clarified something — we’ve singled out a condition under which the States would cease to be a patriarchy (namely, φ: electing a woman to the highest position of executive power). Someone answering “No” stipulates that there is a certain achievable goal under which patriarchy would cease. Now, why did Clinton lose the election? Not, as some people would like to believe, because of pervasive American sexism — rather, because of a variety of complicated reasons that can in no way be reduced to misogyny. Donald Trump did not win simply because Clinton was a woman; that was not a decisive factor. However, once the individual has clarified under what condition (φ) the state of patriarchy could be dissolved, and we recognize that this (sufficient) condition could be met in any given election cycle — that men’s occupancy of the presidency has been a purely contingent state of affairs — we know that the term “patriarchy” applies to the United States only trivially. We know that, essentially, use of the term “patriarchy” is appropriate only because a male currently sits in the Oval Office. It might even follow from this answer that, were Clinton in office, America would be a matriarchy. Now, by designating φ as nullifying the term “patriarchy,” the person has demonstrated that the term, applied now, can hardly condemn at all (as it only specifies one stage in a democratic process), and all the baggage it carries loses much of its weight. A woman could occupy the presidency at any time. If she does, then the patriarchy will be dissolved. Thus, the patriarchy could be dissolved at any time, the U.S. is not innately a patriarchy, and the term carries only taxonomic weight. (That is not to say it may not carry particular emotional weight, but it does not carry damning weight.)

However, the person is unlikely to answer this way. Few agreed that racism ended when President Obama took office, and of course it didn’t. The two cases are not the same, but let’s examine what happens when someone chooses the other response.

The other possible answer is “Yes, the United States would still be a patriarchy if Clinton had won.” If this is the case, then we know, first of all, that a woman occupying the most powerful position in the world would still not be enough to end patriarchy. Certain consequences follow from this. There would seem to be less incentive for believers in a patriarchy to work to elect female politicians or female board members, or to encourage female participation in science or engineering — women in power, in and of itself, is not enough. Presumably, it has to be the right kind of woman power; the people who answer this way don’t think Clinton is feminist or progressive enough; her engagement with politics is no better than a conservative man’s. What these people want is large-scale cultural and political change. Patriarchy, on this view, is not about women holding power; it is the “mental, social, spiritual, economic and political organization/structuring of society produced by… sex-based political relations… reinforced by different institutions… to achieve consensus on the lesser value of women” (A. Facio, “What is Patriarchy?”). Or, more simply, it is a “social system that values masculinity over femininity” (M. Watanabe, Feminist Fridays).

I rarely encounter succinct definitions of patriarchy (much less definitions in terms through which progress could be made), yet the term is still nonchalantly applied in certain political circles. Often, when parts of a definition do make sense, they’re false. Modern-day societies — at least capitalist ones — are not “organized” in any way intelligible under Facio’s definition; political relations in the Western world are rarely defined by sex or gender. One element that does seem central to a definition is the over-valuing of “masculine” qualities over “feminine” ones. Glossing over the problem of defining these (even discussing them seems to mean submitting to gender stereotypes), the value a society places on certain qualities is only the aggregate of the values of its individual members. Different people have different preferences. The idea that a society might completely equalize its values — why would we ever expect that to be possible or desirable? — seems to suggest superimposing someone’s idea of a perfect value set onto everyone else. Regardless, it’s unclear why, from some estimate of sexism in a culture, we need to introduce a political term ending in “-archy.” There must be more to it for that term to be appropriate. It’s still unclear.

The problem with answering the question “Yes” is that we still lack a condition, like φ, by which we could dissolve the patriarchy. Under what circumstances would that word no longer apply? Without such a condition, it is meaningless. Many of the proposed explanations invoke “institutions” — things never explicitly defined, which, when critically examined, turn out to be either nonexistent or too heterogeneous to dub patriarchal. If these institutions in America are supposed to be, say, the legal system or the education system, and these institutions are supposed to give America its organizational status, then America is in fact a matriarchy, given the distribution of power present in those systems. If income bracket is supposed to be an institution, then there might be a case to be made; men, on average, earn more (because they are in higher-paying positions), but this is probably not because they are men, or because our civilization favors them so, but rather because of certain contingent factors (such as career choice). This, again, would show America to be not innately or “institutionally” patriarchal, but only temporarily, accidentally so.

Sexism exists, to a much higher degree toward women than toward men. Does this mean we have to call America a patriarchy? No. The term “patriarchy” could do with some clarification, and not just from the ivory tower — with the same methods of analysis that we use to identify a system as a republic, dictatorship, or whatever — or be put to rest. The term is so abstract as to defy any analytic understanding, and its only coherent definition — a society or government run by males — either does not fit the United States or fits it only trivially. 

The Old Deluder Satan Act: Literacy, Religion, and Prosperity

So, my brother (Keith Kallmes, a graduate of the University of Minnesota in economics and history) and I have decided to start podcasting some of our ideas. The topics we hope to discuss range from ancient coinage to modern medical ethics, against a general background of economic history. I have posted our first episode, on the Old Deluder Satan Act, here. This early piece of American legislation, passed by the Massachusetts Bay Colony, displays some of the key values that we posit as causes of New England’s principal role in the Industrial Revolution. The episode:

We hope you enjoy this 20-minute discussion of the history of literacy, religion, and prosperity, and we are also happy to get feedback, episode suggestions, and further discussion in the comments below. Lastly, we have included links to some of the sources cited in the podcast.


Sources:

The Legacy of Literacy: Continuity and Contradictions in Western Culture, by Harvey Graff

Roman literacy evidence based on inscriptions discussed by Dennis Kehoe and Benjamin Kelly

Mark Koyama’s argument

European literacy rates

The Agricultural Revolution and the Industrial Revolution: England, 1500-1912, by Gregory Clark

Abstract of Becker and Woessmann’s “Was Weber Wrong?”

New England literacy rates

(Also worth a quick look: the history of English Protestantism, the Puritans, the Green Revolution, and Weber’s influence, as well as an alternative argument for the cause of increased literacy)

Paradoxical Geniuses: “Let us burn the ships”

In 1519, Hernán Cortés landed 500 men in 11 ships on the coast of the Yucatan, knowing that he was openly disobeying the governor of Cuba and that he was facing unknown numbers of potential enemies in an unknown situation. Regardless of the moral implications, what happened next was strategically extraordinary: he and his men formed a local alliance, and despite having to beat a desperate retreat on La Noche Triste, they conquered the second-largest empire in the New World. As the expeditionary force landed, Cortés made a tactically irrational decision: he scuttled all but one of his ships. In doing so, he hamstrung his own maneuverability, scouting, communication, and supply lines, but he gained one incredible advantage: the complete commitment of his men to the mission, for, as Cortés himself said, “If we are going home, we are going in our foes’ ships.” This strategic choice highlights the difference between logic and economists’ concept of “rationality”: the seemingly illogical destruction of one’s own powerful and expensive tools creates a credible commitment that can overcome a serious problem in warfare, that of desertion or cowardice. While Cortés certainly increased the risk to his own life and that of his men, the powerful psychology of being trapped by necessity brought out the very best of his men’s fighting spirit, leading to his dramatic victory.

This episode is hardly unique in the history of warfare: such measures were not only enacted by individual leaders as a method of ensuring commitment, but actually underlay the seemingly crazy (or at least overly risky) cultural practices of several ancient groups. The pervasiveness of these psychological strategies shows that, whether each case arose from a genius decision or an accident of history, they conferred a substantial advantage on their practitioners. (If you are interested in how rational choices are revealed in the history of warfare, please also feel free to read about hostage exchanges and ransoming practices in an earlier blog post!) I have collected some of the most interesting examples that I know of, but the following is certainly not an exhaustive list, and I encourage readers to mention other episodes in the comments:

  • Julian the Apostate
    • Julian the Apostate is most famous for his attempt to reverse Constantine the Great’s Christianization of the Roman Empire, but he was also an ambitious general whose audacity gained him an incredible victory over Germanic invaders against steep odds. He wanted to reverse the stagnation of Roman interests on the Eastern front, where the Sasanian empire had been challenging the Roman army since the mid-3rd century. Having gathered an overwhelming force, he marched to the Euphrates river and took ships from there toward the Sasanian capital, Ctesiphon, while the Sasanians used scorched-earth tactics to slow his advance. When Julian found the capital undefended, he worried that his men would want to loot it and return homeward, continuing the status quo of raiding and retreating. To prevent this, in a move much like that of Cortés, he set fire to his ships and forced his men to press on. In his case, this did not end in stunning victory; Julian overextended his front, was killed, and lost the campaign. His death shows the very real risks involved in this bold strategy.
  • Julius Caesar
    • Julian may have taken his cue from a vaunted Roman historical figure. Dramatized perfectly by HBO, the great Roman general and statesman Julius Caesar made a huge gamble by taking on the might of the Roman Senate. Despite being heavily outnumbered (over 2 to 1 on foot and as much as 5 to 1 in cavalry), Caesar committed to a decisive battle against his rival Pompey in Greece. While Pompey’s troops had the option of retreating, Caesar relied on the fact that his legionaries had their backs to the Mediterranean, effectively trapping them and giving them no opportunity to rout. While Caesar also tactically out-thought Pompey (he used a cunning deployment of reserves to stymie a cavalry charge and break Pompey’s left flank), the key to his victory was that Pompey’s numerically superior force ran first; Pompey met his grisly end shortly thereafter in Egypt, and Caesar went on to gain power over all of Rome.
  • Teutones
    • The impact of the Teutones on Roman cultural memory proved so enduring that “Teutonic” is used today to refer to Germanic peoples, despite the fact that the Teutones themselves were of unknown linguistic origin (they could very well have been Celtic). The Teutones and their allies, the Cimbri, smashed better-trained and better-equipped Roman armies multiple times in a row; later Roman authors said they were possessed by the Furor Teutonicus, as they seemed to possess an irrational lack of fear, never fleeing before the enemy. Like many Celtic and Germanic peoples of Northern Europe, the Teutones had a peculiar cultural practice for giving their men an incentive in battle: all of the tribe’s women, children, and supplies were drawn up on wagons behind the men before battles, where the women would take up axes to kill any man who attempted to flee. In doing so, they solved the collective action problem that plagued ancient armies, in which a few men running could quickly turn into a rout. If you ran, not only would you die, but your wife and children would as well, and this psychological edge allowed a roving tribe to keep the powerful Roman Republic in jeopardy for a decade.
  • The Persian emperors
    • The earliest recorded example of paradoxical risk as a battle custom is the Persian imperial practice of bringing the women, children, and treasure of the emperor and the noble families to the war-camp. This seems like a needless and reckless risk, as it would turn a defeat into a disaster through the loss of family and fortune. However, the case is comparable to that of the Teutones, in that it demonstrated the credible commitment of the emperor and nobles to victory, and used this raising of the stakes to incentivize bravery. While the Persians did conquer much of the known world under the nearly mythical leadership of Cyrus the Great, the strategy backfired for the last Achaemenid Persian emperor: when Darius III confronted Alexander the Great at Issus, Alexander’s crack hypaspist troops routed Darius’ flank as well as Darius himself! The imperial family and a great hoard of silver fell into Alexander’s hands, and he would go on to conquer the entirety of the Persian empire.

These examples show the diversity of cultural and personal applications of rational choice theory and psychological warfare among some of the most successful military leaders and societies. As the Roman military writer Vegetius stated, “an adversary is more hurt by desertion than slaughter.” Creating unity of purpose is by no means an easy task, and balancing the threat of death in frontline combat against the threat of death during a rout was a problem that plagued leaders from the earliest recorded histories onward. (In ancient Greek battles there were few casualties on the line of battle; the majority took place during flight from the battlefield. This made the game-theoretic choice facing each soldier an interesting balance: he might die holding the line, he would likely live if only he ran away, but he faced a much higher risk of death if a critical mass of troops ran away. Perhaps this will be fodder for a future post.) This was a salient and even vital issue for leaders to overcome, and despite the high risks that led to the fall of both Julian and Darius, forcing credible commitment to battle is a fascinating strategy with good historical support for its success. The modern implications of credible commitment problems range from wedding rings to climate accords, but very few modern practices utilize the “illogical rationality” of intentionally destroying one’s secondary options. I continue to wonder what genius, or what society, will come up with a novel application of this concept, and I look forward to seeing the results.
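Since the soldier’s dilemma above is flagged as fodder for a future post, here is a minimal sketch of that logic in Python. Every number in it (the death probabilities and the critical-mass threshold) is invented purely for illustration; the point is only the structure of the game: without a commitment device, fleeing is each soldier’s best response regardless of what the others do, so the line unravels, while a device that makes flight as deadly as the enemy (burned ships, families on the wagons) flips the dominant strategy to standing.

```python
# A toy model of the rout problem described above. All numbers are invented
# for illustration; only the structure matters: without a commitment device,
# fleeing is each soldier's best response no matter what the others do, so
# the line collapses. A device that makes flight as deadly as the enemy
# flips the dominant strategy to "stand", and everyone ends up better off.

CRITICAL_MASS = 0.3  # hypothetical share of fleeing soldiers at which the line breaks

def p_death(action, flee_fraction, commitment_device=False):
    """Probability that one soldier dies, given his action and the share of
    the rest of the army that flees."""
    line_breaks = flee_fraction >= CRITICAL_MASS
    if action == "stand":
        return 0.8 if line_breaks else 0.2
    # action == "flee"
    if commitment_device:
        return 0.95                      # flight itself is near-certain death
    return 0.7 if line_breaks else 0.05  # lone runners escape; mass routs are slaughters

def best_response(flee_fraction, commitment_device=False):
    return min(("stand", "flee"),
               key=lambda a: p_death(a, flee_fraction, commitment_device))

for f in (0.0, 0.2, 0.5):
    print(f"others fleeing: {f:.0%} | "
          f"no device -> {best_response(f)} | "
          f"with device -> {best_response(f, commitment_device=True)}")
```

The same structure covers Cortés’s scuttled ships and the Persian war-camp: the device works by changing the payoff of flight, not by adding anything to the payoff of standing.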

P.S. Thanks to Keith Kallmes for the idea for this article and for helping to write it. Truly, it is his economic background that leads to many of these historical questions about rational choice and human ingenuity in the face of adversity.

Stifling Charles Murray

Since I’ve been concerned with the status of speech on university campuses, I started looking into the actual speakers who have received the most flak. I’ve been familiar with Milo Yiannopoulos for quite a while, and there’s really nothing to comment on there. Ann Coulter is nothing unique. Charles Murray, however, does prove an interesting case; with professional credentials, connections that indicate his in-group status, and high-profile articles written to counter him, his utter condemnation is a little more peculiar.

I haven’t had time to read The Bell Curve, but I did tune into his podcast conversation with Sam Harris and some of the counterarguments online. Based on the two-hour conversation alone, Murray seems honest, well-informed and humble. His field is a controversial one, so one should expect these qualities. His field is also an academic one built on empirical and statistical methods, and for that reason alone, without an extensive treatment, the general population is going to lag behind in comprehension. Many of the viewpoints Murray espouses are not easily countered or adopted without background knowledge in psychometrics, and so most of the audience for Waking Up probably walked away with a predetermined opinion.

When I listened to the episode, knowing beforehand only that Murray had been subject to endless criticism and condemnation on campus and in research (e.g., by Stephen Jay Gould), I was surprised by how lucid he sounded. The man seems far from a white nationalist, racist, neo-Nazi or white supremacist; instead, he presents himself as interested in the same concerns as those on the left (his findings, he says, indicate the need for a basic income; he also supports gay marriage). Sam Harris is also constantly derided, often for his criticism of Islam, and after hosting Murray he received some of the same ugly labels. One critique of the podcast is that Harris was too charitable and unquestioning toward Murray’s presentation, and I can agree with that, but that may have been because Harris was already familiar with his work. (Also, around the hour-fifteen mark, Murray launches into a domain Harris is clearly less comfortable with, and Harris does explore his guest’s views a little more critically — though it could be said, not enough.)

The experience, for me, made concrete a maxim often championed by defenders of speech: inaccurately attaching our most powerful labels of evil — fascism, racism, sexism, Nazism, supremacy — means that when we face the real thing, we will be powerless. Placing genuine bigots and totalitarians in the same word pool as controversial scientists is a bad move for critique. I listened to Murray and thought: so this is the big bad wolf? These are the opinions of the man orthodoxy has eschewed? The man could be wrong about everything, as the internet says, but he does not seem to be motivated by anything other than obtaining the facts. If he is wrong, by God, let us challenge his and Herrnstein’s methods and underlying assumptions; let us not push this into a dark corner of human thought where it does not belong. Some well-established professors writing for Vox say Murray argues from insufficient evidence and that this debate ought to be over with already (having gone on since 1994); but they explicitly advocate calling out incorrect ideas rather than stifling them with violent protests.

That is indeed the way to go, because not only did the above maxim solidify for me, but by the end of the interview I was also feeling a curiosity that could easily turn morbid. I imagine it is even stronger for younger people tuning in to Harris. The thought arises: “If this is ‘forbidden knowledge’ — this reasonable, ostensibly well-grounded argument — then what else is there? What else do our global intellectuals call pseudoscience that might be true? If the community labels this racist, what other things do they label racist which are not?” We need to be able to trust each other to keep labels in check. Is it too much of a slippery slope to fear that, once people discover Murray is not a racist, they will seek out other, less savory iconoclasts who have also been dubbed white supremacists, looking for what knowledge they might have?

Yiannopoulos presents no arguments. Murray does: shutting him down without confrontation only creates an allure of forbidden knowledge, leading those explorers who find Murray digestible to trust established scientific facts less and less. I am not versed enough in cognitive studies to come to an opinion on who is right in this matter; it’s only clear that Charles Murray is arguing from what he thinks is scientifically validated information. Placing his research in the same domain as David Duke’s ramblings can only lead the curious into an unpleasant trap when they realize their intellectual elders lied to them.

The death of reason

“In so far as their only recourse to that world is through what they see and do, we may want to say that after a revolution scientists are responding to a different world.”

Thomas Kuhn, The Structure of Scientific Revolutions p. 111

I can remember arguing with my cousin right after Michael Brown was shot. “It’s still unclear what happened,” I said, “based solely on testimony” — at that point, we were still waiting on the federal autopsy report by the Department of Justice. He said that in the video, you can clearly see Brown, back to the officer and with his hands up, as he is shot up to eight times.

My cousin doesn’t like police. I’m more ambivalent, but I’ve studied criminal justice for a few years now, and I thought that if both of us watched this video (no such video actually existed), it was probably I who would have the more nuanced grasp of what happened. So I said: “Well, I will look up this video, try and get a less biased take and get back to you.” He replied, sarcastically, “You can’t watch it without bias. We all have biases.”

And that seems to be the sentiment of the times: bias encompasses the human experience; it subsumes all judgments and perceptions. Biases are so rampant, in fact, that no objective analysis is possible. These biases may be cognitive, like confirmation bias, emotional fallacies, or the phenomenon of constructive memory; or inductive, like selectivity or ignoring base rates; or, as has become common to think, ingrained in experience itself.

The thing about biases is that they are open to psychological evaluation, and there are precedents for eliminating them. For instance, one common explanation of racism is that familiarity breeds acceptance and unfamiliarity breeds intolerance (as Reason points out, people farther from fracking sites have more negative opinions of the practice than people closer to them). So, to curb racism (a sort of bias), children should interact with people outside their own ethnic group. More clinical methodology seeks to transform automatic mental functions into controlled ones, thereby introducing reflective measures into perception and reducing bias. Apart from these, there is the ancient Greek practice of reasoning, wherein patterns and evidence are used to generate logical conclusions.

If it were true that human bias is all-encompassing, and essentially insurmountable, the whole concept of critical thinking would go out the window. Not only would we lose the critical-rationalist, Popperian mode of discovery, but also Socratic dialectic, as “higher truths” essentially disappear from the human lexicon.

The belief that biases are intrinsic to human judgment ignores psychological and philosophical methods of countering prejudice, because it posits that objectivity itself is impossible. This viewpoint has been associated with “postmodern” schools of philosophy, such as those Dr. Rosi commented on (e.g., Derrida, Lacan, Foucault, Butler), although it’s worth pointing out that the analytic tradition, with its origins in Frege, Russell and Moore, represents a far greater break from the previous, modern tradition of Descartes and Kant, and often reached conclusions similar to those of the Continentals.

Although theorists of the “postmodern” clique produced diverse claims about knowledge, society, and politics, the most famous figures are almost always associated with or incorporated into the political left. To make a useful simplification of viewpoints: it would seem that progressives have generally accepted Butlerian non-essentialism about gender and Foucauldian terminology (discourse and institutions). Derrida’s poststructuralist critique highlighted dichotomies and also claimed that the philosophical search for Logos has been patriarchal, almost neoreactionary. (The month before Donald Trump’s victory, searches for the word “patriarchy” hit an all-time high on Google.) It is not just a far-right conspiracy theory that European philosophers with strange theories have influenced and sought to influence American society; it is patent in the new political language.

Some people think of the postmodernists as all social constructivists, holding the theory that many of the categories and identifications we use in the world are social constructs without a human-independent nature (i.e., not natural kinds). Disciplines like anthropology and sociology have long since dipped their toes into these waters, and the broader academic community, too, holds that things like gender and race are social constructs. But the ideas can and do go further: on this view, “facts” themselves are open to interpretation; to even assert a “fact” is just to affirm power of some sort. This worldview subsequently degrades the status of science into an extended apparatus for confirmation bias, filling out the details of a committed ideology rather than providing us with new facts about the world. There can be no objectivity outside of a worldview.

Even though philosophy took a naturalistic turn with the philosopher W. V. O. Quine, seeing itself as integrating with and working alongside science, the criticisms of science as an establishment that emerged in the 1950s and 60s (and earlier) often disturbed its unique epistemic privilege in society: ideas that theory is underdetermined by evidence, that scientific progress is nonrational, that unproven auxiliary hypotheses are required to conduct experiments and form theories, and that social norms play a large role in the process of justification all damaged the mythos of science as an exemplar of human rationality.

But once we have dismantled Science, what do we do next? Some critics have held up Nazi eugenics and phrenology as examples of the damage that science can do to society (never mind that we now consider them pseudoscience). Yet Lysenkoism and the history of astronomy and cosmology indicate that suppressing scientific discovery can be just as deleterious. The Austrian-born physicist and philosopher Paul Feyerabend instead wanted a free society — one where science had equal standing with older, more spiritual forms of knowledge. He thought the model of rational science exemplified by Sir Karl Popper was inapplicable to the real machinery of scientific discovery, and that the only methodological rule we could impose on science was: “anything goes.”

Feyerabend’s views are almost a caricature of postmodernism, although he denied the label “relativist,” opting instead for “philosophical Dadaist.” In his pluralism, there is no hierarchy of knowledge, and state power can even be introduced when necessary to break up scientific monopoly. Feyerabend, contra scientists like Richard Dawkins, thought that science was like an organized religion and therefore supported a separation of state and science just as he supported a separation of church and state. Here is a move forward for a society that has started distrusting the scientific method… but if this is what we should do post-science, it’s still unclear how to proceed. There are still open questions for anyone who loathes the hegemony of science in the Western world.

For example, how does the investigation of crimes proceed without strict adherence to the latest scientific protocols? Presumably, Feyerabend didn’t want to privatize law enforcement, but science and the state are intricately connected. In 2005, Congress authorized the National Academy of Sciences to form a committee and conduct a comprehensive study of contemporary forensic science to identify the community’s needs, consulting laboratory executives, medical examiners, coroners, anthropologists, entomologists, odontologists, and various legal experts. Forensic science — scientific procedure applied to the field of law — exists for two practical goals: exoneration and prosecution. However, the Forensic Science Committee revealed that severe issues riddle forensics (e.g., bite-mark analysis), and at the top of its list of recommendations is the establishment of an independent federal entity to devise consistent standards and enforce regular practice.

For top scientists, this sort of centralized authority seems necessary to produce reliable work, and it conflicts entirely with Feyerabend’s emphasis on methodological pluralism. Barack Obama formed the National Commission on Forensic Science in 2013 to further investigate problems in the field, and only recently Attorney General Jeff Sessions announced that the Department of Justice will not renew the commission. It’s unclear now what forensic science will do to resolve its ongoing problems, but what is clear is that the American court system would fall apart without the possibility of appealing to scientific consensus (especially in forensics), and that the only foreseeable way to solve the existing issues is through stricter methodology. (Just as with McDonald’s, standards are enforced so that the product is consistent wherever one orders.) More on this later.

So it doesn’t seem to be in the interest of things like due process to abandon science or to separate it completely from state power. (It does, however, make sense to move forensic laboratories out from under direct administrative control, as the NAS report notes in Recommendation 4; but that is specifically to reduce bias.) In a culture where science is viewed as irrational, Eurocentric, ad hoc, and polluted with ideological motivations — or where Reason itself is seen as a hegemonic, imperial device to suppress different cultures — not only do we not know what to do next; when we try to do things, we lose elements of our civilization that everyone agrees are valuable.

Although Aristotle separated pathos, ethos and logos (adding that all informed each other), later philosophers like Feyerabend thought of reason as a sort of “practice,” with history and connotations like any other human activity, falling far short of sublime. One could no more justify reason outside of its European cosmology than the sacrificial rituals of the Aztecs outside of theirs. To communicate across paradigms (Kuhn’s terminology), participants have to understand each other on a deep level, even becoming entirely new persons. When debates happen, they must happen on a principle of mutual respect and curiosity.

From this one can detect a bold argument for tolerance. Indeed, Feyerabend was heavily influenced by John Stuart Mill’s On Liberty. Maybe, in a world disillusioned with scientism and objective standards, the next cultural move is multilateral acceptance of, and tolerance for, each other’s ideas.

This has not been the result of postmodern revelations, though. The 2016 election featured the victory of one psychopath over another, from two camps utterly consumed with vitriol for each other. Between Bernie Sanders, Donald Trump and Hillary Clinton, Americans drifted toward radicalization as the only establishment candidate seemed to offer the same noxious, warmongering mess of the previous few decades of administration. Politics has only polarized further since the inauguration. The alt-right, a nearly perfect symbol of cultural intolerance, is regular news for the mainstream media. Trump acolytes physically brawl with black bloc Antifa in the same city that hosted the 1960s Free Speech Movement. The situation seems worst at universities. Analytic feminist philosophers asked for the retraction of a controversial paper, seemingly without reading it. Professors even get involved in student disputes, at Berkeley and, more recently, Evergreen. The names each side uses to attack the other (“fascist,” most prominently) — sometimes accurate, usually not — reveal a political divide between groups that increasingly refuse to argue their own side and prefer to silence their opposition.

There is no longer a tolerant left or a tolerant right in the mainstream. We are witnessing only shades of authoritarianism, eager to destroy each other. And what is obvious is that the theories and tools of the postmodernists (post-structuralism, social constructivism, deconstruction, critical theory, relativism) are as useful for reactionary praxis as they are in their usual role in left-wing circles. Says Casey Williams in the New York Times: “Trump’s playbook should be familiar to any student of critical theory and philosophy. It often feels like Trump has stolen our ideas and weaponized them.” The idea of the “post-truth” world originated in postmodern academia. It is the monster turning against Doctor Frankenstein.

Moral (cultural) relativism in particular promises only the rejection of our shared humanity. It paralyzes our judgment on female genital mutilation, flogging, stoning, human and animal sacrifice, honor killings, caste, and the underground sex trade. The afterbirth of Protagoras, cruelly resurrected once again, does not promise trials at Nuremberg, where the Allied powers appealed to something above and beyond written law to exact judgment on mass murderers. It does not promise justice for the ethnic cleansers of Srebrenica, as the United Nations is helpless to impose a tribunal from outside Bosnia-Herzegovina. Today, this moral pessimism laughs at the phrase “humanitarian crisis,” and at Western efforts to change the material conditions of fleeing Iraqis, Afghans, Libyans, Syrians, Venezuelans, North Koreans…

In the absence of universal morality, and the introduction of subjective reality, the vacuum will be filled with something much more awful. And we should be afraid of this because tolerance has not emerged as a replacement. When Harry Potter first encounters Voldemort face-to-scalp, the Dark Lord tells the boy “There is no good and evil. There is only power… and those too weak to seek it.” With the breakdown of concrete moral categories, Feyerabend’s motto — anything goes — is perverted. Voldemort has been compared to Plato’s archetype of the tyrant from the Republic: “It will commit any foul murder, and there is no food it refuses to eat. In a word, it omits no act of folly or shamelessness” … “he is purged of self-discipline and is filled with self-imposed madness.”

Voldemort is the Platonic appetite in the same way he is the psychoanalytic id. Freud’s das Es is able to admit of contradictions, to violate Aristotle’s fundamental laws of logic. It is so base, and removed from the ordinary world of reason, that it follows its own rules we would find utterly abhorrent or impossible. But it is not difficult to imagine that the murder of evidence-based reasoning will result in Death Eater politics. The ego is our rational faculty, adapted to deal with reality; with the death of reason, all that exists is vicious criticism and unfettered libertinism.

Plato predicts Voldemort with the image of the tyrant, and also with one of his primary interlocutors, Thrasymachus, when the sophist opens with “justice is nothing other than the advantage of the stronger.” The one thing Voldemort admires about The Boy Who Lived is his bravery, the trait they share in common. This trait is missing in his Death Eaters. In the fourth novel the Dark Lord is cruel to his reunited followers for abandoning him and losing faith; their cowardice reveals the fundamental logic of his power: his disciples are not true devotees, but opportunists, weak on their own merit and drawn like moths to every Avada Kedavra. Likewise, students flock to postmodern relativism to justify their own beliefs when the evidence is an obstacle.

Relativism gives us moral paralysis, letting the darkness in. Another possible move after relativism is supremacy. One look at Richard Spencer’s Twitter demonstrates the incorrigible tenet of the alt-right: the alleged incompatibility of cultures, ethnicities, and races, the claim that different groups of humans simply cannot get along. Die Endlösung, the Final Solution, is not about extermination anymore, but about segregated nationalism. Spencer’s audience is almost entirely men who loathe the current state of things, who share far-reaching conspiracy theories, and who despise globalism.

The left, too, creates conspiracies, imagining a bourgeois corporate conglomerate that enlists economists and brainwashes through history books to normalize capitalism; for this reason they despise globalism as well, saying it impoverishes other countries or destroys cultural autonomy. For the alt-right, it is the Jews, and George Soros, who control us; for the burgeoning socialist left, it is the elites, the one percent. Our minds are not free; fortunately, each camp will happily supply Übermenschen, in the form of statesmen or critical theorists, to save us from our degeneracy or our false consciousness.

Without a commitment to reasoned debate, tribalism has continued the polarization and the lack of humility. Each side also accepts science selectively, when it does not question its very justification. The privileged status that the “scientific method” maintains in polite society is denied when convenient: whether it’s climate science, evolutionary psychology, sociology, genetics, biology, anatomy or, especially, economics, one side or the other rejects it outright, without studying the material enough to immerse itself in what could be promising knowledge (as Feyerabend urged, and as the breakdown of rationality could have encouraged). And ultimately, equal protection, a tenet of individualist thought that allows for multiplicity, is entirely rejected by both: each holds that we should be treated differently as humans, often because of the color of our skin.

Relativism and carelessness about standards and communication have given us supremacy and tribalism. They have divided rather than united. Voldemort’s chaotic violence is one possible outcome of rejecting reason as an institution, and it beckons to either political alliance. Are there any examples in Harry Potter of the alternative, Feyerabendian tolerance? Not quite. However, Hermione Granger serves as the Dark Lord’s foil, and gives us a model of reason that is not as archaic as the enemies of rationality would like to suggest. In Against Method (1975), Feyerabend compares different ways rationality has been interpreted alongside practice: in an idealist way, in which reason “completely governs” research, or a naturalist way, in which reason is “completely determined by” research. Taking elements of each, he arrives at an intersection in which one can change the other, both “parts of a single dialectical process.”

“The suggestion can be illustrated by the relation between a map and the adventures of a person using it or by the relation between an artisan and his instruments. Originally maps were constructed as images of and guides to reality and so, presumably, was reason. But maps, like reason, contain idealizations (Hecataeus of Miletus, for example, imposed the general outlines of Anaximander’s cosmology on his account of the occupied world and represented continents by geometrical figures). The wanderer uses the map to find his way but he also corrects it as he proceeds, removing old idealizations and introducing new ones. Using the map no matter what will soon get him into trouble. But it is better to have maps than to proceed without them. In the same way, the example says, reason without the guidance of a practice will lead us astray while a practice is vastly improved by the addition of reason.” p. 233

Christopher Hitchens pointed out that Granger sounds like Bertrand Russell at times, as in this line about the Resurrection Stone: “You can claim that anything is real if the only basis for believing in it is that nobody has proven it doesn’t exist.” Granger is often the embodiment of anemic analytic philosophy, the institution of order, a disciple of the Ministry of Magic. However, though initially law-abiding, she quickly learns with Potter and Weasley the pleasures of rule-breaking. From the first book onward, she is constantly at odds with the de facto norms of the school, becoming more rebellious as time goes on. It is her levelheaded foundation, combined with her ability to transgress rules, that gives her an astute semi-deontological, semi-utilitarian calculus capable of saving her friends’ lives from the dark arts and helping to defeat the tyranny of Voldemort foretold by Socrates.

Granger presents a model of reason much like Feyerabend’s map analogy. Although pure reason gives us an outline of how to think about things, it is not a static or complete blueprint; it must be fleshed out with experience, risk-taking, discovery, failure, loss, trauma, pleasure, offense, criticism, and occasional transgressions past the foreseeable limits. Adding these to our heuristics means that we explore a more diverse account of thinking about things and of moving around in the world.

When reason is increasingly seen as patriarchal, Western, and imperialist, the only thing consistently offered as a replacement is something like lived experience. Some form of this idea is at least a century old, going back to Husserl, and still modest by reason’s Greco-Roman standards. Yet lived experience has always been pivotal to reason; we need only adjust our popular model. And we can see that we need not reject either one entirely. Another critique says that reason is foolhardy, limiting, antiquated; this is a perversion of its abilities, and it plays to justify the first criticism. We can see that there is room within reason for other pursuits and virtues, picked up along the way.

The emphasis on lived experience, which comes predominantly from the political left, is also antithetical to the cause of “social progress.” Those sympathetic to social theory, particularly the cultural leakage of the strong programme, are constantly torn between claiming (a) that science is irrational, and can thus be countered by lived experience (or whatnot), and (b) that science may be rational but reason itself is a tool of patriarchy and white supremacy and cannot be universal. (If you haven’t seen either of these claims very frequently, and think them a strawman, you have not been following university protests and editorials. Or radical Twitter: ex., ex., ex., ex.) Of course, as in Freud, this is an example of kettle logic: the signal of a very strong resistance. We see, though, that we need accept neither claim, and we lose nothing by rejecting them. Reason need not be stagnant nor all-pervasive, and indeed we’ve been critiquing its limits since 1781.

Outright denial of the process of science — whether the model is conjectures and refutations or something less stale — ignores the fact that there is no single uniform body of science to deny. It also dismisses the most powerful tool we have for making difficult empirical decisions. Michael Brown’s death was instantly a political affair, with implications for broader social life. The event has completely changed the face of American social issues. The first autopsy report, from St. Louis County, indicated that Brown was shot at close range in the hand during an encounter with Officer Darren Wilson. A second, independent report commissioned by the family concluded that the first shot had not in fact been at close range. After the disagreement with my cousin, the Department of Justice released its final investigation report, which determined that material in the hand wound was consistent with gun residue from an up-close encounter.

Prior to that report, the best evidence available as to what happened in Missouri on August 9, 2014, was the ground footage after the shooting and the testimonies of the officer and of Ferguson residents at the scene. There are two ways to approach the incident: reason or lived experience. The latter route leads to ambiguities. Brown’s friend Dorian Johnson and another witness reported that Officer Wilson fired his weapon first at range, under no threat, then pursued Brown out of his vehicle, until Brown turned with his hands in the air to surrender. However, before the St. Louis County grand jury, half a dozen (African-American) eyewitnesses corroborated Wilson’s account: that Brown did not have his hands raised and was moving toward Wilson. In which direction does “lived experience” tell us to go, then? A new moral maxim — the duty to believe people — will lead to no non-arbitrary conclusion. (And a duty to “always believe x,” where x is a closed group, e.g. victims, puts the cart before the horse.) It appears that, in a case like this, treating evidence as objective is the only solution.

Introducing ad hoc hypotheses (e.g., that the Justice Department and the county examiner are corrupt) shifts the approach into one that uses induction, and leaves lived experience behind (it also ignores how forensic anthropology is actually done). This is, indeed, the introduction of scientific standards. (By looking at incentives for lying, it might also employ findings from public choice theory, psychology, behavioral economics, etc.) So the personal-experience method creates unresolvable ambiguities, and presumably will eventually grant some allowance to scientific procedure.

If we don’t posit a baseline rationality — Hermione Granger pre-Hogwarts — our ability to critique things at all disappears. Utterly rejecting science and reason, denying objective analysis on the presumption of overriding biases, breaking down naïve universalism into naïve relativism — these are paths to paralysis on their own. More than that, they are hysterical symptoms, because they often create problems out of thin air. Recently, a philosopher and a mathematician submitted a hoax paper, Sokal-style, to a peer-reviewed gender studies journal in an attempt to demonstrate what they see as a problem “at the heart of academic fields like gender studies.” The idea was to write a nonsensical, postmodernish essay; if the journal accepted it, that would indicate the field is intellectually bankrupt. Andrew Smart at Psychology Today instead wrote of the prank: “In many ways this academic hoax validates many of postmodernism’s main arguments.” And although Smart makes some informed points about problems in scientific rigor as a whole, he doesn’t hint at what the validation of postmodernism would entail: should we abandon standards in journalism and scholarly integrity? Is the whole process of peer review functionally untenable? Should we start embracing papers written without any intention of making sense, to look for knowledge concealed below the surface of jargon? The paper, “The conceptual penis,” doesn’t necessarily condemn the whole of gender studies; but, against Smart’s reasoning, we do in fact know that counterintuitive or highly heterodox theory is considered perfectly average.

There were other attacks on the hoax, from Slate, Salon, and elsewhere. The criticisms, often valid for the particular essay, typically didn’t move the conversation far enough. There is much more to this discussion. A 2006 paper in the International Journal of Evidence Based Healthcare, “Deconstructing the evidence-based discourse in health sciences,” called the use of scientific evidence “fascist.” In the abstract the authors state their allegiance to the work of Deleuze and Guattari. Real Peer Review, a Twitter account that collects abstracts from scholarly articles, regularly features essays from departments of women’s and gender studies, including a recent one from a Ph.D. student wherein the author identifies as a hippopotamus. Sure, the recent hoax paper doesn’t really say anything, but it intensifies this much-needed debate. It brings out these two currents — reason and the rejection of reason — and demands a solution. And we know that lived experience is often going to be inconclusive.

Opening up lines of communication is a solution. One valid complaint is that gender studies seems too insulated, in a way in which chemistry, for instance, is not. Critiquing a whole field does ask us to genuinely immerse ourselves first, and this is a step toward tolerance: it is a step past the death of reason and the denial of science. It is a step that requires opening the bubble.

The modern infatuation with human biases, like Feyerabend’s epistemological anarchism, upsets our faith in prevailing theories and in the idea that our policies and opinions should be guided by the latest discoveries from an anonymous laboratory. Putting politics first and assuming subjectivity is all-encompassing, we move past objective measures for comparing belief systems and theories. However, isn’t the whole operation of modern science designed to work within our means? Kant’s system set limits on human rationality, and most science is aligned with an acceptance of fallibility. As Harvard cognitive scientist Steven Pinker says, “to understand the world, we must cultivate work-arounds for our cognitive limitations, including skepticism, open debate, formal precision, and empirical tests, often requiring feats of ingenuity.”

Pinker goes so far as to advocate for scientism. Others need not; but we must understand an academic field before utterly rejecting it. We must think we can understand each other, and live with each other. We must think there is a baseline framework that allows permanent cross-cultural correspondence — a shared form of life which means a Ukrainian can interpret a Russian and a Cuban an American. The rejection of Homo sapiens commensurability, championed by people like Richard Spencer and those in identity politics, is a path to segregation and supremacy. We must reject Gorgias’s nihilism about communication, and the Presocratic relativism that traps our moral judgments in inert subjectivity. From one Weltanschauung to the next, our common humanity — which endures across class, ethnicity, sex and gender — allows open debate across paradigms.

In the face of relativism, there is room for a nuanced middle ground between Pinker’s scientism and the rising anti-science, anti-reason philosophy; Paul Feyerabend has sketched out a basic blueprint. Rather than condemning reason as a Hellenic germ of Western cultural supremacy, we need only adjust the theoretical model to incorporate the “new America of knowledge” into our critical faculty. It is the raison d’être of philosophers to present complicated things in a more digestible form; to “put everything before us,” as Wittgenstein says. Hopefully, people can reach their own conclusions, and embrace the communal human spirit as they do.

However, this may not be so convincing. It might be true that we have a competition of cosmologies: one that believes in reason and objectivity, and one that thinks reason is callow and all things are subjective. These two perspectives may well be incommensurable. If I try to defend reason, I invariably must appeal to reasons, and thus argue circularly. If I try to claim “everything is subjective,” I make a universal statement and simultaneously contradict myself. Between begging the question and contradicting oneself, there is not much indication of where to go. Perhaps we just have to look at history, note the results of either course where it has been applied, and take that as a rhetorical indication of which path to choose.

Where is the optimal marriage market?

I have spent the past few weeks playing around with the question of where the optimal marriage market is, and thought NoL might want to offer their two cents.

At first my instinct was that a large city like New York or Tokyo would be best. If you have a larger market, your chances of finding your best mate should also increase. This assumes minimal transaction costs, though. I have no doubt that larger cities raise the odds that a better match is present in the dating pool.

However, it also means that the cost of sorting through the bad ones is higher. There is also the possibility that you have already met your best match, but turned them down in the false belief that someone better was out there. It’s hard enough to buy a car that we will use for only a few years, due to the lemon problem. Finding a spouse to spend decades with is infinitely harder.

In a small town, by comparison, information about potential matches is relatively easy to find. If you’re from a small town and have known most people since their school days, you have better information about the type of person they are. What makes someone a fun date is not always the same thing that makes them a good spouse. You may be constrained in who you have in your market, but you can avoid lemons more easily.

Is the optimal market then a mid-sized city like Denver or Kansas City? Large enough to give you a large pool of potential matches, but small enough that you can sort through it with minimal costs?
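To make the trade-off concrete, here is a small simulation sketch in Python, framed as the classic optimal-stopping (“secretary”) problem: suitors arrive one at a time, you cannot go back to someone you have rejected, and you follow the textbook rule of observing the first pool/e candidates and then accepting the next one who beats everyone seen so far. Every parameter is invented for illustration, and the model leaves out most of what matters; it is only meant to show how pool size plays against sorting cost.

```python
# A toy optimal-stopping model of the marriage-market question above.
# All parameters are invented; the point is only to show the trade-off:
# the chance of ending up with your best available match stays roughly
# constant (about 37%) as the pool grows, while the number of costly
# "dates" needed to get there scales with the size of the pool.
import math
import random

def simulate(pool_size, trials=5_000):
    picked_best = 0
    dates_total = 0
    cutoff = max(1, round(pool_size / math.e))   # observation-only phase
    for _ in range(trials):
        qualities = [random.random() for _ in range(pool_size)]
        benchmark = max(qualities[:cutoff])
        choice = pool_size - 1                   # settle for the last candidate if nobody beats the benchmark
        for i in range(cutoff, pool_size):
            if qualities[i] > benchmark:
                choice = i
                break
        picked_best += qualities[choice] == max(qualities)
        dates_total += choice + 1                # every candidate met is a costly date
    return picked_best / trials, dates_total / trials

for pool in (20, 200, 2000):                     # small town, mid-sized city, big city
    p_best, avg_dates = simulate(pool)
    print(f"pool={pool:5d}  P(ended with the best match)={p_best:.2f}  "
          f"average dates needed={avg_dates:.0f}")
```

On this stylized model, a bigger pool does contain a better best-match, but it does not raise your odds of actually landing them, and it multiplies the search cost; how you weigh those against each other (and against the small town’s better information) is exactly the question the post asks.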

P.S. A friend has pointed out that cities and towns with large student populations or military bases are double-edged swords for those looking to marry. On the one hand, they supply large numbers of dating-age youths. On the other hand, you would not want to marry a 19-year-old who is still figuring out what they want to major in.

Laws regulating cannabis are laws regulating the body

In honor of 4/20, here is an article I published a couple of years ago in my school paper on the science of marijuana consumption. And here are a few interesting facts about the drug:

  • sea squirts were the first organisms to develop cannabinoid receptors
  • when Tupac Shakur was cremated, members of his musical group Outlawz combined his ashes with marijuana and smoked him
  • the first ever sale on the internet was a small bag of marijuana
  • Carl Sagan, writing under a pseudonym, once wrote an article outlining what he saw as the beneficial effects of smoking it

Combing through the history of drug criminalization in America, it is clear that much of the law arose in response to national or state crises. It’s obviously too simplistic to say these drugs became illegal because they are dangerous; it’s also disingenuous to claim criminalization occurred simply to oppress or discriminate against certain groups of people (although racism certainly played a role in several criminalization campaigns, definitely including marijuana’s). The attitude toward recreational weed has changed: people think it should be legal because it’s safe and has legitimate medical benefits (instantly making its Schedule I status ridiculous). These things are true, but when progressives rest their drug legalization case on these mild criteria, they weaken the case for more dramatic legislation which could produce effects far more progressive. Marijuana should be legal because laws which regulate it are laws which regulate the human body, in ways that affect only the user. The case for legalization is a case for freedom and autonomy, and from it follows the lifting of prohibitions on ketamine, opioids, barbiturates, benzodiazepines, methamphetamine, heroin, lysergic acid diethylamide, psilocybin, ecstasy, cocaine, etc. The economic argument for drug decriminalization is clear; the legal argument (like the iron law of prohibition) is clear; the moral argument is deontological and follows from much of the spirit of new political voices that wonder about government’s role in regulating the body.

The argument, often given by marijuana consumers themselves, that it should stay illegal because, were it to become legal, small businesses would get wiped out by larger conglomerates and lower-quality weed would get produced, is partially false and wholly single-minded. Most of the people who get jail time for weed are busted only for possession, not distribution; some have been sentenced to life imprisonment for minor acts of growing — some of them retired veterans dealing with mental health issues. Focusing on how potent the weed would be if decriminalized is focusing only on how we, free individuals, will make out; it leaves these people in prison for a minor bump in hedonism. Further, cannabis’ potency has soared over the years since its initial popularity (in truth, a consequence of prohibition). It’s unlikely that it would start to de-escalate, as demand is so high. And weed is available in a multitude of forms now: the experimentation could only grow with laxer drug laws. Also, small businesses are often the ones hardest hit by regulation of products. The massive cartels will suffer most from the drying up of the black market, and then individuals who want relatively harmless drugs like marijuana can avoid entering a seedy underground (where they are exposed to far worse ills) to obtain them.

Prohibitionists also claim to be concerned about the children: with weed legal, won’t younger people start doing it? Again, the case for drug legalization is a case for autonomy, so this argument is misguided anyway, but to answer it: the best research from Colorado after recreational legalization (see Reason) suggests no statistically significant fluctuation in youth use. Marijuana is already immensely popular with young people; it can’t get much more in fashion. Also, the recreational measures being introduced propose 21 as the purchase age: if kids are obtaining the drug from their neighborhood dealer now, the new laws would only direct them to buying from older, and probably more responsible, people.

Most of the arguments point to decriminalizing weed, and not just because it’s safe and has medical benefits. The same arguments justify extending that conclusion to “hard drugs.”

For a more just criminal justice system, and a more free society … legalize it.

Happy 4/20. (And no, by the way, I don’t smoke.)