The President’s commission on opioids (1/2)

Given Zachary’s post on the drug war and opioid crisis, I thought I would share parts of an essay I wrote for a class last semester about Trump’s commission on opioids, which is the first policy step the new administration took in dealing with the issue. It’s edited for links and language and whatnot.


 

One of the more recent executive steps to combat the opioid crisis — the “abuse” of prescription and illegal opioid-based painkillers — was the creation of The President’s Commission on Combating Drug Addiction and the Opioid Crisis (hereafter, the Commission) last year by the Trump administration. The Commission, led by Chris Christie, was instituted to investigate the issue further and produce recommendations for the government and the pharmaceutical industry. It released its final report in November and seems set to approach opioid use with the same sort of strategies the federal government always applies to drugs, though maybe a little more progressive in its consideration of medicinal users. Looking at the Commission’s report, I argue that a refusal to treat unlike cases differently will lead to less than effective policy.

The President’s Commission

The DEA first asserted in 2015 that overdose deaths from opioids had reached epidemic levels. In March of last year, Donald Trump signed an executive order establishing it as the policy of the executive branch to “combat the scourge of drug abuse” and creating The President’s Commission. The Commission is designed to produce recommendations on federal funding, addiction prevention, overdose reversal, recovery, and R&D. Governor Chris Christie of New Jersey served as Chairman alongside Gov. Charlie Baker (R-MA), Gov. Roy Cooper (D-NC), former Representative Patrick J. Kennedy (D-RI), former deputy director of the Office of National Drug Control Policy and Harvard professor of psychobiology Bertha Madras, and Florida Attorney General Pam Bondi.

Included in the final report is a short history of opioid use in the United States, characterized by a first crisis in the mid- to late-19th century of “unrestrained … prescriptions,” eventually reversed by medical professionals “combined with federal regulations and law enforcement.” A public distrust of opioids developed afterward, but this was “eroded,” and the new crisis, traceable to 1999, has been made more perilous by innovations since the 19th century: large production firms for prescription drugs, a profitable pharmaceutical industry, cheaper and purer heroin, and new fentanyl imports from China.

Since the Commission’s report, several bills have been introduced in the House and Senate and are currently awaiting a vote (e.g., H.R.4408, H.R.4275, S.2125). By declaring widespread addiction and overdoses to be a national emergency in August, Trump fulfilled one of the interim steps proposed by Christie in an early draft of the report; since then, the President has met with drug company executives to discuss nonopiate alternatives for pain relief. Within the next few months we should start to see large-scale moves.

Through all of this, the Commission and the US government treat opioids within a traditional framing. The National Institute on Drug Abuse (NIDA) defines drug abuse in the following way:

[Use of substances] becomes drug abuse when people use illegal drugs or use legal drugs inappropriately. This includes the repeated use of drugs to produce pleasure, alleviate stress, and/or alter or avoid reality. It also includes using prescription drugs in ways other than prescribed or using someone else’s prescription. Addiction occurs when a person cannot control the impulse to use drugs even when there are negative consequences—the defining characteristic of addiction.

This definition by the federal government does not discriminate between various levels of damaging consumption behavior. The weakness of the definition is that, because all illicit drug consumption is categorized as abuse, there can be no standard for misuse of a black market drug for recreation. An entry-level dose of heroin qualifies as just as “abusive” as a lethal dose because of the binary character of the definition. Other federal agencies give similar definitions; in their report on recommendations for abuse-deterrent generic opioids (see below), HHS and the FDA use a definition of abuse characterized by the “intentional, nontherapeutic use of a drug product or substance, even once, to achieve a desired psychological or physiological effect.” This terminology still characterizes any and all recreational consumption of opioid analgesics as abuse, and not misuse, regardless of dosage or long-term dependency. It will be seen that this is a problem for the success of any sort of policy aimed at quelling usage, and particularly hazardous for the opioid problem.

Legal Background

First, the legal background and a more extensive history. The category “opioid” covers a lot of drug terrain, both prescription and illegal. Opioids in the most expansive sense are the alkaloids found in the opium of the West Asian poppy species Papaver somniferum, together with their semi-synthetic and fully synthetic derivatives. Opium resin contains the chemicals morphine, codeine and thebaine. Morphine is the basis for powerful pain relievers like heroin. Codeine is considered less powerful for pain relief but can be used to produce hydrocodone; it also doubles as a cough suppressant. Thebaine is similar to morphine and is used to produce oxycodone. Fully synthetic opioids like fentanyl are manufactured without the poppy at all. 90% of the world’s opium production is in Afghanistan.

All opioids are controlled under federal drug scheduling. Heroin is a Schedule I drug under the Controlled Substances Act. Several semi-synthetic opioid painkillers like hydrocodone, oxycodone and hydromorphone are Schedule II (Vicodin, OxyContin, Dilaudid). Fentanyl is also a Schedule II drug. Heroin is just a brand name for the chemical diacetylmorphine (invented by Bayer), still used as a treatment in plenty of developed nations like the United Kingdom and Canada; after heroin was completely criminalized in the United States (“no medical benefits”), synthesized opioid drugs became more popular for prescriptions.

The Pure Food and Drug Act of 1906 introduced labeling requirements for medicines containing opium and derivatives like codeine, after Chinese immigrant workers introduced the drug to the states. Through 1914, various federal laws restricted opium further until the Harrison Narcotics Tax Act on opium and coca products (the latter are not narcotics, and the colloquial language has been muddled ever since) effectively criminalized the prescription of opioid products to addicted patients. Shortly afterward, the amount of heroin in the U.S. skyrocketed. Only in recent decades have synthesized opioids occupied the public mind, however. Between 1999 and the present, deaths from overdoses of opioids and opioid-based painkillers like OxyContin, Vicodin, morphine and street heroin have risen almost fourfold.

The data on overdoses and deaths does not paint a straightforward picture, and the umbrella category “opioid” obscures the different trends between drugs. The CDC classifies data according to four varieties of opioids: natural/semi-synthetic opioid analgesics like morphine, codeine, oxy- or hydrocodone, and oxy- or hydromorphone; synthetic opioid analgesics like tramadol and fentanyl; methadone; and heroin. The last is the only completely illegal opioid. Overdose deaths involving heroin and fully synthetic opioids have increased exponentially since 2010 and 2013, respectively, while deaths from natural/semi-synthetic opioids and methadone have roughly stabilized or gone down over the last decade. Taken all together, deaths from opioid overdoses rose from about three per 100,000 people in 2000 to eleven per 100,000 in 2015. (As of 2016, natural/semi-synthetic opioid deaths have actually started to go up again, but it’s too recent to call a trend.)

[Figure: Opioid overdose deaths by type, United States]

In 2016, the CDC issued guidelines for treating chronic pain that warned physicians against prescribing high-dose opioids and suggested discussing health risks with patients. It also advised them to “start low and go slow” — a slogan later mocked by John Oliver in a segment on opioids. According to a CDC analysis, prescriptions for the most dangerous opioids dropped 41% from 2010 to 2015, and opioid prescriptions in general have fallen as well. This has resulted in patients with physical dependency suffering withdrawal, often without programs to ease the transition to nonopioid pain relievers. Opioid-dependent patients in withdrawal, or average citizens in need of pain relief, often turn to stronger street narcotics, since heroin is the cheaper and stronger alternative to oxycodone. For example, with the drop in first-time OxyContin abuse since 2010, heroin use has spiked. In Maine, a 15% decrease in opioid analgesic overdoses came with a 41% increase in heroin overdoses in 2012. The use of prescribed opioids, then, looks like it might be strongly connected to the use of street narcotics. The Commission, to its credit, notes that “the removal of one substance conceivably will be replaced with another.”

One fact lost in the discussion is that the use of nonmedical opioids has decreased while the number of overdose deaths has increased. And the phrase “opioid epidemic,” when applied to overdoses, obscures the fact that heroin, alongside fentanyl, is the major contributor — not merely prescription analgesics. We hear a lot about OxyContin and Vicodin, which were actually leveling out (at least until 2015), and less about the drugs which are already policed more, have been policed longer, and cause more physical problems.

What the Commission proposes

In its report, the Commission concludes that the goals of its recommendations are “to promote prevention of all drug use with effective education campaigns and restrictions in the supply of illicit and misused drugs.” The President’s Commission doesn’t want to interfere too strongly, despite all of Trump’s suggestions of a revamped drug war. The report notes that coming down hard on opioids will hurt patients with real needs, as has already happened, and, in a way, has happened since 1924. Many of the Commission’s recommendations take a market approach, e.g. the suggestion (Rec. 19) to reimburse nonopioid pain treatments. The current Centers for Medicare and Medicaid Services (CMS) reimbursement policy for healthcare providers treats nonopioid, postsurgical pain relief treatments the same as opioid prescriptions, issuing one inclusive payment for all “supplies” at a fixed fee. Nonopioid medications and treatments cost more, so hospitals opt for dispensing opioids instead. The Commission recommends “adequate reimbursement [for] a broader range of pain management” services, changing the bundled payment policy to accommodate behavioral health treatment, educational programs, “tapering off opioids” and other nonopioid options.

Trump himself suggested an educational approach in a public announcement, which triggered critical comparisons to the failed D.A.R.E. program and the “your brain on drugs” commercials. Educational programs are a less coercive option than direct regulation of opioids, but their effectiveness seems to be hit and miss. The Commission cites the Idaho Meth Project, begun in 2007 and still ongoing, conducted by a private nonprofit to inform young adults of the health problems associated with methamphetamine use, as a success story: “The Meth Project reports that 94% of teens that are aware of the anti-meth campaign ads say they make them less likely to try or use meth, and that Idaho has experienced a 56% decline in teen meth use since the campaign began.” The Idaho project is one success story among many failures. For instance, the Montana Meth Project from 2005, on which the Idaho project was modelled, was determined, after “accounting for a preexisting downward trend in meth use,” to have “effects on meth use [that] are statistically indistinguishable from zero,” according to an analysis indexed by the National Library of Medicine. Then again, one large-scale anti-drug educational campaign, truth, which encourages youth to avoid tobacco, might be having success. Its modern guerrilla tactics are a major improvement on the old Partnership for a Drug-Free America model.

In another market approach, this one to help recovering addicts reenter society, the Commission recommends decoupling felony convictions from eligibility for certain occupations (Rec. 50). The report cites Section 1128 of the Social Security Act, which prohibits employers that receive funding from federal health programs from hiring people previously convicted of unlawfully manufacturing, distributing or dispensing controlled substances. Any confrontation with law enforcement is already a barrier to landing a job — an area of discrimination the law permits — and statutes that specifically ban hiring make it even harder on ex-users and ex-dealers trying to get clean. Recommendations like these lessen the role the state plays in keeping ex-convicts out of work.

Much of the funding requested by the President’s Commission is authorized by the Obama administration’s major contribution to combating opioid usage, the Comprehensive Addiction and Recovery Act (CARA), signed into law in July 2016 and credited as the “first major federal addiction legislation in 40 years.” CARA helped put naloxone (an opioid overdose-reversal nasal spray) in the hands of fire departments and strengthened prescription drug monitoring programs.


 

I’ll post the second half soon, and then a bonus post on my personal favorite solution.

Are voting ages still democratic?

Rather par for the course, our current gun debate, initiated after the school shooting in Parkland, has been dominated by children — only this time, literally.

I’m using “children” only in the sense that they are not legally adults, hovering just under the age of eighteen. They are not children in the sense of being necessarily mentally underdeveloped, or necessarily inexperienced, or even very young. They are, semantically speaking, still teenagers, but they are not necessarily short-sighted or reckless or uneducated.

Our category “children” is somewhat fuzzy. And so are our judgments about their political participation. For instance, we consider ourselves, roughly, a democracy, but we do not let children vote. Is restricting children from voting still democratic?

With this new group of Marjory Stoneman Douglas High School students organizing for political change (rapidly accelerated to the upper echelons of media coverage and interviews), there has been widespread discussion about letting children vote. A lot of this is so much motivated reasoning: extending suffrage to the younger demographic would counter the current preponderance of older folks, who often vote on the other side of the aisle and for different values. Young people tend to be more progressive; change the demographics, change the regime. Yet the conversation clearly need not be partisan, since there exist Republican- and Democrat-minded children, and suffrage doesn’t discriminate. (Moreover, conservative religious groups that favor large families, like Mormons, could simply start pumping out more kids to compete.)

Plenty of arguments do propose pushing the voting age lower — some to 13, quite a few to 16 (e.g., Jason Brennan) — and avoid partisanship. My gripe about these arguments is that, in acknowledging the logic or utility of a lowered voting age, they fail to justify having a voting age at all. Which is not to say that there should not be a voting age in place (I am unconvinced in either direction); it’s just to say that we might want to start thinking of ourselves as rather undemocratic so long as we have one.

An interesting thing to observe when looking at suffrage for children is that Americans do not consider a voting age incompatible with democracy. When Americans do deny that America is a democracy, it is because the office of the President is not directly elected by majority vote (or because they think of it as an oligarchy or something); it is never because children cannot vote. The fact that we deny under-eighteen-year-olds the vote does not even cross their minds when criticizing what many see as an unequal political playing field. For instance, in eminent political scientist Robert Dahl’s work How Democratic is the American Constitution? the loci of criticism are primarily the Electoral College and the bicameral legislature. In popular parlance these are considered undemocratic, conflicting with the equal representation of voters.

Dahl notes that systems with unequal representation conflict with the principle of “one person, one vote.” Those with suffrage have one or more votes (as in nineteenth-century Prussia, where voters were classified by their property taxes) while those without have less than one. Beginning his attack on the Senate, he states: “As the American democratic credo continued episodically to exert its effects on political life, the most blatant forms of unequal representation were in due time rejected. Yet, one monumental though largely unnoticed form of unequal representation continues today and may well continue indefinitely. This results from the famous Connecticut Compromise that guarantees two senators from each state” (p. 48).

I quote Dahl because his book is zealously committed to majoritarian rule, rejecting Tocqueville’s qualms about the tyranny of the majority. Indeed, Dahl says he believes “that the legitimacy of the Constitution ought to derive solely from its utility as an instrument of democratic government” (39). And yet, in the middle of his criticism of undemocratic American federal law, the voting age and the status of children are not once brought up. These factors appear to be invisible. In our ordinary life, when the voting age is brought up, it is nearly always in juxtaposition to other laws, e.g., “We let eighteen year olds vote and smoke, but they have to be 21 to buy a beer,” or, on the topic of gun control, “If you can serve in the military at 18, and you can vote at 18, then what is the problem, exactly, with buying a gun?”

What is the explanation for this? We include the march for democracy as one progressive aspect of modernity. We see ourselves as more democratic than our origin story, having extended suffrage to non-whites, women and people without property. We see America under the Constitution as a more developed rule-of-the-people than Athens under Cleisthenes. So, we admit to degrees of political democracy — have we really reached the end of the road? Isn’t it more accurate that we are but one law away from its full realization? And of course, even if we are more of a representative republic, this is still under the banner of democracy — we still think of ourselves as abiding by “one person, one vote” (Dahl, 179-183).

In response, it is said that children are not properly citizens. This allows us to consider ourselves democratic even while withholding direct political power from a huge subset of the population and inflicting our laws on them.

This line of thought doesn’t cut it. The arguments for children as non- or only partial-citizens are riddled with imprecisely-targeted elitism. “Children can be brainwashed. Children do not understand their own best interests. Children are uninterested in politics. Children are not informed enough. Children are not rational. Children are not smart enough to make decisions that affect the entire planet.”

Although these all might apply, on average, to some age group — one much younger than seventeen, I would think — they also apply to all sorts of individuals distributed across every age. A man gets into a car wreck and severely damages his frontal lobe. In most states there is no law prohibiting him from casting a ballot, even though his judgment is dramatically impaired, perhaps to a degree analogous to an infant’s. A nomad, who eschews modern industrial living for the happy life of travel and pleasure, is allowed to vote in his country of citizenship — even though his knowledge of political life may be no greater than that of someone from the 16th century. Similarly, adults can be brainwashed, adults can be stupid, adults can be totally clueless about which means will lead to the satisfaction of their preferred ends.

I venture that Americans in general do not want uninformed, short-sighted, dumb, or brainwashable people voting, but they will not admit to it on their own. Children are a proxy group for limiting the number of such people allowed into our political process. And is banning people based on any of these criteria compatible with democracy and equality?

Preventing “stupid” people from voting is subjective and elitist; preventing “brainwashable” people from voting is arbitrary; preventing “short-sighted” people from voting is subjective and elitist, and the same goes for “uninformed” people. Then we come to the category of persons with severe mental handicaps, whether their brains are underdeveloped from the normal course of youth, or from injury, or from various congenital neurodiversities. Regrettably, at first glance it seems reasonable to prevent people with severe mental defects from voting. Because, it is thought, they really can’t know their interests, and if they are to have a voting right, it should be placed in a guardian who is familiar with their genuine interests. But this still feels like elitism, and it doesn’t even touch on the problem of how to gauge this mental defect — it seems all too easy for tests to impose a sort of subjective bias.

Indeed, there is evidence that this is what happens. Laws which assign voting rights to guardians are too crude to discriminate between mental disabilities which actually prevent voting and other miscellaneous mental problems, and they make it overly burdensome to exercise voting rights even for the competent. It is hard to see how disenfranchising populations can be done on objective grounds. If we extended suffrage from its initial minority to all other human beings above the age of eighteen, then our delay in extending it to children is only a function of elitism, and consequently undemocratic.

To clarify, I don’t think it is “ageist” to oppose extending the vote to children, in the way that it is sexist to restrict the vote for women. Just because the categories are blurry doesn’t mean there aren’t substantial differences, on average, between children and adults. But our reasoning is crude. We are not anti-children’s suffrage because of the category “children,” but because of the collective disjunction of characteristics we associate underneath this umbrella. It seems like Americans would just as easily disenfranchise even larger portions of the population, were we able to pass it off as democratic in the way that it has been normalized for children.

Further, it is not impossible to extend absolute suffrage. Children so young that they literally cannot vote — infants — could have their new voting rights bestowed upon their caretakers, since insofar as infants have interests, those interests almost certainly align with those of their daily providers. This would give parents an additional vote per child, which transfers to the child whenever he or she requests it in court. (Again, I’m not endorsing this policy, just pointing out that it is possible.) The undemocratic and elitist nature of a voting age cannot be dismissed on the grounds that universal suffrage is “impossible.”

It is still perfectly fine to say “Well, I don’t want the boobgeoisie voting about what I can do anyway, so a fortiori I oppose children’s suffrage,” because this argument asserts some autocracy anyway (so long as we assume voting as an institutional background). The point is that the reason Americans oppose enfranchising children is elitism, and that disenfranchising children is undemocratic.

In How Democratic is the American Constitution? the closest Robert Dahl gets to discussing children is adding the Twenty-Sixth Amendment to the march of democratic progress, stating that lowering the voting age to eighteen made our electorate more inclusive (p. 28). I fail to see why lowering it even further would not also qualify as making us more inclusive.

In conclusion, our system is not democratic at all,
Because a person’s a person no matter how small.

 

A preliminary argument against moral blameworthiness

For a while now I’ve advocated not an absence of morality, but an absence of moral blameworthiness. Here’s a first, brief attempt to jot down the basic idea.

There are two arguments. First, let’s consider the epistemic conditions that must hold to make a moral judgment. For any enunciator of a moral judgment, e.g. “this murder, being unprovoked, was wrong,” the speaker must have knowledge of specific details of the case — who committed the crime? was there malice aforethought? — and also moral knowledge, knowledge with normative validity. To judge something as moral or immoral, then, requires information of one kind which is open to forensic methods and of another kind which is … highly contested as to its epistemic foundations. Obvious thus far. Now, this is the situation of the bystander judging retroactively. The perpetrator of the immoral act is in an even worse predicament. Most people would agree, as a basic axiom of juvenile jurisprudence, that a person must have “knowledge of right and wrong” in order to be morally blameworthy. This allows us to discriminate between mentally competent adults, on the one hand, and children or mentally challenged individuals on the other. However, as we have said, this domain of right and wrong is contested by highly intelligent people, enough to instill skepticism in all but the most stubborn, and so most people, acting according to their ethics, understand themselves to be acting under uncertainty. And, unlike the bystander judging retroactively, the perpetrator is on a time crunch, and must make snap decisions without the luxury of an analysis of the objective conditions — who, what, how, why — or a literature review of the subjective conditions, the theories.

So, to sum up: moral blameworthiness requires knowledge of right and wrong. This knowledge is highly contested (and widely considered to be emotional rather than rational); thus people must act, but must act on highly uncertain information. Without an agreed-upon rubric, moral action is more or less guesswork. The doer is in a more uncertain situation than the judger, so his judgment is likely to be less justified, more forgivably wrong.

Okay, but now, as a friend has pointed out, morality is highly contested on the margins, not the fundamentals. There is a lot of agreement that unprovoked murder is wrong; this does not seem highly contested (though certainly there is disagreement depending on the forensic circumstances). So, can we not hold a murderer morally accountable?

Here, in response to that, is the second argument, which is much more fundamental and probably exposes me to some logical consequences I don’t want to accept. With action, there is something we could call a “regression to non-autonomy.” Traditional perspectives on morality and punishment emphasized the individual making a choice to commit an offense. This choice reflected bad moral character. More recently, the social sciences have changed the way we think about choices: people are shaped by their environments, and often they do not choose these environments. Get the picture? But it is even worse than that. We could say that the murderer chose to pull the trigger; but he did not choose to be the sort of person who in that situation would pull the trigger. That person was a product of their environment and their genes. Aren’t they also a product of “themselves”? Yes, but they did not choose to be themselves; they simply are. And even when someone “chooses to be a better person,” this choice logically presupposes the ability to choose to become a better person, which, again, is an ability bestowed upon some and not upon others and is never of our own choosing. Thus if we go back far enough we find that autonomy, or a self-creative element, is not at the root of our behavior and choices. And non-autonomous action cannot be considered morally blameworthy.

This is my argument (I do not claim originality; many people have said similar things). The murderer is doing something immoral, but finding them worthy of blame seems, to me, almost if not always out of the question. This ends up being hard to accept psychologically: I want to find history’s greatest villains morally culpable. I cannot, though. Instead of any sort of retributivist punishment — found, now, to be psychologically satisfying but morally confused — we are left only with punishment policy that seeks to deter or isolate offenders, the category of “moral blameworthiness” having been found wanting.

I invite criticisms of the arguments as sketched out here — preferably ones that don’t require us to get into what actually is moral or the status of free will.

“The Impossibility of a University”

I was just reading David Friedman’s The Machinery of Freedom. He published the first edition in 1973. Amidst the wild ride of the contemporary American university (Evergreen State College being the most heinous single episode), one passage seems especially prescient.

From chapter twelve in the third edition:

The modern corporate university, public or private, contains an implicit contradiction: it cannot take positions, but it must take positions [sides]. The second makes the demand for a responsible university appealing, intellectually as well as emotionally. The first makes not merely the acceptance of that demand but its very consideration something fundamentally subversive of the university’s proper ends.

It cannot take positions because if it does, the efforts of its members will be diverted from the search for truth to the attempt to control the decision-making process. If it takes a public position on an important matter of controversy, those on each side of the controversy will be tempted to try to keep out new faculty members who hold the other position, in order to be sure that the university makes what they consider the right decision. To hire an incompetent supporter of the other side would be undesirable; to hire a competent one, who might persuade enough faculty members to reverse the university’s stand, catastrophic. Departments in a university that reaches corporate decisions in important matters will tend to become groups of true believers, closed to all who do not share the proper orthodoxy. They so forfeit one of the principal tools in the pursuit of truth — intellectual conflict.

A university must take positions. It is a large corporation with expenditures of tens of millions of dollars and an endowment of hundreds of millions. It must act, and to act it must decide what is true. What causes high crime rates? Should it protect its members by hiring university police or by spending money on neighborhood relations or community organizing? What effect will certain fiscal policies have on the stock market, and thus the university’s endowment? Should the university argue for them? These are issues of professional controversy within the academic community.

A university may proclaim its neutrality, but neutrality, as the left quite properly argues, is also a position. If one believes that the election of Ronald Reagan or Teddy Kennedy would be a national tragedy, a tragedy in particular for the university, how can one justify letting the university, with its vast resources of wealth and influence, remain neutral?

The best possible solution within the present university structure has been not neutrality but the ignorance or impotence of the university community. As long as students and faculty do not know that the university is bribing politicians, investing in countries with dictatorial regimes, or whatever, and as long as they have no way of influencing the university’s actions, those acts will not hinder the university in its proper function of pursuing truth, however much good or damage they may do in the outside world. Once the university community realizes that the university does, or can, take actions substantially affecting the outside world and that students and faculty can influence those actions, the game is up.

There is no satisfactory solution to this dilemma within the structure of the present corporate university. In most of the better universities, the faculty has ultimate control. A university run from the outside, by a state government or a self-perpetuating board of trustees, has its own problems. A university can pretend to make no decisions or can pretend that the faculty has no control over them, for a while. Eventually someone will point out exactly what the emperor is wearing.

With an activist culture in place, the university endures more and more blows to its truth-seeking abilities. UC Berkeley spent an estimated $600,000 on security for Ben Shapiro a couple of months ago, after the chaos and protests of the past year. Staff cut seating in half, worried that protesters would dismantle chairs and throw them onto the audience below. Now, so I hear, student clubs are having difficulty hosting evening meetings on campus, as the administration makes up for the expenses by cutting down on electricity usage and janitorial services. Club stipends, of course, are down. All of this damages the educational environment.

My friends went to see Ben, and watched a woman with a “Support the First Amendment! Shalom Shapiro!” sign get dragged into a crowd and beaten up. (Not reported by major media; falsely reported as a knifing by right-wing media.) David identified the internal problem of the corporate university, which I believe we see escalating; the external problem arises when outsiders — most of the violent rioters in Berkeley since the beginning of 2017 — recognize the political power of the university and the speech that goes on there, and seek to control the process of intellectual conflict through physical force. Both problems grow with the political involvement of the students as well as the teachers.

Sunday reading: Bertrand Russell

We have a lot of fresh faces in my philosophy club this semester. On one hand, the new perspectives remind our old heads about some of the basic questions we studied when we first arrived at university, and it’s nice to introduce new students to philosophy; on the other, it makes us behave like a wolf pack, moving at the pace of the slowest member. Sometimes, I want to be an elitist and focus on more complex areas, utilizing the knowledge of our most experienced members. This would mean we lose all of our new members. Ultimately, semester after semester, keeping a high membership (and keeping students participating) always proves to be more important.

This fall I assigned Bertrand Russell’s Problems of Philosophy as an easy intro text. As he introduces ideas, he injects his own interpretations and potential solutions, and I find I usually disagree with him. However, Russell is great at moving between analytic argument and simple, digestible prose, so I started seeking out his other writing, as he is one of the canonical popularizers. Here is an essay I thought I’d share, on his meeting with Lenin, Trotsky and Maksim Gorky, when, as a self-identified Communist seeking out the new post-capitalist order, he inadvertently ended up completely disillusioned with the “Bolshevik religion.” He published The Practice and Theory of Bolshevism shortly after, in 1920, condemning the historical materialist philosophy and the system he saw just three years after the October Revolution.

Russell remained somewhere between a social democrat and a democratic socialist throughout his life (e.g., his 1932 “In Praise of Idleness” on libcom.org), but he was a vocal critic of Soviet repression. He was also a devout pacifist, which explains his early infatuation with the Russian political party that advocated an end to WWI. He died of influenza in 1970, after appealing to the United Nations to investigate the Pentagon for war crimes in South Vietnam.

From his article on meeting Lenin:

Perhaps love of liberty is incompatible with wholehearted belief in a panacea for all human ills. If so, I cannot but rejoice in the skeptical temper of the Western world. I went to Russia believing myself a communist; but contact with those who have no doubts has intensified a thousandfold my own doubts, not only of communism, but of every creed so firmly held that for its sake men are willing to inflict widespread misery.

What is a “left libertarian”?

I often hear a contrast drawn between “left-” and “right-libertarians.” In fact, I hear it so often that I have no idea what it could possibly refer to. The history of the word makes it particularly confusing.

The word “libertarian,” prior to perhaps the later 20th century, referred to definitely left-wing, anarchist philosophies. The point is well-known and harmless. The modern-day American usage of the term refers to a different branch of philosophies, with a common root in classical liberalism. Comparing the left-wing anarchists of old to the Libertarian Party, for instance, would draw an obvious line between left-wing and right-wing politics. There’s nothing wrong or appropriative about this name change. The word “liberal” has also undergone a large definitional shift in the United States that it hasn’t in most other countries. It could be argued that most political groups have shifted around under various names, at times co-opting even their ideological opponents’.

So, “libertarian” to the average joe nowadays means something different than the libertarian socialism espoused by Proudhon or Bakunin. However, it could still be applied; it might just be an anachronism: two very different referents.

Then, within the modern libertarian movement, there again appears a “left” and “right” division. For instance, I hear Cato or the Institute for Humane Studies regarded as left-libertarian, and the Mises Institute as right-libertarian. Bleeding Heart Libertarians is called left-libertarian. These “left” groups are, however, all clearly in favor of mostly free market capitalism. Then there’s the Center for a Stateless Society, which labels itself “pro-market anarchist,” and then, when people confuse it for just, I don’t know, anarcho-capitalism, Kevin Carson says he wants to use the word “market” instead of “capitalism.” Maybe capitalism is too long to spell. In any case C4SS is considered left-libertarian. Michelangelo seems to use the term to refer to, again, capitalism-inclined folks. (I also hear Students for Liberty referred to as more left-libertarian and Young Americans for Liberty as more right-libertarian.)

“Left-libertarians” are not all anarchists intent on abolishing the state, but some are; meanwhile, libertarian socialists would hardly call market anarchism an “anarchism” at all, since they oppose private property rights. If you ask them, they generally seem pretty pissed off about the whole name co-opting. Noam Chomsky is, anyway.

So, it looks like there’s the left libertarians, who may be using an American anachronism, but maintain their philosophical etymology just as classical liberals try to. And then there’s the left-libertarians, who would still fall in the bottom-right of the modern political compass, directly to the left of the right-libertarians. Does that sound right? What is the sense in which a libertarian qua libertarian would use the term “left-libertarian”?

Libertarians don’t usually seem to use the term left-libertarian to refer to anarchic socialists, but sometimes they do. Hanging out with Marxists only makes it worse. I’m looking for someone who has been around the liberty movement longer than I have to make sense of it.

Deontology and consequentialism, again

Christopher Freiman, associate professor of philosophy at William and Mary and writer at Bleeding Heart Libertarians, identifies as both a libertarian and a utilitarian. Since my first real introduction to libertarianism was Harvard theorist Robert Nozick, I originally envisioned the philosophy as a rights-based, and thereby in some sense deontological, political theory, with like-minded economists and political scientists arguing for its merits in terms of material conditions (its consequences). In university philosophy courses, “libertarianism” means self-ownership and property rights, often through Nozick’s analytic approach. Consequentialism looked more like a top-down approach to how to live, one that doesn’t necessarily suggest any political theory, or does so only ambiguously.

In living by a deontological ethics, considerations about the consequences of an action will almost inevitably come into play, especially when pressed with more extraordinary cases. (Brandon has pointed out their ostensible — I think it only that — compatibility.) The right of an individual not to be violently attacked, for example, seems trumped when the alternative is the immediate destruction of every other human being. I don’t think this is a great method for deducing practical principles, however. Although considering extreme cases might be entertaining, and enlightening as to the durability of a thesis, their pragmatic import is typically negligible.

However, in considering their philosophical compatibility, libertarianism and utilitarianism feel at odds, and not over extreme counterexamples. Let’s look at a few low-hanging fruits. Suppose the National Security Agency had advance knowledge that someone was planning to attack a nightclub in Orlando a few weeks prior to June 12, 2016. Private security would have increased, and several clubs would have shut down. Were the threat classified as serious enough, state government might debate the constitutionality of entering people’s homes and forcibly taking firearms; they might do this and succeed. Any further firearm sales would also be prohibited. This is an awful lot of state power and intrusion. However, fifty lives are plausibly saved, including Omar Mateen’s, and the lives of their family and friends are not devastated. Using a hedonistic calculus, these efforts look justified. Now, ignoring the NSA’s incompetency, suppose that our security agencies had predicted the hijackings several months before September 11, about sixteen years ago to this day. In a utilitarian model, would the choice to halt all civilian boarding for that many days, in order to prevent tragedy, be the correct one? In essence, is the partial nuisance to a substantial number of people overridden by the imperative to save 2,996 lives? Certainly — through utilitarianism — yes: the government ought to intervene and shut down air travel. In fact, the state determined it had a compelling interest immediately after the attacks and did this very thing, prioritizing national security over civil liberties.

Utilitarianism and liberal positions also challenge each other aggressively on issues like gun rights. In theory, were it possible to completely remove firearms from the states, there would be a gain in utility from the lives saved that would otherwise be lost to gun violence, accidental or otherwise. The nuisance suffered by many people (e.g. the loss of pleasure from visiting the shooting range and insecurity about home invasion) is less consequential than the saving of lives.

And what of abortion? I align with reproductive rights, like plenty but not nearly all libertarians. Is choice, here, compatible with utilitarianism? All the additional children, bringing their own default happiness (cf. David Benatar for a counterargument), might be a utility bomb large enough to warrant invasive pro-life measures under utilitarianism, regardless of first, second or third trimester.

There are surely historical arguments that protest awarding the consequentialist victory so easily to the side of authoritarianism. For example, a nation equipped with the administrative power to invade private citizens’ homes and families, or to cancel intranational travel or immigration, is probably not the nation which, in the long run, leads to the most utility or happiness. Nationhood aside, if all firearms were removed from society, this too might not be what leads to the greatest net utility: maybe home invasion becomes epidemic; maybe rural areas that capitalize on hunting run into unforeseen economic trouble; maybe the sheer quantity of the nuisance outweighs the beneficial effect of confiscation. The consequences of most of these issues are empirical and fall to historical argument. However, at least to me, utilitarianism seems incompatible with a variety of rights-based libertarian commitments, and thus deontological considerations become essential.

Here is another challenge to utilitarianism in general, and particularly to Bentham’s project of a utilitarian legal system: discovering utils, or quantifying how much utility is attached to any action, is difficult. (And, since it has in all instantiations been attached to government policy — not cooperation among peoples — it suffers from planning concerns on an even more detrimental scale.) The calculation is even more challenging when considering “short term” versus “long term” effects. In the cases of Patriot Act-style defense, gun control (were it possible), and abortion, large-scale government intervention is, prima facie, justified by utilitarianism; yet over time, it may become evident that these choices result in overall poorer consequences. How much time do we wait to decide whether it was the utilitarian decision? And in the episodes of history, did any of those scenarios play out long enough to give a definitive “long term” case study? Swapping classical for “rule utilitarianism” doesn’t remove this epistemic barrier. There isn’t a non-arbitrary rule that determines how many moments into the future one must wait before judging the utility-consequence of any action, for those actions where we cannot pinpoint the closed-system end of the causal chain. Another related concern is that utilitarian judgments take on society as a whole, with little room for specific circumstances and idiosyncrasies. This is why it strikes me as viciously top-down.

Thus the two philosophies, one ethico-political and one entirely ethical, appear to conflict on several important considerations. (Most of the principles of the Libertarian Party, to name one platform, are not utilitarian.) Lengthy historical arguments become necessary to challenge the compelling nature of particular hypotheticals. J. S. Mill, whose utilitarian work inspired much of the classical liberal tradition, was, at the end of the day, a consequentialist; however, his harm principle from On Liberty is definitively rights-based, and this principle is at the core of his libertarian import, along with his anti-paternalism as espoused by people like Freiman. Freiman, being (I think) a Millian and a libertarian, acknowledges some of the criticisms of utilitarianism, including one of its most prominent objections from those concerned with individual liberty: the separateness of persons, as offered by critics like Rawls. His response to this problem is essentially the one that falls to historical argument: “While it is possible for utilitarianism to recommend organ harvesting, hospitals that expropriate organs would not contribute to a happy and peaceful society in the real world.” This empirical conjecture takes us out of the realm of philosophy.

The inconsistencies between Mill’s political philosophy, namely in On Liberty (1859), and his ethical philosophy, namely in Utilitarianism (1863), may be why both consequentialist and deontologist libertarians can find support in his writings. Combinations like these are no doubt why Brandon finds the two compatible.

I don’t find them compatible, though utilitarianism as it was understood before Rawls may be the worse of the two (although rhetorically more effective). The modern father of deontology, Immanuel Kant, rejected the consequentialist ethos in his injunction to treat people as ends, never merely as means. Utilitarianism, as broadly understood, has every reason to produce an omnipotent authority figure that will approve a whole gamut of regulatory and coercive policies if they seem to serve the greatest interest of the majority. The “seems to” part is the only part that matters, since plans have to be made on the basis of the best available knowledge; and I would maintain that estimating utils is never certain, being an empirical question made especially blurry by historical confusion. Brandon gave the example of the Great Leap Forward as an instance of utmost disregard for human sanctity for the sake of majoritarian or nationalist or “best interest” considerations.

Yet Kant can be interpreted as no less controlling. Deontology, from the Greek deon, “duty,” is the study of what is morally permissible or obligatory, and natural rights is just one possible derivative of it. Kant is taken to be a natural rights theorist, and there is a separateness of persons explicit in his ethics that is absent from Bentham and Mill’s greatest happiness principle. But although Kant’s metaphysics of morals centers on persons, and not majorities, his Protestant upbringing shines through in his conservative views on sexuality and otherwise non-political behavior.

In a comment on Freiman’s post, Matt Zwolinski objects to his assertion that utilitarianism is opposed to the interference of government in private, consenting interactions between adults (for some of the reasons mentioned above, and I agree). Zwolinski says, on the other hand, that Kant was strongly anti-paternalist. I doubt this. Immanuel Kant wrote criticisms of casual sex — each party is self-interested, and not concerned about the innate dignity of the other — and, like other Enlightenment philosophers, argued that true freedom is something other than acting how one wishes within the bounds of others’ rights (true freedom is, in fact, acting according to how Kant wants you to act). It’s not exactly clear whether his traditionalist positions on personal morality follow from his categorical imperative, but his duty ethics in isolation prohibits many activities we would take to be personal freedoms regardless. Kant might have opposed forms of government paternalism, but his entire ethical philosophy is paternalistic by itself.

For example, what would a Kantian say about a proposal to legalize prostitution? When someone pays another for sexual favors, the former is definitely not considering the latter’s innate dignity. The person who sells their body is treated as a means to an end and not an end in themselves. Presumably, since Kant thought the state has a role in regulating other behavior, he would be against this policy change. This is confusing, though, because in most trades people use each other as means and not ends. The sexual transaction is analogous enough to any sort of trade between persons, in which we consider each other in terms of our own immediate benefit and not inherent humanity. When I purchase a Gatorade from a gas station, I am using the cashier as the means to acquire a beverage. Kantian deontologists could, the same as the utilitarians, seek to organize all the minutiae of personal life to conform to the ideals of one man from Königsberg.

Meanwhile, what does the classical utilitarian say about legalizing prostitution? We only have to weigh the utility gained and lost. First of all, it helps the customers, who no longer have to enter the seedy black market to buy a one-night stand. Next, it helps the workers, who in a regulated marketplace are treated better and are less likely to receive abuse from off-the-radar pimps. There would likely be a dip in human trafficking, which would raise the utility of would-be kidnapping victims. In addition, it creates new jobs for the poor. If you are in poverty, it automatically benefits you if a new way to create income is opened up and legally protected. Further, with legalization there would be less stigmatization, and so all involved parties benefit from the mitigated social ostracization too. The disutility is minor, and comes from the pimps (who lose much of their workforce), from abusive tricks who get away with physical violence as long as prostitution is underground, and from the slight increase in moral disgust among concerned sexual prudes around the globe. So, it seems safe to award the legalization case to Bentham and Mill, and indeed decriminalizing prostitution is the right thing to do. (Although here we see another fault. Since all humans are equal, their utility too is considered equally: the utility of “bad men” is worth as much as the utility of “good men,” there being no meta-util standard of good.)

In this situation, utilitarianism helps the libertarian cause of individual freedom and self-determination; in others, duty-based ethics are a closer bet. Natural rights perspectives, from Cicero and Aquinas to Nozick and Rothbard, on average satisfy more of the conditions which we find essential to libertarian concerns, especially when the emphasis is on the individual. That said, Kant is a deontologist and not necessarily a freedom-lover. Neither utilitarianism nor Kantian deontology points obviously to libertarianism. The moral psychology research of Jonathan Haidt gives us reason to surmise that it’s mostly “left-libertarians” who think in terms of consequences, and “right-libertarians” who stick to natural rights or deontological premises. I think, regardless of which theory is more correct, they both capture our ethical intuitions in different ways at different times — and this without even considering other popular theories, like Aristotle’s virtue ethics, Rawlsian justice as fairness, loyalty ethics or Gilligan’s ethics of care.*

I like a lot of Christopher Freiman’s writing on Rawls and basic income. However, I find utilitarianism has to submit to empirical inquiry a little too often to answer fundamental questions, and in its ambiguity it often points to policy that disrespects the atomic individual in favor of a bloated government. I don’t think utilitarianism or deontology à la Kant is the bedrock of libertarian principles; ultimately natural rights is the least incorrect position and the one that most cohesively groups together the wide range of positions within libertarianism.

* Gilligan’s ethics of care is terrible.

Social noble lies

In the Republic, Socrates introduced the “noble lie”: governmental officials may, on occasion, be justified in propagating lies to their constituents in order to advance a more just society. Dishonesty is one tool of the political class (or even pre-political — the planning class) to secure order. This maxim is at the center of the debate about transparency in government.

Then, in the 20th century, when academic Marxism was in its prime, the French Marxist philosopher Louis Althusser became concerned with the issue of social reproduction. How does a society survive from one generation to the next, with most of its mores, morals and economic model still in place? This question was of particular interest to the Orthodox Marxists: their conflict theory of history doesn’t illuminate how a society is held together, since competing groups are always struggling for power. Althusser’s answer distinguished the repressive apparatus of the state (the intelligence agencies, like the CIA and FBI, and state thugs, like the Gestapo and NKVD) from “Ideological State Apparatuses”: institutions that reinforce societal beliefs across generations, such as the family unit (authorized by a marriage contract), public education and the political party system. “ISAs” also include traditions in the private sector, since for Althusser the state exists primarily to protect these interests.

It’s rarely easy to point to a country and say, “This is the dominant ideology.” However, and here the Marxists are right, it can be useful to observe the material trends of citizens: what sorts of interests people (of any class) save up money for, teach their children to admire, etc. In the United States, there is a conditional diversity of philosophies: many different strains abound, but most fit within the small notecard of acceptable opinion. Someone like Althusser might say there is a single philosophy in effect — liberal capitalism — getting reproduced across apparatuses; a political careerist might recognize antagonists across the board vying for their own particular interests. In any case, the theory of ISAs is one answer to conflict theory’s deficiencies.

There is no reason, at any time, to think that most of the ideas spreading through a given society are true. Plenty of people could point to a lesson taught in a fifth grade classroom and find something they disagree with, and not just because elementary school lessons are often simplified to the point of distortion. Although ideas often spread naturally, they can also be thrust upon a people, like agitprop or Uncle Sam, and their influence can be more or less deleterious.

Those outlooks thrust upon a people might take the form of a noble lie. I can give qualified support for noble lies, but not for the government’s use of them. (The idea that noble lies are a right of government implies some sort of unique power for government actors.) There are currently two social lies which carry a lot of weight in the States. The first one comes from the political right, and it says: anyone can work their way to financial security. Anyone can come from the bottom and make a name for themselves. Sentiment like this is typically derided as telling people to pull themselves up by their bootstraps, and in the 21st century we find this narrative is losing force.

The second lie comes from the left, and it says: the system is rigged for xyz privileged classes, and it’s necessarily easier for members of these groups to succeed than it is for non-members. White people, specifically white men, all possess better opportunities in society than others. This theory, on the other hand, is increasingly popular, and continues to spawn vicious spinoffs.

Of the two, neither is true. That said, it’s clear which is the more “socially useful” lie. A lie which encourages more personal responsibility is clearly healthier than one which blames all of one’s ills on society and others. If you tell someone long enough that their position is out of their hands because the game is rigged, they will grow frustrated and hateful, and lose touch with their own creative power, opting to seek rent instead. One lie, therefore, promotes individualism; the other, tribalism.

Althusser wrote before the good old-fashioned class struggle of Marxism died out, before the postmodernists splintered the left into undialectical identity politics. God knows what he would think of intersectionality, the ninth circle in progressivism’s Dante’s Inferno. These ideas are being spread regardless of what anyone does, are incorporated into “apparatuses” of some sort, and are both false. If we had to choose one lie to tell, though, it’s obvious to me which is preferable: the one which doesn’t imply collectivism in politics and tribalism in culture.

Human Action, Ch. 1

Well, I finally started reading Human Action. One connection stood out to me from the first chapter.

First, there’s much more attention paid to fundamental philosophy than I expect from economic treatises. This is understandable given that Mises felt he had to set the stage — sparring, as he says, with the irrationalists, polylogists, historicists, positivists, behaviorists, and other economists within the youngest science. Every undergrad, cracking open Hobbes’ Leviathan, is startled to find lengthy remarks on human cognition in what they thought was only a work of political philosophy; this was a similar experience. 

There are noticeable parallels between von Mises and pre- and post-Tractatus Wittgenstein. Both Austrians and both Ludwigs: the economist writes that “It is impossible for the human mind to conceive logical relations at variance with the logical structure of our mind. It is impossible for the human mind to conceive a mode of action whose categories would differ from the categories which determine our own actions” (p. 25). Similarly, for the philosopher, the logical structure of thought (and language) was a central theme of Tractatus Logico-Philosophicus; one of the young Wittgenstein’s conclusions was that some (ethical, aesthetic, metaphysical) postulates go beyond the limits of language and, when crunched into such human linguistic straitjackets, create sheer nonsense (leading to maxims like “What can be said at all can be said clearly…”) (TLP, preface).

Each Ludwig, of course, limited his inquiry to the human mind, discovering, like Kant, universal conditions of rational beings (or so I gather so far from Mises).

Another methodological point in common: Ludwig von Mises, in the section “The Alter Ego,” remarks on “the ultimate given,” an idea which, I believe, is unpopular in contemporary epistemology. The empirical sciences must reach final points of inquiry, beyond which their tools fail to produce deeper insight. This is so because, to Mises, there are only “two principles available for a mental grasp of reality, namely, those of teleology and causality” (p. 25): teleology belonging to purposeful behavior, and causality to non-purposive objects of study. The former, applied ad infinitum, must stop at an unmoved mover, and the latter can only invoke an infinite regress. This point is important for deploying praxeology as a deductive science.

This doesn’t seem like a new insight, but it’s also one that Wittgenstein touches upon in a different way in Philosophical Investigations, writing in the very first remark that “Explanations come to an end somewhere.” The use of language in daily life does not imply ultimate, elucidated concepts between speakers; we never ask for these, and likewise we do not need them to communicate. Reaching for deeper shared insight can also lead to confusion; we talk of objects and ideas as ‘wholes’ and ‘composites,’ but these categories are not unambiguous. Wittgenstein situates the sense of concept-analysis only within a language game: “The question ‘Is what you see composite?’ makes good sense if it is already established what kind of complexity — that is, which particular use of the word — is in question. If it had been laid down that the visual image of a tree was to be called ‘composite’ if one saw not just a single trunk, but also branches, then the question ‘Is the visual image of this tree simple or composite?’, and the question ‘What are its simple component parts?’, would have a clear sense — a clear use.” Therefore, “To the philosophical question: ‘Is the visual image of this tree composite, and what are its component parts?’ the correct answer is: ‘That depends on what you understand by “composite”.’ (And that is of course not an answer but a rejection of the question.)” (PI, §47).

In the future I might post on Mises’ brief use of terms like “being,” “change,” and “becoming,” which he uses in a sense reminiscent of Parmenides.

Wittgenstein, Ludwig. Philosophical Investigations. Trans. G. E. M. Anscombe.

Mises, Ludwig von. Human Action. Scholar’s Edition. Ludwig von Mises Institute.

The death of reason

“In so far as their only recourse to that world is through what they see and do, we may want to say that after a revolution scientists are responding to a different world.”

Thomas Kuhn, The Structure of Scientific Revolutions, p. 111

I can remember arguing with my cousin right after Michael Brown was shot. “It’s still unclear what happened,” I said, “based solely on testimony” — at that point, we were still waiting on the federal autopsy report by the Department of Justice. He said that in the video, you can clearly see Brown, back to the officer and with his hands up, as he is shot up to eight times.

My cousin doesn’t like police. I’m more ambivalent, but I’ve studied criminal justice for a few years now, and I thought that if both of us watched this video (no such video actually existed), it was probably I who would have the more nuanced grasp of what happened. So I said: “Well, I will look up this video, try and get a less biased take and get back to you.” He replied, sarcastically, “You can’t watch it without bias. We all have biases.”

And that seems to be the sentiment of the times: bias encompasses the human experience; it subsumes all judgments and perceptions. Biases are so rampant, in fact, that no objective analysis is possible. These biases may be cognitive, like confirmation bias, emotional fallacies or the phenomenon of constructive memory; or inductive, like selectivity or ignoring base rates; or, as has become common to think, ingrained into experience itself.

The thing about biases is that they are open to psychological evaluation. There are precedents for eliminating them. For instance, one common explanation of racism is that familiarity breeds acceptance, and unfamiliarity breeds intolerance (as Reason points out, people further from fracking sites have more negative opinions of the practice than people closer to them). So to curb racism (a sort of bias), children should interact with people outside their own ethnic group. More clinical methodology seeks to transform mental functions from automatic to controlled, and thereby introduce reflective measures into perception, reducing bias. Apart from these, there is that ancient Greek practice of reasoning, wherein patterns and evidence are used to generate logical conclusions.

If it were true that human bias is all-encompassing, and essentially insurmountable, the whole concept of critical thinking would go out the window. Not only would we lose the critical-rationalist, Popperian mode of discovery, but also Socratic dialectic, as “higher truths” essentially disappear from the human lexicon.

The belief that biases are intrinsic to human judgment ignores psychological and philosophical methods to counter prejudice, because it posits that objectivity itself is impossible. This viewpoint has been associated with “postmodern” schools of philosophy, such as those Dr. Rosi commented on (e.g., those of Derrida, Lacan, Foucault, Butler), although it’s worth pointing out that the analytic tradition, with its origins in Frege, Russell and Moore, represents a far greater break from the previous, modern tradition of Descartes and Kant, and often reached conclusions similar to the Continentals’.

Although theorists of the “postmodern” clique produced diverse claims about knowledge, society, and politics, the most famous figures are almost always associated with or incorporated into the political left. To make a useful simplification of viewpoints: it would seem that progressives have generally accepted Butlerian non-essentialism about gender and Foucauldian terminology (discourse and institutions). Derrida’s poststructuralist critique targeted binary oppositions and claimed that the philosophical search for Logos has been patriarchal, almost neoreactionary. (The month before Donald Trump’s victory, searches for the word patriarchy reached an all-time high on Google.) It is not a far-right conspiracy that European philosophers with strange theories have influenced and sought to influence American society; it is patent in the new political language.

Some people think of the postmodernists as all social constructivists, holding the theory that many of the categories and identifications we use in the world are social constructs without a human-independent nature (e.g., not natural kinds). Disciplines like anthropology and sociology have long since dipped their toes in, and the broader academic community, too, accepts that things like gender and race are social constructs. But the ideas can and do go further: “facts” themselves are open to interpretation on this view; to even assert a “fact” is just to affirm power of some sort. This worldview subsequently degrades the status of science into an extended apparatus for confirmation bias, filling out the details of a committed ideology rather than providing us with new facts about the world. There can be no objectivity outside of a worldview.

Even though philosophy took a naturalistic turn with the philosopher W. V. O. Quine, seeing itself as integrating with and working alongside science, the criticisms of science as an establishment that emerged in the 1950s and 60s (and earlier) often disturbed its unique epistemic privilege in society: ideas that theory is underdetermined by evidence, that scientific progress is nonrational, that unconfirmed auxiliary hypotheses are required to conduct experiments and form theories, and that social norms play a large role in the process of justification all damaged the mythos of science as an exemplar of human rationality.

But once we have dismantled Science, what do we do next? Some critics have held up Nazi German eugenics and phrenology as examples of the damage that science can do to society (never mind that we now consider them pseudoscience). Yet Lysenkoism and the history of astronomy and cosmology indicate that suppressing scientific discovery can be deleterious too. The Austrian physicist and philosopher Paul Feyerabend instead wanted a free society — one where science had equal power with older, more spiritual forms of knowledge. He thought the model of rational science exemplified by Sir Karl Popper was inapplicable to the real machinery of scientific discovery, and that the only methodological rule we could impose on science was: “anything goes.”

Feyerabend’s views are almost a caricature of postmodernism, although he denied the label “relativist,” opting instead for “philosophical Dadaist.” In his pluralism, there is no hierarchy of knowledge, and state power can even be introduced when necessary to break up scientific monopoly. Feyerabend, contra scientists like Richard Dawkins, thought that science was like an organized religion and therefore supported a separation of state and science alongside the separation of church and state. Here is a way forward for a society that has started distrusting the scientific method… but if this is what we should do post-science, it’s still unclear how to proceed. There are still questions for anyone who loathes the hegemony of science in the Western world.

For example, how does the investigation of crimes proceed without strict adherence to the latest scientific protocol? Presumably, Feyerabend didn’t want to privatize law enforcement, but science and the state are very intricately connected. In 2005, Congress authorized the National Academy of Sciences to form a committee and conduct a comprehensive study of contemporary forensic science to identify the community’s needs, consulting laboratory executives, medical examiners, coroners, anthropologists, entomologists, odontologists, and various legal experts. Forensic science — scientific procedure applied to the field of law — exists for two practical goals: exoneration and prosecution. However, the Forensic Science Committee revealed that severe issues riddle forensics (e.g., bite mark analysis), and in its list of recommendations the top priority is establishing an independent federal entity to devise consistent standards and enforce regular practice.

For top scientists, this sort of centralized authority seems necessary to produce reliable work, and it conflicts entirely with Feyerabend’s emphasis on methodological pluralism. Barack Obama formed the National Commission on Forensic Science in 2013 to further investigate problems in the field, and only recently Attorney General Jeff Sessions said the Department of Justice will not renew the commission. It’s unclear now what forensic science will do to resolve its ongoing problems, but what is clear is that the American court system would fall apart without the possibility of appealing to scientific consensus (especially forensics), and that the only foreseeable way to solve the existing issues is through stricter methodology. (Just as with McDonald’s, standards are enforced so that the product is consistent wherever one orders.) More on this later.

So it doesn’t seem to be in the interest of things like due process to abandon science or completely separate it from state power. (It does, however, make sense to move forensic laboratories out from under direct administrative control, as the NAS report notes in Recommendation 4; that recommendation is specifically meant to reduce bias.) In a culture where science is viewed as irrational, Eurocentric, ad hoc, and polluted with ideological motivations — or where Reason itself is seen as a hegemonic, imperial device to suppress different cultures — not only do we not know what to do; when we try to do things, we lose elements of our civilization that everyone agrees are valuable.

Although Aristotle separated pathos, ethos and logos (adding that each informs the others), later philosophers like Feyerabend thought of reason as a sort of “practice,” with a history and connotations like any other human activity, falling far short of the sublime. One could no more justify reason outside of its European cosmology than the sacrificial rituals of the Aztecs outside of theirs. To communicate across paradigms, participants have to understand each other on a deep level, even becoming entirely new persons. When debates happen, they must happen on a principle of mutual respect and curiosity.

From this one can detect a bold argument for tolerance. Indeed, Feyerabend was heavily influenced by John Stuart Mill’s On Liberty. Maybe, in a world disillusioned with scientism and objective standards, the next cultural move is multilateral acceptance of, and tolerance for, each other’s ideas.

This has not been the result of postmodern revelations, though. The 2016 election featured the victory of one psychopath over another, from two camps utterly consumed with vitriol for each other. Between Bernie Sanders, Donald Trump and Hillary Clinton, Americans drifted toward radicalization as the only establishment candidate seemed to offer the same noxious, warmongering mess of the previous few decades of administration. Politics has only polarized further since the inauguration. The alt-right, a nearly perfect symbol of cultural intolerance, is regular news for mainstream media. Trump acolytes physically brawl with black bloc Antifa in the same city as the 1960s Free Speech Movement. It seems to be worst at universities. Analytic feminist philosophers asked for the retraction of a controversial paper, seemingly without reading it. Professors even get involved in student disputes, at Berkeley and more recently Evergreen. The names each side uses to attack the other (“fascist,” most prominently) — sometimes accurate, usually not — display a political divide between groups that increasingly refuse to argue their own side and prefer silencing their opposition.

There is no longer a tolerant left or a tolerant right in the mainstream. We are witnessing only shades of authoritarianism, eager to destroy each other. And what is obvious is that the theories and tools of the postmodernists (post-structuralism, social constructivism, deconstruction, critical theory, relativism) are as useful for reactionary praxis as for their usual role in left-wing circles. Says Casey Williams in the New York Times: “Trump’s playbook should be familiar to any student of critical theory and philosophy. It often feels like Trump has stolen our ideas and weaponized them.” The idea of the “post-truth” world originated in postmodern academia. It is the monster turning against Doctor Frankenstein.

Moral (cultural) relativism in particular promises only a rejection of our shared humanity. It paralyzes our judgment on female genital mutilation, flogging, stoning, human and animal sacrifice, honor killings, caste, and the underground sex trade. The afterbirth of Protagoras, cruelly resurrected once again, does not promise trials at Nuremberg, where the Allied powers appealed to something above and beyond written law to exact judgment on mass murderers. It does not promise justice for the ethnic cleansers of Srebrenica, as the United Nations is left helpless to impose a tribunal from outside Bosnia-Herzegovina. Today, this moral pessimism laughs at the phrase “humanitarian crisis,” and at Western efforts to change the material conditions of fleeing Iraqis, Afghans, Libyans, Syrians, Venezuelans, North Koreans…

In the absence of universal morality, and the introduction of subjective reality, the vacuum will be filled with something much more awful. And we should be afraid of this because tolerance has not emerged as a replacement. When Harry Potter first encounters Voldemort face-to-scalp, the Dark Lord tells the boy “There is no good and evil. There is only power… and those too weak to seek it.” With the breakdown of concrete moral categories, Feyerabend’s motto — anything goes — is perverted. Voldemort has been compared to Plato’s archetype of the tyrant from the Republic: “It will commit any foul murder, and there is no food it refuses to eat. In a word, it omits no act of folly or shamelessness” … “he is purged of self-discipline and is filled with self-imposed madness.”

Voldemort is the Platonic appetite in the same way he is the psychoanalytic id. Freud’s das Es is able to admit of contradictions, to violate Aristotle’s fundamental laws of logic. It is so base, and removed from the ordinary world of reason, that it follows its own rules we would find utterly abhorrent or impossible. But it is not difficult to imagine that the murder of evidence-based reasoning will result in Death Eater politics. The ego is our rational faculty, adapted to deal with reality; with the death of reason, all that exists is vicious criticism and unfettered libertinism.

Plato predicts Voldemort with the image of the tyrant, and also with one of his primary interlocutors, Thrasymachus, when the sophist opens with “justice is nothing other than the advantage of the stronger.” The one thing Voldemort admires about The Boy Who Lived is his bravery, the one trait they share. This trait is missing in his Death Eaters. In the fourth novel the Dark Lord is cruel to his reunited followers for abandoning him and losing faith; their cowardice reveals the fundamental logic of his power: his disciples are not true devotees, but opportunists, weak on their own merit and drawn like moths to every Avada Kedavra. Likewise, students flock to postmodern relativism to justify their own beliefs when the evidence is an obstacle.

Relativism gives us moral paralysis, allowing in darkness. Another possible move after relativism is supremacy. One look at Richard Spencer’s Twitter demonstrates the incorrigible tenet of the alt-right: the alleged incompatibility of cultures, ethnicities and races, the conviction that different groups of humans simply cannot get along. The Final Solution is not about extermination anymore but segregated nationalism. Spencer’s audience is almost entirely men who loathe the current state of things, who share far-reaching conspiracy theories, and who despise globalism.

The left, too, creates conspiracies, imagining a bourgeois corporate conglomerate that enlists economists and brainwashes through history books to normalize capitalism; for this reason it despises globalism as well, saying it impoverishes other countries or destroys cultural autonomy. For the alt-right, it is the Jews, and George Soros, who control us; for the burgeoning socialist left, it is the elites, the one percent. Our minds are not free; fortunately, each camp will happily supply Übermenschen, in the form of statesmen or critical theorists, to save us from our degeneracy or our false consciousness.

Without a commitment to reasoned debate, tribalism has continued the polarization and the loss of humility. Each side also accepts science selectively, when they do not question its very justification. The privileged status that the “scientific method” maintains in polite society is denied when convenient; whether it is climate science, evolutionary psychology, sociology, genetics, biology, anatomy or, especially, economics, one side or the other rejects it outright, without studying the material enough to immerse themselves in what could be promising knowledge (as Feyerabend urged, and as the breakdown of rationality could have encouraged). And ultimately, equal protection, one tenet of individualist thought that allows for multiplicity, is entirely rejected by both: we should be treated differently as humans, they say, often because of the color of our skin.

Relativism and carelessness about standards and communication have given us supremacy and tribalism. They have divided rather than united. Voldemort’s chaotic violence is one possible outcome of rejecting reason as an institution, and it beckons to either political alliance. Are there any examples in Harry Potter of the alternative, Feyerabendian tolerance? Not quite. However, Hermione Granger serves as the Dark Lord’s foil, and gives us a model of reason that is not as archaic as the enemies of rationality would like to suggest. In Against Method (1975), Feyerabend compares different ways rationality has been interpreted alongside practice: in an idealist way, in which reason “completely governs” research, or a naturalist way, in which reason is “completely determined by” research. Taking elements of each, he arrives at an intersection in which each can change the other, both being “parts of a single dialectical process.”

“The suggestion can be illustrated by the relation between a map and the adventures of a person using it or by the relation between an artisan and his instruments. Originally maps were constructed as images of and guides to reality and so, presumably, was reason. But maps, like reason, contain idealizations (Hecataeus of Miletus, for example, imposed the general outlines of Anaximander’s cosmology on his account of the occupied world and represented continents by geometrical figures). The wanderer uses the map to find his way but he also corrects it as he proceeds, removing old idealizations and introducing new ones. Using the map no matter what will soon get him into trouble. But it is better to have maps than to proceed without them. In the same way, the example says, reason without the guidance of a practice will lead us astray while a practice is vastly improved by the addition of reason.” (Against Method, p. 233)

Christopher Hitchens pointed out that Granger sounds like Bertrand Russell at times, as in this quote about the Resurrection Stone: “You can claim that anything is real if the only basis for believing in it is that nobody has proven it doesn’t exist.” Granger is often the embodiment of anemic analytic philosophy, the institution of order, a disciple for the Ministry of Magic. However, though initially law-abiding, she quickly learns with Potter and Weasley the pleasures of rule-breaking. From the first book onward, she is constantly at odds with the de facto norms of the school, becoming more rebellious as time goes on. It is her levelheaded foundation, combined with her ability to transgress rules, that gives her an astute semi-deontological, semi-utilitarian calculus capable of saving the lives of her friends from the dark arts, and helping to defeat the tyranny of Voldemort foretold by Socrates.

Granger presents a model of reason like Feyerabend’s map analogy. Although pure reason gives us an outline of how to think about things, it is not a static or complete blueprint, and it must be fleshed out with experience, risk-taking, discovery, failure, loss, trauma, pleasure, offense, criticism, and occasional transgressions past the foreseeable limits. Working these into our heuristics gives us a more diverse account of thinking about things and moving around in the world.

When reason is increasingly seen as patriarchal, Western, and imperialist, the only thing consistently offered as a replacement is something like lived experience. Some form of this idea is at least a century old, going back to Husserl, and still modest by reason’s Greco-Roman standards. Yet lived experience has always been pivotal to reason; we need only adjust our popular model. And we can see that we need not reject one or the other entirely. Another critique says reason is foolhardy, limiting, antiquated; this is a perversion of its abilities, and plays to justify the first criticism. We can see that there is room within reason for other pursuits and virtues, picked up along the way.

The emphasis on lived experience, which predominantly comes from the political left, is also antithetical to the cause of “social progress.” Those sympathetic to social theory, particularly the cultural leakage of the strong programme, are constantly torn between claiming (a) that science is irrational, and can thus be countered by lived experience (or whatnot), or (b) that science may be rational but reason itself is a tool of patriarchy and white supremacy and cannot be universal. (If you haven’t seen either of these claims very frequently, and think them a strawman, you have not been following university protests and editorials. Or radical Twitter: ex., ex., ex., ex.) Of course, as in Freud, this is an example of kettle logic: the signal of a very strong resistance. We see, though, that we need not accept or deny these claims, and that we lose nothing in declining both. Reason need not be stagnant nor all-pervasive, and indeed we’ve been critiquing its limits since 1781.

Outright denying the process of science — whether the model is conjectures and refutations or something less stale — ignores that there is no single uniform body of science. Such denial also dismisses the most powerful tool we have for making difficult empirical decisions. Michael Brown’s death was instantly a political affair, with implications for broader social life. The event has completely changed the face of American social issues. The first autopsy report, from St. Louis County, indicated that Brown was shot at close range in the hand, during an encounter with Officer Darren Wilson. A second, independent report commissioned by the family concluded the first shot had not in fact been at close range. After the disagreement with my cousin, the Department of Justice released its final investigation report, which determined that material in the hand wound was consistent with gun residue from an up-close encounter.

Prior to the report, the best evidence available as to what happened in Missouri on August 9, 2014, was the ground footage after the shooting and the testimonies of the officer and the Ferguson residents at the scene. There are two ways to approach the incident: reason or lived experience. The latter route leads to ambiguities. Brown’s friend Dorian Johnson and another witness reported that Officer Wilson fired his weapon first at range, under no threat, then pursued Brown out of his vehicle, until Brown turned with his hands in the air to surrender. However, before the St. Louis grand jury, half a dozen (African-American) eyewitnesses corroborated Wilson’s account: that Brown did not have his hands raised and was moving toward Wilson. In which direction does “lived experience” tell us to go, then? A new moral maxim — the duty to believe people — will lead to no non-arbitrary conclusion. (And a duty to “always believe x,” where x is a closed group, e.g. victims, puts the cart before the horse.) It appears that, in a case like this, treating evidence as objective is the only solution.

Introducing ad hoc hypotheses, e.g., the Justice Department and the county examiner are corrupt, shifts the approach into one that uses induction, and leaves behind lived experience (and also ignores how forensic anthropology is actually done). This is the introduction of, indeed, scientific standards. (By looking at incentives for lying it might also employ findings from public choice theory, psychology, behavioral economics, etc.) So the personal experience method creates unresolvable ambiguities, and presumably will eventually grant some allowance to scientific procedure.

If we don’t posit a baseline rationality — Hermione Granger pre-Hogwarts — our ability to critique things at all disappears. Utterly rejecting science and reason, denying objective analysis on the presumption of overriding biases, breaking down naïve universalism into naïve relativism — these are paths to paralysis on their own. More than that, they are hysterical symptoms, because they often create problems out of thin air. Recently, a philosopher and a mathematician submitted a hoax paper, Sokal-style, to a peer-reviewed gender studies journal in an attempt to demonstrate what they see as a problem “at the heart of academic fields like gender studies.” The idea was to write a nonsensical, postmodernish essay, and if the journal accepted it, that would indicate the field is intellectually bankrupt. Andrew Smart at Psychology Today instead wrote of the prank: “In many ways this academic hoax validates many of postmodernism’s main arguments.” And although Smart makes some informed points about problems in scientific rigor as a whole, he doesn’t hint at what the validation of postmodernism entails: should we abandon standards in journalism and scholarly integrity? Is the whole process of peer review functionally untenable? Should we start embracing papers written without any intention of making sense, to look for knowledge concealed below the surface of jargon? The paper, “The conceptual penis,” doesn’t necessarily condemn the whole of gender studies; but, against Smart’s reasoning, we do in fact know that counterintuitive or highly heterodox theory is considered perfectly normal there.

There were other attacks on the hoax, from Slate, Salon and elsewhere. The criticisms, often valid for the particular essay, typically didn’t move the conversation far enough. There is much more to this discussion. A 2006 paper in the International Journal of Evidence Based Healthcare, “Deconstructing the evidence-based discourse in health sciences,” called the use of scientific evidence “fascist.” In the abstract the authors state their allegiance to the work of Deleuze and Guattari. Real Peer Review, a Twitter account that collects abstracts from scholarly articles, regularly features essays from departments of women’s and gender studies, including a recent one from a Ph.D. student in which the author identifies as a hippopotamus. Sure, the recent hoax paper doesn’t really say anything, but it intensifies this much-needed debate. It brings out these two currents — reason and the rejection of reason — and demands a solution. And we know that lived experience is often going to be inconclusive.

Opening up lines of communication is a solution. One valid complaint is that gender studies seems too insulated, in a way in which chemistry, for instance, is not. Critiquing a whole field does ask us to genuinely immerse ourselves first, and this is a step toward tolerance: it is a step past the death of reason and the denial of science. It is a step that requires opening the bubble.

The modern infatuation with human biases, as well as Feyerabend’s epistemological anarchism, upsets our faith in prevailing theories and in the idea that our policies and opinions should be guided by the latest discoveries from an anonymous laboratory. Putting politics first and assuming subjectivity is all-encompassing, we move past objective measures for comparing belief systems and theories. However, isn’t the whole operation of modern science designed to work within our means? Kant’s system set limits on human rationality, and most science is aligned with an acceptance of fallibility. As Harvard cognitive scientist Steven Pinker says, “to understand the world, we must cultivate work-arounds for our cognitive limitations, including skepticism, open debate, formal precision, and empirical tests, often requiring feats of ingenuity.”

Pinker goes so far as to advocate for scientism. Others need not; but we must understand an academic field before utterly rejecting it. We must think we can understand each other, and live with each other. We must think there is a baseline framework that allows permanent cross-cultural correspondence — a shared form of life which means a Ukrainian can interpret a Russian and a Cuban an American. The rejection of Homo sapiens commensurability, championed by people like Richard Spencer and those in identity politics, is a path to segregation and supremacy. We must reject Gorgias’ nihilism about communication, and the Presocratic relativism that camps our moral judgments in inert subjectivity. From one Weltanschauung to the next, our common humanity — which endures across class, ethnicity, sex and gender — allows open debate across paradigms.

In the face of relativism, there is room for a nuanced middleground between Pinker’s scientism and the rising anti-science, anti-reason philosophy; Paul Feyerabend has sketched out a basic blueprint. Rather than condemning reason as a Hellenic germ of Western cultural supremacy, we need only adjust the theoretical model to incorporate the “new America of knowledge” into our critical faculty. It is the raison d’être of philosophers to present complicated things in a more digestible form; to “put everything before us,” so says Wittgenstein. Hopefully, people can reach their own conclusions, and embrace the communal human spirit as they do.

However, this may not be so convincing. It might be true that we have a competition of cosmologies: one that believes in reason and objectivity, one that thinks reason is callow and all things are subjective. These two perspectives may well be incommensurable. If I try to defend reason, I invariably must appeal to reasons, and thus argue circularly. If I try to claim “everything is subjective,” I make a universal statement, and simultaneously contradict myself. Between begging the question and contradicting oneself, there is not much indication of where to go. Perhaps we just have to look at history, note the results of either course when it has been applied, and take that as a rhetorical indication of which path to choose.

Pornography, virtual reality and censorship [II]: puritanism and videogames

[Continuing from my last post, noting that feminists have not behaved monolithically toward pornography, and statistics have not provided any justifiable inference from violent pornography to violent crime.]

Most feminists would align, however, in a condemnation of violent pornography, even if they do not attempt to use legal coercion to restrict it. Material becomes particularly controversial when it is first-person, or even playable. And thus pornography, and violent pornography, often intersects with the videogame industry. To name one infamous example, RapeLay, a role-playing game from a company in Yokohama, Japan, allows the player to assault a defenseless mother and her two children. Some critics argued that the videogame breached the Convention on the Elimination of All Forms of Discrimination Against Women, adopted by the United Nations.

New York City Council speaker Christine Quinn called RapeLay a “rape simulator.” Commenting on the game and other controversies, an IGN journalist added: “For many, videogames are nothing but simulators. They are literal replications, and, as such, should be cause for the same kind of alarm the real life equivalents would inspire.” Is this the same motive for consumers, though – that of, essentially, practice? On The Pirate Bay download page for RapeLay, a top commenter, “slask777,” writes: “I highly approve of this for two reasons, [sic] the first is that it’s a slap in the face of every prude and alarmist idiot out there and second, it’s a healthy outlet for the rape fantasy, which is more common than most people believe.”

I suspect that much of the appreciation for videogames is due to their simplicity, eventually supplemented by mild and mostly innocent addiction. Then – not to put too much faith in slask777’s psychological credentials – I suspect as well that violent videogames serve a “channeling” function, allowing some instinctual energies to exert themselves in a harmless environment and release some psychological tension. Perhaps the “rape fantasy” is not shared by the majority of the populace, but judging by the comments on the torrent site, the audience for this game cannot be confined to stereotypical images of basement NEETs. (Studies of the occupations of internet trolls likewise confirm the difficulty of pinning down an image of the anonymous internet user.) There was even an informative, civil discussion of reproductive anatomy on page one of the torrent site. Following this theory of channeling, we might find similar uses for virtual realities: nonreal locations in which to perform socially unacceptable acts, places where people with genuine sexual or sadistic pathologies can release their desires and blow off steam without harming other people. The entire premise is empathetic.

Of course, throughout history, any activity which holds the possibility of harmlessly releasing what could be described as primordial man, the “reptilian” side, the repressed id, or whatnot, has faced violent opposition from culture, religion, criminal law and various romantic-familial-social apparatuses. Here, we can already expect that, fitting into the category of “recreational and individuational,” virtual reality technologies will face a cultural blowback. RapeLay is an extreme example of both violence and sexuality in videogames: the high-profile protest it received could be expected. (Pornography has even been to the Supreme Court a few times (1957, ’64, ’89). In a separate case, Justice Alito, commenting on RapeLay, wrote that it “appears that there is no antisocial theme too base for some in the videogame industry to exploit.”) This moral outrage, however, is not simply content-based but medium-based, and flows directly from the extant condescension and distrust toward videogames and pornography.

The simple fact that disparate ideological camps agree, and otherwise compatible groups disagree, about the effects of pornography and videogames and what to do about them could be seen as demonstrating the issue’s complexity; in fact, it implies that opinion here is fundamental and dogmatic. The opinion provides the starting point for selectively filtering research. There are two logical theories concerning these violent media: the desensitization argument and the cathartic/channeling argument. Puritans and rebels enter the debate with their argumentative positions already assigned, and the evidence becomes less important.

Before evidence that might contradict either primitive position on pornography interferes, many people have already formed their condescension and distrust. The desensitization theory is particularly attractive because it rests on the most publicly understood thesis of cognitive-behavioral psychology: mental conditioning. Thus, when violence or abusive language is used as a male advance in adult videos and games, and women are depicted as acquiescing rather than fighting back, boys must internalize this as reality. Of course, media itself has no interest in depicting legitimate representations of reality; it is inherently irreal, and it would be naïve to expect pornography directors to operate differently. This irreality, I think, is poorly understood, and thus the “replicator” argument adopted by the IGN reporter becomes the most common sentiment among people who find pornography an affront to their morals and are also uninterested in research or empirical data. Glenn Beck, commenting on the release of Grand Theft Auto IV, said “there is no distinction between reality and a game anymore.”* He went on to say that promiscuity is at an all-time high, especially among high school students, when the number of sexual partners for young people is in fact at a generational low.

The seemingly a priori nature of a negative pornographic effect allows woefully out-of-touch rhetoric to dominate the conversation, appealing also to the emotional repulsion we may experience when considering violent porn. It encourages a simplification of the debate as well. Again, were it simply true that nations with heavy pornography traffic face more frequent sexual violence (as a result of psychological conditioning, etc.), we would expect countries like Japan to be facing an epidemic – especially given the infamous content of Japanese porn (spread across online pornography, role-playing games and manga). Yet, among industrialized nations, Japan has a relatively low rape rate. The rape rate of a nation cannot be guessed simply from the size or content of its pornography industry.

Across the board, the verdict is simply still out, as most criminologists, sociologists and psychologists agree. There are innumerable religious and secular institutions committed to proving the evils of pornography, but contrasting them are studies demonstrating that, alongside the arrival of internet porn, (1) sexual irresponsibility has declined, (2) teen sex has declined (with millennials having less sex than any other group), (3) divorce has declined, and – contrary to all the hysteria, contrary to all the hubbub – (4) violent crime, and particularly rape, has declined. Even with these statistics, and even though compelling arguments might be made against any and all research projects (one such counterargument is here), vigorous efforts are made to enforce legal restrictions – and that is something that will probably persist indefinitely.

I first became interested in debating pornography with the explosion of “Porn Kills Love” merchandise that became popular half a decade ago. The evidence has never aligned itself with either side; if anything, to this day it points very positively toward a full acquittal. Yet, young and old alike champion the causticity of pornography toward “society,” the family, women, children, and love itself (even as many marriage therapists recommend pornography for marriage problems). Religion has an intrinsic interest in prohibiting pleasurable Earthly activities, but the ostensible puritanism of these opposing opinions is not present in any religiously identifiable way for a great number of those raising the hoopla. So an atheistic condemnation of pornography goes unexplained. One might suppose that, lacking the ability to get pleasure (out of disbelief) from a figure-headed faith (which sparks some of the indignation behind New Atheism), people move to destroy others’ opportunities for pleasure out of egalitarianism, and this amounts to similar levels of spiritual zeal. Traces of sexist paternalism are to be found as well, e.g. “it’s immoral to watch a woman sell her body for money,” and through these slogans Willis’ accusation of moral authoritarianism becomes evident. Thus the attitudes which have always striven to tighten the lid on freedom and individual spirituality – puritanism, paternalism, misogyny, envy, etc. – align magnificently with opposing pornography, soft-core or otherwise.

*I try to avoid discussion of GamerGate or anti-GG, but it is almost impossible when discussing videogames and lunatics. Recently, commenting on Deus Ex‘s options for gameplay, which allow the player to make decisions for themselves, Jonathan McIntosh said that all games express political statements, and that the option to make moral decisions about murder and the like should not even be given to the player. It’s immoral that there is a choice to kill, was his conclusion. He’s right about all games expressing political statements. But he’s a fucking idiot for his latter statement.

[In my next post I’ll conclude with an investigation into the importance of virtual reality technology and the effect it will have on society.]

Pornography, virtual reality and censorship [I]: presidents and feminism

Oculus Rift, recently purchased by Facebook and partnered with Samsung, and HTC Vive, manufactured by HTC with Valve technology, have led the 2010s wave in developing virtual reality headsets. These technologies, innovative by today’s standards but primitive by science fiction’s, mark the beginning of a differently structured society. They also mark a starting point for a new debate about privacy, the social effects of videogames, and especially censorship in media.

Virtual reality (in its not-too-distant actuality) offers an opportunity to behave outside of social norms in an environment that is phenomenologically the real world. The only comparable experience for humankind thus far is lucid dreaming, for which the rewards are less intense and the journey less traversable than the quick promises of virtual reality machines. One inevitable development for these machines is violent, sexually explicit experiences, available cheaply and accessible 24/7. To see how VR might be received, the closest industries to analyze are videogames and pornography.

Interestingly, pornography has a very liberal history, in comparison to other “societal ills,” like drugs. Erotica dates back to ancient cultures — notably, the Kama Sutra, hardcore by today’s standards, is still a staple of contemporary sexual experimentation — and today’s perversions were common themes: bestiality, pedophilia, etc., although pornography with an emphasis on violence might be a more modern trend. This isn’t to ignore, however, the roles typically played by women in ancient Western folklore and mythology, which are degrading by today’s feminist standards.

The case could be made that today’s censorial views on pornography come from a far more malevolent or oppressive stance toward women than those of two millennia ago. The free expression that pornographic media once enjoyed was severely deflated over the 20th century. Only two years ago, a plethora of activities were banned from pornography in the United Kingdom. Reacting to the legislation, commentators were quick to criticize what was seen as policy specifically targeting female pleasure. Female ejaculation, fisting, face-sitting, and many forms of spanking or role-play were among the restrictions. There are puritanical, “moral outrage” elements to the restriction, but many noticed the absurdity of banning face-sitting: said one producer, “Why ban face-sitting? What’s so dangerous about it? … Its power is symbolic: woman on top, unattainable.” (There has been well-intended censorship as well. Los Angeles County passed Measure B in 2012 to require condom use during any pornographic scene with anal or vaginal contact, to combat the spread of venereal disease.)

Nowadays, there are plenty of porn directors who have learned to focus on both male and female pleasure, and who have reintroduced artistic merit to their productions. With this equalizing force gaining momentum in porn, it’s curious where the vehement, persistent condemnation springs from, when it is not focused exclusively on abusive sex scenes. In addition, the negative effects of pornography’s presence in society are still being debated. Just the other day, a study which led to headlines like “Porn doubles the risk of divorce” and “porn signifies a death knell for marriage” was criticized by Reason magazine for failing to address important underlying factors that more plausibly contribute to both pornography consumption and an unhappy marriage ending in divorce. There seems to be an obsession, on the part of the great majority of the public, with assigning some sort of social harm to pornography.

Research on photographic pornography’s effect on society began early and aggressively. The Meese Report (1986), commissioned by Reagan and still frequently cited by anti-pornography advocates, determined pornography to be detrimental to society and family relations, and especially to women and children. Arguments built on similar reports attempt to connect sexually explicit material with rapes and domestic violence, alleging that desensitization to rough sex carries over from the depicted world into the real one. Henry E. Hudson, the Chairman of the Meese Commission, alleged that pornography “appears to impact adversely on the family concept and its value to society.” The Meese Report, however, has been challenged extensively for bias, and is no longer taken seriously as a body of research. One criticism, by the writer Pat Califia, identifies a traditionalist narrative embedded in the research, stating that the report “holds out the hope that by using draconian measures against pornography we can turn America into a rerun of Leave It to Beaver.”

The United States’ Commission on Obscenity and Pornography, preceding the Meese Report and commissioned by Lyndon B. Johnson and Nixon, was unable to find evidence of any direct harm caused by pornography. (Although Nixon, despite the evidence gathered under his administration, believed porn corrupted civilization.) It is curious that a new federal study was requested only sixteen years after the first extensive one, but maybe not too unusual given the growth of porn with technology (from adult stores and newsstands to unlimited free online access; the internet just celebrated its quarter-centennial birthday); also not too unusual given the absurd and expensive studies already undertaken by the federal government. It is also worth pointing out that pornography, though often connected to feminism, is a divisive issue within 20th-century and contemporary feminism: some thinkers, like Andrea Dworkin, condemned it as intrinsically anti-women; other feminists, like Ellen Willis, argued for pornography as liberating and its suppression as moral authoritarianism. The debate along lines of sexuality, online or otherwise, culminated in the feminist “sex wars,” with groups like Feminists Against Censorship and Women Against Pornography popping up. Thus, the debate is open across every ideological camp, and support of pornography is neither necessarily liberal nor necessarily feminist.

[In the next post, I discuss violent pornography’s cross-media transformation into videogames, more sociological research and the general point, and insecurity, of prohibitory measures.]

Freud and property rights

In a recent, short discussion of property rights, I offered that property is an extension of the body, and that property rights can therefore naturally be assumed to be equal to our bodily rights. The response was highly critical. The body is intrinsically tied to our identity, as most recently argued in Sosa-Valle’s article; most people would agree with that. I feel similarly about personal property, even if proving this is somewhat more difficult.

The question of property comes up in an infinite number of discussions. If I own a Sharpie, acquired through legitimate transaction, I can legally prohibit another from using it. Isn’t this more of an intrusion on another’s freedom to explore the world than it is an exercise of my freedom to protect this object? Why is this Sharpie mine, such that I may disallow others its use? How is it within my freedom to deny it to others?

Where property rights actually come from, and what concerns, aside from economic or consequentialist ones, validate their protection, is a fundamental question. Here is a perspective drawn from a Freudian dissection of ego relations and historical-technological advance.

Technology is fundamentally an extension of human attributes. What is a record, but an upgrade of human auditory memory; what is a video, but an upgrade of human visual memory or imagination; “materializations of the power [man] possesses of recollection”? “With every tool man is perfecting his own organs, whether motor or sensory, or is removing the limits to their functioning. Motor power places gigantic forces at his disposal, which, like his muscles, he can employ in any direction,” and so on (Civilization and Its Discontents, p. 43).

It’s not remarkable to consider that material objects may stand in for actual limbs, given that technology is simply human advancement. When a woman loses her ability to walk and is outfitted with a mobility scooter or the like, the apparatus takes the place of her natural walking endowments; prosthetic advancements, still infantile in Freud’s time, increasingly blur what counts as “legs” and what does not. We wouldn’t weaken her legal bodily autonomy just because her legs are composed of different material than the organic original.

Our accessories, aside from restoring us from disabled to able-bodied, take us far beyond what the human was ever capable of accomplishing, creating “prosthetic Gods.” The modern cellphone contains the entire world of knowledge in its hardware and software. Many people feel more connected to their tablets than to their hidden organs. (Or maybe, more accurately, people are more connected to the functionality of their tablets than to the automatic, reflexive actions of their organs. This is clear because tablets are replaceable, yet the overall feeling of attachment persists.) The ego, on a Freudian perspective, is extended to the external world through some fulfillment of instinct that technology allows in an otherwise impossible situation (see instinct displacement, Instincts and Their Vicissitudes, p. 121, James Strachey translation). It becomes difficult to delineate what is attached to “me” and what is not, contrary to the simplistic, phenomenological dichotomy of body and world.

How is it, anyway, that our body is even connected to our psyche? For an extremely brief discussion, consider that our sense of self, as a straightforward consciousness, is not immediately crippled by, say, the removal of an appendage in a freak accident. The attachment that we feel, then, is cerebral, historical, and functional. These same conditions are equally capable of relating the sense of self to foreign, i.e. materially external, objects. Indeed, the “connection” we feel to our body is perfectly capable of being transferred onto other objects. See, for instance, Freud’s discussions in Fragment of an Analysis of a Case of Hysteria (this point of transference could be argued to be the central pillar of the classical psychoanalytic perspective on childhood and ego-formation); David Chalmers’ arguments for the phone as a part of our mind via cognitive extension; and recent psychological studies of “joint action,” through dancing and the like.

Given these instances, I think it’s more sensible than not, provided one accepts even a little Freud, to perceive property rights as resting on the same ground as bodily autonomy.

Of course, Freud never argued for property rights from his analysis of technology as ego-engagement. His political views were mostly impersonal and disinterested. He left Vienna for London after his daughter was summoned by the Gestapo in 1938, but unfortunately left no direct commentary on totalitarianism, and most of his political views have to be derived.

Unanimous direct democracy

I was recently introduced to a few positive arguments for this in R. P. Wolff’s In Defense of Anarchism. Lacking the book to cite from, I recall that he was absorbed with the problems of democracy, namely triumphant majoritarian democracy, in which the minority suffers exclusion from representative processes and alienation under its laws. Philosophically, he thinks contemporary liberalism leads to an illegitimate government, and that anarchism is the only legitimate form of governance.

He proposed a possible method by which unanimity might be lost (as is the case in any large enough governed society), but directness and egalitarianism sustained and an authentic “rule by the people” enacted: socially funded television sets, installed at large community centers or subsidized for private homes, with featured debates every election season. Specialists in fields like economics, American history and foreign policy, drawn from various recesses of the political spectrum, could appear to explain the more complicated issues in a collaborative, unpedantic effort. Middle Eastern history, for example, could be briefly clarified before candidates discuss their stances. (Of course, biases would find an entry point through the specialists. Further discussion is necessary here.) At the end of the week of debates, once issues are clarified and nominees understood, the remote control could be used to cast a vote for each member of the household according to the census. This system would greatly increase voter participation and make domestic politics worthwhile for the average citizen, returning policy-making to everyone affected.

This is an idea for working within current society on a system that gives voters a better say; it should be judged on those merits.

Is it feasible? Is it at all admirable? Discuss.