From Grass Valley to Chico: party culture and noise ordinances

A professor at Chico recently asked his critical thinking class about noise ordinances: “Why should a few people get to shut down a few hundred people having fun?” The question probes our intuitive responses to questions of utility. However, it goes beyond that.

Recently, students at Chico State tried to organize support for repealing the city’s noise ordinances (after the penalties became harsher), which would mean more loud parties. The effort is unlikely to go anywhere: thousands of non-collegiate adults live in the immediate vicinity, have to put up with noise on a regular basis, and need police power to shut down public disturbances.

Nevertheless, the college was established in 1887, and everyone who chooses to live in the city has been there a much shorter time than the campus. Its reputation precedes it. Chico State ranked 59th out of 1,426 colleges for its party scene in 2017. In 1987, Chico came in first place in a Playboy ranking, and since then its reputation has been one of general excess.

With a population of 90,000 when school is in session, that’s a lot of partying hard. President Paul Zingg, who presided over Chico from 2004 to 2016, made a crackdown on partying central to his mission statement. Zingg, 70, entered the Chico administration after a fraternity hazing death and sought to reduce the school’s notoriety for binge drinking. Since his arrival, Chico has dropped another twenty or so notches in partying prestige.

There are many programs on campus to discourage binge drinking and to equip students with the knowledge of how to proceed if alcohol poisoning does occur. Further, the local fines for minor-in-possession charges are exceedingly costly. However, policing parties harder – both academically and politically – might have some unintended consequences. Driving out parties means young people will have to entertain themselves in other ways, and, barring 20-year-olds spending their nights at the science museum, different drugs may show up to occupy them.

There are many California cities with no party scene but recurrent drug abuse. My hometown is one such place. Nevada County – composed of Grass Valley, Nevada City, Penn Valley, Alta Sierra, and, sort of, Truckee – has problems with homelessness, youth homelessness and youth drug abuse. These mostly register as misdemeanor offenses, however, so Nevada County has a low felony rate per capita compared to neighboring counties and the state as a whole.

When I went to middle and high school in Nevada City and Grass Valley, there was a striking problem with teen drug abuse and recidivism. I knew many students without stable living conditions. Many graduates lacked any occupational motivation, while the area offered an extensive and encompassing drug culture. Narcotics and seasoned vagrants provided a backdrop against which a hopeless mentality prospered. (The Nevada County Sheriff’s activity website consists largely of public-disorder and drug-related arrests.)

The city of Chico, meanwhile, is adjacent to methamphetamine giants like Oroville and Marysville. The city itself had eleven suspected clandestine drug laboratories as of 2014, compared to Oroville’s twenty. (Residents know there are more.) The governance of Chico, in terms of partisanship, is not unique; nor is its relative adolescent-adult ratio peculiar for a college town. Yet it manages to get by without the excessive hardcore drug use or addiction rates of its neighbors. I think it’s plausible that, among other factors, Chico’s party scene helps keep out the harder drugs.

Parties mean marijuana, alcohol and cocaine. These are staples in any festive town. Without parties, however, there are fewer party drugs, and a window opens instead for more deadly ones. People don’t party on deadly drugs, like those that slow heart rate, so where there are parties, designer drugs appear rather than lounge-around, do-nothing narcotics like opioids or barbiturates. (I think this effect holds for towns up to a certain population, but once the population and acreage are large enough, the effect may begin to work in the opposite direction.)

Grass Valley and Nevada City have a large proportion of young adults but a microscopic party scene. They are sister towns with relatively undefined borders and a combined population of about 16,000, made up mostly of middle-aged adults and a subsection of retired elderly folk. Nevada City came in eighth place for most dangerous city in California in a Telegraph Today poll. This is partially explainable in terms of population – the total number of people is so small that even a handful of violent incidents sharply inflates the per-1,000 likelihood of being a target of violence. Like I said, violent crime appears relatively absent. Nonetheless there is an inordinate amount of drug use, from milder drugs like marijuana to titans like methamphetamine.
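The per-capita inflation described above can be made concrete with a toy calculation. The incident counts below are hypothetical, chosen only to illustrate the effect of dividing the same number of incidents by a small population:

```python
# Hypothetical illustration (not actual crime figures) of how a small
# population inflates a per-capita crime rate.

def rate_per_1000(incidents, population):
    """Return incidents per 1,000 residents."""
    return incidents / population * 1000

# The same absolute number of violent incidents in two towns:
small = rate_per_1000(incidents=8, population=16_000)  # Grass Valley/Nevada City scale
large = rate_per_1000(incidents=8, population=90_000)  # Chico scale

print(f"small town: {small:.2f} per 1,000")  # 0.50
print(f"large town: {large:.2f} per 1,000")  # 0.09
```

With identical absolute incident counts, the 16,000-person town reports a violent crime rate more than five times that of the 90,000-person city – which is how a small, mostly quiet town can place high in a “most dangerous” ranking.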

Grass Valley teenagers don’t have many all-night ragers at which to shotgun cans of Pabst. This means they don’t have an environment in which to learn how to party hard safely, and also that they become dependent on drugs that can fit any occasion.

There’s a reason why alcohol is the most popular drug for people under 21: it’s illegal until you turn 21. Similarly, I think crackdowns on drug use – on common and relatively harmless drugs like alcohol, nicotine, marijuana and cocaine – may backfire and push kids into more unexplored narcotic territory. (I also wrote a paper last year illustrating that several non-profit efforts to quell methamphetamine use across the nation had a negligible effect or the reverse effect.)

The drugs of today are a new type of foe. The lost war on drugs is getting the tar kicked out of it by Vicodin and OxyContin, in an age with the highest recorded overdose rates of all time. Smart drug policy would investigate the positive effects that parties have on the broader neighborhood, like tourism, the promotion of social behavior, and the promotion of drugs that are better understood by medical professionals and users alike.

The hazing death that brought President Zingg to office was not from alcohol, nor any party drug, but from excessive water intake. The mistake is to infer a causal connection between dangerous partying and deaths like these when there are causal events underlying both. Loud, boisterous parties are preferable, any day, to equally illicit but infinitely more dangerous covert drug use.

So, to answer the original question, “Why should a few people get to shut down a few hundred people having fun?” They shouldn’t. Screw them.

HARD Summer: autonomy within a prohibitive legal system

I want to elaborate on an earlier post of mine that received some backlash on Reddit. I said “it is [not] an altogether correct judgment to blame drug consumers for their deaths.” I stated that I don’t believe drug users are entirely to blame for the occasionally lethal consequences of their willful intoxication – a point I didn’t elaborate on sufficiently, and one that, at first glance, seems wholly incorrect or contrary to many principles espoused on this site. Not so. Illegal drugs (and to an extent, legalized drugs, and to a small extent, anything illegal) occupy a special status in the realm of human action.

Imagine if I, an undergrad who studies law and philosophy, took on a business venture. I haven’t studied finance or business. Not only have I avoided all the theory, but I have no real-world knowledge of the stock market or fluctuating national trends. If I go out on a limb and buy shares in a company doomed to fail, at that precise moment I exercise my will, make a poor judgment, and the inevitable consequences fall on my own shoulders. The responsibility is on me, the same as it would be if I were highly educated on the subject and took a promising gamble. Why? Clearly, because I could have easily educated myself – the information, and the means to obtain it, are out there.

Drugs, such as methamphetamine, are different. Legal authorities have specifically prevented me from obtaining information – information which would have been collected and condensed naturally – that is requisite for forming a knowledgeable, and thereby autonomous, judgment. In the same way that a child cannot be blamed for removing a firearm from its safe and mistakenly pulling the trigger, I cannot be entirely blamed for making decisions whose consequences I understood incompletely. It’s part of the reason the crime of homicide is nowhere in law so general: it is divided into murder and manslaughter, voluntary and involuntary, and so on, based on degrees of competence. Now, in the case of a hard drug, I did go of my own accord into the black market to purchase the substance, and to some degree, no matter what, I was aware of the risks attached to it. Regardless, the government has, through its force, removed sources of information that would be central to my evaluative performance. Scientists cannot perform tests, because of enforced ethical standards or threatened jail time for proximity to a drug. Direct testimony from acquaintances is scarce, because the ability to acquire the drug has been drastically altered (in some circumstances removed; in others, just made more seedy). In addition, police officers often do not know how to handle someone on certain drugs (even when they are required to); they know only how to make the arrest, so occasionally life-threatening situations are stalled on the way to medical care. State paternalism actually has an infantilizing effect, which shifts some of the responsibility from my hands into those of my helicopter caregiver.

This disruption of responsibility does not imply the government now has the right to make my decisions for me: making decisions for me is what led to the disruption, not any innate immaturity on my part or the part of any user. Disruption cannot justify itself. Instead of knowing why methamphetamine is dangerous for such-and-such reasons, what I really know and understand is that the market I can purchase it in is dangerous. Genuine knowledge – wisdom about a substance and its chemical make-up – is less commonplace than wisdom about the dangers of obtaining it: knowledge whose existence is utterly contingent on an extensive state apparatus.

So, authorities that pass laws criminalizing drug use directly make the market more dangerous (by creating cartels, etc.), and indirectly prevent consumers from discovering the information appropriate to making an informed (and thereby blameworthy) decision. The weight of many acute drug overdoses rests on the heads of lawmakers, and in every future drug experiment the government plays an inhibitory role in the autonomy of the experimenter.

Politically-minded people who don’t want the government (or other people) making decisions for them – whether that’s conservatives hoping to direct their income how they see fit, or liberals fighting for marriage equality – usually employ the argument that people can be responsible for themselves. What I think is obvious is that when the government attempts to prohibit any substance, it impairs people’s ability even potentially to be responsible when making decisions about that substance, by coercively limiting the information that would naturally develop.

Vulvæ in pornography and culture

I was in a discussion recently on the effects of “porn culture” on young boys and girls. I went back and forth a bit with a debater, until she mentioned the rising rates of labiaplasty, prima facie caused by women’s lack of confidence in their own external genitals after watching pornography (which, according to everyone, is a massive, growing, all-pervasive industry).

Labiaplasty – surgery on the vulva to trim or cut away the labia minora (such that they protrude less than the labia majora) or the clitoral hood – is, indeed, on the rise in the West; it can also lead to complications including pain and infection. The number of girls under eighteen who underwent labiaplasty almost doubled between 2014 and 2015, and the number continues to grow.

Insecurity is a leading factor in many women’s decision to pursue the surgery. Patient satisfaction post-op is about 95%. Labiaplasty improves confidence and happiness and can lead to a healthier sex life. Costs run about $4,000–5,000, which is not terribly expensive for a life-altering operation.

However, the numbers alone are deceiving. Plastic surgery is a relatively young field with ever-improving techniques. To attribute the rise in labiaplasty immediately to the effects of an increasingly pornographic culture is disingenuous. There are other factors, and an analysis of labiaplasty patients shows that 32% of women undergoing the surgery did so for functional impairment, 31% for combined functional impairment and aesthetic improvement, and 37% for aesthetic improvement alone. That is a majority of surgeries – 63% – aimed at achieving or improving physical function. Of course, if you’re considering a surgery to improve a malfunctioning organ and it comes with a visually pleasing benefit as well, you might incorporate this a little into your selection process – thus, it is reasonable to assume that many of the patients in the combined category were focused primarily on physical function, with visuals as an afterthought. The authors of the study found that “the majority of patients undergoing reduction of the labia minora do so for functional reasons with minimal outside influences affecting their decision for treatment.” This is an inconvenient conclusion if you want to argue that porn culture is causing these surgeries.

There’s also, as always, much more to the story. Female porn stars have what might be called a “tidy” mons pubis, in terms of pubic hair and the conspicuousness of labia. These “designer vaginas” don’t accurately represent the general, mammalian populace of people with vaginas. Yet, as noted by Lisa Wade, a sociology professor at Occidental College, the reason porn actresses in Australia maintain such kept labia is not male, female or porn-producer preference, but rating boards (which are government bodies). Soft-core porn in Australia can only show “discreet genital detail,” which rules out fleshy, extrusive labia. The labia minora are considered “too offensive” for soft-core, and airbrushers at Australian porn magazines often apply digital labiaplasty to “heal” vulvas to an acceptable “single crease.” Similar distinctions apply elsewhere. In Germany, for instance, one must be at least 16 years old to purchase soft-core material and 18 to buy hardcore; the distinction falls along multiple lines, some of which concern how extrusive the labia are.

One of the feminist authors I’ve linked makes the point clear by comparison: what if porn classificatory boards determined the testicles were too explicit? What if the scrotum had to be airbrushed out in order for men’s genitals to appear in soft-core?

Daisy Buchanan at the Guardian observes another reason women are seeking aesthetic “improvement” to their natural organs: the seductive exhibitionism of pornography is not confined to hardcore adult films anymore, it’s everywhere – at music awards, during celebrity photoshoots, in casual advertisements, whatever.

This point could be worded in one of two ways: society is hyper-sexualized, or our culture is sexually liberated. The second, more optimistic wording signifies a social maturation into sexual autonomy. Buchanan exaggerates by claiming we’re “as likely” to see a vulva on a music channel as on a pornographic website, but nonetheless, sex has become a more tolerable phenomenon for more and more people. The labiaplasty rate can be disparaged as a coercive instrument for making young women meet societal standards, or it can be lauded as growing opportunity for women with functional problems, and as women’s willingness (and increased ability to afford) to shape their bodies how they want.

As detailed previously on Notes on Liberty (here and here), the stifling force of 20th and 21st century censorship has been obsessed with pornography and female pleasure. For those who view labiaplasty as a disturbing, sexist phenomenon, one place to start would be rating boards, often managed by the government.

People like Dylan Marron gave us Trump

Have we gotten it out there enough that the left’s obsession with elitist politically correct culture partially lost them the election – per Bernie Sanders, per President Obama – throwing plenty of centrist folks into the authoritarian right? Yeah? Good.

Here’s an example of the tone-deaf (oops, was that ableism?) leftist reporting style that utterly alienates its audience:

The whole video is done in a patronizing, vicious manner. The reporter might have intended the mood of his tirade as sarcastic, or funny, or something. Instead, it comes off as the embodiment of the left’s carcinogenic (oops, was that ableism?) idée fixe: ostracizing condescension. (“If you don’t agree with me – fuck you!”) In this post-election nation, where Jonathan Haidt’s message of understanding might yet get a chance, videos like this are just tedious.

I was struck not just by the venomous fashion of the sketch, but by its utter lack of depth, which has become familiar in most comedic reporting since the election. Moreover, the entertainer Dylan Marron clearly misunderstands one of his own vital points. He writes off disability method acting (in films, not just Trumpian impersonations) as “ableist,” making me wonder if he understands the purpose of acting: to portray someone you are not, and to do it well. Marron stresses the point: “witness Arts academies honor able-bodied actors over and over again for pretending they have a disability.”

Yes, acting is also known as pretending. For a tautology, Marron thinks it packs quite a punch. There is a reason disabled people don’t often play disabled roles, and it’s not just that most celebrities – with a great many exceptions – are able-bodied, physically and mentally. Actors aren’t hired to portray who they actually are, unless it’s a biopic, and the more difficult the role, the better the acting. Marron, it seems, wants a world where actors must portray their own actual lived experience – Hollywood directors recruiting genuine serial killers for their horror films. In essence, the abolition of acting.

Portraying a character with a disability is a role every good actor should be capable of executing. Celebrities get roles portraying disabled people because they’re good actors, and there’s nothing ableist about it. Tom Hanks won the Best Actor Academy Award because the character of Forrest Gump was a difficult one to convey. Doesn’t actively seeking out someone who is developmentally challenged, just to put them in a feature film as a mentally-disabled character – as a token – seem far worse than recruiting someone with genuine acting talent to take on the role?

In case you think Marron doesn’t actually want an end to all acting, ever, Marron stresses, again, that actors are taking on a role “they’ve never lived” when they portray mentally-challenged individuals. In other words, they’re acting. Not just acting, but acting well. Boom, take that, you ableist scum!

Here, a point could be made that Marron doesn’t even bother to observe: if actors have never lived life disabled, isn’t their research on the role going to be informed by vicious stereotypes and come off as derogatory or insensitive? Now this actually is something to be concerned about. Directors and actors should, certainly, consult people with actual intellectual or physical disabilities when they feature these roles in their films, for the sake of decency and guidance. Research should draw on good information rather than baseless stereotypes about what it means to be bipolar, or autistic, or depressed, or what have you.

This also answers a potential rebuttal of my post: if actors can portray any role, even one they haven’t lived, what’s the problem with “blackface”? It’s simple: blackface – acting as a different race or ethnicity – is informed by vicious stereotypes. It was abandoned by Hollywood because it was genuinely racist and based on ethnic clichés. Thus the difference between Tropic Thunder‘s “Simple Jack” and Forrest Gump: one played off vulgar stereotypes (ironically, of course), and one actually attempted to portray a mental disability realistically.

In the world of this marronic sketch, The Dark Knight would never have been filmed. A Beautiful Mind, Fight Club, Psycho would never have been filmed. Donnie Darko would never have been filmed. Benjamin Button would never have been filmed (not so bad). The obsession with politically correct culture already gave us Trump, and nonsensical videos like this are essentially advertisements for his re-election. Don’t take away our cinema too.

The safety of safe spaces

Michelangelo’s recent post on safe spaces has led me to revive an old thought. Safe spaces aren’t bad in themselves – beyond infantilizing students, they don’t tread on anyone’s rights. The worrisome consequence is censorship, which might arise from building bigger and better safe spaces until eventually the university wants to consider its entire acreage a safe space, and finally the nation does too.

That concern is very real, especially given that political commentary these days is more tense than ever, and parties may wish to retreat from every corner of the internet or any social gathering. What I want to analyze here, though, is what actually happens with speech, and the inherent problem with protecting ourselves from speech: the consequences of words are genuinely up to us.

As safe spaces develop at universities, the idea bandied around is that words hurt, and that students on campus need administration-sponsored buildings to provide a comfortable atmosphere in which to avoid or deal with these infiltrations on their emotional (or other) safety. It’s worth prefacing that, surely, words do hurt, in a sense; it would be ignorant to suppose that vocalizations never have any traumatic impact on the listening party. And the claim that safe spaces are intrinsically tied to minority representation and protection is irrelevant to the actual message broadcast by these miniature creations: again, that words hurt, and are somehow a tool of oppression.

It is politically advantageous to think of words as tools of oppression, as I noted with my experience in a multicultural and gender studies class. Attaching the label “oppressive” to an action in the cultural zeitgeist makes it far less difficult to rally people against that action, or even get it prohibited. However, though words might be useful tools for oppressors, linguistic oppression is always, in a very material way, defended and perpetuated by the would-be oppressee.

Let’s think about messages and symbolism. There is no meaning attached externally to an object – only internal, psychological meaning(s) inside individuals. (These might arise culturally, habitually or traditionally.) Without an existent population of individuals proclaiming that a word means something, an arrangement of squiggly lines given an arbitrary pronunciation has no relevance or meaning. If a word is antiquated it has no meaning (though it may once have had one, for a then-extant population). I know this position on language might be aggressively denied by some thinkers who commit themselves to this arena, but I think this formulation is adequate for now, being commonsensical. If it is incorrect, it is at least relevant for my main explanation of why safe spaces are ridiculous (following Robert Nozick’s analysis of explanations, it could be thought of as a fact-defective potential explanation).

Following this point, in a very real sense, both persons make deliberate decisions through vocalization. It’s obvious that “faggot” or “dyke” are worthless without people to identify them – whether you’re an internalist or an externalist about language, this formulation still holds; hence the simplistic and applicable definition. But it is perhaps less obvious that the meaning of these words is most critical in the person who listens to their proclamation, as opposed to the enunciator.

The listener has to want the word or phrase to mean whatever it means to him or her, and want that meaning to keep. If “faggot” is a pejorative term for a homosexual man to a listener, L, it reflects his desire that “faggot” remain this vulgarity. L’s desire to interpret a word surely does not change the intention of the orator, S, in saying it. Yet if S is speaking with intent to curse provocatively, this curse – passing as a wave form to L’s ears – and its reception are wholly dependent on L’s conscious attention. There isn’t a meaning embedded in the sound wave; there isn’t a meaningfulness-mesh suffused throughout Earth’s atmosphere that attaches purposefully to the human undulations that disturb it. Meaning is in S… meaning is broken as the vocalization travels… and a meaning is conceived in L. Meaning isn’t revived, resuscitated or reinvigorated in L: it is created wholly anew in his brain. There is a direct, physical connection between S’s oral exercise and L’s auditory reception, but no such connection exists between S’s and L’s brains, where meaning exists. Thus, each person creates it fresh and idiosyncratically. It is always an effort of both parties to communicate meaning.

Given this understanding, seeking protection from words is ineffective. This is not said in ignorance of the social research that discloses the power of words as comparable to physical violence. It has been shown that lashing out vocally can cause trauma, perhaps even on par with getting physical. Verbal abuse, the height of dangerous speech, is not the proper nor the stated enemy of university safe spaces, however. Safe spaces outlaw a whole range of contrasting opinions and controversial dialogue, whereas verbal abuse is, inherently, abusive, and to some degree illegal. Verbal abuse, though it might contain the same and worse pejoratives as any ordinary, disrespectful speech, is legitimately dangerous, and in a sense implies a relationship between the speaker and listener that is absent in the latter type of speech. If catcalled on the streets for wearing a short skirt, one is not verbally abused, but instead harassed (and only harassed if the speech is continuous). Regular encounters with strangers might be distressing and unpleasant, not to mention obnoxious, but they linger in an area of the violence spectrum far below verbal abuse. The verbal encounters a student has at a university with a speaker or faculty member rarely ever constitute abuse, and safe spaces are set up to avoid or deal with these encounters; so safe spaces do not deal with verbal abuse but rather with arguments and disagreements.

This sort of analysis seems to assume an innate stoic element to persons, so that emotional reactions are wholly within their rational control. The intention is not to claim that veterans with post-traumatic stress, or victims of violent rape, are willing their capacity to be triggered by speech – that they are entirely complicit in their ongoing trauma. With the analysis it seems more likely that persons with genuine inabilities to “get over” distressing speech have a mental blockage that precedes the verbalization of S. While an untraumatized person has to make an effort to conceive the meaning originally intended by S, war veterans might be triggered by references that are beyond, in some way, their ratiocination; to be consistent with the rest of the reasoning here, we can say they can’t choose to choose a separate meaning.

In discussing persons who speak as contrasted with persons who listen, the word “listen” is specifically important. It might have seemed appropriate, at the beginning, to portray the one who does not speak as the “receiver” rather than the “listener”; after all, with active listening painted in the behavioral sciences and therapy as a narrow skillset far beyond simple hearing, we might not want to apply this connotation of activity to the person on the receiving end of a profanity (in order to play up that person’s role as inactive victim). Hopefully the importance of “listener” is now clear: the receiver is indeed always L when words hurt, or when any meaning whatsoever is left intact between the orator and the audience.

Now, safe spaces are a better alternative to no-platforming speakers with controversial or simply oppositional viewpoints. They are echo chambers that stifle novel opinions, for sure, but as long as their participation is voluntary, they pose no real issue.

But they cannot be justified by recourse to “protection from oppressive speech,” or buffer from profiling hate speech. Verbal abuse is almost never an occurrence at university events, and the maxim that speech somehow, as a singular action of the speaker, causes mental or emotional damage has been refuted. Unless all the people arguing the need for a safe space genuinely suffer from post-traumatic stress or another disorder which limits their ability to choose to choose, their claim for safety is far less strong than it might seem to be.

What sort of discipline is women’s studies?

The tenets of women’s studies – and gender or multicultural studies – of patriarchy, intersectional oppression and social constructionism are, as noticed by Toni Airaksinen, unprovable and unfalsifiable. (We’ve had some discussion of Popperian falsifiability elsewhere; maybe this is another opportunity.) Social constructionism, I would argue, stands as a legitimate scientific theory: it can be either confirmed or refuted by biological evidence (Cf. John Dupré, Ian Hacking, Nancy Cartwright, etc.). The other two tenets, however, cannot hold the esteem of science, and don’t fit nicely as philosophical, sociological or political theories either. If they are considered philosophical theories, it has to be recognized that they began with their conclusions as premises; ergo, they are circular, and only confirmed by circularity. Neither conjecture has even the loose falsifiability to belong to a social science like sociology, and their refutation (were it possible) would mean the closing of their scientific branch, so they cannot be (relevant) sociological theories. Finally, very few theories that fall under the branch of “political” are fundamentally political; usually, they begin in another, more atomic field and are only secondarily responsive to the political realm. So, calling them political theories begs the question. It makes the most sense to classify theories of “patriarchy” and “intersectional oppression” as theological conjectures instead of philosophical, sociological or political.

To demonstrate the point: firstly, they posit an original sin – some of us are born with privilege, and only through reparations or race/gender denunciations can we overcome it. They also, again like Christianity, possess a disdain for the current, real state of things: where Christians posit a celestial heaven in the afterlife, progressive idealists embrace utopian visions materially impossible to accomplish, or at least humanly unrealistic. To fuel the utopianism, historicism – a disregard for enlightened economic, historical or sociological analysis – comes with the politics. Another tenet of religion is its typical weak exclusivism (van Inwagen, 2010): religions take themselves to be logically inconsistent with other sects (that is, if two belief systems are logically consistent, one is not a religion), and hold that, for people in the typical epistemic state of its adherents, it is rational to accept that religion. This mild exclusivism is very obvious in movements like modern feminism; it is also easy to see that stronger exclusivism not only follows from the weak variety but is applicable to the leftist ideologies as well: proponents of a religion must find opponents who possess the same epistemic certifications to be irrational. Also, the same exceptionalism, and infiltration into politics, is familiar in religions (like Christianity and Islam) as well as among feminist theorists who seek to bend the law toward beneficial ends, beyond its legitimate jurisdiction.

Finally, Ludwig Feuerbach wrote in the 1840s that theology was truly anthropology: Christianity was an appraisal of man, and the story of mankind. Gender studies reverses this: what might euphemistically be termed social science – anthropology, sociology, etc. – is discovered to be instead a new sort of theology. Facts are subordinated to blind belief and obedience, and the probing essence of reason is dismissed for the docile, hospitable nature of faith. It seeks to see God, or masculine oppression, in everything. This is another instance of its discontent with anything formerly satisfying; until the tenets of women’s studies are taught exclusively in the classroom, its students will consider themselves forever oppressed. Creationism’s proponents wrestled fruitlessly as evolution replaced their faith in American middle schools. Feminists will try tirelessly to invade grade school as well, until faith can again triumph over critique.

Gogol Bordello and Multiculturalism

Donald Trump is about to be President of the United States. Trump’s victory is the result of a plethora of political and cultural attitudes. It is not a “white-lash” (both candidates failed to attract Hispanic and black audiences); it is not because America is, beneath the diverse veneer, intrinsically racist, sexist, xenophobic, Islamophobic, homophobic, etc.; it is not simply that Bernie might have won had the DNC not been skewed in Hillary’s favor; nor is Trump’s unexpected win simply retaliation from general conservatives after a double Democratic term. One of the largest elements in Trump’s victory is the cultural shift toward political correctness, and the backlash from not only conservatives but apolitical entities as well. People on the left won’t understand (except maybe accelerationist Marxists), but the infiltration of academia by progressive ideas, the shifting of institutions into liberal political pandering, and the emerging calls for the repression of free speech have driven a great migration of non-Republican and nonpartisan minds into the Trump vote.

Establishment-left politicians are effectively finished after the failed Clinton campaign, just as the old-school GOP is finished following the election of its ugly duckling. What emerges from the left wing will most likely be more radical and extreme relative to its political label than Trump was to his. The movements all function under one shared umbrella, one unlikely to back down now that its worst nightmare is in charge for four years. Moving on from these facts, and recognizing that political correctness is a feature of the direction of left politics in general, I’ll comment on my first real experience with the anti-neoliberal left.

As a freshman in college, I took an introductory Multicultural and Gender Studies course (the sheer fact that cultural, ethnic and gender/sex studies are combined implies the sort of ideological commitments necessary to teach these classes). My professor, unlike many in the major, let us be led into her viewpoints rather than beginning sharply from her own and forcing us to abruptly commit or retaliate (as happens in Political Science classes). This approach is gentler and more clandestine, and it leads to a greater degree of brainwashing. A few weeks in, she asked, “What is race?” I answered promptly, “a social construction.” MCGS155 was, in a sense, the first class in which I became utterly submissive to my teacher, and I participated at every opportunity. When asked, “What is gender?” I vocally distinguished between the genitalia (sex) between our legs and the identity in our heads. Up to that point, the beliefs I had committed to are the same ones I hold now.

Over the semester, however, I was taught lessons that were more sinister, more nefarious, and at times wholly offensive to the reality of the world. When my professor explained that she needed a male professor to negotiate her wages at my university (because women are not taught how to negotiate or argue, while men are tacitly trained to be argumentative and authoritative), I thought it made perfect sense, and it does. Then I was asked, gradually, to believe that all women’s experiences were like this, all the time. Gender and racial monoliths began their glacial formation. Through the acceptance of small-scale experiences, a larger picture began to manifest in my mind: that of systematic discrimination and, eventually, oppression.

Prior to taking the multicultural and gender studies course, I rarely encountered the word “oppression,” especially applied to a contemporary setting. To me, oppression was what the victims of transatlantic slavery faced. Indeed, outside of academia and far-left politics, that’s what oppression is: forced servitude. When leftist vocalizations of “oppression” take to the social field, the primary apolitical connotation is slavery, and so slapping the label on our government or culture can only arouse the most sincere feelings of empathy and rage. In my class, “oppression” was used to describe the conditions under which any and all minorities live in the United States. Under such an authoritative word, I began to understand American society as modern-day slavery in all but name. Toni Airaksinen points out that women’s studies classes are built on the conjectures of “patriarchy, intersectional oppression, and social constructionism.” To note that “oppression” does not realistically describe any specific group’s position in American society would be to upset my professor, the major, and an entire national field of study.

The epitome and eventual product of my brainwashing was an extended argumentative essay in which I concluded that Gogol Bordello was, among other things, cultural appropriation, offensive to diasporic cultures, faux-ethnically inclusive, and, in some mystical sense, racist. I argued that Funkadesi (a South Asian-styled funk/hip-hop group liked by Obama) was the true gender and cultural warrior. As a teenager I used to enjoy Gogol Bordello as fun, raunchy music; within three months, however, I’d called them “insincere,” “promoting global fornication” with a “condescending attitude of hemispherical and cultural superiority.” My class, effectively, destroyed the fun in life.

Even as I wrote the anti-Bordello essay (calling Eugene Hütz a “homogenizer”), I felt that what I was arguing was somehow off. When I hung out with friends who enjoyed Gogol Bordello, my conscience nagged that I ought to confront its problematic elements and put an end to their uninformed participation in oppression; another part of me, more internal and sensible, told me that uninformed participation is a staple of human aesthetic enjoyment, and that launching into a leftist tirade would be not only off-kilter but immoral and misanthropic. After I passed the class, I learned to renounce the Anglo-Saxon hatred and reinterpret Gogol Bordello not as culturally offensive but as culturally celebratory, inclusive, and self-aware.

An element that dominated the course, and one I now consider essential to far-leftist politics, was its utter lack of appreciation for any actual social progress throughout history. This was accomplished one topic at a time. At the beginning of class, we discussed the image of America as a “melting pot”; this ideal was rejected in the 1980s as assimilative: the Western Caucasian template would dominate the pot as minority groups lost their identities (i.e., globalization). The great celebration of the Census Bureau – that we might all mix together our distinctions and emerge more wholesome – was decimated by my professor’s politics. Then we discussed multiculturalism: instead of the stew of the melting pot, American immigration and citizenship would come together as a mosaic or kaleidoscope, with our distinctions still celebrated even as we learned to function together. Multiculturalism, for the second third of my semester, seemed enlightened: different groups would no longer be processed into a Western canon. But this, too, was declared equally problematic. (Those of you outside of culture and gender studies who might think multiculturalism is still upheld as the ideal: guess again.) My teacher proposed that our society must enter something like a post-multicultural state. Multiculturalism was too tokenizing, too uninformed, too patronizing; somehow, the Caucasians had won again, and we had to move on to new philosophical horizons.

This tradition of dissatisfaction with formerly satisfying solutions runs across the board in modern leftist movements. Just lately, a (brilliantly un-self-aware) Guardian writer, Zoe Oja Tucker, wrote about college-aged men being severely punished for a sexist sheet of paper, all while desperately holding onto an ideology that says this sort of punishment is culturally nonexistent. The far left has been eating itself alive for a while, as when Canadian Black Lives Matter protesters shut down a Gay Pride parade. One might suspect that post-multiculturalism will be answered by a sort of apartheid, and indeed, that seems to be the case with new segregationist options offered for minorities. (The pre-Civil Rights arrangements are back, but the positions have switched.) Meanwhile, in the squabbling over ever-greater theoretical accuracy, legitimate gains are seen as neutral events, or as political façades for continued oppression. Thus the entrenched Marxist doctrine (which informs much of the left’s perception of politics nowadays) that society is composed of only two groups, the bourgeoisie and the proletariat, twists into the cultural Marxism of “oppressor” and “oppressed,” while simultaneously losing members of its second category to internal disputes over minute aspects of the ideology itself, enlarging the first, privilege-possessing class. Progress in women’s rights, gay rights, etc. is even seen by Marxists as a PR mask for the real tyranny – that of capitalism. So when a law is passed specifically to aid the working class, not even this can satisfy the theory of endless, eternal oppression. The same dissatisfaction shows in Marxists’ continued rejection of campaign voting: even though an increase in the third-party vote would alter American politics for the next campaign, Marxists across the board have rejected participation, dismissing the entirety of presidential elections as a corporate charade. (In essence, they never do anything for their own progress. Yet this dead philosophy hangs on.)

Leftist political scientists don’t care about legitimate social progress, because the great bureaucracy of professionalized philosophy requires tedious publishing year after year, and if theoretical perfection (or genuine satisfaction) were ever reached, the opportunity for tenure would be lost. Thus, utter shit is churned out: studies on online drinking photos promoting “regimes of gendered power,” dildos as tools of oppression, critical analyses of testicles, studies on how to convince young women they are systematically oppressed. Freud would probably have castrated himself before he saw his methodology used for such off-base and imbecilic purposes today.

Feminism fought and won victories: in its first wave, for voting rights; in its second, for sexual freedom and abortion rights. It is no longer fighting for equal-protection legislation. It is now fighting a culture war, and the only way to fight a culture is to seek to replace it with a new ideology – and there is no immediate reason to assume the new one will be better than the old. Third-wave feminism might best be described by a quip occasionally offered by its constituents: “if you’re not offended, you’re not paying attention.” (Or: “if you’re not finding oppression, look harder.”) Thus the quality of “uninformity,” i.e. ignorance, discussed earlier – so despised by leftists and attributed to any of their opponents – is reckoned as the price to pay for not being enraged all the time. We must be offended constantly or risk ignorance; this position, of course, fuels the dissatisfaction with actual social progress, disturbs any sense of civil mobility, and leads to a rejection of the enjoyment of almost anything.