In the Republic, Socrates introduced the “noble lie”: governmental officials may, on occasion, be justified in propagating lies to their constituents in order to advance a more just society. Dishonesty is one tool of the political class (or even pre-political — the planning class) to secure order. This maxim is at the center of the debate about transparency in government.
Then, in the 20th century, when academic Marxism was in its prime, the French Marxist philosopher Louis Althusser became concerned with the issue of social reproduction. How does a society survive from one generation to the next, with most of its mores, morals and economic model still in place? This question was of particular interest to the Orthodox Marxists: their conflict theory of history doesn’t illuminate how a society is held together, since competing groups are always struggling for power. Althusser came up with “Ideological State Apparatuses”: institutions, coercive or purely ideological, that reinforce societal beliefs across generations. This necessarily includes all the intelligence agencies, like the CIA and FBI, and state thugs, like the Gestapo and NKVD, but it also includes the family unit (authorized by a marriage contract), public education and the political party system. “ISAs” also include traditions in the private sector, since for Althusser, the state exists primarily to protect these interests.
It’s rarely easy to point to a country and say, “This is the dominant ideology.” However, and here the Marxists are right, it can be useful to observe the material trends of citizens: what sorts of interests people (of any class) save up money for, teach their children to admire, etc. In the United States, there is a conditional diversity of philosophies: many different strains abound, but most fit within the small notecard of acceptable opinion. Someone like Althusser might say there is a single philosophy in effect — liberal capitalism — getting reproduced across apparatuses; a political careerist might recognize antagonists across the board vying for their own particular interests. In any case, the theory of ISAs is one answer to conflict theory’s deficiencies.
There is no reason, at any time, to think that most of the ideas spreading through a given society are true. Plenty of people could point to a lesson taught in a fifth grade classroom and find something they disagree with, and not just because the lessons in elementary school are often simplified to the point of distortion. Although ideas often spread naturally, they can also be thrust upon a people, like agitprop or Uncle Sam, and their influence can be more or less deleterious.
Those outlooks thrust upon a people might take the form of a noble lie. I can give qualified support for noble lies, but not for the government’s use of them. (The idea that noble lies are a right of government implies some sort of unique power for government actors.) There are currently two social lies which carry a lot of weight in the States. The first one comes from the political right, and it says: anyone can work their way to financial security. Anyone can come from the bottom and make a name for themselves. Sentiment like this is typically derided as pulling oneself up by one’s bootstraps, and in the 21st century we find this narrative is losing force.
The second lie comes from the left, and it says: the system is rigged for x, y, z privileged classes, and it’s necessarily easier for members of these groups to succeed than it is for non-members. White people, specifically white men, all possess better opportunities in society than others. This theory, on the other hand, is increasingly popular, and continues to spawn vicious spinoffs.
Of the two, neither is true. That said, it’s clear which is the more “socially useful” lie. A lie which encourages more personal responsibility is clearly healthier than one which blames all of one’s ills on society and others. If you tell someone long enough that their position is out of their hands because the game is rigged, they will grow frustrated and hateful, and lose touch with their own creative power, opting to seek rent instead. Thus one lie promotes individualism, the other tribalism.
Althusser wrote before the good old-fashioned class struggle of Marxism died out, before the postmodernists splintered the left into undialectical identity politics. God knows what he would think of intersectionality, the ninth circle in progressivism’s Dante’s Inferno. These ideas are being spread regardless of what anyone does, are incorporated into “apparatuses” of some sort, and are both false. If we had to choose one lie to tell, though, it’s obvious to me which is preferable: the one which doesn’t imply collectivism in politics and tribalism in culture.
“In so far as their only recourse to that world is through what they see and do, we may want to say that after a revolution scientists are responding to a different world.”
Thomas Kuhn, The Structure of Scientific Revolutions, p. 111
I can remember arguing with my cousin right after Michael Brown was shot. “It’s still unclear what happened,” I said, “based solely on testimony” — at that point, we were still waiting on the federal autopsy report by the Department of Justice. He said that in the video, you can clearly see Brown, back to the officer and with his hands up, as he is shot up to eight times.
My cousin doesn’t like police. I’m more ambivalent, but I’ve studied criminal justice for a few years now, and I thought that if both of us watched this video (no such video actually existed), it was probably I who would have the more nuanced grasp of what happened. So I said: “Well, I will look up this video, try and get a less biased take and get back to you.” He replied, sarcastically, “You can’t watch it without bias. We all have biases.”
And that seems to be the sentiment of the times: bias encompasses the human experience; it subsumes all judgments and perceptions. Biases are so rampant, in fact, that no objective analysis is possible. These biases may be cognitive, like confirmation bias, emotional fallacies or the phenomenon of constructive memory; or inductive, like selectivity or ignoring base probability; or, as has been common to think, ingrained into experience itself.
The thing about biases is that they are open to psychological evaluation. There are precedents for eliminating them. For instance, one common explanation of racism is that familiarity breeds acceptance, and unfamiliarity breeds intolerance (as Reason points out, people further from fracking sites have more negative opinions on the practice than those living closer). So to curb racism (a sort of bias), children should interact with people outside their singular ethnic group. More clinical methodology seeks to transform automatic mental functions into controlled ones, thereby introducing reflective measures into perception and reducing bias. Apart from these, there is that ancient Greek practice of reasoning, wherein patterns and evidence are used to generate logical conclusions.
If it were true that human bias is all-encompassing, and essentially insurmountable, the whole concept of critical thinking would go out the window. Not only would we lose the critical-rationalist, Popperian mode of discovery, but also Socratic dialectic, as “higher truths” essentially disappear from the human lexicon.
The belief that biases are intrinsic to human judgment ignores psychological and philosophical methods to counter prejudice, because it posits that objectivity itself is impossible. This viewpoint has been associated with “postmodern” schools of philosophy, such as those Dr. Rosi commented on (e.g., those of Derrida, Lacan, Foucault, Butler), although it’s worth pointing out that the analytic tradition, with its origins in Frege, Russell and Moore, represents a far greater break from the previous, modern tradition of Descartes and Kant, and often reached conclusions similar to the Continentals’.
Although theorists of the “postmodern” clique produced diverse claims about knowledge, society, and politics, the most famous figures are almost always associated with or incorporated into the political left. To make a useful simplification of viewpoints: it would seem that progressives have generally accepted Butlerian non-essentialism about gender and Foucauldian terminology (discourse and institutions). Derrida’s poststructuralist critique noted dichotomies and also claimed that the philosophical search for Logos has been patriarchal, almost neoreactionary. (The month before Donald Trump’s victory, the word patriarchy hit an all-time high in Google searches.) It is not a far-right conspiracy that European philosophers with strange theories have influenced and sought to influence American society; it is patent in the new political language.
Some people think of the postmodernists as all social constructivists, holding the theory that many of the categories and identifications we use in the world are social constructs without a human-independent nature (e.g., not natural kinds). Disciplines like anthropology and sociology have long since dipped their toes in this view, and the broader academic community, too, now holds that things like gender and race are social constructs. But the ideas can and do go further: “facts” themselves are open to interpretation on this view; to even assert a “fact” is just to affirm power of some sort. This worldview subsequently degrades the status of science into an extended apparatus for confirmation bias, filling out the details of a committed ideology rather than providing us with new facts about the world. There can be no objectivity outside of a worldview.
Even though philosophy took a naturalistic turn with the philosopher W. V. O. Quine, seeing itself as integrating with and working alongside science, the criticisms of science as an establishment that emerged in the 1950s and 60s (and earlier) often disturbed its unique epistemic privilege in society: ideas that theory is underdetermined by evidence, that scientific progress is nonrational, that unconfirmed auxiliary hypotheses are required to conduct experiments and form theories, and that social norms play a large role in the process of justification all damaged the mythos of science as an exemplar of human rationality.
But once we have dismantled Science, what do we do next? Some critics have held up Nazi German eugenics and phrenology as examples of the damage that science can do to society (never mind that we now consider them pseudoscience). Yet Lysenkoism and the history of astronomy and cosmology indicate that suppressing scientific discovery can be deleterious too. Austrian physicist and philosopher Paul Feyerabend instead wanted a free society — one where science had power equal to older, more spiritual forms of knowledge. He thought the model of rational science exemplified by Sir Karl Popper was inapplicable to the real machinery of scientific discovery, and that the only methodological rule we could impose on science was: “anything goes.”
Feyerabend’s views are almost a caricature of postmodernism, although he denied the label “relativist,” opting instead for philosophical Dadaist. In his pluralism, there is no hierarchy of knowledge, and state power can even be introduced when necessary to break up scientific monopoly. Feyerabend, contra scientists like Richard Dawkins, thought that science was like an organized religion and therefore supported a separation of state and science alongside the separation of church and state. Here is a move forward for a society that has started distrusting the scientific method… but if this is what we should do post-science, it’s still unclear how to proceed. Questions remain for anyone who loathes the hegemony of science in the Western world.
For example, how does the investigation of crimes proceed without strict adherence to the latest scientific protocol? Presumably, Feyerabend didn’t want to privatize law enforcement, but science and the state are very intricately connected. In 2005, Congress authorized the National Academy of Sciences to form a committee and conduct a comprehensive study on contemporary forensic science to identify community needs, consulting laboratory executives, medical examiners, coroners, anthropologists, entomologists, odontologists, and various legal experts. Forensic science — scientific procedure applied to the field of law — exists for two practical goals: exoneration and prosecution. However, the Forensic Science Committee revealed that severe issues riddle forensics (e.g., bite mark analysis), and in their list of recommendations the top priority is establishing an independent federal entity to devise consistent standards and enforce regular practice.
For top scientists, this sort of centralized authority seems necessary to produce reliable work, and it entirely disagrees with Feyerabend’s emphasis on methodological pluralism. Barack Obama formed the National Commission on Forensic Science in 2013 to further investigate problems in the field, and only recently Attorney General Jeff Sessions said the Department of Justice would not renew the commission. It’s unclear now what forensic science will do to resolve its ongoing problems, but what is clear is that the American court system would fall apart without the possibility of appealing to scientific consensus (especially forensics), and that the only foreseeable way to solve the existing issues is through stricter methodology. (Just as with McDonald’s, standards are enforced so that the product is consistent wherever one orders.) More on this later.
So it doesn’t seem to be in the interest of things like due process to abandon science or completely separate it from state power. (It does, however, make sense to move forensic laboratories out from under direct administrative control, as the NAS report notes in Recommendation 4. This is, however, specifically to reduce bias.) In a culture where science is viewed as irrational, Eurocentric, ad hoc, and polluted with ideological motivations — or where Reason itself is seen as a particular hegemonic, imperial device to suppress different cultures — not only do we not know what to do, when we try to do things we lose elements of our civilization that everyone agrees are valuable.
Although Aristotle separated pathos, ethos and logos (adding that all informed each other), later philosophers like Feyerabend thought of reason as a sort of “practice,” with history and connotations like any other human activity, falling far short of sublime. One could no more justify reason outside of its European cosmology than the sacrificial rituals of the Aztecs outside of theirs. To communicate across paradigms, participants have to understand each other on a deep level, even becoming entirely new persons. When debates happen, they must happen on a principle of mutual respect and curiosity.
From this one can detect a bold argument for tolerance. Indeed, Feyerabend was heavily influenced by John Stuart Mill’s On Liberty. Maybe, in a world disillusioned with scientism and objective standards, the next cultural move is multilateral acceptance of and tolerance for each other’s ideas.
This has not been the result of postmodern revelations, though. The 2016 election featured the victory of one psychopath over another, from two camps utterly consumed with vitriol for each other. Between Bernie Sanders, Donald Trump and Hillary Clinton, Americans drifted toward radicalization as the only establishment candidate seemed to offer the same noxious, warmongering mess of the previous few decades of administration. Politics has only polarized further since the inauguration. The alt-right, a nearly perfect symbol of cultural intolerance, is regular news for mainstream media. Trump acolytes physically brawl with black bloc Antifa in the same city as the 1960s Free Speech Movement. The situation seems worst at universities. Analytic feminist philosophers asked for the retraction of a controversial paper, seemingly without reading it. Professors even get involved in student disputes, at Berkeley and more recently Evergreen. The names each side uses to attack the other (“fascist,” most prominently) — sometimes accurate, usually not — display a political divide whose groups increasingly refuse to argue their own side and prefer silencing their opposition.
There is not a tolerant left or tolerant right any longer, in the mainstream. We are witnessing only shades of authoritarianism, eager to destroy each other. And what is obvious is that the theories and tools of the postmodernists (post-structuralism, social constructivism, deconstruction, critical theory, relativism) are as useful for reactionary praxis as their usual role in left-wing circles. Says Casey Williams in the New York Times: “Trump’s playbook should be familiar to any student of critical theory and philosophy. It often feels like Trump has stolen our ideas and weaponized them.” The idea of the “post-truth” world originated in postmodern academia. It is the monster turning against Doctor Frankenstein.
Moral (cultural) relativism in particular promises only the rejection of our shared humanity. It paralyzes our judgment on female genital mutilation, flogging, stoning, human and animal sacrifice, honor killing, caste, the underground sex trade. The afterbirth of Protagoras, cruelly resurrected once again, does not promise trials at Nuremberg, where the Allied powers appealed to something above and beyond written law to exact judgment on mass murderers. It does not promise justice for the ethnic cleansers of Srebrenica, as the United Nations is helpless to impose a tribunal from outside Bosnia-Herzegovina. Today, this moral pessimism laughs at the phrase “humanitarian crisis,” and at Western efforts to change the material conditions of fleeing Iraqis, Afghans, Libyans, Syrians, Venezuelans, North Koreans…
In the absence of universal morality, and the introduction of subjective reality, the vacuum will be filled with something much more awful. And we should be afraid of this because tolerance has not emerged as a replacement. When Harry Potter first encounters Voldemort face-to-scalp, the Dark Lord tells the boy “There is no good and evil. There is only power… and those too weak to seek it.” With the breakdown of concrete moral categories, Feyerabend’s motto — anything goes — is perverted. Voldemort has been compared to Plato’s archetype of the tyrant from the Republic: “It will commit any foul murder, and there is no food it refuses to eat. In a word, it omits no act of folly or shamelessness” … “he is purged of self-discipline and is filled with self-imposed madness.”
Voldemort is the Platonic appetite in the same way he is the psychoanalytic id. Freud’s das Es is able to admit of contradictions, to violate Aristotle’s fundamental laws of logic. It is so base, and removed from the ordinary world of reason, that it follows its own rules we would find utterly abhorrent or impossible. But it is not difficult to imagine that the murder of evidence-based reasoning will result in Death Eater politics. The ego is our rational faculty, adapted to deal with reality; with the death of reason, all that exists is vicious criticism and unfettered libertinism.
Plato predicts Voldemort with the image of the tyrant, and also with one of his primary interlocutors, Thrasymachus, when the sophist opens with “justice is nothing other than the advantage of the stronger.” The one thing Voldemort admires about The Boy Who Lived is his bravery, the trait they share. This trait is missing in his Death Eaters. In the fourth novel the Dark Lord is cruel to his reunited followers for abandoning him and losing faith; their cowardice reveals the fundamental logic of his power: his disciples are not true devotees, but opportunists, weak on their own merit and drawn like moths to every Avada Kedavra. Likewise, students flock to postmodern relativism to justify their own beliefs when the evidence is an obstacle.
Relativism gives us moral paralysis, allowing in darkness. Another possible move after relativism is supremacy. One look at Richard Spencer’s Twitter demonstrates the incorrigible tenet of the alt-right: the alleged incompatibility of cultures, ethnicities, races: that different groups of humans simply cannot get along together. The Final Solution is not about extermination anymore but segregated nationalism. Spencer’s audience is almost entirely men who loathe the current state of things, who share far-reaching conspiracy theories, and despise globalism.
The left, too, creates conspiracies, imagining a bourgeois corporate conglomerate that enlists economists and brainwashes through history books to normalize capitalism; for this reason they despise globalism as well, saying it impoverishes other countries or destroys cultural autonomy. For the alt-right, it is the Jews, and George Soros, who control us; for the burgeoning socialist left, it is the elites, the one-percent. Our minds are not free; fortunately, they will happily supply Übermenschen, in the form of statesmen or critical theorists, to save us from our degeneracy or our false consciousness.
Without a commitment to reasoned debate, tribalism has deepened polarization and eroded humility. Each side also accepts science selectively, if it does not question science’s very justification. The privileged status that the “scientific method” maintains in polite society is denied when convenient: whether it is climate science, evolutionary psychology, sociology, genetics, biology, anatomy or, especially, economics, one side or the other rejects it outright, without studying the material enough to immerse themselves in what could be promising knowledge (as Feyerabend urged, and as the breakdown of rationality could have encouraged). And ultimately, equal protection, one tenet of individualist thought that allows for multiplicity, is entirely rejected by both: we should be treated differently as humans, often because of the color of our skin.
Relativism and carelessness about standards and communication have given us supremacy and tribalism. They have divided rather than united. Voldemort’s chaotic violence is one possible outcome of rejecting reason as an institution, and it beckons to either political camp. Are there any examples in Harry Potter of the alternative, Feyerabendian tolerance? Not quite. However, Hermione Granger serves as the Dark Lord’s foil, and gives us a model of reason that is not as archaic as the enemies of rationality would like to suggest. In Against Method (1975), Feyerabend compares different ways rationality has been interpreted alongside practice: in an idealist way, in which reason “completely governs” research, or a naturalist way, in which reason is “completely determined by” research. Taking elements of each, he arrives at an intersection in which one can change the other, both “parts of a single dialectical process.”
“The suggestion can be illustrated by the relation between a map and the adventures of a person using it or by the relation between an artisan and his instruments. Originally maps were constructed as images of and guides to reality and so, presumably, was reason. But maps, like reason, contain idealizations (Hecataeus of Miletus, for example, imposed the general outlines of Anaximander’s cosmology on his account of the occupied world and represented continents by geometrical figures). The wanderer uses the map to find his way but he also corrects it as he proceeds, removing old idealizations and introducing new ones. Using the map no matter what will soon get him into trouble. But it is better to have maps than to proceed without them. In the same way, the example says, reason without the guidance of a practice will lead us astray while a practice is vastly improved by the addition of reason.” p. 233
Christopher Hitchens pointed out that Granger sounds like Bertrand Russell at times, as in this quote about the Resurrection Stone: “You can claim that anything is real if the only basis for believing in it is that nobody has proven it doesn’t exist.” Granger is often the embodiment of anemic analytic philosophy, the institution of order, a disciple of the Ministry of Magic. However, though initially law-abiding, she quickly learns with Potter and Weasley the pleasures of rule-breaking. From the first book onward, she is constantly at odds with the de facto norms of the school, becoming more rebellious as time goes on. It is her levelheaded foundation, combined with her ability to transgress rules, that gives her an astute semi-deontological, semi-utilitarian calculus capable of saving the lives of her friends from the dark arts, and helping to defeat the tyranny of Voldemort foretold by Socrates.
Granger presents a model of reason like Feyerabend’s map analogy. Although pure reason gives us an outline of how to think about things, it is not a static or complete blueprint, and it must be fleshed out with experience, risk-taking, discovery, failure, loss, trauma, pleasure, offense, criticism, and occasional transgressions past the foreseeable limits. Adding these addenda to our heuristics means that we explore a more diverse account of thinking about things and moving around in the world.
When reason is increasingly seen as patriarchal, Western, and imperialist, the only thing consistently offered as a replacement is something like lived experience. Some form of this idea is at least a century old, going back to Husserl, and still modest by reason’s Greco-Roman standards. Yet lived experience has always been pivotal to reason; we only need adjust our popular model. And we can see that we need not reject one or the other entirely. Another critique of reason says it is foolhardy, limiting, antiquated; this is a perversion of its abilities, and serves to justify the first criticism. We can see that there is room within reason for other pursuits and virtues, picked up along the way.
The emphasis on lived experience, which predominantly comes from the political left, is also antithetical to the cause of “social progress.” Those sympathetic to social theory, particularly the cultural leakage of the strong programme, are constantly torn between claiming (a) that science is irrational, and can thus be countered by lived experience (or whatnot), or (b) that science may be rational but reason itself is a tool of patriarchy and white supremacy and cannot be universal. (If you haven’t seen either of these claims very frequently, and think them a strawman, you have not been following university protests and editorials. Or radical Twitter: ex., ex., ex., ex.) Of course, as in Freud, this is an example of kettle-logic: the signal of a very strong resistance. We see, though, that we need not accept nor deny these claims and lose anything. Reason need not be stagnant nor all-pervasive, and indeed we’ve been critiquing its limits since 1781.
Outright denying the process of science — whether the model is conjectures and refutations or something less stale — ignores that there is no single uniform body of science. Denial also dismisses the most powerful tool for making difficult empirical decisions. Michael Brown’s death was instantly a political affair, with implications for broader social life. The event has completely changed the face of American social issues. The first autopsy report, from St. Louis County, indicated that Brown was shot at close range in the hand, during an encounter with Officer Darren Wilson. The second independent report commissioned by the family concluded the first shot had not in fact been at close range. After the disagreement with my cousin, the Department of Justice released the final investigation report, and determined that material in the hand wound was consistent with gun residue from an up-close encounter.
Prior to the report, the best evidence available as to what happened in Missouri on August 9, 2014, was the ground footage after the shooting and testimonies from the officer and Ferguson residents at the scene. There are two ways to approach the incident: reason or lived experience. The latter route will lead to ambiguities. Brown’s friend Dorian Johnson and another witness reported that Officer Wilson fired his weapon first at range, under no threat, then pursued Brown out of his vehicle, until Brown turned with his hands in the air to surrender. However, in the St. Louis grand jury half a dozen (African-American) eyewitnesses corroborated Wilson’s account: that Brown did not have his hands raised and was moving toward Wilson. In which direction does “lived experience” tell us to go, then? A new moral maxim — the duty to believe people — will lead to no non-arbitrary conclusion. (And a duty to “always believe x,” where x is a closed group, e.g. victims, will put the cart before the horse.) It appears that, in a case like this, treating evidence as objective is the only solution.
Introducing ad hoc hypotheses, e.g., the Justice Department and the county examiner are corrupt, shifts the approach into one that uses induction, and leaves behind lived experience (and also ignores how forensic anthropology is actually done). This is the introduction of, indeed, scientific standards. (By looking at incentives for lying it might also employ findings from public choice theory, psychology, behavioral economics, etc.) So the personal experience method creates unresolvable ambiguities, and presumably will eventually grant some allowance to scientific procedure.
If we don’t posit a baseline-rationality — Hermione Granger pre-Hogwarts — our ability to critique things at all disappears. Utterly rejecting science and reason, denying objective analysis in the presumption of overriding biases, breaking down naïve universalism into naïve relativism — these are paths to paralysis on their own. More than that, they are hysterical symptoms, because they often create problems out of thin air. Recently, a philosopher and mathematician submitted a hoax paper, Sokal-style, to a peer-reviewed gender studies journal in an attempt to demonstrate what they see as a problem “at the heart of academic fields like gender studies.” The idea was to write a nonsensical, postmodernish essay, and if the journal accepted it, that would indicate the field is intellectually bankrupt. Andrew Smart at Psychology Today instead wrote of the prank: “In many ways this academic hoax validates many of postmodernism’s main arguments.” And although Smart makes some informed points about problems in scientific rigor as a whole, he doesn’t hint at what the validation of postmodernism entails: should we abandon standards in journalism and scholarly integrity? Is the whole process of peer-review functionally untenable? Should we start embracing papers written without any intention of making sense, to look at knowledge concealed below the surface of jargon? The paper, “The conceptual penis,” doesn’t necessarily condemn the whole of gender studies; but, against Smart’s reasoning, we do in fact know that counterintuitive or highly heterodox theory is considered perfectly average.
There were other attacks on the hoax, from Slate, Salon and elsewhere. The criticisms, often valid for the particular essay, typically didn’t move the conversation far enough. There is much more to this discussion. A 2006 paper from the International Journal of Evidence Based Healthcare, “Deconstructing the evidence-based discourse in health sciences,” called the use of scientific evidence “fascist.” In the abstract the authors state their allegiance to the work of Deleuze and Guattari. Real Peer Review, a Twitter account that collects abstracts from scholarly articles, regularly features essays from departments of women and gender studies, including a recent one from a Ph.D. student wherein the author identifies as a hippopotamus. Sure, the recent hoax paper doesn’t really say anything, but it intensifies this much-needed debate. It brings out these two currents — reason and the rejection of reason — and demands a solution. And we know that lived experience is often going to be inconclusive.
Opening up lines of communication is a solution. One valid complaint is that gender studies seems too insulated, in a way in which chemistry, for instance, is not. Critiquing a whole field does ask us to genuinely immerse ourselves first, and this is a step toward tolerance: it is a step past the death of reason and the denial of science. It is a step that requires opening the bubble.
The modern infatuation with human biases, as well as Feyerabend’s epistemological anarchism, upsets our faith in prevailing theories, and in the idea that our policies and opinions should be guided by the latest discoveries from an anonymous laboratory. Putting politics first and assuming subjectivity is all-encompassing, we move past objective measures to compare belief systems and theories. However, isn’t the whole operation of modern science designed to work within our means? Kant’s critical system set limits on human rationality, and most science is aligned with an acceptance of fallibility. As Harvard cognitive scientist Steven Pinker says, “to understand the world, we must cultivate work-arounds for our cognitive limitations, including skepticism, open debate, formal precision, and empirical tests, often requiring feats of ingenuity.”
Pinker goes so far as to advocate for scientism. Others need not; but we must understand an academic field before utterly rejecting it. We must think we can understand each other, and live with each other. We must think there is a baseline framework that allows permanent cross-cultural correspondence — a shared form of life which means a Ukrainian can interpret a Russian, and a Cuban an American. The rejection of Homo sapiens commensurability, championed by people like Richard Spencer and those in identity politics, is a path to segregation and supremacy. We must reject Gorgian nihilism about communication, and the Presocratic relativism that camps our moral judgments in inert subjectivity. From one Weltanschauung to the next, our common humanity — which endures across class, ethnicity, sex, and gender — allows open debate across paradigms.
In the face of relativism, there is room for a nuanced middle ground between Pinker’s scientism and the rising anti-science, anti-reason philosophy; Paul Feyerabend has sketched out a basic blueprint. Rather than condemning reason as a Hellenic germ of Western cultural supremacy, we need only adjust the theoretical model to incorporate the “new America of knowledge” into our critical faculty. It is the raison d’être of philosophers to present complicated things in a more digestible form; to “put everything before us,” as Wittgenstein says. Hopefully, people can reach their own conclusions, and embrace the communal human spirit as they do.
However, this may not be so convincing. It might be true that we have a competition of cosmologies: one that believes in reason and objectivity, and one that thinks reason is callow and all things are subjective. These two perspectives may well be incommensurable. If I try to defend reason, I invariably must appeal to reasons, and thus argue circularly. If I try to claim “everything is subjective,” I make a universal statement and simultaneously contradict myself. Between begging the question and contradicting oneself, there is not much indication of where to go. Perhaps we just have to look at history, note the results when either course has been applied, and take that as a rhetorical indication of which path to choose.
In part, postmodernism has its origin in the existentialism of the 19th and 20th centuries. The Danish theologian and philosopher Søren Kierkegaard (1813-1855) is generally regarded as the first existentialist. Kierkegaard’s life was profoundly marked by the breaking of an engagement and by his discomfort with the formalities of the (Lutheran) Church of Denmark. In his understanding (shared by others of the time, within a movement known as Pietism, influential mainly in Germany but with a strong influence on the English Methodism of John Wesley), Lutheran theology had become overly intellectual, marked by a “Protestant scholasticism.”
Before this period, Scholasticism was a branch of Catholic theology whose main representative was Thomas Aquinas (1225-1274). Thomas Aquinas argued against the theory of double truth, defended by Muslim theologians of his time. According to this theory, something could be true in religion and not be true in the empirical sciences. Thomas Aquinas defended a classic concept of truth, used centuries earlier by Augustine of Hippo (354-430), to affirm that truth could not be so divided. Martin Luther (1483-1546) made many criticisms of Thomas Aquinas, but ironically the methodological precision of the medieval theologian became quite influential in the Lutheran theology of the 17th and 18th centuries. In Germany and the Nordic countries (Denmark, Finland, Iceland, Norway and Sweden), Lutheranism became the state religion after the Protestant Reformation of the 16th century, and being the pastor of a church in a major city became a respected and coveted public office.
It is against this intellectualism and this ease of being Christian that Kierkegaard revolts. In 19th-century Denmark, all were born within the Lutheran Church, and being a Christian was the socially accepted position. Kierkegaard complained that in centuries past being a Christian was not easy, and could even involve life-threatening events. In the face of this he argued for a Christianity that involved an individual decision against all evidence. In one of his most famous texts he expounds the story in which the patriarch Abraham is asked by God to kill Isaac, his only son. Kierkegaard imagines a scenario in which Abraham does not understand the reasons of God, but ends up obeying blindly. In Kierkegaard’s words, Abraham makes “a leap of faith.”
This concept of blind faith, going against all the evidence, is central to Kierkegaard’s thinking, and became very influential in twentieth-century Christianity and even in other religions established in the West. Beyond the strictly religious aspect, Kierkegaard marked Western thought with the notion that some things might be true in some areas of knowledge but not in others. Moreover, his influence can be seen in the notion that the individual must make decisions about how he intends to exist, regardless of the rules of society or of all empirical evidence.
Another important existentialist philosopher of the 19th century was the German Friedrich Nietzsche (1844-1900). Like Kierkegaard, Nietzsche was also raised within Lutheranism but, unlike Kierkegaard, he became an atheist in his adult life. Like Kierkegaard, Nietzsche also became a critic of the social conventions of his time, especially the religious conventions. Nietzsche is particularly famous for the phrase “God is dead.” This phrase appears in one of his most famous texts, in which the Christian God attends a meeting with the other gods and affirms that he is the only god. In the face of this statement the other gods die laughing. The Christian God effectively becomes the only god. But later, the Christian God dies of pity at seeing his followers on earth becoming people without courage.
Nietzsche was particularly critical of how Christianity in his day valued features which he considered weak, calling them virtues, and condemned features he considered strong, calling them vices. And not just Christianity: Nietzsche also criticized the classical philosophy of Socrates, Plato, and Aristotle, placing himself alongside the sophists. The German philosopher affirmed that Socrates valued behaviors like kindness, humility, and generosity simply because he was ugly. More specifically, Nietzsche questioned why classical philosophers defended Apollo, considered the god of wisdom, and criticized Dionysus, considered the god of debauchery. In Greco-Roman mythology Dionysus (or Bacchus, as he was known by the Romans) was the god of festivals, wine, and madness, symbolizing everything that is chaotic, dangerous, and unexpected. Thus, Nietzsche questioned the apparent arbitrariness of the defense of Apollo’s rationality and order against the irrationality and unpredictability of Dionysus.
Nietzsche’s philosophy values courage and voluntarism, the urge to go against “herd behavior” and become a “superman,” that is, a person who goes against the dictates of society to create his own rules. Although he went in a different religious direction from Kierkegaard, Nietzsche agreed with the Danish theologian on the necessity for the individual to go against convention and reason in order to dictate the rules of his own existence.
In the mid-20th century existentialism became an influential philosophical current, represented by people like Jean-Paul Sartre (1905-1980) and Albert Camus (1913-1960). Like their predecessors of the 19th century, these existentialists criticized the apparent absurdity of life and valued decision-making by the individual against rational and social dictates.
Ho hum. Jacques wants his government to do three things in the name of fighting Muslim terrorism (not to be confused with other, more numerous kinds of terrorism): 1) allow for an armed, perpetually-on-alert military to be active on US soil, 2) allow for a surveillance state that can do as it pleases in regards to Muslims only, and 3) initiate ideological quotas for Muslim immigrants.
The entire ‘comments’ thread is well worth reading, too. Dr Amburgey, who came from the same doctoral program as Jacques, brings the quantitative fire; Dr Khawaja, the qualitative. Jacques has responded to each of them.
The absurdity of Delacroix’s argument speaks for itself. I will come back to it shortly, but first I want to address a couple of his points that are simply made in bad faith. Observe:
(Yes, Mohammed did behead every man of a vanquished enemy tribe on the battlefield. Incidentally, they were Jews. The Prophet then “married ” their wives, he raped them, in others words. Bad example? Talk about this genuine part of Muslim tradition?)
Murdering and raping Jews is a “Muslim tradition”? I am sure this is news to Uighurs in China and the Javanese of Indonesia. I think there is a good case to be made for a present-day Arab cultural chauvinism that rests in part on what could be called “Muslim tradition,” but this is not a nuance that Jacques – the retired college professor – cares to address. I wonder why. If we’re going to go back to the 7th century to find cultural defects, can anybody think of something nasty that was going on in what is now France at the time? In what is now the US? What an odd historical anecdote to include in an argument.
Here, too, is another whopper:
One article of faith among literalist Muslims is that government must come from God. That’s why the Supreme Leader of the Shiite Islamic Republic is explicitly a cleric, couldn’t be an elected civilian or a general. This belief also explains the search for a Caliphate among Sunni jihadists, a polity where administrative and religious powers are one and the same.
What is a “literalist Muslim”? Never mind. The government of Iran, its structure, is based on Plato’s Republic, not the Qur’an. The “Supreme Leader” Jacques identifies is based on the notion of a philosopher-king, not a Shiite cleric. This was done to protect the new dictatorship from its many enemies, including those loyal to the old dictatorship (the one supported by the United States; the one that Washington installed after helping to remove a democratically-elected Leftist government during the Cold War). The rhetoric of the Iranian dictatorship is explicitly religious, but in reality it’s just plain, old-fashioned despotism.
In a similar vein, “Sunni jihadists” (to use Jacques’ term) do not search for a Caliphate because of a belief that government should come from God, but instead look to a mythical Caliphate that they believe existed from the 7th to early 20th centuries as inspiration for creating a society that cannot be pushed around by murderous Western governments. Pretending that Arab Sunnis want to create a Caliphate in order to strengthen the link between government and God can only be described as “dishonest” when it comes from the mouth of a sociologist with a doctorate from Stanford.
At best, it could be argued that Jacques is simply making these types of points because they are pervasive throughout American society, and thus we – as libertarians of all stripes – have our work cut out for us. Now that I think about it, Jacques’ argument is so silly that it has to be an exercise in critical thinking. Nobody of his stature could say something so stupid, right?
Those are just two examples of, uh, the misrepresentation of reality by Jacques. There are many more, and I don’t think he got those myths from an academic journal. He got them from Fox News. That’s not good. That means libertarians are not taking advantage of their right to free speech, like conservatives and Leftists do. Why aren’t you blogging more often?
I’d like to turn back to his policy proposals. Here they are again:
- an armed, perpetually-on-alert military on US soil,
- a surveillance state that can do as it pleases in regards to Muslims only, and
- ideological quotas for Muslim immigrants.
The first two proposals look like they were copied directly from the playbook of the Third Reich (I hope you’ll reprimand me in the ‘comments’ section if you think I am being overly dramatic, or strawmanning Jacques’ argument). Just replace “US” with “Germany” and “Muslims” with “Jews” and voila, you have an answer for your Muslim (“Jewish”) problem. (RE Policy #3: National socialists, of course, don’t like anybody immigrating to their territories, whereas Jacques, in his infinite kindness and wisdom, seeks only to allow those who think like him into his territory.)
Again, Jacques’ argument is silly. It is both vulgar and unintelligent. It is misinformed. And yet I have to ask: Who is winning the PR battle here, conservatives on the one side or left-liberals and libertarians on the other?
Everyone carries a part of society on his shoulders; no one is relieved of his share of responsibility by others. And no one can find a safe way out for himself if society is sweeping toward destruction. Therefore, everyone, in his own interests, must thrust himself vigorously into the intellectual battle. None can stand aside with unconcern; the interest of everyone hangs on the result. Whether he chooses or not, every man is drawn into the great historical struggle, the decisive battle into which our epoch has plunged us.
That’s from Ludwig von Mises, the libertarian Austrian (and Jewish) economist who had to flee his homeland as the Third Reich took power. Speak your mind!
Marie Le Jars de Gournay (1565-1646) was a minor aristocrat from Sancerre in central France who became a leading scholar and writer of her time, and an important advocate of women’s liberty: through a scholarly career pursued against the dismissive attitude of powerful men of the time, and through her writing in favour of equality between men and women. She was a friend of Michel de Montaigne, one of the great historical advocates of liberty, if in a rather enigmatic manner, and he even treated her as an adoptive daughter. After Montaigne’s death, she lived on the Montaigne estate as a guest of the family while preparing the third edition of Montaigne’s Essays, itself a contribution to the history of thought and of thinking about liberty.
Gournay’s work in the transmission of Montaigne’s thought is, though, just one episode in a life of writing covering translations of the classics, literary compositions, and essays. Two essays in particular mark important moments in the case for liberty to apply equally between the two sexes: The Ladies’ Complaint and Equality of Men and Women. In these brief but rich texts, Gournay argues that there can be no liberty where goods are denied; since women have been deprived of the good of equal esteem, they are denied liberty.
She points to the frequency and intensity of the denial of equal esteem to women, and contests it through examples in which women have been esteemed, or in which we can see that women have performed great deeds on a level with great men. The argument is very much that of a Renaissance Humanist, that is, someone educated in the languages, history, and literature of antiquity as great expressions of the human spirit, with the assumption that these are the greatest such expressions. Greatness in literature, thought, and statecraft in modern languages, modern thought, and modern states is possible where it continues from the classical tradition. Since the emphasis is on pagan classical antiquity, the Humanists to some degree placed humanity above Christian theological tradition, though some Christians were also Humanists, and secular Humanist achievements to some degree interacted with scholarship on the Hebrew and Greek languages of the Bible, along with the Greek and Latin used by church thinkers.
Gournay’s concerns are largely secular, but she does deal with the place of women in the Bible. Regarding the Hebrew Bible (Old Testament), she points out that if the Queen of Sheba (often thought to refer to an ancient queen of Yemen, or possibly Sudan) visited King Solomon because she knew of his great wisdom, then she too must have had an interest in wisdom, and must have had some high level of scholarship, learning, and intellectual work herself.
With regard to the New Testament, she comments on St Paul’s injunction in his Epistles that women be silent in church and not take the role of priest. Gournay argues that Paul was not writing out of contempt for women, but out of fear that men would be distracted and tempted by women speaking out in church services, whether as part of the congregation or as priests. The limitation on the role of women is not therefore based on beliefs about the supposed inferiority of women, but on the control of male desire.
On the role of women in the Bible, Gournay argues that in general we should not argue that it supports an inferior role for women, given that God created both men and women in the beginning, and given that men are commanded to leave their parents in order to find a wife. The connection between man and woman, and the idea that a man’s life is completed by association with a woman, is the main message of Christian scripture for Gournay.
Looking at the more secular aspects of Greek and Roman antiquity, Gournay deals with philosophical and with historical concerns. On the philosophical side she notes the importance that Plato gives to the priestess Diotima (unknown outside Plato’s writings) in his dialogue The Symposium, which appears to recount conversations about love in a dinner and drinking party in Athens attended by some of the leading people of the time.
Plato shows Socrates presenting the views of Diotima as the correct ones on love, and Socrates, the teacher of Plato, always appears in Plato’s dialogues as a representative of truth. So Gournay points out, it must be conceded that Plato claims that his ideas, and those of Socrates, are in some degree dependent on the thought of women of their time. In that case, Aristotle made himself absurd when he claimed that women were defective and inferior, since he was the student of Plato and therefore was in some way formed by ideas that Plato said came from Diotima.
Plato’s student Aristotle may have claimed women were inferior by nature to men, but Antisthenes, a follower of Socrates, regarded women and men as equal in virtue. Gournay also refers to the tradition according to which Aspasia, female companion of the Athenian democratic leader Pericles (admired by Plato and Aristotle though they did not share his democratic principles), was a scholar and thinker of the time. There is a lack of contemporary sources confirming this view, but this applies to much about the antique world, so Gournay’s suggestions about Aspasia are just as strongly founded as many claims about antiquity, and the investigation of tradition is itself an important part of any kind of intellectual history.
Moving on to Roman historiography, Gournay points out the role taken by women in the tribes of Germany and Gaul, according to Tacitus. Women served as judges of disputes and as battlefield participants, inciting male warriors to fight fiercely. So she can point to a revered classic source which suggests that women had roles in ancient France and Germany denied to them in those countries in early modern times. In general, as she points out, the ancients often referred to a tribe of female warriors, known as Amazons, which may have some historical origin in Scythian tribes from north of the Black Sea.
Gournay uses her formidable Humanist learning to demonstrate the ways in which equality between men and women had been recognised in the ancient past, on some occasions in some places at least. Showing that women have been recognised as equal to men in some contexts is evidence that the lower status of women in many societies is a result of socially embedded prejudices rather than any difference in abilities. As Gournay notes, rectifying denial of rights to women is part of the basis for real enduring liberty.
Apparently some people have enjoyed the posts on ‘Another Liberty Canon’, so I will keep going on that tack, but with a revision to the heading, as I’ll be covering some thinkers already accepted into the liberty canon, or at least into some of the various canons. I’ll continue to discuss what I think should be brought into the canon, and push the boundaries a bit on those already generally accepted into the canon. I’ll be giving coverage to major figures, with regard to their work as a whole, but at some point I’ll start doing some relatively detailed readings of individual classic works.
I’ll start at the beginning, more or less, with Aristotle. I’m sure there are texts and thinkers within the Greek tradition, and certainly in the Near East, southern and eastern Asia, and so on, worthy of attention, but for substantial books clearly devoted to the nature of politics, and which have a focus on some idea of liberty, Aristotle seems as good a place as any to start.
There is maybe a case for starting with Aristotle’s teacher Plato, or even with Plato’s teacher Socrates. I think Plato should be rescued from the persistent image, never popular with Plato scholars, of a forerunner of twentieth-century totalitarianism. Just to start off the counter-arguments: Plato’s arguments refer to a reinforcement, albeit radical and selective, of existing customs rather than the imposition of a new state-imposed ideology, and they certainly do not suggest that arbitrary state power should rise above law.
However, on the liberty side, Plato’s teacher Socrates was the promoter, in his own life style, of a kind of individualist strength and critical spirit, and he fell foul of public hysteria. We know very little about Socrates apart from the ways Plato represents him, but the evidence suggests that Socrates’ particular critical individualism was concerned more with a kind of absolutism about correct customs, laws and philosophical claims than with what we would now recognise as a critical individualistic attitude.
It looks like Socrates was an advocate of the laws and constitutions of Greek states, like Sparta, that were less respectful of individuality, liberty and innovation than Athens. Though Aristotle does not look like the ideal advocate of liberty by our standards, he was critical of Plato (often referring to him through Socrates, though it looks like he was reacting to Plato’s texts rather than to any acquaintance with Socratic views different from those mentioned by Plato) for subordinating the individual to the state and abandoning private property. He was presumably referring to the Republic, which does seem to suggest that for Plato, ideally, the ruling class of philosopher-guardians should not own property, and that the lower classes, composed of all those who accumulate money through physical effort, a special craft, or trade, should be completely guided by those guardians.
It is not clear that Plato ever meant the imaginary ideal state of the Republic to be implemented, but it is clear that it reflects the preference Plato had for what he sees as the changeless pious hierarchies and laws of the Greek states of Crete and Sparta, and the already ancient kingdom of Egypt, in which power goes to those who at least superficially have detached themselves from the world of material gain in some military, political, or religious devotion to some apparently higher common good.
Plato and (maybe) Socrates had some difficulties in accepting the benefits of the liberties and democracy associated with fifth and fourth century BCE Athens that fostered commercial life, great art, great literature, and great philosophy. I will discuss the explanation and promotion of the values of Athens in a future post on the most distinguished leader of democratic Athens, Pericles, so I will not say more about it here.
Aristotle (384-322 BCE) came from outside Athens; he was born in monarchist Macedon, which lacked the republican institutions of participatory government found in the city states of Greece. Aristotle’s family was linked with the monarchy which turned Macedon into the hegemon of Greece, destroying the autonomy of Athens and the other republics. Aristotle even spent time as the tutor of Alexander the Great, who turned the Macedonian-Greek monarchical state into an empire stretching to India and Libya.
Aristotle was, however, not the advocate of such empires; he had already studied with Plato in Athens, where he acquired a preference for the self-governing, participatory city-state model of politics. His links with the Macedonian state sometimes made it difficult to spend time in Athens, where many resented the domination from the north, so he spent time in Anatolia (apparently marrying the daughter of a west Anatolian king), on the Aegean islands, and on the island of Euboea off Athens, where he died.
Despite these difficulties, Aristotle was so much in favour of the values of republican Athens that he even endorsed the idea that foreigners, or those born of one foreign parent could not be citizens, in case of a dilution of the solidarity and friendship between citizens. This issue brings us onto the ways in which Aristotle does not appeal to the best modern ideas of liberty. He was attached to the idea of a self-enclosed citizen body, along with slavery, the secondary status of women, the inferior nature of non-Greeks, restrictions on commerce, and the inferiority of those who labour for a living or create new wealth.
Nevertheless, given the times he lived in, his attitudes were no worse than you would expect, and often better. Despite his disdain for non-Greeks, he recognised that the north African city of Carthage had institutions of political freedom worth examining. His teacher Plato was perhaps better on one issue, the education of women, which appeared to hold no interest for Aristotle.
Still, unlike Plato, he did not imagine a ‘perfect’ city state where everything he found distasteful had been abolished, and he did not dream of excluding free-born males, at least, from the government of their own community. Aristotle disdained labourers as people close to slavery in their dependence on unskilled work to survive, but assumed that such people would be part of a citizens’ assembly in any state where there was freedom.
His ideal was the law-following virtuous king, and then a law-following virtuous aristocracy (that is, those who inherited wealth), but even where the government was dominated by a king or an aristocracy, he thought the people as a whole would play some part in the system, and that state power would still rest on the wishes of the majority.
All Greeks deserved to live with freedom, which for Aristotle meant a state where laws (which he thought of as mainly customary, reflecting the realities of ancient Greece) restrained rulers, and where rulers had the welfare of all free members of the community as the object of government. In this way rulers developed friendship with the ruled, an aspect of virtue, which for Aristotle is the same as the happy life, and of justice.
Friendship is justice according to Aristotle in its more concrete aspects, and ideally would replace the more formal parts of justice. Nevertheless Aristotle did discuss justice in its more formal aspects with regard to recompense for harms and distribution of both political power and wealth.
Like just about every writer in the ancient world, Aristotle found the pursuit of unlimited wealth, or just of wealth beyond the minimum needed to sustain aristocratic status, discomforting, and that applies even to writers who were very rich. Given that widespread assumption, Aristotle made as much allowance for exchange and trade as possible, and recognised the benefits of moving from a life of mere survival in pre-city societies to the material development possible in a larger community where trade was possible under common rules of justice.
As mentioned, Aristotle preferred aristocratic or monarchical government, but, as also mentioned, he assumed that any government of free individuals would include some form of broad citizen participation. We should therefore be careful about interpreting his criticisms of democracy, which have little to do with modern representative democracy, but are directed at states where he thought citizens’ assemblies had become so strong, and the very temporary opinions of the majority so powerful, that the rule of law had broken down. He still found this preferable to rule by one person or a group lacking in virtue, which he called tyranny and oligarchy.
He suggested that the most durable form of government for free people was something he just called a ‘state’ (politeia), so indicating its dominant normality, where the people between the rich and the poor dominated political office, and the democratic element was very strong, though with some place for aristocratic influence. It is a way of thinking about as close as possible to modern ideas of the division or separation of powers in a representative political system, given the historical differences, most obviously the assumption that citizens’ assemblies in very small cities, rather than elections for national assemblies, form the central part of political participation.
Relevant texts by Aristotle
There is no clear distinction between politics and ethics in Aristotle, so his major text in each area should be studied, that is the Politics and the Nicomachean Ethics. Other relevant texts include the Poetics (which discussed the role of kings in tragedy), the Constitution of Athens, the Eudemian Ethics, and the Rhetoric (the art of speaking was central to political life in the ancient world). Aristotle of course wrote numerous other books on various aspects of philosophy and science.
There is a debate afoot now about whether one ever owns the likes of a novel, poem, computer game, song, arrangement or similar “intellectual” items. Some argue, to quote the skeptic, Professor Tom Bell of Chapman University’s School of Law, that “Copyrights and patents function as a federal welfare program of sorts for creators,” while others, such as James V. DeLong of the Competitive Enterprise Institute, hold that “It is difficult to see why intellectual property should be regarded as fundamentally different from physical property.” I want to suggest a way to come to terms with this dispute in this brief essay and offer a possible resolution.
A major issue that faces one who wishes to reach a sensible understanding of intellectual property is just what “intellectual” serves to distinguish among what surrounds us in the world and how that contrasts with other kinds and types of possible property. What quality does “intellectual” point to about something? In my list, above, I am assuming that whatever is an invention or creation of the human mind amounts to potential IP, while others would argue that nothing intellectual in fact can constitute property, let alone private property. But this is merely to start things off, in need of clarification and analysis.
Some have proposed that the major element distinguishing intellectual property from other property is that it is supposed to be intangible. A home, a car, or a parcel of land is tangible, capable of being brought into contact with our senses. A musical score or arrangement or a romance novel, by contrast, is supposed to be intangible: such a thing cannot be touched, felt, or otherwise brought into contact with our sensory organs. Yet an immediate problem with this attempt to distinguish intellectual property is that there are tangible aspects to inventions, and intangible aspects to the items that are supposedly all tangible. A home is not just some raw stuff but a building that results from a combination of ideas, some of them inventions. Even land isn't owned exactly as it occurs in the wild but is configured by the more or less elaborate design work of landscapers. The same goes for the other so-called tangible items that function as property. A watch is not just metal, mineral, glass and such assembled at random, but an assembly of such materials designed to show the time and to be appealing as well. In turn, a novel, song or computer game is also a combination of tangible and intangible elements: the paper, the typewriter or pen, and the lead or ink with which the novel is written. Only the author, and only for a little while, encounters the novel in intangible form, after which it becomes an often very tangible manuscript.
The tangible/intangible distinction, then, is not a good criterion for what can and cannot be owned and thus treated as distinctively related to owners. Indeed, the distinction seems to derive from a more fundamental one in philosophy and its basic branch, metaphysics. In a dualist world, reality comes in either a material or a spiritual rendition. Our bodies, for example, are material objects, whereas our minds or souls are spiritual, or at least immaterial.
This goes back to Plato's division of reality into two realms, the actual and the ideal, although for Plato particular instances of poems or novels belong to the actual realm. A less sophisticated version of dualism, however, suggests the kind of division hinted at by the tangible-intangible distinction: in nature we have physical things as well as stuff that lacks any physical component, say our minds or ideas. Yet much that isn't strictly and simply physical is intimately connected with what is, as our minds are to our brains, and our ideas to the medium in which they are expressed.
So, the tangible versus intangible distinction does not seem to enable us to capture the distinguishing aspect of intellectual property. What other candidates might there be?
One candidate is scarcity: unless government or some other force-bearing agency limits the supply of an item of intellectual property, there is never any scarcity in that supply.
There is certainly something at least initially plausible about this view. What is tangible is more subject to delimitation and more capable of being controlled by an owner than what is intangible. A car or dresser is a distinct tangible item of property, whereas a novel or musical composition tends to be fuzzy, or less than distinct. One cannot grab hold of a portion of a novel, such as one of its characters, as one can grab hold of a portion of a household, say a dresser.
Yet intellectual property isn't entirely intangible, either. A musical composition, on its face, fits the bill of being intangible, yet as it appears, mainly in a performance or on a recording, it takes on tangible form. Consider, also, a design, say of a Fossil watch: it is manifest in the watch's shape, color, and so on. Or, again, a poem or musical arrangement: both usually make their appearance in tangible form, such as the marks in a book or the distinctive sounds made by a band. These may be different from a rock, a dresser, topsoil or a building, but they aren't exactly ghosts or spirits, either.
It might also appear that the theological division between the natural and the supernatural mirrors the tangible-intangible division, but that, too, is misleading, since no one who embraces that division would classify a poem or novel as supernatural. The tangible-intangible distinction thus seems independent of the usual types of ontological dualism, and this line of argument against intellectual property seems unfounded. If there is a distinction between ordinary and intellectual property, it would need to be made in terms of distinctions that occur in nature, without recourse to anything like a supernatural realm. Supposedly, then, in nature itself there are two fundamentally different types of beings, tangible and intangible ones. Is this right?
Again, it may seem at first inspection that it is. We have, say, a brick on the one hand, and a poem on the other. But we also have things very unlike a brick, for example smoke or vapor or clouds. It is no problem to identify and control the former, while the latter tend to be diffuse and elusive. We also have liquids, which are not so easy to identify and control as bricks, but easier than gases. Indeed, there seems to be a continuum of kinds of beings, from the very dense ones to the more and more diffuse, leading all the way to what appear to be pure ideas, such as poems or theater set designs.
So, when we consider the matter apart from an alleged basic distinction between tangible and intangible stuff, one that seems to rest on certain problematic philosophical theories, there appears to be no good reason to divide the world into tangible versus intangible things. Differentiation seems possible in numerous ways, along a continuum, not into two exclusive categories. Nor, again, does there seem to be anything particularly intellectual about, say, cigarette smoke or pollutants, although they are very difficult to identify and control. They are, in other words, not intellectual beings, whatever those may be, yet neither are they straightforwardly tangible.
I would like to explore the possibility of a very different distinction, namely, one between what is untouched by human meaning and whatever is subject to it. For example, there would be no poems without intentions, decisions, deliberations and so forth. There would, however, be trees, rocks, fish or lakes. Is it the point of those who deny that intellectual property is possible that when people produce their intentional or deliberate objects, such as poems, novels, names, screenplays, designs, compositions, or arrangements, these things cannot be owned? But this is quite paradoxical.
The very idea of the right to private property is tied, in at least the classical liberal tradition, running from William of Ockham through John Locke to Ayn Rand, to human intention. It is the decision to mix one's labor with nature that serves for Locke as the basis of just acquisition. For current champions of this basic individual right, such as James Sadowsky and Israel Kirzner, it is someone's first judgment to invest a thing with value that makes it an item of private property.
However all of this comes out in the end, one thing is certain: the status of something as property appears to hinge on its being in significant measure an intentional object. But then it would seem that so-called intellectual stuff is a far better candidate for qualifying as private property than is, say, a tree or a mountain. The latter are only remotely related to human intentions, whereas a poem or novel cannot have its essential identity without having been intended (mentally created) by a human being.
Of course, in becoming owned, a tree or mountain does become subject to intentionality, as when someone decides to make use of it for his or her purposes. And, conversely, even in the case of a poem, the words are, as it were, pre-existing; only their particular concatenation is a matter of intention.
I am not certain what the outcome of these and related reflections should be. They do suggest something that is part of both the ordinary and the so-called "intellectual" property traditions, namely, that when human beings are agents of creation, when they make something on their own initiative and invest the world with their distinctive effort, they gain just possession of what they have produced. And if there is anything people produce more completely than such items as poems or computer games, I do not know what it might be.
For me, then, the issue is this: When one designs and produces something novel that one has thought up, some gadget or machine or such, does one then own this design/product? And if someone else copies it, did they take something from the former against his or her will? If the answer is yes to the former, then I think the answer must be yes to the latter.
Whether the protection of one's property occurs via this or that legal device (patent, contract, trademark, what have you) seems a secondary issue, a detail. Ownership comes first. And what one's owning something one conceives and makes may mean for others who think up the same thing later is irrelevant, no less so than if one finds a piece of land and appropriates it, and later others, too, find it and would like to appropriate it but now may not.
Those, by the way, who complain that governments enforce patent and copyright laws should realize that governments also enforce ordinary property rights in societies that have governments. Governments in such societies are akin to bodyguards or security guards. Certainly, taxing others for this enforcement is unjust, but that is not essential to the enforcement itself: copyrights and patents could be protected without government, just as other private property can be protected without government. And so long as it is government that protects (not establishes, but protects) rights, it will also protect the right to intellectual property, if there is such a distinct thing in the first place. Taxation for such protection is beside the point, since taxation for the protection of other types of property is equally beside the point.
Finally, that patents run out may be compared to the fact that ownership can cease with death, too. Of course, patents, trademarks and copyrights can all be reassigned from one owner to another, just as property in anything can be reassigned through voluntary exchange or transfer. There is nothing necessarily odd about this simply because the law here hasn't developed very smoothly or consistently.
Rousseau maintains this ideological preference consistently throughout his economic thought. We have seen that he was distressed that the possibility and actuality of shifting occupational roles would lead to inauthenticity. Change and social mobility were so psychologically destructive in his view, that he came to praise the caste system of ancient Egypt because it forced sons to follow their fathers’ occupations.
On a side note (and completely unrelated), this is possibly the best hip-hop album of all time. Enjoy!
A couple of very thoughtful comments on property rights have been posted over the past couple of weeks, and I want to single them out for their thought-provoking content. JuanBP writes:
Libertarian socialists (no, my dear American friends, that is not an oxymoron) believe in freedom of speech, association, religion and all other liberties cherished by Libertarians. Where we differ profoundly is in our understanding of what constitutes economic liberty – for we tend to consider that property, the very foundation of capitalism, is theft.
There is one general difference I see between the left and the right which also would refer to the general difference between left-libertarians and right-libertarians. Those on the left tend to see human rights as prior to property rights. And those on the right tend to see property rights as prior to human rights. It is a difference of emphasis, but a very big difference in terms of practical application. As a left-winger with libertarian tendencies, I see no evidence that defending property rights inevitably leads to defending human rights.
Both of these comments provide great insights into one of the many myths espoused in political discourse throughout the world: that property rights are somehow different from any other human right.
I am currently writing a paper for a political philosophy course on my ideal state (we are reading Plato’s Republic). I have made it a democratic one, despite some serious misgivings.
I realize that the people can easily be fooled by sophists and schemers, but in the end, I think that democracy represents very well the dignity of the common man. In fact, I am tempted to think that democracy is the best form of government, despite Churchill's lament.
How democracy is structured is probably more important than whether it is the best form of government. Our federal republic is pretty good as it stands (unless you are Ruth Bader Ginsburg, of course; according to her, South Africa has a much better constitution than our own), but some serious flaws have been discovered over the centuries.
Can you name a few? The compromise on slavery and the Supreme Court's failure to enforce the 14th Amendment to protect black Americans from Jim Crow laws both stand out prominently in my view. Furthermore, can any of you come up with a better way to utilize the democratic process that is so integral to the dignity of the individual?