A Right is Not an Obligation

Precision of language in matters of science is important. Speaking recently with some fellow libertarians, we got into an argument about the nature of rights. My position: A right does not obligate anyone to do anything. Their position: Rights are the same thing as obligations.

My response: But if a right is the same thing as an obligation, why use two different words? Doesn’t it make more sense to distinguish them?

So here are the definitions I’m working with. A right is what is “just” or “moral”, as those words are normally defined. I have a right to choose which restaurant I want to eat at.

An obligation is what one is compelled to do by a third party. I am obligated to sell my car to Alice at a previously agreed-on price or else Bob will come and take my car away from me using any means necessary.

Let’s think through an example. Under a strict interpretation of libertarianism, a mother with a starving child does not have the right to steal bread from a baker. But if she does steal the bread, then what? Do the libertarian police instantly swoop down from Heaven and give the baker his bread back?

Consider the baker. The baker indeed does have a right to keep his bread. But he is under no obligation to get his bread back should it get stolen. The baker could take pity on the mother and let her go. Or he could calculate that the cost of having one loaf stolen is too low to justify expending resources to get it back.

Let’s now analyze the bedrock of libertarianism, the nonaggression principle (NAP). There are several formulations. Here’s one: “no one has a right to initiate force against someone else’s person or property.” Here’s a more detailed version, from Walter Block: “It shall be legal for anyone to do anything he wants, provided only that he not initiate (or threaten) violence against the person or legitimately owned property of another.”

A natural question to ask is: what happens if someone does violate the NAP? One common answer is that the victim of the aggression then has a right to use force to defend himself. But note again, the right does not imply an obligation. Just because someone initiates force against you does not mean that you, or anyone else, is obligated to respond. Pacifism is consistent with libertarianism.

Consider another example. Due to a strange series of coincidences, you find yourself lost in the woods in the middle of a winter storm. You come across an unoccupied cabin that’s obviously used as a summer vacation home. You break in, and help yourself to some canned beans and shelter, and wait out the storm before going for help.

Did you have a right to break into the cabin? Under some strict interpretations of libertarianism, no. But even if this is true, all it means is that the owners of the cabin have the right, but not the obligation, to use force to seek damages from you after the fact. (They also had the right to fortify their cabin in such a way that you would have been prevented from ever entering.) But they may never exercise that right; you could ask for forgiveness and they might grant it.

Furthermore, under a pacifist anarchocapitalist order, the owners might not even use force when seeking compensation. They might just ask politely; and if they don’t like your excuses, they’ll simply leave a negative review with a private credit agency (making it harder for you to get loans, jobs, etc.).

The nonaggression principle, insofar as it is strictly about rights (and not obligations), is about justice. It is not about compelling people to do anything. Hence, I propose a new formulation of the NAP: using force to defend yourself from initiations of force can be consistent with justice.

This formulation makes clear that using force is a choice. Initiating force does not obligate anyone to do anything. “Excessive force” may be a possible injustice.

In short, justice does not require force.

Highly recommended work on Ayn Rand

Most scholarship on Ayn Rand has been of mediocre quality, according to Gregory Salmieri, the co-editor of A Companion to Ayn Rand, which is part of the series “Blackwell Companions to Philosophy.” The other co-editor of the volume is the late Allan Gotthelf, who died during its last preparatory stages.

The reasons for the poor scholarship are diverse. Of course, Rand herself is a large part of the explanation. She hardly ever participated in regular academic procedures, did not tolerate normal academic criticism of her work, and strictly limited the number of people who could authoritatively ‘explain’ her Objectivist philosophy to herself and Nathaniel Branden. Before her death she appointed Leonard Peikoff as her ‘literary heir’. She inspired fierce combat against the outside world among her closest followers, especially when others wrote about Rand in a way not to their liking. The result was that just a small circle of admirers wrote about her ideas, often in a non-critical way.


On the other hand, the ‘rest of the academy’ basically ignored her views, despite her continued popularity (especially in the US), her influence, particularly through her novels, and large sales, especially after the economic crisis of 2008. For sure, Objectivists remain a minority both inside and outside academia. Yet despite the strong disagreement with her ideas, it would still be normal to expect regular academic output by non-Randians on her work. Suffice it to point to the many obscure thinkers who have been elevated to the academic mainstream over the centuries. Yet Rand remains in the academic dark; the bias against her work is strong and influential. That said, a slight change is visible. Some major presses have published books on Rand in the past years, prime examples being Jennifer Burns, Goddess of the Market: Ayn Rand and the American Right (2009), and Anne C. Heller, Ayn Rand and the World She Made (2010). And this volume is another case in point.

One of the strong points of A Companion to Ayn Rand is that the contributions meet all regular academic standards, despite the fact that the volume originates from the Randian inner circle. It offers proper explanation and analysis of her ideas and normal engagement with outside criticism. What little direct attack there is on the interpretations or alleged errors of others is left to the endnotes, albeit sometimes at length. Let us say, in friendly fashion, that it proves hard to get rid of old habits!

This should not detract from the extensive, detailed, clearly written and plainly good quality of the 18 chapters in this companion, divided into 8 parts, covering overall context, ethics and human nature, society, the foundations of Objectivism, philosophers and their effects, art, and a coda on the hallmarks of Objectivism. The only disadvantage is the large number of references to her two main novels, The Fountainhead and Atlas Shrugged, which makes some acquaintance with these tomes almost a prerequisite for a great learning experience. Still, as a non-Randian doing work on her political ideas, I underline that this companion offers academically sound information and analysis about the full range of Rand’s ideas. So, go read it if you are interested in this fascinating thinker.

The death of reason

“In so far as their only recourse to that world is through what they see and do, we may want to say that after a revolution scientists are responding to a different world.”

Thomas Kuhn, The Structure of Scientific Revolutions p. 111

I can remember arguing with my cousin right after Michael Brown was shot. “It’s still unclear what happened,” I said, “based solely on testimony” — at that point, we were still waiting on the federal autopsy report by the Department of Justice. He said that in the video, you can clearly see Brown, back to the officer and with his hands up, as he is shot up to eight times.

My cousin doesn’t like police. I’m more ambivalent, but I’ve studied criminal justice for a few years now, and I thought that if both of us watched this video (no such video actually existed), it was probably I who would have the more nuanced grasp of what happened. So I said: “Well, I will look up this video, try and get a less biased take and get back to you.” He replied, sarcastically, “You can’t watch it without bias. We all have biases.”

And that seems to be the sentiment of the times: bias encompasses the human experience; it subsumes all judgments and perceptions. Biases are so rampant, in fact, that no objective analysis is possible. These biases may be cognitive, like confirmation bias, emotional fallacies or the phenomenon of constructive memory; or inductive, like selectivity or ignoring base probability; or, as has become common to think, ingrained into experience itself.

The thing about biases is that they are open to psychological evaluation. There are precedents for eliminating them. For instance, one common explanation of racism is that familiarity breeds acceptance, and unfamiliarity breeds intolerance (as Reason points out, people farther from fracking sites have more negative opinions of the practice than people closer). So to curb racism (a sort of bias), children should interact with people outside their singular ethnic group. More clinical methodology seeks to transform mental functions from automatic to controlled, and thereby introduce reflective measures into perception, reducing bias. Apart from these, there is that ancient Greek practice of reasoning, wherein patterns and evidence are used to generate logical conclusions.

If it were true that human bias is all-encompassing, and essentially insurmountable, the whole concept of critical thinking goes out the window. Not only do we lose the critical-rationalist, Popperian mode of discovery, but also Socratic dialectic, as “higher truths” essentially disappear from the human lexicon.

The belief that biases are intrinsic to human judgment ignores psychological or philosophical methods to counter prejudice because it posits that objectivity itself is impossible. This viewpoint has been associated with “postmodern” schools of philosophy, such as those Dr. Rosi commented on (e.g., Derrida, Lacan, Foucault, Butler), although it’s worth pointing out that the analytic tradition, with its origins in Frege, Russell and Moore, represents a far greater break from the previous, modern tradition of Descartes and Kant, and often reached conclusions similar to the Continentals’.

Although theorists of the “postmodern” clique produced diverse claims about knowledge, society, and politics, the most famous figures are almost always associated with or incorporated into the political left. To make a useful simplification of viewpoints: it would seem that progressives have generally accepted Butlerian non-essentialism about gender and Foucauldian terminology (discourse and institutions). Derrida’s poststructuralist critique noted dichotomies and also claimed that the philosophical search for Logos has been patriarchal, almost neoreactionary. (The month before Donald Trump’s victory, the word patriarchy hit an all-time high in Google searches.) It is not just a far-right conspiracy theory that European philosophers with strange theories have influenced and sought to influence American society; it is patent in the new political language.

Some people think of the postmodernists as all social constructivists, holding the theory that many of the categories and identifications we use in the world are social constructs without a human-independent nature (e.g., not natural kinds). Disciplines like anthropology and sociology have long since dipped their toes, and the broader academic community, too, holds that things like gender and race are social constructs. But the ideas can and do go further: “facts” themselves are open to interpretation on this view; to even assert a “fact” is just to affirm power of some sort. This worldview subsequently degrades the status of science into an extended apparatus for confirmation-bias, filling out the details of a committed ideology rather than providing us with new facts about the world. There can be no objectivity outside of a worldview.

Even though philosophy took a naturalistic turn with the philosopher W. V. O. Quine, seeing itself as integrating with and working alongside science, the criticisms of science as an establishment that emerged in the 1950s and 60s (and earlier) often disturbed its unique epistemic privilege in society: ideas that theory is underdetermined by evidence, that scientific progress is nonrational, that unproven auxiliary hypotheses are required to conduct experiments and form theories, and that social norms play a large role in the process of justification all damaged the mythos of science as an exemplar of human rationality.

But once we have dismantled Science, what do we do next? Some critics have held up Nazi German eugenics and phrenology as examples of the damage that science can do to society (never mind that we now consider them pseudoscience). Yet Lysenkoism and the history of astronomy and cosmology indicate that suppressing scientific discovery can be deleterious too. Austrian physicist and philosopher Paul Feyerabend instead wanted a free society — one where science had equal power as older, more spiritual forms of knowledge. He thought the model of rational science exemplified by Sir Karl Popper was inapplicable to the real machinery of scientific discovery, and the only methodological rule we could impose on science was: “anything goes.”

Feyerabend’s views are almost a caricature of postmodernism, although he denied the label “relativist,” opting instead for philosophical Dadaist. In his pluralism, there is no hierarchy of knowledge, and state power can even be introduced when necessary to break up scientific monopoly. Feyerabend, contra scientists like Richard Dawkins, thought that science was like an organized religion and therefore supported a separation of church and state as well as a separation of state and science. Here is a move forward for a society that has started distrusting the scientific method… but if this is what we should do post-science, it’s still unclear how to proceed. There are still queries for anyone who loathes the hegemony of science in the Western world.

For example, how does the investigation of crimes proceed without strict adherence to the latest scientific protocol? Presumably, Feyerabend didn’t want to privatize law enforcement, but science and the state are very intricately connected. In 2005, Congress authorized the National Academy of Sciences to form a committee and conduct a comprehensive study on contemporary forensic science to identify community needs, evaluating laboratory executives, medical examiners, coroners, anthropologists, entomologists, odontologists, and various legal experts. Forensic science — scientific procedure applied to the field of law — exists for two practical goals: exoneration and prosecution. However, the Forensic Science Committee revealed that severe issues riddle forensics (e.g., bite mark analysis), and in their list of recommendations the top priority is establishing an independent federal entity to devise consistent standards and enforce regular practice.

For top scientists, this sort of centralized authority seems necessary to produce reliable work, and it conflicts entirely with Feyerabend’s emphasis on methodological pluralism. Barack Obama formed the National Commission on Forensic Science in 2013 to further investigate problems in the field, and only recently Attorney General Jeff Sessions said the Department of Justice will not renew the commission. It’s unclear now what forensic science will do to resolve its ongoing problems, but what is clear is that the American court system would fall apart without the possibility of appealing to scientific consensus (especially forensics), and that the only foreseeable way to solve the existing issues is through stricter methodology. (Just as with McDonald’s, there are enforced standards so that the product is consistent wherever one orders.) More on this later.

So it doesn’t seem to be in the interest of things like due process to abandon science or completely separate it from state power. (It does, however, make sense to move forensic laboratories out from under direct administrative control, as the NAS report notes in Recommendation 4. This is, however, specifically to reduce bias.) In a culture where science is viewed as irrational, Eurocentric, ad hoc, and polluted with ideological motivations — or where Reason itself is seen as a particular hegemonic, imperial device to suppress different cultures — not only do we not know what to do, but when we try to do things we lose elements of our civilization that everyone agrees are valuable.

Although Aristotle separated pathos, ethos and logos (adding that all informed each other), later philosophers like Feyerabend thought of reason as a sort of “practice,” with a history and connotations like any other human activity, falling far short of the sublime. One could no more justify reason outside of its European cosmology than the sacrificial rituals of the Aztecs outside of theirs. To communicate across paradigms (Kuhn’s terminology), participants have to understand each other on a deep level, even becoming entirely new persons. When debates happen, they must happen on a principle of mutual respect and curiosity.

From this one can detect a bold argument for tolerance. Indeed, Feyerabend was heavily influenced by John Stuart Mill’s On Liberty. Maybe, in a world disillusioned with scientism and objective standards, the next cultural move is multilateral acceptance and tolerance for each other’s ideas.

This has not been the result of postmodern revelations, though. The 2016 election featured the victory of one psychopath over another, from two camps utterly consumed with vitriol for each other. Between Bernie Sanders, Donald Trump and Hillary Clinton, Americans drifted toward radicalization as the only establishment candidate seemed to offer the same noxious, warmongering mess of the previous few decades of administration. Politics has only polarized further since the inauguration. The alt-right, a nearly perfect symbol of cultural intolerance, is regular news for mainstream media. Trump acolytes physically brawl with black bloc Antifa in the same city that hosted the 1960s Free Speech Movement. It seems to be worst at universities. Analytic feminist philosophers asked for the retraction of a controversial paper, seemingly without reading it. Professors even get involved in student disputes, at Berkeley and more recently Evergreen. The names each side uses to attack each other (“fascist,” most prominently) — sometimes accurate, usually not — display a political divide with groups that increasingly refuse to argue their own side and prefer silencing their opposition.

There is not a tolerant left or tolerant right any longer, in the mainstream. We are witnessing only shades of authoritarianism, eager to destroy each other. And what is obvious is that the theories and tools of the postmodernists (post-structuralism, social constructivism, deconstruction, critical theory, relativism) are as useful for reactionary praxis as they are in their usual role in left-wing circles. Says Casey Williams in the New York Times: “Trump’s playbook should be familiar to any student of critical theory and philosophy. It often feels like Trump has stolen our ideas and weaponized them.” The idea of the “post-truth” world originated in postmodern academia. It is the monster turning against Doctor Frankenstein.

Moral (cultural) relativism in particular promises only the rejection of our shared humanity. It paralyzes our judgment on female genital mutilation, flogging, stoning, human and animal sacrifice, honor killings, caste, the underground sex trade. The afterbirth of Protagoras, cruelly resurrected once again, does not promise trials at Nuremberg, where the Allied powers appealed to something above and beyond written law to exact judgment on mass murderers. It does not promise justice for the ethnic cleansers of Srebrenica, as the United Nations is helpless to impose a tribunal from outside Bosnia-Herzegovina. Today, this moral pessimism laughs at the phrase “humanitarian crisis,” and at Western efforts to change the material conditions of fleeing Iraqis, Afghans, Libyans, Syrians, Venezuelans, North Koreans…

In the absence of universal morality, and the introduction of subjective reality, the vacuum will be filled with something much more awful. And we should be afraid of this because tolerance has not emerged as a replacement. When Harry Potter first encounters Voldemort face-to-scalp, the Dark Lord tells the boy “There is no good and evil. There is only power… and those too weak to seek it.” With the breakdown of concrete moral categories, Feyerabend’s motto — anything goes — is perverted. Voldemort has been compared to Plato’s archetype of the tyrant from the Republic: “It will commit any foul murder, and there is no food it refuses to eat. In a word, it omits no act of folly or shamelessness” … “he is purged of self-discipline and is filled with self-imposed madness.”

Voldemort is the Platonic appetite in the same way he is the psychoanalytic id. Freud’s das Es is able to admit of contradictions, to violate Aristotle’s fundamental laws of logic. It is so base, and removed from the ordinary world of reason, that it follows its own rules we would find utterly abhorrent or impossible. But it is not difficult to imagine that the murder of evidence-based reasoning will result in Death Eater politics. The ego is our rational faculty, adapted to deal with reality; with the death of reason, all that exists is vicious criticism and unfettered libertinism.

Plato predicts Voldemort with the image of the tyrant, and also with one of his primary interlocutors, Thrasymachus, when the sophist opens with “justice is nothing other than the advantage of the stronger.” The one thing Voldemort admires about The Boy Who Lived is his bravery, the trait they share in common. This trait is missing in his Death Eaters. In the fourth novel the Dark Lord is cruel to his reunited followers for abandoning him and losing faith; their cowardice reveals the fundamental logic of his power: his disciples are not true devotees, but opportunists, weak on their own merit and drawn like moths to every Avada Kedavra. Likewise, students flock to postmodern relativism to justify their own beliefs when the evidence is an obstacle.

Relativism gives us moral paralysis, allowing in darkness. Another possible move after relativism is supremacy. One look at Richard Spencer’s Twitter demonstrates the incorrigible tenet of the alt-right: the alleged incompatibility of cultures, ethnicities, races: that different groups of humans simply cannot get along together. Die Endlösung, the Final Solution, is not about extermination anymore, but segregated nationalism. Spencer’s audience is almost entirely men who loathe the current state of things, who share far-reaching conspiracy theories, and despise globalism.

The left, too, creates conspiracies, imagining a bourgeois corporate conglomerate that enlists economists and brainwashes through history books to normalize capitalism; for this reason they despise globalism as well, saying it impoverishes other countries or destroys cultural autonomy. For the alt-right, it is the Jews, and George Soros, who control us; for the burgeoning socialist left, it is the elites, the one-percent. Our minds are not free; fortunately, they will happily supply Übermenschen, in the form of statesmen or critical theorists, to save us from our degeneracy or our false consciousness.

Without the commitment to reasoned debate, tribalism has deepened the polarization and the lack of humility. Each side also accepts science selectively, if they do not question its very justification. The privileged status that the “scientific method” maintains in polite society is denied when convenient; whether it’s climate science, evolutionary psychology, sociology, genetics, biology, anatomy or, especially, economics: one side or the other rejects it outright, without studying the material enough to immerse themselves in what could be promising knowledge (as Feyerabend urged, and the breakdown of rationality could have encouraged). And ultimately, equal protection, one tenet of individualist thought that allows for multiplicity, is entirely rejected by both: we should be treated differently as humans, often because of the color of our skin.

Relativism and carelessness for standards and communication has given us supremacy and tribalism. It has divided rather than united. Voldemort’s chaotic violence is one possible outcome of rejecting reason as an institution, and it beckons to either political alliance. Are there any examples in Harry Potter of the alternative, Feyerabendian tolerance? Not quite. However, Hermione Granger serves as the Dark Lord’s foil, and gives us a model of reason that is not as archaic as the enemies of rationality would like to suggest. In Against Method (1975), Feyerabend compares different ways rationality has been interpreted alongside practice: in an idealist way, in which reason “completely governs” research, or a naturalist way, in which reason is “completely determined by” research. Taking elements of each, he arrives at an intersection in which one can change the other, both “parts of a single dialectical process.”

“The suggestion can be illustrated by the relation between a map and the adventures of a person using it or by the relation between an artisan and his instruments. Originally maps were constructed as images of and guides to reality and so, presumably, was reason. But maps, like reason, contain idealizations (Hecataeus of Miletus, for example, imposed the general outlines of Anaximander’s cosmology on his account of the occupied world and represented continents by geometrical figures). The wanderer uses the map to find his way but he also corrects it as he proceeds, removing old idealizations and introducing new ones. Using the map no matter what will soon get him into trouble. But it is better to have maps than to proceed without them. In the same way, the example says, reason without the guidance of a practice will lead us astray while a practice is vastly improved by the addition of reason.” p. 233

Christopher Hitchens pointed out that Granger sounds like Bertrand Russell at times, as in this quote about the Resurrection Stone: “You can claim that anything is real if the only basis for believing in it is that nobody has proven it doesn’t exist.” Granger is often the embodiment of anemic analytic philosophy, the institution of order, a disciple of the Ministry of Magic. However, though initially law-abiding, she quickly learns with Potter and Weasley the pleasures of rule-breaking. From the first book onward, she is constantly at odds with the de facto norms of the school, becoming more rebellious as time goes on. It is her levelheaded foundation, combined with her ability to transgress rules, that gives her an astute semi-deontological, semi-utilitarian calculus capable of saving the lives of her friends from the dark arts, and helping to defeat the tyranny of Voldemort foretold by Socrates.

Granger presents a model of reason like Feyerabend’s map analogy. Although pure reason gives us an outline of how to think about things, it is not a static or complete blueprint, and it must be fleshed out with experience, risk-taking, discovery, failure, loss, trauma, pleasure, offense, criticism, and occasional transgressions past the foreseeable limits. Adding these addenda to our heuristics means that we explore a more diverse account of thinking about things and moving around in the world.

When reason is increasingly seen as patriarchal, Western, and imperialist, the only thing consistently offered as a replacement is something like lived experience. Some form of this idea is at least a century old, with Husserl, and still modest by reason’s Greco-Roman standards. Yet lived experience has always been pivotal to reason; we only need adjust our popular model. And we can see that we need not reject one or the other entirely. Another critique of reason says it is foolhardy, limiting, antiquated; this is a perversion of its abilities, and serves to justify the first criticism. We can see that there is room within reason for other pursuits and virtues, picked up along the way.

The emphasis on lived experience, which predominantly comes from the political left, is also antithetical to the cause of “social progress.” Those sympathetic to social theory, particularly the cultural leakage of the strong programme, are constantly torn between claiming (a) science is irrational, and can thus be countered by lived experience (or whatnot) or (b) science may be rational but reason itself is a tool of patriarchy and white supremacy and cannot be universal. (If you haven’t seen either of these claims very frequently, and think them a strawman, you have not been following university protests and editorials. Or radical Twitter: ex., ex., ex., ex.) Of course, as in Freud, this is an example of kettle-logic: the signal of a very strong resistance. We see, though, that we can decline to accept or deny these claims without losing anything. Reason need not be stagnant nor all-pervasive, and indeed we’ve been critiquing its limits since 1781.

Outright denying the process of science — whether the model is conjectures and refutations or something less stale — ignores that there is no single uniform body of science. Denial also dismisses the most powerful tool for making difficult empirical decisions. Michael Brown’s death was instantly a political affair, with implications for broader social life. The event has completely changed the face of American social issues. The first autopsy report, from St. Louis County, indicated that Brown was shot at close range in the hand, during an encounter with Officer Darren Wilson. The second independent report commissioned by the family concluded the first shot had not in fact been at close range. After the disagreement with my cousin, the Department of Justice released the final investigation report, and determined that material in the hand wound was consistent with gun residue from an up-close encounter.

Prior to the report, the best evidence available as to what happened in Missouri on August 9, 2014, was the ground footage after the shooting and testimonies from the officer and Ferguson residents at the scene. There are two ways to approach the incident: reason or lived experience. The latter route will lead to ambiguities. Brown’s friend Dorian Johnson and another witness reported that Officer Wilson fired his weapon first at range, under no threat, then pursued Brown out of his vehicle, until Brown turned with his hands in the air to surrender. However, before the St. Louis County grand jury, half a dozen (African-American) eyewitnesses corroborated Wilson’s account: that Brown did not have his hands raised and was moving toward Wilson. In which direction does “lived experience” tell us to go, then? A new moral maxim — the duty to believe people — will lead to no non-arbitrary conclusion. (And a duty to “always believe x,” where x is a closed group, e.g. victims, will put the cart before the horse.) It appears that, in a case like this, treating evidence as objective is the only solution.

Introducing ad hoc hypotheses, e.g., the Justice Department and the county examiner are corrupt, shifts the approach into one that uses induction, and leaves behind lived experience (and also ignores how forensic anthropology is actually done). This is the introduction of, indeed, scientific standards. (By looking at incentives for lying it might also employ findings from public choice theory, psychology, behavioral economics, etc.) So the personal experience method creates unresolvable ambiguities, and presumably will eventually grant some allowance to scientific procedure.

If we don’t posit a baseline-rationality — Hermione Granger pre-Hogwarts — our ability to critique things at all disappears. Utterly rejecting science and reason, denying objective analysis in the presumption of overriding biases, breaking down naïve universalism into naïve relativism — these are paths to paralysis on their own. More than that, they are hysterical symptoms, because they often create problems out of thin air. Recently, a philosopher and mathematician submitted a hoax paper, Sokal-style, to a peer-reviewed gender studies journal in an attempt to demonstrate what they see as a problem “at the heart of academic fields like gender studies.” The idea was to write a nonsensical, postmodernish essay, and if the journal accepted it, that would indicate the field is intellectually bankrupt. Andrew Smart at Psychology Today instead wrote of the prank: “In many ways this academic hoax validates many of postmodernism’s main arguments.” And although Smart makes some informed points about problems in scientific rigor as a whole, he doesn’t hint at what the validation of postmodernism entails: should we abandon standards in journalism and scholarly integrity? Is the whole process of peer-review functionally untenable? Should we start embracing papers written without any intention of making sense, to look at knowledge concealed below the surface of jargon? The paper, “The conceptual penis,” doesn’t necessarily condemn the whole of gender studies; but, against Smart’s reasoning, we do in fact know that counterintuitive or highly heterodox theory is considered perfectly average.

There were other attacks on the hoax, from Slate, Salon and elsewhere. Criticisms, often valid for the particular essay, typically didn’t move the conversation far enough. There is much more to this discussion. A 2006 paper from the International Journal of Evidence Based Healthcare, “Deconstructing the evidence-based discourse in health sciences,” called the use of scientific evidence “fascist.” In the abstract the authors state their allegiance to the work of Deleuze and Guattari. Real Peer Review, a Twitter account that collects abstracts from scholarly articles, regularly features essays from departments of women and gender studies, including a recent one from a Ph.D. student wherein the author identifies as a hippopotamus. Sure, the recent hoax paper doesn’t really say anything, but it intensifies this much-needed debate. It brings out these two currents — reason and the rejection of reason — and demands a solution. And we know that lived experience is going to be often inconclusive.

Opening up lines of communication is a solution. One valid complaint is that gender studies seems too insulated, in a way in which chemistry, for instance, is not. Critiquing a whole field does ask us to genuinely immerse ourselves first, and this is a step toward tolerance: it is a step past the death of reason and the denial of science. It is a step that requires opening the bubble.

The modern infatuation with human biases as well as Feyerabend’s epistemological anarchism upset our faith in prevailing theories, and in the idea that our policies and opinions should be guided by the latest discoveries from an anonymous laboratory. Putting politics first and assuming subjectivity is all-encompassing, we move past objective measures to compare belief systems and theories. However, isn’t the whole operation of modern science designed to work within our means? Kant’s system set limits on human rationality, and most science is aligned with an acceptance of fallibility. As Harvard cognitive scientist Steven Pinker says, “to understand the world, we must cultivate work-arounds for our cognitive limitations, including skepticism, open debate, formal precision, and empirical tests, often requiring feats of ingenuity.”

Pinker goes so far as to advocate for scientism. Others need not; but we must understand an academic field before utterly rejecting it. We must think we can understand each other, and live with each other. We must think there is a baseline framework that allows permanent cross-cultural correspondence — a shared form of life which means a Ukrainian can interpret a Russian and a Cuban an American. The rejection of Homo sapiens commensurability, championed by people like Richard Spencer and those in identity politics, is a path to segregation and supremacy. We must reject Gorgian nihilism about communication, and the Presocratic relativism that camps our moral judgments in inert subjectivity. From one Weltanschauung to the next, our common humanity — which endures class, ethnicity, sex, gender — allows open debate across paradigms.

In the face of relativism, there is room for a nuanced middleground between Pinker’s scientism and the rising anti-science, anti-reason philosophy; Paul Feyerabend has sketched out a basic blueprint. Rather than condemning reason as a Hellenic germ of Western cultural supremacy, we need only adjust the theoretical model to incorporate the “new America of knowledge” into our critical faculty. It is the raison d’être of philosophers to present complicated things in a more digestible form; to “put everything before us,” so says Wittgenstein. Hopefully, people can reach their own conclusions, and embrace the communal human spirit as they do.

However, this may not be so convincing. It might be true that we have a competition of cosmologies: one that believes in reason and objectivity, one that thinks reason is callow and all things are subjective. These two perspectives may well be incommensurable. If I try to defend reason, I invariably must appeal to reasons, and thus argue circularly. If I try to claim “everything is subjective,” I make a universal statement, and simultaneously contradict myself. Between begging the question and contradicting oneself, there is not much indication of where to go. Perhaps we just have to look at history, note the results of either course when it has been applied, and take that as a rhetorical indication of which path to follow.

Speech in academic philosophy: Rebecca Tuvel on Rachel Dolezal

A few days ago, controversy exploded in the world of academic philosophy as a new article, published in the feminist philosophy journal Hypatia, earned itself a letter signed by over eight hundred scholars calling for its retraction on the basis that its availability “causes harm.”

Rebecca Tuvel’s article, “In Defense of Transracialism,” argues that the same sort of theoretical support used to justify transgender issues entails, logically, support of the concept of transracialism. Tuvel details Rachel Dolezal’s claims, as a woman “presenting as a black woman for some years [though] her parents are in fact white” (Tuvel, 263). She posits sensible criteria that seem essential to a “successful identity transformation”: self-identification and the willingness of society to accept an identification. Then, she covers literature from biology, neuroscience, and critical race and feminist theory, to ultimately present the idea that, potentially, a concept of racial identity could turn on those criteria rather than notions like ancestry due to exclusionary concerns, a “normativity problem” (274).

The paper is about what our acceptance entails. Although it is entirely about furthering tolerance, the treatment of Tuvel online has been egregious. Some of it can be found on Twitter. One of the most extreme takes, by Nora Berenstain (archived here), outright accuses Tuvel of violence. In a response, Tuvel says she has received hate mail, but that few online have actually dealt with the questions of her article. Brian Leiter even offered to set up a fundraising endeavor if Tuvel decides to seek legal reparation for the defamation circulating online, as it could cause issues with her professorship. Hypatia released a statement apologizing for the article, saying now they understand it was “unacceptable.”

Why was the feminist philosophy community so upset?

Daily Nous, a philosophy blog, has outlined the reasons the writers of the open letter gave for retracting Tuvel’s essay, and the shortcomings and incoherence of those reasons. Upon reading her essay, one finds that Tuvel is empathetic to trans causes, is entrenched in critical and queer literature, and genuinely only wants to explore some philosophical issues raised by Dolezal’s claims. Some of her offenses, according to Berenstain and the open letter, were deadnaming (using a transgender person’s previous name), using terminology like “transgenderism,” discussing biological sex, and not citing sources by black authors.

These particular misdeeds ran the gamut on social media. The open letter accused Tuvel of some academic concerns like mischaracterization and unpopular vocabulary. Twitter was more concerned with focusing on Tuvel as a “cishetero white wom[an refusing] to listen to cis black women and trans folks,” committing “cis white bullshit.” Noah Berlatsky wrote a shallow criticism that spends most of its time discussing how the article will be used against transpeople, and the other large bulk on Tuvel’s ignorance of history — as if history is somehow relevant to the logical consequences of a few philosophical commitments. He, again, fails to engage with it academically. It’s true that Tuvel could have incorporated more work on trans history; it’s also true that it in no way affects her basic argument.

Some responses revealed the ideological reasons for opposing Tuvel’s research. Dianna E. Anderson writes, “my problem as a philosophy undergrad… [was that] philosophy seems to separate itself out into a moral vacuum where every question is ‘just asking’ … there has to be a moral framework guiding which questions you’re asking and why … that’s why I grounded my higher education in Women’s Studies, where moral parameters are drawn around questions.” Incidentally, this is one of the two reasons the Church censored Galileo: first, it thought Galileo to be factually inaccurate; second, it had a boundary — theology of the Bible — past which speculation could not take place; cosmological and astronomical theorizing were not to transgress this line. Anderson commits herself to Women’s Studies as a place where stifling zones are set up; dogmas that may not be passed. (Funny enough, the first reason the Church employed censorship has not been picked up by Tuvel’s opposition.)

The non-scholarly attacks on Tuvel don’t hold water, which a short turn to the essay reveals. Daily Nous covered most of those points, and NY Mag addressed, again, the extent to which the criticisms avoided a critical look at the actual arguments contained within. Tuvel is not a transmisogynist. For some people outside of feminist academic philosophy, she would probably even seem like a caricature of an “ultra-leftist.” The attacks on her are antithetical to academic humility: among them ad hominems, appeals to authority, slippery slopes and strawmen. The academic environment necessary for such an unscholarly attack on a philosopher for not being aligned enough with the contemporary orthodoxy — and that is all it seems to boil down to — is very unnerving to those acquainted with the historical censorship of ideas. (Incidentally, Hypatia is named for Hypatia of Alexandria — a female Greek philosopher murdered for inciting controversy. The irony seems to be lost on everyone.)

The reasons given in the open letter to retract the article seem to merit, at most, a slight semantic revision. The scholars, among them Judith Butler, instead want a full apology and censorship. So then, what remains is to ask about the environment of philosophy that enabled this. Hypatia identifies itself as a “forum for cutting-edge work in feminist philosophy” … it also states that “feminist philosophy arises out of diverse traditions and methods within philosophy,” and commits to engage and uplift diversity within the field. Diversity in a continental philosophy journal might mean pluralistic methodology, e.g., hermeneutics, phenomenology, deconstruction, dialectics, etc. Here, instead, the emphasis is on diversity of identity, which is seen as the foundation of “lived experience” (per the apology), which, it is held, provides access to enriched understanding.

Viewpoints from identities outside the mean are given an authority justified by conditions of their birth, rather than the authority of sound argumentation. What is important is some sort of status possessed. There’s an analogue available from another field in humanities: F. A. Hayek was opposed to the Nobel Prize in economics on the argument that no economist should be given so much power: the award “confers on an individual an authority which in economics no man ought to possess… the influence of an economist that mainly matters is an influence over laymen: politicians, journalists, civil servants and the public generally.” In essence, the argument becomes lost; the audience awards merit and attention based on something other than good reasons. In the case of Hypatia‘s readership, the audience seems to award attention based on identity.

It is, however, truly naïve to think the history of philosophy is the prevailing of logic over fallacy; that schools become popular because of their intrinsic validity, that logos always triumphs over our other baser means of evaluation. All sorts of humanistic factors, like creativity, freshness, propaganda, aesthetic appeal and explanatory power serve to elevate certain philosophies or cosmologies over others — these are the insights of important 20th century philosophers like Thomas Kuhn. This multiplicity of influences, however, doesn’t mean that reason as a guiding principle should be explicitly subsumed under the authority of influences like pathos or ethos, or skin color.

“In Defense of Transracialism” used argument to attempt to unravel some ongoing mysteries about gender and race. Tuvel approached the question of transracialism from a commitment to other philosophical commitments about gender and sexual identification. It was a question about what follows from our beliefs. Per Plato, philosophy begins in wonder: the critical endeavor to evaluate even our most cherished opinions, explore the incomprehensible world, and examine what incongruities appear in our web of belief.

Because of this, her essay was decisively feminist, in that it examined natural consequences of feminist theory while retaining some basic tenets (like the validity of transpeople, and, maybe, racial social constructivism). Combined with her level of analysis, there’s no question that it could only belong to a feminist philosophy journal.

Her article was criticized not for failing to reach a sufficient level of rigor in analysis, but for alleged insensitivity in dealing with touchy subjects. Imagine if Frank Jackson had never published his thought experiment on Mary’s Room, about the experience of the color red, for fear of offending the colorblind. Or if Descartes had never released his Meditations, for fear the wax example would offend vegans who don’t eat honey.

In fact, returning to the 17th century, the Tuvel situation is reminiscent of Descartes’ reluctance to publish on the heliocentric universe, after the Inquisition’s treatment of Galileo a few years earlier. When professors in the academic philosophy community, like Nora Berenstain, condemned Rebecca Tuvel for “discursive violence” for publishing an article, and called for retraction rather than debate, they placed Tuvel alongside Hypatia of Alexandria, Boethius, von Hochheim, Galileo, Łyszczyński and others as victims of orthodox demands for acculturation and censorship in their honest pursuit of advancing understanding.

Berenstain would have done better to react as Tolosani did against Copernicus, attempting to use philosophy and scientific data to dismantle the latter’s controversial viewpoints. Instead, Tuvel’s apparent lack of citations of black or trans authors (though there are plenty of nonwhite philosophers — Quayshawn Spencer, Charles Mills, Meena Krishnamurthy, Esa Diaz-Leon (detailed here) — who have entertained the idea that Dolezal could be transracial) was treated like a crime by a community concerned not with analysis, as analytic philosophy is supposed to be, but with a bizarre appeal to identitarian ethos. Tuvel says “Calls for intellectual engagement are also being shut down because they ‘dignify’ the article.”

By acquiescing to the complaint, Hypatia has allowed for the possibility of a “chilling effect” on speech in academia: authors may self-censor to fit orthodoxy or risk the hate mail and potential threats to tenure Rebecca Tuvel now faces. This is disastrous for the institution of knowledge and a culture that used to be centered around expression. In the words of Greg Lukianoff, free speech is a cultural value, not just something on the Bill of Rights. “Free speech is the antithesis of violence”: it was created, as an innovation, so that we wouldn’t need the threat of force to settle issues.

Tuvel’s conclusion — “that society should accept such an individual’s decision to change race the same way it should accept an individual’s decision to change sex” (275) — is not violent, nor are her premises or methodology. Censorship in philosophy mirrors censorship on campuses: much like protestors disrupted Charles Murray without engaging with his research (and possibly completely misunderstanding it), philosophers chastised Tuvel for minor semantic offenses or lack of adherence to certain trends; each offender expressed heterodoxy where only homogeneity was desired.

The path of philosophy, from Plato to Putnam, has always been controversial. Race and gender, multicultural studies professors always declare, are exceedingly difficult to talk about: therefore, they are perfect fodder for philosophical exploration. To deal with these concepts, one does not have to be black, white, male, female, cis, trans or non-binary — one must only desire honest discovery, and proceed with argument in a way that is open to debate. The last established orthodoxy in philosophy was Stalin’s enforcement of dialectical materialism in the Soviet Union, when laws of statistics, Einstein’s theories of relativity, evolutionary biology and non-Pavlovian psychology were dismissed as pseudoscience. In a free society, the best way to deal with unfamiliar opinions is to debate them, not to call for censorship.

There are only two ways an argument can be wrong: the premises are false, or the conclusion does not follow. The attacks on Tuvel showed an unwillingness to examine either. Without willingness to argue, philosophy — and clarification on these important, mysterious issues — will suffer.

The Protestant Reformation and freedom of conscience

This year we celebrate 500 years of the Protestant Reformation. On October 31, 1517, the then Augustinian monk, priest, and teacher Martin Luther nailed to the door of a church in Wittenberg, Germany, a document with 95 theses on salvation, that is, basically the way people are led by the Christian God to Heaven. Luther was scandalized by the sale of indulgences by the Roman Catholic Church, believing that this practice did not correspond to biblical teaching. Luther understood that salvation was given by faith alone. The Catholic Church understood that salvation was a combination of faith and works.

The practice of nailing a document to the door of the church was not uncommon, and Luther’s intention was to hold an academic debate on the subject. However, Luther’s ideas found many sympathizers, and a widespread Protestant movement within the Roman Catholic Church was quickly initiated. Over the years, other leaders such as Ulrich Zwingli and John Calvin joined Luther. However, the main leaders of the Roman Catholic Church did not agree with the Reformers’ point of view, and so the Christian church in the West was divided into several groups: Lutherans, Anglicans, Reformed, Anabaptists, later followed by Methodists, Pentecostals and many others. In short, the Christian church in the West has never been the same.

The Protestant Reformation was obviously a movement of great importance in world religious history. I also believe that few would disagree with its importance in the broader context of history, especially Western history. To mention just one example, Max Weber’s thesis that Protestantism (especially Calvinism, and more precisely Puritanism) was a key factor in the development of what he called modern capitalism is widely accepted, or at least enthusiastically debated. But I would like to briefly address here another impact of the Protestant Reformation on world history: the development of freedom of conscience.

Simply put, though I believe not oversimplified: after the fall of the Roman Empire and until the 16th century, Europe knew only one religion – Christianity – in only one variety – Roman Catholic Christianity. It is true that much of the paganism of the barbarians survived through the centuries, that Muslims occupied parts of Europe (mainly the Iberian Peninsula) and that other varieties of Christianity were practiced in parts of Europe (mainly Russia and Greece). But beyond that, the history of Christianity was a tale of an ever-increasing concentration of political and ecclesiastical power in Rome, as well as an ever-widening intersection of priests, bishops, kings, and nobles. In short, Rome became increasingly central and the distinction between church and state increasingly difficult to observe in practice. One of the legacies of the Protestant Reformation was precisely the debate about the relationship between church and state. With a multiplicity of churches and strengthening nationalisms, the model of a unified Christianity was never possible again.

Of course, this loss of unity in Christendom can cause melancholy and nostalgia among some, especially Roman Catholics. But one of its gains was the growth of the individual’s space in the world. This was not a sudden process, but slowly but surely it became clear that religious convictions could no longer be imposed on individuals. Especially in England, where the Anglican Church stood midway between Rome and Wittenberg (or Rome and Geneva), many groups emerged on the margins of the state church: Presbyterians, Baptists, Congregationalists, Quakers, and so on. These groups accepted the cost of being treated as second-class citizens while maintaining their personal convictions. Something similar can be said about Roman Catholics in England, who began to live on the fringes of society. The new relationship between church and state in England was a point of discussion for many of the most important political philosophers of modernity: Thomas Hobbes, John Locke, Edmund Burke, and others. To disregard this aspect is to lose sight of one of the most important points of the debate in which these thinkers were involved.

The Westminster Confession of Faith, one of the most important documents produced in the period of the Protestant Reformation, has a chapter entitled “Of Christian Liberty, and Liberty of Conscience.” Of course there are issues in this chapter that may sound very strange to those who are not Christians or who are not involved in Christian churches. However, one point is immediately understandable to all: being a Christian is a matter of inner conviction. No one can be compelled to be a Christian. At best, such an obligation would produce only external adherence. Inner adherence could never be satisfactorily verified.

Sometime after the classical Reformation period, a new religious renewal movement occurred in England with the birth of Methodism. But its foremost leaders, John Wesley and George Whitefield, disagreed about salvation in a way not so different from what had previously occurred between Luther and the Roman Catholic Church. However, this time there was no excommunication, inquisition or wars. Wesley simply told Whitefield, “Let’s agree to disagree.”

Agreeing to disagree is one of the great legacies of the Protestant Reformation. May we always try to convince each other by force of argument, not by force of arms. And that each one has the right to decide for themselves, with freedom of conscience, which seems the best way forward.

Where is the line between sympathy and paternalism?

In higher-ed news two types of terrifying stories come up pretty frequently: free speech stories, and Title IX stories. You’d think these stories would only be relevant to academics and students, but they’re not. These issues are certainly very important for those of us who hang out in ivory towers. But those towers shape the debate–and unquestioned assumptions–that determine real world policy in board rooms and capitols. This is especially true in a world where a bachelor’s degree is the new GED.

The free speech stories have gotten boring because they all take the following form: group A doesn’t want to let group B talk about opinion b so they act like a bunch of jackasses. Usually this takes place at a school for rich kids. Usually those kids are majoring in something that will give them no marketable skills.

The Title IX stories are Kafkaesque tales where a well-intentioned policy (create a system to protect people in colleges from sexism and sexual aggression) turns into a kangaroo court that allows terrible people to ruin other people’s lives. (I hasten to add, I’m sure Title IX offices do plenty of legitimately great work.)

A great article in the Chronicle gives an inside look at one of these tribunals. For the most part it’s chilling. Peter Ludlow had been accused of sexual assault, but the claims weren’t terribly credible. As far as I can tell (based only on this article) he did some things that should raise some eyebrows, but nothing genuinely against any rules. Nonetheless, the accusations were a potential PR and liability problem for the school so he had to go, regardless of justice.

The glimmer of hope comes with the testimony of Jessica Wilson. She managed to shake them out of their foregone conclusion and got them to consider that women above the age of consent can be active participants in their own lives instead of victims waiting to happen. Yes, bad things happen to women, but that’s not enough to jump to the conclusion that all women are victims and all men are aggressors.

The big question at the root of these types of stories is how much responsibility we ought to take for our lives.

Free speech: Should I be held responsible for saying insensitive (or unpatriotic) things? Who would enforce those obligations? Should I be held responsible for dealing with the insensitive things other people might say? Or should I not even be allowed to hear what other people might say, because I can’t be trusted to evaluate it “critically” and come to the right conclusion?

Title IX: Should women be responsible for their own protection, or is that akin to blaming the victim? We’ve gone from trying to create an environment where everyone can contribute to taking away agency. In doing so we’ve also created a powerful mechanism that can be abused. This is bad because of the harm it does to the falsely accused, but it also has the potential to delegitimize the claims of genuine victims and to fracture society. And our forebears weren’t exactly saints when it came to treating each other justly.

Where is the line between helping a group and infantilizing them?

At either end of a spectrum I imagine caricature versions of a teenage libertarian (“your problems are your own, suck it up while I shout dumb things at you”) and a social justice warrior (“it’s everyone else’s fault! Let’s occupy!”). Let’s call those end points Atomistic Responsibility and Social Responsibility. More sarcastically, we could call them Robot and Common Pool Responsibility. Nobody is actually at these extreme ends (I hope), but some people get close.

Either one seems ridiculous to anyone who doesn’t already subscribe to that view, but both have a kernel of truth. Fair or not, you have to take responsibility for your life. But we’re all indelibly shaped by our environment.

Schools have historically adopted policies toward the atomistic end, but have been trending in the other direction. I don’t think this is universally bad, but I do think those two sets of values cannot properly coexist within a single organization.

We can imagine some hypothetical proper point on the Responsibility Spectrum, but without a way to objectively measure virtue, the position of that point – the line between sympathy and paternalism – is an open question. We need debate to better position and re-position that line. I would argue that Western societies have done a pretty good job of moving that line in the right direction over the last 100 years (although I disagree with many of the ways our predecessors have chosen to enforce it).

But here’s the thing: we can’t move in the right direction without getting real-time feedback from our environments. Without variation in the data, we can’t draw any conclusions. What we need, more than a proper split of responsibility, is a range of possibilities being constantly tinkered with and explored.

We need a diversity of approaches. This is why freedom of speech and freedom of association are so essential. In order to get this diversity, we need federalism and polycentricity–stop trying to impose order from the top down on a grand scale (“think globally, act locally”), and let order be created from the bottom up. Let our organizations–businesses, churches, civic associations, local governments and special districts–adapt to their circumstances and the wishes of their stakeholders.

Benefiting from this diversity requires open minds and epistemic humility. We stand on the shore of a vast mysterious ocean. We’ve waded a short distance into the water and learned a lot, but there’s infinitely more to learn!

(Sidenote: Looking for that Carl Sagan quote, I came across this gem:

People are not stupid. They believe things for reasons. The last way for skeptics to get the attention of bright, curious, intelligent people is to belittle or condescend or to show arrogance toward their beliefs.

That about sums up my approach to discussing these sorts of issues.) We’d all do better to occasionally give our opponents the benefit of the doubt and see what we can learn from them. Being a purist is a great way to structure your thought, but empathy for our opponents is how we make our theories strong.

Does business success make a good statesman?

Gary Becker made a distinction between two types of on-the-job training: general and specific. The former consists of skills of wide applicability, which enable the worker to perform satisfactorily in different kinds of jobs: keeping one’s commitments, arriving at work on time, avoiding disruptive behavior, etc. All of these are moral traits that raise the productivity of the worker whatever his occupation may be. Specific on-the-job training, on the other hand, concerns only the peculiarities of a given job: knowing how many spoons of sugar your boss likes in his coffee, or which of your employees is best qualified to deal with the public. The knowledge provided by on-the-job training is incorporated into the worker; it travels with him when he moves from one company to another. Therefore, while general on-the-job training increases the worker’s productivity in every other job he takes, specific training profits him little elsewhere.

Of course, whether on-the-job training is general or specific is relative to each profession and industry. For example, a psychiatrist who works for a general hospital gets specific training in the concrete dynamics of its internal organization. If he later moves to a position in another hospital, his experience dealing with the internal politics of such institutions will count as general on-the-job training. If instead he goes freelance, that experience will be of little use to his career. Nevertheless, even if the said psychiatrist switches from working for a big general hospital to working on his own, he will carry with him valuable general on-the-job training: how to look after his patients, how to deal with their relatives, and so on.

So, to what extent will the on-the-job training gained by a successful businessman enable him to be a good statesman? To the same degree that a successful lawyer, a successful sportsman, or a successful writer is so enabled. Every successful person carries with him a set of personal traits that are useful in almost every field of human endeavor: self-confidence, work ethic, constancy, and so on. If you lack any of them, you could hardly be a good politician, just as you could rarely achieve anything in any other field. But these qualities are the typical examples of general on-the-job training, and what we are asking here is whether the specific on-the-job training of a successful businessman gives him a relative advantage as a politician – or at least a better chance of being a good one.

The problem is that there is no such thing as an a priori successful businessman. We can state that a doctor, an engineer, or a biologist needs certain qualifications to be a competent professional. But the performance of a businessman depends on a multiplicity of variables that prevent us from specifying in advance which traits would lead him to success.

Medicine, physics, and biology deal with “simple phenomena.” The limits to knowledge in such disciplines are relative to the state of investigation in those fields (see F. A. Hayek, “The Theory of Complex Phenomena”). The more those professionals study and the more they work, the better trained they will be.

The law and the market economy, on the other hand, are cases of “complex phenomena” (see F. A. Hayek, Law, Legislation and Liberty). Since the limits to knowledge of such phenomena are absolute, a discovery process of trial and error applied to concrete cases is the only way to weather that uncertainty. The judge states the solution the law provides to a concrete controversy, but the lawmaker can state what the law says only in general and abstract terms. In the same sense, the personal strategy of a businessman is successful only under certain circumstances.

So, how does the market economy survive its own complexity? The market does not need wise businessmen, but lots of purposeful ones, eager to thrive by following their stubborn vision of the business. Most of them will be wrong in their perception of the market and will subsequently fail. A few others will prosper, since their plans meet – perhaps by chance – the changing demands of the market. Thus, the personal traits that led a successful businessman to prosperity were not universal, but the right ones for the specific time at which he carried out his plans.

Having said that, would a purposeful and stubborn politician be a good choice for government? After all, Niccolò Machiavelli pointed out that initiative was the main virtue of the prince. On that view, a good statesman would be the one who successfully handles the changing opportunities of life and politics. Notwithstanding, The Prince was – as Quentin Skinner showed – a parody: opportunistic behaviour is no good for the accomplishment of public duties or the protection of civil liberties.

Nevertheless, there is still a convincing argument for the businessman as a prospective statesman. If he has to deal with the system of checks and balances – the Congress and the courts – the law will act as the market’s selection process does. Every time a decision based on expediency collides with fundamental liberties, the latter must prevail over the former: a sort of natural selection of political decisions.

Quite obvious, but not so trite. For a stubborn and purposeful politician not to become a menace to individual and public liberties, his initiative must not venture into constitutional design. No bypasses, no exceptions, not even reforms to the legal restraints on public authority should be allowed, even in the name of emergency – especially since most emergencies are brought about in the first place by measures based on expediency.