Some Monday Links

Famous Brand Logos Are Reimagined as Medieval Illustrations (My Modern MET)

Loyalty to the Brand flag – source

The allure of cosmopolitan languages to courtiers and pop fans (Psyche)

The First Authoritarian (The Hedgehog Review)

The Korea Analogy (The Duck of Minerva)

Heading Into the Atom Age—Pat Frank’s Perpetually Relevant Novels (Quillette)

Got to appreciate the Atompunk aesthetics. I have also spent considerable time with the Fallout games (only the first two).

The death of reason

“In so far as their only recourse to that world is through what they see and do, we may want to say that after a revolution scientists are responding to a different world.”

Thomas Kuhn, The Structure of Scientific Revolutions p. 111

I can remember arguing with my cousin right after Michael Brown was shot. “It’s still unclear what happened,” I said, “based solely on testimony” — at that point, we were still waiting on the federal autopsy report by the Department of Justice. He said that in the video, you can clearly see Brown, back to the officer and with his hands up, as he is shot up to eight times.

My cousin doesn’t like police. I’m more ambivalent, but I’ve studied criminal justice for a few years now, and I thought that if both of us watched this video (no such video actually existed), it was probably I who would have the more nuanced grasp of what happened. So I said: “Well, I will look up this video, try and get a less biased take and get back to you.” He replied, sarcastically, “You can’t watch it without bias. We all have biases.”

And that seems to be the sentiment of the times: bias encompasses the human experience, it subsumes all judgments and perceptions. Biases are so rampant, in fact, that no objective analysis is possible. These biases may be cognitive, like confirmation bias, emotional fallacies or the phenomenon of constructive memory; or inductive, like selectivity or ignoring base rates; or, as has been common to think, ingrained into experience itself.

The thing about biases is that they are open to psychological evaluation. There are precedents for eliminating them. For instance, one common explanation of racism is that familiarity breeds acceptance, and unfamiliarity breeds intolerance (as Reason points out, people further from fracking sites have more negative opinions on the practice than people closer to them). So to curb racism (a sort of bias), children should interact with people outside of their singular ethnic group. More clinical methodology seeks to transform mental functions from automatic to controlled, and thereby introduce reflective measures into perception, reducing bias. Apart from these, there is that ancient Greek practice of reasoning, wherein patterns and evidence are used to generate logical conclusions.

If it were true that human bias is all-encompassing, and essentially insurmountable, the whole concept of critical thinking goes out the window. Not only do we lose the critical-rationalist, Popperian mode of discovery, but also Socratic dialectic, as essentially “higher truths” disappear from the human lexicon.

The belief that biases are intrinsic to human judgment ignores psychological and philosophical methods to counter prejudice, because it posits that objectivity itself is impossible. This viewpoint has been associated with “postmodern” schools of philosophy, such as those Dr. Rosi commented on (e.g., those of Derrida, Lacan, Foucault, Butler), although it’s worth pointing out that the analytic tradition, with its origins in Frege, Russell and Moore, represents a far greater break from the previous, modern tradition of Descartes and Kant, and often reached conclusions similar to those of the Continentals.

Although theorists of the “postmodern” clique produced diverse claims about knowledge, society, and politics, the most famous figures are almost always associated with or incorporated into the political left. To make a useful simplification of viewpoints: it would seem that progressives have generally accepted Butlerian non-essentialism about gender and Foucauldian terminology (discourse and institutions). Derrida’s poststructuralist critique targeted binary dichotomies and also claimed that the philosophical search for Logos has been patriarchal, almost neoreactionary. (The month before Donald Trump’s victory, the word patriarchy hit an all-time high as a Google search term.) It is not a far-right conspiracy theory that European philosophers with strange theories have influenced and sought to influence American society; it is patent in the new political language.

Some people think of the postmodernists as all social constructivists, holding the theory that many of the categories and identifications we use in the world are social constructs without a human-independent nature (e.g., not natural kinds). Disciplines like anthropology and sociology have long since dipped their toes in these waters, and the broader academic community, too, accepts that things like gender and race are social constructs. But the ideas can and do go further: “facts” themselves are open to interpretation on this view; to even assert a “fact” is just to affirm power of some sort. This worldview subsequently degrades the status of science into an extended apparatus for confirmation bias, filling out the details of a committed ideology rather than providing us with new facts about the world. There can be no objectivity outside of a worldview.

Even though philosophy took a naturalistic turn with the philosopher W. V. O. Quine, seeing itself as integrating with and working alongside science, the criticisms of science as an establishment that emerged in the 1950s and 60s (and earlier) often disturbed its unique epistemic privilege in society: ideas that theory is underdetermined by evidence, that scientific progress is nonrational, that unconfirmed auxiliary hypotheses are required to conduct experiments and form theories, and that social norms play a large role in the process of justification all damaged the mythos of science as an exemplar of human rationality.

But once we have dismantled Science, what do we do next? Some critics have held up Nazi German eugenics and phrenology as examples of the damage that science can do to society (never mind that we now consider them pseudoscience). Yet Lysenkoism and the history of astronomy and cosmology indicate that suppressing scientific discovery can also be deleterious. Austrian physicist and philosopher Paul Feyerabend instead wanted a free society — one where science had equal standing with older, more spiritual forms of knowledge. He thought the model of rational science exemplified by Sir Karl Popper was inapplicable to the real machinery of scientific discovery, and that the only methodological rule we could impose on science was: “anything goes.”

Feyerabend’s views are almost a caricature of postmodernism, although he denied the label “relativist,” opting instead for philosophical Dadaist. In his pluralism, there is no hierarchy of knowledge, and state power can even be introduced when necessary to break up scientific monopoly. Feyerabend, contra scientists like Richard Dawkins, thought that science was like an organized religion and therefore supported a separation of church and state as well as a separation of state and science. Here is a move forward for a society that has started distrusting the scientific method… but if this is what we should do post-science, it’s still unclear how to proceed. There are still queries for anyone who loathes the hegemony of science in the Western world.

For example, how does the investigation of crimes proceed without strict adherence to the latest scientific protocol? Presumably, Feyerabend didn’t want to privatize law enforcement, but science and the state are very intricately connected. In 2005, Congress authorized the National Academy of Sciences to form a committee and conduct a comprehensive study of contemporary forensic science to identify the community’s needs, consulting laboratory executives, medical examiners, coroners, anthropologists, entomologists, odontologists, and various legal experts. Forensic science — scientific procedure applied to the field of law — exists for two practical goals: exoneration and prosecution. However, the Forensic Science Committee revealed that severe issues riddle forensics (e.g., bite mark analysis), and in its list of recommendations the top priority is establishing an independent federal entity to devise consistent standards and enforce regular practice.

For top scientists, this sort of centralized authority seems necessary to produce reliable work, and it is entirely at odds with Feyerabend’s emphasis on methodological pluralism. Barack Obama formed the National Commission on Forensic Science in 2013 to further investigate problems in the field, and only recently Attorney General Jeff Sessions said the Department of Justice would not renew the commission. It’s unclear now how forensic science will resolve its ongoing problems, but what is clear is that the American court system would fall apart without the possibility of appealing to scientific consensus (especially forensics), and that the only foreseeable way to solve the existing issues is through stricter methodology. (Just as with McDonald’s, there are enforced standards so that the product is consistent wherever one orders.) More on this later.

So it doesn’t seem to be in the interest of things like due process to abandon science or completely separate it from state power. (It does, however, make sense to move forensic laboratories out from under direct administrative control, as the NAS report notes in Recommendation 4. This is, however, specifically to reduce bias.) In a culture where science is viewed as irrational, Eurocentric, ad hoc, and polluted with ideological motivations — or where Reason itself is seen as a particular hegemonic, imperial device to suppress different cultures — not only do we not know what to do, but when we try to do things, we lose elements of our civilization that everyone agrees are valuable.

Although Aristotle separated pathos, ethos and logos (adding that all informed each other), later philosophers like Feyerabend thought of reason as a sort of “practice,” with history and connotations like any other human activity, falling far short of sublime. One could no more justify reason outside of its European cosmology than the sacrificial rituals of the Aztecs outside of theirs. To communicate across paradigms, participants have to understand each other on a deep level, even becoming entirely new persons. When debates happen, they must happen on a principle of mutual respect and curiosity.

From this one can detect a bold argument for tolerance. Indeed, Feyerabend was heavily influenced by John Stuart Mill’s On Liberty. Maybe, in a world disillusioned with scientism and objective standards, the next cultural move is multilateral acceptance and tolerance for each other’s ideas.

This has not been the result of postmodern revelations, though. The 2016 election featured the victory of one psychopath over another, from two camps utterly consumed with vitriol for each other. Between Bernie Sanders, Donald Trump and Hillary Clinton, Americans drifted toward radicalization as the only establishment candidate seemed to offer the same noxious, warmongering mess of the previous few decades of administration. Politics has only polarized further since the inauguration. The alt-right, a nearly perfect symbol of cultural intolerance, is regular news for mainstream media. Trump acolytes physically brawl with black bloc Antifa in the same city that hosted the 1960s Free Speech Movement. It seems to be the worst at universities. Analytic feminist philosophers asked for the retraction of a controversial paper, seemingly without reading it. Professors even get involved in student disputes, at Berkeley and, more recently, Evergreen. The names each side uses to attack the other (“fascist,” most prominently) — sometimes accurate, usually not — display a political divide with groups that increasingly refuse to argue their own side and prefer silencing their opposition.

There is no tolerant left or tolerant right any longer, in the mainstream. We are witnessing only shades of authoritarianism, eager to destroy each other. And what is obvious is that the theories and tools of the postmodernists (post-structuralism, social constructivism, deconstruction, critical theory, relativism) are as useful for reactionary praxis as they are in their usual role in left-wing circles. Says Casey Williams in the New York Times: “Trump’s playbook should be familiar to any student of critical theory and philosophy. It often feels like Trump has stolen our ideas and weaponized them.” The idea of the “post-truth” world originated in postmodern academia. It is the monster turning against Doctor Frankenstein.

Moral (cultural) relativism in particular promises only the rejection of our shared humanity. It paralyzes our judgment on female genital mutilation, flogging, stoning, human and animal sacrifice, honor killing, caste, the underground sex trade. The afterbirth of Protagoras, cruelly resurrected once again, does not promise trials at Nuremberg, where the Allied powers appealed to something above and beyond written law to exact judgment on mass murderers. It does not promise justice for the ethnic cleansers of Srebrenica, as the United Nations is helpless to impose a tribunal from outside Bosnia-Herzegovina. Today, this moral pessimism laughs at the phrase “humanitarian crisis,” and at Western efforts to change the material conditions of fleeing Iraqis, Afghans, Libyans, Syrians, Venezuelans, North Koreans…

In the absence of universal morality, and the introduction of subjective reality, the vacuum will be filled with something much more awful. And we should be afraid of this because tolerance has not emerged as a replacement. When Harry Potter first encounters Voldemort face-to-scalp, the Dark Lord tells the boy “There is no good and evil. There is only power… and those too weak to seek it.” With the breakdown of concrete moral categories, Feyerabend’s motto — anything goes — is perverted. Voldemort has been compared to Plato’s archetype of the tyrant from the Republic: “It will commit any foul murder, and there is no food it refuses to eat. In a word, it omits no act of folly or shamelessness” … “he is purged of self-discipline and is filled with self-imposed madness.”

Voldemort is the Platonic appetite in the same way he is the psychoanalytic id. Freud’s das Es is able to admit of contradictions, to violate Aristotle’s fundamental laws of logic. It is so base, and removed from the ordinary world of reason, that it follows its own rules we would find utterly abhorrent or impossible. But it is not difficult to imagine that the murder of evidence-based reasoning will result in Death Eater politics. The ego is our rational faculty, adapted to deal with reality; with the death of reason, all that exists is vicious criticism and unfettered libertinism.

Plato predicts Voldemort with the image of the tyrant, and also with one of his primary interlocutors, Thrasymachus, when the sophist opens with “justice is nothing other than the advantage of the stronger.” The one thing Voldemort admires about The Boy Who Lived is his bravery, the trait they share in common. This trait is missing in his Death Eaters. In the fourth novel the Dark Lord is cruel to his reunited followers for abandoning him and losing faith; their cowardice reveals the fundamental logic of his power: his disciples are not true devotees, but opportunists, weak on their own merit and drawn like moths to every Avada Kedavra. Likewise students flock to postmodern relativism to justify their own beliefs when the evidence is an obstacle.

Relativism gives us moral paralysis, allowing in darkness. Another possible move after relativism is supremacy. One look at Richard Spencer’s Twitter demonstrates the incorrigible tenet of the alt-right: the alleged incompatibility of cultures, ethnicities, races: that different groups of humans simply cannot get along together. The Final Solution is not about extermination anymore but segregated nationalism. Spencer’s audience is almost entirely men who loathe the current state of things, who share far-reaching conspiracy theories, and who despise globalism.

The left, too, creates conspiracies, imagining a bourgeois corporate conglomerate that enlists economists and brainwashes through history books to normalize capitalism; for this reason they despise globalism as well, saying it impoverishes other countries or destroys cultural autonomy. For the alt-right, it is the Jews, and George Soros, who control us; for the burgeoning socialist left, it is the elites, the one-percent. Our minds are not free; fortunately, they will happily supply Übermenschen, in the form of statesmen or critical theorists, to save us from our degeneracy or our false consciousness.

Without a commitment to reasoned debate, tribalism has furthered polarization and the erosion of humility. Each side also accepts science selectively, when it does not question its very justification. The privileged status that the “scientific method” maintains in polite society is denied when convenient; whether it is climate science, evolutionary psychology, sociology, genetics, biology, anatomy or, especially, economics, one side or the other rejects it outright, without studying the material enough to immerse themselves in what could be promising knowledge (as Feyerabend urged, and as the breakdown of rationality could have encouraged). And ultimately, equal protection, one tenet of individualist thought that allows for multiplicity, is entirely rejected by both: we should be treated differently as humans, often because of the color of our skin.

Relativism and carelessness about standards and communication have given us supremacy and tribalism. They have divided rather than united. Voldemort’s chaotic violence is one possible outcome of rejecting reason as an institution, and it beckons to either political alliance. Are there any examples in Harry Potter of the alternative, Feyerabendian tolerance? Not quite. However, Hermione Granger serves as the Dark Lord’s foil, and gives us a model of reason that is not as archaic as the enemies of rationality would like to suggest. In Against Method (1975), Feyerabend compares different ways rationality has been interpreted alongside practice: in an idealist way, in which reason “completely governs” research, or a naturalist way, in which reason is “completely determined by” research. Taking elements of each, he arrives at an intersection in which one can change the other, both “parts of a single dialectical process.”

“The suggestion can be illustrated by the relation between a map and the adventures of a person using it or by the relation between an artisan and his instruments. Originally maps were constructed as images of and guides to reality and so, presumably, was reason. But maps, like reason, contain idealizations (Hecataeus of Miletus, for examples, imposed the general outlines of Anaximander’s cosmology on his account of the occupied world and represented continents by geometrical figures). The wanderer uses the map to find his way but he also corrects it as he proceeds, removing old idealizations and introducing new ones. Using the map no matter what will soon get him into trouble. But it is better to have maps than to proceed without them. In the same way, the example says, reason without the guidance of a practice will lead us astray while a practice is vastly improved by the addition of reason.” p. 233

Christopher Hitchens pointed out that Granger sounds like Bertrand Russell at times, as in this quote about the Resurrection Stone: “You can claim that anything is real if the only basis for believing in it is that nobody has proven it doesn’t exist.” Granger is often the embodiment of anemic analytic philosophy, the institution of order, a disciple of the Ministry of Magic. However, though initially law-abiding, she quickly learns with Potter and Weasley the pleasures of rule-breaking. From the first book onward, she is constantly at odds with the de facto norms of the school, becoming more rebellious as time goes on. It is her levelheaded foundation, but also her ability to transgress rules, that gives her an astute semi-deontological, semi-utilitarian calculus capable of saving the lives of her friends from the dark arts, and helping to defeat the tyranny of Voldemort foretold by Socrates.

Granger presents a model of reason like Feyerabend’s map analogy. Although pure reason gives us an outline of how to think about things, it is not a static or complete blueprint, and it must be fleshed out with experience, risk-taking, discovery, failure, loss, trauma, pleasure, offense, criticism, and occasional transgressions past the foreseeable limits. Adding these addenda to our heuristics means that we explore a more diverse account of thinking about things and moving around in the world.

When reason is increasingly seen as patriarchal, Western, and imperialist, the only thing consistently offered as a replacement is something like lived experience. Some form of this idea is at least a century old, going back to Husserl, and still modest by reason’s Greco-Roman standards. Yet lived experience has always been pivotal to reason; we only need adjust our popular model. And we can see that we need not reject one or the other entirely. Another critique of reason says it is foolhardy, limiting, antiquated; this is a perversion of its abilities, and only serves to justify the first criticism. We can see that there is room within reason for other pursuits and virtues, picked up along the way.

The emphasis on lived experience, which predominantly comes from the political left, is also antithetical to the cause of “social progress.” Those sympathetic to social theory, particularly the cultural leakage of the strong programme, are constantly torn between claiming (a) that science is irrational, and can thus be countered by lived experience (or whatnot), or (b) that science may be rational but reason itself is a tool of patriarchy and white supremacy and cannot be universal. (If you haven’t seen either of these claims very frequently, and think them a strawman, you have not been following university protests and editorials. Or radical Twitter: ex., ex., ex., ex.) Of course, as in Freud, this is an example of kettle logic: the signal of a very strong resistance. We see, though, that we need not accept or deny these claims and lose anything. Reason need be neither stagnant nor all-pervasive, and indeed we’ve been critiquing its limits since 1781.

Outright denying the process of science — whether the model is conjectures and refutations or something less stale — ignores that there is no single uniform body of science. Denial also dismisses the most powerful tool for making difficult empirical decisions. Michael Brown’s death was instantly a political affair, with implications for broader social life. The event has completely changed the face of American social issues. The first autopsy report, from St. Louis County, indicated that Brown was shot at close range in the hand, during an encounter with Officer Darren Wilson. The second independent report commissioned by the family concluded the first shot had not in fact been at close range. After the disagreement with my cousin, the Department of Justice released the final investigation report, and determined that material in the hand wound was consistent with gun residue from an up-close encounter.

Prior to the report, the best evidence available as to what happened in Missouri on August 9, 2014, was the ground footage after the shooting and testimonies from the officer and Ferguson residents at the scene. There are two ways to approach the incident: reason or lived experience. The latter route will lead to ambiguities. Brown’s friend Dorian Johnson and another witness reported that Officer Wilson fired his weapon first at range, under no threat, then pursued Brown out of his vehicle, until Brown turned with his hands in the air to surrender. However, before the St. Louis County grand jury, half a dozen (African-American) eyewitnesses corroborated Wilson’s account: that Brown did not have his hands raised and was moving toward Wilson. In which direction does “lived experience” tell us to go, then? A new moral maxim — the duty to believe people — will lead to no non-arbitrary conclusion. (And a duty to “always believe x,” where x is a closed group, e.g. victims, will put the cart before the horse.) It appears that, in a case like this, treating evidence as objective is the only solution.

Introducing ad hoc hypotheses, e.g., the Justice Department and the county examiner are corrupt, shifts the approach into one that uses induction, and leaves behind lived experience (and also ignores how forensic anthropology is actually done). This is the introduction of, indeed, scientific standards. (By looking at incentives for lying it might also employ findings from public choice theory, psychology, behavioral economics, etc.) So the personal experience method creates unresolvable ambiguities, and presumably will eventually grant some allowance to scientific procedure.

If we don’t posit a baseline-rationality — Hermione Granger pre-Hogwarts — our ability to critique things at all disappears. Utterly rejecting science and reason, denying objective analysis in the presumption of overriding biases, breaking down naïve universalism into naïve relativism — these are paths to paralysis on their own. More than that, they are hysterical symptoms, because they often create problems out of thin air. Recently, a philosopher and mathematician submitted a hoax paper, Sokal-style, to a peer-reviewed gender studies journal in an attempt to demonstrate what they see as a problem “at the heart of academic fields like gender studies.” The idea was to write a nonsensical, postmodernish essay, and if the journal accepted it, that would indicate the field is intellectually bankrupt. Andrew Smart at Psychology Today instead wrote of the prank: “In many ways this academic hoax validates many of postmodernism’s main arguments.” And although Smart makes some informed points about problems in scientific rigor as a whole, he doesn’t hint at what the validation of postmodernism entails: should we abandon standards in journalism and scholarly integrity? Is the whole process of peer-review functionally untenable? Should we start embracing papers written without any intention of making sense, to look at knowledge concealed below the surface of jargon? The paper, “The conceptual penis,” doesn’t necessarily condemn the whole of gender studies; but, against Smart’s reasoning, we do in fact know that counterintuitive or highly heterodox theory is considered perfectly average.

There were other attacks on the hoax, from Slate, Salon and elsewhere. The criticisms, often valid for the particular essay, typically didn’t move the conversation far enough. There is much more to this discussion. A 2006 paper from the International Journal of Evidence Based Healthcare, “Deconstructing the evidence-based discourse in health sciences,” called the use of scientific evidence “fascist.” In the abstract the authors state their allegiance to the work of Deleuze and Guattari. Real Peer Review, a Twitter account that collects abstracts from scholarly articles, regularly features essays from departments of women’s and gender studies, including a recent one from a Ph.D. student wherein the author identifies as a hippopotamus. Sure, the recent hoax paper doesn’t really say anything, but it intensifies this much-needed debate. It brings out these two currents — reason and the rejection of reason — and demands a solution. And we know that lived experience is often going to be inconclusive.

Opening up lines of communication is a solution. One valid complaint is that gender studies seems too insulated, in a way in which chemistry, for instance, is not. Critiquing a whole field does ask us to genuinely immerse ourselves first, and this is a step toward tolerance: it is a step past the death of reason and the denial of science. It is a step that requires opening the bubble.

The modern infatuation with human biases, as well as Feyerabend’s epistemological anarchism, upsets our faith in prevailing theories and in the idea that our policies and opinions should be guided by the latest discoveries from an anonymous laboratory. Putting politics first and assuming subjectivity is all-encompassing, we move past objective measures to compare belief systems and theories. However, isn’t the whole operation of modern science designed to work within our means? The critical system of Kant set limits on human rationality, and most science is aligned with an acceptance of fallibility. As Harvard cognitive scientist Steven Pinker says, “to understand the world, we must cultivate work-arounds for our cognitive limitations, including skepticism, open debate, formal precision, and empirical tests, often requiring feats of ingenuity.”

Pinker goes so far as to advocate for scientism. Others need not; but we must understand an academic field before utterly rejecting it. We must think we can understand each other, and live with each other. We must think there is a baseline framework that allows permanent cross-cultural correspondence — a shared form of life which means a Ukrainian can interpret a Russian, and a Cuban an American. The rejection of Homo sapiens commensurability, championed by people like Richard Spencer and those in identity politics, is a path to segregation and supremacy. We must reject Gorgian nihilism about communication, and the Presocratic relativism that camps our moral judgments in inert subjectivity. From one Weltanschauung to the next, our common humanity — which endures across class, ethnicity, sex, gender — allows open debate across paradigms.

In the face of relativism, there is room for a nuanced middleground between Pinker’s scientism and the rising anti-science, anti-reason philosophy; Paul Feyerabend has sketched out a basic blueprint. Rather than condemning reason as a Hellenic germ of Western cultural supremacy, we need only adjust the theoretical model to incorporate the “new America of knowledge” into our critical faculty. It is the raison d’être of philosophers to present complicated things in a more digestible form; to “put everything before us,” so says Wittgenstein. Hopefully, people can reach their own conclusions, and embrace the communal human spirit as they do.

However, this may not be so convincing. It might be true that we have a competition of cosmologies: one that believes in reason and objectivity, one that thinks reason is callow and all things are subjective. These two perspectives may well be incommensurable. If I try to defend reason, I invariably must appeal to reasons, and thus argue circularly. If I try to claim “everything is subjective,” I make a universal statement, and simultaneously contradict myself. Between begging the question and contradicting oneself, there is not much indication of where to go. Perhaps we just have to look at history, note the results of either course when it has been applied, and take them as a rhetorical indication of which path to choose.

Military Dictatorship in Brazil: Was it worth it?

The title of this text may already cause controversy, since many understand that there was no dictatorship in Brazil, but rather a series of military governments that could not be classified as dictatorial. But the fact is that, in 1964, Castelo Branco became president in place of João Goulart, and was succeeded by Costa e Silva, Medici, Geisel and João Figueiredo. Whether we call this a dictatorship or not, the fact is that João Goulart was deposed and Castelo Branco occupied the presidency to prevent the country from being taken over by groups sympathetic to communism, which would have made Brazil a “Big Cuba.” And it is in light of this fact that I ask whether it was worth it: was it worth having 21 years of military governments to prevent a socialist government from being implanted in Brazil?

A socialist government was implemented in Brazil in 2003 by popular vote. Although its political propaganda in 2002 had proclaimed an inclination towards the center of the political spectrum, the fact is that the PT never completely abandoned its socialist inclinations. It could even be said that FHC deserves the same comment: although less inclined to the left, the PSDB does not carry “democratic socialism” in its name for nothing. In light of this, I ask whether it was worth having 21 years of military governments in Brazil. In 1988, just three years after João Figueiredo left the presidency of the country, a Constitution with a strongly progressive character was promulgated. In 1994, less than 10 years after the last military president stepped down, Brazil elected a “Third Way” president. In 2002 a president with a past of explicit connections to socialism came to power, and in 2011 the country came to be governed by a former guerrilla fighter. If the objective of placing the military in power was to prevent the installation of socialist governments in Brazil, it can be said that this goal was not achieved. It was only postponed for just over 21 years.

What is socialism? Why is it so bad? Even without any empirical research, I am quite sure that most of the Brazilian population would not know how to answer these questions. In a similar vein, I am quite convinced that most of the country’s “literate class” (artists, academics, and intellectuals of all kinds) is sympathetic to socialism. Many of the political parties in Brazil carry “socialism” or “communism” in the name.

What did the military governments offer in place of socialism? Although they had varied characteristics, most of the governments between 1964 and 1985 tended toward a modernized version of Positivism. Positivism states that all other forms of knowledge (tradition, common sense, religion) will be superseded by positive scientific knowledge. Another way of defining it is to say that only what is empirically proven is true. Positivism, however, presents some problems. First, it is self-defeating; that is, it does not stand up to its own validation criterion: “Only what is empirically proven is true.” Is this empirically proven? Is it empirically proven that “only that which is empirically proven is true”? No. And it could not even be. Another difficulty lies in carrying out the empirical tests. It is possible, even with constraints, to conduct empirical tests in a controlled environment (in laboratories) to test theories and hypotheses. But it is not possible to declare the universality of the results, even if the tests are performed a very large number of times.

This “problem of induction” (drawing universal conclusions from particular, albeit many, observations) was famously answered by Karl Popper: in Popper’s view, the aim of science is not to prove universal truths, but to advance conjectures that withstand attempts at refutation. In other words, nothing is ever “scientifically proven,” but many things can be scientifically falsified by contrary evidence. Ludwig von Mises answered the problem of induction in another way: not everything has to be empirically tested to be considered true. There are truths that are self-evident, even without any empirical test. Despite the differences, both Popper and Mises offered possibilities for non-positivistic science (in the sense of systematic knowledge), especially valid for the study of human beings living in society.

Positivism and Marxism are sister doctrines. Both emerged in the 19th century in response to liberalism. The origin of liberalism lies in Christianity, if not in the affirmation of the existence of the Christian God in all the details presented by the Bible, then at least in elements such as Natural Law and an anthropology similar to that of Christian teaching. Positivism and Marxism moved away from Christianity by adopting a materialist view of reality (only what we can experience empirically exists, or at least only that matters) and by denying the natural limitations of the human being.

Following von Mises, the Austrian School rejects the positivist methodology, and is therefore classified as heterodox. Although we should avoid anachronisms, the tendency of the classical economists was the same: to reason from introspection and axioms, rather than from empirical tests. It is not a matter of despising the scientific method altogether, quite the opposite! The scientific method is excellent for taking man to the moon and discovering cures for diseases. It just is not fit for a human “science.” To believe so is to fall into a “fatal conceit.” The military officers who governed Brazil between 1964 and 1985 can be accused of this fatal conceit. They generally believed that they could rule the country as if it were a barracks.

In conclusion: was it worth it? Certainly, avoiding socialism is a great and necessary goal. But combating it with Positivism is not the right path. Two wrongs do not make a right. Was there the possibility of combating socialism with liberalism? I think not. Brazil didn’t have the liberal tradition necessary to confront socialism and other forms of authoritarianism or totalitarianism (and maybe it still doesn’t). Looking back, we can only regret that the options were so bad. Looking forward, we can try to improve our options by building a true liberalism in Brazil.

Where did Homo Economicus come from?

Over on my Facebook page, I posted a short criticism of both neoclassical and behavioral economic scholarship on rational choice (drawing from a paper I’m working on exploring that topic). Stated a bit polemically,  though homo economicus has largely been dead in neoclassical theory, his spirit still haunts the work of most modern neoclassical scholars. Likewise, though behavioral economists are trying to dig the grave and put the final nails in the coffin of homo economicus, their nightmares are still plagued with the anxieties of his memory.

This led a former colleague from Hillsdale to ask me where I thought homo economicus came from historically. I wrote the following in response (lightly edited for this post):

It could be argued, in a sense, that the Protestant Christian aim of complete moral purity and the Enlightenment aim to make man perfect in knowledge and morality (as embodied in Franklin’s virtue ethics) helped give rise to a culture that would be primed for such a model. Within economics, historically it comes from Bentham’s utilitarianism and Jevons’s mathematical extrapolations from Bentham’s psychology. However, I’d say this comes from a deeper “Cartesian anxiety,” in Bernstein’s use of the term, to make economics a big-T True, capital-C Certain, capital-S Science just like physics (which Jevons himself stated was an aim of his work,[1] and which has preoccupied economists since the days of J.S. Mill). If economic science cannot be said to be completely positive and “scientific” like the natural sciences, with absolutely falsifiable propositions and an algorithmic means of theory-choice, it is feared, it must be written off as a pseudo-scientific waste of time or else as ideology to justify capitalism. If economics cannot make certain claims to knowledge, it must be solipsistic and relativist and, again, another form of pseudo-science or ideology. If economic models cannot reach definitive mathematical results, then they must be relativistic and a waste of time. This is just another example of the extreme Cartesian/Kantian/Platonic (in Rorty’s use of the term) either/or: objectivity OR relativism, science OR nonscience, determinate mathematical solutions OR ideological emotional bickering. Homo economicus was erected as an epistemic foundation to solve all these anxieties and either/ors.

Of course, as any good Deweyan, I think all these either/ors are nonsense. The understanding of science that emerges from the so-called “growth of knowledge” literature in postempiricist philosophy of science (i.e., the work of Thomas Kuhn, Lakatos, Karl Popper, Paul Feyerabend, Michael Polanyi, Richard Bernstein, Richard Rorty, etc.) has shown that this positivist conception of science (that is, that science consists of algorithmic theory choice based on correspondence with theory-free, brute “facts” of the “external world”) is woefully inaccurate. Dialogical Aristotelian practical reasoning in the community of scientists plays just as much of a role in formulating a scientific consensus as empirical verification. This does not undermine science’s claims to objectivity or rationality; in fact, it puts such claims in more epistemically tenable terms.

Further, the desire to make the social sciences just another extension of the natural sciences, as Hayek shows in The Counter-Revolution of Science, and as even positivists like Milton Friedman argue, is a completely misleading urge that has led to some of the worst follies in modern social theory. Obviously, I cheer the fact that “homo economicus is dead, and we have killed him,” but now that we’ve “out-rationalized the rationalizer of all rationalizers,” we must try to re-evaluate our economic theories and methods to, as Bernstein or Dewey would put it, “reconstruct” our economic science.

In short, immanentizing the eschaton in epistemology and philosophy of science created homo economicus.

For the record, you don’t have to be a radical scientific anti-realist like Feyerabend or Rorty to agree with my analysis here.[2] I myself wax more towards Quine than Rorty in scientific matters. However, the main point of philosophy of science since positivism is that the exact type of foundationalist epistemology undergirding modern positivist methodology in the mainstream of the economics profession, and the concept of rationality used to buttress it, rests on a naive view of science, natural or social.

Notably, this critique is largely unrelated to much of the Austrian school. Mises’ own conception of rationality is mostly unrelated to homo economicus as he understands rationality to be purposive action, emphasizing that economists first understand the subjective meaning from the point of view of the economic actor him/herself before declaring any action “irrational.”[3] [4]

What are your thoughts on this? Are neoclassical and behavioral economics both still way too influenced by the spirit of homo economicus, or am I off the mark? Is my analysis of the historical conditions that led to the rise of homo economicus right? Please, discuss in the comments.

[1] Consider this quote from Jevons’s magnum opus, The Theory of Political Economy: “Economics, if it is to be a science at all, must be a mathematical science.”

[2] In fact, I doubt anybody mentioned is really a scientific anti-realist. I agree with Bernstein that Feyerabend is best read as a satirist of the Cartesian anxiety and of the extreme either/or of relativism and objectivism in philosophy of science, and I think Rorty’s views are more complex than simple scientific anti-realism, but that’s an unrelated point.

[3] Of course, any critique of epistemic foundationalism would apply to Mises, especially his apriorism; after all, Mises did write a book called The Ultimate Foundation of Economic Science, and the Cartesian anxiety is strong with him, especially in his later works. Notably, none of this applies to most of Mises’ students, especially Schutz, Machlup, and Hayek.

[4] For a more detailed discussion of Mises and the Austrians on rationality, see my blog post here or this paper by Mario Rizzo. For a more general discussion of the insights of the type of philosophy of science I’m discussing, see Chapter 2 of Richard Bernstein’s excellent 1983 book Beyond Objectivism and Relativism: Science, Hermeneutics, and Praxis.

The Poverty Of Democracy

I had been a strong proponent of democracy until the spring of 2012, when I picked up Hans-Hermann Hoppe’s Democracy: The God That Failed. Since then, I have never looked at democracy with the same warm feelings again. Now, in this post, I would like to explore why democratic political representation is an impossibility and why democracy deals poorly with value pluralism – the fact that society holds various fundamental values that are in conflict with each other. In addition, I would like to urge that we look for other political possibilities and stop maintaining that democracy is the end point of all forms of social organization.

What most people find attractive about democracy is its underlying idea that the elected body is an embodiment of the general will of the public, as if the public had reached some kind of general agreement on public policies and legislation. It is believed that with regular elections, the rulers are in power for a limited time and they “will be compelled by the threat of dismissal to do what public opinion wants them to do” (Popper, 1963, p. 345). Gerard Casey writes in Libertarian Anarchy (2012) that “[T]he central characteristic of representation by agency is that the agent is responsible to his principal and is bound to act in the principal’s interest” (Casey, 2012, p. 125). It is, however, questionable to what extent elected representatives can truly represent their constituency and to what extent the public voice can be considered univocal. We must also beware of attributing “to the voice of the people a kind of final authority and unlimited wisdom” (Popper, 1963, p. 347). When society holds a vox populi vox dei attitude, it can easily slip into a tyranny of the majority. A society ruled by public opinion by no means guarantees social justice.

It is important to realize that the notion of representation is highly questionable. According to public choice theory, political agents cannot possibly truly represent their constituencies when members of a society have different comprehensive doctrines, hold different values, and have different interests. Public choice theory applies economic methods to the field of political theory and provides some interesting insights that are relevant for political philosophy. It maintains that politics is ruled by clashing opinions among policy makers and among members of the constituency. One may, for example, desire to build new roads with public funds, another may want to use public funds for the modernization of the military and defense, a third may desire to spend more on social welfare, a fourth on education, etc.

Given that we live in a world of value pluralism, it is difficult for policy makers to pursue and represent the ‘public interest’. Furthermore, special minority interest groups may have incentives to organize themselves in order to influence public policies through lobbying. When the expected gain from lobbying for such a minority interest group is greater than the cost of the lobbying effort, it has a strong incentive to influence legislators. Large interest groups, such as taxpayers in general, have far weaker incentives to campaign for particular legislation, because the benefits of their actions, if they are successful, are spread much more widely among individual taxpayers.
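A toy calculation can illustrate this asymmetry of concentrated benefits and diffuse costs. The sketch below is purely hypothetical: the subsidy size, group sizes, and lobbying cost are invented for illustration, not drawn from any study cited here.

```python
# Hypothetical illustration of concentrated benefits vs. diffuse costs.
# All numbers are invented for the sake of the example.

def per_member_stake(total_amount, group_size):
    """Benefit or cost of a policy, spread evenly across a group."""
    return total_amount / group_size

subsidy = 50_000_000              # value of a proposed subsidy, in dollars
producers = 1_000                 # members of the small interest group
taxpayers = 50_000_000            # people who collectively fund the subsidy
lobbying_cost_per_member = 2_000  # cost of campaigning, per person

gain_per_producer = per_member_stake(subsidy, producers)   # $50,000 each
cost_per_taxpayer = per_member_stake(subsidy, taxpayers)   # $1 each

print(f"Gain per producer: ${gain_per_producer:,.0f}")
print(f"Cost per taxpayer: ${cost_per_taxpayer:,.2f}")
print("Producers find lobbying worthwhile:",
      gain_per_producer > lobbying_cost_per_member)   # True
print("Taxpayers find resisting worthwhile:",
      cost_per_taxpayer > lobbying_cost_per_member)   # False
```

Each producer stands to gain far more than lobbying costs them, while each taxpayer’s share of the bill is too small to justify organizing against it, which is why the diffuse majority tends to stay silent.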

When the principal believes that the cost of being politically active – keeping oneself up-to-date with political actualities and being involved with political campaigns – is not worth the benefits, the principal may become ‘rationally ignorant’ of politics. This gives representatives more incentive not to pay attention to the public interest. Rationally ignorant principals do not know who their representatives are or what they do. This consequently weakens politicians’ sense of accountability for their actions and encourages them to sell themselves to donors and to pursue personal agendas. Different interests, incentives, and ideologies among principals and political agents therefore result in unequal representation.

I believe that Casey is right when he asserts that there is

“no interest common to the constituency as a whole, or, if there is, it is so rare as to be practically non-existent. That being the case, there is nothing that can be represented” (Casey, 2012, p. 125).

Imagine that there is a piece of legislation that our representatives can either pass or not with 35 per cent of the public in favour of the legislation and 65 per cent who oppose it. If our representatives pass the legislation, they will represent the 35 per cent and ignore the interests of the 65 per cent. If they do not pass the legislation, they will represent the 65 per cent and cease to represent the interests of the 35 per cent.

“In this very normal political scenario, it is not that it is difficult to represent a constituency – it is rather that it is impossible” (Casey, 2012, p. 125).

A representative democracy is therefore actually quite inadequate for dealing with a pluralistic society, as it cannot fulfill its promise of representing the will of the people. Democracy is moreover a system that is inherently violent, because it divides people along the lines of their comprehensive doctrines. People with similar political thoughts organize themselves into groups to campaign against people who hold conflicting ideas. In a democracy, these people then vote for their preferred ruler to rule over people who may have contrasting views or who may be altogether indifferent to political issues. It has never happened that turnout at an election is 100 per cent. According to Eurostat.com, the average turnout rate in Europe is around 43 per cent. Nonetheless, that 43 per cent chooses political agents who are also expected to represent the 57 per cent of the constituency that did not vote. The violent nature of democracy is that with every vote the voter attempts to impose their preferred rulers or legislation on others. This basically makes it a system in which people lose their political autonomy to other voters.

I believe that in order to deal more adequately with value pluralism we have to look for political possibilities that lie beyond a representative democracy. Instead of considering democracy as the end of all forms of social organization, we should ask ourselves how we could discover better forms of social organization.

References
Popper, K. (1963). Conjectures and Refutations. London: Routledge.

Casey, G. (2012). Libertarian Anarchy: against the state. London: Continuum International Publishing Group.