In Search of Firmer Cosmopolitan Solidarity: The Need for a Sentimentalist Case for Open Borders

Most arguments for open borders are phrased in terms of universalized moral obligations to non-citizens. These obligations are usually framed as “merely” negative (e.g., that Americans have a duty not to impede the movement of an impoverished Mexican worker or Syrian refugee seeking a better life) rather than positive (e.g., the first obligation does not imply that Americans have a duty to provide generous welfare benefits to immigrants and refugees), but they are nonetheless phrased as obligations owed to people in virtue of their rationality rather than their nationality.

Whether utilitarian, moral intuitionist, or deontological, these arguments assume that nation of origin is not a “morally relevant” consideration for one’s right to immigrate, and they implicitly rely on some other criterion of moral relevance to cement a purely moral solidarity that extends beyond national borders. What they have in common is an appeal to rights grounded in a common human capacity, something metaphysically essential to our common humanity.

Those arguments are all coherent and possibly sound, and they are the arguments that originally convinced me to support open borders. The problem is that they are often very unconvincing to people skeptical of immigration, because they merely beg the question by assuming that nationality is irrelevant to moral obligation. As a critic of one of my older pieces on immigration observed, most immigration skeptics are implicitly tribalist nationalists, not philosophically consistent consequentialists or deontologists. They have little patience for theoretical, morally pure metaphysical arguments concluding in any obligation, even a merely negative one, to immigrants. They view their obligations to those socially closer to them as a trump card (pardon the pun) over any morally universalized consideration. So long as they can identify with someone else as an American (or whatever their national identity may be), they view that person’s considerations as relevant. If they cannot identify with someone on the basis of national identity, they do not view an immigrant’s theorized rights or utility functions as relevant.

There are still several problems with this tribalist perspective. Given that nation-states are far from culturally homogeneous, and that cultural commonalities often transcend borders in important respects, why does one’s ability to “identify” on the basis of tribal affiliation stop at a nation-state’s borders? Further, there are many other affinities one may have with a foreigner that may be equally important, if not more important, to one’s ability to “identify” with someone than national citizenship. They may be a fellow Catholic or Christian, a fellow fan of football, a fellow manufacturing worker, a fellow parent, and so on. Why is “fellow American” the most socially salient form of identification, one that permits keeping a foreigner in a state of tyranny and poverty, while “fellow Christian” or any of the many other identifiers people find important is not?

However, these problems are not taken seriously by those who hold this perspective, because the tribalist outlook isn’t about rational coherence; it is about non-rational sentiments and particularized perspectives on historical affinities. And even if a skeptic of immigration takes these problems seriously, the morally pure, universalizing arguments are no more convincing to a tribalist.

I believe this gets at the heart of most objections Trump voters have to immigration. They might raise welfare costs, crime, lost native jobs, or fear of cultural collapse as post-hoc rationalizations, but their lack of solidarity with non-natives, rooted in nationalist affinities, lies beneath these rationalizations. Thus when proponents of open borders respond, whether with economic studies showing that these concerns are inconsistent with the facts or by pointing out that the same concerns apply to the native-born population and yet nobody proposes similar restrictions on citizens, the responses fall on deaf ears. Such concerns are irrelevant to the heart of anti-immigrant sentiment: a lack of solidarity with anyone who is not a native-born citizen.

In this essay, drawing from the sentimentalist ethics of David Hume and Richard Rorty’s perspective on liberal solidarity, I want to sketch a vision of universalized solidarity that could win over tribalists to the side of, if not purely open borders, at least more liberal immigration policies and greater allowance for refugees. This is not so much a moral argument of the form most arguments for open borders have taken as a strategy for cultivating the sentiments of a (specifically American nationalist) tribalist so that he is more open to the concerns and sympathies of someone with whom he does not share a national origin. The main point is that we shouldn’t try to argue away people’s sincere, deeply held tribalist and nationalist emotions, but should seek to redirect them in a way that does not lead to massive suffering for immigrants.

Rorty on Kantian Rationalist and Humean Sentimentalist Arguments for Universalized Human Rights

In an article called “Human Rights, Rationality, and Sentimentality,” the American pragmatist philosopher Richard Rorty discusses two strategies for expanding human rights culture to the third world. One, which he identifies with philosophers such as Plato and Kant, involves appealing to some faculty which all humans have in common, namely rationality, and declaring all other considerations, such as kinship, custom, religion, and (most importantly for present purposes) national origin, “morally irrelevant” to whether an individual has human rights and should be treated as such. Arguments of this sort, Rorty says, try to use rigorous argumentation to answer the rational egoist’s question “Why should I be moral?” They trace back to Plato’s discussion of the Ring of Gyges in the Republic and run through Enlightenment attempts to find an algorithmic, rational foundation for morality, such as the Kantian categorical imperative. This is the strategy, in varying forms, that most arguments in favor of open borders pursue.

The second strategy, which Rorty identifies with philosophers such as David Hume and Annette Baier, is to appeal to the sentiments of those who do not respect the rights of others. Rather than trying to answer “Why should I be moral?” in an abstract, philosophical sense, so that we have an a priori, algorithmic justification for treating others as equals, this view advocates answering the more immediate and relevant question “Why should I care about someone’s worth and well-being even if it appears to me that I have very little in common with them?” And rather than answering the former question with argumentation that appeals to our common rational faculties, it answers the latter by appealing to our sentiments, showing that we do have something else in common with that person.

Rorty favors the second, Humean approach for one simple reason: in practice, we are not dealing with rational egoists who substitute ruthless self-interest for altruistic moral values. We are dealing with irrational tribalists who substitute less-encompassing attitudes of solidarity for more-encompassing ones. They aren’t concerned with why they should be moral in the first place and what that means; they are concerned with how certain moral obligations extend to people with whom they find it difficult to emotionally identify. As Rorty says:

If one follows Baier’s advice one will not see it as the moral educator’s task to answer the rational egoist’s question “Why should I be moral?” but rather to answer the much more frequently posed question “Why should I care about a stranger, a person who is no kin to me, a person whose habits I find disgusting?” The traditional answer to the latter question is “Because kinship and custom are morally irrelevant, irrelevant to the obligations imposed by the recognition of membership in the same species.” This has never been very convincing since it begs the question at issue: whether mere species membership is, in fact, a sufficient surrogate for closer kinship. […]

A better sort of answer is the sort of long, sad, sentimental story which begins with “Because this is what it is like to be in her situation—to be far from home, among strangers,” or “Because she might become your daughter-in-law,” or “Because her mother would grieve for her.” Such stories, repeated and varied over the centuries, have induced us, the rich, safe and powerful people, to tolerate, and even to cherish, powerless people—people whose appearance or habits or beliefs at first seemed an insult to our own moral identity, our sense of the limits of permissible human variation.

If we agree with Hume that reason is the slave of the passions, or more accurately that reason is just one of many competing sentiments and passions, then it should come as no surprise that rational argumentation of the form found in most arguments for open borders is not very convincing to people for whom reason is not the ruling sentiment. How does one cultivate these other sentiments, if not through rational argumentation alone? Rorty comments throughout his political works that novels, poems, documentaries, and television programs, the genres that tell the sort of long, sad stories quoted above, have replaced sermons and Enlightenment-era treatises as the engine of moral progress since the end of the nineteenth century. Rational argumentation may convince an ideal-typical philosopher, but not many other people.

For Rorty, this sentimental ethics had two main applications, the first of which is mostly beside the point here while the second is directly relevant. First, Rorty wanted to make his vision of a post-metaphysical, post-epistemological intellectual culture, and a commonsensically nominalist and historicist popular culture, compatible with the sort of ever-expanding human solidarity necessary for political liberalism: a culture in which the algorithmic arguments for open borders I mentioned in the first half of this article would seem unconvincing for reasons more theoretical than the mere presence of nationalist sentiment. Though that is an intellectual project with which I have strong affinities, one need not buy that vision for the purposes of this article, which is narrowly to apply sentimental ethics to overcoming nationalist objections to immigration.

The second application was to point out a better way to implement the liberal cultural norm prohibiting the public humiliation of powerless minorities. The paradigmatic cases to which Rorty says such a sentimental education applies are how Serbs viewed Muslims, how Nazis viewed Jews, and how white southern Confederates viewed African-American slaves. Those are far more extreme cases, but it is not a stretch to add to that list the way Trump voters view Muslim refugees or Mexican migrant workers.

A Rortian Case against Rortian (and Trumpian) Nationalism

Though Rorty was a through-and-through leftist and likely viewed with scorn most nationalist arguments for restricting immigration, and especially for keeping refugees in war zones, there is one feature of his views that is uncomfortable for most radical proponents of immigration: unlike many of the other arguments on offer, his leaves wide open the notion of nationalism as a valid perspective.

Indeed, Rorty, from my very anarchist perspective, was at times uncomfortably nationalist. In Achieving Our Country he likens national pride to an individual’s self-respect, saying that while too much national pride can lead to imperialism, “insufficient national pride makes energetic and effective debate about national policy unlikely.” He defended a vision of American national pride, along the lines of Deweyan pragmatism and transcendentalist romanticism, of a nation of ever-expanding democratic vistas. Though radically different from the sort of national pride popular in right-wing xenophobic circles, it is a vision of national pride nonetheless, and as such it is not something with which I and many other advocates of open borders are sympathetic.

Further, and more relevant to our considerations, he viewed national identity as a tool for expanding the sort of liberal sentiments he wanted. As he wrote in Contingency, Irony, and Solidarity:

Consider, as a final example, the attitude of contemporary American liberals to the unending hopelessness and misery of the lives of the young blacks in American cities. Do we say these people must be helped because they are our fellow human beings? We may, but it is much more persuasive, morally as well as politically, to describe them as our fellow Americans—to insist that it is outrageous that an American should have to live without hope. The point of these examples is that our sense of solidarity is strongest when those with whom solidarity is expressed are thought of as “one of us,” where “us” means something smaller and more localized than the human race.

It is obvious why many critics of immigration restrictions would view this attitude as counterproductive: this type of description cannot be applied at all in many of the scenarios most relevant to questions of immigration. Liberalism, in the sense Rorty borrowed from Shklar (and also the sense which I think animates much of the interest in liberalized immigration policies), is an intense aversion to cruelty and is concerned with ending cruelty as such. It wants to end cruelty whether it is the cruelty of the American government toward illegal immigrants or the suffering of native-born African-Americans resulting from centuries of cruelty by racists. This is surely something with which Rorty would agree, as he writes elsewhere in that same chapter:

[T]here is such a thing as moral progress and that progress is indeed in the direction of greater human solidarity. But that solidarity is not thought of as recognition of a core self, the human essence, in all human beings. Rather, it is thought of as the ability to see more and more traditional differences (of tribe, religion, race, customs, and the like) as unimportant when compared to the similarities with respect to pain and humiliation—the ability to think of people wildly different from ourselves as included in the range of ‘us.’

Surely, that moral progress doesn’t stop at the unimportant line of a national border. The problem is that appeals to national identity of the sort Rorty uses, or to mythologized national histories, do stop at the border.

Rorty is right that it is easier for people to feel a sense of solidarity with those from whom they are separated by fewer traditional differences, and that no amount of appeal to metaphysical constructions of human rationality will fully eclipse that psychological fact. However, the problem with solidarity along national lines is that it is much easier for people to stop there. In modern pluralistic, cosmopolitan societies such as America, it is hard for someone to stop their sense of solidarity at religion, tribe, custom, and the like. The minute they walk out the door of their home, the minute they arrive at their workplace, there is someone very close to them who does not fit that sense of solidarity, yet someone for whom they would still feel some obligation, based simply on seeing that person’s face, on mere proximity.

Stopping the line at national identity is much easier, since many Americans, particularly those in the midwestern and southeastern states which gave Trump his presidency, rarely interact with non-nationals, while they are far more likely to interact with someone who is distant from them in other ways. While other forms of solidarity are unstable for most people because they are too localized, nationalism is stable because it is general enough not to be upset by everyday experience of others, yet not general enough to be compatible with liberalism. Moral progress, if we pursue Rorty’s explicitly nationalist project, will halt at the national border, and his liberal project of ending cruelty will halt with it. There is an inconsistency between Rorty’s liberalism and his belief in national pride.

Further, insisting “because they are American” leads people to ask what it means to “be American,” a question which can only be answered, even by Rorty in his description of American national pride, by contrast with what isn’t American (see his discussion of Europe in “American National Pride”). It makes it difficult to see suffering as the salient identifier for solidarity, and it makes the other “traditional” differences standing in the way of Rorty’s moral progress seem more important than they should be. Indeed, this is exactly what we see in most xenophobic descriptions of foreigners as “not believing in American ideals.” Rorty’s humble, liberalized version of national pride faces a serious danger of turning into the sort of toxic, illiberal nationalism we have seen in recent years.

Instead, we should substitute the description Rorty offers as motivating liberal help for African-Americans in the inner city, “because they are American,” with the redescription Rorty uses elsewhere: “because they are suffering, and you too can suffer and have suffered in the past.” This is a sentimental appeal which can apply to all who suffer from cruelty, regardless of their national identity, and it is more likely to make more and more other differences seem unimportant. As Rorty’s ideas on cultural identity politics imply, the goal should be to replace “identity,” including national identity, with empathy.

Thus, in making an appeal to Rorty’s sentimentalism on behalf of open borders advocates, I want to point out very clearly that it is both possible and necessary to separate appeals to solidarity and sentiment from nationalism in order to serve liberal ends. The possibility that nationalist sentiments might seem acceptable under a non-rationalist form of ethics should not discourage those of us skeptical of nationalism from embracing and using its concepts.

Sentimental Ethical Appeals and Liberalized Immigration

The application of this form of sentimental ethics for those who want to liberalize immigration should be obvious. Our first step is to recognize that people’s tribalist sentiments aren’t going to be swayed by rationalist argumentation alone, since it merely begs the question. Our second step is to realize that what is ultimately more likely to convince them is not getting rid of their tribalist sentiments altogether, but redirecting those sentiments elsewhere. The goal should be to get people to see national identity as unimportant to those sentiments compared to other, more salient considerations, such as whether refugees and immigrants are suffering. The goal should be for nationalists to stop asking of immigrants “Are they going to be good Americans like me?” and start asking “Are they already people who, like me, have suffered?”

This does not mean we should stop making the good academic philosophical and economic arguments about how open borders could double global GDP and how rights should be recognized as not stopping at national identity; those are certainly convincing to the minority of us for whom tribalism isn’t an especially strong sentiment. But it does mean we should also recognize the power of novels like Under the Feet of Jesus, or of images like the viral, graphic one of a Syrian refugee child injured by a bombing that circulated last year. The knowledge that Anne Frank’s family was turned down by America for refugee status, the empathy for her family one gets from reading her diary, and the fear that we are perpetuating that same cruelty today are far more convincing than appeals to Anne Frank’s natural rights in virtue of her rational faculties as a human being.

Appeals to our common humanity in terms of our “rational faculties” or “natural rights” or “utility functions” and the like are not nearly as convincing to people who aren’t philosophers or economists as appeals to people’s capacity to suffer. Such images and sentimental cases are far more likely to cultivate a cosmopolitan solidarity than Lockean or Benthamite platitudes.

References:

Rorty, Richard. “American National Pride: Whitman and Dewey.” Achieving Our Country: Leftist Thought in Twentieth-Century America. Rpt. in The Rorty Reader. Ed. Christopher J. Voparil and Richard J. Bernstein. Malden: Wiley-Blackwell, 2010. 372-388. Print.

Rorty, Richard. “Human Rights, Rationality, and Sentimentality.” On Human Rights: The Oxford Amnesty Lectures. Rpt. in The Rorty Reader. Ed. Christopher J. Voparil and Richard J. Bernstein. Malden: Wiley-Blackwell, 2010. 352-372. Print.

Rorty, Richard. Contingency, Irony, and Solidarity. Cambridge: Cambridge University Press, 1999. Print.

 

The Deleted Clause of the Declaration of Independence

As a tribute to the great events that occurred 241 years ago, I wanted to recognize the importance of the unity of purpose behind supporting liberty in all of its forms. While an unequivocal statement of natural rights and the virtues of liberty, the Declaration of Independence also came close to bringing another vital aspect of liberty to the forefront of public attention. As has been addressed in multiple fascinating podcasts (Joe Janes, Robert Olwell), a censure of slavery and George III’s connection to the slave trade was in the first draft of the Declaration.

Thomas Jefferson, a man criticized for the inherent contradiction between his high morals and his active participation in slavery, was a major contributor to the popularization of classical liberal principles. Many have pointed to his hypocrisy: he owned over 180 slaves, fathered children with them, and did not free them in his will (because of his debts). Yet even as a slaveholder, Jefferson made his moral stance on slavery quite clear through his famous efforts toward ending the transatlantic slave trade, which exemplify early steps in securing the abolition of the repugnant practice of chattel slavery in America and in applying classically liberal principles to all humans. Abolition, however, might have come far sooner, sparing decades of appalling misery and its long-reaching effects, had his (hypocritical but principled) position been adopted from the day of the USA’s first taste of political freedom.

This is the text of the deleted Declaration of Independence clause:

“He has waged cruel war against human nature itself, violating its most sacred rights of life and liberty in the persons of a distant people who never offended him, captivating and carrying them into slavery in another hemisphere or to incur miserable death in their transportation thither.  This piratical warfare, the opprobrium of infidel powers, is the warfare of the Christian King of Great Britain.  Determined to keep open a market where Men should be bought and sold, he has prostituted his negative for suppressing every legislative attempt to prohibit or restrain this execrable commerce.  And that this assemblage of horrors might want no fact of distinguished die, he is now exciting those very people to rise in arms among us, and to purchase that liberty of which he has deprived them, by murdering the people on whom he has obtruded them: thus paying off former crimes committed against the Liberties of one people, with crimes which he urges them to commit against the lives of another.”

The Second Continental Congress, swayed by the hardline votes of South Carolina and by the desire to avoid alienating potential sympathizers in England, slaveholding patriots, and the harbor cities of the North that were complicit in the slave trade, dropped this vital statement of principle.

The removal of the anti-slavery clause from the Declaration was not the only time Jefferson’s efforts might have brought a premature end to the “peculiar institution.” Economist and cultural historian Thomas Sowell notes that Jefferson’s 1784 anti-slavery bill, which had the votes to pass but failed because a single ill legislator was absent from the floor, would have barred the expansion of slavery into any newly admitted states years before the Constitution’s infamous three-fifths compromise. One wonders whether America would have seen a secessionist movement or a Civil War, and how the economies of states from Alabama and Florida to Texas would have developed without slave labor, which in some states and counties constituted the majority of the population.

These ideas form a core moral principle for most Americans today, but they are not hypothetical or irrelevant to modern debates about liberty. Though America and the broader Western world have brought the slavery debate to an end, the larger world has not; though every country has now officially made enslavement a crime (something true only since 2007), many within the highest levels of government aid and abet the practice. Thirty million individuals around the world suffer under the same types of chattel slavery seen millennia ago, including in nominal US allies in the Middle East. The debate between the pursuit of non-intervention as a form of freedom and the defense of the liberty of others as a form of freedom has been consistently important since the 1800s (or arguably earlier), and I think it is vital that these discussions continue in the public forum. I hope this 4th of July reminds us that liberty is not just a distant concept, but a set of values that requires constant support, intellectual nurturing, and pursuit.

For more underrecognized history surrounding the founding of America, see my Before the Fourth series!

A Right is Not an Obligation

Precision of language in matters of science is important. Speaking recently with some fellow libertarians, we got into an argument about the nature of rights. My position: A right does not obligate anyone to do anything. Their position: Rights are the same thing as obligations.

My response: But if a right is the same thing as an obligation, why use two different words? Doesn’t it make more sense to distinguish them?

So here are the definitions I’m working with. A right is what is “just” or “moral”, as those words are normally defined. I have a right to choose which restaurant I want to eat at.

An obligation is what one is compelled to do by a third party. I am obligated to sell my car to Alice at a previously agreed-on price, or else Bob will come and take my car away from me using any means necessary.

Let’s think through an example. Under a strict interpretation of libertarianism, a mother with a starving child does not have the right to steal bread from a baker. But if she does steal the bread, then what? Do the libertarian police instantly swoop down from Heaven and give the baker his bread back?

Consider the baker. The baker indeed has a right to keep his bread. But he is under no obligation to get his bread back should it be stolen. The baker could take pity on the mother and let her go. Or he could calculate that the cost of having one loaf stolen is too low to justify expending resources to get it back.

Let’s analyze now the bedrock of libertarianism, the nonaggression principle (NAP). There are several formulations. Here’s one: “no one has a right to initiate force against someone else’s person or property.” Here’s a more detailed version, from Walter Block: “It shall be legal for anyone to do anything he wants, provided only that he not initiate (or threaten) violence against the person or legitimately owned property of another.”

A natural question to ask is, what happens if someone does violate the NAP? One common answer is that the victim of the aggression then has a right to use force to defend himself. But note again, the right does not imply an obligation. Just because someone initiates force against you, does not obligate you or anyone else to respond. Pacifism is consistent with libertarianism.

Consider another example. Due to a strange series of coincidences, you find yourself lost in the woods in the middle of a winter storm. You come across an unoccupied cabin that’s obviously used as a summer vacation home. You break in, and help yourself to some canned beans and shelter, and wait out the storm before going for help.

Did you have a right to break into the cabin? Under some strict interpretations of libertarianism, no. But even if this is true, all it means is that the owners of the cabin have the right, but not the obligation, to use force to seek damages from you after the fact. (They also had the right to fortify their cabin in such a way that you would have been prevented from ever entering.) But they may never exercise that right; you could ask for forgiveness and they might grant it.

Furthermore, under a pacifist anarcho-capitalist order, the owners might not even use force when seeking compensation. They might just ask politely; and if they don’t like your excuses, they’ll simply leave a negative review with a private credit agency (making it harder for you to get loans, jobs, etc.).

The nonaggression principle, insofar as it is strictly about rights (and not obligations), is about justice. It is not about compelling people to do anything. Hence, I propose a new formulation of the NAP: using force to defend yourself from initiations of force can be consistent with justice.

This formulation makes clear that using force is a choice. An initiation of force does not obligate anyone to do anything. And “excessive force” in response may itself be an injustice.

In short, justice does not require force.

Highly recommended work on Ayn Rand

Most scholarship on Ayn Rand has been of mediocre quality, according to Gregory Salmieri, co-editor of A Companion to Ayn Rand, which is part of the series “Blackwell Companions to Philosophy.” The other co-editor of the volume is the late Allan Gotthelf, who died during its last preparatory stages.

The reasons for the poor scholarship are diverse. Of course, Rand herself is a large element. She hardly ever participated in regular academic procedures, did not tolerate normal academic criticism of her work, and strictly limited the number of people who could authoritatively “explain” her Objectivist philosophy to herself and Nathaniel Branden. Before her death she appointed Leonard Peikoff as her “literary heir.” She inspired fierce combat against the outside world among her closest followers, especially when others wrote about Rand in a way not to their liking. The result was that just a small circle of admirers wrote about her ideas, often in a non-critical way.


On the other hand, the “rest of the academy” basically ignored her views, despite her continued popularity (especially in the US), her influence, particularly through her novels, and her large sales, especially after the economic crisis of 2008. For sure, Objectivists remain a minority both inside and outside academia. Yet despite the strong disagreement with her ideas, it would still be normal to expect regular academic output by non-Randians on her work; suffice it to point to the many obscure thinkers who have been elevated to the academic mainstream over the centuries. Yet Rand remains in the academic dark; the bias against her work is strong and influential. That said, a slight change is visible. Some major presses have published books on Rand in the past years, prime examples being Jennifer Burns’ Goddess of the Market: Ayn Rand and the American Right (2009) and Anne C. Heller’s Ayn Rand and the World She Made (2010). And this volume is another case in point.

One of the strong points of A Companion to Ayn Rand is that the contributions meet all regular academic standards, despite the fact that the volume originates from the Randian inner circle. It offers proper explanation and analysis of her ideas and normal engagement with outside criticism. What little direct attack there is on the interpretations or alleged errors of others is left to the endnotes, albeit sometimes extensive ones. Let us say, in friendly fashion, that it proves hard to get rid of old habits!

This should not detract from the extensive, detailed, clearly written, and plainly good quality of the 18 chapters in this companion, divided into 8 parts covering overall context, ethics and human nature, society, the foundations of Objectivism, philosophers and their effects, art, and a coda on the hallmarks of Objectivism. The only disadvantage is the large number of references to her two main novels, The Fountainhead and Atlas Shrugged, which makes some acquaintance with these tomes almost a prerequisite for a great learning experience. Still, as a non-Randian doing work on her political ideas, I can affirm that this companion offers academically sound information and analysis on the full range of Rand’s ideas. So go read it if you are interested in this fascinating thinker.

The death of reason

“In so far as their only recourse to that world is through what they see and do, we may want to say that after a revolution scientists are responding to a different world.”

Thomas Kuhn, The Structure of Scientific Revolutions, p. 111

I can remember arguing with my cousin right after Michael Brown was shot. “It’s still unclear what happened,” I said, “based solely on testimony.” At that point, we were still waiting on the federal autopsy report from the Department of Justice. He said that in the video, you can clearly see Brown, back to the officer and with his hands up, as he is shot up to eight times.

My cousin doesn’t like police. I’m more ambivalent, but I’ve studied criminal justice for a few years now, and I thought that if both of us watched this video (no such video actually existed), it was probably I who would have the more nuanced grasp of what happened. So I said: “Well, I will look up this video, try and get a less biased take and get back to you.” He replied, sarcastically, “You can’t watch it without bias. We all have biases.”

And that seems to be the sentiment of the times: bias encompasses the human experience; it subsumes all judgments and perceptions. Biases are so rampant, in fact, that no objective analysis is possible. These biases may be cognitive, like confirmation bias, emotional fallacies, or the phenomenon of constructive memory; or inductive, like selectivity or ignoring base rates; or, as has been common to think, ingrained into experience itself.

The thing about biases is that they are open to psychological evaluation. There are precedents for eliminating them. For instance, one common explanation of racism is that familiarity breeds acceptance and unfamiliarity breeds intolerance (as Reason points out, people further from fracking sites have more negative opinions of the practice than people closer to them). So to curb racism (a sort of bias), children should interact with people outside their singular ethnic group. More clinical methodology seeks to transform mental functions from automatic to controlled, thereby introducing reflective measures into perception and reducing bias. Apart from these, there is the ancient Greek practice of reasoning, wherein patterns and evidence are used to generate logical conclusions.

If it were true that human bias is all-encompassing and essentially insurmountable, the whole concept of critical thinking would go out the window. Not only would we lose the critical-rationalist, Popperian mode of discovery, but also Socratic dialectic, as “higher truths” essentially disappear from the human lexicon.

The belief that biases are intrinsic to human judgment ignores psychological and philosophical methods to counter prejudice, because it posits that objectivity itself is impossible. This viewpoint has been associated with “postmodern” schools of philosophy, such as those Dr. Rosi commented on (e.g., those of Derrida, Lacan, Foucault, Butler), although it’s worth pointing out that the analytic tradition, with its origins in Frege, Russell, and Moore, represents a far greater break from the previous, modern tradition of Descartes and Kant, and often reached conclusions similar to the Continentals’.

Although theorists of the “postmodern” clique produced diverse claims about knowledge, society, and politics, the most famous figures are almost always associated with or incorporated into the political left. To make a useful simplification of viewpoints: progressives have generally accepted Butlerian non-essentialism about gender and Foucauldian terminology (discourse and institutions). Derrida’s poststructuralist critique targeted dichotomies and claimed that the philosophical search for Logos has been patriarchal, almost neoreactionary. (The month before Donald Trump’s victory, searches for the word “patriarchy” hit an all-time high on Google.) It is not a far-right conspiracy that European philosophers with strange theories have influenced and sought to influence American society; it is patent in the new political language.

Some people think of the postmodernists as all social constructivists, holding the theory that many of the categories and identifications we use in the world are social constructs without a human-independent nature (e.g., not natural kinds). Disciplines like anthropology and sociology have long since dipped their toes in these waters, and the broader academic community, too, accepts that things like gender and race are social constructs. But the ideas can and do go further: “facts” themselves are open to interpretation on this view; to even assert a “fact” is just to affirm power of some sort. This worldview subsequently degrades the status of science into an extended apparatus for confirmation bias, filling out the details of a committed ideology rather than providing us with new facts about the world. There can be no objectivity outside of a worldview.

Even though philosophy took a naturalistic turn with the philosopher W. V. O. Quine, seeing itself as integrating with and working alongside science, the criticisms of science as an establishment that emerged in the 1950s and 60s (and earlier) often disturbed its unique epistemic privilege in society: ideas that theory is underdetermined by evidence, that scientific progress is nonrational, that unconfirmed auxiliary hypotheses are required to conduct experiments and form theories, and that social norms play a large role in the process of justification all damaged the mythos of science as an exemplar of human rationality.

But once we have dismantled Science, what do we do next? Some critics have held up Nazi eugenics and phrenology as examples of the damage that science can do to society (never mind that we now consider them pseudoscience). Yet Lysenkoism and the history of astronomy and cosmology indicate that suppressing scientific discovery can be deleterious too. The Austrian physicist and philosopher Paul Feyerabend instead wanted a free society, one where science had equal power with older, more spiritual forms of knowledge. He thought the model of rational science exemplified by Sir Karl Popper was inapplicable to the real machinery of scientific discovery, and that the only methodological rule we could impose on science was: “anything goes.”

Feyerabend’s views are almost a caricature of postmodernism, although he denied the label “relativist,” opting instead for “philosophical Dadaist.” In his pluralism, there is no hierarchy of knowledge, and state power can even be introduced when necessary to break up scientific monopoly. Feyerabend, contra scientists like Richard Dawkins, thought that science was like an organized religion and therefore supported a separation of state and science just as we have a separation of church and state. Here is a way forward for a society that has started distrusting the scientific method… but if this is what we should do post-science, it’s still unclear how to proceed. Questions remain for anyone who loathes the hegemony of science in the Western world.

For example, how does the investigation of crimes proceed without strict adherence to the latest scientific protocol? Presumably, Feyerabend didn’t want to privatize law enforcement, but science and the state are very intricately connected. In 2005, Congress authorized the National Academy of Sciences to form a committee and conduct a comprehensive study of contemporary forensic science to identify community needs, consulting laboratory executives, medical examiners, coroners, anthropologists, entomologists, odontologists, and various legal experts. Forensic science, scientific procedure applied to the field of law, exists for two practical goals: exoneration and prosecution. However, the Forensic Science Committee revealed that severe issues riddle forensics (e.g., bite-mark analysis), and at the top of its list of recommendations is establishing an independent federal entity to devise consistent standards and enforce regular practice.

For top scientists, this sort of centralized authority seems necessary to produce reliable work, and it disagrees entirely with Feyerabend’s emphasis on methodological pluralism. Barack Obama formed the National Commission on Forensic Science in 2013 to further investigate problems in the field, and only recently Attorney General Jeff Sessions said the Department of Justice will not renew the commission. It’s unclear now how forensic science will resolve its ongoing problems, but what is clear is that the American court system would fall apart without the possibility of appealing to scientific consensus (especially in forensics), and that the only foreseeable way to solve the existing issues is through stricter methodology. (Just as with McDonald’s, there are enforced standards so that the product is consistent wherever one orders.) More on this later.

So it doesn’t seem to be in the interest of things like due process to abandon science or to separate it completely from state power. (It does, however, make sense to move forensic laboratories out from under direct administrative control, as the NAS report notes in Recommendation 4; this, though, is specifically to reduce bias.) In a culture where science is viewed as irrational, Eurocentric, ad hoc, and polluted with ideological motivations, or where Reason itself is seen as a particular hegemonic, imperial device to suppress different cultures, not only do we not know what to do; when we try to do things, we lose elements of our civilization that everyone agrees are valuable.

Although Aristotle separated pathos, ethos, and logos (adding that each informs the others), later philosophers like Feyerabend thought of reason as a sort of “practice,” with a history and connotations like any other human activity, falling far short of the sublime. One could no more justify reason outside of its European cosmology than the sacrificial rituals of the Aztecs outside of theirs. To communicate across paradigms, participants have to understand each other on a deep level, even becoming entirely new persons. When debates happen, they must happen on a principle of mutual respect and curiosity.

From this one can detect a bold argument for tolerance. Indeed, Feyerabend was heavily influenced by John Stuart Mill’s On Liberty. Maybe, in a world disillusioned with scientism and objective standards, the next cultural move is multilateral acceptance and tolerance for each others’ ideas.

This has not been the result of postmodern revelations, though. The 2016 election featured the victory of one psychopath over another, from two camps utterly consumed with vitriol for each other. Between Bernie Sanders, Donald Trump, and Hillary Clinton, Americans drifted toward radicalization as the only establishment candidate seemed to offer the same noxious, warmongering mess of the previous few decades of administration. Politics has only polarized further since the inauguration. The alt-right, a nearly perfect symbol of cultural intolerance, is regular news for mainstream media. Trump acolytes physically brawl with black-bloc Antifa in the same city that hosted the 1960s Free Speech Movement. It seems to be worst at universities. Analytic feminist philosophers asked for the retraction of a controversial paper, seemingly without reading it. Professors even get involved in student disputes, at Berkeley and more recently at Evergreen. The names each side uses to attack the other (“fascist,” most prominently), sometimes accurate but usually not, display a political divide between groups that increasingly refuse to argue their own side and prefer silencing their opposition.

There is no longer a tolerant left or a tolerant right in the mainstream. We are witnessing only shades of authoritarianism, eager to destroy each other. And what is obvious is that the theories and tools of the postmodernists (post-structuralism, social constructivism, deconstruction, critical theory, relativism) are as useful for reactionary praxis as for their usual role in left-wing circles. Says Casey Williams in the New York Times: “Trump’s playbook should be familiar to any student of critical theory and philosophy. It often feels like Trump has stolen our ideas and weaponized them.” The idea of the “post-truth” world originated in postmodern academia. It is the monster turning against Doctor Frankenstein.

Moral (cultural) relativism in particular promises only the rejection of our shared humanity. It paralyzes our judgment on female genital mutilation, flogging, stoning, human and animal sacrifice, honor killings, caste, and the underground sex trade. The afterbirth of Protagoras, cruelly resurrected once again, does not promise trials at Nuremberg, where the Allied powers appealed to something above and beyond written law to exact judgment on mass murderers. It does not promise justice for the ethnic cleansers of Srebrenica, as the United Nations is helpless to impose a tribunal from outside Bosnia-Herzegovina. Today, this moral pessimism laughs at the phrase “humanitarian crisis,” and at Western efforts to change the material conditions of fleeing Iraqis, Afghans, Libyans, Syrians, Venezuelans, North Koreans…

In the absence of universal morality, and the introduction of subjective reality, the vacuum will be filled with something much more awful. And we should be afraid of this because tolerance has not emerged as a replacement. When Harry Potter first encounters Voldemort face-to-scalp, the Dark Lord tells the boy “There is no good and evil. There is only power… and those too weak to seek it.” With the breakdown of concrete moral categories, Feyerabend’s motto — anything goes — is perverted. Voldemort has been compared to Plato’s archetype of the tyrant from the Republic: “It will commit any foul murder, and there is no food it refuses to eat. In a word, it omits no act of folly or shamelessness” … “he is purged of self-discipline and is filled with self-imposed madness.”

Voldemort is the Platonic appetite in the same way he is the psychoanalytic id. Freud’s das Es is able to admit of contradictions, to violate Aristotle’s fundamental laws of logic. It is so base, and removed from the ordinary world of reason, that it follows its own rules we would find utterly abhorrent or impossible. But it is not difficult to imagine that the murder of evidence-based reasoning will result in Death Eater politics. The ego is our rational faculty, adapted to deal with reality; with the death of reason, all that exists is vicious criticism and unfettered libertinism.

Plato predicts Voldemort with the image of the tyrant, and also with one of his primary interlocutors, Thrasymachus, when the sophist opens with “justice is nothing other than the advantage of the stronger.” The one thing Voldemort admires about The Boy Who Lived is his bravery, the one trait they share. This trait is missing in his Death Eaters. In the fourth novel the Dark Lord is cruel to his reunited followers for abandoning him and losing faith; their cowardice reveals the fundamental logic of his power: his disciples are not true devotees but opportunists, weak on their own merits and drawn like moths to every Avada Kedavra. Likewise, students flock to postmodern relativism to justify their own beliefs when the evidence is an obstacle.

Relativism gives us moral paralysis, letting in darkness. Another possible move after relativism is supremacy. One look at Richard Spencer’s Twitter demonstrates the incorrigible tenet of the alt-right: the alleged incompatibility of cultures, ethnicities, and races; that different groups of humans simply cannot get along. The Final Solution is no longer about extermination but about segregated nationalism. Spencer’s audience is almost entirely men who loathe the current state of things, share far-reaching conspiracy theories, and despise globalism.

The left, too, creates conspiracies, imagining a bourgeois corporate conglomerate that enlists economists and brainwashes through history books to normalize capitalism; for this reason they despise globalism as well, saying it impoverishes other countries or destroys cultural autonomy. For the alt-right, it is the Jews and George Soros who control us; for the burgeoning socialist left, it is the elites, the one percent. Our minds are not free; fortunately, each side will happily supply Übermenschen, in the form of statesmen or critical theorists, to save us from our degeneracy or our false consciousness.

Without a commitment to reasoned debate, tribalism has furthered polarization and a loss of humility. Each side also accepts science selectively, when it does not question science’s very justification. The privileged status that the “scientific method” maintains in polite society is denied when convenient; whether it is climate science, evolutionary psychology, sociology, genetics, biology, anatomy, or, especially, economics, one side or the other rejects it outright, without studying the material enough to immerse itself in what could be promising knowledge (as Feyerabend urged, and as the breakdown of rationality could have encouraged). And ultimately, equal protection, the one tenet of individualist thought that allows for multiplicity, is entirely rejected by both: we should be treated differently as humans, often because of the color of our skin.

Relativism and carelessness about standards and communication have given us supremacy and tribalism. They have divided rather than united. Voldemort’s chaotic violence is one possible outcome of rejecting reason as an institution, and it beckons to either political alliance. Are there any examples in Harry Potter of the alternative, Feyerabendian tolerance? Not quite. However, Hermione Granger serves as the Dark Lord’s foil, and gives us a model of reason that is not as archaic as the enemies of rationality would like to suggest. In Against Method (1975), Feyerabend compares different ways rationality has been interpreted alongside practice: in an idealist way, in which reason “completely governs” research, or a naturalist way, in which reason is “completely determined by” research. Taking elements of each, he arrives at an intersection in which each can change the other, both “parts of a single dialectical process.”

“The suggestion can be illustrated by the relation between a map and the adventures of a person using it or by the relation between an artisan and his instruments. Originally maps were constructed as images of and guides to reality and so, presumably, was reason. But maps, like reason, contain idealizations (Hecataeus of Miletus, for example, imposed the general outlines of Anaximander’s cosmology on his account of the occupied world and represented continents by geometrical figures). The wanderer uses the map to find his way but he also corrects it as he proceeds, removing old idealizations and introducing new ones. Using the map no matter what will soon get him into trouble. But it is better to have maps than to proceed without them. In the same way, the example says, reason without the guidance of a practice will lead us astray while a practice is vastly improved by the addition of reason.” p. 233

Christopher Hitchens pointed out that Granger sounds like Bertrand Russell at times, as in this quote about the Resurrection Stone: “You can claim that anything is real if the only basis for believing in it is that nobody has proven it doesn’t exist.” Granger is often the embodiment of anemic analytic philosophy, the institution of order, a disciple of the Ministry of Magic. However, though initially law-abiding, she quickly learns with Potter and Weasley the pleasures of rule-breaking. From the first book onward, she is constantly at odds with the de facto norms of the school, becoming more rebellious as time goes on. It is her levelheaded foundation, combined with her ability to transgress rules, that gives her an astute semi-deontological, semi-utilitarian calculus capable of saving the lives of her friends from the dark arts, and of helping to defeat the tyranny of Voldemort foretold by Socrates.

Granger presents a model of reason like Feyerabend’s map analogy. Although pure reason gives us an outline of how to think about things, it is not a static or complete blueprint, and it must be fleshed out with experience, risk-taking, discovery, failure, loss, trauma, pleasure, offense, criticism, and occasional transgressions past the foreseeable limits. Adding these addenda to our heuristics means that we explore a more diverse account of thinking about things and moving around in the world.

When reason is increasingly seen as patriarchal, Western, and imperialist, the only thing consistently offered as a replacement is something like lived experience. Some form of this idea is at least a century old, going back to Husserl, and it remains modest by reason’s Greco-Roman standards. Yet lived experience has always been pivotal to reason; we need only adjust our popular model. And we can see that we need not reject either one entirely. Another critique says reason is foolhardy, limiting, antiquated; but this is a perversion of its abilities, and it plays into the first criticism. We can see that there is room within reason for other pursuits and virtues, picked up along the way.

The emphasis on lived experience, which predominantly comes from the political left, is also antithetical to the cause of “social progress.” Those sympathetic to social theory, particularly the cultural leakage of the strong programme, are constantly torn between claiming (a) that science is irrational, and can thus be countered by lived experience (or whatnot), or (b) that science may be rational but reason itself is a tool of patriarchy and white supremacy and cannot be universal. (If you haven’t seen either of these claims very frequently, and think them a strawman, you have not been following university protests and editorials. Or radical Twitter: ex., ex., ex., ex.) Of course, as in Freud, this is an example of kettle logic: the signal of a very strong resistance. We see, though, that we need not accept or deny these claims and lose anything. Reason need not be stagnant or all-pervasive, and indeed we’ve been critiquing its limits since 1781.

Outright denying the process of science, whether the model is conjectures-and-refutations or something less stale, ignores that there is no single uniform body of science. Denial also dismisses the most powerful tool for making difficult empirical decisions. Michael Brown’s death was instantly a political affair, with implications for broader social life, and the event has completely changed the face of American social issues. The first autopsy report, from St. Louis County, indicated that Brown was shot at close range in the hand during an encounter with Officer Darren Wilson. The second, independent report commissioned by the family concluded the first shot had not in fact been at close range. After the disagreement with my cousin, the Department of Justice released the final investigation report, which determined that material in the hand wound was consistent with gun residue from an up-close encounter.

Prior to the report, the best evidence available as to what happened in Missouri on August 9, 2014, was the ground footage after the shooting and the testimonies of the officer and of Ferguson residents at the scene. There are two ways to approach the incident: reason or lived experience. The latter route leads to ambiguities. Brown’s friend Dorian Johnson and another witness reported that Officer Wilson fired his weapon first at range, under no threat, then pursued Brown out of his vehicle, until Brown turned with his hands in the air to surrender. However, before the St. Louis grand jury, half a dozen (African-American) eyewitnesses corroborated Wilson’s account: that Brown did not have his hands raised and was moving toward Wilson. In which direction does “lived experience” tell us to go, then? A new moral maxim, the duty to believe people, will lead to no non-arbitrary conclusion. (And a duty to “always believe x,” where x is a closed group, e.g. victims, will put the cart before the horse.) It appears that, in a case like this, treating evidence as objective is the only solution.

Introducing ad hoc hypotheses (e.g., that the Justice Department and the county examiner are corrupt) shifts the approach into one that uses induction and leaves lived experience behind; it also ignores how forensic anthropology is actually done. This is, indeed, the introduction of scientific standards. (By looking at incentives for lying, it might also employ findings from public choice theory, psychology, behavioral economics, etc.) So the personal-experience method creates unresolvable ambiguities, and presumably must eventually grant some allowance to scientific procedure.
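To make the contrast concrete, here is a minimal sketch, in Python, of what treating evidence as objective can look like: a toy Bayesian aggregation of independent items of evidence. Every number below is hypothetical, invented purely for illustration; nothing here estimates the actual case.

```python
# A toy Bayesian aggregation of independent evidence. All numbers are
# hypothetical illustrations, not estimates about the actual case.

def update(prior, lik_if_h, lik_if_not_h):
    """Return the posterior P(H | evidence) via Bayes' rule."""
    numerator = prior * lik_if_h
    return numerator / (numerator + (1 - prior) * lik_if_not_h)

# H = "the close-range account is correct." Start agnostic.
p = 0.5

# Each pair: (P(evidence | H), P(evidence | not H)) -- made-up values.
evidence = [
    (0.9, 0.2),  # forensic residue consistent with close range
    (0.7, 0.4),  # one corroborating eyewitness
    (0.7, 0.4),  # a second, independent eyewitness
]

for lik_h, lik_not_h in evidence:
    p = update(p, lik_h, lik_not_h)
    print(f"posterior so far: {p:.3f}")
```

The point is only structural: independent items of evidence compound under Bayes’ rule, whereas a maxim like “always believe x” offers no way to aggregate conflicting testimony at all.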

If we don’t posit a baseline rationality — Hermione Granger pre-Hogwarts — our ability to critique things at all disappears. Utterly rejecting science and reason, denying objective analysis on the presumption of overriding biases, breaking down naïve universalism into naïve relativism — these are paths to paralysis on their own. More than that, they are hysterical symptoms, because they often create problems out of thin air. Recently, a philosopher and a mathematician submitted a hoax paper, Sokal-style, to a peer-reviewed gender studies journal in an attempt to demonstrate what they see as a problem “at the heart of academic fields like gender studies.” The idea was to write a nonsensical, postmodernish essay; if the journal accepted it, that would indicate the field is intellectually bankrupt. Andrew Smart at Psychology Today instead wrote of the prank: “In many ways this academic hoax validates many of postmodernism’s main arguments.” And although Smart makes some informed points about problems in scientific rigor as a whole, he doesn’t hint at what the validation of postmodernism would entail: should we abandon standards in journalism and scholarly integrity? Is the whole process of peer review functionally untenable? Should we start embracing papers written without any intention of making sense, to look for knowledge concealed below the surface of jargon? The paper, “The conceptual penis,” doesn’t necessarily condemn the whole of gender studies; but, against Smart’s reasoning, it does show that counterintuitive or highly heterodox theory is considered perfectly average there.

There were other attacks on the hoax, from Slate, Salon, and elsewhere. The criticisms, often valid for the particular essay, typically didn’t move the conversation far enough. There is much more to this discussion. A 2006 paper in the International Journal of Evidence Based Healthcare, “Deconstructing the evidence-based discourse in health sciences,” called the use of scientific evidence “fascist.” In the abstract the authors state their allegiance to the work of Deleuze and Guattari. Real Peer Review, a Twitter account that collects abstracts from scholarly articles, regularly features essays from departments of women’s and gender studies, including a recent one from a Ph.D. student wherein the author identifies as a hippopotamus. Sure, the recent hoax paper doesn’t really say anything, but it intensifies this much-needed debate. It brings out these two currents — reason and the rejection of reason — and demands a solution. And we know that lived experience is often going to be inconclusive.

Opening up lines of communication is a solution. One valid complaint is that gender studies seems too insulated, in a way in which chemistry, for instance, is not. Critiquing a whole field does ask us to genuinely immerse ourselves first, and this is a step toward tolerance: it is a step past the death of reason and the denial of science. It is a step that requires opening the bubble.

The modern infatuation with human biases, together with Feyerabend’s epistemological anarchism, upsets our faith in prevailing theories and in the idea that our policies and opinions should be guided by the latest discoveries from an anonymous laboratory. Putting politics first and assuming subjectivity is all-encompassing, we move past objective measures for comparing belief systems and theories. However, isn’t the whole operation of modern science designed to work within our means? The Kantian system set limits on human rationality, and most science is aligned with an acceptance of fallibility. As Harvard cognitive scientist Steven Pinker says, “to understand the world, we must cultivate work-arounds for our cognitive limitations, including skepticism, open debate, formal precision, and empirical tests, often requiring feats of ingenuity.”

Pinker goes so far as to advocate scientism. Others need not; but we must understand an academic field before utterly rejecting it. We must think we can understand each other, and live with each other. We must think there is a baseline framework that allows permanent cross-cultural correspondence — a shared form of life by which a Ukrainian can interpret a Russian and a Cuban an American. The rejection of Homo sapiens commensurability, championed by people like Richard Spencer and those in identity politics, is a path to segregation and supremacy. We must reject Gorgian nihilism about communication, and the Presocratic relativism that traps our moral judgments in inert subjectivity. From one Weltanschauung to the next, our common humanity — which endures across class, ethnicity, sex, and gender — allows open debate across paradigms.

In the face of relativism, there is room for a nuanced middle ground between Pinker’s scientism and the rising anti-science, anti-reason philosophy; Paul Feyerabend sketched out a basic blueprint. Rather than condemning reason as a Hellenic germ of Western cultural supremacy, we need only adjust the theoretical model to incorporate the “new America of knowledge” into our critical faculty. It is the raison d’être of philosophers to present complicated things in a more digestible form; to “put everything before us,” as Wittgenstein says. Hopefully, people can then reach their own conclusions, and embrace the communal human spirit as they do.

However, this may not be so convincing. It might be true that we have a competition of cosmologies: one believes in reason and objectivity, the other thinks reason is callow and all things are subjective. These two perspectives may well be incommensurable. If I try to defend reason, I invariably must appeal to reasons, and thus argue circularly. If I try to claim “everything is subjective,” I make a universal statement and simultaneously contradict myself. Between begging the question and contradicting oneself, there is not much indication of where to go. Perhaps we just have to look at history, note the results of either course when it has been applied, and take that record as a rhetorical indication of which path to choose.

The Protestant Reformation and freedom of conscience

This year we celebrate 500 years of the Protestant Reformation. On October 31, 1517, the then Augustinian monk, priest, and teacher Martin Luther nailed to the door of a church in Wittenberg, Germany, a document with 95 theses on salvation — that is, on the way people are led by the Christian God to Heaven. Luther was scandalized by the Roman Catholic Church’s sale of indulgences, believing that this practice did not correspond to biblical teaching. Luther understood that salvation was given by faith alone. The Catholic Church understood that salvation was a combination of faith and works.

The practice of nailing a document to the door of the church was not uncommon, and Luther’s intention was to hold an academic debate on the subject. However, Luther’s ideas found many sympathizers, and a widespread protest movement within the Roman Catholic Church quickly began. Over the years, other leaders such as Ulrich Zwingli and John Calvin joined Luther. However, the main leaders of the Roman Catholic Church did not agree with the Reformers’ point of view, and so the Christian church in the West was divided into several groups: Lutherans, Anglicans, Reformed, Anabaptists, later followed by Methodists, Pentecostals, and many others. In short, the Christian church in the West has never been the same.

The Protestant Reformation was obviously a movement of great importance in world religious history. I also believe that few would disagree with its importance in the broader context of history, especially Western history. To mention just one example, Max Weber’s thesis that Protestantism (especially Calvinism, and more precisely Puritanism) was a key factor in the development of what he called modern capitalism is widely accepted, or at least enthusiastically debated. But I would like to briefly address here another impact of the Protestant Reformation on world history: the development of freedom of conscience.

Simply put, but I believe not oversimplified: after the fall of the Roman Empire and until the 16th century, Europe knew only one religion – Christianity – in only one variety – Roman Catholic Christianity. It is true that much of the paganism of the barbarians survived through the centuries, that Muslims occupied parts of Europe (mainly the Iberian Peninsula), and that other varieties of Christianity were practiced in parts of Europe (mainly Russia and Greece). But beyond that, the history of Christianity was a tale of an ever-increasing concentration of political and ecclesiastical power in Rome, and of an ever-deeper entanglement of priests, bishops, kings, and nobles. In short, Rome became increasingly central, and the distinction between church and state increasingly difficult to observe in practice. One of the legacies of the Protestant Reformation was precisely the debate about the relationship between church and state. With a multiplicity of churches and strengthening nationalisms, the model of a unified Christendom was never possible again.

Of course, this loss of unity in Christendom can cause melancholy and nostalgia in some, especially Roman Catholics. But one of its gains was the growth of the individual’s space in the world. This was not a sudden process, but slowly and surely it became clear that religious convictions could no longer be imposed on individuals. Especially in England, where the Anglican Church stood midway between Rome and Wittenberg (or Rome and Geneva), many groups emerged on the margins of the state church: Presbyterians, Baptists, Congregationalists, Quakers, and so on. These groups accepted being treated as second-class citizens in order to maintain their personal convictions. Something similar can be said of Roman Catholics in England, who began to live on the fringes of society. The new relationship between church and state in England was a point of discussion for many of the most important political philosophers of modernity: Thomas Hobbes, John Locke, Edmund Burke, and others. To disregard this aspect is to lose sight of one of the most important points of the debate in which these thinkers were involved.

The Westminster Confession of Faith, one of the most important documents produced in the period of the Protestant Reformation, has a chapter entitled “Of Christian Liberty, and Liberty of Conscience.” Of course there are issues in this chapter that may sound very strange to those who are not Christians or who are not involved in Christian churches. However, one point is immediately understandable to all: being a Christian is a matter of the inner forum, of conscience. No one can be compelled to be a Christian. At best such compulsion would produce only external adherence; inner adherence could never be satisfactorily verified.

Some time after the classical Reformation period, a new religious renewal movement occurred in England with the birth of Methodism. Its foremost leaders, John Wesley and George Whitefield, disagreed about salvation in a way not so different from what had previously divided Luther and the Roman Catholic Church. This time, however, there was no excommunication, inquisition, or war. Wesley simply told Whitefield, “Let’s agree to disagree.”

Agreeing to disagree is one of the great legacies of the Protestant Reformation. May we always try to convince each other by force of argument, not by force of arms. And may each person retain the right to decide for themselves, with freedom of conscience, which way seems best.

Where is the line between sympathy and paternalism?

In higher-ed news two types of terrifying stories come up pretty frequently: free speech stories, and Title IX stories. You’d think these stories would only be relevant to academics and students, but they’re not. These issues are certainly very important for those of us who hang out in ivory towers. But those towers shape the debate, and the unquestioned assumptions, that determine real-world policy in boardrooms and capitols. This is especially true in a world where a bachelor’s degree is the new GED.

The free speech stories have gotten boring because they all take the following form: group A doesn’t want to let group B talk about opinion b so they act like a bunch of jackasses. Usually this takes place at a school for rich kids. Usually those kids are majoring in something that will give them no marketable skills.

The Title IX stories are Kafkaesque tales where a well-intentioned policy (create a system to protect people in colleges from sexism and sexual aggression) turns into a kangaroo court that allows terrible people to ruin other people’s lives. (I hasten to add, I’m sure Title IX offices do plenty of legitimately great work.)

A great article in the Chronicle gives an inside look at one of these tribunals. For the most part it’s chilling. Peter Ludlow had been accused of sexual assault, but the claims weren’t terribly credible. As far as I can tell (based only on this article) he did some things that should raise some eyebrows, but nothing genuinely against any rules. Nonetheless, the accusations were a potential PR and liability problem for the school so he had to go, regardless of justice.

The glimmer of hope comes with the testimony of Jessica Wilson. She managed to shake them out of their foregone conclusion and got them to consider that women above the age of consent can be active participants in their own lives instead of victims waiting to happen. Yes, bad things happen to women, but that’s not enough to jump to the conclusion that all women are victims and all men are aggressors.

The big question at the root of these types of stories is how much responsibility we ought to take for our lives.

Free speech: Should I be held responsible for saying insensitive (or unpatriotic) things? Who would enforce those obligations? Should I be held responsible for dealing with the insensitive things other people might say? Or should I even be allowed to hear what other people might say, given that I can’t take responsibility for evaluating it “critically” and coming to the right conclusion?

Title IX: Should women be responsible for their own protection, or is that akin to blaming the victim? We’ve gone from trying to create an environment where everyone can contribute to taking away agency. In doing so we’ve also created a powerful mechanism that can be abused. This is bad because of the harm it does to the falsely accused, but it also has the potential to delegitimize the claims of genuine victims and fractures society. But our forebears weren’t exactly saints when it came to treating each other justly.

Where is the line between helping a group and infantilizing them?

At either end of a spectrum I imagine caricature versions of a teenage libertarian (“your problems are your own, suck it up while I shout dumb things at you”) and a social justice warrior (“it’s everyone else’s fault! Let’s occupy!”). Let’s call those end points Atomistic Responsibility and Social Responsibility. More sarcastically, we could call them Robot and Common Pool Responsibility. Nobody is actually at these extreme ends (I hope), but some people get close.

Either one seems ridiculous to anyone who doesn’t already subscribe to that view, but both have a kernel of truth. Fair or not, you have to take responsibility for your life. But we’re all indelibly shaped by our environment.

Schools have historically adopted a policy towards the atomistic end, but have been trending in the other direction. I don’t think this is universally bad, but I think those values cannot properly coexist within a single organization.

We can imagine some hypothetical proper point on the Responsibility Spectrum, but without a way to objectively measure virtue, the location of that point, the line between sympathy and paternalism, is an open question. We need debate to better position and re-position that line. I would argue that Western societies have been doing a pretty good job of moving that line in the right direction over the last 100 years (although I disagree with many of the ways our predecessors have chosen to enforce it).

But here’s the thing: we can’t move in the right direction without getting real-time feedback from our environments. Without variation in the data, we can’t draw any conclusions. What we need, more than a proper split of responsibility, is a range of possibilities being constantly tinkered with and explored.
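A minimal sketch of why variation matters (with hypothetical data points): the slope of a fitted line is literally undefined when the explanatory variable never varies, so identical policies everywhere teach us nothing.

```python
# Minimal illustration: with no variation in the explanatory variable,
# a regression slope is undefined (zero variance in the denominator).
# The data points are hypothetical.

def slope(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    var_x = sum((x - mx) ** 2 for x in xs)
    if var_x == 0:
        raise ValueError("no variation in x: no conclusion can be drawn")
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / var_x

print(slope([1, 2, 3], [2, 4, 6]))  # 2.0: variation lets us learn
try:
    slope([2, 2, 2], [2, 4, 6])     # identical "policies" teach nothing
except ValueError as e:
    print(e)
```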

We need a diversity of approaches. This is why freedom of speech and freedom of association are so essential. In order to get this diversity, we need federalism and polycentricity–stop trying to impose order from the top down on a grand scale (“think globally, act locally“), and let order be created from the bottom up. Let our organizations–businesses, churches, civic associations, local governments and special districts–adapt to their circumstances and the wishes of their stakeholders.

Benefiting from this diversity requires open minds and epistemic humility. We stand on the shore of a vast mysterious ocean. We’ve waded a short distance into the water and learned a lot, but there’s infinitely more to learn!

https://youtu.be/bzS39oghcnY?t=2m29s

(Sidenote: Looking for that Carl Sagan quote I came across this gem:

People are not stupid. They believe things for reasons. The last way for skeptics to get the attention of bright, curious, intelligent people is to belittle or condescend or to show arrogance toward their beliefs.

That about sums up my approach to discussing these sorts of issues. We’d all do better to occasionally give our opponents the benefit of the doubt and see what we can learn from them. Being a purist is a great way to structure your thought, but empathy for our opponents is how we make our theories strong.)

Does business success make a good statesman?

Gary Becker made the distinction between two types of on-the-job training: general and specific. The former consists of skills with wide applicability, which enable the worker to perform satisfactorily in different kinds of jobs: keeping one’s commitments, arriving on time to work, avoiding disturbing behavior, etc. All of these are moral traits that raise the productivity of the worker whatever his occupation may be. Specific on-the-job training, on the other hand, concerns only the peculiarities of a given job: knowing how many spoons of sugar your boss likes in his coffee, or which of your employees is best qualified to deal with the public. The knowledge provided by on-the-job training is incorporated into the worker; it travels with him when he moves from one company to another. Therefore, while general on-the-job training increases the worker’s productivity in every other job he gets, he profits little from the specific kind once he moves.

Of course, whether on-the-job training is general or specific is relative to each profession and industry. For example, a psychiatrist who works for a general hospital gets specific training in the concrete dynamics of its internal organization. If he later moves to a position in another hospital, his experience dealing with the internal politics of such institutions will count as general on-the-job training. If instead he goes freelance, that experience will be of little use to his career. Nevertheless, even if the said psychiatrist switches from working for a big general hospital to working on his own, he will carry with him valuable general on-the-job training: how to look after his patients, how to deal with their relatives, and so on.
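A toy numerical sketch may help fix the distinction (the numbers are invented for illustration): general training travels with the worker, while specific training pays off only at the employer where it was acquired.

```python
# A toy illustration of Becker's general/specific distinction.
# The numbers are invented purely for illustration.

def productivity(general, specific, same_employer):
    """Specific capital pays off only at the employer where it was acquired."""
    return general + (specific if same_employer else 0)

general, specific = 10, 5  # hypothetical units of each kind of training

print(productivity(general, specific, same_employer=True))   # 15 at the current job
print(productivity(general, specific, same_employer=False))  # 10 after moving on
```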

So, to what extent will the on-the-job training gained by a successful businessman enable him to be a good statesman? To the same degree that it enables a successful lawyer, a successful sportsman, or a successful writer to be one. Every successful person carries with him a set of personal traits that are very useful in almost every field of human experience: self-confidence, work ethic, constancy, and so on. If you lack any of them, you could hardly be a good politician, just as you could rarely achieve anything in any other field. But these qualities are the typical examples of general on-the-job training, and what we are inquiring into here is whether the specific on-the-job training of a successful businessman gives him a relative advantage as a politician, or at least a better chance of being a good one.

The problem is that there is no such thing as an a priori successful businessman. We can state that a doctor, an engineer, or a biologist needs certain qualifications to be a competent professional. But the performance of a businessman depends on a multiplicity of variables that prevents us from elucidating which traits would lead him to success.

Medicine, physics, and biology deal with “simple phenomena”. The limits to knowledge in such disciplines are relative to the state of investigation in those fields (see F. A. Hayek, “The Theory of Complex Phenomena”). The more those professionals study and the more they work, the better trained they will be.

On the other hand, the law and the market economy are cases of “complex phenomena” (see F. A. Hayek, Law, Legislation and Liberty). Since the limits to our knowledge of such phenomena are absolute, a discovery process of trial and error applied to concrete cases is the only way to weather such uncertainty. The judge states the solution the law provides to a concrete controversy, but the lawmaker can state what the law says only in general and abstract terms. In the same sense, the personal strategy of a businessman is successful only under certain circumstances.

So, how does the market economy survive its own complexity? The market does not need wise businessmen, but lots of purposeful ones, eager to thrive by following their stubborn vision of the business. Most of them will be wrong about their perception of the market and subsequently fail. A few others will prosper, since their plans meet, perhaps by chance, the changing demands of the market. Thus, the personal traits that led a successful businessman to prosperity were not universal, but the right ones for the specific time in which he carried out his plans.
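A toy simulation can illustrate this selection story (all parameters are arbitrary inventions): entrepreneurs stick stubbornly to fixed strategies, and “success” just means happening to sit near the market’s current demand, which drifts over time.

```python
import random

# A toy selection model: each entrepreneur sticks stubbornly to one fixed
# strategy (a number), while the market's "demand" drifts over time.
# Survivors are whoever happens to sit near current demand.
# All parameters are arbitrary illustrations.

random.seed(0)
entrepreneurs = [random.uniform(0, 10) for _ in range(1000)]

def survivors(pool, demand, tolerance=0.5):
    return [s for s in pool if abs(s - demand) < tolerance]

early = survivors(entrepreneurs, demand=3.0)
late = survivors(early, demand=7.0)  # the market has since shifted

print(len(early))  # some prosper under the early conditions...
print(len(late))   # ...but almost none of them fit the later market
```

The traits that filtered the early winners are exactly what disqualifies them once conditions change, which is the sense in which business success is circumstantial rather than universal.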

Having said that, would a purposeful and stubborn politician be a good choice for government? After all, Niccolò Machiavelli pointed out that initiative was the main virtue of the prince. A good statesman, then, would be the one who successfully handles the changing opportunities of life and politics. Notwithstanding, The Prince was, as Quentin Skinner showed, a parody: opportunistic behaviour is no good for the accomplishment of public duties and the protection of civil liberties.

Nevertheless, there is still a convincing argument for the businessman as a prospective statesman. If he has to deal with the system of checks and balances (the Congress and the courts), the law will act as the market’s selection process does. Every time a decision based on expediency collides with fundamental liberties, the latter must withstand the former: a sort of natural selection of political decisions.

Quite obvious, but not so trite. For a stubborn and purposeful politician not to become a menace to individual and public liberties, his initiative must not venture into constitutional design. No bypasses, no exceptions, not even reforms of the legal restraints on public authority should be allowed, even in the name of emergency — especially since most emergencies are brought about precisely by measures based on expediency.

Bruce Lee’s Application Of Taoist Philosophy In Jeet Kune Do


Bruce Lee was born on November 27, 1940, and died on July 20, 1973. Even though he was just 32 at his death, he had achieved much in his limited lifetime. He was recognized by Time magazine as one of the 100 most influential people of the 20th century.[1] He was a cha-cha champion in Hong Kong at age 18, a world-renowned martial artist, and a Chinese actor who was not only immensely popular in Asia but also made his breakthrough in Hollywood at a time when oriental actors were rarely accepted for lead roles. Less known among the public is his keen interest in philosophy, a subject he studied at the University of Washington. About where this interest came from, he wrote:

My majoring in philosophy was closely related to the pugnacity of my childhood. I often asked myself these questions: What comes after victory? Why do people value victory so much? What is ‘glory’? What kind of ‘victory’ is ‘glorious’?[2]

In one of my previous posts, I discussed the similarities between the libertarian concept of Spontaneous Order and the Taoist concept of the Tao. In this post I will discuss the application of Taoist philosophy in Jeet Kune Do (‘the way of the intercepting fist’), the martial art that Bruce Lee founded in his mid-20s, and its roots in Taoist philosophy. I will identify several Taoist aspects that form the philosophical foundation of Jeet Kune Do. First, however, I will relate an anecdote from his wife, Linda Cadwell, about Bruce Lee’s initial motivation to develop Jeet Kune Do at all.

Bruce Lee’s initial motivation for Jeet Kune Do
Bruce Lee started teaching martial arts to Westerners in his newly founded Jun Fan Gung Fu Institute, a training gym in Oakland, California. Then by late 1964, Bruce Lee received a letter with the signatures of the most important elder Chinese martial arts masters in San Francisco who did not

look favourably on Bruce’s teaching martial art to Westerners, or actually to anyone who was not Chinese. So strongly did they harbour this historically bound belief, that a formal challenge was issued to Bruce, insisting that he participate in a confrontation, the result of which would decide whether he could continue to teach the ‘foreign devils’. (Cadwell, 1998, p. 8)

Without hesitation, Bruce Lee accepted the challenge. Linda Cadwell remembers the fight that followed as a pivotal point in Bruce Lee’s life:

Within moments of the initial clash, the Chinese gung fu man [Bruce Lee’s contender] had proceeded to run in a circle around the room, out a door that led to a small back room, then in through another door to the main room. He completed this circle several times, with Bruce in hot pursuit. Finally, Bruce brought the man to the floor, pinning him helplessly, and shouted (in Chinese), ‘Do you give up?’ After repeating this question two or three times, the man conceded, and the San Francisco party departed quickly. The entire fight lasted about three minutes, leaving James and me ecstatic that the decisive conquest was so quickly concluded. Not Bruce. Like it was yesterday, I remember Bruce sitting on the back steps of the gym, head in hands, despairing over his inability to finish off the opponent with efficient technique, and the failure of his stamina when he attempted to capture the running man. For what probably was the first time in his life, Bruce was winded and weakened. Instead of triumphing in his win, he was disappointed that his physical condition and gung fu training had not lived up to his expectations. This momentous event, then was the impetus for the evolution of Jeet Kune Do and the birth of his new training regime. (Cadwell, 1998, pp. 11-12)

Now that we know that Jeet Kune Do originated in Bruce Lee’s discontent with the physical condition he had achieved through traditional gung fu training, I will discuss how Bruce Lee strove for a new martial art superior to the already existing ones, and how this martial art is ultimately rooted in Taoist philosophy.

Jeet Kune Do as a way of life
Bruce Lee had, throughout his whole life, been intrigued by the questions of how to find his true potential and how to express himself honestly. He wrote:

“Ever since I was a child I have had this instinctive urge for expansion and growth. To me, the function and duty of a quality human being is the sincere and honest development of one’s potential”.[3]

“When I look around, I always learn something, and that is to always be yourself, express yourself, to have faith in yourself. Do not go out and look for a successful personality and duplicate him. They always copy mannerism; they never start from the root of their being: that is, how can I be me?”[4]

Bruce Lee believed that the answers to both questions – how can I find my true potential and how can I be me so that I can express myself honestly – are ultimately related to one another.

1. Be one with the Tao; be formless like water, and be pliable
Bruce Lee believed that a person who is trained within a particular martial arts style and clings to it indefinitely, or a person who is trained only within a particular philosophical doctrine, becomes self-delusional. He thought that the person who is incapable of exceeding his style or doctrine is stiff and narrow-minded, and that this narrow-mindedness blinds him from observing objectively and seeing the truth. He is what Bruce Lee calls ‘the traditional man’. Bruce Lee wrote:

One can function freely and totally if he is ‘beyond system.’ The man who is really serious, with the urge to find out what truth is, has no style at all. He lives only in what is. (Bruce Lee, 1975, p. 17)

But in classical styles, system becomes more important than the man! The classical man functions with the pattern of a style! (Bruce Lee, 1975, p. 18)

How can there be methods and systems to arrive at something that is living? To that which is static, fixed, dead, there can be a way, a definite path, but not to that which is living. Do not reduce reality to a static thing and then invent methods to reach it. (Bruce Lee, 1975, p. 18)

Classical forms dull your creativity, condition and freeze your sense of freedom. You no longer ‘be,’ but merely ‘do,’ without sensitivity. (Bruce Lee, 1975, p. 19)

You cannot see a street fight in its totality, observing it from the viewpoint of a boxer, a kung-fu man, a karateka, a wrestler, a judo man and so forth. You can see clearly only when style does not interfere. You then see it without ‘like’ or ‘dislike;’ you simply see and what you see is the whole and not the partial. (Bruce Lee, 1975, p. 24)

He thought that committing himself to styles would limit both his potential and his self-expression. This critique, however, is not limited to martial arts. He extended it to Confucianism, a philosophy he considered too rigid and too narrowly focused on set rules and traditions. According to Bruce Lee, if man merely reveres and follows rules and mannerisms, he ceases to be a human being and becomes a mechanical man, a product of mere tradition. The philosophy that perfectly fits Bruce Lee’s vision of a self-expressive and ‘style-less’ martial art is the epistemologically anarchistic Taoism. How, then, can a person, according to Bruce Lee and Taoism, find his true potential and express himself honestly? The answer is to become formless, pliable, and forever adaptable, just as the Tao is formless, pliable, and forever in flux.

The Tao Te Ching states the following metaphor of life (flexibility and softness) and death (rigidity and hardness):

A man is born gentle and weak.
At his death he is hard and stiff.
Green plants are tender and filled with sap.
At their death they are withered and dry.
Therefore the stiff and unbending is the disciple of death.
The gentle and yielding is the disciple of life.
Thus an army without flexibility never wins a battle.
A tree that is unbending is easily broken.
The hard and strong will fall.
The soft and weak will overcome. (Tao Te Ching, Chapter 76)

Both Lao Tze and Bruce Lee took water as the ultimate metaphor for that which is flexible and soft. Bruce Lee maintains that in order to fulfil your true potential and express yourself honestly you should become like water: formless. To be like water means to be an objective observer, to be relaxed, and to flow with life – to be one with the Tao.

In the Tao Te Ching one can find the following lines:

Under heaven nothing is more soft and yielding than water.
Yet for attacking the solid and strong, nothing is better;
It has no equal.
The weak can overcome the strong;
The supple can overcome the stiff. (Tao Te Ching, Chapter 78)

There is a story about Bruce Lee’s discovery of what it means to be like water and to be united with the Tao. I am not sure about the authenticity of the story, but I will share it nonetheless as it helps to illustrate the significance of being formless in combat or in life:

Bruce, at the age of seventeen, had been training in gung fu for four years with Sifu Yip Man, yet had reached an impasse. When engaged in sparring Bruce found that his body would become tense, his mind perturbed. Such instability worked against his goal of efficiency in combat.

Sifu Yip Man sensed his trouble, and approached him. ‘Lee,’ he said, ‘relax and calm your mind. Forget about yourself and follow the opponent’s movements. Let your mind, the basic reality, do the counter-movement without any interfering deliberation. Above all, learn the art of detachment.’

Bruce Lee believed he had the answer to his problem. He must relax! Yet there was a paradox: the effort in trying to relax was inconsistent with the effortlessness in relaxing, and Bruce found himself back in the same situation.

Again Sifu Yip Man came to Bruce and said, ‘Lee, preserve yourself by following the natural bends of things and don’t interfere. Remember never to assert yourself: never be in frontal opposition to any problem, but control it by swinging with it.’

Sifu Yip Man told Bruce to go home for a week and think about his words. Bruce spent many hours in meditation and practice, with nothing coming of it. Finally, Bruce decided to go sailing in a junk (boat). Bruce would have a great epiphany. ‘On the sea, I thought of all my past training and got mad at myself and punched the water. Right then at that moment, a thought suddenly struck me. Wasn’t this water the essence of gung fu? I struck it, but it did not suffer hurt. I then tried to grasp a handful of it but it was impossible. This water, the softest substance, could fit into any container. Although it seemed weak, it could penetrate the hardest substance. That was it! I wanted to be like the nature of water.

Therefore in order to control myself I must accept myself by going with, and not against, my nature. I lay on the boat and felt that I had united with Tao; I had become one with nature.[5]

Bruce Lee so emphasized the importance of ‘a style of no style’ that he later came to regret the name Jeet Kune Do, since a name implies limitations or specific parameters. Bruce Lee wanted it to resemble the Tao: nameless and of almost supernatural power. Chapter one of the Tao Te Ching states:

The Tao that can be told is not the eternal Tao.
The name that can be named is not the eternal name. (Tao Te Ching, Chapter 1)

See this video in which Bruce Lee asserts that we should be like water:

2. Break rules and conventions and have no way as your way
Jeet Kune Do does not limit itself to styles. It takes from other styles what is useful, discards what is useless, and adds what is uniquely one’s own. The slogan of the Jeet Kune Do logo reads two things: (a) take no way as your way, and (b) take no limitation as your limitation. Since styles, rules, conventions, and mannerisms limit us, we should deconstruct and transcend them. Jeet Kune Do is therefore iconoclastic. Bruce Lee wrote:

Jeet Kune Do favors formlessness so that it can assume all forms and since Jeet Kune Do has no style, it can fit in with all styles. As a result, Jeet Kune Do utilizes all ways and is bound by none and, likewise, uses any techniques or means which serve its end. (Bruce Lee, 1975, p. 12)

What are the characteristics of a martial arts with no style? According to Bruce Lee, it becomes open-minded, non-traditional, simple, direct, and effective.

Bruce Lee contended that:

Jeet Kune Do does not beat around the bush. It does not take winding detours. It follows a straight line to the objective. Simplicity is the shortest distance between two points. (Bruce Lee, 1975, p. 12)

In Enter the Dragon, there is a scene in which an ostentatious man asks Bruce Lee what his style is. Bruce Lee answers: “You can call it the art of fighting without fighting”. Challenged by the man to demonstrate this style, Bruce Lee cunningly proposes taking a boat to a nearby island where they can fight. When the man sets foot on the boat, Bruce Lee lets it drift away and pulls it along on a line. The essence of the story is that (a) one should not be pretentious, as that is not honest self-expression, and (b) a fight should be won in the most direct and easiest manner, preferably without the use of violence.[6]

You can find the videoclip here:

Breaking with traditions and conventions also means that we should get rid of our past attachments. This is what Bruce Lee meant when he said, metaphorically, that we should ‘empty our cup’.

3. Empty your cup and learn the art of dying
To empty your cup means to get rid of your self-delusion so that you can look at the world from a new and refreshed perspective. In order to find your true potential and your nature, you should first be self-conscious. You should know what you want, what you desire, what your strengths and weaknesses are, your pride, your fears, your accomplishments, your ambitions — and eventually let go of all of it, since these things maintain an ego that interferes with who you truly are: a fluid personality that cannot be narrowly defined by desires, fears, achievements, and the like.

In the Tao Te Ching one can read:

Empty yourself of everything.
Let the mind become still.
The ten thousand things rise and fall while the Self watches their return. (Tao Te Ching, Chapter 16)

This is frightening for most of us, because it confronts us with our own prejudices; we may find that the traditions that previously gave us a sense of security are baseless. However, Bruce Lee did not only want us to break with the archaic; he also showed us an alternative, a way of creating new values and skills to supersede the old. In this respect, Bruce Lee’s view of how to progress in life is very much in line with the iconoclastic Nietzschean übermensch: we must first break with traditions and try to rise above our culture so that a higher being can emerge from our renewed self-creation. This is how I personally interpret Bruce Lee’s saying that we should learn the “art of dying”.

In a famous scene in Longstreet, Bruce Lee taught us not to make a plan of fighting; he told us to empty our mind and to be formless, like water. The “art of dying” is the “art of being non-fixed”: the art of being a different person tomorrow than we are today by letting go of our past attachments, including our ambitions. I believe it is similar to the Nietzschean ideal of self-creation: continuously subjecting our current values to our personal judgements, breaking down ‘lower values’ and creating ‘higher values’. The art of dying is hence a metaphor for continuously breaking down our past selves, values, attachments, pride, and desires (dying) and creating our new selves (being reborn) so that we can continuously improve. The “art of dying” is therefore also the “art of self-forgetfulness”, a skill characteristic of the ‘child’, the self-propelling wheel in Nietzsche’s story of the ‘three metamorphoses’ from Thus Spoke Zarathustra.

See the scene from Longstreet here:

Bruce Lee wrote:

Empty your cup so that it may be filled; become devoid to gain totality. (Bruce Lee, 1975, p. 14)

Emptying our cup precedes the discovery of new truths or new values, so that hopefully we can find ourselves and become our own standard. Bruce Lee told us not to despair when we cannot find solace in our past attachments, for the creation of personal values is vastly more valuable.

See a great explanation of ‘emptying our cup’ here:

The logical consequence of self-creation is that one becomes his own standard.

4. Become your own standard and accept life
According to Bruce Lee, we should not worry about what others think of us. He advised us not to look for a personality to duplicate, as that would be a betrayal of ourselves; one might call this practice ‘other-expression’ instead of ‘self-expression’. Being our own standard also encompasses accepting disgrace and losses as much as accepting grace and victories. How else can we accept ourselves and fulfill our own potential?

The Tao Te Ching advises us the following:

Accept disgrace willingly.
Accept misfortune as the human condition.

What do you mean by “Accept disgrace willingly”?
Accept being unimportant.
Do not be concerned with loss or gain.
This is called “accepting disgrace willingly.”

What do you mean by “Accept misfortune as the human condition”?
Misfortune comes from having a body.
Without a body, how could there be misfortune?

Surrender yourself humbly; then you can be trusted to care for all things.
Love the world as your own self; then you can truly care for all things. (Tao Te Ching, Chapter 13)

5. Wei Wu Wei
Lastly, I would like to discuss another aspect of ‘having no way as your way’. To have ‘no way as your way’ is also Bruce Lee’s expression for following the Taoist doctrine of ‘wei wu wei’ (‘action without action’ or ‘effortless action’). Bruce Lee maintained that when a person is truly in control of himself, he experiences his actions without consciously forcing them to happen. Self-consciousness is initially required for understanding ourselves, but to truly express ourselves through our actions we must move into a state in which we act unconsciously. I think it is best compared with the English expression ‘being in a state of flow’. Bruce Lee said:

I’m moving and not moving at all. I’m like the moon underneath the waves that ever go on rolling and rocking. It is not, ‘I am doing this,’ but rather, an inner realization that ‘this is happening through me,’ or ‘it is doing this for me.’ The consciousness of self is the greatest hindrance to the proper execution of all physical action. (Bruce Lee, 1975, p. 7)

This idea is expressed as follows in the Tao Te Ching:

Tao abides in non-action (‘wu wei’),
Yet nothing is left undone. (Tao Te Ching, Chapter 37)

Footnotes
[1] See http://www.ranker.com/list/time-magazine-100-most-important-people-of-the-20th-century/theomanlenz?format=SLIDESHOW&page=55

[2] I do not remember where I have found this quote.

[3] Idem

[4] Idem

[5] From http://www.becoming.8m.net/bruce02.htm

[6] The scene is actually based on an old Japanese Samurai folk tale. The tale goes as follows:

“While travelling on a ferry, a young samurai began bullying and intimidating some of the other passengers, boasting of his fighting prowess and claiming to be the best in the country with a samurai sword. When the young warrior noticed how unmoved [Tsukahara] Bokuden [a legendary Japanese swordsman] was, he was enraged and not knowing who he was dealing with challenged the old master to a duel. Bokuden told him;

‘My art is different from yours. It consists not so much in defeating others but in not being defeated.’

He continued to inform him that his school was called The Mutekatsu Ryu meaning ‘to defeat an enemy without hands’. The young samurai saw this as cowardice and demanded satisfaction so he told the boats-man to stop at an island so they could do battle there.

However when he jumped into the shallow waters to make his way to the fight venue, Bokuden got hold of the boats-man’s pole and proceeded back to deeper waters minus a now irate young samurai. The wise old master laughed and shouted to his would be adversary; ‘Here is my no sword school!’” (See, http://www.historyoffighting.com/tsukahara-bokuden.php)

Bibliography
History Of Fighting. Retrieved from http://www.historyoffighting.com/tsukahara-bokuden.php

Lao Tze. Tao Te Ching. Retrieved from http://www.schrades.com/tao/taotext.cfm?TaoID=1

Lee, B. (1975). Tao Of Jeet Kune Do. Santa Clarita: Ohara Publications.

Little, J. (1998). Bruce Lee: The Art Of Expressing The Human Body. North Clarendon: Tuttle Publishing.

Some problems with postmodernism

Despite its contributions, postmodernism is also the subject of much criticism. One of the most recurrent is its tendency toward nihilism, that is, toward an embrace of nothingness. Postmodern deconstruction may be efficient at demonstrating the randomness of many of our concepts, but it can lead us to a point where we have nothing but deconstruction. We find that the world is made up of dichotomies, or binary oppositions, that cancel each other out, without any logic, leaving us with an immense void.

Another weakness of postmodernism is its relativism. In the absence of an absolute truth that can be objectively identified, one is left with subjective opinions. Postmodern theorists expect this to lead to higher levels of tolerance, but ironically the opposite is true. Without objective truths, individuals are isolated in their subjective opinions, which divides people rather than bringing them together. Moreover, postmodernism leads to the suspicion that all claims may be attempts at usurping power.

But the main weakness of postmodernism is its internal inconsistency. As mentioned in previous posts, postmodernism can be defined as unbelief about metanarratives. But would not postmodernism itself be a metanarrative? Why would this metanarrative be above criticism?

Another way of defining postmodernism is by its claim that there is no absolute truth. But is not this an absolute truth? Is it not an absolute truth, according to postmodernism, that there is no absolute truth? This circular and contradictory reasoning demonstrates the internal fragility of postmodernism. Finally, what happens if the hermeneutics of suspicion is turned against postmodernism itself? What gives us assurance that postmodern authors do not themselves have a secret political agenda hidden behind their speeches?

It is possible that postmodernists do not really feel affected by this kind of criticism, if they are consistent with the perception that there is no real world out there, or that “there is nothing outside the text”, and that reality is instead produced by discourses. That is: conventional theorists seek a truth that corresponds to reality; postmodernists wonder what kind of reality their discourses are capable of creating.

Be that as it may, in spite of their professed intertextuality (the notion that texts refer only to other texts, and to nothing objective outside the texts), postmodern theorists continue to write in the hope that we will understand what they write. Moreover, postmodernists live in a world full of meanings that are, if not objective, at least intersubjective. Perhaps our language is not transparent, but that does not mean it is opaque either. Clearly we are able to make ourselves understood reasonably well through words.

As C.S. Lewis said, “You cannot go on ‘seeing through’ things forever. The whole point of seeing through something is to see something through it. It is good that the window should be transparent, because the street or garden beyond it is opaque. How if you saw through the garden too? It is no use trying to ‘see through’ first principles. If you see through everything then everything is transparent. But a wholly transparent world is an invisible world. To ‘see through’ all things is the same as not to see”. This critique fits postmodernism very well.

Main postmodern theorists and their main concepts

Postmodernism has been defined as “unbelief about metanarratives.” Metanarratives are grand narratives or grand stories: comprehensive explanations of the reality around us. Christianity and other religions are examples of metanarratives, but so are scientism and especially the positivism of more recent intellectual history. More specifically, postmodernism questions whether there is a truth out there that can be objectively found by the researcher. In other words, postmodernism questions the existence of an objective external reality, as well as the distinction between the subject who studies this reality and the object of study (reality itself), and consequently the possibility of a value-free, neutral social science.

One of the main theorists of postmodernism (or of deconstructionism, to be more exact) was Jacques Derrida (1930-2004). Derrida noted that Western intellectual history has been, since ancient times, a constant search for a Logos. The Logos is a concept of classical philosophy from which we derive the word logic. It concerns an order, or logic, behind the universe, bringing order (cosmos) to what would otherwise be chaos. The concept of Logos was even appropriated by Christianity when the evangelist John stated that “In the beginning was the Logos, and the Logos was with God and the Logos was God,” identifying the Logos with Jesus Christ. In this way, this concept is undoubtedly one of the most influential in the intellectual history of the West.

Derrida, however, noted that this search for identifying a logos (whether it be an abstract spiritual principle, the person of Jesus Christ, or reason itself) implies the formation of dichotomies, or binary oppositions, where one of the elements of the binary opposition is closer to the Logos than the other, but with the two cancelling each other out in the last instance. In this way, Western culture tended to value masculine over feminine, adult over child, and reason over emotion, among other examples. However, as Derrida observes, these preferences are random choices, coupled with the fact that it is not possible to conceive the masculine without the feminine, the adult without the child, and so on. Derrida’s proposal is to identify and deconstruct these binaries, demonstrating how our conceptions are random.

Michel Foucault (1926-1984) developed a philosophical system similar to Derrida’s. At the beginning of his career he was immersed in the post-WWII French intellectual environment, deeply influenced by the existentialists. Eventually Foucault sought to differentiate himself from these thinkers, although Nietzsche’s influence can be seen throughout his career. One of the recurring themes in Foucault’s literary production is the link between knowledge and power. Initially identified as a historian of medicine (and more precisely of psychiatry), he sought to demonstrate how behaviors identified as pathologies by psychiatrists were simply those that deviated from accepted societal standards. In this way, Foucault tried to demonstrate how the scientific truths elaborated by doctors were merely authoritarian impositions. In a broader sphere, he identified how the knowledge produced by individuals and institutions clothed with power becomes truth and defines the structures into which other individuals must insert themselves. At this point the same hermeneutic of suspicion found in Nietzsche can be observed in Foucault: distrust of the intentions of the one who makes an assertion, for the intentions behind an assertion are not always the explicit ones. Foucault’s other contribution was his discussion of the panopticon, a kind of prison originally envisioned by the English utilitarian philosopher Jeremy Bentham (1748-1832) in which the incarcerated are never sure whether they are being watched or not. The consequence is that the incarcerated must behave as if they are constantly being watched. Foucault saw this as a control mechanism applied to everyone in modern society: we are constantly being watched and held to standards.

In short, postmodernism questions metanarratives and our ability to identify absolute truths. Truth becomes relative, and any attempt to identify truth becomes an imposition of power over others. In this sense the foundations of modern science, especially in its positivist sense, are questioned. Postmodernism further states that “there is nothing outside the text,” that is, our language has no objective relation to a reality external to itself. Similarly, there is a “death of the author” after the enunciation of a discourse: it is impossible to identify the meaning of a discourse from the intention of the author in writing it, since the text refers only to itself and is not capable of carrying any meaning present in its author’s intention. In this way, discourses should be analyzed not in relation to a reality external to them or to the author’s intention, but rather in their intertextuality.

Why do we teach girls that it’s cute to be scared?

I just came across this fantastic op-ed while listening to the author being interviewed.

The author points out that our culture teaches girls to be afraid. Girls are warned to be careful at the playground while boys are expected… to be boys. Over time we’re left with a huge plurality of our population hobbled.

It’s clear that this is a costly feature of our culture. So why do we teach girls to be scared? Is there an alternative? This cultural meme may have made sense long ago, but society wouldn’t collapse if it were to disappear.

Culture is a way of passing knowledge from generation to generation. It’s not as precise as science (another way of passing on knowledge), but it’s indispensable. Over time a cultural repertoire changes and develops in response to the conditions of the people in that group. Routines, including attitudes, that help the group succeed and that are incentive-compatible with those people will persist. When groups are competing for resources, these routines may turn out to be very important.

It’s plausible that in early societies tribes had to worry about neighboring tribes stealing their women. For a tribe to persist, there need to be enough people, and there need to be fertile women and men. The narrower window of women’s fertility means that men are more replaceable in such a setting. So tribes that were protective of women (and particularly young women and girls) would have had a cultural-evolutionary advantage. Maybe Brandon can tell us something about the archaeological record to shed some light on this particular hypothesis.

But culture will be slower to get rid of wasteful routines, once they catch on. For this story to work, people can’t be on the razor’s edge of survival; they have to be wealthy enough that they can afford to waste small amounts of resources on the off-chance that it actually helped. Without the ability to run randomized control trials (with many permutations of the variables at hand) we can never be truly sure which routines are productive and which aren’t. The best we can do is to try bundles of them all together and try to figure out which ones are especially good or bad.
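The bundling problem can be made concrete with a toy simulation (the routine names and payoff numbers are invented for illustration): when selection sees only a bundle’s noisy overall success, wasteful routines can hitchhike with useful ones.

```python
import random

# Toy model of cultural bundling: each group carries a bundle of routines,
# some useful, some wasteful. Selection sees only the bundle's noisy total
# payoff, never the value of each routine, so wasteful routines can
# hitchhike with useful ones. All names and numbers are invented.

random.seed(1)
ROUTINE_VALUE = {"irrigate": 3, "store_grain": 2, "teach_fear": -1, "taboo": 0}

def random_bundle():
    return {r for r in ROUTINE_VALUE if random.random() < 0.5}

def observed_success(bundle):
    # True value plus luck: the environment never isolates one variable.
    return sum(ROUTINE_VALUE[r] for r in bundle) + random.gauss(0, 2)

groups = [random_bundle() for _ in range(200)]
winners = sorted(groups, key=observed_success, reverse=True)[:40]

# Fraction of the most "successful" groups still carrying a wasteful routine:
print(sum("teach_fear" in g for g in winners) / len(winners))
```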

So culture, an inherently persistent thing, will pick up all sorts of good and bad habits, but it will gradually plod on, adapting to an ever-changing, ever evolving ecosystem of competing and cooperating cultures.

So should we still teach our girls to be scared? I’d argue no.* Economics tells us that being awesome is great, but in a free society** it’s also great when other people are awesome. Those awesome people cure diseases and make art. They give you life and make life worth living.

Bringing women and minorities into the workplace has been a boon for productivity and therefore wealth (not without problems, but that’s how it goes). Empowering women, in particular, will be a boon for the frontiers of economic, scientific, technical, and cultural evolution to the extent that women are able to share new viewpoints and different ways of thinking.

And therein lies the rub… treating girls like boys empowers them, but also changes them. So how do we navigate this tension? The only tool the universe has given us to explore a range of possibilities we cannot comprehend in its entirety: trial and error.

We can’t run controlled experiments, so we need to run uncontrolled experiments. And we need to try many things quickly. How quickly depends on a lot of things, and few trials will be done “right.” But within a broader context of freedom and a culture of inquiry, our knowledge can grow while our culture is enriched. I think it’s worth making the bet that brave women will make that reality better.


* But also, beyond what I think: if I told parents how to act… if I made all of them follow my sensible advice, I’d be denying diversity of thought to future generations. That diversity is an essential ingredient, both because it allows greater differences in comparative advantage and because it allows more novel combinations of ideas for greater potential innovation in the future.

** And here’s the real big question: “What does it mean for a society to be free?” In the case of culture it’s pretty easy to say we want free speech, but that runs up against boundaries when you start exploring the issue. And with billions of people and hundreds (hopefully thousands) of years, we’re looking at a thousand-monkeys scenario on steroids… plus that pill from Flowers for Algernon.

There’s copyright, which makes it harder to stand on the shoulders of giants but might be justified if it helps make free speech an economically sustainable reality. There’s the issue of yelling “Fire!” in a crowded theater, and the question of how far that restriction can be stretched before political dissent is being restricted. We might not know where the line should be drawn, but given enough time we know that someone will cross it.

And the issue extends into due process, business regulation, and any area of governance at all. We can’t be free to harm others, but some harms are weird and counter-intuitive. If businesses couldn’t harm one another through competition, our economy would have a hard time growing at all; efficiency would improve only slowly, tying up resources and preventing innovation. Just as there’s an inherent tension in the idea of freedom between permissiveness and protection, there’s a similar tension in the interdependence of cooperation and competition for any but the very smallest groups.

The existentialist origins of postmodernism

In part, postmodernism has its origin in the existentialism of the 19th and 20th centuries. The Danish theologian and philosopher Søren Kierkegaard (1813-1855) is generally regarded as the first existentialist. Kierkegaard’s life was profoundly marked by a broken engagement and by his discomfort with the formalities of the (Lutheran) Church of Denmark. In his understanding (shared by others of the time within the movement known as Pietism, influential mainly in Germany but also a strong influence on the English Methodism of John Wesley), Lutheran theology had become overly intellectual, marked by a “Protestant scholasticism.”

Before this period, Scholasticism was a branch of Catholic theology whose main representative was Thomas Aquinas (1225-1274). Aquinas argued against the theory of double truth, defended by Muslim theologians of his time. According to this theory, something could be true in religion and yet not true in the empirical sciences. Aquinas defended a classic concept of truth, used centuries earlier by Augustine of Hippo (354-430), to affirm that truth could not be divided in this way. Martin Luther (1483-1546) made many criticisms of Thomas Aquinas, but ironically the medieval theologian’s methodological precision became quite influential in the Lutheran theology of the 17th and 18th centuries. In Germany and the Nordic countries (Denmark, Finland, Iceland, Norway, and Sweden), Lutheranism became the state religion after the Protestant Reformation of the 16th century, and being the pastor of a church in a major city became a respected and coveted public office.

It is against this intellectualism and this ease of being a Christian that Kierkegaard revolted. In 19th-century Denmark, everyone was born into the Lutheran Church, and being a Christian was the socially accepted position. Kierkegaard complained that in centuries past being a Christian was not easy, and could even be life-threatening. In the face of this he argued for a Christianity that involved an individual decision against all evidence. In one of his most famous texts he expounds the story in which the patriarch Abraham is asked by God to kill Isaac, his only son. Kierkegaard imagines a scenario in which Abraham does not understand God’s reasons, but ends up obeying blindly. In Kierkegaard’s words, Abraham takes “a leap of faith.”

This concept of blind faith, going against all the evidence, is central to Kierkegaard’s thinking, and it became very influential in 20th-century Christianity and even in other religions established in the West. Beyond the strictly religious aspect, Kierkegaard marked Western thought with the notion that some things might be true in some areas of knowledge but not in others. Moreover, his influence can be seen in the notion that the individual must make decisions about how he intends to exist, regardless of the rules of society or of all empirical evidence.

Another important existentialist philosopher of the 19th century was the German Friedrich Nietzsche (1844-1900). Like Kierkegaard, Nietzsche was raised within Lutheranism, but unlike Kierkegaard he became an atheist in his adult life. Like Kierkegaard, Nietzsche also became a critic of the social conventions of his time, especially the religious conventions. Nietzsche is particularly famous for the phrase “God is dead.” This phrase appears in one of his most famous texts, in which the Christian God attends a meeting with the other gods and affirms that he is the only god. Upon hearing this statement, the other gods die of laughter. The Christian God effectively becomes the only god. But later, the Christian God dies of pity at seeing his followers on earth become people without courage.

Nietzsche was particularly critical of how the Christianity of his day valued features he considered weak, calling them virtues, and condemned features he considered strong, calling them vices. And not just Christianity: Nietzsche also criticized the classical philosophy of Socrates, Plato, and Aristotle, placing himself alongside the sophists. The German philosopher affirmed that Socrates valued behaviors like kindness, humility, and generosity simply because he was ugly. More specifically, Nietzsche questioned why classical philosophers defended Apollo, considered the god of wisdom, and criticized Dionysus, considered the god of debauchery. In Greco-Roman mythology Dionysus (or Bacchus, as he was known to the Romans) was the god of festivals, wine, and madness, symbolizing everything that is chaotic, dangerous, and unexpected. Thus, Nietzsche questioned the apparent arbitrariness of defending Apollo’s rationality and order against Dionysus’s irrationality and unpredictability.

Nietzsche’s philosophy values courage and voluntarism, the urge to go against “herd behavior” and become a “superman,” that is, a person who goes against the dictates of society to create his own rules. Although he went in a different religious direction from Kierkegaard, Nietzsche agreed with the Danish theologian on the need for the individual to go against convention and reason in order to dictate the rules of his own existence.

In the mid-20th century existentialism became an influential philosophical current, represented by figures like Jean-Paul Sartre (1905-1980) and Albert Camus (1913-1960). Like their 19th-century predecessors, these existentialists pointed to the apparent absurdity of life and valued decision-making by the individual against rational and social dictates.

Where the state came from

One of the questions that led me to libertarianism was “What is the state?” More than that: Where did it come from? How does it work? What is it for? If my classroom experience serves as a yardstick for anything, the overwhelming majority of people never ask these questions and never run after answers. I do not blame them. Most of us are too busy trying to make ends meet to worry about this kind of stuff. I sought academic training in politics precisely to find answers to these questions. It is no great feat for me to have answers; after all, I’m paid (albeit very poorly) to know these matters. Still, I wish more people were asking these kinds of questions. I suspect it would be part of the process of rethinking the political and economic situation in which we find ourselves.

Many times when I ask in the classroom “What is the state?” I receive in response that Brazil is a state. In general I correct the student, explaining that this is an example, not a definition. The modern state, as we have it today, is mainly the combination of three factors: government, population, and territory. It can be defined as a population inhabiting a specific territory, organized by a centralized government that recognizes no instance of power superior to itself. Often, in academic and popular vocabulary, state and government are conflated, and there is no great problem in this. In fact, the two words may appear as synonyms, although this is not a necessity. One way to distinguish between state and government is to note that the state remains while governments come and go.

The state as we know it today is a product of the transition from the Middle Ages to the Modern Age. I believe this information alone should draw our attention: people have lived in modern states only for the last 500 years or so. Throughout the rest of human history other forms of political organization were used. I am not saying (not here) that these other forms of organization were better than the modern state. I am simply saying that the modern state is far from being natural, spontaneous, or necessary. Even after 1500 the modern state took time to be universally accepted. First, this model of organization spread throughout Europe at the beginning of the Modern Era. Only in the late 18th and early 19th centuries did it come to be used in the Americas. The modern state spread globally only after the decolonization movement that followed World War II. That is, the vast majority of modern states are not even 70 years old!

What is the purpose of the state? At least in my experience, many people answer “providing rights” or “securing rights.” People think of health, education, sanitation, culture, security, etc. as duties of the state towards society. Clearly many people think of health, education, housing, etc. as rights, which is in itself questionable, but I will leave that discussion for another time. The point I want to make here is that, empirically, states have only cared about issues like health and public education very recently. In the classic definition of Max Weber (early 20th century), the state has a monopoly on the legitimate use of violence. In other words, virtually anyone can use violence, but only the state can do so legally. That is, the primordial function of the state is to use violence within a legal order. Other functions, such as providing health and education, came very late and only became commonplace with the welfare state that strengthened after World War II.

I always find it interesting to see how we live in a young world. Basically the entire world population today lives in some state and expects from that state a minimum level of well-being. Yet this reality is only about 70 years old. The idea that we need to live in states that provide us with a minimum of well-being is not natural and is far from obvious. Understanding that the modern state is a historical institution, one that has not always existed, is fundamental to questioning its validity. Moreover, noting that the functions of the state that seem obvious to us today did not exist 70 years ago leads us to question whether it is valid to expect things such as health and education from the state.

My personal perception is that the modern state (defined by territory, population, and government) is better than any alternative that has been proposed. However, the welfare state is only sugar-coated socialism. Socialism, by definition, does not work, as Ludwig von Mises showed very well. Partial socialism is no more likely to function than full socialism. Expecting the state to use violence within legal parameters is valid and even fundamental. But expecting that same state to successfully diversify its activities into health, education, culture, etc. is a fatal conceit.

The problem with conservatives in Latin America

Shortly after the declaration of independence of the USA in 1776, several independence movements followed in Iberian America. Basically between the 1800s and the 1820s almost all of Latin America broke its colonial ties with Spain and Portugal, giving rise to the national states we know today, from Mexico to Chile. This rupture of colonial ties, however, was only the beginning of the process of formation of the Latin American national states. The borders would still undergo many transformations, and above all there remained the long and tortuous task of forming a national government in each country.

In general, the USA and the French Revolution strongly influenced the formation of the Latin American national states. The constitutions that emerged on the continent were generally liberal in essence, drawing on a theoretical background similar to the one that gave rise to the American constitution. In the case of Latin America, however, this liberalism proved to be only a veneer. Beneath it, Latin America was a region marked by oligarchy, paternalism, and authoritarianism.

Using Brazil as an example, one can observe how strong an influence the French Revolution had on Latin America. In the Brazilian case, this influence took the form of fear: fear that the liberalism guiding the process of independence would radicalize into a Jacobinism like the one that marked the Terror in France. The fear that a Brazilian Robespierre would emerge at some point led Brazil’s founders to cooperate in such a way that the formation of the Brazilian state was more conservative and less liberal.

One problem with Latin American conservatism lies in what it sought to retain while trying to avoid liberal radicalization. There is a conservative Anglo-Saxon tradition identified primarily with Edmund Burke. Like the Latin Americans, Burke was critical of the radicalization of the French Revolution (with the advantage that Burke predicted the radicalization before it actually occurred). However, Burke had an already liberal country to conserve; in his case, conservatism was a liberal conservatism. In the case of the Latin Americans, preserving meant maintaining mercantilism and absolutism, or at least slowing the advance of liberalism.

Another problem with Latin American conservatism is that it confuses Rousseau with true liberalism. The ideas of Jean-Jacques Rousseau were behind the most radical period of the French Revolution. Burke criticized the kind of thinking that guided the revolution for its abstract nature, disconnected from tradition. But that was not really Rousseau’s problem. His problem is that his ideas do not make the slightest sense. John Locke’s political thought was also abstract, yet perfectly sensible. Rousseau does not represent liberalism; his thinking is a proto-socialism that we would do well to avoid. But the true liberalism of John Locke and the American Founding Fathers has yet to be implemented in Latin America.

In short, the problem of conservatism in Latin America lies in what there is to conserve. My opinion is that we still need to advance a great deal before we have liberal societies worth preserving. Until then, it is better to avoid the idea of a Latin American conservatism.