Three Roads to Racism

Are you a racist?

Anyone can feel free to answer this question any way he/she/it wishes; they wish. And that’s the problem. In this short essay, I aim first to do a little vocabulary housekeeping. Second, I try to trace three distinct origins of racism. I operate from thin authority. My main sources are sundry unmethodical readings, especially on slavery, spread over fifty years, and my amazingly clear recollection of lectures by my late teacher at Stanford, St. Clair Drake, in the sixties. (He was the author of Black Metropolis, among other major contributions.) I also rely on equally vivid memories of casual conversations with that master storyteller. Here you have it: I am trying to plagiarize the pioneer St. Clair Drake. I believe the attempt would please him, though possibly not the results.

Feel free to reject everything I say below. If nothing else, it might make you feel good. If you are one of the few liberals still reading me, be my guest and get exercised. Besides, I am an old white man! Why grant me any credence?

That’s on the one hand. On the other hand, in these days (2020) obsessed with racism, I never see or hear expressed in the media, in reviews, or online the basic ideas about racism set down below, although they are substantially more productive than what’s actually around. I mean that they help arrive at a clearer and richer understanding of racism.

If you find this brief essay even a little useful, think of sharing it. Thank you.

Racism

“Racism” is a poor word because today it refers at once to thoughts, attitudes, and feelings, and also to actions and policies. Among the latter, it covers both individual actions and collective actions, and even policies. Some of the policies may be considered part of so-called “systemic racism,” about which I wrote in my essay “Systemic Racism: a Rationalist Take.”

The mishmash between what’s in people’s heads and what they actually do is regrettable on two grounds. First, the path from individual beliefs, thoughts, and attitudes, on the one hand, to individual action, on the other, is not straightforward. My beliefs are not always a great predictor of my actions because reality tends to interfere with pure intent.

Second, collective action and, a fortiori, policies rarely look like the simple addition of individual actions. People act differently in the presence of others than they do alone. Groups (loosely defined) are capable of greater invention than are individuals. Individuals in a group both inspire and censor one another; they even complete one another’s thoughts; the ones often give the others courage to proceed further.

This piece is about racism, the understanding, the attitudes, the collection of beliefs which predispose individuals and groups to thinking of others as inferior and/or unlikable on the basis of some physical characteristics. As I said, racism so defined can be held individually or collectively. Thus, this essay is deliberately not about actions, programs, or failures to act inspired by racism, the attitude. That’s another topic, which others can write about.

Fear and loathing of the unknown

Many people seem to assume that racial prejudice is a natural condition that can be fought in simple ways. Others, on the contrary, see it as ineradicable. Perhaps it all depends on the source of the racism. The word means prejudgment about a person’s character and abilities based on persistent physical traits that are genetically transmitted. Thus, dislike of that other guy wearing a ridiculous blue hat does not count; neither does hostility toward one sex or the other (or the other?). I think both assumptions above – racism as natural and as ineradicable – are partly, but only partly, true. My teacher St. Clair Drake explained to me once, standing in the aisle of a Palo Alto bookstore, that there are three separate kinds of racial prejudice, of racism, with distinct sources.

The first kind of racism is rooted in fear of the unknown or of the unfamiliar. This is probably hard-wired; it’s human nature. It would be a good asset to have for the naked, fairly slow apes that we were for a long time. Unfamiliar creature? Move away; grab a rock. After all, those who look like you are usually not dangerous enemies; those who don’t, you don’t know and why take a risk?

Anecdote: A long time ago, I was acting the discreet tourist in a big Senegalese fishing village. I met a local guy about my age (then). We had tea together, talked about fishing. He asked me if I wanted to see his nearby house. We walked for about five minutes to a round adobe construction covered in thatch. He motioned me inside, where it was quite dark. A small child was taking a nap on a stack of blankets in the back. Sensing a presence, the toddler woke up, opened his eyes, and began screaming at the top of his lungs. The man picked him up and said, very embarrassed, “I am sorry, my son has never seen a toubab before.” (“Toubab” is the local, not unfriendly, word for light-skinned people from elsewhere.)

Similarly, Jared Diamond recounts (and shows corresponding pictures in his book The World Until Yesterday: What Can We Learn from Traditional Societies? Viking: New York) how central New Guinea natives became disfigured by fear at their first sight of a white person. Some explained later that they thought they might be seeing ghosts.

Night terrors

The second distinctive form of racism comes simply from fear of the dark, itself rooted in dread of the night. It’s common to all people, including dark-skinned people, of course. It’s easy to understand once you remember that beings who were clearly our direct ancestors, people whose genes are in our cells, lived in fear of the darkness night after night for several hundred thousand years. Most of their fears were justified because the darkness concealed lions, leopards, hyenas, bears, tigers, saber-toothed cats, wolves, wild dogs, and other predators, themselves with no fear of humans. The fact that the darkness of night also encouraged speculation about other hostile beings – varied spirits – that did not really exist does not diminish the impact of this incomplete zoological list.

As is easy to observe, the association dark = bad is practically universal. Many languages have an expression equivalent to “the forces of darkness.” I doubt that any (but I can’t prove it right now) says “the forces of lightness” to designate something sinister. Same observation with “black magic,” and with disappearing into a “black hole.” Similarly, nearly everywhere, uneducated people, and some of their educated betters, express some degree of hostility, mixed with contempt, for those, in their midst or nearby, who are darker than themselves. This is common among African Americans, for example. (Yes, I know, it may have other sources among them, specifically.)

This negative attitude is especially evident in the Indian subcontinent. On a lazy day, thirty years ago in Mumbai, I read several pages of matrimonial want ads in a major newspaper. I noticed that 90% of the ads for would-be brides mentioned skin color in parallel with education and mastery of the domestic arts. (The men’s didn’t.) A common description was “wheatish,” which, I was told by Indian relatives, means not quite white but pretty close. (You can’t lie too shamelessly about skin tone because, if all goes well, your daughter will meet the other side in person; you need wiggle room.) In fact, the association between skin color and likability runs so deep in India that the same Sanskrit word, “varna,” designates both caste and color (meaning skin complexion). And, of course, there is a reason why children everywhere turn off the light to tell scary stories.

In a similar vein, the ancient Chinese seem to have believed that aristocrats were made from yellow soil while commoners were made from ordinary brown mud. (Cited in Harari, Yuval N. 2015. Sapiens: A Brief History of Humankind. Harper: New York.)

Some would argue that these examples represent ancestral fears mostly left behind by civilized, urban (same thing) people. My own limited evidence, both personal and observational, is that it’s not so. It seems to me that fear of the dark is the first or second page of the book of which our daily street-lit, TV-illuminated bravado is the cover. Allow a couple of total power stoppages (as Californians experienced recently) and it’s right there, drilling into our vulnerable minds.

Both of these first two kinds of negative feelings about that which is dark can be minimized, the first through experience and education: No, that pale man will not hurt you. He might even give you candy, or a metal ax. The second source of distaste for darkness has simply been moved to a kind of secondary relevance by the fact that today most people live most of the time in places where some form of artificial lighting is commonplace. It persists nevertheless where it is shored up by a vast and sturdy institutional scaffolding, as with the caste system of largely Hindu India. And it may always be present somewhere in the back of our minds, but mostly we don’t have a chance to find out.

The third source of hostility toward and contempt for a dark appearance is both more difficult to understand and harder to eliminate or even to tamp down. Explaining it requires a significant detour. Bear with me, please.

The origins of useful racism

Suppose you believe in a God who demands unambiguously that you love your “neighbor,” that is, every human being, including those who are not of your tribe, even those you don’t know at all. Suppose further that you are strongly inclined toward a political philosophy that considers all human beings, or at least some large subcategory of them, as fundamentally equal, or at least equal in rights. Or imagine rather that you are indifferent to one or both ideas but that you live among neighbors 90% of whom profess one of these beliefs, and 80% both. They manifest and celebrate these beliefs in numerous and frequent public exercises, such as church services, elections, and civic meetings where important decisions are made.

Now a second effort of imagination is required. Suppose also that you or your ancestors came to America from the British Isles, perhaps in the 1600s, perhaps later. You have somehow acquired a nice piece of fertile land, directly from the Crown or from a landed proprietor, or by small incremental purchases. You grow tobacco, or indigo, or rice, or (later) cotton. Fortune does not yet smile on you because you confront a seemingly intractable labor problem. Almost everyone else around you owns land and thus is not eager to work for anyone else. Just about your only recourse is temporarily un-free young men who arrive periodically from old Britain, indentured servants (sometimes also called “apprentices”). Many of them are somewhat alien because they are Irish, although most of them speak English, or some English. Moreover, a good many are sickly when they land. Even the comparatively healthy young men do not adjust well to the hot climate. They have little resistance to local tropical diseases such as malaria and yellow fever. Most don’t last in the fields. You often think they are not worth the trouble. In addition, by contract or by custom, you have to set them free after seven years. With land being so attainable, few wish to stick around and earn a wage from you.

One day you hear that somewhere, not too far, new, different kinds of workers are available, workers able to put in long days in the heat and under the sun and who don’t succumb easily to disease. You take a trip to find out. The newcomers are chained together. They are a strange dark color, darker than any man you have seen, English, Irish, or Indian. Aside from this, they look really good as field hands go. They are muscular, youngish men in the flower of health. (They are all survivors of the terrible Atlantic passage and, before that, of some sort of long walk on the continent of Africa to the embarkation point at Gorée, Senegal, or some such. Only the strong and healthy survived such ordeals, as a rule.) There are a few women of the same hue with them, mostly also young.

Those people are from Africa, you are told. They are for outright sale. You gamble on buying two of them to find out more. You carry them to your farmstead and soon put them to work. After some confusion because they don’t understand any English, you and your other servants show them what to do. You are soon dazzled by their physical prowess. You calculate that one of them easily accomplishes the tasks of two of your indentured Irish apprentices. As soon as you can afford it, you go and buy three more Africans.

Soon, your neighbors are imitating you. All the dark-skinned servants are snapped up as fast as they are landed. Prices rise. Those people are costly but still well worth the investment because of their superior productivity. Farmers plant new crops, labor-intensive, high-yield crops – such as cotton – that they would not have dared invest in with the old kind of labor. To make the new labor even more attractive, you and your neighbors quickly figure that it’s also capital, because it can be made to be self-reproducing. The black female servants can both work part of the time and make children who are themselves servants who belong to you by right. (This actually took some time to work out legally.)

Instrumental severity and cruelty

You are now becoming rich, amassing tools, utensils, and more land. All is still not completely rosy on your plantation, though. One problem is that not all of your new African servants are docile. Some are warriors who were captured on the battlefield in Africa, and they are not resigned to their subjection. A few rebel or try to run away. Mostly, they fail, but their doomed attempts become the stuff of legend among the other black servants, thus feeding a chronic spirit of rebelliousness. Even in the second and third generation away from Africa, some black servants are born restive or sullen. And insubordination is contagious. At any rate, there are enough free white workers in your vicinity for some astute observers among your African servants to realize that they and their companions are treated comparatively badly, that a better fate is possible. Soon, there are even free black people around, to whom they unavoidably compare themselves. (This fact deserves a full essay in its own right.)

To make a complex issue simple: severity is necessary to keep your workforce at work. Such severity sometimes involves brutal public punishment for repeat offenders, such as whippings. There is a belief about that mere severity undermines the usefulness of the workforce without snuffing out its rebelliousness. Downright cruelty is sometimes necessary, the more public, the better. Public punishment is useful to encourage more timid souls to keep toeing the line.

And then, there is the issue of escape. After the second generation, black slaves are relatively at home where they work. Your physical environment is also their home where some think they can fend for themselves. The wilderness is not very far. The slaves also know somehow that relatively close by are areas where slavery is prohibited or not actively enforced by authorities. It’s almost a mathematical certainty that at any time, some slaves, a few slaves, will attempt escape. Each escape is a serious economic matter because, aside from providing labor, each slave constitutes live capital. Most owners have only a few slaves. A single escape constitutes for them a significant form of impoverishment. Slaves have to be terrorized into not even wanting to escape.

Soon, it’s well understood that slaves are best kept in a state of more or less constant terror. It’s so well understood that local government will hang your expensive slave for rebellion whether you like it or not.

Inner contradiction

In brief, whatever their natural inclination, whatever their personal preference, slave owners have to be systematically cruel. And it’s helpful for them to also possess a reputation for cruelty. This reputation has to be maintained and reinforced periodically by sensationally brutal action. One big problem arises from such a policy of obligatory and vigilant viciousness: it’s in stark contradiction with both your religious and your political ideas, which proclaim that one must love others and that all humans are at least potentially equal (before God, if nowhere else). And if you don’t hold such beliefs deeply yourself, you live among people who do, or who profess to. And, by a strange twist of fate, the richest, best-educated, probably most influential strata of your society are also those most committed to those ideals. (They are the class that would eventually produce George Washington and Thomas Jefferson.)

The personal psychological tension between the actual and highly visible brutal treatment of black slaves and prevailing moral values is technically a form of “dissonance.” It’s also a social tension; it expresses itself collectively. Those actively involved in mistreating slaves are numerous. In vast regions of the English colonies, and later, of the United States, the contrast between action and beliefs is thus highly visible to everyone, obvious to many who are not themselves actively involved. It becomes increasingly difficult over time to dismiss slavery as a private economic affair because, more and more, political entities make laws actively supporting slavery. There are soon laws about sheltering fugitives, laws regulating the punishment of rebellious slaves, laws about slave marriage, and laws restricting the freeing of slaves (“manumission”). Slavery thus soon enters the public arena. There are even laws to control the behavior of free blacks, those who merely used to be slaves.

Race as legal status

Special rules governing free blacks constitute an important step because, for the first time, they replace legal status (“slave,” “chattel”) with race (dark skin, certain facial features, African ancestry). So, with the advent of legislation supporting slavery, an important symbolic boundary is crossed. The laws don’t concern only those defined by their legal condition as chattel property but also others, defined mostly or largely by their physical appearance and by their putative ancestry in Africa. At this point, every white subject, then every white citizen, has become a participant in a struggle that depends on frankly racial categories, by virtue of his belonging to the polity. Soon the social racial category “white” comes to stand for the legal status “free person,” “non-slave.”

Then, at this juncture, potentially every white adult becomes a party to the enforcement of slavery. For almost all of them, this participation, however passive, is in stark contradiction with both religious and political values. But ordinary human beings can live with only so much personal duplicity. Some whites will reject black slavery, in part or in whole. Accordingly, it’s notable that abolitionists always existed and were vocal in their opposition to slavery in the English colonies, and then in the United States, even in the deepest South. Their numbers and visibility never flagged until the Civil War.

How to reduce tension between beliefs and deeds

There are three main paths out of this personal moral predicament. They offer different degrees of resistance. The first path is to renounce one’s beliefs, those that are in contradiction with the treatment of one’s slaves. A slave owner could adjust by becoming indifferent to the Christian message, or skeptical of democratic aspirations, or both. No belief in the fraternity of Man or in any sort of equality between persons? Problem solved. This may be relatively feasible for an individual alone. In this case, though, the individuals concerned, the slave owners and their slave drivers, exist within a social matrix that reinforces frequently, possibly daily, the dual religious command to treat others decently and the political view that all men are more or less equal. Churches, political organizations, charity concerns, and gentlemen’s clubs stand in the way. To renounce both sets of beliefs – however attractive this might be from an individual standpoint – would turn one into a social pariah. Aside from the personal unpleasantness of such a condition, it would surely have adverse economic repercussions.

The second way to free oneself from the tension between humane beliefs, on the one hand, and harsh behavior, on the other, is simply to desist from the latter. Southern American chronicles show that a surprisingly large number of slave owners chose that path at any one time. Some tried more compassionate slave driving, with varying degrees of economic success. Others – who left major traces, for documentary reasons – took the more radical step of simply freeing some of their slaves when they could, or when it was convenient. Sometimes they freed all of their slaves, usually at their death, through their wills, for example. The freeing of slaves – manumission – was so common that the rising number of free blacks was perceived as a social problem in much of the South. Several states actually tried to eliminate the problem by passing legislation forbidding the practice.

Of course, the fact that so many engaged in such an uneconomic practice demonstrates in itself the validity of the idea that the incompatibility between moral convictions and slave-driving behavior generated strong tensions. One should not take this evidence too far, however, because there may have been several reasons to free slaves, not all of them rooted in this tension. (I address this issue briefly in “Systemic Racism….”)

The easy way out

The third way to reduce the same tension, the most extreme and possibly the least costly, took two steps. Step one consisted in consciously recognizing the incompatibility; step two was to begin mentally separating the black slaves from humanity. This would work because all your bothersome beliefs – religious and political – applied explicitly to other human beings. The less human the objects of your bad treatment, the less the treatment contravened your beliefs. After all, while it may be good business to treat farm animals well, there is not much moral judgment involved there. In fact, not immediately, but not long after the first Africans landed in the English colonies of North America, there began a collective endeavor aiming at their conceptual de-humanization. It was emphatically a collective project, addressing ordinary people, including many who had no contact with black slaves or with free blacks. It involved the universities and intellectual milieus in general with a vengeance (more on this later).

Some churches also lent a hand by placing the sanction of the Bible in the service of the general idea that God himself wanted slaves to submit absolutely to the authority of their masters. To begin with, there was always the story of Noah’s three sons. The disrespectful one, Ham, cursed by Noah, was said to be the father of the black race, on the thin ground that his name means something like “burnt.” However, it’s notable that the tension never disappeared, because other churches, even in the Deep South, continued their opposition to slavery on religious grounds. The Quakers, for example, seldom relented.

The slaves’ unusual appearance, and the fact that the white colonists could not initially understand their non-European languages (plural), were instrumental in the collective denial of full humanity to black slaves. In fact, the arriving slaves themselves often did not understand one another. This is but one step from believing that they did not actually possess the power of speech. Later, as the proportion of America-born slaves increased, they developed what is known technically as a creole language to communicate with one another. It was recognizably a form of English, but probably not understood by whites unless they tried hard. Most had few reasons to try at all. Language was not the only factor contributing to the ease with which whites, troubled by their ethical beliefs, denied full humanity to black slaves. Paradoxically, the degrading conditions in which the slaves were held must also have contributed to the impression of their sub-humanity.

Science enlisted

The effort to deny full humanity to people of African descent continued for two centuries. As the Enlightenment reached American shores, the focus shifted from Scripture to science (pseudoscience sometimes, but not always). Explorers’ first reports from sub-tropical Africa seemed to confirm the soundness of the view that black Africans were not completely human: there were no real cities there, little by way of written literature, no search for knowledge recognizable as science, seemingly no schools. What art-conscious visitors reported on did not seem sufficiently realistic to count as art by 18th- and 19th-century standards. I think that no one really paid attention to the plentiful African artistic creativity – this unmixed expression of humanity if there ever was one – until the early 1900s. Instead, African art was dismissed as crude stammering in the service of inarticulate superstitions.

The effort to harness science in the service of the proposition of African un-humanity easily outlasted the Civil War and even the emancipation of slaves in North America. After he published On the Origin of Species in 1859, Darwin spent much of the balance of his life – curiously allied with Christians – combating the widespread idea that there had been more than one creation of humanoids, possibly one for each race. The point most strongly argued by those holding this view was that Africans could not possibly be the brothers, or other close relatives, of the triumphant Anglo-Saxons. The viewpoint was not limited to the semi-educated by any means. The great naturalist Louis Agassiz himself believed that the races of men were pretty much species. In support, he presented the imaginary fact that the mating of different races – like mating between horses and donkeys – seldom produced fertile offspring. (All recounted in: Desmond, Adrian, and James Moore. 2009. Darwin’s Sacred Cause: How a Hatred of Slavery Shaped Darwin’s Views on Human Evolution. Houghton Mifflin Harcourt: New York.)

Differential persistence

Those three main roads to racism are unequal in their persistence. Dislike of strangers tends to disappear of its own accord. Either the frightening contact ceases or it is repeated. In the first case, the dislike turns irrelevant and accordingly becomes blurred. In the second case, repeated experience will often demonstrate that the strangers are not dangerous, and the negative feelings subside of their own accord. If the strangers turn out to be dangerous overall, it seems to me that negative feelings toward them do not constitute racism, in spite of the fact that the negativity may occasionally be unfair to specific, individual strangers.

Racial prejudice anchored in atavistic fear of the night may persist in the depths of one’s mind, but it, too, does not survive experience well. Exposed to the fact that dark people are not especially threatening, many will let the link between darkness and fear or distaste subside in their minds. For this reason, it seems to me that the great American experiment in racial integration of the past sixty years was largely successful. Many more white Americans today personally know African Americans than was the case in 1960, for example. The black man whose desk is next to yours, the black woman who attends the same gym as you week after week, the black restaurant-goers at your favored eating place, all lose their aura of dangerousness through habituation. Habituation works both ways, though. The continued over-representation of black men in violent crime must necessarily perpetuate in the minds of all (including African Americans) the association between danger and a dark complexion.

The road to racism based on reducing the tension between behavior and beliefs via the conceptual de-humanization of the victims has proved especially tenacious. Views of people of African descent, but also of other people of color, as less than fully human persist or re-emerge frequently because they have proved useful. This approach may have preserved the important part of the American economy based on slavery until war freed the slaves, without removing the de-humanization. As many leftists claim (usually without evidence), this was important to the later fast development of the American economy; cotton production in the South was at its highest in the years right preceding the Civil War. In the next phase, the view of black Americans as less than human served well to justify segregation for the next hundred years. It was thus instrumental in protecting poor whites from wage competition with even poorer African Americans.

In the second half of the 19th century and well into the 20th, the opinion that Africans – and other people of color – were not quite human also strengthened the European colonial enterprise in many places. (The de-humanization of colonial people was not inevitable though. The French justification of colonialism – “France’s civilizing mission” – is incompatible with this view. It treated the annexed people instead as immature, as infantile, rather than as subhuman.)

This third road to racism tends to last because it’s a collective response to a difficult situation that soon builds its own supporting institutions. For a long time, in America and in the West in general, it received some assistance from the new, post-religious ideology, science. Above all, it’s of continuing usefulness in a variety of situations. This explanation reverses the naive, unexamined explanation of much racism: that people act in cruel ways toward others who are unlike them because they are racist. It claims, rather, that they become racist in order to continue acting in cruel ways toward others, contrary to their own pre-existing beliefs that enjoin them to treat others with respect. If this perspective is correct, we should find that racism is more widespread and more tenacious the more egalitarian and the more charitable the dominant culture in which it emerges.

The Revolt of the Baristas

For several weeks, nearly every night, I have had a déjà vu experience.

First, I watch Fox News where I see crowds of younger people in dark clothing breaking things, setting buildings on fire, and assaulting police. (I infer they are younger people because of the suppleness of their movements.)

Then, I switch to French news on “Vingt-trois heures.” There, I see young people in large French cities, breaking shop windows, damaging and burning cars, and assaulting police.

The supposed reason for the continuing rioting in several major American cities is police brutality toward Blacks and racial injustice in general.

The rioting on the wealthy business arteries of French cities was, as of recently, occasioned by the victory of a favorite soccer club in an important tournament. A week later, the defeat of the same soccer club occasioned the same kind of behavior, except worse, by what I am sure were the same people.

No common cause to these similar conducts, you might think. That seems true, but the behaviors are so strikingly similar that I am not satisfied with this observation. I have to ask: what do the rioters have in common on the two sides of the Atlantic? Your answer may be as good as mine – probably better – but here is my take:

Two things.

First, both youngish Americans and youngish French people are counting on a high degree of impunity. Both American society and French society have gone wobbly on punishment in the past thirty years (the years of the “participation prize” for school children). It used to be, in France (where I grew up), that you did not set cars afire because there was the off chance it would earn you several years of your beautiful youth in prison. No more. The police make little effort to catch the perpetrators anyway. The charging authorities let them go with an admonition, maybe even a severe warning. In the US, the civil authorities often order the police to do nothing, to “stand down,” in the face of looting and arson. And they refuse legitimate help. Here, the elected authorities are part-time rioters in their hearts – for whatever reason. The local DAs in Democratic strongholds routinely release rioters on their own recognizance. It’s almost a custom.

It seems to me that in any group, from pre-kindergarten on, there are some who will not regulate themselves unless they feel threatened by powerful and likely punishment. Perhaps it’s a constant proportion of any society. Remove the fear of punishment, and it’s 100% certain someone will do something extreme, destructive, or violent. I don’t like this comment, but I am pretty sure it’s right.

The second thing the riots in France and in the US have in common is that they seem to involve people who don’t feel they have a stake in the current social arrangements. In the French case, it’s easy to guess who they are (a strong guess, actually). Bear with me. In the sixties and seventies, various French governments built massive, decent housing projects outside Paris and other big cities (again: “decent”). I was there myself, working as a minor government city planner. The above-board objective was to move people out of slums. It’s too easy to forget that the plan worked fine in this respect. With rising prosperity, inevitably, the new towns and cities became largely occupied by new immigrants.

Those who recently burned private cars on the Champs-Élysées in Paris are their children and grandchildren. The immigrants themselves, like immigrants everywhere, tend to work hard, to save, and to retain the strict mores of their mostly rural origins. Their children go haywire because the same mores can’t be applied in an urban, developed society. (“Daughter: You may go to the cinema once a month, accompanied by your two cousins; no boys.” “Dad: You are kidding, right?”) Misery is rarely or never an issue. In the French welfare state, it’s difficult to go hungry or cold. I have often observed that French rioters are amazingly well dressed, by American college standards, for example. Incidentally, the same children of immigrants frequently have several college degrees, sometimes advanced degrees. But, fact is, ordinary French universities are pretty bad. Further fact is that, in a slow-growing or immobile economy like France’s, few college degrees matter to one’s chances of employment anyway. The rioters feel that they don’t have a stake in French society, perhaps because they don’t.

Seen on TV, and given their agility and sturdiness, American rioters seem to be in their twenties to early thirties; they are “millennials.” I don’t know what really animates them because I don’t believe their slogans. It’s not only that they are badly under-informed. (For example, they seem to believe that policemen killing African Americans is common practice. It’s not. See my recent article on “Systemic Racism” for figures.) It’s also that they have not specified what remedies they want for the ills they denounce. An “end to capitalism” does not sound to me like a genuine demand. Neither does the eradication of a kind of racism that, I think, hardly exists in America any more. The impression is made stronger by the fact that they don’t have a replacement program for what they seem bent on destroying. (“Socialization of the means of production,” anyone?) Their destructiveness inspires fear, and that may be its only objective.

I don’t know where the American rioters come from, sociologically and intellectually. They are the cohort that marries late or not at all. It is said that many never hope to become homeowners, that they see themselves as renters for life. Few buy cars (possibly a healthy choice in every way, eliminating a normal American drain on one’s finances). I think they firmly believe that the Social Security programs to which they contribute through their paychecks will be long gone by their retirement age. (I hear this all the time, in progressive Santa Cruz, California.) I hypothesize that many of those young people have had the worst higher education experience possible. Let me say right away that I don’t much blame so-called “indoctrination” by leftist teachers; leftists are just not very good at what they do. Most students don’t pay attention, in general anyway. Why would they pay attention to Leftie propaganda? Rather, it seems to me that many spend years in college studying next to nothing, and in vain.

Roughly, there are two main kinds of courses of study in American higher education. The first, covering engineers and accountants and, indirectly, medical doctors and vets, for example, has a fairly straightforward payoff: get your degree, win a fairly well-paying job quickly. Graduates of these fields seldom have a sense of futility about their schooling, though they may be scantily educated (by my exalted standards). The second kind of course of study was first modeled in the 19th century to serve the children of the moneyed elites. I mean “Liberal Arts” in the broadest sense. Its purpose was first to help young people form judgment and, second, to impart to them a language common to the elites of several Western countries. For obvious reasons, degrees in such areas were not linked to jobs (although they may have been a pre-requisite to political careers). Many, perhaps most, of the majors following this pattern are pretty worthless to most of their graduates. A social critic – whose name escapes me, unfortunately – once stated that American universities and colleges graduate each year 10,000 times more journalism majors than there are journalism openings.

As a rule, the Liberal Arts only lead to jobs through much flexibility on the part of both graduates and employers. Thus, in good times, big banks readily hire History and Political Science majors into their lower management ranks on the assumption that they are reasonably articulate and also trainable. Then there are the graduates in Women’s Studies and Environmental Studies who may end up less educated than they were on graduating from high school. It’s not that one could not, in principle, acquire habits of intellectual rigor through endeavors focusing on women or on the environment. The problem is that the spirit of inquiry in such fields (and many more) was strangled from the start by an ideological hold. (One women’s studies program, at UC Santa Cruz, is even called “Feminist Studies” – touching candidness!) It seems to me that more and more Liberal Arts disciplines are falling into the same pit, beginning with Modern Languages. There, majors who are Anglos regularly graduate totally unable to read a newspaper in Spanish but well versed in the injustices perpetrated on Hispanic immigrants since the mid-19th century.

Those LA graduates who have trouble finding good employment probably don’t know that they are pretty useless. After all, most never got bad grades. They received at least Bs all along. And why should instructors, especially the growing proportion on fragile, renewable contracts, look for trouble by producing non-conforming grade curves? The grading standard is pretty much the same (almost) everywhere: you do the work, more or less: A; you don’t do the work: B. But nothing will induce disaffection more surely than going unrewarded when one has the sentiment of having done what the situation required. That’s the situation of tens of thousands of new graduates produced each year. And many of those come out burdened by lifetime debts. (Another rich topic, obviously.)

Incidentally, I am in no way opining that higher education studies should always lead to gainful employment. I am arguing instead that many, most, possibly almost all LA students shouldn’t be in colleges or universities at all, at least not in the manner of the conventional four-year degree (now five or six years).

The college graduates I have in mind, people in their twenties, tend to make work choices that correspond to their life experience devoid of effort. In my town, one hundred will compete for a job as a barista in one of the several thriving coffee shops while, five miles away, jobs picking vegetables that pay 50% to twice as much go begging. I suspect the preference is partly because you can’t dress well in the fields and because they, the fields, don’t provide much by way of casual human warmth the way Starbucks routinely does.

Go ahead, feel free to like this analysis. I don’t like it much myself. It’s too anecdotal; it’s too ad hoc. It’s lacking in structural depth. It barely nicks the surface. It’s sociologically poor. At best, it’s unfinished. Why don’t you give it a try?

A last comment: a part of my old brain is tempted by the paradoxical thought that the determinedly democratic revolt in Belorussia belongs on the same page as the mindless destructiveness in France and the neo-Bolshevik rioting in large American cities.

The Blind Entrepreneur

Entrepreneurs usually make decisions with incomplete information, in disciplines where we lack expertise, and where time is vital. How, then, can we be expected to make decisions that lead to our success, and how can other people judge our startups on their potential value? And even if there are heuristics for startup value, how can they cross fields?

The answer, to me, comes from a generalizable system for improvement and growth that has proven itself: the blind watchmaker of evolution. In this view, the crucial method by which genes propagate themselves is not by predicting their environments, but by promiscuity and opportunism in a random, dog-eat-dog world. By this, I mean that successful genes free-ride on or resonate with other genes that promote reproductive success (promiscuity) and select winning strategies by experimenting in the environment and letting reality be the determinant of which gene-pairings to try more often (opportunism). Strategies that are either robust or anti-fragile usually outperform fragile and deleterious strategies, and strategies that exist within an evolutionary framework that enables rapid testing, learning, mixing, and sharing (such as sexual reproduction or lateral gene transfer paired with fast generations) outperform those that do not (such as cloning), as shown by the Red Queen hypothesis.

OK, so startups are survival/reproductive vehicles and startup traits/methods are genes (or memes, in the Selfish Gene paradigm). With analogies, we should throw out what is different and keep what is useful, so what do we need from evolution?

First, one quick note: we can’t borrow the payout calculator exactly. Reproductive success is where a gene makes more of itself, but startups don’t make more of themselves. For startups, the best metric is probably money. Other than that, what adaptations are best to adopt? Or, in the evolutionary frame, what memes should we imbue in our survival vehicles?

Traits to borrow:

  • Short lives: long generations mean the time between trial and error is too long. Short projects, short-term goals, and concrete exits.
  • Laziness: energy efficiency is far more important than #5 on your priority list.
  • Optionality: when all things are equal, more choices = more chances at success.
  • Evolutionarily Stable Strategies: also called “don’t be a sucker.”
  • React, don’t plan: prediction is difficult or even impossible, but being quick to jump into the breach has the same outcome. Could also be called “prepare, but don’t predict.”
  • Small and many: big investments take a lot of energy and effectively become walking targets. Make small and many bets on try-outs and then feed those that get traction. Note: this is also how to run a military!
  • Auftragstaktik: should be obvious, central planning never works. Entrepreneurs should probably not make any more decisions than they have to.
  • Resonance: I used to call this “endogenous positive feedback loops,” but that doesn’t roll off the tongue. In short, pick traits that make your other traits more powerful – and even better if all of your central traits magnify your other actions.
  • Taking is better than inventing: It’s not a better startup if it’s all yours. It’s a better startup if you ruthlessly pick the best idea.
  • Pareto distributions (or really, power laws): Most things don’t really matter. Things that matter, matter a lot.
  • Finite downside, infinite upside: Taleb calls this “convexity.” Whenever presented with a choice that has one finite and one infinite potential, forget about predicting what will happen; focus on the impact’s upper bound in both directions. It goes without saying: avoid infinite downsides!
  • Don’t fall behind (debt): The economy is a Red Queen, anyone carrying anything heavy will continually fall behind. Debt is also the most likely way companies die.
  • Pay it forward to your future self: squirrels bury nuts; you should build generic resources as well.
  • Don’t change things: Intervening takes energy and hurts diversity.
  • Survive: You can’t win if you’re not in the game. More important than being successful is being not-dead.
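The “small and many” and “finite downside, infinite upside” points above can be illustrated with a toy Monte Carlo sketch. All the payoff numbers below are invented for illustration; the only claim is the structural one: many small bets with capped losses and a long upside tail tend to beat one large, fragile bet, both on average and on survival.

```python
import random

random.seed(42)

def big_bet(capital):
    # One large, fragile bet: usually a modest gain, occasionally total ruin.
    if random.random() < 0.2:
        return 0.0            # total loss
    return capital * 1.5

def many_small_bets(capital, n=20):
    # Split capital into n small bets; each loses at most its stake,
    # but a rare winner pays off disproportionately (the convex tail).
    stake = capital / n
    total = 0.0
    for _ in range(n):
        r = random.random()
        if r < 0.7:
            total += 0.0          # most bets fail, losing only the stake
        elif r < 0.97:
            total += stake * 2    # modest win
        else:
            total += stake * 50   # rare outsized win
    return total

trials = 10_000
big = [big_bet(100) for _ in range(trials)]
small = [many_small_bets(100) for _ in range(trials)]

ruin_big = sum(1 for x in big if x == 0) / trials
ruin_small = sum(1 for x in small if x == 0) / trials
print(f"one big bet:    mean={sum(big) / trials:.1f}, ruin rate={ruin_big:.2%}")
print(f"many small bets: mean={sum(small) / trials:.1f}, ruin rate={ruin_small:.2%}")
```

With these made-up numbers, the portfolio of small bets ends up with both a higher average payout and a ruin rate orders of magnitude lower: going broke requires every one of the twenty bets to fail at once.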

When following these guidelines, there are two other differences between entrepreneurs and genes. One, genes largely exist in an amoral state, whereas your business is vital to your own life and, if you picked a worthwhile idea, to society. Two, unlike evolution, you actually have goals and are trying to achieve something beyond replication, beyond even money. Therefore, you do not need to take your values from evolution. However, if you ignore its lessons, you close your eyes to reality and are truly blind.

Our “blind” entrepreneur, then, can still pick goals and construct what she sees as her utility. But to achieve the highest utility, once defined, she will create unknowable and unpredictable risk of her idea’s demise if she does not learn to grow the way that the blind watchmaker does.

The Non-Partisan Movement We Need: Anti-Authoritarianism

Political/ideological debates have a lot of moving parts, and there are a lot of timely issues to address. Given the marginal impact of anything we do in this sphere (e.g. voting, sharing a blog post on Twitter, or being a solitary voter in a vast sea of the entire 6200 people in this country), it’s only natural that we economize on information and argument, with predictable results. We can’t help but deplete the intellectual commons.

What are some low-cost ways to improve the quality of our discourse?

  1. Value Intellectual humility.
  2. Devalue the sort of behavior that makes things worse.

It bears repeating: value intellectual humility. It’s not easy. I’m as drawn to confident claims as you are. I’ve got a lot of smart people in my bubble, and when they boldly declare something, I tend to believe them. But the “I honestly don’t know” posts deserve more attention and are less likely to get it. Let’s adjust in that direction. I’ll try to write more about things I don’t know about in the future (although I don’t know what that’s going to look like).

It’s a statistical impossibility that, of all the people burned at the stake for heresy or witchcraft or whatever, nobody deserved some punishment, even if it was received through an unfair process. Don’t get me wrong, witch hunts are a bad thing in general, but we can’t discount them as entirely (maybe just 99.9%) unjustified. But cancel culture, like good old-fashioned witch hunts, is doing a lot of harm to the intellectual commons. I’m sure they catch more bad guys than 17th-century Puritans did, but let’s not leave cancellations up to Twitter mobs. Particularly when it comes to cancelling ideas.

Bad ideas don’t need to be cancelled. They need to be crushed under good ideas.

Far be it from me to peddle unreplicated psychological research (confirmation bias alert!), but I tend to believe that there’s something to the claim that the extreme poles of the ideological landscape exhibit some unsettling traits: narrow-mindedness, authoritarianism, and apparently Machiavellianism, narcissism, and psychopathy.

“Narcissistic psychopath” is not a label I’d like to see bandied about because it’s just too close to ad hominem. But “authoritarian” is a term I’d like to see more widely used as a pejorative, regardless of the position taken by would-be authoritarians.

Let’s quit with the shouting, cancelling, flag waving, and blindly taking reactionary positions. Invite debate, and invite holding people accountable. But letting Twitter be the last word is as absurd as letting Helen Lovejoy-esque moral scolding decide how things should be.

But then again, maybe I’m wrong.

Women in my life

So, my granddaughter – 12 – says I am “sexist.” She is not completely sure what it means but she knows it’s pretty bad. So, I drop the two boogey boards I am carrying across the beach for her. I think she gets it.

I don’t know how many of you noticed, but the more rights women conquer, and the more recognition women receive in all areas of life, the angrier the average woman seems to be. I know, I know, correlation is not causation, but it’s a good reason to wonder about causation. And how about the surging number of bad and aggressive female drivers? Is it just me making things up in my mind?

My wife Krishna says women are vaguely unhappy because, even if unconsciously, they miss the animals, the catcalls and the wolf-whistles, and a broad variety of bird sounds.

When hard work doesn’t equal productive work

In March 2020, David Rubenstein gave an interview in which he lamented the vanishing of a system in which “hard work” guarantees success. While the source of the nostalgia is understandable, there is an epistemological problem with the conjoined assumptions underlying the concept of hard work and what a “system” promises, i.e. if one works hard, then one becomes successful. The issue appears to be one of qualifying and quantifying “hard work.”

My previous article “An aspirational paradox” mentions Abigail Fisher and her failed lawsuit against University of Texas – Austin over her non-acceptance to the institution. The case was a painful example of the disillusionment which must follow when believers in the exceptionality of the commonplace are finally made aware of its mediocrity. The Fisher saga represents the modern tragedy of familial ambition: a child’s parents place her on a systemic path, promised by wise public-school teachers and caring guidance counselors to lead to success, only to discover that the end is the furnace of Moloch. Caveat emptor.

The strange, disembodied entity called “the system” doesn’t fail; what fails is individual and collective concepts of what the system is and what it requires. Mankind has a capacity for filling a void of ignorance with figments of its imagination. In general, such practice is harmless. But when a person believes his own creation and builds his future upon it, that is when the ‘systemic failure’ narrative begins.

Drawing again from my own encounters, for many years I knew a music teacher who believed that one must never listen to repertoire. Yes, you read that correctly: a teacher of an aural art form believed that listening to music is detrimental. The person had many long, pseudo-pedagogical explanations for this peculiar belief. His idea was atypical. Professors at the world’s top conservatories and musicians from major ensembles all emphasize listening as a crucial part of study. Listening as a formal component of music study dates to the invention and mass distribution of the phonograph in the early 20th century. Even further back, students attended live concerts.

This teacher had a pedagogical system built around his beliefs, which included that students should neither learn basic keyboard skills nor how to play with accompaniment. Unsurprisingly, students who adhered to his system didn’t progress very well. Problems ranged from poor intonation and lack of ensemble skills to arriving for college auditions with no grasp of appropriate repertoire. Feedback from competitions was kind but completely honest. The more students failed, the more obstinately he insisted that political maneuverings or class biases were to blame. “The system,” by which he meant auditions, was “broken,” designed to not give people a “fair chance.”

Sadly, this man affected a large number of students, many of whom worked hard – practicing long hours, racking up credits, participating in multiple ensembles – only to discover that their “system” was a fraud. All of their hard work was for naught.

There was one particularly heartbreaking case of a young woman who applied to a fairly prominent private university. By her own account, her audition was catastrophic. In the lead up to the audition, she did her best to ensure success; she had two lessons a week, increased her daily practice time by an hour, and played along to background recordings. The amount of work she did, measured in terms of effort and time spent, was brutal. But she didn’t pass the audition and was understandably devastated.

A system she had followed religiously since fourth grade had failed her; moreover, her hard work was guaranteed to fail. There was no way for her to succeed based upon her training. In some ways, this girl’s story parallels Abigail Fisher’s history. For years both put in hours of effort only to discover that they had misjudged and misplaced their energies. Bluntly, these young women worked hard but not strategically.

The failure of these girls was unrelated to the broader “system,” whether that system was auditions or college applications. To argue that “the system” is broken on the basis that hard work is not rewarded is irrational, albeit understandable on an emotional basis. Before rushing off to denounce “the system” for not rewarding hard work, one should critically examine the foundational premise and ask: Was this hard work or was it productive work?

Slate Star Codex and the rationalists

Rick first alerted me to the end of the popular rationalist blog Slate Star Codex. Then it was all over my internet. I have never been a huge fan of the rationalist community, mostly because they don’t do history very well, but this is a big deal.

It has also produced some great conversation on both sides of the American cultural divide. Gideon Lewis-Kraus wrote an excellent meta-piece on the whole affair. Lewis-Kraus uses “Silicon Valley” as shorthand for the intellectual right. This is more correct than wrong, even though the region votes Democrat, because Silicon Valley is more of a mindset than a geographic place.

Lewis-Kraus’s Silicon Valley is a new, decentralized informational ecology. He contrasts Silicon Valley with the old media: big corporations trying to maintain a stranglehold on “the narrative.” (Lewis-Kraus readily admits he’s part of the old media.) For Lewis-Kraus, Silicon Valley is trying to build an alternative mediascape. Big corporations such as the NY Times are fighting back.

It’s an interesting cultural war to follow, if you’re into that kind of stuff. I can’t seem to shake my uneasiness about the rationalist community, though. As I mentioned, they don’t do history, or they don’t do it well. They are also into communes, which I distrust immensely. Utopian and communitarian experiments are bad for all of your healths (physical, emotional, etc.). I don’t know how the rationalists ended up on the side of Silicon Valley. My guess is that the big corporations didn’t like what the rationalists had to say, or how they lived, so the rationalists found solace in the decentralized ecology of Silicon Valley.

I think the verdict is still out on who the victor of this cultural war will be. The big corporations have government backing, and they own the narrative bought by most of the American public, but the old media has shown its true colors in how it covers Donald Trump. I didn’t vote for the guy but it’s obvious his administration is not being reported on by the old media; it’s being slandered and attacked, with lies or with small untruths, rather than objectively reported on. The rationalists and their decentralized allies in the Silicon Valley informational ecology at least have truth on their side. Not the truth, but a commitment to the truth by way of discussion, the sharing of information, and fighting to protect the freedom of everybody’s conscience, rather than just their own team’s conscience.

We live in interesting times, and this makes blogging – a decentralized activity if there ever was one – all the more important.

The importance of gardening, isonomia, federation, and free banking

I’ve recently taken up gardening, in a very amateurish way. Right now I’ve got two plants growing out of a bucket filled with dirt. I water them every day. I talk to them. I rotate them so that different sides face the sun at different times of the day. I spray them with water, too. I have no idea what they are. I suspected they might be peppers, but I’m not sure now because there are tiny white flowers that bloom and then quickly wilt away.

I plan on building a few garden beds when I finally buy a house.

I have become convinced that if Charlie Citrine had simply taken up gardening he would not have gotten into all that trouble.


As a libertarian, I think three topics are going to be huge over the next few decades: 1) inequality, 2) foreign policy/IR, and 3) financial markets. Libertarians have great potential on all three fronts, but they also have some not-so-great alternatives.

1) Libertarians are terrible on inequality. We try to ignore it. Jacques’ debt-based approach to reparations for slavery is as good as any for addressing inequality in the US. In addition to reparations for slavery, I think Hayek’s concept of isonomia is a great avenue for thinking through inequality at the international level. (I even thought about renaming this consortium “Isonomia” at one point in time.) Isonomia argues for political equality rather than any of the other equalities out there.

2) I think federation as a foreign policy is a great avenue for libertarians to pursue. It’s much better than non-interventionism or the status quo. It’s more libertarian, too. Federation addresses the questions of entrance and exit. It allows for political equality and market competition and open borders. It also takes into account bad international state actors like Russia and China. Dismantling the American overseas empire is needed, but large minorities want the US to stay in their countries. Leaving billions of people at the mercy of illiberal states like Russia and China is morally repugnant and short-sighted (i.e. stupid). It’d be better to dismantle the American empire via federation.

3) Free banking is a wonderful way forward for libertarians to address financial markets. Finance is a boogieman for the Left and can be used as a scapegoat on the Right. They’re not wrong. Financial markets need to be reexamined, and libertarians easily have the best alternative to the status quo out there.

A paradox

You know those little floaters on the surface of your eyes? They drift into view, catch your attention, then when you try to focus directly on one it disappears from view. They’re only really there if you don’t look straight at them.

Goodhart’s Law tells us that “When a measure becomes a target, it ceases to be a good measure.” The same basic logic applies to two of my favorite things: the Internet and college.

The Internet is still a magical thing, but we’ve killed some of the magic by trying to take the Internet seriously. The Internet ceases to provide output worth taking seriously when people actually take it seriously. Only when you keep it in your periphery is it actually worth taking seriously.

Ditto for college. The basic problem with the current system is that we’re all taking it too seriously. That leads to all sorts of specific bad behavior. But it all comes from this root problem. College is only worth taking seriously if we don’t. When college is back in the ivory tower, separated from the “real” world, it’s a place where people can be creative and make non-obvious connections. But once we recognize “hey, that’s a pretty neat thing, let’s make it a one-size-fits-all solution to all of our problems” we kill the goose that lays the golden eggs.

My advice for getting the most out of the Internet: don’t take it too seriously. It was only ever meant to be a place for weirdos to do weird stuff.

My advice for getting the most out of college: don’t take it too seriously. It was only ever meant to be a place for weirdos to do the sort of stuff that the rest of the world doesn’t have time for.

Hazony’s nation-state versus Christensen’s federation

Yoram Hazony’s 2018 book praising the nation-state has garnered so much attention that I thought it wasn’t worth reading. Arnold Kling changed my mind. I’ve been reading through it, and I don’t think there’s much original criticism of the book that I can offer.

The one thing I’ll say that others have not is that Hazony’s book is not the best defense of the status quo and the Westphalian state system out there. It’s certainly the most popular, but definitely not the best. The best defense of the status quo still goes to fellow Notewriter Edwin’s 2011 article in the Independent Review: “Hayekian Spontaneous Order and the International Balance of Power.”

Hazony’s book is a defense of Israel more than it is a defense of the abstract nation-state. Hazony’s best argument (“Israel”) has already been identified numerous times elsewhere. It goes like this: the Holocaust happened because the Jews in mid-20th century Europe had nowhere to go in a world defined by nationalism. Two competing arguments arose from this realization. The Israelis took one route (“nation-state”), and the Europeans took another (“confederation”). Many Jews believe that the Israelis are correct and the Europeans are wrong.

My reasoning proceeds from this fact as follows: the EU has plenty of problems, but nothing on the scale of the Gaza Strip or the constant threat of annihilation by hostile neighbors (and rival nation-states).

The European Union and Israel are thus case studies for two different arguments, much like North and South Korea or East and West Germany. The EU has been bad, so bad in fact that the British have voted to leave, but not so bad that there has been any genocide or mass violence or, indeed, interstate wars within its jurisdiction. Israel has been good, so good in fact that it now has one of the highest standards of living in the world, but not so good that it avoided creating something as awful as the Gaza Strip or making enemies out of every single one of its neighbors.

To me this is a no-brainer. The Europeans were correct and the Israelis are wrong. To me, Israelis (Jewish and Arab) would be much better off living under the jurisdiction of the United States or even the European Union rather than Israel’s. They’d all be safer, too.

What I learned in my bachelor’s degree

I took my bachelor’s degree in History between 2001 and 2005. All the people I asked told me that the course I took was the best in the country. I suppose they were right, but today I understand that they were predominantly talking about the graduate program at the same school. A department with good master’s and doctoral programs does not necessarily translate into a good undergraduate degree, in the same way that good researchers and writers are not necessarily good teachers. Most of my professors were very bad teachers. I hope I am saying this without bitterness or arrogance, simply recognizing that although they were good academics, they were mostly not good at imparting knowledge.

Perhaps one of my professors’ difficulties in transmitting knowledge was precisely the constant questioning about the validity of transmitting knowledge. Brazilian pedagogy is strongly influenced by a form of social-constructivism created by educator Paulo Freire. Freire strongly insisted that teachers could not be transmitters of knowledge, but that students created knowledge on their own, and that teachers were, if at all, facilitators of this process. At least that’s what I understood or is what I remember from my pedagogy classes. Paulo Freire’s pedagogy is admittedly a translation of Marxism into the teaching field: students are the oppressed class, teachers are oppressors. Freire wanted pedagogy to reflect a classless society. The result, in my view, was that teachers were terrified of being seen as “the owners of the truth”.

My bachelor’s degree had the bold goal of training teachers and researchers at the same time. In my view, this created a difficulty: students needed to learn to cook and to be food critics at the same time. It was not an easy task for people of 18 or 20 years of age. Most classes ended up being quite weak. Another problem was that my post-Marxist professors wanted us to have a critical attitude: we needed to be critical of everything that was understood as “traditional”. This ended up creating distrust in the students’ minds: if everything is to be criticized, what should I believe? Of course, contradictorily, what the professors said was not to be criticized, especially the proposition that everything should be criticized. In general, the program tended to turn people of 18 or 20 into boisterous or confused students. Or both.

Another experience of my bachelor’s degree was the encounter with party politics. In my high school, I had little contact with highly politicized people or student unions. The same cannot be said of my undergraduate studies! I met many people who were already involved to some degree with political parties, always on the left. Some people say that Christians are the main reason churches are empty. I can say something similar about my undergraduate colleagues. It is largely thanks to them that I became conservative. The hypocrisy, the aggressiveness, the arrogance of many of them made me suspect that there was something very wrong with the left. It took a few years, but eventually, I discovered classic liberal or libertarian authors and found my intellectual home.

But there were positive things about my undergraduate studies as well. They were my first great opportunity to leave home a little more. I met some people with whom I am still friends today. And I had some good classes too. Some professors were more conservative and largely ignored the department’s directives. Their classes were more traditional, more expository, more dedicated to informing us about things that happened in history, without much questioning. I remember a quote from my professor of Contemporary History I (roughly equivalent to the 19th century): “When writing your paper, don’t say ‘I think…’. You don’t think anything. When you are in the master’s or the doctorate, you will think something. Today, simply write ‘author so-and-so says…’. Be able to understand what the authors are talking about. That is enough for you today.” There were also professors who were able to introduce a more critical perspective, but in a less radical way.

Perhaps my biggest disappointment with my undergraduate studies is that I almost didn’t get to teach History. The education system in Brazil is essentially socialist. The government assumes that everyone has the right to free, good-quality education. And you know: when the government says you have a right to something, you’re not gonna get it, it will be expensive, and it will be of poor quality. The life of a teacher in Brazil is quite harsh. I have several friends in the teaching profession, and I am very sorry for them. Maybe I should have listened to my mom and studied engineering.

But I don’t want to end on a bitter note! I studied History because I really wanted to be a teacher. I still think that being a teacher is a beautiful vocation. Unfortunately, in Brazil, this vocation ends up being spoiled by undue state intervention. I also studied History simply because I liked History, and I still do. If I had the mind I have today, I would possibly have studied something else. But I didn’t, and I am grateful for the way my life happened.

What I learned in the Master’s

In my master’s degree, I studied international relations. As far as I can judge, the program was very good. Excellent, even. Those were two very good years, in which I was challenged like never before. The master’s degree was very difficult for me. I was very curious about international affairs, but I knew almost nothing about international relations theory. The professors assumed that students were at least familiar with the content. I was not. So, I went through the experience of learning to cook and learning to be a culinary critic at the same time. I had to do a lot of catching up. But it was good. The master’s taught me, like no previous experience, to study on my own.

Looking back, I understand that the program was strongly influenced by a light form of postmodernism. That was very difficult for me. There was a strong rejection of more traditional theories of international relations, such as realism and liberalism. It was all very new to me, but I knew that being a classic realist was not an option well regarded by the professors. I ended up finding a kind of lifeboat in constructivism. I didn’t want to be ashamed of being a realist, but my intuition told me that there was something wrong with postmodernism. It was only after the master’s degree, teaching the theory of international relations and studying several other things, that I understood that postmodernism is really crazy, something deeply twisted.

Constructivism is also largely weird. The most sensible thing I read in international relations was John Mearsheimer’s offensive realism. Stephen Walt is another author who made sense to me in my post-master’s life. In short, I admire my master’s program for its academic excellence, but I find the theories espoused by several of the professors completely flawed.

It was very difficult for me to write my dissertation. I did not have a clear theoretical basis, just the instinct that I did not want to follow a postmodern line and the certainty that a more traditional theory would not be well accepted. I wrote the dissertation without having a very solid theoretical basis. But my research, modesty aside, was still very well done. I researched the arrival of the first Protestant missionaries to Brazil in the 19th century.

It was a topic of personal interest. I was a recently converted Protestant, and I wanted to know more about my history. As they say in Brazil, I joined hunger with the desire to eat. My question, which I was not able to ask so clearly at the time, was whether the presence of missionaries in Brazil, the majority coming from the USA, had affected Brazil-United States relations in any way. Even today, I find it very difficult to analyze causality in such cases, as someone would do in the hard sciences, but I believe that with the information I gathered I can defend that yes, American Protestant missionaries affected Brazil-US relations in many ways. Brazil and the USA were predominantly disinterested in each other in the early 19th century.

In the late 19th and early 20th centuries, this situation changed dramatically, especially on the part of Brazil. The USA started to play a central role in Brazilian foreign policy. It does not seem to me to be the case that the missionaries caused this change, but I believe that their presence in Brazil cooperated, along with other factors, to make this happen. Would Brazil change its foreign policy at the end of the 19th century in one way or another? This is a type of question that, honestly, I’m not interested in answering. But I believe it is clear that the missionaries helped the two countries to become a little more aware of each other.

I faced some opposition from colleagues for choosing this topic. One of the things I heard was that, being a Protestant, I would not have the necessary distance to do good research. I also heard that missionaries were little more than tourists, and that they would therefore have no chance of affecting relations between the two countries. These were harsh criticisms, which still make me sad when I remember them. I see in these criticisms a certain prejudice against evangelicals that is still present in Brazil, inside and outside academia. Ironically, I did not find the same attitude among the professors. On the contrary! Every one of them was always very supportive of my research, and in fact they found the topic interesting and pertinent.

I would very much like to be able to return to the topic of my research with the head I have today, but I don’t have time for that. To some extent, I would also like to go back to those classes knowing the things I know now. But I also believe that I would not have that much patience. I have a better notion of what I consider epistemologically valid or not. I suppose the master’s degree would be more difficult to take today. Anyway, the master’s degree gave me my first job as a professor: I started teaching international relations before I had even defended my dissertation, and I did it for eight years. Those were very good years. Although I have moved away from this area, I still like what I learned, and I still feel the benefit of the time I spent studying and teaching international relations.

The importance of biography

There is a now-out-of-print children’s book series entitled “Childhood of Famous Americans,” published as a subdivision of the Landmark Books series between 1950 and 1970. When I was between the ages of six and ten, I was fortunate to be able to read almost all of the books, which were, unsurprisingly, the biographies of prominent Americans written for children. Even when I was little, the books were fairly ancient: the most recent subjects they covered were Eleanor and Franklin D. Roosevelt and Albert Einstein. Despite, or even because of, their relative antiquity, these books had a major impact on my own trajectory.

This is not to say that they weren’t flawed; they were. They were often riddled with historical inaccuracies, the quality of the writing varied wildly from author to author, and the content could be outright offensive with regard to religion and race. The overall series, however, did a very good job of including biographies of Americans from minority groups, though, depending on the subject, author, and time period, the portrayals of other races could be quite insensitive.

The books all followed the Joseph Campbell theory of story to a T, with the result that they were very good stories. A critic might argue that these biographies lionized or apotheosized individuals in an unrealistic way. While such an accusation would be true, the series was titled “Famous Americans,” not “average Joe Americans.” The important trait of these books though was that they all shared a common theme: stature was a choice and one that was made in childhood or adolescence.

Using Campbell-ian terms, the moment of awakening was almost invariably an episode where the subject realized that the people surrounding him or her were stupid, fearful, and conventional – Mark Twain being expelled from multiple schools, Abraham Lincoln denied an education by his illiterate father (as I said, not all of the stories were tremendously accurate), Henry Clay fighting for his inheritance rights against his extended family, Jim Thorpe struggling against racial and social prejudice throughout his sporting career.

On a side note, there was a remarkable absence of American fine arts figures in the series. Mark Twain was one of a handful of writers that included Edgar Allan Poe and Louisa May Alcott; I don’t recall more sophisticated writers, such as Washington Irving, Henry James, or Edith Wharton, receiving the honor. One could say that the absence of fine artists was countered by an equal absence of career military men. Dwight D. Eisenhower had a biography, as did George Armstrong Custer (his was most uncomplimentary). Robert E. Lee and Ulysses S. Grant both received a book. I’m sure that there’s room for analysis of the vision of civil society expressed in whom the series’ editors decided to cover.

The “Childhood of Famous Americans” series only rarely had a specific antagonist. Some combination of self-satisfied parents, authority figures attached to a status quo, and parochial small-mindedness served as the villains. The subject’s daily obstacles were educational and cultural mediocrity, societal complacency, intellectually inferior peers, and timorous and incapable mentors, who by extension weren’t very good at their job.

Fundamentally, the goal of the series was to create role models for young readers. The model proposed was complete rejection of (and a little healthy contempt for) existing systems. The unifying theme among all the people selected was the tradition of “rugged individualism” and the idea that progress was due to the action of individuals, not that of their communities (recall, the village inhabitants were invariably shown as small-minded, poor-spirited morons).

Carl J. Schramm argued in his 2006 book The Entrepreneurial Imperative that the “rugged individual” ethos was an American casualty of post-World War II society. Americans turned more toward the concept of the “workforce,” with its communal overtones, and away from individual achievement and success. The peak of statist, stagnant communitarianism came in the 1970s, the decade in which the “Childhood of Famous Americans” also ceased publication.

Both biography and the entrepreneurial spirit speak of a path to personal greatness, a way for individuals to emancipate themselves from their origins if they have sufficient will. The loss of biography and of an entrepreneurial ethos indicates an impoverishment in role models. Without role models of individualistic thought or practice, most people lack the originality to conceive of ways of life beyond their current existence. Discontent and feelings of betrayal by “the system,” society, or the status quo are the ultimate result.

Today, we are confronted by the implosion of the post-WWII status quo. To further complicate matters, the majority of the adult population lacks a blueprint for either challenging what remains of the status quo, or for forging a new path. Without the proper role models of individuality, shown in biography, such people are in thrall to the false promise of communitarianism.

How much more progressive is the corporate world than academia?

Academia is a hotbed of leftism and has been for centuries. At the same time, it’s also one of the most conservative institutions in the Western world. I don’t think this is a coincidence. Leftists are conservative.

The recent writings of Lucas, Mary, and Rick have done well to highlight not only academia’s shortcomings but also some great alternatives, but what about stuff like this? The link is an in-depth story on how senior professors use their seniority to procure sexual favors from their junior colleagues. There is, apparently, not much universities can do about it, either.

If a manager within a corporation tried any of the stuff listed in the report, he or she would be fired immediately. Sexual harassment is still an issue in the corporate world, but it is much, much easier to confront there than it is in academia. The same goes for government work. Even in the 1990s, the President of the United States couldn’t get away with a blow job from a young intern without dire consequences.

What makes academia so different from corporate and government work? Is it tenure? Is it incentives? In the corporate world profits matter most. In government, “the public” matters most. In academia, it’s publish or perish. I don’t think this has always been the case. I think the publish-or-perish model has only been around since the end of World War II. Something is horribly wrong in academia.

In the meantime: corporations, churches, governments (it pains me to say this, but it’s true, especially when compared with academia), and all sorts of other organizations continue to experiment with social arrangements that attempt to make life better and better.

Pathologies in higher education: a book, a review, and a comment

Cracks in the Ivory Tower, by Jason Brennan and Phillip Magness, brings a much needed discussion of the pathologies of US higher education to the table. Brennan and Magness are two well-known classical liberals with a strong record of thoughtful interaction with Public Choice political economy.

Public Choice is an application of mainstream economic concepts to political situations. One of its key points is that people are self-interested and rational, and that this drives the choices they make. But people also act within formal and informal institutional environments, which constrain and enable their choices to a large degree. In other words, people react to incentives.

The Public Choice approach is not so much a normative handbook, but rather an attempt to explain how politics operate. The application of this theory to understand higher education in the US is a welcome addition to a growing literature on the economics of higher education.

It is perhaps surprising that the subtitle of the book stresses an aspect that tends to be extraneous to Public Choice scholarship: “The Moral Mess of Higher Education”. Of course, we all draw on moral reasoning and assumptions in order to pass judgment on economic and political phenomena, but normally the descriptive side is kept separate – at least by economists – from explicit value judgment.

John Staddon, of Duke University, has reviewed Brennan and Magness’s book. In his review, he focuses on three key issues. First, colleges and universities act on distorted incentives created, for example, by college rankings, recruiting students in ways that are not necessarily related to maintaining or expanding the academic prestige of the institution.

Second, teaching in higher education, at least in the US, is poorly evaluated. Historically, it has shifted from student evaluation to administrative assessment.

So why the shift from student-run to administration-enforced? And why did faculty agree to give these mandated evaluations to their students? Faculty acquiescence — naiveté — is relatively easy to understand. Who can object to more information? Who can object to a new, formal system that is bound to be more accurate than any informal student-run one? And besides, for most faculty at elite schools, research, not teaching, is the driver. Faculty often just care less about teaching; some may even regard it as a chore.

The incentives for college administrations are much clearer. Informal, student-run evaluations are assumed to be unreliable, hence cannot be used to evaluate faculty for tenure and promotion. But once the process is formalized, mandatory, and supposedly valid, it becomes a useful disciplinary tool, a way for administrators to control faculty, especially junior and untenured faculty.

This is not necessarily conducive to improvement in the quality of teaching. Perhaps colleges fare better than universities here, given that their faculty are not expected to allocate a large number of hours per week to research and writing.

Third, Brennan and Magness offer a critique of what is known in the US system as “general education” courses. In their view, it is clear that those courses are unhelpful in a world where academic disciplines are increasingly more specialized. However, offering those courses is a good excuse for universities to grab more money from the students.

This is where Staddon begs to differ:

Cracks in the Ivory Tower usefully emphasizes the economic costs and benefits of university practices. But absent from the book is any consideration of the intrinsic value of the academic endeavor. Remaining is a vacuum that is filled by two things: the university as a business; and the university as a social activist.  Both are destructive of the proper purpose of a university.

I tend to agree with this point, and I do not think it is a minor one. We can have colleges and universities without football, without gigantic administrative bureaucracies, and without the gimmicks used to game the college ranking system. I could even go further and argue that we should have colleges and universities without dorms and without the artificial second, and worse, version of the teenage years they foster, just when students are supposed and expected to behave like adults. Getting rid of these tangential features of US higher education should help refocus on knowledge and reduce costs.

Colleges and universities in the US are also expensive and unnecessarily inflated because of the structure of the student loans system, which also generates perverse incentives. But this point has been explained and described to exhaustion in the economic literature. This also has to change.

However, I am not convinced that making universities focus on professionalizing their students would be the best way to go. Brennan and Magness raise some important issues and concerns, some of which also apply outside the US, but Staddon highlights an important counterpoint in his review: higher education, at least at the undergraduate level, shouldn’t be seen purely as an investment good; it is also a consumer good:

Higher education does not exist for economic reasons. It exists (in the famous words of Matthew Arnold) to transmit “the best that has been thought and said,” in other words the ‘high culture’ of our civilization. Job-related, practical training is not unimportant. Universities, and much else of society, could not exist without a functioning economy. But — and this point is increasingly ignored on the modern campus and by the authors of CIT — these things are not the purpose, the telos if you like, of a university.

Undergraduate education is there to hand over knowledge to the next generation. It can be small and cheap. You need an adequate building, a small library with the best classic books, electronic access to journals, and faculty that excels at teaching. Courses would be general, comprehensive, and interdisciplinary by definition. The program could last only three years. An optional additional year could be offered to those with an academic profile, where they could pursue more specialization as a bridge to graduate education.

This is more or less the mediaeval model. I am not sure we need to reinvent the wheel in order to deal with the crisis of higher education. What we need is to get back on track – back to the bread and butter of college education. This is a reflection that both sides of the story – those who demand education and those who offer it – need to make.


Read more:

In a contribution to Notes on Liberty, Mary Lucia Darst recently commented on the status of higher education during the 2020 pandemic and its prospects for the future.

I also wrote about the college trap in the US a few years ago.