Three Roads to Racism

Are you a racist?

Anyone can feel free to answer this question any way he, she, it, or they wish. And that’s the problem. In this short essay, I aim first to do a little vocabulary housekeeping. Second, I try to trace three distinct origins of racism. I operate from thin authority. My main sources are sundry un-methodical readings, especially on slavery, spread over fifty years, and my amazingly clear recollection of lectures by my late teacher at Stanford, St. Clair Drake, in the sixties. (He was the author of Black Metropolis, among other major contributions.) I also rely on equally vivid memories of casual conversations with that master storyteller. There you have it: I am trying to plagiarize the pioneer St. Clair Drake. I believe the attempt would please him, though possibly not the results.

Feel free to reject everything I say below. If nothing else, it might make you feel good. If you are one of the few liberals still reading me, be my guest and get exercised. Besides, I am an old white man! Why grant me any credence?

That’s on the one hand. On the other hand, in these days (2020) obsessed with racism, I never see or hear the basic ideas about racism set down below expressed in the media, in reviews, or online, although they are substantially more productive than what’s actually around. I mean that they help one arrive at a clearer and richer understanding of racism.

If you find this brief essay even a little useful, think of sharing it. Thank you.

Racism

“Racism” is a poor word because today it refers at once to thoughts, attitudes, and feelings, and also to actions: individual actions, collective actions, and even policies. Some of these policies may be considered part of so-called “systemic racism,” about which I wrote in my essay “Systemic Racism: a Rationalist Take.”

The mishmash between what’s in people’s heads and what they actually do is regrettable on two grounds. First, the path from individual beliefs, thoughts, and attitudes, on the one hand, to individual action, on the other, is not straightforward. My beliefs are not always a great predictor of my actions because reality tends to interfere with pure intent.

Second, collective action and, a fortiori, policies rarely look like the simple addition of individual actions. People act differently in the presence of others than they do alone. Groups (loosely defined) are capable of greater invention than are individuals. Individuals in a group both inspire and censor one another; they even complete one another’s thoughts; some often give the others courage to proceed further.

This piece is about racism the attitude: the understandings, the attitudes, the collections of beliefs that predispose individuals and groups to thinking of others as inferior and/or unlikable on the basis of some physical characteristics. As I said, racism so defined can be held individually or collectively. Thus, this essay is deliberately not about actions, programs, or failures to act inspired by racism, the attitude. That’s another topic others can write about.

Fear and loathing of the unknown

Many people seem to assume that racial prejudice is a natural condition that can be fought in simple ways. Others, on the contrary, see it as ineradicable. Perhaps it all depends on the source of racism. The word means prejudgment about a person’s character and abilities based on persistent physical traits that are genetically transmitted. Thus, dislike of that other guy wearing a ridiculous blue hat does not count; neither does hostility toward one sex or the other (or the other?). I think both assumptions above – racism as natural and as ineradicable – are partly, but only partly, true. My teacher St. Clair Drake explained to me once, standing in the aisle of a Palo Alto bookstore, that there are three separate kinds of racial prejudice, of racism, with distinct sources.

The first kind of racism is rooted in fear of the unknown or of the unfamiliar. This is probably hard-wired; it’s human nature. It would be a good asset to have for the naked, fairly slow apes that we were for a long time. Unfamiliar creature? Move away; grab a rock. After all, those who look like you are usually not dangerous enemies; those who don’t, you don’t know and why take a risk?

Anecdote: A long time ago, I was acting the discreet tourist in a big Senegalese fishing village. I met a local guy about my age (then). We had tea together and talked about fishing. He asked me if I wanted to see his nearby house. We walked for about five minutes to a round adobe construction covered in thatch. He motioned me inside where it was quite dark. A small child was taking a nap on a stack of blankets in the back. Sensing a presence, the toddler woke up, opened his eyes, and began screaming at the top of his lungs. The man picked him up and said, very embarrassed, “I am sorry, my son has never seen a toubab before.” (“Toubab” is the local, not unfriendly word for light-skinned people from elsewhere.)

Similarly, Jared Diamond recounts (and shows corresponding pictures in his book, The World Until Yesterday: What Can We Learn from Traditional Societies? Viking: New York) how central New Guinea natives became disfigured by fear at their first sight of a white person. Some explained later that they thought they might be seeing ghosts.

Night terrors

The second distinctive form of racism comes simply from fear of the dark, itself rooted in dread of the night. It’s common to all people, including dark-skinned people, of course. It’s easy to understand once you remember that beings who were clearly our direct ancestors, people whose genes are in our cells, lived in fear of the darkness night after night for several hundred thousand years. Most of their fears were justified because the darkness concealed lions, leopards, hyenas, bears, tigers, saber-toothed cats, wolves, wild dogs, and other predators, themselves with no fear of humans. The fact that the darkness of night also encouraged speculation about other hostile beings – varied spirits – that did not really exist does not diminish the impact of this incomplete zoological list.

As is easy to observe, the association dark = bad is practically universal. Many languages have an expression equivalent to “the forces of darkness.” I doubt that any (though I can’t prove it right now) says “the forces of lightness” to designate something sinister. The same observation applies to “black magic” and to disappearing into a “black hole.” Similarly, nearly everywhere, uneducated people, and some of their educated betters, express some degree of hostility, mixed with contempt, for those in their midst or nearby who are darker than themselves. This is common among African Americans, for example. (Yes, I know, it may have other sources among them specifically.)

This negative attitude is especially evident in the Indian subcontinent. On a lazy day, thirty years ago in Mumbai, I read several pages of conjugal want ads in a major newspaper. I noticed that 90% of the ads for would-be brides mentioned skin color in parallel with education and mastery of the domestic arts. (The men’s didn’t.) A common description was “wheatish,” which, I was told by Indian relatives, means not quite white but pretty close. (You can’t lie too shamelessly about skin tone because, if all goes well, your daughter will meet the other side in person; you need wiggle room.) In fact, the association between skin color and likability runs so deep in India that the same Sanskrit word, “varna,” designates both caste and color (meaning skin complexion). And, of course, there is a reason why children everywhere turn off the light to tell scary stories.

In a similar vein, the ancient Chinese seem to have believed that aristocrats were made from yellow soil while commoners were made from ordinary brown mud. (Cited by Harari, Yuval N. 2015. Sapiens: A Brief History of Humankind. Harper: New York.)

Some would argue that these examples represent ancestral fears mostly left behind by civilized, urban (same thing) people. My own limited evidence, both personal and observational, is that it’s not so. It seems to me that fear of the dark is the first or second page of the book of which our daily street-lit, TV-illuminated bravado is the cover. Allow a couple of total power stoppages (as Californians experienced recently) and it’s right there, drilling into our vulnerable minds.

Both of the first two kinds of negative feelings about that which is dark can be minimized, the first through experience and education: No, that pale man will not hurt you. He might even give you candy, or a metal ax. The second source of distaste for darkness has simply been pushed into a kind of secondary relevance by the fact that today most people live most of the time in places where some form of artificial lighting is commonplace. It persists nevertheless where it is shored up by a vast and sturdy institutional scaffolding, as with the caste system of largely Hindu India. And it may always be present somewhere in the back of our minds, but mostly we don’t have a chance to find out.

The third source of hostility toward and contempt for a dark appearance is both more difficult to understand and harder to eliminate or even to tamp down. Explaining it requires a significant detour. Bear with me, please.

The origins of useful racism

Suppose you believe in a God who demands unambiguously that you love your “neighbor,” that is, every human being, including those who are not of your tribe, even those you don’t know at all. Suppose further that you are strongly inclined toward a political philosophy that considers all human beings, or at least some large subcategory of them, as fundamentally equal, or at least equal in rights. Or imagine rather that you are indifferent to one or both ideas but that you live among neighbors 90% of whom profess one, and 80% both beliefs. They manifest and celebrate these beliefs in numerous and frequent public exercises, such as church services, elections, and civic meetings where important decisions are launched.

Now a second effort of imagination is required. Suppose also that you or your ancestors came to America from the British Isles, perhaps in the 1600s, perhaps later. You have somehow acquired a nice piece of fertile land, directly from the Crown or from a landed proprietor, or by small incremental purchases. You grow tobacco, or indigo, or rice, or (later) cotton. Fortune does not yet smile on you because you confront a seemingly intractable labor problem. Almost everyone else around you owns land and thus is not eager to work for anyone else. Just about your only recourse is temporarily un-free young men who arrive periodically from old Britain, indentured servants (sometimes also called “apprentices”). Many of them are somewhat alien because they are Irish, although most of them speak English, or some English. Moreover, a good many are sickly when they land. Even the comparatively healthy young men do not adjust well to the hot climate. They have little resistance to local tropical diseases such as malaria and yellow fever. Most don’t last in the fields. You often think they are not worth the trouble. In addition, by contract or by custom, you have to set them free after seven years. With land being so attainable, few wish to stick around and earn a wage from you.

One day you hear that somewhere, not too far, new and different kinds of workers are available, workers who can labor long days in the heat and under the sun and who don’t succumb easily to disease. You take a trip to find out. The newcomers are chained together. They are a strange dark color, darker than any man you have seen, English, Irish, or Indian. Aside from this, they look really good as field hands go. They are muscular, youngish men in the flower of health. (They are all survivors of the terrible Atlantic passage and, before that, of some sort of long walk on the continent of Africa to the embarkation point at Goree, Senegal, or such. Only the strong and healthy survived such ordeals, as a rule.) There are a few women of the same hue with them, mostly also young.

Those people are from Africa, you are told. They are for outright sale. You gamble on buying two of them to find out more. You carry them to your farmstead and soon put them to work. After some confusion because they don’t understand any English, you and your other servants show them what to do. You are soon dazzled by their physical prowess. You calculate that one of them easily accomplishes the tasks of two of your indentured Irish apprentices. As soon as you can afford it, you go and buy three more Africans.

Soon, your neighbors are imitating you. All the dark-skinned servants are snapped up as fast as they are landed. Prices rise. Those people are costly but still well worth the investment because of their superior productivity. Farmers plant new labor-intensive, high-yield crops – such as cotton – that they would not have dared invest in with the old kind of labor. To make the new labor even more attractive, you and your neighbors quickly figure that it’s also capital, because it can be made to be self-reproducing. The black female servants can both work part of the time and make children who are themselves servants that belong to you by right. (This actually took some time to work out legally.)

Instrumental severity and cruelty

You are now becoming rich, amassing tools and utensils and more land. All is still not completely rosy on your plantation, though. One problem is that not all of your new African servants are docile. Some are warriors who were captured on the battlefield in Africa, and they are not resigned to their subjection. A few rebel or try to run away. Mostly they fail, but their doomed attempts become the stuff of legend among other black servants, thus feeding a chronic spirit of rebelliousness. Even in the second and third generation away from Africa, some black servants are born restive or sullen. And insubordination is contagious. At any rate, there are enough free white workers in your vicinity for some astute observers among your African servants to realize that they and their companions are treated comparatively badly, that a better fate is possible. Soon, there are even free black people around to whom they unavoidably compare themselves. (This fact deserves a full essay in its own right.)

To make a complex issue simple: severity is necessary to keep your workforce at work. Such severity sometimes involves brutal public punishment for repeat offenders, such as whippings. There is a widespread belief that mere severity undermines the usefulness of the workforce without snuffing out its rebelliousness. Downright cruelty is sometimes necessary, the more public, the better. Public punishment is useful to encourage more timid souls to keep toeing the line.

And then, there is the issue of escape. After the second generation, black slaves are relatively at home where they work. Your physical environment is also their home where some think they can fend for themselves. The wilderness is not very far. The slaves also know somehow that relatively close by are areas where slavery is prohibited or not actively enforced by authorities. It’s almost a mathematical certainty that at any time, some slaves, a few slaves, will attempt escape. Each escape is a serious economic matter because, aside from providing labor, each slave constitutes live capital. Most owners have only a few slaves. A single escape constitutes for them a significant form of impoverishment. Slaves have to be terrorized into not even wanting to escape.

Soon, it’s well understood that slaves are best kept in a state of more or less constant terror. It’s so well understood that local government will hang your expensive slave for rebellion whether you like it or not.

Inner contradiction

In brief, whatever their natural inclination, whatever their personal preference, slave owners have to be systematically cruel. And it’s helpful for them to also possess a reputation for cruelty. This reputation has to be maintained and reinforced periodically by sensationally brutal action. One big problem arises from such a policy of obligatory and vigilant viciousness: It’s in stark contradiction with both your religious and your political ideas, which proclaim that one must love others and that all humans are at least potentially equal (before God, if nowhere else). And if you don’t hold such beliefs deeply yourself, you live among people who do, or who profess to. And, by a strange twist of fate, the richest, best-educated, probably most influential strata of your society are also those most committed to those ideals. (They are the class that would eventually produce George Washington and Thomas Jefferson.)

The personal psychological tension between the actual and highly visible brutal treatment of black slaves and prevailing moral values is technically a form of dissonance. It’s also a social tension; it expresses itself collectively. Those actively involved in mistreating slaves are numerous. In vast regions of the English colonies, and later, of the United States, the contrast between action and beliefs is thus highly visible to everyone, obvious to many who are not themselves actively involved. It becomes increasingly difficult over time to dismiss slavery as a private economic affair because, more and more, political entities make laws actively supporting slavery. There are soon laws about sheltering fugitives, laws regulating the punishment of rebellious slaves, laws about slave marriage, and laws restricting the freeing of slaves (“manumission”). Slavery thus soon enters the public arena. There are even laws to control the behavior of free blacks, those who merely used to be slaves.

Race as legal status

Special rules governing free blacks constitute an important step because, for the first time, they replace legal status (“slave,” “chattel”) with race (dark skin, certain facial features, African ancestry). So, with the advent of legislation supporting slavery, an important symbolic boundary is crossed. The laws don’t concern only those defined by their legal condition of chattel property but also others, defined mostly or largely by their physical appearance and by their putative ancestry in Africa. At this point, every white subject, then every white citizen, has become a participant in a struggle that depends on frankly racial categories by virtue of his belonging to the polity. Soon the social racial category “white” comes to stand for the legal status “free person,” “non-slave.”

Then, at this juncture, potentially every white adult becomes a party to the enforcement of slavery. For almost all of them, this participation, however passive, is in stark contradiction with both religious and political values. But ordinary human beings can only live with so much personal duplicity. Some whites will reject black slavery, in part or in whole. Accordingly, it’s notable that abolitionists always existed and were vocal in their opposition to slavery in the English colonies, and then in the United States, even in the deepest South. Their numbers and visibility never flagged until the Civil War.

How to reduce tension between beliefs and deeds

There are three main paths out of this personal moral predicament. They offer different degrees of resistance. The first path is to renounce one’s beliefs, those that are in contradiction with the treatment of one’s slaves. A slave owner could adjust by becoming indifferent to the Christian message, or skeptical of democratic aspiration, or both. No belief in the fraternity of Man or in any sort of equality between persons? Problem solved. This may be relatively feasible for an individual alone. In this case, though, the individuals concerned, the slave owners and their slave drivers, exist within a social matrix that reinforces frequently, possibly daily, the dual religious command to treat others decently and the political view that all men are more or less equal. Churches, political organizations, charity concerns, and gentlemen’s clubs stand in the way. To renounce both sets of beliefs – however attractive this might be from an individual standpoint – would turn one into a social pariah. Aside from the personal unpleasantness of such a condition, it would surely have adverse economic repercussions.

The second way to free oneself from the tension associated with the contrast between humane beliefs, on the one hand, and harsh behavior, on the other, is simply to desist from the latter. Southern American chronicles show that a surprisingly large number of slave owners chose that path at any one time. Some tried more compassionate slave driving, with varying degrees of economic success. Others – who left major traces, for documentary reasons – took the more radical step of simply freeing some of their slaves when they could, or when it was convenient. Sometimes they freed all of their slaves, usually at their death, through their wills, for example. The freeing of slaves – manumission – was so common that the rising number of free blacks was perceived as a social problem in much of the South. Several states actually tried to eliminate the problem by passing legislation forbidding the practice.

Of course, the fact that so many engaged in such an uneconomic practice demonstrates in itself the validity of the idea that the incompatibility between moral convictions and slave driving behavior generated strong tensions. One should not take this evidence too far however because there may have been several reasons to free slaves, not all rooted in this tension. (I address this issue briefly in “Systemic Racism….”)

The easy way out

The third way to reduce the same tension, the most extreme and possibly the least costly, took two steps. Step one consisted in consciously recognizing this incompatibility; step two was to begin mentally separating the black slaves from humanity. This would work because all your bothersome beliefs – religious and political – applied explicitly to other human beings. The less human the objects of your bad treatment, the less that treatment contravened your beliefs. After all, while it may be good business to treat farm animals well, there is not much moral judgment involved there. In fact, not immediately, but not long after the first Africans landed in the English colonies of North America, there began a collective endeavor aimed at their conceptual de-humanization. It was emphatically a collective project, addressing ordinary people, including many who had no contact with black slaves or with free blacks. It involved the universities and intellectual milieus in general, with a vengeance (more on this later).

Some churches also lent a hand by placing the sanction of the Bible in the service of the general idea that God himself wanted slaves to submit absolutely to the authority of their masters. To begin with, there was always the story of Noah’s three sons. The disrespectful one, Ham, cursed by Noah, was said to be the father of the black race, on the thin ground that his name means something like “burnt.” However, it’s notable that the tension never disappeared, because other churches, even in the Deep South, continued their opposition to slavery on religious grounds. The Quakers, for example, seldom relented.

Their unusual appearance, and the fact that the white colonists could not initially understand their non-European languages (plural), were instrumental in the collective denial of full humanity to black slaves. In fact, the arriving slaves themselves often did not understand one another. From there, it is but one step to believing that they did not actually possess the power of speech. Later, as the proportion of America-born slaves increased, they developed what is known technically as a creole language to communicate with one another. It was recognizably a form of English but probably not understood by whites unless they tried hard. Most had few reasons to try at all. Language was not the only factor contributing to the ease with which whites, troubled by their ethical beliefs, denied full humanity to black slaves. Paradoxically, the degrading conditions in which the slaves were held must also have contributed to the impression of their sub-humanity.

Science enlisted

The effort to deny full humanity to people of African descent continued for two centuries. As the Enlightenment reached American shores, the focus shifted from Scriptures to Science (pseudo-science sometimes, but not always). Explorers’ first reports from sub-tropical Africa seemed to confirm the soundness of the view that black Africans were not completely human: There were no real cities there, little by way of written literature, no search for knowledge recognizable as science, seemingly no schools. What art-conscious visitors reported did not seem sufficiently realistic to count as art by 18th- and 19th-century standards. I think that no one really paid attention to the plentiful African artistic creativity – this unmixed expression of humanity if there ever was one – until the early 1900s. Instead, African art was dismissed as crude stammering in the service of inarticulate superstitions.

The effort to harness science in service of the proposition of African un-humanity easily outlasted the Civil War and even the emancipation of slaves in North America. After he published On the Origin of Species in 1859, Darwin spent much of the balance of his life – curiously allied with Christians – combating the widespread idea that there had been more than one creation of humanoids, possibly one for each race. The point most strongly argued by those holding this view was that Africans could not possibly be the brothers, or other close relatives, of the triumphant Anglo-Saxons. The viewpoint was not limited to the semi-educated by any means. The great naturalist Louis Agassiz himself believed that the races of men were pretty much species. In support, he presented the imaginary fact that the mating of different races – like mating between horses and donkeys – seldom produced fertile offspring. (All recounted in: Desmond, Adrian, and James Moore. 2009. Darwin’s Sacred Cause: How a Hatred of Slavery Shaped Darwin’s Views on Human Evolution. Houghton: New York.)

Differential persistence

Those three main roads to racism are unequal in their persistence. Dislike of strangers tends to disappear of its own accord. Either the frightening contact ceases or it is repeated. In the first case, the dislike turns irrelevant and accordingly becomes blurred. In the second case, repeated experience will often demonstrate that the strangers are not dangerous, and the negative feelings subside of their own accord. If the strangers turn out to be dangerous overall, it seems to me that negative feelings toward them do not constitute racism. This is so in spite of the fact that the negativity may occasionally be unfair to specific, individual strangers.

Racial prejudice anchored in atavistic fear of the night may persist in the depths of one’s mind, but it too does not survive experience well. Exposed to the fact that dark people are not especially threatening, many will let the link between darkness and fear or distaste subside in their minds. For this reason, it seems to me that the great American experiment in racial integration of the past sixty years was largely successful. Many more white Americans today personally know African Americans than was the case in 1960, for example. The black man whose desk is next to yours, the black woman who attends the same gym as you week after week, the black restaurant goers at your favored eating place, all lose their aura of dangerousness through habituation. Habituation works both ways, though. The continued over-representation of black men in violent crime must necessarily perpetuate, in the minds of all (including African Americans), the association between danger and a dark complexion.

The road to racism based on reducing the tension between behavior and beliefs via conceptual de-humanization of the victims has proved especially tenacious. Views of people of African descent, but also of other people of color, as less than fully human persist or re-emerge frequently because they have proved useful. This approach may have saved the important part of the American economy based on slavery until war freed the slaves, without removing the de-humanization. As many leftists claim (usually without evidence), this was important to the later fast development of the American economy, because cotton production in the South was at its highest in the years right preceding the Civil War. In the next phase, the view of black Americans as less than human served well to justify segregation for the next hundred years. It was thus instrumental in protecting poor whites from wage competition with even poorer African Americans.

In the second half of the 19th century and well into the 20th, the opinion that Africans – and other people of color – were not quite human also strengthened the European colonial enterprise in many places. (The de-humanization of colonial people was not inevitable though. The French justification of colonialism – “France’s civilizing mission” – is incompatible with this view. It treated the annexed people instead as immature, as infantile, rather than as subhuman.)

This third road to racism tends to last because it’s a collective response to a difficult situation that soon builds its own supporting institutions. For a long time, in America and in the West in general, it received some assistance from the new, post-religious ideology, science. Above all, it’s of continuing usefulness in a variety of situations. This explanation reverses the naive, unexamined explanation of much racism: that people act in cruel ways toward others who are unlike them because they are racist. It claims, rather, that they become racist in order to continue acting in cruel ways toward others, contrary to their own pre-existing beliefs that enjoin them to treat others with respect. If this perspective is correct, we should find that racism is the more widespread and the more tenacious, the more egalitarian and the more charitable the dominant culture where it emerges.

AOC doesn’t understand Christianity

I believe I wasted a lot of time some years ago arguing about whether Venezuela under Hugo Chávez was a democracy. The difficulty with this kind of conversation is that people can have very different views on what constitutes a “democracy.” That is part of the reason why North Korea can call itself a “democratic republic.” However, when somebody claims something about Christianity, and especially about what the Bible says, I feel more comfortable debating.

I understand that it is like flogging a decomposing horse, but some months ago Representative Alexandria Ocasio-Cortez supposedly called out the hypocrisy of religious conservatives who use their faith to justify bigotry and discrimination in the United States. Her speech can be watched here. I believe her point is this: conservative Christians only care about religion insofar as it supports their so-called “bigotry.” AOC believes that Christians should support socialism because, after all, that’s what Jesus would do.

AOC says some truths: sadly, the Christian Scriptures have been distorted many times over American history to defend political agendas they were never meant to defend. AOC could go even further on that if she wanted: the Christian Scriptures were completed almost two thousand years ago, and they simply don’t speak specifically to many of the political issues we have today. One can say they offer principles of conduct, but it’s really up to us to figure out how these apply to the concrete situations we find today. In this light, using Scripture to support political agendas can be not only morally wrong but also naive and misguided.

AOC is also right when she says that human life is special (although I would question if “holy” applies) and that we should “fight for the least of us”. All these statements and more are true!

What AOC really doesn’t seem to understand (and frankly, this is quite scary) is that Christianity can’t be forced upon people. Yes, biblically speaking, we are to care for the poor. However, the Bible is addressing us as individuals. There is absolutely nothing in the Bible that says we are to provide medical care, via government fiat, for those who cannot afford it. Actually, there is nothing in the Bible that says I can force people to act as Christians when they are not.

One of the great gifts of modernity is the separation of church and state. I would submit that this separation was present in Christianity from the start, but the concept was so radically different from everything people were used to that it took centuries to put into practice, and we are actually still working on it. One of the things we realized in modern times is that we can’t force people to be Christians via government. And in her speech, AOC is trying to undo that. She wants government to force people to do charitable work that can only be done if it is their choice.

As a Christian, I would say this: I would like to diminish suffering in this world, and that is exactly why I’m against the socialism AOC supports. One doesn’t have to be a genius to realize that the poor are much better off in countries that move furthest from what Ms. Ocasio-Cortez supports. It’s not simply a matter of wanting to help the poor, but of doing it in an efficacious way. And also: I want to invite people to look to the example of Jesus, who, being rich, made himself poor for the sake of many. I do hope that more and more people might have their lives changed by Jesus. But I don’t want to force anybody into that. I want to invite people to consider what Scripture says and to make their own choice to change their lives. As for now, I believe that capitalism is the most efficient way to help the poor that humanity has discovered so far.

What is “White Privilege”?

	A privilege is a benefit not generally available to the public. There are two sources of privilege: voluntary positions and governmental subsidies.

	Suppose a person is the chief officer of a company, and the building has one washroom for the staff and another reserved for the chief. The chief has the privilege of using the special washroom. It is a voluntary privilege, because the benefit is paid for from the company’s revenues, obtained in the market from willing buyers, rather than by forcing the public to pay for it.

	In contrast, if an enterprise obtains a subsidy from government, and the cost is imposed on the taxpayers, it is an unjust privilege.

	What kind is “white privilege”? I will examine this in the context of the United States of America. 

	Consider easy voting, in contrast to difficult voting. Some state governments make voting difficult for some minority groups, for example by limiting the number of ballot boxes or by imposing difficult requirements that are applied more strictly to minorities. But easy voting is not really a privilege, since voting is a right held by the public. Blocking voting is a deprivation of the right to vote. Thus voting by white folks is a right, not a privilege. The minorities suffer not a lack of privilege but the denial of a legal right.

	Now consider the case of members of a minority being mistreated through the abuse of power by the police. If a white person can go about his business with little chance of being shot by the police, whereas a harmless black person has a much greater chance of being shot at, this is not a white privilege. It is rights-deprivation, a deprivation of the right not to be assaulted.

	All of what is commonly called “white privilege” is not a direct subsidy for being white, and thus not a privilege, but the absence of rights-deprivation. When whites have on average ten times the wealth of blacks, this is not a white privilege; it is the result of the deprivation of economic rights suffered by blacks for 400 years. There is no governmental subsidy that whites obtain directly for being white. What is going on is rights-deprivation for minorities.

	There is, however, an indirect privilege obtained by whites relative to blacks. Land owners obtain an implicit  privilege from government in the form of rent. The public goods provided by government, such as streets, security, and education, make locations more attractive and more productive. The goods thus increase the demand to be located there, which generates greater land rent and land value. As land gets inherited, those who obtained this rent privilege in the past then pass it on to future generations. 

	European-Americans conquered the land from the native Indians, and then the labor value stolen from the slaves generated greater production and thus higher land rent generally. African-Americans shared little in this land value, because substantial amounts of land were stolen from black owners, and discrimination prevented many African Americans from obtaining real estate. 

	Thus the greatest white privilege is the governmental subsidy that generates land value to whites who inherited wealth or obtained land value to a much greater degree than blacks.

	The way to eliminate this white privilege is to share the ground rent equally, with a levy on most of the land value. Of course society should also stop the rights-deprivation that keeps minorities poor and deficient in political power. But equal rights will not stop the greatest privilege, the generation of rent from public goods, and the way to stop that is to shift all taxation to land value.

	The greatest rights-deprivation that is taking place today is the taxation of wages, because engaging in labor and enterprise is a right, not a privilege. A tax shift will therefore simultaneously abolish privilege and stop rights-deprivation.

Casey Peterson, Cultural Marxism, and the Goliath of the Diversity Industry

For the past several weeks, Casey Peterson, an electrical engineer at the prestigious Sandia Labs (one of the hubs of the federal military industry), has been risking his career to fight mandated ideological training that promotes the systemic racism conspiracy theory and requires white employees to exorcise their “whiteness.” Pushed by “diversity” commissars from equity/diversity departments, this reeducation campaign based on Critical Race Theory (Cultural Marxism) spreads like fire through our federal, state, and corporate institutions. Any objection to the mandated indoctrination is considered insubordination and draws disciplinary action. Many intimidated employees of the Labs secretly showed Peterson their support. But the “diversity” commissars retaliated, putting him on administrative leave and removing his security clearance. Peterson does not give up. Will he become an American Andrei Sakharov? That Soviet nuclear physicist put his career and elite privileges on the line to challenge the suffocating communist ideology in the 1970s-1980s; the Soviets retaliated by removing Sakharov from his job, stripping him of his awards, and putting him under house arrest.

Politically Incorrect Research: What Scholars Have to Say about the Diversity Propaganda Industry

Recent critical research on the diversity industry, conducted by Frank Dobbin and Alexandra Kalev (2016), American and Israeli sociologists, has confirmed existing concerns about the corrosive effects of mandating this industry. These scholars, who explored the mandatory diversity programs of 816 companies, came to the conclusion that command-and-control, quota-oriented diversity programs were counterproductive. Set up to reward, discipline, and punish managers and employees, these programs were in fact breeding fear, animosity, and distrust. The scholars also stressed that, by neglecting an individual-merit approach, such mandated diversity amplified gender, ethnic, and racial “tribalism.” The ultimate verdict Dobbin and Kalev issued was quite devastating for the whole multi-million-dollar diversity industry in the United States.

In particular, they stressed that, contrary to rosy mainstream perceptions, the American experience in enforcing diversity has failed miserably and cannot serve as a policy blueprint for other countries. The researchers also suggested that the best possible option in this situation would be to “decentralize” the whole diversity machine and let people on the ground decide for themselves how they want to reach its goals. My assumption is that in each university, corporation, school, and institution, people should be free to choose and vote (by secret ballot) on whether they want and need “diversity” training. From what we saw at Sandia Labs, the employees had no say about the reeducation campaign the corporate diversity commissars arbitrarily imposed on them.

Although wrapped in cautious academic prose, research conducted by a group of social psychologists headed by Leigh Wilton (Wilton 2018; Jacobs 2018; Good 2018) produced even more devastating conclusions, which in fact had been obvious to any critical-minded person. Targeting for the first time the entire ideology of multiculturalism, the Wilton research team set out to explore whether the promotion of “diversity” reduced or enhanced a fixation on race at the popular level. Studying two large groups of people (students and adult non-students), Wilton and her colleagues found that making people think constantly about racial and cultural differences hammered into their minds the idea that these differences were central, vital, and crucial. Obviously, to safeguard themselves, Wilton (2018) and her team included such disclaimers as “We do not mean to imply that multiculturalism should be universally discarded” and “Neither multiculturalism nor color blindness offers a simple panacea for improving diversity.” Still, they were adamant in their conclusion that, as an unintended consequence, engineering “diversity” from above enhanced racial essentialism and that “the primacy of Multiculturalism as a mechanism for prejudice reduction or racial inequality is not without question.” They also stressed that, in contrast to a color-blind approach that mutes the fixation on race, the whole “diversity” message amplifies group differences and may lead to negative inter-group outcomes.

One of the natural political side effects of the persistent cultivation of “non-White” identity, the attempt to impose it on the rest of society, aggressive rhetoric against “white privilege,” and the promotion of the systemic racism conspiracy theory was the emergence of the so-called alt-right White Power movement – a mirror image of the Black Power, Latino Power, and similar identity movements among people of “color.” The left writer Anis Shivani stressed that by inflaming and empowering the racial and ethnic identity of the “underprivileged,” the cultural left opened the identitarian Pandora’s box, which naturally leads to the legitimization of a “blood,” “soul,” and “soil” agenda in American politics. Shivani, who became upset about the identitarian turn of his comrades, stressed that under those circumstances it is quite natural that “the rise of each group in terms of recognition encourages countervailing reactions amongst other groups, so that recognition becomes simultaneously self-inflating (breeding reactionism and irrationality) and an impossible ideal to attain. Again, the rise of white nationalism recently is a testament to this tendency, a natural corollary to the very logic of identity politics.”

Intellectual Sources of the Diversity Industry

One of the major intellectual sources of the mandated “diversity” that has been superimposed on our society goes back to the frustration of the left with traditional class-based socialism, which had occupied the dominant position in the old intellectual mainstream. The old left privileged the industrial working class (the proletariat, in traditional Marxist jargon) as the primary victim of capitalism and simultaneously humankind’s redeemer from it. To the dismay of the left, Marx’s prophecy about the skyrocketing misery of the proletariat under capitalism failed miserably. On the contrary, Western labor dramatically improved its living conditions and lost its revolutionary vitality.

For this reason, in the 1960s and 1970s, the Western left gradually ditched the industrial working class, finding new kinds of “noble savages” in the Third World and at home among such groups as people of “color,” women, gays, and later the alphabet soup of newly emerging groups that also claimed victimhood status. Along with the Third World, these segments of the population were singled out as the new victims of, and simultaneously redeemers from, capitalist oppression. To be exact, since the 1960s, for the New Left the major culprit was not so much capitalism as the entire Western civilization. In contrast to the old left, who were fixated on material progress, the New Left came to criticize progress and materialism as spiritually corrosive to authentic and progressive lifestyles. This new attitude helped make the ideological switch from a class-based economic agenda to cultural issues.

Conservatives and libertarians have referred to that cultural turn among Western progressives as Cultural Marxism. The current mainstream left, who frequently are not aware of, or do not want to be reminded of, their genetic links with classical Marxism, object to the use of this term. Instead, they prefer to operate with such broad expressions as “Critical Theory” or with more specific labels such as “Critical Cultural Studies,” “Critical Race Studies,” “Critical Legal Studies,” and so forth. For the best critical review of Critical Theory, its rise, and the present-day state of the woke left, see Helen Pluckrose and James Lindsay, Cynical Theories: How Activist Scholarship Made Everything about Race, Gender, and Identity – and Why This Harms Everybody (2020). Critical Theory, which claims supreme knowledge, is notoriously uncritical toward itself; this brings to mind Vladimir Lenin, the chief of the Bolsheviks, who once uttered, “The Marxist doctrine is omnipotent because it is true.”

Since in the past the domestic people of “color” in Western countries and the Third World peoples were the objects of Euro-American racism and colonialism, progressive proponents of Critical Theory (Cultural Marxism) take it for granted that such things as bigotry, racism, and oppression are “white” Western phenomena. As designated victims, the emerging Third World nations and domestic people of “color,” along with sexual minorities, are thought to be on the righteous side, incapable of any wrongdoing. In other words, the cultural left created an “aristocracy of the outcasts.” This explains, for example, why the left frequently downplays the brutal treatment of women and gays in Islamic societies and so-called hate crimes (and crimes in general) perpetrated by representatives of the “victim” groups inside Western countries (for example, Muslim immigrants in France and Sweden or blacks in the United States). To the most ardent proponents of “diversity,” non-Western societies serve as carriers of profound spiritual wisdom and collectivism that can educate the “rotten” and “materialist” West about better forms of life.

The Rise of the Diversity Industry and the Multiculturalism Ideology

By the end of the 1970s, the American administrative and judicial system saw the emergence of “commissars of diversity” – a network of federal, state, and educational bureaucracies empowered by laws, institutions, and media outlets to police racial, ethnic, and gender representation in both the public and private sectors. The regime of racial segregation that had existed in the South prior to the 1950s offended American sensibilities to such an extent that both Congress and the “white” majority, driven by profound guilt feelings, voluntarily accepted special measures designed to correct historical injustice and uplift people of “color.” Little thought was given to the fact that fighting racism and sexism with racism and sexism was a flawed strategy, and that well-meant, benevolent measures do not necessarily produce benevolent outcomes.

The system of job, business-contract, and education quotas and preferences introduced in the 1970s through affirmative action programs was thought to be a set of temporary measures to “upgrade” selected minorities. Yet, as frequently happens, the temporary measures were institutionalized and eventually became a permanent part of the American polity, producing an overall corrupting effect on society. This not only led to the emergence of an alphabet list of new groups eager to claim victimhood status to secure moral, political, and economic benefits, but also resulted in mass economic and educational fraud. For example, thousands of dark-skinned immigrants began posing as “black” to fit into the officially established “ethno-racial pentagon” classification introduced by the Office of Management and Budget (OMB) in 1977 for policy goals.

This OMB Statistical Policy Directive No. 15 (“Race and Ethnic Standards for Federal Statistics and Administrative Reporting”) pigeonholed Americans into specific racial categories into which people were encouraged to fit themselves: white (WASPs), black (African-Americans), brown (Hispanics), yellow (Asians), and red (Native Americans). The official goal was to standardize available statistics in order to conduct affirmative action and other race-conscious policies efficiently. One can consider 1977, the year this directive was introduced, a symbolic landmark when “diversity” became the guiding light for the entire political and economic establishment in the US. Eventually, this ethno-racial “pentagon” became so entrenched in the American polity that it came to serve as the standard lens through which both Democratic and Republican elites screened their decisions on all kinds of economic and social issues.

At this point in our history, we can already talk about the existence of a mainstream multicultural ideology that crusades against Western values and is fixated on promoting group identity at the expense of the individual. This ideology uses the slogan of toleration to maintain itself as the hegemonic force (pardon my leftist jargon) in our society. Consequently, those who object to that ideology and call for treating people as individuals based on their merit are labeled racist and intolerant. This explains the reticence and fear, both in society at large and especially among bureaucrats, about questioning the dubious nature of the whole project. By the way, that was precisely the niche that Cultural Marxists from BLM were able to use to wiggle themselves into the mainstream and successfully intimidate a large part of American society into submission.

The “diversity” machine and the multicultural ideology created by that machine have by now acquired a life of their own. It is a vivid example of how seemingly benign initiatives, originally established to resolve a specific urgent problem, lead to unanticipated consequences. As such, the whole situation illustrates the old wisdom: the road to hell is paved with good intentions.

In addition to influential racial and ethnic lobby groups, this machine now includes a large apparatus in federal, state, university, and corporate institutions. For example, by 2018, the number of diversity bureaucrats at the University of California, Berkeley had grown to 175 people. Many of them draw high salaries. Thus, a diversity chief at the University of Michigan makes $385,000 a year (“The Rise of Universities’ Diversity Bureaucrats”). For this omnipotent bureaucracy, amplifying identity politics and dramatizing ethnic, racial, and gender issues became one of the major ways to stay in power and secure a continuing flow of finances from both government and private donors.

One can divide the institutions that promote the “diversity” creed in the United States into three large units. The first is represented by watchdog institutions – Human Resources (HR) and Office of Economic Opportunity (OEO) or equity departments – that gather statistics on how well major racial, ethnic, and gender groups are represented in all walks of life. HR and equity offices are weaponized institutions that not only collect relevant data and set codes of behavior but also police and penalize bureaucrats and individuals who do not comply with prescribed ideological regulations and imposed quotas (Jeb Kinnison, Death by HR, 2016). The HR and equity/OEO desks share the job of supervising personnel and their activities. Like HR, equity desks and offices exist in all American federal, state, and educational institutions, and in many corporate ones.

The second group of institutions is represented by various multicultural desks and offices that specialize in popularizing non-Western cultures and lifestyles by organizing, for example, various ethnic, racial, and gender festivals and fairs. These cultural events usually focus on the valorization of selected cultures and their representatives, which are frequently set in a context of victimhood, oppression, and resistance. My first introduction to such a festival was a visit, in Ohio in 1994, to a Latin American multicultural festival celebrating a generic Latino legacy. At the entrance, visitors were welcomed by a huge banner with the phrase, “Latin America: 400 Years of Resistance.” To this, my Puerto Rican colleague sarcastically remarked, “Why resistance? Resistance to what and against whom?” A small example of cultural activism supported by those desks is the campaign of morally shaming people for so-called cultural appropriation. For those not yet familiar with this most recent meme of the cultural left: any “white” person who publicly dons “non-Western” garb or attire (e.g., a Mexican sombrero, a Japanese kimono, Afro-American dreadlocks) automatically becomes a racist “colonizer” who “steals” and “appropriates” from the victims of “color.”

The third component of the multicultural “diversity” ideological machine is represented by various identity studies departments, such as Black, Hispanic, Native American, and Women’s Studies (Bruce Bawer, The Victims’ Revolution: The Rise of Identity Studies and the Closing of the Liberal Mind, 2012). Pioneered in the 1960s as special university-based programs expected to inject existing college curricula with non-Western and female perspectives, many of them eventually acquired not only the status of regular university departments but also turned into ideological units. These programs openly declare that their major goal is not traditional academic pursuit but rather activist scholarship. The latter heavily relies on the above-mentioned Critical Theory methods, which were pioneered by Herbert Marcuse and like-minded post-Marxist writers.

In other words, identity studies are focused on providing ideological backup to specific racial, ethnic, and gender agendas. The practitioners of identity studies are preoccupied with the critique of what they define as “white” Western civilization and hegemony. Simultaneously, they valorize non-Western cultures and lifestyles, which they define as progressive and spiritually enhancing. From the partisan “diversity” perspective, the cultivation of ethno-racial consciousness and solidarity for designated “non-White” and “non-Western” groups is progressive and desirable, whereas a color-blind, individualistic approach is treated as racist and reactionary.

Moreover, over the past fifty years, mainstream humanities and social science disciplines such as sociology, literary studies, American studies, geography, anthropology, social work, and especially education acquired the same ideological “diversity” bent that one finds in abundance in the identity studies. Social scholarship, too, heavily assimilated Critical Theory into its methodology and became fixated on searching for signs of racial, ethnic, and gender oppression, past and present, in all walks of surrounding life.

The threat to our liberty comes from the fact that the greater part of the cadre now working in our government, law firms, and corporate world consists of college graduates who internalized the memes and precepts propagated by Critical Theory scholarship and made them the new normal. Many of them are sincerely convinced that they must change the surrounding world according to the ideological prescriptions of “multiculturalism” by promoting group (racial, gender, ethnic) justice and arbitrarily dividing our society into the classes of the “oppressed” and the “oppressors.” The latter, according to Marcuse, one of the founders of Critical Theory, must be shut down and canceled by all means available. This means that the core values of Western civilization are now at stake: the rule of law, freedom of speech, checks and balances, and the very institution of elections.

On a final note, responding to the rising tide of mandated “diversity” reeducation programs, on September 4 the US Office of Management and Budget issued a memorandum to stop wasting tax dollars on race-baiting “training” that is based on the ideology of Critical Theory and focused on bashing “whiteness” and Western values. Of course, it is ridiculous to assume that one can simply ban an ideology; it will take years and years to dismantle the “diversity” industry and its ideological apparatus. Yet, as a first step, that measure is essential for our entire political and economic system. The current administration has sent a clear signal to the “deep state” bureaucrats, who are opportunistic by their very nature, that the woke “repressive tolerance” of the cultural left will not be tolerated anymore. If we push further in this direction, there is hope that we shall overcome.

Game theory in the wild

Game theory is an amazing way to simulate reality, and I strongly recommend that any business leader educate herself on its underlying concepts. However, I have found that game theory as constructed in economics and political science papers has limited connection to the real world–apart from nuclear weapons strategy, of course.

If you are not a mathematician or economist, you don’t really have time to assign exact payoffs to outcomes or calculate an optimal strategy. Instead, you can either guess, or you can use the framework of game theory–but none of the math–to make rapid decisions that cohere to its principles, and thus avoid being a sucker (at least some of the time).

As Yogi Berra didn’t say, “In theory, there is no difference between practice and theory. In practice, there is.” Speaking as a daily practitioner of game theory, here are some of the assumptions I literally had to throw out to make it actually work:

  • Established/certain boundaries on utility: Lots of games bound utility (often from 0 to 1, or -1 to 1, etc. for each individual). Throw away those games, as they preferenced easier math over representation of random, infinite realities, where the outcomes are always more uncertain and tend to be unbounded.
  • Equating participants: Similar to the above, most games have the same utility boundaries for all participants, when in reality it literally always varies. I honestly think that game theorists would model out the benefits of technology based on the assumption that a Sumerian peasant in 3000 BC and an American member of the service economy in 2020 can have equivalent utility. That is dumb.
  • Unchanging calculations: In part because of the uncertainty and asymmetries mentioned above, no exact representation of a game sticks around–instead, the equation constantly shifts as participants change, and utility boundaries move (up with new tech, down with new regs, etc). That is why the math is subordinate to structure: if you are right about the participants, the pathways, and have an OK gut estimate of the payoff magnitudes, you can decide rapidly and then shift your equation as the world changes.
  • Minimal feedback/second order effects: Some games have signal-response, but it is hard to abstract the fact that all decisions enter a complex milieu of interacting causes and effects where the direction arrow is hard to map. Since you can’t model them, just try to guess: what will the response to the game outcome be? Focus on feedback loops–they hold secrets to unbounded long-term utilities.
  • The game ends: Obviously, since games are abstractions, it makes sense to tie them up nicely in one set of inputs and then a final set of outputs. In reality, there is really only one game, and each little representation is a snapshot of life. That means that many games forget that the real goal of the game is to stay in it.
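
The last point–that the real goal is to stay in the game–is the one a short simulation clarifies best. Below is a minimal sketch (the bet sizes and odds are my own toy numbers, not from any game theory text) of a repeated bet whose per-round expected value is positive, yet whose typical player still shrinks toward ruin, because the ensemble average and the individual path diverge:

```python
import random

def simulate(players: int, rounds: int, up: float = 1.5, down: float = 0.6) -> list[float]:
    """Each player starts with $100; every round their wealth is multiplied
    by `up` or `down` with equal probability. The per-round expected
    multiplier is (1.5 + 0.6) / 2 = 1.05 > 1, so the ensemble looks great,
    but the typical path shrinks, since 0.5 * (log 1.5 + log 0.6) < 0."""
    random.seed(0)  # deterministic, for illustration only
    results = []
    for _ in range(players):
        wealth = 100.0
        for _ in range(rounds):
            wealth *= up if random.random() < 0.5 else down
        results.append(wealth)
    return results

outcomes = simulate(players=10_000, rounds=50)
mean = sum(outcomes) / len(outcomes)
median = sorted(outcomes)[len(outcomes) // 2]
busted = sum(w < 1.0 for w in outcomes)  # effectively out of the game
print(f"ensemble mean: {mean:,.0f}; typical (median) player: {median:.2f}; busted: {busted}")
```

The averaged payoff says take the bet; the median player ends up with pocket change, and a sizable fraction is effectively ruined. A bounded, solvable version of this game hides exactly that gap, which is why “stay in the game” beats “maximize expected value” as a working rule.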

These examples–good rules of thumb for practitioners, certain to draw quibbles from any academic reader–remind me of how wrong even the history of game theory is. As with many oversights by historians of science, the attribution of game theory’s invention credits the first theoretician (John von Neumann, who was smart enough to both practice and theorize), not the first practitioner (probably lost to history–but certainly active by the 1600s, since Pascal’s Wager lines up well with “game theory in the wild”: Pascal used infinite payoffs and actually did become religious). Practitioners, I would ignore the conventional history, the theory, the actual math, and the long papers. Focus on easily used principles and heuristics that capture uncertainty, unboundedness, and asymmetry. Some examples:

  • Principle: Prediction is hard. Don’t do it if you can help it.
  • Heuristic: Bounded vs. Unbounded. Magnitude is easier to measure (or at least cap) than likelihood is.

  • Principle: Every variable introduces more complexity and uncertainty.
  • Heuristic: Make decisions for one really good reason. If your best reason is not enough, don’t depend on accumulation.

  • Principle: One-time experiments don’t optimize.
  • Heuristic: If you actually want to find useful methods, iterate.

  • Principle: Anything that matters (power, utility, etc.) tends to be unequally distributed.
  • Heuristic: Ignore the middle. Either make one very rich person very happy (preferred) or make most people at least a little happier. Or pull a barbell strategy if you can.

  • The Academic Certainty Principle: Mere observation of reality by academics inevitably means they don’t get it. (Actually a riff on observer effects, not Heisenberg, but the name is catchier this way.)
  • Heuristic: In game theory as in all academic ideas, if you think an academic stumbled upon a good practice, try it–but assume you will need trial and error to get it right.

  • Principle: Since any action has costs, ‘infinite’ payoffs, in reality, come from dividing by zero.
  • The via negativa: Your base assumption should be inaction, followed by action to eliminate cost. Be very skeptical of “why not” arguments.
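
The barbell heuristic above can be made concrete with a toy portfolio (all numbers invented for illustration): keep most capital safe, expose a sliver to a big upside, and the worst case is capped by construction rather than by a likelihood estimate:

```python
import random

def barbell(trials: int, safe_share: float = 0.90) -> list[float]:
    """Toy barbell: `safe_share` of capital sits in a safe asset that just
    holds its value; the rest goes into a long shot that usually goes to
    zero but pays 50x with 5% probability. The floor is the safe share,
    no matter how badly we misjudge the long shot's odds."""
    random.seed(1)  # deterministic, for illustration only
    risky_share = 1.0 - safe_share
    results = []
    for _ in range(trials):
        long_shot = 50.0 if random.random() < 0.05 else 0.0
        results.append(safe_share + risky_share * long_shot)
    return results

outcomes = barbell(10_000)
print(f"floor: {min(outcomes):.2f}, best: {max(outcomes):.2f}")
```

Here magnitude (the floor) is engineered, while likelihood (the 5%) can be wildly wrong without ruining you–which is the point of “magnitude is easier to measure (or at least cap) than likelihood.”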

So, in summary, most specific game theories are broken because they preference math (finite, tidy, linear) over practice (interconnected, guess-based, asymmetric). That does not mean you can’t use game theory in the wild; it just means you should focus on structure over math, unbounded/infinite payoffs over solvable games, feedback loops over causal arrows, inaction over action, extremes over moderates, and rules of thumb over quibbles.

Good luck!

Why the US is behind in FinTech, in two charts

The US is frankly terrible at innovation in banking. When Kenya and its neighbors enjoy faster adoption of mobile banking than we do–as they have since at least 2012–it is time to reconsider our approach.

Here is the problem: we made new ideas in banking de facto illegal. Especially since the 2008 financial crisis, regulatory bodies (especially the CFPB) have piled on a huge amount of potential liability that scares away any new entrant. Don’t believe me? Let’s look at the data:

[Chart: new bank creation in the US, by year]

Notice anything about new bank creation in the US after 2008?

A possible explanation, in a “helpful resource” provided to banking regulators and lawyers for banks:

[Chart: the US banking regulatory landscape]

The chart shows eight federal agencies reporting to the FSOC, plus further independent regulators relevant to fintech (OFAC/FinCEN). State regulation appears only as an addendum in a circle…probably because mapping it would take 50 more, possibly complex and contradictory, charts.

So, my fellow citizens, don’t innovate in banking. No one else is, and they are probably right not to.

The Blind Entrepreneur

Entrepreneurs usually make decisions with incomplete information, in disciplines where we lack expertise, and where time is vital. How, then, can we be expected to make decisions that lead to our success, and how can other people judge our startups’ potential value? And even if there are heuristics for startup value, how can they cross fields?

The answer, to me, comes from a generalizable system for improvement and growth that has proven itself: the blind watchmaker of evolution. In this system, genes promulgate themselves not by predicting their environments, but by promiscuity and opportunism in a random, dog-eat-dog world. By this, I mean that successful genes free-ride on or resonate with other genes that promote reproductive success (promiscuity) and select winning strategies by experimenting in the environment and letting reality determine which gene-pairings to try more often (opportunism). Strategies that are robust or antifragile usually outperform fragile and deleterious strategies, and strategies that exist within an evolutionary framework enabling rapid testing, learning, mixing, and sharing (such as sexual reproduction or lateral gene transfer paired with fast generations) outperform those that do not (such as cloning), as the Red Queen hypothesis shows.

OK, so startups are survival/reproductive vehicles and startup traits/methods are genes (or memes, in the Selfish Gene paradigm). With analogies, we should throw out what is different and keep what is useful, so what do we need from evolution?

First, one quick note: we can’t borrow the payout calculator exactly. Reproductive success means a gene makes more of itself, but startups don’t make more of themselves. For startups, the best metric is probably money. Other than that, what adaptations are best to adopt? Or, in the evolutionary frame, what memes should we imbue in our survival vehicles?

Traits to borrow:

  • Short lives: long generations mean the time between trial and error is too long. Short projects, short-term goals, and concrete exits.
  • Laziness: energy efficiency is far more important than #5 on your priority list.
  • Optionality: when all things are equal, more choices = more chances at success.
  • Evolutionarily Stable Strategies: also called “don’t be a sucker.”
  • React, don’t plan: prediction is difficult or even impossible, but being quick to jump into the breach has the same outcome. Could also be called “prepare, but don’t predict.”
  • Small and many: big investments take a lot of energy and effectively become walking targets. Make small and many bets on try-outs and then feed those that get traction. Note– this is also how to run a military!
  • Auftragstaktik: it should be obvious that central planning never works. Entrepreneurs should probably not make any more decisions than they have to.
  • Resonance: I used to call this “endogenous positive feedback loops,” but that doesn’t roll off the tongue. In short, pick traits that make your other traits more powerful–and even better if all of your central traits magnify your other actions.
  • Taking is better than inventing: It’s not a better startup if it’s all yours. It’s a better startup if you ruthlessly pick the best idea.
  • Pareto distributions (or really, power laws): Most things don’t really matter. Things that matter, matter a lot.
  • Finite downside, infinite upside: Taleb calls this “convexity”. Whenever presented with a choice that has one finite and one infinite potential, forget about predicting what will happen– focus on the impact’s upper bound in both directions. It goes without saying– avoid infinite downsides!
  • Don’t fall behind (debt): The economy is a Red Queen; anyone carrying anything heavy will continually fall behind. Debt is also the most likely way companies die.
  • Pay it forward to your future self: squirrels bury nuts; you should build generic resources as well.
  • Don’t change things: Intervening takes energy and hurts diversity.
  • Survive: You can’t win if you’re not in the game. More important than being successful is being not-dead.

When following these guidelines, note two other differences between entrepreneurs and genes. One, genes largely exist in an amoral state, whereas your business is vital to your own life and, if you picked a worthwhile idea, to society. Two, unlike evolution, you actually have goals and are trying to achieve something beyond replication, beyond even money. Therefore, you do not need to take your values from evolution. However, if you ignore its lessons, you close your eyes to reality and are truly blind.

Our “blind” entrepreneur, then, can still pick goals and construct what she sees as her utility. But to achieve the highest utility, once defined, she will create unknowable and unpredictable risk of her idea’s demise if she does not learn to grow the way that the blind watchmaker does.
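The “small and many” trait above can be sketched with a quick simulation. This is a minimal illustration, not data: the 5% hit rate and the bet counts are assumptions I chose for the example. The point is only that splitting a budget into many independent power-law-style try-outs dramatically raises the odds that at least one gets traction–and that one is the bet you then feed.

```python
import random

random.seed(1)

P_HIT = 0.05  # assumed chance that any single try-out gets traction

def chance_of_a_winner(n_bets, trials=100_000):
    """Monte Carlo estimate of P(at least one of n_bets hits)."""
    hits = sum(
        any(random.random() < P_HIT for _ in range(n_bets))
        for _ in range(trials)
    )
    return hits / trials

# One big bet: you are usually dead.
print(chance_of_a_winner(1))    # ~0.05
# Twenty small bets: better-than-even odds of having something to feed.
print(chance_of_a_winner(20))   # ~0.64, i.e. roughly 1 - 0.95**20
```

The same logic is why short generations matter: each round of small bets is a cheap query against reality, and reality, not prediction, picks the survivors.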

In memory of Gerald Gaus (1952-2020)

I was saddened to hear that Gerald Gaus, the world-renowned liberal philosopher, died yesterday. Gaus was a critical developer of a public reason approach to classical liberalism, and powerful exponent of the interdisciplinary research agenda of Philosophy, Politics and Economics. While we met in person only occasionally, he was a significant influence on my approach to understanding the liberal tradition.

His perspective was deeply pluralist. One observation that really struck me from The Order of Public Reason (and that I still grapple with today) was that a society could function more effectively (in fact, might only function at all) when citizens have a range of moral attitudes towards things like rule-following, and especially eagerness to punish rule-breakers. For society to progress, you may need both conservative-inclined individuals to enforce moral norms and liberal-minded people to challenge them when circumstances prompt reform.

He applied this idea of strength through moral diversity to the political system too. On Gaus’s account, one of the strengths of liberal democracy is its ability to shift from conservative to liberal, and left to right, through competitive elections. Social progress cannot follow a straight and obvious path but requires, at different moments, experimentation, innovation, reversal and consolidation. Democracy helps select the dominant mode from a diversity of perspectives.

This depth of pluralism is counter-intuitive within the discipline of normative political theory, which increasingly treats only a narrow set of ideological commitments as acceptable and rejects even fairly minor variations in social morality as possessing little or no value. Indeed, the last time I saw Gaus was early this year, when he gave an evening talk at the Britain and Ireland Association for Political Thought conference. He presented a model for seeking political compromises among very different moral ideals. His commitment to treating the whole political spectrum as worthy of engagement drew a few heckles. The prospect of engaging with Trump supporters, for example, evidently nauseated some of the audience. Gaus was the very model of the liberal interlocutor, ignoring the hostility, and responding with grace, civility, and ideas for going forward productively.

His approach to scholarship and discussion embodied his commitment to liberal toleration and the fusion of ethical horizons. That’s how he will be remembered.

The Dunning-Kruger Effect on stature

With the collapse of a false sense of stature comes a disintegration of perceptions of personal dignity. The Dunning-Kruger Effect says that a person’s incompetence prevents him or her from recognizing his own (and others’) incompetence. Building off this concept, I hypothesize that many of the social tensions we are experiencing today result from a type of Dunning-Kruger Effect wherein those who are incapable eventually become aware of their inability or unsuitability.

In 2010, Dr. David Dunning, now retired from Cornell University and one half of the Dunning-Kruger name, gave an interview to the NYT on the eponymous Effect. The genesis of Dunning’s research came when he read about a bank robber who made no attempt to conceal his face, resulting in his being apprehended in less than a day. During his interrogation, the man revealed that he had covered his face with lemon juice, having developed the notion that lemon juice would make “him invisible to the cameras.” He’d even tested the concept beforehand by taking a picture of himself after putting lemon juice on his face. He wasn’t in the picture (Dunning suggested that perhaps the camera was pointed in the wrong direction) and concluded that the idea was sound. As Dunning put it, “he was too stupid to be a bank robber, but too stupid to know he was too stupid to be a bank robber.” As the story of the robber illustrates, though, there always comes a moment of reckoning, when the reward of stupidity, unintentional or willful, is paid.

When I was a recent music grad, I obtained a position as a fellow with a small liberal arts college orchestra. Ostensibly, the school was one of the best of its type, but its weaknesses became apparent fairly quickly. During one rehearsal, what began as a discussion of the minuet form evolved into the conductor having to introduce the students to Jane Austen. The students all assured the conductor that they “had taken” Literature 101 and 102. As it turned out, only one of the students had read Jane Austen, and the girl had done so on her own time as Austen was no longer taught in the curriculum.

The conductor queried the students as to why they thought they could play music while remaining ignorant of its broader cultural setting. The students became surly in response, retorting that “she isn’t on the reading list,” as if such a statement were all the justification needed. The dynamic in the ensemble was already rocky: two weeks earlier, all but two students had skipped rehearsal to attend a varsity soccer match. The conductor had chastised them for their unprofessionalism, pointing out that missing rehearsals is cause for termination in professional ensembles. In their defense, the students cited college regulations stating that all sports events took precedence over other commitments. In other words, the conductor could not discipline skippers if they had skipped to attend a sports event.

At the end of that year, the conductor resigned, and I followed one term later. I made the decision to leave after two students for whom I was responsible said in front of me, “at least this one [the new conductor] respects us. [The old one] always treated us like we were stupid, didn’t know things.” As someone who was present, I can testify that the conductor was a model of patience and positive leadership. The students truly didn’t possess the basic knowledge it is reasonable to expect of third-years at a supposedly good private liberal arts college. And they had no interest in remedying their deficiencies. They saw any situation where their ignorance displayed itself as a “gotcha” setup.

Technically these students didn’t fit the Dunning-Kruger pattern as they possessed enough knowledge to know they didn’t know. There is, perhaps, a similarity to the bank robber’s lemon juice in that the students made a blanket assumption that completing college-assigned reading was sufficient to turn them into literate people. Where this notion originated is beyond me, as it is common knowledge that extracurricular reading is a vital component for success at elite institutions. Just like the robber’s, the students’ ignorance was appalling – and much less amusing.

For our non-American readers, it is a conceit of small private liberal arts colleges that they are educating/raising (this is key) the leaders of the future. There is some justification for this view since these colleges can provide a door into better institutions. It is, however, rare to find a national-level politician, industry leader, or public figure who hasn’t at least finished his or her education at one of the über-competitive, big name schools or universities.

The stars of this particular anecdote were convinced that they were destined for great things. A challenge to their knowledge base was an assault on their identities, and therefore their sense of stature. Granted, their unprofessional behavior had already cost them their dignity, but they didn’t know that. The sense that dignity might be a distinction to be earned and not granted through entitlement escaped them. In a modification of the Dunning-Kruger Effect, the students had no dignity because of their ignorance and unprofessionalism but they were too stupid to know that they had forfeited their chance to be respected. 

The Non-Partisan Movement We Need: Anti-Authoritarianism

Political/ideological debates have a lot of moving parts, and there are a lot of timely issues to address. Given the marginal impact of anything we do in this sphere (e.g. voting, sharing a blog post on Twitter, or being a solitary voter in a vast sea of the entire 6200 people in this country), it’s only natural that we economize on information and argument, and that economizing has consequences. We can’t help but deplete the intellectual commons.

What are some low-cost ways to improve the quality of the commons?

  1. Value Intellectual humility.
  2. Devalue the sort of behavior that makes things worse.

It bears repeating: value intellectual humility. It’s not easy. I’m as drawn to confident claims as you are. I’ve got a lot of smart people in my bubble, and when they boldly declare something, I tend to believe them. But the “I honestly don’t know” posts deserve more attention and are less likely to get it. Let’s adjust in that direction. I’ll try to write more about things I don’t know about in the future (although I don’t know what that’s going to look like).

It’s a statistical impossibility that, of all the people burned at the stake for heresy or witchcraft or whatever, nobody deserved some punishment, however unfair the process. Don’t get me wrong, witch hunts are a bad thing in general, but we can’t discount them as entirely (maybe just 99.9%) unjustified. But cancel culture, like good old-fashioned witch hunts, is doing a lot of harm to the intellectual commons. I’m sure they catch more bad guys than 17th-century Puritans did, but let’s not leave cancellations up to Twitter mobs. Particularly when it comes to cancelling ideas.

Bad ideas don’t need to be cancelled. They need to be crushed under good ideas.

Far be it from me to peddle unreplicated psychological research (confirmation bias alert!), but I tend to believe that there’s something to the claim that the extreme poles of the ideological landscape exhibit some unsettling traits: narrow-mindedness, authoritarianism, and apparently Machiavellianism, narcissism, and psychopathy.

“Narcissistic psychopath” is not a label I’d like to see bandied about because it’s just too close to ad hominem. But “authoritarian” is a term I’d like to see more widely used as a pejorative, regardless of the position taken by would-be authoritarians.

Let’s quit with the shouting, cancelling, flag waving, and blindly taking reactionary positions. Invite debate, and invite holding people accountable. But letting Twitter be the last word is as absurd as letting Helen Lovejoy-esque moral scolding decide how things should be.

But then again, maybe I’m wrong.

Real Decision Rights Theory and Political Coalitions

Libertarians understand these two big ideas:

  1. A system of individual rights can allow widespread cooperation and human flourishing.
  2. The world is full of emergent orders, like markets, with aggregate outcomes that are more than the sum of their parts.

But commitment to the first idea often blinds us to the full implications of the second.

Complex adaptive systems involve an infinity of illegible signals involving cooperation and competition in networks so complex that it would be impossible to replicate their success in any conceivable top-down system. The market is a discovery procedure. But the “it” that is the market is a collective thing. It’s a jointly produced phenomenon and it’s impossible to split it up without fundamentally changing it.

Likewise, a system of rights (including the rights underlying a functioning market) is a jointly produced common good.

Why does it mean anything to say that I own my laptop? Because when push comes to shove (if I’m willing to shove hard enough), other members of my community are willing to act in ways (formal and informal) that enforce my property right. (Interesting aside: If I reported my laptop stolen to the local police, they wouldn’t do anything about it. Perhaps this reflects the median voter’s level of regard for other people’s property rights…)

Ownership is not as simple as “I own this piece of property, period.” Instead, to own something is to have some bundle of rights to make particular decisions. I can decide what to plant in my garden, but I can’t decide to build a nuclear reactor in my front yard. I don’t need to go through some elaborate chain of natural rights reasoning to argue that your negative right to avoid externalities supersedes my positive right to do a thing. Doing so might be a useful exercise to see how (in)consistent our ruleset is. But the real system is much simpler (and much more ad hoc). Rights are as rights are enforced.

What am I driving at here? First, that we should deal with property decision rights as they are more than as we think they ought to be. Second, individual rights require collective support. This puts constraints on how we move towards our Utopias.

Debating/convincing our intellectual opponents is necessary, but it’s really just a negotiation tactic. Discounting idiotic opponents is reasonable in the intellectual sphere, but we can’t just overlook the fact that those opponents are part of the environment we’re trying to shape. We don’t necessarily have to throw them a bone, but when we don’t make some group part of our coalition, we have to expect someone else will.

Our normative theories will convince us that group A can’t make group B’s lives worse for the sake of A’s ego. But if A perceives the subjective value of that ego boost to be high enough, and if A has the relevant rights, then B had better look out.

Improving the world isn’t simply a matter of making the right arguments well. We have to be entrepreneurial, and keep an eye out for how others might do the same. Political entrepreneurship means looking for the under-priced voters, which is exactly what Trump did in 2016. He found a group A full of low-status voters who had been discounted by the political establishment. And because their rights to shape the collective outcome went unexercised for so long, it was that much more disruptive when they were finally brought to the table. Likewise, BLM protests reveal that there is a group B that is ready to throw its weight around.

That leaves a big pile of questions. What is the cost of pride? How can we ensure people have enough dignity that they won’t want to destroy a functioning (if imperfect) society? How do we account for potential political energy (particularly when we remember that voting is only a tiny part of political participation)?

I don’t know the answers, but I know this: we can’t escape getting our hands dirty and engaging in some political exchange. I don’t like it, but I’m not the only one deciding.

Slate Star Codex and the rationalists

Rick first alerted me to the end of the popular rationalist blog Slate Star Codex. Then it was all over my internet. I have never been a huge fan of the rationalist community, mostly because they don’t do history very well, but this is a big deal.

It has also produced some great conversation on both sides of the American cultural divide. Gideon Lewis-Kraus wrote an excellent meta-piece on the whole affair. Lewis-Kraus uses “Silicon Valley” as shorthand for the intellectual right. This is more correct than wrong, even though the region votes Democrat, because Silicon Valley is more of a mindset than a geographic place.

Lewis-Kraus’s Silicon Valley is a new, decentralized informational ecology. He contrasts Silicon Valley with the old media: big corporations trying to maintain a stranglehold on “the narrative.” (Lewis-Kraus readily admits he’s part of the old media.) For Lewis-Kraus, Silicon Valley is trying to build an alternative mediascape. Big corporations such as the NY Times are fighting back.

It’s an interesting cultural war to follow, if you’re into that kind of stuff. I can’t seem to shake my uneasiness about the rationalist community, though. As I mentioned, they don’t do history, or they don’t do it well. They are also into communes, which I distrust immensely. Utopian and communitarian experiments are bad for all of your healths (physical, emotional, etc.). I don’t know how the rationalists ended up on the side of Silicon Valley. My guess is that the big corporations didn’t like what the rationalists had to say, or how they lived, so the rationalists found solace in the decentralized ecology of Silicon Valley.

I think the verdict is still out on who the victor of this cultural war will be. The big corporations have government backing, and they own the narrative bought by most of the American public, but the old media has shown its true colors in how it covers Donald Trump. I didn’t vote for the guy, but it’s obvious his administration is not being objectively reported on by the old media; it’s being slandered and attacked, with lies or with small untruths. The rationalists and their decentralized allies in the Silicon Valley informational ecology at least have truth on their side. Not the truth, but a commitment to truth by way of discussion, the sharing of information, and fighting to protect the freedom of everybody’s conscience, rather than just their own team’s.

We live in interesting times, and this makes blogging – a decentralized activity if there ever was one – all the more important.

An aspirational paradox

In general, contemporary society disparages people extremely focused on their careers, labelling them “careerists.” No real objections are presented; instead there is simply a vague, simmering contempt. When an anti-careerist does articulate an objection, it is usually couched as a social justice problem: careerists cause unfair societies by getting ahead of everyone else. Yet when practiced appropriately, i.e. as looking after one’s own best interests, “careerist” mentalities and behaviors, such as discipline, planning, and diligence, are necessary for prosperity, both personal and societal.

During the early stages, the opposite of careerism isn’t drifting: it’s aspirational behavior. An illustrative anecdote: a pre-med major in my year failed part of her science requirements repeatedly, mostly due to partying. Despite her inadequate academic record in her major field, she applied to Harvard Medical School in her final year. Unsurprisingly, she wasn’t accepted. As a mutual acquaintance put it: “one’s going up against people who’ve been working to get into Harvard Medical since middle school; they [Harvard] don’t need someone like her.” Although the result was only to be expected, it came as a complete shock to the girl and her parents, since they had all believed she was destined to attend Harvard Med. To listen to the former pre-med student talk today, she was denied Harvard Med, as if some external force had robbed her of the chance.

As a quick explanation to non-American readers, because the US system requires that students take courses outside of their major field, there is a high tolerance for poor marks in general education requirements; the trade-off is that one is expected to earn reasonably high marks in one’s own field in order to advance to the next level. For an institution such as Harvard Med, that a pre-med student earned anything below average marks in a science course would be unacceptable, unless there was a very good reason clarified in the application statement. A good reason would be a family tragedy or some life event beyond one’s control, but not partying.

The plain reality is that my friend was right: Harvard Med receives applications from candidates who have shown single-minded focus in pursuit of a goal since age twelve. In comparison, a twenty-three-year-old woman whose transcript screams “unfocused” is not a prize. Even the act of applying to Harvard would count against her, since assessors would conclude that she hadn’t bothered to read the guidance and fit sections, i.e. the pages where expected grades and MCAT scores are listed, on Harvard Med’s website.

The case of Fisher v. University of Texas, heard twice by the Supreme Court (2013 and 2016), is an example of the dichotomy between aspiration and careerism. Abigail Fisher applied to the University of Texas – Austin but was turned away, as her grades, though above average for her public school, were below the university’s automatic admission standards. The crux of her suit was that UT – Austin both failed to take into account her extracurriculars and replaced her with an affirmative action candidate. Eventually, the Supreme Court ruled – both times – that although UT – Austin ought to have been more transparent in its admissions process, there was no evidence that the university had discriminated against Fisher.

The aspirational part of this story was that Fisher’s extracurriculars – Habitat for Humanity, a local youth orchestra without holding a principal position, and miscellaneous charitable activities – were not genuine distinctions in a candidate pool such as that commanded by UT – Austin. Based on my own experience in youth orchestras from middle school until college – counting my first training ensembles, I’ve been in orchestras from the age of six up – should an applicant wish to use music involvement as proof of merit, youth ensemble participation is a baseline expectation. Unless one was a section leader, youth orchestra membership is not a sign of exceptionality. According to Habitat for Humanity’s North America page, the annual number of volunteers is around two million. Volunteering with the charity is generous and worthy, but doing so does not make the volunteer stand out in a candidate pool.

While society can discuss endlessly the merits and demerits of affirmative action, the Fisher case indicates that the policy has taken on a role no one could have predicted: scapegoat. The policy has become an escape hatch for aspirationalists seeking to avoid facing their own inadequacies and lack of proper preparation or career focus. Instead of blunt conversations about the real reasons a person didn’t qualify for a desirable professional school or first-choice university, aspirational individuals can offload blame onto the policy. While one can hardly blame the policy for being a scapegoat, one must acknowledge that such use has potential to be very damaging to the social fabric.

The View from New Delhi: China’s post-pandemic belligerence

Introduction

In the aftermath of the Covid-19 pandemic, the increasingly belligerent behaviour exhibited by China in South Asia and South East Asia, and China’s imposition of the National Security Law in Hong Kong, it is interesting to observe the tone of China’s English-language media.

A genuinely comprehensive view of Chinese thinking on crucial political, economic, and geopolitical issues would, of course, require perusing the Chinese-language papers. Still, the Global Times, a mouthpiece of the Communist Party, is important because it carries the views of Chinese academics and strategic analysts who, through their opinion pieces, provide deep insight into China’s approach to those issues.

From the opinion pieces in the Global Times over the past few months, one thing is evident: in the paper’s telling, with the US becoming increasingly unpredictable under Trump, China is virtually invincible. There is a growing belief that Beijing is formidable in both the economic and the strategic context. Strategic analysts and journalists writing for the English-language daily have also tried to drive home the point that Beijing is in a position to take on the US and its allies, and that any attempt to isolate China would not be taken lying down.

Other articles in the Global Times warn against anti-China alliances, and explain why these alliances will not be possible due to the fault lines between the US and other countries. It has also not refrained from using strong language against countries like Australia and Canada by insinuating that they are acting as mere appendages of the US.

Aggressive stance vis-à-vis countries which blamed China for lack of transparency with regard to the outbreak of the pandemic

Beijing has been scathing in its criticism not only of the US, which took a firm stand against China in regards to the suppression of crucial information pertaining to the pandemic, but also Australia, which had the temerity to ask for an enquiry into the origins of the deadly pandemic. The Global Times lashed out and labelled Australia as a mere appendage of the US, even dubbing it a ‘poodle’ and ‘dog of the US’.

It has also warned other countries, especially Australia, of the economic consequences of taking on Beijing. An article titled ‘Australia’s economy cannot withstand Cold War with China’, written by Wang Jiamei, concludes by saying:

‘…If a new Cold War leads to a China-Australia showdown, Australia will pay an unbearable price. Given Australia’s high dependence on the Chinese economy, an all-around confrontation will have a catastrophic effect on the Australian economy.’

China has followed this harsh rhetoric with sanctions on imports of certain Australian commodities, like barley, and has suspended imports of beef. China has also issued warnings asking students and tourists to reconsider travelling to Australia.

This was done days after China’s envoy in Australia, Cheng Jingye, in an interview to an Australian media outlet, had warned of strong economic repercussions (the envoy was referring not just to the impact on Australia-China trade, but on Chinese students pursuing education in Australia and tourists visiting Australia) if Australia continued to adopt a strong stance against China on the issue of an enquiry into the origins of the Covid-19 pandemic (Australia reacted very strongly to this threat).

Beijing unsettled by emerging alliances?

One interesting point is that while commentaries and reportage in the Global Times try to send out the message that China’s rise is inexorable and that Beijing is not daunted by emerging alliances and emerging narratives of reducing economic dependence upon China, it seems to be wary of partnerships and alliances which seek to challenge it. The newspaper repeatedly warns India, the UK, Australia, and various EU member states about the perils of strengthening ties with the US. Even in the midst of recent tensions between India and China, Global Times tried to argue that India would never openly ally with the US and if it did so, this would be damaging. An article in the Global Times states:

‘It won’t be in the interest of India if it really joins the Five Eyes intelligence alliance. The role of a little brother of the US within a certain alliance is not what India really wants.’

The article also dissects differences between the US and India over a number of issues. These differences are real, but the piece overlooks the fact that the two countries are largely aligned on strategic and economic issues.

Strong language against Canada

It is not just the US, Japan, Australia, EU member states, and India that the English-language daily has recently threatened. The Global Times has also adopted an aggressive posture vis-à-vis Canada. One article, titled ‘China-Canada ties wane further as Ottawa becomes Washington’s puppet over HK’, suggests that Justin Trudeau was in the ‘pole position in the circle of bootlickers pleasing the US’ and castigates him for the measures he has taken since China tightened its control over Hong Kong via the imposition of the National Security Law. Those measures include the suspension of Canada’s extradition treaty with Hong Kong and a decision to end the export of sensitive military items to the region.

Cracks in the bilateral relationship had begun to emerge after Canada detained the CFO of Huawei, Meng Wanzhou, on a US extradition warrant (at the end of May, a Canadian court ruled that Meng could be extradited to the US, much to the chagrin of the Chinese), while Beijing in return detained two Canadians, Michael Kovrig and Michael Spavor, both of whom were charged with espionage in June 2020. It is also pertinent that Beijing has signaled its displeasure with Canada by reducing imports of Canadian products such as pork and canola oil.

Conclusion

While Beijing itself is becoming more aggressive and belligerent, it cannot expect other countries to stick to their earlier positions on crucial strategic issues. Nor can the Global Times, the mouthpiece of China’s Communist Party, paper over the fact that China is on the defensive. Other countries are now finding common ground in the strategic and economic spheres. Results may not come overnight, but partnerships are likely to concretize and gather momentum, because Beijing seems in no mood to give up its hegemonic mindset and patronizing approach. Yet the countries and regional blocs seeking to counter China also need a clear vision of their own; divergences over minor issues will not help. A zero-sum approach vis-à-vis China is not beneficial, but avoiding one requires Beijing too to act responsibly, which seems doubtful given its behavior on a number of issues.

Nightcap

  1. On being black in Europe Chris Bertram, Crooked Timber
  2. Prudence, protests, and pandemics Greg Weiner, National Affairs
  3. Cambodia’s first national health service? Joanna Wolfarth, History Today
  4. Between German culture and Nazi culture Moritz Föllmer, OUPblog