Post-Mortem

Mr Trump is practically gone and he is not coming back. (For one thing, he will be too old in 2024. For another thing, see below.) The political conditions that got such a preposterous candidate elected in 2016, however, don’t look like they are going away. (I hope I am wrong.) A large fraction of Americans will continue to be ignored from an economic standpoint, as well as insulted daily by their betters. Four years of insults thrown at people like me, and the hysterical outpouring of contempt by liberal media elites in the last days of the Trump administration, are not making me go away. Instead, they will cement my opposition to their vision of the world and to their caste behavior. I would bet dollars to pennies that a high proportion of the 74 million+ who voted for Mr Trump in 2020 feel the same. (That’s assuming that’s the number who voted for him; I am not sure of it at all. It could be more. Currently, with the information available, I put it at 60/40 that the election was not – not – stolen.)

I never liked Trump, the man, for all the obvious reasons, although I admired his steadfastness because it’s so rare among politicians. In the past two years, I can’t say I liked any of his policies, though I liked his judicial appointments. It’s just that: who else could I vote for in 2016? Hillary? You are kidding, right? And in 2020, after President Trump was subjected to four years (and more) of unceasing gross abuse and of persecution guided by a totalitarian spirit, would it not have been dishonorable to vote for anyone but him? (Libertarians: STFU!)

Believe it or not, if Sen. Sanders and his 1950s ideas had not been eliminated once more in 2020, again through the machinations of the Democratic National Committee, I would have had a serious talk with myself. At least Sanders is not personally corrupt, and with a Republican Senate, we would have had a semi-paralyzed government, which would have been OK with me.

One week after the event of 1/6/21, call it “the breach” of the Capitol, many media figures continue to speak of a “coup.” Even the Wall Street Journal has joined in. That’s downright grotesque. I don’t doubt that entering the Capitol in a disorderly fashion and, for many (not all; see the videos), uninvited, is illegal as well as unseemly. I am in favor of the suspects being found and prosecuted, for trespassing or some such charge. This will have the merit of throwing some light on the political affiliation(s) of the window breakers. I still see no reason to abandon the possibility that some, maybe (maybe) in the vanguard, were Antifa or BLM professional revolutionaries. Repeating myself: Trump supporters have never behaved in that manner before. I am guessing the investigations and the prosecutions are going to be less than vigorous, precisely because the new administration will not want to know, or to have known, the details of the criminals’ identities. If I am wrong, and all the brutal participants were Trump supporters, we will know it very quickly. The media will be supine either way.

It’s absurd and obscenely overwrought to call the breaching of the Capitol on January 6th (by whomever) a “coup,” because there was never any chance that it would result in transferring control of the federal government to anyone. Develop the scenario: Both chambers are filled with protesters (of whatever ilk); protesters occupy both presiding chairs, and they hold in their hands both House and Senate gavels. What next? Federal agencies start taking their orders from them; the FBI reports to work as usual but only to those the protesters appoint? Then, perhaps, the Chairman of the Joint Chiefs interrupts the sketchy guy who is taking a selfie while sitting in the VP chair. He says he wants to hand him the nuclear “football.” (Ask Nancy Pelosi, herself perpetrator of a coup, though a small one.) If you think any of this is credible, well, think about it, think about yourself, think again. And get a hold of yourself!

That the Capitol riot was a political act is true in one way and one way only, a minor way. It derailed the electoral vote counting that had been widely described as “ceremonial.” It happened after (after) the Vice-President had declared loud and clear that he did not have the authority to change the votes. The counting resumed after only a few hours. There is no scenario, zero, under which the riot would have altered the choice of the next president. If there had been, the breach would have been a sort of coup, a weak one.

On 1/9/21, an announcer, I think it was on NPR, I hope it was on NPR, described the events as a “deadly” something or other. He, and the media in general, including Fox News, I am afraid, forgot to go into the details. In point of fact, five people died during the protest and part-riot of 1/6/21. One was a Capitol policeman who was hit with a fire extinguisher. As I write, there is no official allegation about who did it. There is no information about the political affiliation, if any, of the culprit(s). For sure, protesters caused none of the next three deaths, which were due to medical emergencies, including a heart attack. The fifth casualty was a protester, who was probably inside the Capitol illegally, and who was shot to death by a policeman. She was definitely a Trump supporter. She was unarmed. Because of the mendacity of the language used on air, many people who are busy with their lives will think that Trump supporters massacred five people. Disgraceful, disgusting reporting; but we are getting used to it.

Today and yesterday, I witnessed a mass movement of a kind I think I have not seen in my life, though it rings some historical bells. Pundits, lawmakers, and other members of their caste are elbowing one another out of the way to be next to make extremist pronouncements on the 1/6/21 events. Why, a journalist on Fox News, no less, a pretty blonde lady wearing a slightly off-the-shoulder dress, referred to a “domestic terror attack.” With a handful of courageous exceptions, all lawmakers I have seen appearing in the media have adopted extreme vocabulary to describe what remained a small riot, if it was a riot at all. I mean that it was a small riot compared to what happened in several American cities in the past year. The hypocrisy is colossal in people who kept their mouths mostly shut for a hundred nights or more of burning of buildings, of police cars, of at least one police precinct (with people in it), and of massive looting.

It’s hard to explain how the media and the political face of America became unrecognizable in such a short time. Two hypotheses. First, many of the lawmakers who were in the Capitol at the time of the breach came to fear for their personal safety. Four years of describing Trump supporters as Nazis and worse must have left a trace and multiplied their alarm. Except for the handful of Congressmen and women who served in the military and who saw actual combat, our lawmakers have nothing in their lives to prepare them for physical danger. They mostly live cocooned lives; the police forces that protect them have not been disbanded. (What do you know?) I think they converted the abject fear they felt for a short while into righteous indignation. Indignation is more self-respecting than fear for one’s skin.

My second hypothesis to explain the repellent verbal behavior: The shameful noises I heard in the media are the manifestation of a rat race to abandon a sinking ship. Jobs are at stake, careers are at stake, cushy lifestyles are at stake. “After Pres. Trump is gone, as he surely will be soon,” the lawmakers are thinking, “there will be a day of reckoning, and a purge. I have to establish right away a vivid, clear, unforgettable record of my hatred to try and avoid the purge. No language is too strong to achieve this end.” That’s true even for Republican politicians because they, too, have careers. Trump cabinet members resigned for the same reason, I think, when they could have simply declared, “I don’t approve of… but I am staying to serve the people to the end.”

Along with an outburst of extremist public language, there came a tsunami of censorship by social media, quite a few cases of people getting fired merely for having been seen at the peaceful demonstration (all legal though repulsive), and even a breach of contract by a major publisher against a US Senator based solely on his political discourse (to be resolved in court). And then, there are the enemy lists aired by the likes of CNN, for the sole purpose of ruining the careers of those who served loyally in the Trump administration.

President-elect Biden called for “unity.” Well, I have never, ever seen so much unity between a large fraction of the political class – soon an absolute majority in government – the big media, and large corporations. I have never seen it, but I have read about it. Such a union constituted the political form called “corporatism.” It was the practical infrastructure of fascism.

As if political correctness had only been its training wheels, the vehicle of political censorship is speeding up. The active policing of political speech can’t be far behind. It won’t even require a revision of the federal constitution so long as private companies such as Twitter and Facebook do the dirty work. Soon, Americans will watch what they are saying in public. I fear that national police agencies will be turned to a new purpose. (The FBI already proved its faithlessness four years ago, anyway.) Perhaps there will be little collective cynicism involved. It’s not difficult to adopt liberalism, a self-indulgent creed. And what we understand here (wrongly) to be “socialism” only entails an endless Christmas morning. So, why not? The diabolical Mr Trump will soon be remembered as having incited some misguided, uneducated, unpolished (deplorable) Americans to massacre their legitimately elected representatives.

Incidentally, in spite of a near consensus on the matter, I have not seen or heard anything from Pres. Trump that amounts to incitement to do anything (anything) illegal. There are those who will retort that inviting his angry supporters to protest was tantamount to incitement to violence. The logic of this is clear: Only crowds that are not angry should be invited to protest. Read this again. Does it make any sense? Make a note that the constitutional propriety of Mr Trump’s belief that the election had been stolen is irrelevant here. One does not have to be constitutionally correct to have the right to protest.

Night has fallen over America. We are becoming a totalitarian society with a speed I could not have foreseen. Of course, four years of unrelenting plotting to remove the properly elected president under false pretenses paved the way. Those years trained citizens to accept the unacceptable, to be intellectually docile. Suddenly, I don’t feel safe. I am going to think over my participation in social media, both because of widespread censorship and because it now seems dangerous. As far as censorship is concerned, I tried an alternative to Facebook, “Parler,” but it did not work for me. Besides, it seems that the big corporations, including Amazon and Apple, are ganging up to shut it down. The cloud of totalitarianism gathered so fast over our heads that all my bets are off about the kinds of risks I am now willing to take. I will still consider alternatives to Facebook, but they will have to be very user-friendly and reasonably populated. (If I want to express myself in the wilderness, I can always talk to my wife.) For the foreseeable future, I will still be easy to find in the blogosphere.

Best of luck to all my Facebook friends, including those who need to learn to think more clearly and those whose panties are currently in a twist.

Should we scrap STEM in high school?

STEM topics are important (duh!). Finding the future scientists who will improve my health and quality of living is important to me. I want society to cast a wide net to find all those poor kids, minority kids, and girls we’re currently training to be cute who, in the right setting, could be the ones to save me from the cancer I’m statistically likely to get.

But how much value are we really getting from 12th grade? I’m pulling a bait and switch with the title of this post–I think we should keep the norm of teaching 9th graders basic science. But by 12th grade, are we really getting enough value to warrant the millions of hours per year of effort we demand of 16-to-18-year-olds? I’m skeptical.

There are lots of things that should be taught in school. Ask any group of people and you’ll quickly come up with a long list of sensible-sounding ideas (personal finance, computer programming, economics, philosophy, professional communication, home ec., and on and on and on). But adding more content only means we do a worse job at all of it. And that means an increased chance of students simply rejecting those topics wholesale.

Society is filled with science/econ deniers of all persuasions. Anti-intellectuals have been a major constituency for at least the last decade. It’s not like these folks didn’t go to school. Someone tried to teach them. What I want to know is how things would have been different if we’d tried something other than overwhelming these people with authoritatively delivered facts (which seem to have resulted in push-back rather than enlightenment).

The last 6+ years of trying to teach economics to college kids against their will have convinced me that art (especially literature and drama) affects us much more than dissecting frogs or solving equations. And exposing kids to more literature and drama has the added benefit of (possibly) helping them develop their literacy (which we’ve forgotten is not a binary variable).

Although casting a wide net to find potential scientists is important, ultimately we only need scientific knowledge in the heads of those who don’t flip through it. But literature can help us develop empathy, and that is a mental skill we need in far more heads. I suspect that replacing a 12th-grade physics class that 98% of students forget with a literature class where you read a good book would do more to promote an enlightened society.

The short-sightedness of big C Conservatism

As we celebrate the approval of the Oxford-AstraZeneca Covid-19 vaccine, it is hard to imagine that anyone might take offense at the existence of an inexpensive, transportable solution to the pandemic. Yet this is exactly what I have encountered. A friend who is an arch-Conservative (note the capital C) responded with hostility during a discussion of the differences between the Oxford and Pfizer vaccines. The issue was that my friend couldn’t accept the scientific evidence that the Oxford vaccine is superior to the Pfizer one. He fixated on the fifteen-billion-dollar subsidy Pfizer received from the US government to create its vaccine. For the Conservative, it was as if admitting the difference between the vaccines was unpatriotic, since one was bought by the US taxpayer. His objections were based not on scientific evidence or ideology but upon identity and background.

During the discussion, my Conservative friend brought up the Oxford team’s continuous publication of their data as if that action somehow lessened their research’s impact or validity. The final paragraph on the Oxford research team’s webpage says:

This is just one of hundreds of vaccine development projects around the world; several successful vaccines offer the best possible results for humanity. Lessons learned from our work on this project are being shared with teams around the world to ensure the best chances of success.

The implication was “well, they’re just wacko do-gooders! They’re not going to make a profit acting like that!” – the idea being that legitimate scientific research bodies should behave like Scrooge McDuck with their knowledge. On a side note, this type of “Conservative” mentality has greatly damaged public perception of capitalism, a topic I’ll return to at a later point.

Members of the Oxford vaccine team are assumed to be in the running for the Nobel Prize, and for that, the odds of winning are proportionate to the speed with which the broader scientific community can check the findings. The Conservative could not overcome a mental block over the fifteen billion dollars. The difference is one of vision. To put it bluntly, Oxford is aware as an institution that it existed for almost nine hundred years before the creation of Pfizer and that it will probably exist nine hundred years after Pfizer is no more. Oxford wants the Nobel Prize; the long-term benefits – investment, grants, funding awards, etc. – far outweigh any one-time payout. As to the long-term outlook required for Nobel Prize pursuit, the willingness to pass up one benefit in favor of a multitude of others, it is alien to those whose focus is short-sighted, who are enticed by one-time subsidies or quick profits.

The conversation illustrated the problem that caused F.A. Hayek to write, in “Why I am not a Conservative”:

In general, it can probably be said that the conservative does not object to coercion or arbitrary power so long as it is used for what he regards as the right purposes. He believes that if government is in the hands of decent men, it ought not to be too much restricted by rigid rules. Since he is essentially opportunist and lacks principles, his main hope must be that the wise and the good will rule—not merely by example, as we all must wish, but by authority given to them and enforced by them. Like the socialist, he is less concerned with the problem of how the powers of government should be limited than with that of who wields them; and, like the socialist, he regards himself as entitled to force the value he holds on other people.

In the case of the vaccine, the Conservative I spoke with had the idea that since the government sponsored Pfizer’s version, Americans ought to accept placidly the Pfizer vaccine as their lot in life. Consequently, coercive policies, for instance refusing the AstraZeneca vaccine FDA approval (something which hasn’t occurred – yet), are acceptable. Behind this facile, even lazy, view lies an incomprehension when confronted with behaviors and mindsets calibrated for large-scale enterprises. Actions taken for long-term building – in this instance the possibility of winning a Nobel Prize – are branded as suspicious, underhanded. At an even deeper level lies a resentment of AstraZeneca’s partner: Oxford, with all of its associations.

Rather than being an aberration within big C “Conservatism,” the response to a comparison between the vaccines detailed in this anecdote conforms to Conservative ideas. Narrowness of mind and small scope of vision are prized. As Hayek pointed out in 1960, these traits lead to a socio-cultural and intellectual poverty which is as poisonous as the material and moral poverty of outright socialism. My own recent conclusion is that the poverty of big C “Conservatism” might be even worse than that of socialism, because mental and socio-cultural poverty can create circumstances leading to a longer, more subtle slide into material poverty, accompanied by a growing resentment as conformity still leads to failure. When class and ideological dynamics invade matters such that scientific evidence is interpreted through political identities, we face a grave threat to liberty.

Disruption arises from Antifragility

One of my favorite classics about why big businesses can’t always innovate is Clayton Christensen’s The Innovator’s Dilemma. It is one of the most misunderstood business books, since its central concept–disruption–has been misquoted, and then popularized. Take the recent post on Investopedia that says in its second sentence that “Disruptive technology sweeps away the systems or habits it replaces because it has attributes that are recognizably superior.” This is the ‘hype’ definition used by non-innovators.

I think part of the misconception comes from thinking of disruption as major, public, technological marvels that are recognizable for their complexity or even for creating entire new industries. Disruptive innovations tend instead to be marginal, demonstrably simpler, worse on conventional scales, and they start out by slowly taking over small, adjacent markets.

It recently hit me that you can identify disruption via Nassim Nicholas Taleb’s simple heuristic of recognizing when industry players are fragile. Taleb is my favorite modern philosopher, because he actually brought a new, universally applicable concept to the table, one that puts into words what people have been practicing implicitly but without a term for it. Anti-fragility is the inverse of fragility, and it actually helps you understand fragility better. Anti-fragile does not mean ‘resists breaking,’ which is more like ‘robust;’ instead, it means gains from chaos. Ford Pintos are fragile, Nokia phones are robust, but mechanical things are almost never anti-fragile. Bacterial species are anti-fragile to antibiotics, as trying to kill them makes them stronger. Anti-fragile things are usually organic, and usually made up of fragile things–the death of one bacterium makes the species more resistant.

Taleb has a simple heuristic for finding anti-fragility. I recommend you read his book to get the full picture, but the secret to this concept is a simple thought experiment. Take any concept (or thing), and identify how it works (or fails to work). Now ask: if you subject it to chaos–by that, I mean, if you try to break it–and slowly escalate how hard you try, what happens?

  • If it gets disproportionately harmed, it is fragile. E.g., traffic: as you add cars, time-to-destination gets worse slowly at first, then all of a sudden increases rapidly, and if you do it enough, cars literally stop.
  • If it gets proportionately harmed or there is no effect, it is robust. Examples are easy, since most functional mechanical and electric systems are either fragile (such as Ford Pintos) or robust (Honda engines, Nokia phones, the Great Pyramids).
  • If it gets better, it is anti-fragile. Examples are harder here, since it is easier to destroy than build (and anti-fragility usually arises from fragile elements, which gets confusing); bacterial resistance to antibiotics (or really, the function of evolution itself) is a great one.

The only real way to get anti-fragility outside of evolution is through optionality. Debt (obligation without a choice) is fragile to any extraneous shock, so a ‘free option’ (choice without obligation, its opposite) is pure anti-fragility. This is not just about literal ‘options’ in the market; anti-fragility takes a different form in every case, and though the face is different, the structure is the same. OK, get it? Maybe you do. I recommend coming up with your own example–if you are just free riding on mine, you don’t get it.
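To make the heuristic concrete, here is a minimal sketch (my own illustration, not code from Taleb or Christensen; the payoff shapes and volatility numbers are invented for the example). It subjects a concave, a linear, and a convex payoff to random shocks at two levels of volatility and checks whether the average outcome worsens disproportionately, worsens roughly in proportion, or improves:

```python
# Minimal sketch of the fragile / robust / anti-fragile heuristic (illustrative only).
import random

def fragile(shock):      # harm accelerates with shock size (concave payoff)
    return -shock ** 2

def robust(shock):       # harm grows only in proportion to the shock (linear payoff)
    return -shock

def antifragile(shock):  # a 'free option': capped downside, open-ended upside (convex payoff)
    return max(shock - 1.0, 0.0)

def average_payoff(payoff, volatility, trials=100_000):
    """Average outcome when shocks are random with the given volatility."""
    return sum(payoff(abs(random.gauss(0.0, volatility))) for _ in range(trials)) / trials

for name, payoff in [("fragile", fragile), ("robust", robust), ("anti-fragile", antifragile)]:
    calm, chaotic = average_payoff(payoff, 0.5), average_payoff(payoff, 2.0)
    print(f"{name:>12}: calm={calm:8.3f}  chaotic={chaotic:8.3f}  gains from chaos: {chaotic > calm}")
```

Quadrupling the volatility hurts the concave payoff far more than proportionately, hurts the linear one roughly in proportion, and actually helps the option-like payoff, which is the whole point of the heuristic.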

Anyway, back to Christensen. Taleb likes theorizing and leaves example-finding to you, while Christensen scrupulously documented what happened to hundreds of companies, and his concepts arose from his data. Think of it this way: Christensen is Darwin, carefully measuring beaks and recognizing natural selection, where Taleb is Wallace, theorizing from his experience and the underlying math of reality. Except in this case, Taleb is not just talking about natural selection; he is also showing how mutation works, and giving a theory of evolution that is not restricted to just biology.

I realized that you can actually figure out whether an innovation is disruptive using this heuristic. It takes some care, because people often look at the technology and ask if it is anti-fragile–which is a mistake. Technologies are inorganic, so usually robust or fragile. Industries are organic, strategies are organic, companies are organic. Many new strategies build on companies’ competencies or existing customer bases, and though they may meet the ‘hype’ definition above, they give upside to incumbents, and are thus not fragilizing. Disruption happens when a company has an exposure to a strategy that it has little to gain from, but that could cannibalize its market if it grows, as anti-fragile things are wont to do.

The question is: is a given incumbent company fragile with respect to a given strategy? Let’s start with some examples–first Christensen’s, then my own:

  • Were 3″ drive makers fragile with respect to using smaller drives in cars?
    • In my favorite Christensen anecdote, the CEO of a 3″ drive maker, whose company designed a smaller 1.8″ drive but couldn’t sell it to their PC or mainframe customers, complained that he did exactly what Christensen said, and built smaller drives, and there was no market. Meanwhile, startups were selling 1.8″ drives like crazy–to car companies, for onboard computers.
    • Christensen notes that this was a tiny market, which would be a 0.01% change on a big-company income statement, and a low-profit one at that. So, since these companies were big, they were fragile to low-margin, low-volume, fast-growing submarkets. Meanwhile, startups were unbelievably excited about selling small drives at a loss, just so that Honda would buy from them.
    • So, 3″ drive makers had everything to lose (the general drive market) and a blip to gain, while startups had everything to gain and nothing to lose. Note that disruptive technologies are not those that are hard to invent or that immediately revolutionize the industry. Big companies (as Christensen proved) are actually better at big changes and at invention. They are worse at recognizing the value of small changes and of jumps between industries.
  • Were book retailers fragile with respect to online book sales?
    • Yes, Amazon is my Christensen follow-on. Jeff Bezos, as documented in The Everything Store, gets disruption: he invented the ‘two-pizza meeting’, so he ‘gets’ smallness; he intentionally isolates his innovation teams, so he ‘gets’ the excitement of tiny gains and allows cannibalism; he started in a proof-of-concept, narrow, feasible discipline (books) with the knowledge that it would grow into the Everything Store if successful, so he ‘gets’ going from simple beginnings to large-scale, well, disruption.
    • The Everything Store reads like a manual on how to be disrupted. Barnes & Noble first said “We can do that whenever we want.” Then when Bezos got some traction, B&N said “We can try this out but we need to figure out how to do it using our existing infrastructure.” Then when Bezos started eating their lunch, B&N said “We need to get into online book sales,” but sold the way they did in stores, by telling customers what they want, not by using Bezos’ anti-fragile review system. Then B&N said “We need to start doing whatever Bezos does, and beat him by out-spending,” by which time he was past that and selling CDs and then (eventually) everything.
    • Book sellers were fragile because they had existing assets that had running costs; they were catering to customers with not just a book, but with an experience; they were in the business of selecting books for customers, not using customers for recommendations; they treasured partnerships with publishers rather than thinking of how to eliminate them.
  • Now, some rapid-fire. Think carefully, since it is easy to fall into the trap of thinking industry titans were stupid, not fragile, and it is easy to have false positives unless you use Taleb’s heuristic.
    • Car companies were fragile to electric sports cars, and Elon Musk was anti-fragile. Sure, he went up-market, which doesn’t follow Christensen’s down-market paradigm, but he found the small market that the Nissan Leaf missed.
    • NASA was fragile to modern, cheap, off-the-shelf space solutions, and…yet again…Elon Musk was anti-fragile.
    • Taxis were fragile to app-based rides.
    • Hotels were fragile to app-based rentals.
    • Cable was fragile to sticks you put in your TV.
    • Hedge funds were fragile to index funds, currently are fragile to copy trading, and I hope to god they break.
  • Lastly, some counter-examples, since it is always better to use the via negativa, and assuming you have additive knowledge is dangerous. If you disagree, prove me wrong, found a startup, and make a bajillion dollars by disrupting the big guys who won’t be able to find a market:
    • There is nothing disruptive about 5G.
    • Solar and wind are fragile and fragilizing.
    • What was wrong with WeWork’s business model? Double fragility–fixed contracts with building owners, flexible contracts with customers.
    • On a more optimistic note, cool tech can still be sustaining (as opposed to disruptive), like RoboAdvisors or induction stoves or 3D printed shoes.
    • Artificial intelligence and blockchain are not disruptive in any use you have heard of (though they may be in uses you don’t know about yet).

So, to summarize: if a company is fragile to a new strategy, the best it can do is try to robustify itself, since it has little upside. Many innovations give upside to incumbents at the marginal cost of R&D, and thus sustain them. Disruption happens when incumbents have little to gain from adopting a strategy, while startups have high exposure to its potential upside, because the growth available from small-market, incremental or simplifying opportunities is, definitionally, anti-fragility to that strategy.

Now, I hope you have a tool for judging whether industrial incumbents are fragile. Rather than trying to predict the success or failure of any one of them, you should just use Taleb’s heuristic–that will help you sort things into ‘hyped as disruptive’ vs. ‘actually probably disruptive.’ A last thought: if you found this wildly confusing, just remember that disruptive innovations tend to steal the jobs of incumbents. So, if an incumbent (say, a Goldman Sachs/Morgan Stanley veteran writing the definition of “disruptive” for Investopedia) is talking about a banking or trading technology, it is almost certainly not disruptive, since he would hardly tell you how to render him superfluous. You will find out what is disruptive when he makes an apology video while wearing a nice watch and French cuffs.

Prediction market update

The market for who wins the presidency closed this morning! But the Electoral College margin of victory market was still open and at 98 cents for the already certain outcome. Maxing out my position there would mean $17 for free! So I did, and the market dipped to 97 cents.
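For the record, the arithmetic behind the ‘free $17’ (a rough sketch; the $850-per-contract cap is my assumption, since the post doesn’t name the platform or its limit, and fees are ignored):

```python
# Rough sketch of the "free $17" arithmetic; the $850 cap is an assumption, fees ignored.
cap, price = 850.00, 0.98
shares = cap / price               # ~867 shares bought at 98 cents each
profit = shares * 1.00 - cap       # each share pays out $1 when the certain outcome resolves
print(round(shares), round(profit, 2))   # -> 867 17.35
```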

This truly is the dumbest jack in the box. We all know exactly what’s going to happen, and yet…

The poverty of the modern middle class: prologue

About six months after I graduated from Columbia, a couple who knew members of my extended family asked me to lunch unexpectedly. Not wishing to be rude, I went. As it turned out, the couple had an agenda; they wanted to talk about having their daughter apply to Ivy League graduate schools.

Their daughter had recently graduated from a private liberal arts college and was having trouble finding permanent employment in a field and at a level her parents considered acceptable given the cost of her education. In fairness to her, she was interning at a non-profit in NYC. Her parents, though, had unrealistic expectations and seemed to feel that having paid for her to go to a “prestigious” private school, she should have entered the workforce at a much higher level.

The parents had some highly specific questions, ones so precise that I suggested they contact someone in admissions at the respective universities or speak with an application consultant. In retrospect, I suspect they may have already done so and the feedback hadn’t been favorable. Their questions were focused on seeing if there might be workarounds or special exemptions for the graduate program prerequisites. While there are, their daughter wasn’t eligible for any of them.

The parents were visibly angry, unable to accept that their daughter’s endless sports and community involvement, which they had so carefully funded, were meaningless in the face of program prerequisites. The graduate programs had study-abroad components, so the language prerequisites, which the daughter couldn’t meet, were immutable. Additionally, as the programs were designed for those interested in careers such as publishing, journalism, or policy writing, all applications demanded a large and exceptionally high-quality writing sample. To have an idea of what was expected, think of Princeton University’s standard 50,000-word (i.e. a small book) undergraduate thesis.[1] The daughter had neither the language skills nor the writing sample. In the case of the former, the private college her parents had chosen didn’t offer modern languages at anything resembling the level expected; as for the writing sample, the young woman simply didn’t have one. Her parents were vague as to the reason, but I think she may have chosen an academic track which didn’t require an undergraduate thesis.

The parents weren’t completely sure which upset them more: that their capability as parents was under review, or that everything they thought was “valuable” or “worthy” had been found wanting. Sports? Irrelevant. Door-to-door political canvassing? Commonplace. The parents were proud of having provided certain experiences, such as trips to Disneyland, ski trips, and cruises. These activities have importance as symbols of a financial middle class with enough liquidity to spend on recreation, but the daughter couldn’t include them as significant in personal statements. In this, the daughter was disadvantaged compared to Abigail Fisher and her discovery that 1,999,999 other people in any given year are Habitat for Humanity volunteers.

The episode revealed a bankruptcy of mind, culture, and outlook which is the poverty of those whose incomes are firmly middle class but whose intellectual knowledge and cultural capital are lacking. As with the person of my previous post, there was a trust in the opinions of the majority and an uninquiring faith that doing x, y, and z is guaranteed to lead to immediate status, security, and success. The financial but not social or cultural middle class has realized that parts of life and social experiences are out of reach: not because they were originally off limits, but because too much time has passed, and individuals, such as those in this story, are behind the curve when it comes to specific skills and types of knowledge. People, entire sections of the population, have gone so far down a particular path that it’s too late to turn back.


[1] In case readers are wondering if it is possible to access this type of writing preparation at the undergraduate level outside of the Ivy League, it is. Speaking from my own experience, most liberal arts colleges and large universities offer an Honors track or program through which participating students receive the support and guidance to write longer, more advanced papers and theses.

Your vote is your voice–but actions speak louder than words

On voting day, with everyone tweeting and yelling and spam-calling you to vote, I want to offer some perspective. Sure, ‘your vote is your voice,’ and those who skip the election will remain unheard by political leaders. Sure, these leaders probably determine much more of your life than we would like them to. And if you don’t vote, or ‘waste’ your vote on a third party or write in Kim Jong Un, you are excluded from the discussion of how these leaders control you.

But damn, that is such a limited perspective. It’s like the voting booth has blinders that conceal what is truly meaningful. I’m not going to throw the traditional counter-arguments to ‘vote or die’ at you, though my favorites are Arrow’s Impossibility Theorem and South Park’s Douche and Turd episode. Instead, I just want to say: compared to how you conduct your life, shouting into the political winds is simply not that important.

The wisdom of the stoics resonates greatly with me on this. Seneca, a Roman philosopher, tutor, and businessman, had the following to say on actions, on knowledge, on trust, on fear, and on self-improvement:

  • Lay hold of today’s task, and you will not need to depend so much upon tomorrow’s. While we are postponing, life speeds by. Nothing is ours, except time. On Time
  • Each day acquire something that will fortify you against poverty, against death, indeed against other misfortunes as well; and after you have run over many thoughts, select one to be thoroughly digested that day. This is my own custom; from the many things which I have read, I claim some one part for myself. On Reading
  • If you consider any man a friend whom you do not trust as you trust yourself, you are mightily mistaken and you do not sufficiently understand what true friendship means. On Friendship
  • Reflect that any criminal or stranger may cut your throat; and, though he is not your master, every lowlife wields the power of life and death over you… What matter, therefore, how powerful he be whom you fear, when every one possesses the power which inspires your fear? On Death
  • I commend you and rejoice in the fact that you are persistent in your studies, and that, putting all else aside, you make it each day your endeavour to become a better man. I do not merely exhort you to keep at it; I actually beg you to do so. On the Philosopher’s Lifestyle

Seneca goes on, in this fifth letter, to repeat the stoic refrain of ‘change what you can, accept what you cannot.’ But he expands, reflecting that your mind is “disturbed by looking forward to the future. But the chief cause of [this disease] is that we do not adapt ourselves to the present, but send our thoughts a long way ahead. And so foresight, the noblest blessing of the human race, becomes perverted.”

Good leadership requires good foresight, but panic over futures out of our control perverts this foresight into madness. So, whether you think that Biden’s green promises will destroy the economy or Trump’s tweets will incite racial violence, your actions should be defined by what you can do to improve the world–and this is the only scale against which you should be judged.

So, set aside voting as a concern. Your voice will be drowned out, and then forgotten. But your actions could push humanity forward, in your own way, and if you fail in that endeavor, then no vote will save you from the self-knowledge of a wasted life. If you succeed, then you did the only thing that matters.

Offensive advantage and the vanity of ethics

I have recently shifted my “person I am obsessed with listening to”: my new guy is George Hotz, an eccentric innovator who built a cell phone that can drive your car. His best conversations come from Lex Fridman’s podcast (in 2019 and 2020).

Hotz’s ideas call into question the efficacy of any ethical strategy to address ‘scary’ innovations. For instance, based on his experience playing “Capture the Flag” in hacking challenges, he noted that he never plays defense: a defender must cover all vulnerabilities, and loses if he fails once. An attacker only needs to find one vulnerability to win. Basically, in CTF, attacking is anti-fragile, and defense is fragile.

Hotz’s work centers on reinforcement learning systems, which learn from AI errors in automated driving to iterate toward a model that mimics ‘good’ drivers. Along the way, he has been bombarded with questions about ethics and safety, and I was startled by the frankness of his answer: there is no way to guarantee safety, and Comma.ai still depends on human drivers to intervene to protect themselves. Hotz basically dismisses any system that claims to take an approach to “Level 5 automation” that is not learning-based and iterative, as driving in any condition, on any road, is an ‘infinite’ problem. Infinite problems have natural vulnerabilities to errors and are usually closer to impossible, whereas finite problems often have effective and world-changing solutions. Here are some of his ideas, and some of mine that spawned from his:

The Seldon fallacy: In short, 1) It is possible to model complex, chaotic systems with simplified, non-chaotic models; 2) Combining chaotic elements makes the whole more predictable. See my other post for more details!

Finite solutions to infinite problems: In Hotz’s words regarding how autonomous vehicles take in their environments, “If your perception system can be written as a spec, you have a problem”. When faced with any potential obstacle in the world, a set of plans–no matter how extensive–will never be exhaustive.

Trolling the trolley problem: Every ethicist looks at autonomous vehicles and almost immediately sees a rarity–a chance for an actual, direct application of a philosophical riddle! What if a car has to choose between running into several people or altering its path to hit only one? I love Hotz’s answer: we give the driver the choice. It is hard to solve the trolley problem, but not hard to notice it, so the software alerts the driver whenever one may occur–just like any other disengagement. To me, this takes the hot air out of the question, since it shows that, as with many ethical worries about robots, the problem is not unique to autonomous AIs, but inherent in driving–and if you really are concerned, you can choose yourself which people to run over.

Vehicle-to-vehicle insanity: Some autonomous vehicle innovators promise “V2V” connections, through which all cars ‘tell’ each other where they are and where they are going, and thus gain tremendously from shared information. Hotz cautions (OK, he straight up said ‘this is insane’) that any V2V system depends, for the safety of each vehicle and rider, on 1) no communication errors and 2) no liars. V2V is just a gigantic target waiting for a black hat, and by connecting the vehicles, the potential damage inflicted is magnified thousands-fold. That is not to say the cars should not connect to the internet (e.g. having Google Maps to inform on static obstacles is useful), just that the safety of passengers should never depend on a single system evading all errors or malfeasance.

Permissioned innovation is a contradiction in terms: As Hotz says, the only way forward in autonomous driving is incremental innovation. Trial and error. Now, there are less ethically worrisome ways to err–such as requiring a human driver who can correct the system. However, there is no future for innovations that must emerge fully formed before they are tried out. And, unfortunately, ethicists–whose only skin in the game is getting their voice heard over the other loud protesters–have an incentive to promote the precautionary principle, loudly chastise any innovator who causes any harm (like Uber’s first pedestrian killed), and demand that ethical frameworks precede new ideas. I would argue back that ‘permissionless innovation’ leads to more inventions and long-term benefits, but others have done so quite persuasively. So I will just say that even the idea of ethics-before-inventions contradicts itself. If the ethicist could make such a framework effectively, the framework would include the invention itself–making the ethicist the inventor! Since what we get instead is ethicists hypothesizing as to what the invention will be, and then restricting those hypotheses, we end up with two potential outcomes: one, the ethicist hypothesizes correctly, bringing the invention within the realm of regulatory control, and thus killing it. Two, the ethicist has a blind spot, and someone invents something in it.

“The Attention”: I shamelessly stole this one from video games. Gamers are very focused on optimal strategies, and rather than just focusing on cost-benefit analysis, gamers have another axis of consideration: “the attention.” Whoever forces their opponent to focus on responding to their own actions ‘has the attention,’ which is the gamer equivalent of the weather gauge. The lesson? Advantage is not just about outscoring your opponent, it is about occupying his mind. While he is occupied with lower-level micromanaging, you can build winning macro-strategies. How does this apply to innovation? See “permissioned innovation” above–and imagine if all ethicists were busy fighting internally, or reacting to a topic that was not related to your invention…

The Maginot ideology: All military historians shake their heads in disappointment at the Maginot Line, which Hitler easily circumvented. To me, the Maginot planners suffered from two fallacies: first, they prepared for the war of the past, solving a problem that was no longer extant. Second, they defended all known paths, and thus forgot that, on defense, you fail if you fail once, and that attackers tend to exploit vulnerabilities, not prepared positions. As Hotz puts it, it is far easier to invent a new weapon–say, a new ICBM that splits into 100 tiny AI-controlled warheads–than to defend against it, such as by inventing a tracking-and-elimination “Star Wars” defense system that can shoot down all 100 warheads. If you are the defender, don’t even try to shoot down nukes.

The Pharsalus counter: What, then, can a defender do? Hotz says he never plays defense in CTF–but what if that is your job? The answer is never easy, but it should include some level of shifting the vulnerability to uncertainty onto the attacker (as with “the Attention”). As I outlined in my previous overview of Paradoxical genius, one way to do so is to intentionally limit your own options, but double down on the one strategy that remains. Thomas Schelling won the “Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel” for outlining this idea in The Strategy of Conflict, but more importantly, Julius Caesar himself pioneered it by deliberately backing his troops into a corner. As remembered in HBO’s Rome, at the seminal engagement of Pharsalus, Caesar said: “Our men must fight or die. Pompey’s men have other options.” However, he also introduced another underappreciated innovation, the idea of ‘floating’ reserves. He held back several cohorts of his best men to be deployed wherever vulnerabilities cropped up–thus enabling him to be reactive, and forcing his opponent to react to his counter. Lastly, Caesar knew that Pompey’s ace-in-the-hole, his cavalry, was made up of vain higher-class nobles, so he told his troops, instead of inflicting maximum damage indiscriminately, to focus on stabbing their faces and thus disfiguring them. Indeed, Pompey’s cavalry did not flee from death, but it did flee from facial scars. To summarize, the Pharsalus counter is: 1) create a commitment asymmetry, 2) keep reserves to fill vulnerabilities, and 3) deface your opponents.

Offensive privacy and the leprechaun flag: Another way to shift the vulnerability is to give false signals meant to deceive black hats. In Hotz’s parable, imagine that you capture a leprechaun. You know his gold is buried in a field, and you force the leprechaun to plant a flag where he buried it. However, when you show up to the field, you find it planted with thousands of flags over its whole surface. The leprechaun gave you a nugget of information–but it became meaningless in the storm of falsehood. This is a way that privacy may need to evolve in the realm of security; we will never stop all quests for information, but planting false (leprechaun) flags could deter black hats regardless of their information retrieval abilities.

The best ethics is innovation: When asked what his goal in life is, Hotz says ‘winning.’ What does winning mean? It means constantly improving one’s skills and information, while also seeking to find a purpose that changes the world in a way you are willing to dedicate yourself to. I think the important part of this is that Hotz does not say “create a good ethical framework, then innovate.” Instead, he is effectively saying to do the opposite–learn and innovate to build abilities, and figure out how to apply them later. The insight underlying this is that the ethics are irrelevant until the innovation is there, and once the innovation is there, the ethics are actually easier to nail down. Rather than discussing ‘will AIs drive cars morally,’ he is building the AIs and anticipating that new tech will mean new solutions to the ethical questions, not just the practical considerations. So, in summary: if you care about innovation, focus on building skills and knowledge bases. If you care about ethics, innovate.

The Seldon Fallacy

Like some of my role models, I am inspired by Isaac Asimov’s vision. However, for years, the central ability at the heart of the Foundation series–‘psychohistory,’ which enables Hari Seldon, the protagonist, to predict broad social trends across thousands of worlds over thousands of years–has bothered me. Not so much because of its impact in the fictional universe of Foundation, but for how closely it matches the real-life ideas of predictive modeling. I truly fear that the Seldon Fallacy is spreading, building up society’s exposure to negative, unpredictable shocks.

The Seldon Fallacy: 1) It is possible to model complex, chaotic systems with simplified, non-chaotic models; 2) Combining chaotic elements makes the whole more predictable.

The first part of the Seldon Fallacy is the mistake of assuming reducibility, or, more poetically, of NNT’s Procrustean Bed. As F.A. Hayek asserted, no predictive model can be less complex than the system it predicts, because of second-order effects and the accumulation of errors of approximation. Isaac Asimov’s central character, Hari Seldon, fictionally ‘proves’ the ludicrous fallacy that chaotic systems can be reduced to ‘psychohistorical’ mathematics. Using this special ability, while unable to predict individuals’ actions precisely, Seldon can map out social forces with such clarity that he correctly predicts the fall of a 10,000-year empire. I hope you, reader, don’t believe that…so you don’t blow up the economy by betting a fortune on an economic prediction. Two famous thought experiments disprove this: the three-body problem and the damped, driven oscillator. If we can’t even model a system with three ‘movers’ because of second-order effects, how can we model interactions between millions of people? Basically, with no way to know which reductions in complexity are meaningful, Seldon cannot know whether, in laying his living system on a Procrustean bed, he has accidentally decapitated it. Now, to turn to the ‘we can predict social, though not individual, futures’ portion of the fallacy: the claim that big things are predictable even if their constituent elements are not.

The second part of the Seldon Fallacy is the mistake of ‘the marble jar.’ Not all randomnesses are equal: drawing white and black marbles from a jar (with replacement) is fundamentally predictable, and the more marbles drawn, the more predictable the mix of marbles in the jar. Many models depend on this assumption or similar ones–that random events distribute normally (in the Gaussian sense) in a way that increases the certainty of the model as the number of samples increases. But what if we are not observing independent events? What if they are not Gaussian? What if someone tricked you, and tied some marbles together so you can’t take out only one? What if one of them is attached to the jar, and by picking it up, you inadvertently break the jar, spilling the marbles? Effectively, what if you are not working with a finite, reducible, Gaussian random system, but an infinite, Mandelbrotian, real-world random system? What if the jar contains not marbles, but living things?

I apologize if I lean too heavily on fiction to make my points, but another amazing author answers this question much more poetically than I could. Just in the ‘quotes’ from wise leaders in the introductions to his historical-fantasy series, Jim Butcher tells stories of the rise and fall of civilizations. First, on cumulative meaning:

“If the beginning of wisdom is in realizing that one knows nothing, then the beginning of understanding is in realizing that all things exist in accord with a single truth: Large things are made of smaller things.

Drops of ink are shaped into letters, letters form words, words form sentences, and sentences combine to express thought. So it is with the growth of plants that spring from seeds, as well as with walls built from many stones. So it is with mankind, as the customs and traditions of our progenitors blend together to form the foundation for our own cities, history, and way of life.

Be they dead stone, living flesh, or rolling sea; be they idle times or events of world-shattering proportion, market days or desperate battles, to this law, all things hold: Large things are made from small things. Significance is cumulative–but not always obvious.”

–Gaius Secundus, Academ’s Fury

Second, on the importance of individuals as causes:

“The course of history is determined not by battles, by sieges, or usurpations, but by the actions of the individual. The strongest city, the largest army is, at its most basic level, a collection of individuals. Their decisions, their passions, their foolishness, and their dreams shape the years to come. If there is any lesson to be learned from history, it is that all too often the fate of armies, of cities, of entire realms rests upon the actions of one person. In that dire moment of uncertainty, that person’s decision, good or bad, right or wrong, big or small, can unwittingly change the world.

But history can be quite the slattern. One never knows who that person is, where he might be, or what decision he might make.

It is almost enough to make me believe in Destiny.”

–Gaius Primus, Furies of Calderon

If you are not convinced by the wisdom of fiction, put down your marble jar, and do a real-world experiment. Take 100 people from your community, and measure their heights. Then, predict the mean and distribution of height. While doing so, ask each of the 100 people for their net worth. Predict a mean and distribution from that as well. Then, take a gun, and shoot the tallest person and the richest person. Run your model again. Before you look at the results, tell me: which one do you expect shifted more?

I seriously hope you bet on the wealth model. Height, like marble-jar samples, is normally distributed. Wealth follows a power law, meaning that individual datapoints at the extremes have outsized impact. If you happen to live in Seattle and shot a tech CEO, you may have lowered the group’s mean net worth by more than the average net worth of the other 99 people!
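If you would rather simulate than shoot, here is a minimal sketch of the experiment (my own, with invented parameters: a normal distribution stands in for height and a Pareto distribution for net worth):

```python
# Simulate the experiment above: remove the single largest observation and see how far the mean moves.
# The distributions and parameters are invented for illustration, not real data.
import random

random.seed(42)

heights = [random.gauss(170, 10) for _ in range(100)]                 # cm, roughly Gaussian
wealths = [20_000 * random.paretovariate(1.16) for _ in range(100)]   # fat-tailed (power law)

def mean(xs):
    return sum(xs) / len(xs)

for name, xs in [("height", heights), ("net worth", wealths)]:
    survivors = sorted(xs)[:-1]    # drop the tallest person / the richest person
    before, after = mean(xs), mean(survivors)
    print(f"{name:>9}: mean shifts by {100 * (before - after) / before:.1f}% "
          f"when the largest value is removed")
```

Removing the tallest barely moves the mean; removing the richest can move it by a large fraction, because a single draw from a fat-tailed distribution can dominate the whole sum.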

So, unlike the Procrustean Bed (part 1 of the Seldon Fallacy), the Marble Jar (part 2) is not always a fallacy: there are systems that follow the Gaussian distribution, and for them the assumption holds. However, many consequential systems–including earnings, wars, governmental spending, economic crashes, bacterial resistance, inventions’ impacts, species survival, and climate shocks–are non-Gaussian, and thus the impact of a single individual action can blow up the model.

The crazy thing is, Asimov himself contradicts his own protagonist in what is (in my opinion) his magnum opus. While the Foundation series keeps alive the myth of the predictive simulation, my favorite of his books–The End of Eternity (spoilers)–is a magnificent destruction of the concept of a ‘controlled’ world. For large systems, this book is also a death knell for predictability itself. The Seldon Fallacy–that a simplified, non-chaotic model can predict a complex, chaotic reality, and that size enhances predictability–is shown, through the adventures of Andrew Harlan, to be riddled with hubris and catastrophic risk. I cannot reduce his complex ideas to a simple summary, for I may decapitate his central model. Please read the book yourself. I will say, I hope that as part of your reading you take to heart the larger lesson of Asimov on predictability: it is not only impossible, but undesirable. And please, let’s avoid staking any of our futures on today’s false prophets of predictable randomness.

Necessity constrains even the gods

I was recently talking to my cofounder about the concept of “fuck-you” money. “Fuck-you” money is the point at which you no longer need to care what other people think; you can fund what you want without worrying about ending up broke–so long as you recognize the power of necessity.

It reminded me of three things I have read before. One is from the brilliant economist and historian Thomas Sowell, who wrote in A Conflict of Visions that ideological divides often turn on the disagreement between “constrained” and “unconstrained” visions of the world and humanity. Effectively, the world contains some who recognize that humans have flaws that culture has helped us work through, and that we should be grateful for the virtues handed down to us and understand that utopianism is dangerous self-deception. But it contains many others who see all human failings as stemming from social injustices, since in nature humans would have no social problems. Those who line up behind Hobbes fight those who still believe in the noble savage and Rousseau’s perfect state of nature. To me, this divide encapsulates the question: did necessity emerge before human society? And if so, does it still rule us?

I know what the wisdom of antiquity says. The earliest cosmogonies, origin stories of the gods, identify Ananke (Necessity) as springing forth from the Earth herself, before the gods, and restricting even them. This story was passed on to Greek thinkers like Plato (Republic) and playwrights like Euripides (Alcestis), who found human government and the fate of heroes to lie within the tragic world of necessity: necessity first, all else second.

Lastly, this reminds me of Nassim Nicholas Taleb's Antifragile. He points out that the first virtue is survival, and that optionality is pure gain. Until you address necessity, your optionality, your choices and your chances, is fundamentally limited. As an entrepreneur who literally lives the risk of not surviving, I do not need to be convinced. Necessity rules even the gods, and it certainly rules those with "fuck-you" money. But it rules me even more. I am ruled by the fear that I may fail my family, myself, and my company at Maslow's level of survival. Those with "fuck-you" money have at least moved up to the level where they have the chance to fail society. And the lesson from history, from mythology, and from surviving in the modern economy is not that one should just be resigned to reaching one's limits. It is to strive to reach the level where you are pushing them, and the whole time to recognize the power of Necessity.

Choosing inadequacy

About a year ago, I had dinner with a friend who I have known more or less my entire life. We hadn’t seen each other in over ten years, though, not since she started college. During the interval, she became an inveterate social climber – at one point avowing completely seriously that she was open to marrying a rich man if it meant that she could have a flat in one of the world’s most expensive cities. She was also an expert at being woke. The contradiction in her thought processes – her craving for a life of riches and luxury and her woke “eat the rich” attitude – caused me to recognize the fuel behind the attraction redistributionist ideologies have for young Americans.

At some point in her trajectory, my friend had settled on using the education system to climb the social ladder. In fairness to her, there is a pervasive idea that this is a valid approach; J.D. Vance mentioned it in the conclusion to Hillbilly Elegy. Choosing between the flagship state university and a small private liberal arts college, she picked the latter, a "social" school held in high esteem regionally and thought to be intellectually rigorous.

Upon graduating and moving two time zones away for graduate school, she made two unwelcome discoveries: 1) she was behind academically and intellectually, and 2) her college had scant brand-name value in the broader world. According to her, her graduate university's student body was composed of children of America's elite who "didn't get into Harvard." She held a teaching assistantship for 101-level English literature classes and was discomfited to find that her freshman students were better writers with a broader sense of literature and the humanities than she was. She mentioned that she learned about entire chunks of the English literary canon from them, which is appalling given that she had majored in English at her liberal arts college.

When Austrian novelist Stefan Zweig died, his executors found the manuscript for his novel Rausch der Verwandlung among his papers. The book’s title in English is The Post-Office Girl,[1] and it tells the story of a 1920s provincial girl who assumes a false identity to join the privileged world of her relatives. Everything works out – until it doesn’t:

Unwittingly Christine revealed the gaps in her worldliness. She didn’t know that polo was played on horseback, wasn’t familiar with common perfumes like Coty and Houbigant, didn’t have a grasp of the price range of cars; she’d never been to the races. Ten or twenty gaucheries like that and it was clear she was poorly versed in the lore of the chic. And compared to a chemistry student’s her schooling was nothing. No secondary school, no languages (she freely admitted she’d long since forgotten the scraps of English she’d learned in school). No, something was just not right about elegant Fräulein von Boolen, it was only a question of digging a little deeper […].

After Christine is unmasked, she returns to her previous life, but this time she's angry and bitter, aware now of the existence of another world, one lost through her own irresponsibility. Most of the book is about the girl's mental unravelling. When I first read it, I thought the ending, with its suicidal thoughts and slide into serious criminality, was melodrama for its own sake. Now, I think Zweig was on to something.

In Zweig’s book, the root of the problem is the anti-heroine’s discovery that what is top-notch in her village isn’t held in the same esteem elsewhere: “[W]hat was the showpiece of her wardrobe [a green rayon blouse] yesterday in Klein-Reifling seems miserably flashy and common to her now.” My friend recounted a similar experience cast in academic terms. She slid through high school and college without any struggle. Upon starting her MA, she had difficulty keeping up with her cohort. Three years after starting a doctoral program, her dissertation proposal was rejected, with the evaluators citing lack of languages as one of the reasons. This last is interesting because it connects to Zweig’s list of faults that expose Christine’s real social standing. In the case of my friend, her background became equivalent to Christine’s blouse: haute couture in one locale and unsophisticated in another.

For both the Bright Young Things of Zweig’s world and my own generation more generally, there is a question over culpability. In the book, Christine’s aunt agonizes over the girl’s uncouth manners and dress, repeatedly reminding herself “how was she to know?” My friend and her parents assumed that “the system” would take care of her. Sure, the public school wasn’t great, but it also wasn’t too terrible and everyone else was going there. The college was the best and most expensive private college in the region, so surely the faculty and advisors there knew what they were doing.

This is not to say that there weren’t red flags if one knew where to look. For example, the college offered only two years of accredited foreign language training. My friend acknowledged this contributed to the problems with her first proposal. However, my friend also admitted that she hadn’t considered the curriculum when she picked the college. Her focus had been purely social. Consequently, the truth is that she chose her path at the moment she picked her values.  The fact that her measurement system didn’t hold up well to broader scrutiny is her fault.

Zweig's anti-heroine contemplates suicide in response to her inadequacy; kangaroo courts, or cancel culture, are more my friend's style. Not much has changed over the course of a century. In Zweig's time, self-destruction was the default choice; in ours, destruction of others is the preferred MO. The source of the anger, though, is the same: envy stemming from inadequacy. Unlike the Bright Young Things, however, the modern generations chose their inadequacy.


[1] Much of the crucial action is set in a Swiss hotel, and Wes Anderson has said that the book was one of his inspirations for The Grand Budapest Hotel.

Why snipers have spotters

Imagine two highly skilled snipers choosing and eliminating targets in tandem. Now imagine I take away one of their rifles, but leave him his scope. How much do you expect their effectiveness to decrease?

Surprisingly, there is a strong case that this will actually increase their combined sniping competence. As an economist would point out, this stems from specialization: the sniper sacrifices total situational awareness to improve accurate intervention, and the spotter sacrifices the ability to intervene to improve awareness and planning. Together, they can push out beyond the production possibilities curve.

It is also a result of communication. Two independent snipers pick their own shots, and may over-kill a target or miss a pressing threat. By explicitly designating roles, the sniper can depend on the spotter for guidance, and the two-person system gives both parties more information than the sum of their separate, uncoordinated knowledge.

There are also long-term positive impacts, which likely escape an economist's models, from switching between the two roles or from an apprenticeship model. Eye fatigue that limits accuracy, and mental fatigue that may result from constant awareness, can be eliminated by taking turns. Also, if a skilled sniper has a novice spotter, the spotter observes the sniper's tactics and can assimilate best practices, and the sniper, having previously worked as a spotter, can be more productively empathetic. The system naturally encourages learning and improvement.

I love the sniper-spotter archetype, because it clarifies the advantages of:

  • Going from zero to one: Between two independent snipers, there are zero effective lines of communication. Between a sniper and a spotter, there is one. This interaction unlocks potential held in both.
  • More from less: Many innovate by adding new things; however, anti-fragile innovations are more likely to come from removing unnecessary things than by adding new ones.
  • Not the number of people, the number of interactions: Interactions have advantages (specialization, coordination) and disadvantages (communication friction, diluted individual responsibility for decisions). Scrutinize which interactions you want on your teams and which to avoid; the short sketch after this list shows how quickly the number of possible interactions grows.
  • Isolation: Being connected to everyone promotes noise over signal. It also promotes focusing on competitors over opportunities and barriers over permissionless innovation.
  • Separate competencies, shared goals and results: To make working together worth it, define explicit roles that match each individual’s competencies. Then, so long as you have vision alignment, all team members know what they are seeking and how they will be depended upon to succeed.
  • Iterative learning and feedback: Systems that promote self-improvement of their parts outperform systems that do not. Also, at the end of the day, education comes from experimentation and observation of new phenomena, balance on the edge between known and unknown practices.
  • Establish 'common knowledge': Communication failures and frictions often occur because independent people assume others share the same set of 'common knowledge'. If you make communication the root of success, then so long as the group is small enough to actually have, and know it has, the same set of 'common knowledge', it can act confidently on those shared assumptions.
  • Delegation as productivity: Recognize that doing more does not mean more gets done. Without encouraging slacking off, explicitly rewarding individuals for choosing the right things to delegate and executing effectively will get more from less.
  • Cheating Goodhart: Goodhart’s Law states that the metric of success becomes the goal. If you make the metric of success joint, rather than individual, and shape its incentives to match your vision, your metrics will create an atmosphere bent on achieving your actual goals.
  • Leadership is empowerment: Good leaders don’t tell people what to do, they inform, support, listen, and match people’s abilities and passions to larger purpose.
  • Smallness: Small is reactive, flexible, cohesive, connected, fast-moving, accurate, stealthy, experimental, permissionless, and, counterintuitively, scalable.
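Here is the sketch promised above, a back-of-the-envelope illustration in Python (the team sizes are made up) of why the number of interactions, not the number of people, is the thing to watch: a fully connected group of n people has n(n-1)/2 potential lines of communication to maintain.

```python
# Potential pairwise lines of communication in a fully connected group of n people.
# Illustrative only: a sniper-spotter pair has exactly one line to manage.
def pairwise_interactions(n: int) -> int:
    return n * (n - 1) // 2

for n in (2, 3, 5, 10, 50):
    print(f"{n:>3} people -> {pairwise_interactions(n):>5} possible pairwise interactions")
```

A pair has one line; a ten-person "fully connected" team has forty-five; a fifty-person department has over a thousand, which is why explicitly choosing which interactions to keep (and which to drop) does more for a team than adding heads.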

My most recent encounter with "sniper and spotter" is in my sister's Montessori classroom (ages 3-6). She is an innovative educator who noticed that her public school position was rife with top-down management, politics, and perverse incentives, and was not finding systems that promote curiosity or engagement. She has applied the "sniper and spotter" model after noticing that children thrive best either in one-on-one, responsive guidance, where the instructor is totally dedicated to the student, or when left to their own devices in a materials-rich environment, engaging in discovery (or working with other children, or even teaching what they have already learned to newcomers). However, believe it or not, three-year-olds can often cause disruptions or even pose physical threats if left totally without supervision.

She therefore promotes a teaching model with two teachers: one watches for children's safety and minimizes disruptions, which frees the other to rove student-to-student and give either individual or very-small-group attention. The two teachers communicate to plan next steps and to 'spot' the children who most need intervention. This renders 'class size' a stupid metric: what matters is how much one-on-one guidance plus permissionless discovery a child engages in. It is also a "barbell" strategy: instead of wallowing in the mediocrity of "group learning," children get the best of the two extremes, total attention and just-enough-attention-to-remain-safe.

PS: On Smallness, Jeff Bezos has promised $1 billion to support education innovation. Despite starting before my sister, he has so far opened as many classrooms as she has: one. As the innovator behind the 'two-pizza meeting', I wish Bezos would start with many small experiments in education rather than big public dedications, so he could nurture innovation and select strategies for success.

I would love to see more examples of “sniper and spotter” approaches in the comments…but no sniping please 🙂

Three Roads to Racism

Are you a racist?

Anyone can feel free to answer this question any way it/she/he wishes; they wish. And that’s the problem. In this short essay, I aim first to do a little vocabulary house-keeping. Second, I try to trace three distinct origins of racism. I operate from thin authority. My main sources are sundry un-methodical readings, especially on slavery, spread over fifty years, and my amazingly clear recollection of lectures by my late teacher at Stanford, St. Clair Drake, in the sixties. (He was the author of Black Metropolis among other major contributions.) I also rely on equally vivid memories of casual conversations with that master storyteller. Here you have it. I am trying to plagiarize the pioneer St. Clair Drake. I believe the attempt would please him though possibly not the results.

Feel free to reject everything I say below. If nothing else, it might make you feel good. If you are one of the few liberals still reading me, be my guest and get exercised. Besides, I am an old white man! Why grant me any credence?

That's on the one hand. On the other hand, in these days (2020) obsessed with racism, I never see or hear the basic ideas about racism set down below expressed in the media, in reviews, or online, although they are substantially more productive than what's actually around. I mean that they help one arrive at a clearer and richer understanding of racism.

If you find this brief essay even a little useful, think of sharing it. Thank you.

Racism

"Racism" is a poor word because today it refers at once to thoughts, attitudes, and feelings, and also to actions and policies. Among the latter, it covers both individual actions and collective actions, all the way up to formal policies. Some of those policies may be included in so-called "systemic racism," about which I wrote in my essay "Systemic Racism: a Rationalist Take."

The mishmash between what's in the heads of people and what they actually do is regrettable on two grounds. First, the path from individual beliefs, thoughts, and attitudes, on the one hand, to individual action, on the other, is not straightforward. My beliefs are not always a great predictor of my actions because reality tends to interfere with pure intent.

Second, collective action and, a fortiori, policies rarely look like the simple addition of individual actions. People act differently in the presence of others than they do alone. Groups (loosely defined) are capable of greater invention than are individuals. Individuals in a group both inspire and censor one another; they even complete one another's thoughts; some often give others the courage to proceed further.

This piece is about racism as understanding, attitude, and collection of beliefs, which predispose individuals and groups to think of others as inferior and/or unlikable on the basis of some physical characteristics. As I said, racism so defined can be held individually or collectively. Thus, this essay is deliberately not about the actions, programs, or failures to act inspired by racism, the attitude. That's another topic others can write about.

Fear and loathing of the unknown

Many people seem to assume that racial prejudice is a natural condition that can be fought in simple ways. Others, on the contrary, see it as ineradicable. Perhaps it all depends on the source of racism. The word means prejudgment about a person's character and abilities based on persistent physical traits that are genetically transmitted. Thus, dislike of that other guy wearing a ridiculous blue hat does not count; neither does hostility toward one sex or the other (or the other?). I think both assumptions above, racism as natural and as ineradicable, are partly but only partly true. My teacher St. Clair Drake explained to me once, standing in the aisle of a Palo Alto bookstore, that there are three separate kinds of racial prejudice, of racism, with distinct sources.

The first kind of racism is rooted in fear of the unknown or of the unfamiliar. This is probably hard-wired; it’s human nature. It would be a good asset to have for the naked, fairly slow apes that we were for a long time. Unfamiliar creature? Move away; grab a rock. After all, those who look like you are usually not dangerous enemies; those who don’t, you don’t know and why take a risk?

Anecdote: A long time ago, I was acting the discreet tourist in a big Senegalese fishing village. I met a local guy about my age (then). We had tea together and talked about fishing. He asked me if I wanted to see his nearby house. We walked for about five minutes to a round adobe construction covered in thatch. He motioned me inside, where it was quite dark. A small child was taking a nap on a stack of blankets in the back. Sensing a presence, the toddler woke up, opened his eyes, and began screaming at the top of his lungs. The man picked him up and said, very embarrassed, "I am sorry, my son has never seen a toubab before." ("Toubab" is the local, not unfriendly, word for light-skinned people from elsewhere.)

Similarly, Jared Diamond recounts (and shows corresponding pictures in his book The World Until Yesterday: What Can We Learn from Traditional Societies? Viking: New York) how central New Guinea natives became disfigured by fear at their first sight of a white person. Some explained later that they thought they might be seeing ghosts.

Night terrors

The second distinctive form of racism comes simply from fear of the dark, itself rooted in dread of the night. It's common to all people, including dark-skinned people, of course. It's easy to understand once you remember that beings who were clearly our direct ancestors, people whose genes are in our cells, lived in fear of the darkness night after night for several hundreds of thousands of years. Most of their fears were justified because the darkness concealed lions, leopards, hyenas, bears, tigers, saber-toothed cats, wolves, wild dogs, and other predators, themselves with no fear of humans. The fact that the darkness of night also encouraged speculation about other hostile beings, varied spirits that did not really exist, does not diminish the impact of this incomplete zoological list.

As is easy to observe, the association dark = bad is practically universal. Many languages have an expression equivalent to "the forces of darkness." I doubt that any (though I can't prove it right now) says "the forces of lightness" to designate something sinister. Same observation with "black magic," and with disappearing into a "black hole." Similarly, nearly everywhere, uneducated people, and some of their educated betters, express some degree of hostility, mixed with contempt, for those, in their midst or nearby, who are darker than themselves. This is common among African Americans, for example. (Yes, I know, it may have other sources among them, specifically.)

This negative attitude is especially evident in the Indian subcontinent. On a lazy day, thirty years ago in Mumbai, I read several pages of conjugal want ads in a major newspaper. I noticed that 90% of the ads for would-be brides mentioned skin color in parallel with education and mastery of the domestic arts. (The men’s didn’t.) A common description was “wheatish,” which, I was told by Indian relatives, means not quite white but pretty close. (You can’t lie too shamelessly about skin tone because, if all goes well, your daughter will meet the other side in person; you need wiggle room.) In fact, the association between skin color and likability runs so deep in India that the same Sanskrit word, “varna,” designates both caste and color (meaning skin complexion). And, of course, there is a reason why children everywhere turn off the light to tell scary stories.

In a similar vein, the ancient Chinese seem to have believed that aristocrats were made from yellow soil while commoners were made from ordinary brown mud. (Cited in Harari, Yuval N. 2015. Sapiens: A Brief History of Humankind. Harper: New York.)

Some would argue that these examples represent ancestral fears mostly left behind by civilized, urban (same thing) people. My own limited experience, both personal and from observation, is that it's not so. It seems to me that fear of the dark is the first or second page of the book of which our daily street-lit, TV-illuminated bravado is the cover. Allow a couple of total power stoppages (as Californians experienced recently) and it's right there, drilling into our vulnerable minds.

Both of these first two kinds of negative feelings about that which is dark can be minimized, the first through experience and education: No, that pale man will not hurt you. He might even give you candy, or a metal ax. The second source of distaste for darkness has simply been pushed to a kind of secondary relevance by the fact that today, most people live most of the time in places where some form of artificial lighting is commonplace. It persists nevertheless where it is shored up by a vast and sturdy institutional scaffolding, as with the caste system of largely Hindu India. And it may always be present somewhere in the back of our minds, but mostly, we don't have a chance to find out.

The third source of hostility toward and contempt for a dark appearance is both more difficult to understand and harder to eliminate or even to tamp down. Explaining it requires a significant detour. Bear with me, please.

The origins of useful racism

Suppose you believe in a God who demands unambiguously that you love your “neighbor,” that is, every human being, including those who are not of your tribe, even those you don’t know at all. Suppose further that you are strongly inclined toward a political philosophy that considers all human beings, or at least some large subcategory of them, as fundamentally equal, or at least equal in rights. Or imagine rather that you are indifferent to one or both ideas but that you live among neighbors 90% of whom profess one, and 80% both beliefs. They manifest and celebrate these beliefs in numerous and frequent public exercises, such as church services, elections, and civic meetings where important decisions are launched.

Now a second effort of imagination is required. Suppose also that you or your ancestors came to America from the British Isles, perhaps in the 1600s, perhaps later. You have somehow acquired a nice piece of fertile land, directly from the Crown or from a landed proprietor, or by small incremental purchases. You grow tobacco, or indigo, or rice, or (later) cotton. Fortune does not yet smile on you because you confront a seemingly intractable labor problem. Almost everyone else around you owns land and thus is not eager to work for anyone else. Just about your only recourse is the temporarily un-free young men who arrive periodically from old Britain, indentured servants (sometimes also called "apprentices"). Many of them are somewhat alien because they are Irish, although most of them speak English, or some English. Moreover, a good many are sickly when they land. Even the comparatively healthy young men do not adjust well to the hot climate. They have little resistance to local tropical diseases such as malaria and yellow fever. Most don't last in the fields. You often think they are not worth the trouble. In addition, by contract or by custom, you have to set them free after seven years. With land so attainable, few wish to stick around and earn a wage from you.

One day you hear that somewhere, not too far, new, different kinds of workers are available that are able to work long days in the heat and under the sun and who don’t succumb easily to disease. You take a trip to find out. The newcomers are chained together. They are a strange dark color, darker than any man you have seen, English, Irish, or Indian. Aside from this, they look really good as field hands go. They are muscular, youngish men in the flower of health. (They are all survivors of the terrible Atlantic passage and, before that, of some sort of long walk on the continent of Africa to the embarkation point at Goree, Senegal, or such. Only the strong and healthy survived such ordeals, as a rule.) There are a few women of the same hue with them, mostly also young.

Those people are from Africa, you are told. They are for outright sale. You gamble on buying two of them to find out more. You carry them to your farmstead and soon put them to work. After some confusion because they don’t understand any English, you and your other servants show them what to do. You are soon dazzled by their physical prowess. You calculate that one of them easily accomplishes the tasks of two of your indentured Irish apprentices. As soon as you can afford it, you go and buy three more Africans.

Soon, your neighbors are imitating you. All the dark-skinned servants are snapped up as fast as they are landed. Prices rise. Those people are costly but still well worth the investment because of their superior productivity. Farmers plant new, labor-intensive, high-yield crops, such as cotton, that they would not have dared to invest in with the old kind of labor. To make the new labor even more attractive, you and your neighbors quickly figure out that it's also capital because it can be made to be self-reproducing. The black female servants can both work part of the time and make children who are themselves servants that belong to you by right. (This actually took some time to work out legally.)

Instrumental severity and cruelty

You are now becoming rich, amassing tools and utensils and more land. All is still not completely rosy on your plantation, though. One problem is that not all of your new African servants are docile. Some are warriors who were captured on the battlefield in Africa, and they are not resigned to their subjection. A few rebel or try to run away. Mostly they fail, but their doomed attempts become the stuff of legend among the other black servants, feeding a chronic spirit of rebelliousness. Even in the second and third generation away from Africa, some black servants are born restive or sullen. And insubordination is contagious. At any rate, there are enough free white workers in your vicinity for some astute observers among your African servants to realize that they and their companions are treated comparatively badly, that a better fate is possible. Soon, there are even free black people around to whom they unavoidably compare themselves. (This fact deserves a full essay in its own right.)

To make a complex issue simple: Severity is necessary to keep your workforce at work. Such severity sometimes involves brutal public punishment for repeat offenders, such as whippings. There is a belief going around that mere severity undermines the usefulness of the workforce without snuffing out its rebelliousness. Downright cruelty is sometimes necessary, the more public, the better. Public punishment is useful to encourage more timid souls to keep toeing the line.

And then, there is the issue of escape. After the second generation, black slaves are relatively at home where they work. Your physical environment is also their home where some think they can fend for themselves. The wilderness is not very far. The slaves also know somehow that relatively close by are areas where slavery is prohibited or not actively enforced by authorities. It’s almost a mathematical certainty that at any time, some slaves, a few slaves, will attempt escape. Each escape is a serious economic matter because, aside from providing labor, each slave constitutes live capital. Most owners have only a few slaves. A single escape constitutes for them a significant form of impoverishment. Slaves have to be terrorized into not even wanting to escape.

Soon, it’s well understood that slaves are best kept in a state of more or less constant terror. It’s so well understood that local government will hang your expensive slave for rebellion whether you like it or not.

Inner contradiction

In brief, whatever their natural inclination, whatever their personal preference, slave owners have to be systematically cruel. And it's helpful for them to also possess a reputation for cruelty. This reputation has to be maintained and reinforced periodically by sensationally brutal action. One big problem arises from such a policy of obligatory and vigilant viciousness: it's in stark contradiction with both your religious and your political ideas, which proclaim that one must love others and that all humans are at least potentially equal (before God, if nowhere else). And if you don't hold such beliefs deeply yourself, you live among people who do, or who profess to. And, by a strange twist of fate, the richest, best-educated, probably most influential strata of your society are also those most committed to those ideals. (They are the class that would eventually produce George Washington and Thomas Jefferson.)

The personal psychological tension between the actual and highly visible brutal treatment of black slaves and prevailing moral values is technically a form of "dissonance." It's also a social tension; it expresses itself collectively. Those actively involved in mistreating slaves are numerous. In vast regions of the English colonies, and later of the United States, the contrast between action and beliefs is thus highly visible to everyone, obvious even to many who are not themselves actively involved. It becomes increasingly difficult over time to dismiss slavery as a private economic affair because, more and more, political entities make laws actively supporting slavery. There are soon laws about sheltering fugitives, laws regulating the punishment of rebellious slaves, laws about slave marriage, and laws restricting the freeing of slaves ("manumission"). Slavery thus soon enters the public arena. There are even laws to control the behavior of free blacks, those who merely used to be slaves.

Race as legal status

Special rules governing free blacks constitute an important step because, for the first time, they replace legal status ("slave," "chattel") with race (dark skin, certain facial features, African ancestry). So, with the advent of legislation supporting slavery, an important symbolic boundary is crossed. The laws don't concern only those defined by their legal condition as chattel property but also others, defined mostly or largely by their physical appearance and by their putative ancestry in Africa. At this point, every white subject, then every white citizen, has become a participant in a struggle that depends on frankly racial categories, by virtue of his belonging to the polity. Soon the social racial category "white" comes to stand for the legal status "free person," "non-slave."

Then, at this juncture, potentially every white adult becomes a party to the enforcement of slavery. For almost all of them, this participation, however passive, is in stark contradiction with both religious and political values. But ordinary human beings can only live with so much personal duplicity. Some whites will reject black slavery, in part or in whole. Accordingly, it’s notable that abolitionists always existed and were vocal in their opposition to slavery in the English colonies, and then in the United States, even in the deepest South. Their numbers and visibility never flagged until the Civil War.

How to reduce tension between beliefs and deeds

There are three main paths out of this personal moral predicament. They offer different degrees of resistance. The first path is to renounce one's beliefs, those that are in contradiction with the treatment of one's slaves. A slave owner could adjust by becoming indifferent to the Christian message, or skeptical of democratic aspiration, or both. No belief in the fraternity of Man or in any sort of equality between persons? Problem solved. This may be relatively feasible for an individual alone. In this case, though, the individuals concerned, the slave owners and their slave drivers, exist within a social matrix that frequently, possibly daily, reinforces the dual religious command to treat others decently and the political view that all men are more or less equal. Churches, political organizations, charity concerns, and gentlemen's clubs stand in the way. To renounce both sets of beliefs, however attractive this might be from an individual standpoint, would turn one into a social pariah. Aside from the personal unpleasantness of such a condition, it would surely have adverse economic repercussions.

The second way to free oneself from the tension associated with the contrast between humane beliefs, on the one hand, and harsh behavior, on the other, is simply to desist from the latter. Southern American chronicles show that a surprisingly large number of slave owners chose that path at any one time. Some tried more compassionate slave driving, with varying degrees of economic success. Others, who left major traces for documentary reasons, took the more radical step of simply freeing some of their slaves when they could, or when it was convenient. Sometimes they freed all of their slaves, usually at their death, through their wills, for example. The freeing of slaves, manumission, was so common that the rising number of free blacks was perceived as a social problem in much of the South. Several states actually tried to eliminate the problem by passing legislation forbidding the practice.

Of course, the fact that so many engaged in such an uneconomic practice demonstrates in itself the validity of the idea that the incompatibility between moral convictions and slave-driving behavior generated strong tensions. One should not take this evidence too far, however, because there may have been several reasons to free slaves, not all rooted in this tension. (I address this issue briefly in "Systemic Racism….")

The easy way out

The third way to reduce the same tension, the most extreme and possibly the least costly, took two steps. Step one consisted in consciously recognizing this incompatibility; step two was to begin mentally separating the black slaves from humanity. This would work because all your bothersome beliefs, religious and political, applied explicitly to other human beings. The less human the objects of your bad treatment, the less the treatment contravened your beliefs. After all, while it may be good business to treat farm animals well, there is not much moral judgment involved there. In fact, not immediately but not long after the first Africans landed in the English colonies of North America, there began a collective endeavor aiming at their conceptual de-humanization. It was emphatically a collective project, addressing ordinary people, including many who had no contact with black slaves or with free blacks. It involved the universities and intellectual milieus in general with a vengeance (more on this later).

Some churches also lent a hand by placing the sanction of the Bible in the service of the general idea that God himself wanted slaves to submit absolutely to the authority of their masters. To begin with, there was always the story of Noah's three sons. The disrespectful one, Ham, cursed by Noah, was said to be the father of the black race, on the thin ground that his name means something like "burnt." However, it's notable that the tension never disappeared, because other churches, even in the Deep South, continued their opposition to slavery on religious grounds. The Quakers, for example, seldom relented.

Their unusual appearance and the fact that the white colonists could not initially understand their non-European languages (plural) were instrumental in the collective denial of full humanity to black slaves. In fact, the arriving slaves themselves often did not understand one another. This is but one step from believing that they did not actually possess the power of speech. Later, as the proportion of American-born slaves increased, they developed what is known technically as a creole language to communicate with one another. It was recognizably a form of English, but probably not understood by whites unless they tried hard, and most had few reasons to try at all. Language was not the only factor contributing to the ease with which whites, troubled by their ethical beliefs, denied full humanity to black slaves. Paradoxically, the degrading conditions in which the slaves were held must also have contributed to the impression of their sub-humanity.

Science enlisted

The effort to deny full humanity to people of African descent continued for two centuries. As the Enlightenment reached American shores, the focus shifted from Scripture to Science (pseudo-science sometimes, but not always). Explorers' first reports from sub-tropical Africa seemed to confirm the soundness of the view that black Africans were not completely human: there were no real cities there, little by way of written literature, no search for knowledge recognizable as science, seemingly no schools. The art that attentive visitors did report on did not seem sufficiently realistic to count as art by 18th- and 19th-century standards. I think that no one really paid attention to the plentiful African artistic creativity, this unmixed expression of humanity if there ever was one, until the early 1900s. Instead, African art was dismissed as crude stammering in the service of inarticulate superstitions.

The effort to harness science in service of the proposition of African un-humanity easily outlasted the Civil War and even the emancipation of slaves in North America. After he published the Origin of Species in 1859, Darwin spent much of the balance of his life, curiously allied with Christians, combating the widespread idea that there had been more than one creation of humanoids, possibly one for each race. The point most strongly argued by those holding to this view was that Africans could not possibly be the brothers, or other close relatives, of the triumphant Anglo-Saxons. The viewpoint was not limited to the semi-educated by any means. The great naturalist Louis Agassiz himself believed that the races of men were pretty much species. In support, he presented the imaginary fact that the mating of different races, like mating between horses and donkeys, seldom produced fertile offspring. (All recounted in Desmond, Adrian, and James Moore. 2009. Darwin's Sacred Cause: How a Hatred of Slavery Shaped Darwin's Views on Human Evolution. Houghton: New York.)

Differential persistence

Those three main roads to racism are unequal in their persistence. Dislike of strangers tends to disappear of its own accord. Either the frightening contact ceases or it is repeated. In the first case, the dislike turns irrelevant and accordingly becomes blurred. In the second case, repeated experience will often demonstrate that the strangers are not dangerous, and the negative feelings subside of their own accord. If the strangers turn out to be dangerous overall, it seems to me that negative feelings toward them do not constitute racism, in spite of the fact that the negativity may occasionally be unfair to specific, individual strangers.

Racial prejudice anchored in atavistic fear of the night may persist in the depths of one's mind, but it, too, does not survive experience well. Exposed to the fact that dark people are not especially threatening, many will let the link between darkness and fear or distaste subside in their minds. For this reason, it seems to me that the great American experiment in racial integration of the past sixty years was largely successful. Many more white Americans today personally know African Americans than was the case in 1960, for example. The black man whose desk is next to yours, the black woman who attends the same gym as you week after week, the black restaurant-goers at your favorite eating place, all lose their aura of dangerousness through habituation. Habituation works both ways, though. The continued over-representation of black men in violent crime must necessarily perpetuate in the minds of all (including African Americans) the association between danger and a dark complexion.

The road to racism based on reducing the tension between behavior and beliefs via conceptual de-humanization of the victims has proved especially tenacious. Views of people of African descent, but also of other people of color, as less than fully human persist or re-emerge frequently because they have proved useful. This approach may have saved the important part of the American economy based on slavery until war freed the slaves, without removing the de-humanization. As many leftists claim (usually without evidence), this was important to the later fast development of the American economy; cotton production in the South was at its highest in the years right before the Civil War. In the next phase, the view of black Americans as less than human served well to justify segregation for the next hundred years. It was thus instrumental in protecting poor whites from wage competition with even poorer African Americans.

In the second half of the 19th century and well into the 20th, the opinion that Africans – and other people of color – were not quite human also strengthened the European colonial enterprise in many places. (The de-humanization of colonial people was not inevitable though. The French justification of colonialism – “France’s civilizing mission” – is incompatible with this view. It treated the annexed people instead as immature, as infantile, rather than as subhuman.)

This third road to racism tends to last because it's a collective response to a difficult situation that soon builds its own supporting institutions. For a long time, in America and in the West in general, it received some assistance from the new, post-religious ideology, science. Above all, it's of continuing usefulness in a variety of situations. This explanation reverses the naive, unexamined explanation of much racism: that people act in cruel ways toward others who are unlike them because they are racist. It claims, rather, that they become racist in order to continue acting in cruel ways toward others, contrary to their own pre-existing beliefs that enjoin them to treat others with respect. If this perspective is correct, we should find that racism is the more widespread and the more tenacious the more egalitarian and the more charitable the dominant culture in which it emerges.

AOC doesn’t understand Christianity

I believe I wasted a lot of time some years ago arguing about whether Venezuela under Hugo Chávez was a democracy or not. The difficulty with this kind of conversation is that people can have very different views on what constitutes a "democracy". That is part of the reason why North Korea can call itself a "democratic republic". However, when somebody claims something about Christianity, and especially about what the Bible says, I feel more comfortable debating.

I understand that it is like flogging a decomposing horse, but some months ago Representative Alexandria Ocasio-Cortez supposedly called out the hypocrisy of religious conservatives who use their faith to justify bigotry and discrimination in the United States. Her speech can be watched here. I believe her point is this: conservative Christians only care about religion insofar as it supports their so-called "bigotry." AOC believes that Christians should support socialism, because after all, that's what Jesus would do.

AOC says some true things: sadly, the Christian Scriptures have been distorted many times throughout American history to defend political agendas they were never meant to defend. AOC could go even further on that if she wanted: the Christian Scriptures were completed almost two thousand years ago, and they simply don't speak specifically to many of the political issues we have today. One can say they offer principles of conduct, but it's really up to us to figure out how these apply to the concrete situations we find today. In this light, using Scripture to support political agendas can be not only morally wrong, but also naive and misguided.

AOC is also right when she says that human life is special (although I would question if “holy” applies) and that we should “fight for the least of us”. All these statements and more are true!

What AOC really doesn't seem to understand (and frankly, this is quite scary) is that Christianity can't be forced upon people. Yes, biblically speaking, we are to care for the poor. However, the Bible is addressing us as individuals. There is absolutely nothing in the Bible that says we are to provide medical care, via government fiat, for those who cannot afford it. Actually, there is nothing in the Bible that says I can force people to act as Christians when they are not.

One of the great gifts of modernity is the separation of church and state. I would submit that this separation was present in Christianity from the start, but the concept was so radically different from everything people were used to that it took some centuries for it to be put into practice, and we are actually still working on it. One of the things we realized in modern times is that we can't force people to be Christians via government. And in her speech, AOC is trying to undo that. She wants government to force people to do charitable work that can only be done if it is their choice.

As a Christian, I would say this: I would like to diminish suffering in this world, and this is exactly why I'm against the socialism AOC supports. One doesn't have to be a genius to realize that the poor are much better off in countries that move further away from what Ms. Ocasio-Cortez supports. It's not simply a matter of wanting to help the poor, but of doing it in an efficacious way. And also: I want to invite people to look to the example of Jesus, who being rich made himself poor for the sake of many. I do hope that more and more people might have their lives changed by Jesus. But I don't want to force anybody to do that. I want to invite people to consider what Scripture says, and to make their own choice to change their lives. As for now, I believe that capitalism is the most efficient way humanity has discovered so far to help the poor.