A warm NOL welcome to Vishnu Modur

Folks, as you have probably guessed by now, NOL has a new blogger. His name is Vishnu, and you can read about him right here:

Vishnu Modur holds a Ph.D. in molecular biology and works as a cancer biologist at Cincinnati Children’s Hospital. He has diverse passions outside the lab. He is deeply interested in Indic cultural anthropology, Indic philosophy, political philosophy, and the philosophy of science. He blogs about his scientific research on Medium and writes about history, politics, and culture on NOL. He quips that, as a resident alien in the United States, he can offer a unique perspective, drawing on his resident and sometimes his ‘alien’ viewpoints on several issues.

Check out his posts so far, and don’t forget to say ‘hi’ in the comments.

‘South Asian’ identity signals alignment without being aligned to anything specific

Of late, a growing number of Indian-Americans look to assert a South Asian identity for most of their sociopolitical and cultural expressions, even though actual residents of ‘South Asia’ don’t claim this identity in any way, at home or abroad. I realize that second-generation Indian-Americans embrace ‘South Asian’ forums in reaction to various domestic conditions. However, they ignore the polysemy of the term ‘South Asia’ when they project it internationally, for example, by expressing ‘South Asian’ pride over Kamala Devi Harris’s historic election to the Vice Presidency instead of just Indian-American pride. Of course, I’m not talking about African-American pride here; it is beyond the purview of my discussion.

As I understand it, the increasing application of the term ‘South Asia’—much like ‘the Middle East’—precludes a nuanced perception of the particular countries that make up the region. It permits Americans to perceive the region as if it were a monolith. Although the United States looms large in the Indian imagination, the image of India, as it turns out, is not very clear to the average citizen of the United States, not even among second-generation Indian-Americans, as I see it. Language enrollment in US universities is a decent metric for gauging American curiosity about a particular region. It turns out that around seven times more American students study Russian than all the Indian languages combined. The study of India compares unfavorably with the study of China in nearly every higher-education metric, and surprisingly, it also fares poorly compared to Costa Rica! As an aside, for an alternative to CNN or BBC on ‘South Asian’ geopolitics, and on India and her neighborhood generally, there is WION (“World is One” News – a take on the Indic vasudhaiva kutumbakam). I highly recommend WION’s Gravitas segment for an international audience.

Back to the central question: what is ‘South Asia,’ and why don’t Indians prefer this tag?

For decades, the United States hyphenated its India policy, balancing every action involving New Delhi with a counterbalancing move toward Islamabad. So much so that the American focus on Iranian and North Korean nuclear proliferation stood in total contrast to the whitewashing of Pakistan’s private A.Q. Khan network for nuclear proliferation. Furthermore, in a survey conducted by the Chicago Council on Global Affairs that gauges how Americans perceive other countries, India has hovered between forty-six and forty-nine on a scale from zero to one hundred since 1978, reflecting its reputation as neither an ally nor an adversary. With the civil-nuclear deal, the Bush administration discarded the hyphenation construct and eagerly pursued an independent program between India and the United States. Still, in 2010, only 18 percent of Americans saw India as “very important” to the United States—fewer than those who felt similarly about Pakistan (19%) and Afghanistan (21%), and well below China (54%) and Japan (40%). Even though the Indo-US bilateral relationship has transformed for the better since the Bush era, the increasing use of ‘South Asia’ on various platforms by academics and non-academics alike when discussing India represents a new kind of hyphenated, or bracketed, view of India. Many Indian citizens in the US, like me, find this bracket unnecessary, especially in the present geopolitical context.

What geopolitical context? There are several reasons why South Asian identity pales in comparison to our national identities:

  1. The word ‘South Asia’ emerged exogenously, as a category coined in the United States to study the Asian continent by dividing it. It is a matter of post-Second World War scholarship of Asia from the Western perspective.
  2. Despite this scholarship, ‘South Asia’ has low intelligibility because there is no real consensus over which countries make up South Asia. SAARC includes Afghanistan among its members; the World Bank leaves it out. Some research centers include Myanmar (a province of British India until 1937) and Tibet but leave out Afghanistan and the Maldives. For its part, the UK largely uses the term ‘Asian’ rather than ‘South Asian’ for academic centers. The rest of Europe uses ‘Southeast Asia.’
  3. Besides, geopolitically, India wants to grow out of the South Asian box; it cares a lot more about the ASEAN and BRICS groupings than about SAARC.
  4. Under Modi, India has a more significant relationship with Japan than with any South Asian neighbor. With Japan and South Korea, India plans to make the Indo-Pacific a geopolitical reality.
  5. South Asia symbolizes India’s unique hegemonic fiefdom, which is viewed unfavorably by neighboring Nepal, Sri Lanka, Bangladesh, and Pakistan.
  6. According to the World Bank, South Asia remains one of the least economically integrated regions globally.
  7. South Asia is also among the least physically integrated (by road infrastructure) regions of the world, and this disconnect directly affects our politics and culture.

Therefore, ‘South Asia,’ for all its abstraction, is far from a neutral term embracing multiple cultures. It is, at best, a placeholder for structured geopolitical cooperation in the subcontinent. In socio-cultural terms, however, ‘South Asia’ used interchangeably with India signals India’s dominance over her neighborhood; in India’s eyes, conversely, it dilutes her rising aspirations on the world stage. These facts widen the gap between the intentions of the US general public (and second-generation Indian-Americans in particular) and a prouder India’s growing ambitions.

Besides, it is worth mentioning that women have already held the highest public office in Pakistan, India, Sri Lanka, Bangladesh, etc. So, as you see in this video, the Indian international actress Priyanka Chopra tries her best to be diplomatic about this nebulous ‘South Asian’ pride thingy, but she falls back on the more solid identity, her Indian identity. The next time, say, a Nepalese-American does something incredible in the US and you want to find out how another Nepali feels about this achievement, as a matter of experiment, refer to the accomplishment as Nepali pride instead of South Asian pride and watch the delight on the person’s face. Repeat this with another Nepali, but this time use the ‘South Asian’ identity tag, and note the contrast in the reaction.

Post-Mortem

Mr Trump is practically gone and he is not coming back. (For one thing, he will be too old in 2024. For another thing, see below.) The political conditions that got such a preposterous candidate elected in 2016, however, don’t look like they are going away. (I hope I am wrong.) A large fraction of Americans will continue to be ignored from an economic standpoint, as well as insulted daily by their betters. Four years of insults thrown at people like me, and the hysterical outpouring of contempt by liberal media elites in the last days of the Trump administration, are not making me go away. Instead, they will cement my opposition to their vision of the world and to their caste behavior. I would bet dollars against pennies that a high proportion of the 74 million+ who voted for Mr Trump in 2020 feel the same. (That’s assuming that’s the number who voted for him; I am not sure of it at all. It could be more. Currently, with the information available, I vote 60/40 that the election was not – not – stolen.)

I never liked Trump the man, for all the obvious reasons, although I admired his steadfastness because it’s so rare among politicians. In the past two years, I can’t say I liked any of his policies, though I liked his judicial appointments. It’s just that who else could I vote for in 2016? Hillary? You are kidding, right? And in 2020, after President Trump was subjected to four years (and more) of unceasing gross abuse and of persecution guided by a totalitarian spirit, would it not have been dishonorable to vote for anyone but him? (Libertarians: STFU!)

Believe it or not, if Sen. Sanders and his 1950s ideas had not been eliminated once more in 2020, again through the machinations of the Democratic National Committee, I would have had a serious talk with myself. At least Sanders is not personally corrupt, and with a Republican Senate, we would have had a semi-paralyzed government, which would have been OK with me.

One week after the events of 1/6/21, call it “the breach” of the Capitol, many media figures continue to speak of a “coup.” Even the Wall Street Journal has joined in. That’s downright grotesque. I don’t doubt that entering the Capitol in a disorderly fashion and, for many (not all; see the videos), uninvited is illegal as well as unseemly. I am in favor of the suspects being found and prosecuted, for trespassing or something. This will have the merit of throwing some light on the political affiliation(s) of the window breakers. I still see no reason to abandon the possibility that some, maybe (maybe) in the vanguard, were Antifa or BLM professional revolutionaries. Repeating myself: Trump supporters have never behaved in that manner before. I am guessing the investigations and the prosecutions are going to be less than vigorous, precisely because the new administration will not want to know, or to have known, the details of the criminals’ identities. If I am wrong, and all the brutal participants were Trump supporters, we will know it very quickly. The media will be supine either way.

It’s absurd and obscenely overwrought to call the breaching of the Capitol on January 6th (by whomever) a “coup,” because there was never any chance that it would result in transferring control of the federal government to anyone. Develop the scenario: both chambers are filled with protesters (of whatever ilk); protesters occupy both presiding chairs and hold in their hands both House and Senate gavels. What next? Federal agencies start taking their orders from them; the FBI reports to work as usual, but only to those the protesters appoint? Then, perhaps, the Chairman of the Joint Chiefs interrupts the sketchy guy who is taking a selfie while sitting in the VP chair and says he wants to hand him the nuclear football. (Ask Nancy Pelosi, herself the perpetrator of a coup, though a small one.) If you think any of this is credible, well, think about it, think about yourself, think again. And get a hold of yourself!

That the Capitol riot was a political act is true in one way and one way only, a minor way. It derailed the electoral vote counting that had been widely described as “ceremonial.” It happened after (after) the Vice-President had declared loud and clear that he did not have the authority to change the votes. The counting resumed after only a few hours. There is no scenario, zero, under which the riot would have altered the choice of the next president. If there had been, the breach would have been a sort of coup, a weak one.

On 1/9/21, an announcer, I think it was on NPR, I hope it was on NPR, qualified the events as a “deadly” something or other. He, and the media in general, including Fox News, I am afraid, forgot to go into the details. In point of fact, five people died during the protest and part-riot of 1/6/21. One was a Capitol policeman who was hit with a fire extinguisher. As I write, there is no official allegation about who did it. There is no information about the political affiliation, if any, of the culprit(s). For sure, protesters caused none of the next three deaths, which were due to medical emergencies, including a heart attack. The fifth casualty was a protester, who was probably inside the Capitol illegally and who was shot to death by a policeman. She was definitely a Trump supporter. She was unarmed. Because of the mendacity of the language used on air, many people who are busy with their lives will think that Trump supporters massacred five people. Disgraceful, disgusting reporting; but we are getting used to it.

Today and yesterday, I witnessed a mass movement of a kind I think I have not seen in my life, though it rings some historical bells. Pundits, lawmakers, and other members of their caste are elbowing one another out of the way to be the next to make extremist pronouncements on the 1/6/21 events. Why, a journalist on Fox News, no less, a pretty blond lady wearing a slightly off-the-shoulder dress, referred to a “domestic terror attack.” With a handful of courageous exceptions, all lawmakers I have seen appearing in the media have adopted extreme vocabulary to describe what remained a small riot, if it was a riot at all. I mean that it was a small riot compared to what happened in several American cities in the past year. The hypocrisy is colossal in people who kept their mouths mostly shut for a hundred nights or more of burning of buildings, of police cars, of at least one police precinct (with people in it), and of massive looting.

It’s hard to explain how the media and the political face of America became unrecognizable in such a short time. I have two hypotheses. First, many of the lawmakers who were in the Capitol at the time of the breach came to fear for their personal safety. Four years of describing Trump supporters as Nazis and worse must have left a trace and multiplied their alarm. Except for the handful of Congressmen and women who served in the military and saw actual combat, our lawmakers have nothing in their lives to prepare them for physical danger. They mostly live cocooned lives; the police forces that protect them have not been disbanded. (What do you know?) I think they converted the abject fear they felt for a short while into righteous indignation. Indignation is more self-respecting than fear for one’s skin.

My second hypothesis to explain the repellent verbal behavior: the shameful noises I heard in the media are the manifestation of a rat race to abandon a sinking ship. Jobs are at stake, careers are at stake, cushy lifestyles are at stake. “After Pres. Trump is gone, as he surely will be soon,” the lawmakers are thinking, “there will be a day of reckoning, and a purge. I have to establish right away a vivid, clear, unforgettable record of my hatred to try to avoid the purge. No language is too strong to achieve this end.” That’s true even for Republican politicians because they too have careers. Trump cabinet members resigned for the same reason, I think, when they could simply have declared, “I don’t approve of… but I am staying to serve the people to the end.”

Along with an outburst of extremist public language, there came a tsunami of censorship by social media, quite a few cases of people getting fired merely for having been seen at the peaceful demonstration (all legal though repulsive), and even a breach of contract by a major publisher against a US Senator based solely on his political discourse (to be resolved in court). And then, there are the enemy lists aired by the likes of CNN, for the sole purpose of ruining the careers of those who served loyally in the Trump administration.

President-elect Biden called for “unity.” Well, I have never, ever seen so much unity between a large fraction of the political class – soon an absolute majority in government – the big media, and large corporations. I have never seen it, but I have read about it. Such a union constituted the political form called “corporatism.” It was the practical infrastructure of fascism.

As if political correctness had only been its training wheels, the vehicle of political censorship is speeding up. The active policing of political speech can’t be far behind. It won’t even require a revision of the federal constitution so long as private companies such as Twitter and Facebook do the dirty work. Soon, Americans will watch what they say in public. I fear that national police agencies will be turned to a new purpose. (The FBI already proved its faithlessness four years ago, anyway.) Perhaps there will be little collective cynicism involved. It’s not difficult to adopt liberalism, a self-indulgent creed. And what we understand here (wrongly) to be “socialism” only entails an endless Christmas morning. So, why not? The diabolical Mr Trump will soon be remembered as having incited some misguided, uneducated, unpolished (deplorable) Americans to massacre their legitimately elected representatives.

Incidentally, in spite of a near consensus on the matter, I have not seen or heard anything from Pres. Trump that amounts to incitement to do anything (anything) illegal. There are those who will retort that inviting his angry supporters to protest was tantamount to incitement to violence. The logic of this is clear: Only crowds that are not angry should be invited to protest. Read this again. Does it make any sense? Make a note that the constitutional propriety of Mr Trump’s belief that the election had been stolen is irrelevant here. One does not have to be constitutionally correct to have the right to protest.

Night has fallen over America. We are becoming a totalitarian society with a speed I could not have foreseen. Of course, four years of unrelenting plotting to remove the properly elected president under false pretenses paved the way. Those years trained citizens to accept the unacceptable, to be intellectually docile. Suddenly, I don’t feel safe. I am going to think over my participation in social media, both because of widespread censorship and because it now seems dangerous. As far as censorship is concerned, I tried an alternative to Facebook, “Parler,” but it did not work for me. Besides, it seems that the big corporations, including Amazon and Apple, are ganging up to shut it down. The cloud of totalitarianism gathered so fast over our heads that all my bets are off about the kinds of risks I am now willing to take. I will still consider alternatives to Facebook, but they will have to be very user-friendly and reasonably populated. (If I want to express myself in the wilderness, I can always talk to my wife.) For the foreseeable future, I will still be easy to find in the blogosphere.

Best of luck to all my Facebook friends, including to those who need to learn to think more clearly, including those whose panties are currently in a twist.

Should we scrap STEM in high school?

STEM topics are important (duh!). Finding the future scientists who will improve my health and quality of living is important to me. I want society to cast a wide net to find all those poor kids, minority kids, and girls we’re currently training to be cute who, in the right setting, could be the ones to save me from the cancer I’m statistically likely to get.

But how much value are we really getting from 12th grade? I’m pulling a bait-and-switch with the title of this post–I think we should keep the norm of teaching 9th graders basic science. But by 12th grade, are we really getting enough value to warrant the millions of hours per year of effort we demand of 16-to-18-year-olds? I’m skeptical.

There are lots of things that should be taught in school. Ask any group of people and you’ll quickly come up with a long list of sensible sounding ideas (personal finance, computer programming, economics, philosophy, professional communication, home ec., and on and on and on). But adding more content only means we do a worse job at all of it. And that means an increased chance of students simply rejecting those topics wholesale.

Society is filled with science/econ deniers of all persuasions. Anti-intellectuals have been a major constituency for at least the last decade. It’s not like these folks didn’t go to school. Someone tried to teach them. What I want to know is how things would have been different if we’d tried something other than overwhelming these people with authoritatively delivered facts (which seem to have resulted in push-back rather than enlightenment).

The last 6+ years of trying to teach economics to college kids against their will have convinced me that art (especially literature and drama) affects us much more than dissecting frogs or solving equations. And exposing kids to more literature and drama has the added benefit of (possibly) helping them develop their literacy (which we’ve forgotten is not a binary variable).

Although casting a wide net to find potential scientists is important, ultimately we only need scientific knowledge in the heads of those who do more than flip through it. But literature can help us develop empathy, and that is a mental skill we need in far more heads. I suspect that replacing a 12th-grade physics class that 98% of students forget with a literature class where you read a good book would do more to promote an enlightened society.

The short-sightedness of big C Conservatism

As we celebrate the approval of the Oxford-AstraZeneca Covid-19 vaccine, it is hard to imagine that anyone might take offense at the existence of an inexpensive, transportable solution to the pandemic. Yet this is exactly what I have encountered. A friend who is an arch-Conservative (note the capital C) responded with hostility during a discussion of the differences between the Oxford and Pfizer vaccines. The issue was that my friend couldn’t accept the scientific evidence that the Oxford vaccine is superior to the Pfizer one. He fixated on the fifteen-billion-dollar subsidy Pfizer received from the US government to create its vaccine. For the Conservative, it was as if admitting the difference between the vaccines were unpatriotic, since one had been bought by the US taxpayer. His objections were based not on scientific evidence or ideology but upon identity and background.

During the discussion, my Conservative friend brought up the Oxford team’s continuous publication of their data as if that action somehow lessened their research’s impact or validity. The final paragraph on the Oxford research team’s webpage says:

This is just one of hundreds of vaccine development projects around the world; several successful vaccines offer the best possible results for humanity. Lessons learned from our work on this project are being shared with teams around the world to ensure the best chances of success.

The implication was “well, they’re just wacko do-gooders! They’re not going to make a profit acting like that!” The idea being that legitimate scientific research bodies behave like Scrooge McDuck with their knowledge. On a side note, this type of “Conservative” mentality has greatly damaged public perception of capitalism, a topic I’ll return to at a later point.

Members of the Oxford vaccine team are assumed to be in the running for the Nobel Prize, and here the odds of winning are proportional to the speed with which the broader scientific community can check the findings. The Conservative could not overcome a mental block over the fifteen billion dollars. The difference is one of vision. To put it bluntly, Oxford is aware, as an institution, that it existed for almost nine hundred years before the creation of Pfizer and that it will probably exist nine hundred years after Pfizer is no more. Oxford wants the Nobel Prize; the long-term benefits – investment, grants, funding awards, etc. – far outweigh any one-time payout. The long-term outlook required for the pursuit of a Nobel Prize, the willingness to pass up one benefit in favor of a multitude of others, is alien to those whose focus is short-sighted, who are enticed by one-time subsidies or quick profits.

The conversation exemplified the problem that caused F.A. Hayek to write, in “Why I am not a Conservative”:

In general, it can probably be said that the conservative does not object to coercion or arbitrary power so long as it is used for what he regards as the right purposes. He believes that if government is in the hands of decent men, it ought not to be too much restricted by rigid rules. Since he is essentially opportunist and lacks principles, his main hope must be that the wise and the good will rule—not merely by example, as we all must wish, but by authority given to them and enforced by them. Like the socialist, he is less concerned with the problem of how the powers of government should be limited than with that of who wields them; and, like the socialist, he regards himself as entitled to force the value he holds on other people.

In the case of the vaccine, the Conservative I spoke with had the idea that since the government sponsored Pfizer’s version, Americans ought to accept the Pfizer vaccine placidly as their lot in life. Consequently, coercive policies, for instance refusing the AstraZeneca vaccine FDA approval (something which hasn’t occurred – yet), are acceptable. Behind this facile, even lazy, view lies an incomprehension of behaviors and mindsets calibrated for large-scale enterprises. Actions taken with long-term aims in view – in this instance, the possibility of winning a Nobel Prize – are branded as suspicious, underhanded. At an even deeper level lies a resentment of AstraZeneca’s partner: Oxford, with all of its associations.

Rather than being an aberration of big C “Conservatism,” the response to a comparison between the vaccines detailed in this anecdote conforms to Conservative ideas: narrowness of mind and small scope of vision are prized. As Hayek pointed out in 1960, these traits lead to a socio-cultural and intellectual poverty that is as poisonous as the material and moral poverty of outright socialism. My own recent conclusion is that the poverty of big C “Conservatism” might be even worse than that of socialism, because mental and socio-cultural poverty can create circumstances leading to a longer, more subtle slide into material poverty, accompanied by a growing resentment as conformity still leads to failure. When class and ideological dynamics invade matters to the point that scientific evidence is interpreted through political identities, we face a grave threat to liberty.

Nightcap

  1. Merry Christmas!

Disruption arises from Antifragility

One of my favorite classics about why big businesses can’t always innovate is Clayton Christensen’s The Innovator’s Dilemma. It is one of the most misunderstood business books, since its central concept–disruption–has been misquoted and then popularized. Take the recent post on Investopedia that says, in its second sentence, that “Disruptive technology sweeps away the systems or habits it replaces because it has attributes that are recognizably superior.” This is the ‘hype’ definition used by non-innovators.

I think part of the misconception comes from thinking of disruptions as major, public, technological marvels that are recognizable for their complexity or even for creating entire new industries. Disruptive innovations tend instead to be marginal, demonstrably simpler, and worse on conventional scales, and they start out by slowly taking over adjacent, small markets.

It recently hit me that you can identify disruption via Nassim Nicholas Taleb’s simple heuristic of recognizing when industry players are fragile. Taleb is my favorite modern philosopher, because he actually brought a new, universally applicable concept to the table, one that puts into words what people have been practicing implicitly–but without a term to use. Anti-fragility is the inverse of fragility and actually helps you understand it better. Anti-fragile does not mean ‘resists breaking,’ which is more like ‘robust’; instead, it means ‘gains from chaos.’ Ford Pintos are fragile, Nokia phones are robust, but mechanical things are almost never anti-fragile. Bacterial species are anti-fragile to antibiotics, as trying to kill them makes them stronger. Anti-fragile things are usually organic, and usually made up of fragile things–the death of one bacterium makes the species more resistant.

Taleb has a simple heuristic for finding anti-fragility. I recommend you read his book to get the full picture, but the secret to this concept is a simple thought experiment. Take any concept (or thing), and identify how it works (or fails to work). Now ask: if you subject it to chaos–by that I mean, if you try to break it–and slowly escalate how hard you try, what happens? (See the sketch after the list below.)

  • If it gets disproportionately harmed, it is fragile. E.g., traffic: as you add cars, time-to-destination gets worse slowly at first, then all of a sudden increases rapidly, and if you add enough, cars literally stop.
  • If it gets proportionately harmed, or there is no effect, it is robust. Examples are easy, since most functional mechanical and electric systems are either fragile (such as Ford Pintos) or robust (Honda engines, Nokia phones, the Great Pyramids).
  • If it gets better, it is anti-fragile. Examples are harder here, since it is easier to destroy than to build (and anti-fragility usually rests on fragile elements, which gets confusing); bacterial resistance to antibiotics (or really, the function of evolution itself) is a great one.
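
Here is a minimal sketch of that thought experiment (my own toy illustration, not Taleb’s code): three made-up response functions subjected to escalating shocks, showing convex harm (fragile), proportionate harm (robust), and gain from disorder (anti-fragile).

```python
# Toy version of the stress-escalation heuristic (my illustration,
# not Taleb's code): subject three systems to growing shocks and watch
# whether the response is convex harm, linear harm, or gain.

def fragile(shock):
    # Convex harm: damage grows disproportionately with the shock
    # (think traffic: each added car costs more time than the last).
    return -(shock ** 2)

def robust(shock):
    # Proportionate response: twice the shock, twice the harm.
    return -shock

def antifragile(shock):
    # Gains from disorder: small shocks do nothing, larger ones help
    # (think a bacterial species under attack: survivors breed resistance).
    return max(shock - 1.0, 0.0) ** 1.5

for shock in [1, 2, 4, 8]:
    print(f"shock={shock}: fragile={fragile(shock):6.1f}  "
          f"robust={robust(shock):5.1f}  antifragile={antifragile(shock):5.2f}")
```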

The only real way to get anti-fragility outside of evolution is through optionality. Debt (obligation without a choice) is fragile to any extraneous shock, so a ‘free option’–choice without obligation, debt’s opposite–is pure anti-fragility. And not just literal ‘options’ in the market: anti-fragility takes a different form in every case, and though the face is different, the structure is the same. OK, get it? Maybe you do. I recommend coming up with your own example–if you are just free riding on mine, you don’t get it. (A quick simulation below shows why optionality gains from chaos.)
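
To make the optionality point concrete, here is a minimal Monte Carlo sketch (my own, under the assumption of a Gaussian outcome with mean zero): a linear, obligation-like exposure averages to nothing however wild things get, while an option-like payoff, max(x, 0), gains as volatility rises.

```python
import random

# My own illustration: why a free option gains from chaos. The underlying
# outcome is random with mean 0; we raise volatility and compare a linear
# exposure (obligation) with an option-like payoff (choice, no obligation).

random.seed(42)

def expected_payoff(payoff, sigma, trials=100_000):
    return sum(payoff(random.gauss(0.0, sigma)) for _ in range(trials)) / trials

for sigma in [0.5, 1.0, 2.0, 4.0]:
    linear = expected_payoff(lambda x: x, sigma)            # averages to ~0
    option = expected_payoff(lambda x: max(x, 0.0), sigma)  # grows with sigma
    print(f"volatility={sigma}: linear={linear:+.3f}  option={option:+.3f}")
```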

Anyway, back to Christensen. Taleb likes theorizing and leaves example-finding to you, while Christensen scrupulously documented what happened to hundreds of companies, and his concepts arose from his data. Think of it this way: Christensen is Darwin, carefully measuring beaks and recognizing natural selection, where Taleb is Wallace, theorizing from his experience and the underlying math of reality. Except in this case, Taleb is not just talking about natural selection; he is also showing how mutation works, and giving a theory of evolution that is not restricted to biology.

I realized that you can actually figure out whether an innovation is disruptive using this heuristic. It takes some care, because people often look at the technology and ask if it is anti-fragile–which is a mistake. Technologies are inorganic, so they are usually robust or fragile. Industries are organic, strategies are organic, companies are organic. Many new strategies build on companies’ competencies or existing customer bases, and though they may meet the ‘hype’ definition above, they give upside to incumbents and are thus not fragilizing. Disruption happens when a company has exposure to a strategy that it has little to gain from, but that could cannibalize its market if it grows, as anti-fragile things are wont to do.

The question is: is a given incumbent company fragile with respect to a given strategy? Let’s start with some examples–first Christensen’s, then my own:

  • Were 3″ drive makers fragile with respect to using smaller drives in cars?
    • In my favorite Christensen anecdote, the CEO of a 3″ drive maker, whose company designed a smaller 1.8″ drive but couldn’t sell it to their PC or mainframe customers, complained that he did exactly what Christensen said, and built smaller drives, and there was no market. Meanwhile, startups were selling 1.8″ drives like crazy–to car companies, for onboard computers.
    • Christensen notes that this was a tiny market, one that would be a 0.01% change on a big-company income statement, and a low-profit one at that. So, since these companies were big, they were fragile to low-margin, low-volume, fast-growing submarkets. Meanwhile, startups were unbelievably excited about selling small drives at a loss, just so that Honda would buy from them.
    • So, 3″ drive makers had everything to lose (the general drive market) and a blip to gain, where startups had everything to gain and nothing to lose. Note that disruptive technologies are not those that are hard to invent or that immediately revolutionize the industry. Big companies (as Christensen showed) are actually better at big changes and at invention. They are worse at recognizing the value of small changes and of jumps between industries.
  • Were book retailers fragile with respect to online book sales?
    • Yes, Amazon is my Christensen follow-on. Jeff Bezos, as documented in The Everything Store, gets disruption: he invented the ‘two-pizza meeting,’ so he ‘gets’ smallness; he intentionally isolates his innovation teams, so he ‘gets’ the excitement of tiny gains and allows cannibalism; he started in a proof-of-concept, narrow, feasible discipline (books) with the knowledge that it would grow into the Everything Store if successful, so he ‘gets’ going from simple beginnings to large-scale, well, disruption.
    • The Everything Store reads like a manual on how to be disrupted. Barnes & Noble first said “We can do that whenever we want.” Then when Bezos got some traction, B&N said “We can try this out but we need to figure out how to do it using our existing infrastructure.” Then when Bezos started eating their lunch, B&N said “We need to get into online book sales,” but sold the way they did in stores, by telling customers what they want, not by using Bezos’ anti-fragile review system. Then B&N said “We need to start doing whatever Bezos does, and beat him by out-spending,” by which time he was past that and selling CDs and then (eventually) everything.
    • Book sellers were fragile because they had existing assets with running costs; they were catering to customers not just with a book but with an experience; they were in the business of selecting books for customers, not using customers for recommendations; they treasured partnerships with publishers rather than thinking of how to eliminate them.
  • Now, some rapid-fire. Think carefully, since it is easy to fall into the trap of thinking industry titans were stupid, not fragile, and it is easy to have false positives unless you use Taleb’s heuristic.
    • Car companies were fragile to electric sports cars, and Elon Musk was anti-fragile. Sure, he went up-market, which doesn’t follow Christensen’s down-market paradigm, but he found the small market that the Nissan Leaf missed.
    • NASA was fragile to modern, cheap, off-the-shelf space solutions, and…yet again…Elon Musk was anti-fragile.
    • Taxis were fragile to app-based rides.
    • Hotels were fragile to app-based rentals.
    • Cable was fragile to sticks you put in your TV.
    • Hedge funds were fragile to index funds, currently are fragile to copy trading, and I hope to god they break.
  • Lastly, some counter-examples, since it is always better to use the via negativa, and assuming you have additive knowledge is dangerous. If you disagree, prove me wrong, found a startup, and make a bajillion dollars by disrupting the big guys who won’t be able to find a market:
    • There is nothing disruptive about 5G.
    • Solar and wind are fragile and fragilizing.
    • What was wrong with WeWork’s business model? Double fragility–fixed contracts with building owners, flexible contracts with customers.
    • On a more optimistic note, cool tech can still be sustaining (as opposed to disruptive), like RoboAdvisors or induction stoves or 3D printed shoes.
    • Artificial intelligence or blockchain in any use you have heard of (but not in any use you don’t know of yet).

So, to summarize: if a company is fragile to a new strategy, the best it can do is try to robustify itself, since it has little upside. Many innovations give upside to incumbents at the marginal cost of R&D, and thus sustain them; disruption happens when incumbents have little to gain from adopting a strategy, but startups have high exposure to its possible adoption, thanks to the potential growth from small-market, incremental or simplifying opportunities–which is, definitionally, anti-fragility to the strategy.

Now, I hope you have a tool for judging whether industrial incumbents are fragile. Rather than trying to predict the success or failure of any one of them, you should just use Taleb’s heuristic–that will help you sort things into ‘hyped as disruptive’ vs. ‘actually probably disruptive.’ A last thought: if you found this wildly confusing, just remember that disruptive innovations tend to steal the jobs of incumbents. So, if an incumbent (say, a Goldman Sachs/Morgan Stanley veteran writing the definition of “disruptive” for Investopedia) is talking about a banking or trading technology, it is almost certainly not disruptive, since he would hardly tell you how to render him redundant. You will find out what is disruptive when he makes an apology video while wearing a nice watch and French cuffs.

Prediction market update

The market for who wins the presidency closed this morning! But the Electoral College margin of victory market was still open and at 98 cents for the already certain outcome. Maxing out my position there would mean $17 for free! So I did, and the market dipped to 97 cents.
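
(The arithmetic, assuming this was PredictIt with its $850-per-contract cap: $850 buys $850 / $0.98 ≈ 867 shares, each redeeming at $1.00 when the market resolves, for a payout of about $867 and a risk-free profit of roughly $17.)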

This truly is the dumbest jack in the box. We all know exactly what’s going to happen, and yet…

The poverty of the modern middle class: prologue

About six months after I graduated from Columbia, a couple who knew members of my extended family asked me to lunch unexpectedly. Not wishing to be rude, I went. As it turned out, the couple had an agenda; they wanted to talk about having their daughter apply to Ivy League graduate schools.

Their daughter had recently graduated from a private liberal arts college and was having trouble finding permanent employment in a field and at a level her parents considered acceptable given the cost of her education. In fairness to her, she was interning at a non-profit in NYC. Her parents, though, had unrealistic expectations and seemed to feel that having paid for her to go to a “prestigious” private school, she should have entered the workforce at a much higher level.

The parents had some highly specific questions, ones so precise that I suggested they contact someone in admissions at the respective universities or speak with an application consultant. In retrospect, I suspect they may have already done so and the feedback hadn’t been favorable. Their questions were focused on seeing if there might be workarounds or special exemptions for the graduate program prerequisites. While there are, their daughter wasn’t eligible for any of them.

The parents were visibly angry, unable to accept that their daughter’s endless sports and community involvement, which they had so carefully funded, was meaningless in the face of program prerequisites. The graduate programs had study-abroad components, so the language prerequisites, which the daughter couldn’t meet, were immutable. Additionally, as the programs were designed for those interested in careers such as publishing, journalism, or policy writing, all applications demanded a long and exceptionally high-quality writing sample. To have an idea of what was expected, think of Princeton University’s standard 50,000-word (i.e., a small book) undergraduate thesis.[1] The daughter had neither the language skills nor the writing sample. In the case of the former, the private college her parents had chosen didn’t offer modern languages at anything resembling the level expected; as for the writing sample, the young woman simply didn’t have one. Her parents were vague as to the reason, but I think she may have chosen an academic track which didn’t require an undergraduate thesis.

The parents weren’t completely sure which upset them more: that their capability as parents was under review, or that everything they thought “valuable” or “worthy” had been found wanting. Sports? Irrelevant. Door-to-door political canvassing? Commonplace. The parents were proud of having provided certain experiences, such as trips to Disneyland, ski trips, and cruises. These activities have importance as symbols of a financial middle class with enough liquidity to spend on recreation, but the daughter couldn’t present them as significant in personal statements. In this, the daughter was in much the same position as Abigail Fisher, with her discovery that 1,999,999 other people in any given year are Habitat for Humanity volunteers.

The episode revealed a bankruptcy of mind, culture, and outlook: the poverty of those whose incomes are firmly middle class but whose intellectual knowledge and cultural capital are lacking. Like the person of my previous post, there was a trust in the opinions of the majority and an uninquiring faith that doing x, y, and z is guaranteed to lead to immediate status, security, and success. The financially, but not socially or culturally, middle class has realized that parts of life and social experiences are out of reach; not because they were originally off limits, but because too much time has passed, and individuals, such as those in this story, are behind the curve when it comes to specific skills and types of knowledge. People, entire sections of the population, have gone so far down a particular path that it’s too late to turn back.


[1] In case readers are wondering if it is possible to access this type of writing preparation at the undergraduate level outside of the Ivy League, it is. Speaking from my own experience, most liberal arts colleges and large universities offer an Honors track or program through which participating students receive the support and guidance to write longer, more advanced papers and theses.

Your vote is your voice–but actions speak louder than words

On voting day, with everyone tweeting and yelling and spam-calling you to vote, I want to offer some perspective. Sure, ‘your vote is your voice,’ and those who skip the election will remain unheard by political leaders. Sure, these leaders determine much more of your life than we would probably like them to. And if you don’t vote, or ‘waste’ your vote on a third party, or write in Kim Jong Un, you are excluded from the discussion of how these leaders control you.

But damn, that is such a limited perspective. It’s like the voting booth has blinders that conceal what is truly meaningful. I’m not going to throw the traditional counter-arguments to ‘vote or die’ at you, though my favorites are Arrow’s Impossibility Theorem and South Park’s Douche and Turd episode. Instead, I just want to say: compared to how you conduct your life, shouting into the political winds is simply not that important.

The wisdom of the stoics resonates greatly with me on this. Seneca, a Roman philosopher, tutor, and businessman, had the following to say on actions, on knowledge, on trust, on fear, and on self-improvement:

  • Lay hold of today’s task, and you will not need to depend so much upon tomorrow’s. While we are postponing, life speeds by. Nothing is ours, except time. On Time
  • Each day acquire something that will fortify you against poverty, against death, indeed against other misfortunes as well; and after you have run over many thoughts, select one to be thoroughly digested that day. This is my own custom; from the many things which I have read, I claim some one part for myself. On Reading
  • If you consider any man a friend whom you do not trust as you trust yourself, you are mightily mistaken and you do not sufficiently understand what true friendship means. On Friendship
  • Reflect that any criminal or stranger may cut your throat; and, though he is not your master, every lowlife wields the power of life and death over you… What matter, therefore, how powerful he be whom you fear, when every one possesses the power which inspires your fear? On Death
  • I commend you and rejoice in the fact that you are persistent in your studies, and that, putting all else aside, you make it each day your endeavour to become a better man. I do not merely exhort you to keep at it; I actually beg you to do so. On the Philosopher’s Lifestyle

Seneca goes on, in this fifth letter, to repeat the stoic refrain of ‘change what you can, accept what you cannot.’ But he expands, reflecting that your mind is “disturbed by looking forward to the future. But the chief cause of [this disease] is that we do not adapt ourselves to the present, but send our thoughts a long way ahead. And so foresight, the noblest blessing of the human race, becomes perverted.”

Good leadership requires good foresight, but panic over futures out of our control perverts this foresight into madness. So, whether you think that Biden’s green promises will destroy the economy or that Trump’s tweets will incite racial violence, your actions should be defined by what you can do to improve the world–and this is the only scale against which you should be judged.

So, set aside voting as a concern. Your voice will be drowned out, and then forgotten. But your actions could push humanity forward, in your own way, and if you fail in that endeavor, then no vote will save you from the self-knowledge of a wasted life. If you succeed, then you did the only thing that matters.

Offensive advantage and the vanity of ethics

I have recently shifted my “person I am obsessed with listening to”: my new guy is George Hotz, an eccentric innovator who built a cell phone that can drive your car. His best conversations come from Lex Fridman’s podcasts (in 2019 and 2020).

Hotz’s ideas call into question the efficacy of any ethical strategy to address ‘scary’ innovations. For instance, based on his experience playing “Capture the Flag” in hacking challenges, he noted that he never plays defense: a defender must cover all vulnerabilities, and loses if he fails once. An attacker only needs to find one vulnerability to win. Basically, in CTF, attacking is anti-fragile, and defense is fragile.
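
A toy calculation (mine, not Hotz’s) makes the asymmetry concrete: suppose the defender independently covers each of n vulnerabilities 99% of the time. The attacker wins if even a single one is missed, so the chance of a breach climbs toward certainty as the attack surface grows.

```python
# Toy model of the attack/defense asymmetry (my illustration, not Hotz's).
# The defender must cover every vulnerability; the attacker needs one miss.
p_covered = 0.99                   # defender covers each vulnerability 99% of the time
for n in [10, 100, 1000]:          # size of the attack surface
    p_breach = 1 - p_covered ** n  # chance the attacker finds at least one hole
    print(f"{n:4d} vulnerabilities -> breach probability {p_breach:.1%}")
```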

Hotz’s work centers on reinforcement learning systems, which learn from AI errors in automated driving to iterate toward a model that mimics ‘good’ drivers. Along the way, he has been bombarded with questions about ethics and safety, and I was startled by the frankness of his answer: there is no way to guarantee safety, and Comma.ai still depends on human drivers intervening to protect themselves. Hotz basically dismisses any system that claims to take an approach to “Level 5 automation” that is not learning-based and iterative, since driving in any condition, on any road, is an ‘infinite’ problem. Infinite problems have natural vulnerabilities to errors and are usually closer to impossible, whereas finite problems often have effective and world-changing solutions. Here are some of his ideas, and some of mine that spawned from his:

The Seldon fallacy: In short, 1) It is possible to model complex, chaotic systems with simplified, non-chaotic models; 2) Combining chaotic elements makes the whole more predictable. See my other post for more details!

Finite solutions to infinite problems: In Hotz’s words regarding how autonomous vehicles take in their environments, “If your perception system can be written as a spec, you have a problem”. When faced with any potential obstacle in the world, a set of plans–no matter how extensive–will never be exhaustive.

Trolling the trolley problem: Every ethicist looks at autonomous vehicles and almost immediately sees a rarity–a chance for an actual, direct application of a philosophical riddle! What if a car has to choose between running into several people or altering its path to hit only one? I love Hotz’s answer: we give the driver the choice. It is hard to solve the trolley problem, but not hard to notice it, so the software alerts the driver whenever one may occur–just like any other disengagement. To me, this takes the hot air out of the question, since it shows that, as with many ethical worries about robots, the problem is not unique to autonomous AIs but inherent in driving–and if you really are concerned, you can choose yourself which people to run over.

Vehicle-to-vehicle insanity: Some autonomous vehicle innovators promise “V2V” connections, through which all cars ‘tell’ each other where they are and where they are going, and thus gain tremendously from shared information. Hotz cautions (OK, he straight up said ‘this is insane’) that any V2V system depends, for the safety of each vehicle and rider, on 1) no communication errors and 2) no liars. V2V is just a gigantic target waiting for a black hat, and by connecting the vehicles, it magnifies the potential damage inflicted thousands-fold. That is not to say the cars should not connect to the internet (e.g., having Google Maps to inform on static obstacles is useful), just that the safety of passengers should never depend on a single system evading all errors and malfeasance.

Permissioned innovation is a contradiction in terms: As Hotz says, the only way forward in autonomous driving is incremental innovation. Trial and error. Now, there are less ethically worrisome ways to err–such as requiring a human driver who can correct the system. However, there is no future for innovations that must emerge fully formed before they are tried out. And, unfortunately, ethicists–whose only skin in the game is getting their voice heard over the other loud protesters–have an incentive to promote the precautionary principle, loudly chastise any innovator who causes any harm (like Uber’s first pedestrian death), and demand that ethical frameworks precede new ideas. I would argue back that ‘permissionless innovation’ leads to more inventions and long-term benefits, but others have done so quite persuasively. So I will just say that even the idea of ethics-before-invention contradicts itself. If the ethicist could make such a framework effectively, the framework would include the invention itself–making the ethicist the inventor! Since what we get instead is ethicists hypothesizing as to what the invention will be, and then restricting those hypotheses, we end up with two potential outcomes: one, the ethicist hypothesizes correctly, bringing the invention within the realm of regulatory control, and thus kills it. Two, the ethicist has a blind spot, and someone invents something in it.

“The Attention”: I shamelessly stole this one from video games. Gamers are very focused on optimal strategies, and rather than just focusing on cost-benefit analysis, gamers have another axis of consideration: “the attention.” Whoever forces their opponent to focus on responding to their own actions ‘has the attention,’ which is the gamer equivalent of the weather gauge. The lesson? Advantage is not just about outscoring your opponent, it is about occupying his mind. While he is occupied with lower-level micromanaging, you can build winning macro-strategies. How does this apply to innovation? See “permissioned innovation” above–and imagine if all ethicists were busy fighting internally, or reacting to a topic that was not related to your invention…

The Maginot ideology: All military historians shake their heads in disappointment at the Maginot Line, which Hitler easily circumvented. To me, the Maginot planners suffered from two fallacies: first, they prepared for the war of the past, solving a problem that was no longer extant. Second, they defended all known paths, and thus forgot that, on defense, you fail if you fail once, and that attackers tend to exploit vulnerabilities, not prepared positions. As Hotz puts it, it is far easier to invent a new weapon–say, a new ICBM that splits into 100 tiny AI-controlled warheads–than to defend against it, such as by inventing a tracking-and-elimination “Star Wars” defense system that can shoot down all 100 warheads. If you are the defender, don’t even try to shoot down nukes.

The Pharsalus counter: What, then, can a defender do? Hotz says he never plays defense in CTF–but what if that is your job? The answer is never easy, but it should include some level of shifting the vulnerability to uncertainty onto the attacker (as with “the Attention”). As I outlined in my previous overview of Paradoxical genius, one way to do so is to intentionally limit your own options but double down on the one strategy that remains. Thomas Schelling won the “Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel” for outlining this idea in The Strategy of Conflict, but more importantly, Julius Caesar himself pioneered it by deliberately backing his troops into a corner. As remembered in HBO’s Rome, at the seminal engagement of Pharsalus, Caesar said: “Our men must fight or die. Pompey’s men have other options.” However, he also made another underappreciated innovation: the idea of ‘floating’ reserves. He held back several cohorts of his best men to be deployed wherever vulnerabilities cropped up–thus enabling him to be reactive, and forcing his opponent to react to his counter. Lastly, Caesar knew that Pompey’s ace in the hole, his cavalry, was made up of vain higher-class nobles, so he told his troops, instead of inflicting maximum damage indiscriminately, to focus on stabbing at their faces and disfiguring them. Indeed, Pompey’s cavalry did not flee from death, but it did flee from facial scars. To summarize, the Pharsalus counter is: 1) create a commitment asymmetry, 2) keep reserves to fill vulnerabilities, and 3) deface your opponents.

Offensive privacy and the leprechaun flag: Another way to shift the vulnerability is to give false signals meant to deceive black hats. In Hotz’s parable, imagine that you capture a leprechaun. You know his gold is buried in a field, and you force the leprechaun to plant a flag where he buried it. However, when you show up to the field, you find it planted with thousands of flags over its whole surface. The leprechaun gave you a nugget of information–but it became meaningless in the storm of falsehood. This is a way that privacy may need to evolve in the realm of security; we will never stop all quests for information, but planting false (leprechaun) flags could deter black hats regardless of their information retrieval abilities.

The best ethics is innovation: When asked what his goal in life is, Hotz says ‘winning.’ What does winning mean? It means constantly improving one’s skills and information, while also seeking a purpose that changes the world in a way you are willing to dedicate yourself to. The important part, I think, is that Hotz does not say “create a good ethical framework, then innovate.” Instead, he effectively says the opposite: learn and innovate to build abilities, and figure out how to apply them later. The insight underlying this is that the ethics are irrelevant until the innovation is there, and once the innovation is there, the ethics are actually easier to nail down. Rather than discussing ‘will AIs drive cars morally,’ he is building the AIs and anticipating that new tech will mean new solutions to the ethical questions, not just the practical considerations. So, in summary: if you care about innovation, focus on building skills and knowledge bases. If you care about ethics, innovate.

The Seldon Fallacy

Like some of my role models, I am inspired by Isaac Asimov’s vision. However, for years, the central ability at the heart of the Foundation series–‘psychohistory,’ which enables Hari Seldon, the protagonist, to predict broad social trends across thousands of galaxies and over thousands of years–has bothered me. Not so much because of its impact in the fictional universe of Foundation, but because of how closely it matches the real-life ideas of predictive modeling. I truly fear that the Seldon Fallacy is spreading, building up society’s exposure to negative, unpredictable shocks.

The Seldon Fallacy: 1) It is possible to model complex, chaotic systems with simplified, non-chaotic models; 2) Combining chaotic elements makes the whole more predictable.

The first part of the Seldon Fallacy is the mistake of assuming reducibility, or more poetically, of NNT’s Procrustean bed. As F.A. Hayek asserted, no predictive model can be less complex than the system it predicts, because of second-order effects and the accumulation of errors of approximation. Isaac Asimov’s central character, Hari Seldon, fictionally ‘proves’ the ludicrous fallacy that chaotic systems can be reduced to ‘psychohistorical’ mathematics: using this special ability, while unable to predict individuals’ actions precisely, Seldon can map out social forces with such clarity that he correctly predicts the fall of a 10,000-year empire. I hope you, reader, don’t believe that…so you don’t blow up the economy by betting a fortune on an economic prediction. Two famous thought experiments disprove it: the three-body problem and the damped, driven oscillator. If we can’t even model a system with three ‘movers,’ because of second-order effects, how can we model interactions between millions of people? Basically, with no way to know which reductions in complexity are meaningful, Seldon cannot know whether, in laying his living system onto a Procrustean bed, he has accidentally decapitated it.
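
To see how fast chaos defeats simplified models, here is a minimal sketch (my own illustration; I use the Lorenz equations with their standard parameters as a stand-in for a chaotic system, not an actual three-body integrator): two trajectories that start one part in a billion apart end up nowhere near each other.

```python
# My illustration of sensitive dependence: two Lorenz trajectories that
# start 1e-9 apart diverge until they are effectively unrelated.

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One forward-Euler step of the Lorenz equations (standard parameters).
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.0 + 1e-9)  # one part in a billion difference

for step in range(1, 5001):
    a, b = lorenz_step(*a), lorenz_step(*b)
    if step % 1000 == 0:
        gap = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
        print(f"t={step * 0.01:5.1f}: separation = {gap:.2e}")
```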

The second part of the Seldon Fallacy is the mistake of ‘the marble jar.’ Not all randomness is equal: drawing white and black marbles from a jar (with replacement) is fundamentally predictable, and the more marbles drawn, the more predictable the mix of marbles in the jar. Many models depend on this assumption or ones like it–that random events distribute normally (in the Gaussian sense), so that the certainty of the model increases as the number of samples increases. But what if we are not observing independent events? What if they are not Gaussian? What if someone tricked you and tied some marbles together, so you can’t take out only one? What if one of them is attached to the jar, and by picking it up, you inadvertently break the jar, spilling the marbles? Effectively, what if you are not working with a finite, reducible, Gaussian random system, but an infinite, Mandelbrotian, real-world random system? What if the jar contains not marbles, but living things?
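To make the contrast concrete, here is a rough Python sketch; the fair marble jar and the Pareto tail index of 1.2 are illustrative assumptions, not anyone’s data. The marble-jar running mean settles almost immediately, while the fat-tailed running mean keeps lurching, because a single draw can outweigh everything seen before it.

```python
# Marble-jar (Bernoulli) vs. Mandelbrotian (Pareto) randomness:
# watch how the running sample mean behaves as the sample grows.
import random

random.seed(42)
CHECKPOINTS = (100, 1_000, 10_000, 100_000)

def running_means(draw):
    """Sample mean of draw() at each checkpoint."""
    total, means = 0.0, []
    for i in range(1, CHECKPOINTS[-1] + 1):
        total += draw()
        if i in CHECKPOINTS:
            means.append(total / i)
    return means

marble = lambda: 1.0 if random.random() < 0.5 else 0.0  # black marble, p = 0.5
fat = lambda: random.paretovariate(1.2)                 # rare draws are enormous

for n, m, f in zip(CHECKPOINTS, running_means(marble), running_means(fat)):
    print(f"n = {n:>7,}   marble-jar mean = {m:.3f}   fat-tailed mean = {f:.2f}")
```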

I apologize if I lean too heavily on fiction to make my points, but another amazing author answers this question much more poetically than I could. In just the ‘quotes’ from wise leaders that open the books of his historical-fantasy series, Jim Butcher tells stories of the rise and fall of civilizations. First, on cumulative meaning:

“If the beginning of wisdom is in realizing that one knows nothing, then the beginning of understanding is in realizing that all things exist in accord with a single truth: Large things are made of smaller things.

Drops of ink are shaped into letters, letters form words, words form sentences, and sentences combine to express thought. So it is with the growth of plants that spring from seeds, as well as with walls built from many stones. So it is with mankind, as the customs and traditions of our progenitors blend together to form the foundation for our own cities, history, and way of life.

Be they dead stone, living flesh, or rolling sea; be they idle times or events of world-shattering proportion, market days or desperate battles, to this law, all things hold: Large things are made from small things. Significance is cumulative–but not always obvious.”

–Gaius Secundus, Academ’s Fury

Second, on the importance of individuals as causes:

“The course of history is determined not by battles, by sieges, or usurpations, but by the actions of the individual. The strongest city, the largest army is, at its most basic level, a collection of individuals. Their decisions, their passions, their foolishness, and their dreams shape the years to come. If there is any lesson to be learned from history, it is that all too often the fate of armies, of cities, of entire realms rests upon the actions of one person. In that dire moment of uncertainty, that person’s decision, good or bad, right or wrong, big or small, can unwittingly change the world.

But history can be quite the slattern. One never knows who that person is, where he might be, or what decision he might make.

It is almost enough to make me believe in Destiny.”

–Gaius Primus, Furies of Calderon

If you are not convinced by the wisdom of fiction, put down your marble jar, and do a real-world experiment. Take 100 people from your community, and measure their heights. Then, predict the mean and distribution of height. While doing so, ask each of the 100 people for their net worth. Predict a mean and distribution from that as well. Then, take a gun, and shoot the tallest person and the richest person. Run your model again. Before you look at the results, tell me: which one do you expect shifted more?

I seriously hope you bet on the wealth model. Height, like marble-jar samples, is normally distributed. Wealth follows a power law, meaning that individual datapoints at the extremes have outsized impact. If you happen to live in Seattle and shoot a tech CEO, you may lower the mean income of the group by more than the average income of the other 99 people!
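If you would rather not take my word for it (or fire a gun), the experiment simulates in a few lines. A sketch in Python with made-up parameters: heights drawn from a Gaussian around 170 cm, wealth from a Pareto tail with index 1.5.

```python
# 'Shoot the tallest vs. shoot the richest,' simulated: remove the largest
# observation from each sample and compare how far the means move.
import random

random.seed(7)
heights = [random.gauss(170, 10) for _ in range(100)]               # cm, Gaussian
wealth = [random.paretovariate(1.5) * 50_000 for _ in range(100)]   # $, power law

def mean_drop_after_removing_max(xs):
    """Relative drop in the mean when the single largest member is removed."""
    before = sum(xs) / len(xs)
    survivors = sorted(xs)[:-1]
    return (before - sum(survivors) / len(survivors)) / before

print(f"shoot the tallest: height mean drops {mean_drop_after_removing_max(heights):.2%}")
print(f"shoot the richest: wealth mean drops {mean_drop_after_removing_max(wealth):.2%}")
```

On most runs the height mean moves by a fraction of a percent, while the wealth mean can drop by double digits.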

So, unlike the Procrustean Bed (part 1 of the Seldon Fallacy), the Marble Jar (part 2 of the Seldon Fallacy) is not always a fallacy: some systems really are Gaussian. However, many consequential systems–including earnings, wars, governmental spending, economic crashes, bacterial resistance, inventions’ impacts, species survival, and climate shocks–are non-Gaussian, and in them the impact of a single individual action can blow up the model.

The crazy thing is, Asimov himself contradicts his own protagonist in what I consider his magnum opus. While the Foundation series keeps alive the myth of the predictive simulation, my favorite of his books–The End of Eternity (spoilers)–is a magnificent destruction of the concept of a ‘controlled’ world. For large systems, the book is a death knell even of predictability itself. The Seldon Fallacy–that a simplified, non-chaotic model can predict a complex, chaotic reality, and that size enhances predictability–is shown, through the adventures of Andrew Harlan, to be riddled with hubris and catastrophic risk. I cannot reduce Asimov’s complex ideas to a simple summary, for I may decapitate his central model; please read the book yourself. I will say only this: I hope you take to heart Asimov’s larger lesson on predictability–that it is not only impossible, but undesirable. And please, let’s avoid staking any of our futures on today’s false prophets of predictable randomness.

Necessity constrains even the gods

I was recently talking to my cofounder about the concept of “fuck-you” money. “Fuck-you” money is the point at which you no longer need to care what other people think: you can fund what you want without worrying about ending up broke–so long as you recognize the power of necessity.

It reminded me of three things I have read before. The first is from the brilliant economist and historian Thomas Sowell, who wrote in A Conflict of Visions that ideological divides often turn on the disagreement between “constrained” and “unconstrained” visions of the world and humanity. Effectively, the world contains some who recognize that humans have flaws that culture has helped us work through, and who hold that we should be grateful for the virtues handed down to us and understand that utopianism is dangerous self-deception. But it contains many others who see all human failings as stemming from social injustices, since in nature, humans have no social problems. Those who line up behind Hobbes fight those who still believe in the noble savage and Rousseau’s perfect state of nature. To me, this divide encapsulates the question: did necessity emerge before human society? And if so, does it still rule us?

I know what the wisdom of antiquity says. The earliest cosmogonies–origin stories of the gods–identify Ananke (Necessity) as springing forth from the Earth herself, before the gods, and binding even them. This story was passed on to Greek thinkers like Plato (Republic) and playwrights like Euripides (Alcestis), who placed human government and the fate of heroes within the tragic world of necessity first, all else second.

Lastly, this reminds me of Nassim Nicholas Taleb’s Antifragile. He points out that the first virtue is survival, and that optionality is pure gain. Until you address necessity, your optionality–your choices and your chances–is fundamentally limited. As an entrepreneur who literally lives the risk of not surviving, I do not need to be convinced. Necessity rules even the gods, and it certainly rules those with “fuck-you” money. But it rules me even more. I am ruled by the fear that I may fail my family, myself, and my company at Maslow’s level of survival. Those with “fuck-you” money have at least moved to the level where they have chances to fail society. And the lesson from history, from mythology, and from surviving in the modern economy is not that one should simply be resigned to one’s limits. It is to strive to reach the level where you are pushing them, recognizing the whole time the power of Necessity.

Choosing inadequacy

About a year ago, I had dinner with a friend whom I have known more or less my entire life. We hadn’t seen each other in over ten years, though, not since she started college. During the interval, she became an inveterate social climber – at one point avowing, completely seriously, that she was open to marrying a rich man if it meant that she could have a flat in one of the world’s most expensive cities. She was also an expert at being woke. The contradiction in her thought processes – her craving for a life of riches and luxury and her woke “eat the rich” attitude – caused me to recognize the fuel behind the attraction redistributionist ideologies have for young Americans.

At some point in her trajectory, my friend had hit on using the education system to climb the social ladder. In fairness to her, there is a pervasive idea that this is a valid approach; J.D. Vance mentioned it in the conclusion to Hillbilly Elegy. Choosing between the flagship state university and a small private liberal arts college, she picked the latter, a “social” school held in high esteem regionally and thought to be intellectually rigorous.

Upon graduating and moving two time zones away for graduate school, she made two unwelcome discoveries: 1) she was behind academically and intellectually, and 2) her college had scant brand-name value in the broader world. According to her, her graduate university’s student body was composed of the children of America’s elite who “didn’t get into Harvard.” She held a teaching assistantship for 101-level English literature classes and was discomfited to find that her freshman students were better writers with a broader sense of literature and the humanities than she. She mentioned that she learned about entire chunks of the English literary canon from them, which is appalling given that she had majored in English at her liberal arts college.

When Austrian novelist Stefan Zweig died, his executors found the manuscript for his novel Rausch der Verwandlung among his papers. The book’s title in English is The Post-Office Girl,[1] and it tells the story of a 1920s provincial girl who assumes a false identity to join the privileged world of her relatives. Everything works out – until it doesn’t:

Unwittingly Christine revealed the gaps in her worldliness. She didn’t know that polo was played on horseback, wasn’t familiar with common perfumes like Coty and Houbigant, didn’t have a grasp of the price range of cars; she’d never been to the races. Ten or twenty gaucheries like that and it was clear she was poorly versed in the lore of the chic. And compared to a chemistry student’s her schooling was nothing. No secondary school, no languages (she freely admitted she’d long since forgotten the scraps of English she’d learned in school). No, something was just not right about elegant Fräulein von Boolen, it was only a question of digging a little deeper […].

After Christine is unmasked, she returns to her previous life, but this time she’s angry and bitter, aware now of the existence of another world, one lost through her own irresponsibility. Most of the book is about the girl’s mental unravelling. When I first read the book, I took the ending, with its suicidal thoughts and slide into serious criminality, to be melodrama for its own sake. Now, I think Zweig was on to something.

In Zweig’s book, the root of the problem is the anti-heroine’s discovery that what is top-notch in her village isn’t held in the same esteem elsewhere: “[W]hat was the showpiece of her wardrobe [a green rayon blouse] yesterday in Klein-Reifling seems miserably flashy and common to her now.” My friend recounted a similar experience cast in academic terms. She slid through high school and college without any struggle. Upon starting her MA, she had difficulty keeping up with her cohort. Three years after starting a doctoral program, her dissertation proposal was rejected, with the evaluators citing lack of languages as one of the reasons. This last is interesting because it connects to Zweig’s list of faults that expose Christine’s real social standing. In the case of my friend, her background became equivalent to Christine’s blouse: haute couture in one locale and unsophisticated in another.

For both the Bright Young Things of Zweig’s world and my own generation more generally, there is a question over culpability. In the book, Christine’s aunt agonizes over the girl’s uncouth manners and dress, repeatedly reminding herself “how was she to know?” My friend and her parents assumed that “the system” would take care of her. Sure, the public school wasn’t great, but it also wasn’t too terrible and everyone else was going there. The college was the best and most expensive private college in the region, so surely the faculty and advisors there knew what they were doing.

This is not to say that there weren’t red flags if one knew where to look. For example, the college offered only two years of accredited foreign language training. My friend acknowledged that this contributed to the problems with her first proposal. However, she also admitted that she hadn’t considered the curriculum when she picked the college. Her focus had been purely social. The truth is that she chose her path at the moment she picked her values. The fact that her measurement system didn’t hold up to broader scrutiny is her fault.

Zweig’s anti-heroine contemplates suicide in response to her inadequacy; kangaroo courts, or cancel culture, are more my friend’s style. Not much has changed over the course of a century. In Zweig’s time, self-destruction was the default choice; in ours, destruction of others is the preferred MO. The source of the anger, though, is the same: envy stemming from inadequacy. Unlike the Bright Young Things, though, the modern generations chose their inadequacy.


[1] Much of the crucial action is set in a Swiss hotel, and Wes Anderson has said that the book was one of his inspirations for The Grand Budapest Hotel.

Why snipers have spotters

Imagine two highly skilled snipers choosing and eliminating targets in tandem. Now imagine I take away one of their rifles but leave him his scope. How much do you expect their effectiveness to decrease?

Surprisingly, there is a strong case that this will actually increase their combined sniping competence. As an economist would point out, this stems from specialization: the sniper sacrifices total situational awareness to improve accurate intervention, and the spotter sacrifices the ability to intervene to improve awareness and planning. Together, they push out beyond the production possibilities curve.

It is also a result of communication. Two independent snipers pick their own shots, and may double up on one target or miss a pressing threat. By explicitly designating roles, the sniper can depend on the spotter for guidance, and the two-person system means both parties have more information than the sum of their separate, uncommunicated knowledge.

There are also long-term positive impacts, ones that likely escape an economist’s models, from switching off in each role or from an apprenticeship model. Eye fatigue that limits accuracy, and mental fatigue from constant vigilance, can be eliminated by taking turns. Also, if a skilled sniper has a novice spotter, the spotter observes the sniper’s tactics and assimilates best practices–and the sniper, having previously worked as a spotter, can be more productively empathetic. The system naturally encourages learning and improvement.

I love the sniper-spotter archetype, because it clarifies the advantages of:

  • Going from zero to one: Between two independent snipers, there are zero effective lines of communication. Between a sniper and a spotter, there is one. This interaction unlocks potential held in both.
  • More from less: Many innovate by adding new things; however, anti-fragile innovations are more likely to come from removing unnecessary things than from adding new ones.
  • Not the number of people, the number of interactions: Interactions have advantages (specialization, coordination) and disadvantages (communication friction, lack of individual decision-making responsibilities). Scrutinize what interactions you want on your teams and which to avoid.
  • Isolation: Being connected to everyone promotes noise over signal. It also promotes focusing on competitors over opportunities and barriers over permissionless innovation.
  • Separate competencies, shared goals and results: To make working together worth it, define explicit roles that match each individual’s competencies. Then, so long as you have vision alignment, all team members know what they are seeking and how they will be depended upon to succeed.
  • Iterative learning and feedback: Systems that promote self-improvement of their parts outperform systems that do not. At the end of the day, education comes from experimentation and observation of new phenomena, balanced on the edge between known and unknown practices.
  • Establish ‘common knowledge’: Communication failures and frictions often occur because independent people assume others share the same set of ‘common knowledge’. If you make communication the root of success, then so long as the group is small enough to actually have–and know it has–the same set of ‘common knowledge’, it can act confidently on these shared assumptions.
  • Delegation as productivity: Recognize that doing more does not mean more gets done. Without encouraging slacking off, explicitly rewarding individuals for choosing the right things to delegate and executing effectively will get more from less.
  • Cheating Goodhart: Goodhart’s Law states that when a measure becomes a target, it ceases to be a good measure. If you make the metric of success joint, rather than individual, and shape its incentives to match your vision, your metrics will create an atmosphere bent on achieving your actual goals.
  • Leadership is empowerment: Good leaders don’t tell people what to do, they inform, support, listen, and match people’s abilities and passions to larger purpose.
  • Smallness: Small is reactive, flexible, cohesive, connected, fast-moving, accurate, stealthy, experimental, permissionless, and, counterintuitively, scalable.

My most recent encounter with “sniper and spotter” is in my sister’s Montessori classroom (ages 3-6). She is an innovative educator who noticed that her public school position was rife with top-down management, politics, and perverse incentives, and offered no systems to promote curiosity or engagement. She has applied the “sniper and spotter” model after noticing that children thrive best either in one-on-one, responsive guidance, where the instructor is totally dedicated to the student, or when left to their own devices in a materials-rich environment, engaging in discovery (or working with other children, or even teaching what they have already learned to newcomers). However, believe it or not, three-year-olds can cause disruptions, or even pose physical dangers, if left totally without supervision.

She therefore promotes a teaching model with two teachers, one of whom watches for children’s safety and minimizes disruptiveness. This frees the other teacher to rove student-to-student and give either individual or very-small-group attention. The two teachers communicate to plan next steps and to ‘spot’ the children who most need intervention. This renders ‘class size’ a stupid metric: what matters is how much one-on-one guidance plus permissionless discovery a child engages in. It is also a “barbell” strategy: instead of wallowing in the mediocrity of “group learning”, children get the best of the two extremes–total attention and just-enough-attention-to-remain-safe.

PS: On Smallness, Jeff Bezos has promised $1 billion to support education innovation. Despite starting before my sister, he has so far opened just as many classrooms as she has: one. As the innovator behind the ‘two-pizza meeting,’ I wish Bezos would start with many small experiments in education rather than big public dedications, so he could nurture innovation and select the strategies that succeed.

I would love to see more examples of “sniper and spotter” approaches in the comments…but no sniping please 🙂