Your vote is your voice–but actions speak louder than words

On voting day, with everyone tweeting and yelling and spam-calling you to vote, I want to offer some perspective. Sure, ‘your vote is your voice,’ and those who skip the election will remain unheard by political leaders. Sure, these leaders determine much more of your life than we would like them to. And if you don’t vote, or ‘waste’ your vote on a third party or write in Kim Jong Un, you are excluded from the discussion of how these leaders control you.

But damn, that is such a limited perspective. It’s like the voting booth has blinders that conceal what is truly meaningful. I’m not going to throw the traditional counter-arguments to ‘vote or die’ at you, though my favorites are Arrow’s Impossibility Theorem and South Park’s Douche and Turd episode. Instead, I just want to say: compared to how you conduct your life, shouting into the political winds is simply not that important.

The wisdom of the Stoics resonates greatly with me on this. Seneca, a Roman philosopher, tutor, and businessman, had the following to say on actions, on knowledge, on trust, on fear, and on self-improvement:

  • Lay hold of today’s task, and you will not need to depend so much upon tomorrow’s. While we are postponing, life speeds by. Nothing is ours, except time. On Time
  • Each day acquire something that will fortify you against poverty, against death, indeed against other misfortunes as well; and after you have run over many thoughts, select one to be thoroughly digested that day. This is my own custom; from the many things which I have read, I claim some one part for myself. On Reading
  • If you consider any man a friend whom you do not trust as you trust yourself, you are mightily mistaken and you do not sufficiently understand what true friendship means. On Friendship
  • Reflect that any criminal or stranger may cut your throat; and, though he is not your master, every lowlife wields the power of life and death over you… What matter, therefore, how powerful he be whom you fear, when every one possesses the power which inspires your fear? On Death
  • I commend you and rejoice in the fact that you are persistent in your studies, and that, putting all else aside, you make it each day your endeavour to become a better man. I do not merely exhort you to keep at it; I actually beg you to do so. On the Philosopher’s Lifestyle

Seneca goes on, in this fifth letter, to repeat the Stoic refrain of ‘change what you can, accept what you cannot.’ But he expands, reflecting that your mind is “disturbed by looking forward to the future. But the chief cause of [this disease] is that we do not adapt ourselves to the present, but send our thoughts a long way ahead. And so foresight, the noblest blessing of the human race, becomes perverted.”

Good leadership requires good foresight, but panic over futures out of our control perverts this foresight into madness. So, whether you think that Biden’s green promises will destroy the economy or Trump’s tweets will incite racial violence, your actions should be defined by what you can do to improve the world–and this is the only scale against which you should be judged.

So, set aside voting as a concern. Your voice will be drowned out, and then forgotten. But your actions could push humanity forward, in your own way, and if you fail in that endeavor, then no vote will save you from the self-knowledge of a wasted life. If you succeed, then you did the only thing that matters.

Offensive advantage and the vanity of ethics

I have recently shifted my “person I am obsessed with listening to”: my new guy is George Hotz, an eccentric innovator who built a cell phone that can drive your car. His best conversations come from Lex Fridman’s podcast (in 2019 and 2020).

Hotz’s ideas bring into question the efficacy of any ethical strategy to address ‘scary’ innovations. For instance, based on his experience playing Capture the Flag (CTF) in hacking challenges, he noted that he never plays defense: a defender must cover all vulnerabilities, and loses if he fails once, while an attacker only needs to find one vulnerability to win. Basically, in CTF, attacking is anti-fragile, and defense is fragile.
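
To see the asymmetry numerically, here is a minimal sketch (my own toy model, not Hotz’s; the 5% per-surface miss rate is an arbitrary assumption): if each attack surface is independently left vulnerable with some small probability, the chance that the defense holds everywhere collapses as the number of surfaces grows.

```python
# Toy model of the CTF asymmetry: the defender must get every surface
# right; the attacker needs only one mistake anywhere.
def defender_holds(p_miss, n_surfaces):
    """Probability that no surface is left vulnerable."""
    return (1 - p_miss) ** n_surfaces

for n in (1, 10, 100):
    print(f"{n:3d} surfaces: defense holds with p = {defender_holds(0.05, n):.3f}")
# 1 surface  -> 0.950
# 10 surfaces -> 0.599
# 100 surfaces -> 0.006
```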

Hotz’s work centers around reinforcement learning systems, which learn from AI errors in automated driving to iterate toward a model that mimics ‘good’ drivers. Along the way, he has been bombarded with questions about ethics and safety, and I was startled by the frankness of his answer: there is no way to guarantee safety, and Comma.ai still depends on human drivers to intervene to protect themselves. Hotz basically dismisses any system that claims to take an approach to “Level 5 automation” that is not learning-based and iterative, as driving in any condition, on any road, is an ‘infinite’ problem. Infinite problems have natural vulnerabilities to errors and are usually closer to impossible, whereas finite problems often have effective and world-changing solutions. Here are some of his ideas, and some of mine that spawned from his:

The Seldon fallacy: In short, 1) It is possible to model complex, chaotic systems with simplified, non-chaotic models; 2) Combining chaotic elements makes the whole more predictable. See my other post for more details!

Finite solutions to infinite problems: In Hotz’s words regarding how autonomous vehicles take in their environments, “If your perception system can be written as a spec, you have a problem”. When faced with any potential obstacle in the world, a set of plans–no matter how extensive–will never be exhaustive.

Trolling the trolley problem: Every ethicist looks at autonomous vehicles and almost immediately sees a rarity–a chance for an actual direct application of a philosophical riddle! What if a car has to choose between running into several people or altering its path to hit only one? I love Hotz’s answer: we give the driver the choice. It is hard to solve the trolley problem, but not hard to notice it, so the software alerts the driver whenever one may occur–just like any other disengagement. To me, this takes the hot air out of the question, since it shows that, as with many ethical worries about robots, the problem is not unique to autonomous AIs, but inherent in driving–and if you really are concerned, you can choose yourself which people to run over.

Vehicle-to-vehicle insanity: Some autonomous vehicle innovators promise “V2V” connections, through which all cars ‘tell’ each other where they are and where they are going, and thus gain tremendously from shared information. Hotz cautions (OK, he straight up said ‘this is insane’) that any V2V system depends, for the safety of each vehicle and rider, on 1) no communication errors and 2) no liars. V2V is just a gigantic target waiting for a black hat, and by connecting the vehicles, the potential damage inflicted is magnified thousands-fold. That is not to say the cars should not connect to the internet (e.g., using Google Maps to inform on static obstacles is useful), just that the safety of passengers should never depend on a single system evading all errors and malfeasance.

Permissioned innovation is a contradiction in terms: As Hotz says, the only way forward in autonomous driving is incremental innovation. Trial and error. Now, there are less ethically worrisome ways to err–such as requiring a human driver who can correct the system. However, there is no future for innovations that must emerge fully formed before they are tried out. And, unfortunately, ethicists–whose only skin in the game is getting their voice heard over the other loud protesters–have an incentive to promote the precautionary principle, loudly chastise any innovator who causes any harm (like Uber’s first pedestrian fatality), and demand that ethical frameworks precede new ideas. I would argue back that ‘permissionless innovation‘ leads to more inventions and long-term benefits, but others have done so quite persuasively. So I will just say: even the idea of ethics-before-inventions contradicts itself. If the ethicist could make such a framework effectively, the framework would include the invention itself–making the ethicist the inventor! Since what we get instead is ethicists hypothesizing as to what the invention will be and then restricting those hypotheses, we end up with two potential outcomes: one, the ethicist hypothesizes correctly, bringing the invention within the realm of regulatory control, and thus kills it. Two, the ethicist has a blind spot, and someone invents something in it.

“The Attention”: I shamelessly stole this one from video games. Gamers are obsessed with optimal strategies, and beyond simple cost-benefit analysis, they weigh another axis of consideration: “the attention.” Whoever forces their opponent to focus on responding to their own actions ‘has the attention,’ which is the gamer equivalent of the weather gauge. The lesson? Advantage is not just about outscoring your opponent, it is about occupying his mind. While he is occupied with lower-level micromanaging, you can build winning macro-strategies. How does this apply to innovation? See “permissioned innovation” above–and imagine if all ethicists were busy fighting internally, or reacting to a topic unrelated to your invention…

The Maginot ideology: All military historians shake their heads in disappointment at the Maginot Line, which Hitler easily circumvented. To me, the Maginot planners suffered from two fallacies: first, they prepared for the war of the past, solving a problem that was no longer extant. Second, they defended all known paths, and thus forgot that, on defense, you fail if you fail once, and that attackers tend to exploit vulnerabilities, not prepared positions. As Hotz puts it, it is far easier to invent a new weapon–say, a new ICBM that splits into 100 tiny AI-controlled warheads–than to defend against it, such as by inventing a tracking-and-elimination “Star Wars” defense system that can shoot down all 100 warheads. If you are the defender, don’t even try to shoot down nukes.

The Pharsalus counter: What, then, can a defender do? Hotz says he never plays defense in CTF–but what if that is your job? The answer is never easy, but it should include some level of shifting the vulnerability to uncertainty onto the attacker (as with “the Attention”). As I outlined in my previous overview of Paradoxical genius, one way to do so is to intentionally limit your own options, but double down on the one strategy that remains. Thomas Schelling won the “Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel” for outlining this idea in The Strategy of Conflict, but more importantly, Julius Caesar himself pioneered it by deliberately backing his troops into a corner. As remembered in HBO’s Rome, at the seminal engagement of Pharsalus, Caesar said: “Our men must fight or die. Pompey’s men have other options.” However, he also made another underappreciated innovation, the idea of ‘floating’ reserves. He held back several cohorts of his best men to be deployed wherever vulnerabilities cropped up–thus enabling him to be reactive, and forcing his opponent to react to his counter. Lastly, Caesar knew that Pompey’s ace-in-the-hole, his cavalry, was made up of vain higher-class nobles, so he told his troops, instead of inflicting maximum damage indiscriminately, to focus on stabbing at their faces and disfiguring them. Indeed, Pompey’s cavalry did not flee from death, but fled from facial scars. To summarize, the Pharsalus counter is: 1) create a commitment asymmetry, 2) keep reserves to fill vulnerabilities, and 3) deface your opponents.

Offensive privacy and the leprechaun flag: Another way to shift the vulnerability is to give false signals meant to deceive black hats. In Hotz’s parable, imagine that you capture a leprechaun. You know his gold is buried in a field, and you force the leprechaun to plant a flag where he buried it. However, when you show up to the field, you find it planted with thousands of flags over its whole surface. The leprechaun gave you a nugget of information–but it became meaningless in the storm of falsehood. This is a way that privacy may need to evolve in the realm of security; we will never stop all quests for information, but planting false (leprechaun) flags could deter black hats regardless of their information retrieval abilities.

The best ethics is innovation: When asked what his goal in life is, Hotz says ‘winning.’ What does winning mean? It means constantly improving one’s skills and information, while also seeking to find a purpose that changes the world in a way you are willing to dedicate yourself to. To me, the important part is that Hotz does not say “create a good ethical framework, then innovate.” Instead, he is effectively saying the opposite: learn and innovate to build abilities, and figure out how to apply them later. The insight underlying this is that the ethics are irrelevant until the innovation is there, and once the innovation is there, the ethics are actually easier to nail down. Rather than discussing ‘will AIs drive cars morally,’ he is building the AIs and anticipating that new tech will mean new solutions to the ethical questions, not just the practical considerations. So, in summary: if you care about innovation, focus on building skills and knowledge bases. If you care about ethics, innovate.

The Seldon Fallacy

Like some of my role models, I am inspired by Isaac Asimov’s vision. However, for years, the central ability at the heart of the Foundation series–‘psychohistory,’ which enables Hari Seldon, the protagonist, to predict broad social trends across an entire galaxy over thousands of years–has bothered me. Not so much because of its impact in the fictional universe of Foundation, but for how closely it matches the real-life ideas of predictive modeling. I truly fear that the Seldon Fallacy is spreading, building up society’s exposure to negative, unpredictable shocks.

The Seldon Fallacy: 1) It is possible to model complex, chaotic systems with simplified, non-chaotic models; 2) Combining chaotic elements makes the whole more predictable.

The first part of the Seldon Fallacy is the mistake of assuming reducibility, or more poetically, of NNT’s Procrustean Bed. As F.A. Hayek asserted, no predictive model can be less complex than the system it predicts, because of second-order effects and the accumulation of errors of approximation. Isaac Asimov’s central character, Hari Seldon, fictionally ‘proves’ the ludicrous fallacy that chaotic systems can be reduced to ‘psychohistorical’ mathematics: using this special ability, while unable to predict individuals’ actions precisely, Seldon maps out social forces with such clarity that he correctly predicts the fall of a 10,000-year empire. I hope you, reader, don’t believe that…so you don’t blow up the economy by betting a fortune on an economic prediction. Two famous physics problems disprove this reducibility: the three-body problem and the damped, driven oscillator. If we can’t even model a system with three ‘movers’, because of second-order effects, how can we model interactions between millions of people? Basically, with no way to know which reductions in complexity are meaningful, Seldon cannot know whether, in laying his living system on a Procrustean bed, he has accidentally decapitated it. Now, to turn to the ‘we can predict social, though not individual, futures’ portion of the fallacy: the claim that big things are predictable even if their constituent elements are not.
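
The three-body problem takes a full numerical integrator to demonstrate, but the phenomenon it exemplifies, sensitive dependence on initial conditions, fits in a few lines. Here is a minimal sketch using the chaotic logistic map as a compact stand-in (my own illustration, not anything from Asimov or Hayek): two starting points differing by one part in ten billion decorrelate completely within a few dozen steps.

```python
# Sensitive dependence on initial conditions, shown with the logistic
# map at r=4 (fully chaotic) as a stand-in for systems like the
# three-body problem.
def logistic(x, r=4.0):
    return r * x * (1 - x)

a, b = 0.2, 0.2 + 1e-10  # two nearly identical initial conditions
for step in range(1, 51):
    a, b = logistic(a), logistic(b)
    if step % 10 == 0:
        print(f"step {step:2d}: gap = {abs(a - b):.1e}")
# The gap roughly doubles each step: ~1e-7 by step 10, order 1 by
# step 40. No simplified model can dodge this accumulation of error.
```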

The second part of the Seldon Fallacy is the mistake of ‘the marble jar.’ Not all randomness is equal: drawing white and black marbles from a jar (with replacement) is fundamentally predictable, and the more marbles drawn, the more predictable the mix of marbles in the jar. Many models depend on this assumption or similar ones–that random events distribute normally (in the Gaussian sense) in a way that increases the certainty of the model as the number of samples increases. But what if we are not observing independent events? What if they are not Gaussian? What if someone tricked you, and tied some marbles together so you can’t take out only one? What if one of them is attached to the jar, and by picking it up, you inadvertently break the jar, spilling the marbles? Effectively, what if you are not working with a finite, reducible, Gaussian random system, but an infinite, Mandelbrotian, real-world random system? What if the jar contains not marbles, but living things?

I apologize if I lean too heavily on fiction to make my points, but another amazing author answers this question much more poetically than I could. Just in the ‘quotes’ from wise leaders in the introductions to his historical-fantasy series, Jim Butcher tells stories of the rise and fall of civilizations. First, on cumulative meaning:

“If the beginning of wisdom is in realizing that one knows nothing, then the beginning of understanding is in realizing that all things exist in accord with a single truth: Large things are made of smaller things.

Drops of ink are shaped into letters, letters form words, words form sentences, and sentences combine to express thought. So it is with the growth of plants that spring from seeds, as well as with walls built from many stones. So it is with mankind, as the customs and traditions of our progenitors blend together to form the foundation for our own cities, history, and way of life.

Be they dead stone, living flesh, or rolling sea; be they idle times or events of world-shattering proportion, market days or desperate battles, to this law, all things hold: Large things are made from small things. Significance is cumulative–but not always obvious.”

–Gaius Secundus, Academ’s Fury

Second, on the importance of individuals as causes:

“The course of history is determined not by battles, by sieges, or usurpations, but by the actions of the individual. The strongest city, the largest army is, at its most basic level, a collection of individuals. Their decisions, their passions, their foolishness, and their dreams shape the years to come. If there is any lesson to be learned from history, it is that all too often the fate of armies, of cities, of entire realms rests upon the actions of one person. In that dire moment of uncertainty, that person’s decision, good or bad, right or wrong, big or small, can unwittingly change the world.

But history can be quite the slattern. One never knows who that person is, where he might be, or what decision he might make.

It is almost enough to make me believe in Destiny.”

–Gaius Primus, Furies of Calderon

If you are not convinced by the wisdom of fiction, put down your marble jar, and do a real-world experiment. Take 100 people from your community, and measure their heights. Then, predict the mean and distribution of height. While doing so, ask each of the 100 people for their net worth. Predict a mean and distribution from that as well. Then, take a gun, and shoot the tallest person and the richest person. Run your model again. Before you look at the results, tell me: which one do you expect shifted more?

I seriously hope you bet on the wealth model. Height, like marble-jar samples, is normally distributed. Wealth follows a power law, meaning that individual data points at the extremes have outsized impact. If you happen to live in Seattle and shoot a tech CEO, you may lower the mean net worth of the group by more than the average net worth of the other 99 people!
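
You can also run the experiment without the gun. A minimal simulation of the same point (my own illustration; the Pareto shape of 1.16 roughly matches an 80/20 wealth distribution, and the other parameters are arbitrary):

```python
import random
from statistics import mean

random.seed(42)
heights = [random.gauss(170, 10) for _ in range(100)]                # cm, Gaussian
wealths = [random.paretovariate(1.16) * 50_000 for _ in range(100)]  # power law

for name, xs in (("height", heights), ("wealth", wealths)):
    without_top = sorted(xs)[:-1]  # 'shoot' the single largest observation
    shift = (mean(xs) - mean(without_top)) / mean(xs) * 100
    print(f"dropping the top {name} shifts the mean by {shift:.1f}%")
# Typical run: the height mean shifts by well under 1%; the wealth mean
# can shift by tens of percent, depending on how extreme the largest
# single draw happens to be.
```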

So, unlike the Procrustean Bed (part 1 of the Seldon Fallacy), the Marble Jar (part 2) is not always a fallacy: some systems really do follow the Gaussian distribution. However, many consequential systems–including earnings, wars, governmental spending, economic crashes, bacterial resistance, inventions’ impacts, species survival, and climate shocks–are non-Gaussian, and thus the impact of a single individual action could blow up the model.

The crazy thing is, Asimov himself contradicts his own protagonist in his magnum opus (in my opinion). While the Foundation series keeps alive the myth of the predictive simulation, my favorite of his books–The End of Eternity (spoilers)–is a magnificent destruction of the concept of a ‘controlled’ world. For large systems, this book is also a death knell even of predictability itself. The Seldon Fallacy–that a simplified, non-chaotic model can predict a complex, chaotic reality, and that size enhances predictability–is shown, through the adventures of Andrew Harlan, to be riddled with hubris and catastrophic risk. I cannot reduce his complex ideas into a simple summary, for I may decapitate his central model. Please read the book yourself. I will say that I hope, as part of your reading, you take to heart the larger lesson of Asimov on predictability: it is not only impossible, but undesirable. And please, let’s avoid staking any of our futures on today’s false prophets of predictable randomness.

Necessity constrains even the gods

I was recently talking to my cofounder about the concept of “fuck-you” money. “Fuck-you” money is the point at which you no longer need to care what other people think: you can fund what you want without worrying about ending up broke–so long as you recognize the power of necessity.

It reminded me of three things I have read before. One is from the brilliant economist and historian Thomas Sowell, who wrote in A Conflict of Visions that ideological divides often crop up from the disagreement between “constrained” and “unconstrained” visions of the world and humanity. Effectively, the world contains some who recognize that humans have flaws that culture has helped us work through, but that we should be grateful for the virtues handed to us and understand that utopianism is dangerous self-deception. But it contains many others who see all human failings stemming from social injustices, since in nature, humans have no social problems. Those who line up behind Hobbes fight those who still believe in the noble savage and Rousseau’s perfect state of nature. To me, this divide encapsulates the question: did necessity emerge before human society? And if so, does it still rule us?

I know what the wisdom of antiquity says. The earliest cosmogonies–origin stories of the gods–identify Ananke (Necessity) as springing forth from the Earth herself, before the gods, and restricting even them. This story was passed on to Greek thinkers like Plato (Republic) and playwrights like Euripides (Alcestis), who found that human government and the fate of heroes also lie within the tragic world of necessity–necessity first, all else second.

Lastly, this reminds me of Nassim Nicholas Taleb’s Antifragile. He points out that the first virtue is survival, and that optionality is pure gain. Until you address necessity, your optionality–your choices and your chances–is fundamentally limited. As an entrepreneur who literally lives the risk of not surviving, I do not need to be convinced. Necessity rules even the gods, and it certainly rules those with “fuck-you” money. But it rules me even more. I am ruled by the fear that I may fail my family, myself, and my company at Maslow’s level of survival. Those with “fuck-you” money have at least moved to the level where they have chances to fail society. And the lesson from history, from mythology, and from surviving in the modern economy is not that one should just be resigned to reaching one’s limits. It is to strive to reach the level where you are pushing them, and the whole time to recognize the power of Necessity.

Triple-blinded trials in political economy

In medicine, randomized controlled trials are the most highly regarded type of primary study, as they separately track treatment and control groups to determine whether an observed effect is actually caused by the intervention.

Bias, the constant bane of statisticians, can be minimized further by completing a blinded trial. In a single-blinded trial, the patient population is not informed which group they are in, to prevent knowledge of therapy from impacting results. Placebos are powerful, so blinding has helped identify dozens of therapies that are no better than sugar pills!

However, knowledge can contaminate studies in another way–through the physicians administering the therapies. Bias can be further reduced by double blinding, in which the physicians are also kept in the dark about which therapy was administered, so that their knowledge does not contaminate their reporting of results. In a double-blind trial, only the study administrators know which therapy is applied to each patient, and sometimes an independent lab is tasked with analysis to further limit bias.
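
For the unfamiliar, here is a minimal sketch of the bookkeeping behind double blinding (illustrative code of my own, not any real trial’s protocol): the randomization key lives only with the study administrator, while patients and physicians see nothing but opaque patient IDs and identical-looking kits.

```python
import random

def randomize(patient_ids, seed=2020):
    """Assign each patient to treatment or placebo.
    The returned key is held only by the study administrator (or an
    independent lab); physicians and patients see only identical-looking
    kits labeled with the patient ID."""
    rng = random.Random(seed)
    return {pid: rng.choice(["treatment", "placebo"]) for pid in patient_ids}

key = randomize(["P001", "P002", "P003", "P004"])
# Outcomes are recorded against patient IDs alone; the key is joined to
# outcomes only at analysis time, after data collection ends.
```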

Overall, these blinding mechanisms are meant to make us more certain that the results of a study are reflective of an intervention’s actual efficacy. However, medicine is not the only field where the efficacy of many interventions is impactful, highly debated, and worthy of study. Why, then, do we not have blinded studies in political economy?

We all know that randomized controlled trials are pretty much impossible in political economy. North/South Korea and West/East Germany were amazing accidental trials, but we can still hope that politicians and economists make policies that can at least be tracked to determine their ‘change from baseline,’ even if we have no control group. Because of how easy it is to harm socioeconomic systems and sweep the ruinous results under the rug, I personally consider it unethical to intervene in a complex system without careful prior consideration, and straight-up evil to do so without plans to track the impact of that intervention. So, how can politicians take an ‘evidence-based approach’ to their interventions?

I think that, in recent years, politicians–especially in the US, and especially liberals and COVID-reactionaries–have come up with an amazing new experimental method: the triple-blinded study. Examples include the ACA, the ARRA, and the recent $3 trillion stimulus package. In a triple-blinded study, politicians carefully draft bills so that they are (1) too long for anyone, especially the politicians themselves, to read; (2) filled with a mish-mash of dozens of strategies, implemented simultaneously or delegated vaguely to administrative agencies; and (3) lacking any pre-specified metrics by which the policy will be judged, thus blinding everyone to any useful study of signal and response.

I am reminded of one of the most painful West Wing episodes ever made, in which President Bartlet is addressing an economic crisis and fielding dozens of suggestions from experts–without being able to choose among the candidate interventions. Donna, assistant to his Deputy Chief of Staff, tells a parable about how her grandmother would use ‘a little bit of this, a little bit of that’ to cure minor illnesses. Inspired, Bartlet adopts a policy of ALL suggested economic interventions, thus ensuring that we try everything–and learn nothing. I shudder to think that this strategy was ever broached publicly…and copied from fiction into reality.

In this way, politicians have cleverly enabled us to reduce the bias caused by any knowledge of the intervention or its impact. The patients (citizens), physicians (politicians), and study administrators (economists?) are all kept carefully in the dark so that none of them can know how a policy impacted the economy. Thus, anyone debating any of these topics is given the full freedom to invent whatever argument they want, cherry-pick any data they want, and continue peddling their politics without ever being called to task by the data.

Even more insanely, doctors are held not only to the standard of evidence-based medicine, but also to that of the precautionary principle–where passivity is preferred to action and novel methods are treated with special scrutiny. “Evidence-based policy”, on the other hand, is a buzzword, not an actual practice aligned with RCTs, and any politician who actually followed the precautionary principle would be considered ‘do-nothing’. Thus, we carefully keep both evidence and principles of ‘do no harm’ far from the realm of political action, and continue a general practice across politics of the blind making sure that they lead the blind.

In sum, political leaders, please ignore Donna. Stop intentionally blinding us to policy impacts. Stop doing triple-blinded studies with the future of our country. Sincerely, all data-hounds, ever.

Why snipers have spotters

Imagine two highly skilled snipers choosing and eliminating targets in tandem. Now imagine I take away one of their rifles, but leave him his scope. How much do you expect their effectiveness to decrease?

Surprisingly, there is a strong case that this will actually increase their combined sniping competence. As an economist would point out, this stems from specialization: the sniper sacrifices total situational awareness to improve accurate intervention, and the spotter sacrifices the ability to intervene to improve awareness and planning. We can push out beyond the production possibilities curve.

It is also a result of communication. Two independent snipers pick their own shots, and may over-kill a target or miss a pressing threat. By explicitly designating roles, the sniper can depend on the spotter for guidance, and the two-person system gives both parties more usable information than their cumulative but separate knowledge would provide.

There are also long-term positive impacts, likely escaping an economist’s models, from switching off in each role or from an apprenticeship model. Eye fatigue that limits accuracy, and mental fatigue that may result from constant awareness, can be eliminated by taking turns. Also, if a skilled sniper has a novice spotter, the spotter observes the sniper’s tactics and can assimilate best practices–and the sniper, by previously working as a spotter, can be more productively empathetic. The system naturally encourages learning and improvement.

I love the sniper-spotter archetype, because it clarifies the advantages of:

  • Going from zero to one: Between two independent snipers, there are zero effective lines of communication. Between a sniper and a spotter, there is one. This interaction unlocks potential held in both.
  • More from less: Many innovate by adding new things; however, anti-fragile innovations are more likely to come from removing unnecessary things than by adding new ones.
  • Not the number of people, the number of interactions: Interactions have advantages (specialization, coordination) and disadvantages (communication friction, lack of individual decision-making responsibilities). Scrutinize which interactions you want on your teams and which to avoid (the arithmetic is sketched after this list).
  • Isolation: Being connected to everyone promotes noise over signal. It also promotes focusing on competitors over opportunities and barriers over permissionless innovation.
  • Separate competencies, shared goals and results: To make working together worth it, define explicit roles that match each individual’s competencies. Then, so long as you have vision alignment, all team members know what they are seeking and how they will be depended upon to succeed.
  • Iterative learning and feedback: Systems that promote self-improvement of their parts outperform systems that do not. Also, at the end of the day, education comes from experimentation and observation of new phenomena, balanced on the edge between known and unknown practices.
  • Establish ‘common knowledge’: Communication failures and frictions often occur because independent people assume others share the same set of ‘common knowledge’. If you make communication the root of success, then so long as the group is small enough to actually have–and know it has–the same set of ‘common knowledge’, it can act confidently on these shared assumptions.
  • Delegation as productivity: Recognize that doing more does not mean more gets done. Without encouraging slacking off, explicitly rewarding individuals for choosing the right things to delegate and executing effectively will get more from less.
  • Cheating Goodhart: Goodhart’s Law states that when a metric of success becomes the goal, it ceases to be a good metric. If you make the metric of success joint, rather than individual, and shape its incentives to match your vision, your metrics will create an atmosphere bent on achieving your actual goals.
  • Leadership is empowerment: Good leaders don’t tell people what to do, they inform, support, listen, and match people’s abilities and passions to larger purpose.
  • Smallness: Small is reactive, flexible, cohesive, connected, fast-moving, accurate, stealthy, experimental, permissionless, and, counterintuitively, scalable.
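
On “the number of interactions,” the arithmetic is worth seeing: potential lines of communication grow quadratically while headcount grows linearly. A minimal illustration:

```python
# n people have n*(n-1)/2 potential pairwise lines of communication.
def pairwise_interactions(n):
    return n * (n - 1) // 2

for n in (2, 5, 10, 50):
    print(f"{n:2d} people -> {pairwise_interactions(n):4d} possible interactions")
# 2 -> 1, 5 -> 10, 10 -> 45, 50 -> 1225: every hire multiplies the
# interactions you must scrutinize, which is one argument for smallness.
```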

My most recent encounter with “sniper and spotter” is in my sister’s Montessori classroom (ages 3-6). She is an innovative educator who noticed that her public school position was rife with top-down management, politics, and perverse incentives, and was not finding systems to promote curiosity or engagement. She has applied the “sniper and the spotter” after noticing that children thrive best in either one-on-one, responsive guidance, where the instructor is totally dedicated to the student, or when left to their own devices in a materials-rich environment, engaging in discovery (or working with other children, or even teaching what they have already learned to newcomers). However, believe it or not, three-year-olds can often cause disruptions or even physical threats if left totally without supervision.

She therefore promotes a teaching model with two teachers, one who watches for children’s safety and minimizes disruptiveness. This frees the other teacher to rove student-to-student and give either individual or very-small-group attention. The two teachers communicate to plan next steps, and to ‘spot’ children who most need intervention. This renders ‘class size’ a stupid metric: what matters is how much one-on-one guidance plus permissionless discovery a child engages in. It is also a “barbell” strategy: instead of wallowing in the mediocrity of “group learning”, children get the most out of the two extremes–total attention and just-enough-attention-to-remain-safe.

PS: On Smallness, Jeff Bezos has promised $1 billion to support education innovation. Despite starting before my sister, he has so far opened as many classrooms as she has: one. As the innovator behind the ‘two-pizza meeting’, I wish Bezos would start with many small experiments in education rather than big public dedications, so he could nurture innovation and select strategies for success.

I would love to see more examples of “sniper and spotter” approaches in the comments…but no sniping please 🙂

Game theory in the wild

Game theory is an amazing way to simulate reality, and I strongly recommend that any business leader educate herself on its underlying concepts. However, I have found that the way it is constructed in economics and political science papers has limited connection to the real world–apart from nuclear weapons strategies, of course.

If you are not a mathematician or economist, you don’t really have time to assign exact payoffs to outcomes or calculate an optimal strategy. Instead, you can either guess, or you can use the framework of game theory–but none of the math–to make rapid decisions that cohere to its principles, and thus avoid being a sucker (at least some of the time).

As Yogi Berra didn’t say, “In theory, there is no difference between practice and theory. In practice, there is.” As a daily practitioner of game theory, here are some of its assumptions that I literally had to throw out to make it actually work:

  • Established/certain boundaries on utility: Lots of games bound utility (often from 0 to 1, or -1 to 1, etc. for each individual). Throw away those games, as they preferenced easier math over representation of random, infinite realities, where the outcomes are always more uncertain and tend to be unbounded.
  • Equating participants: Similar to the above, most games have the same utility boundaries for all participants, when in reality it literally always varies. I honestly think that game theorists would model out the benefits of technology based on the assumption that a Sumerian peasant in 3000 BC and an American member of the service economy in 2020 can have equivalent utility. That is dumb.
  • Unchanging calculations: In part because of the uncertainty and asymmetries mentioned above, no exact representation of a game sticks around–instead, the equation constantly shifts as participants change, and utility boundaries move (up with new tech, down with new regs, etc). That is why the math is subordinate to structure: if you are right about the participants, the pathways, and have an OK gut estimate of the payoff magnitudes, you can decide rapidly and then shift your equation as the world changes.
  • Minimal feedback/second-order effects: Some games have signal-response, but it is hard to abstract the concept that all decisions enter a complex milieu of interacting causes and effects where the direction of the arrows is hard to map. Since you can’t model them, just try to guess: what will the response to the game outcome be? Focus on feedback loops–they hold secrets to unbounded long-term utilities.
  • The game ends: Obviously, since games are abstractions, it makes sense to tie them up nicely in one set of inputs and then a final set of outputs. In reality, there is really only one game, and each little representation is a snapshot of life. That means that many games forget that the real goal of the game is to stay in it (a point sketched in code below).
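
A minimal simulation of that last point (my own illustration; the 60% win rate and the 20% stake, which happens to be the Kelly fraction for that edge, are assumptions for demonstration): a bettor with an edge who stakes everything each round almost surely exits the game, while a fractional bettor stays in it.

```python
import random

def survival_rate(stake_fraction, win_prob=0.6, rounds=200, trials=10_000):
    """Fraction of simulated players still solvent after repeated
    even-money bets, staking a fixed fraction of bankroll each round."""
    survived = 0
    for _ in range(trials):
        bankroll = 1.0
        for _ in range(rounds):
            stake = bankroll * stake_fraction
            bankroll += stake if random.random() < win_prob else -stake
        if bankroll > 0:
            survived += 1
    return survived / trials

print(survival_rate(1.0))  # ~0.0: one loss zeroes the bankroll, game over
print(survival_rate(0.2))  # 1.0: fractional stakes shrink but never hit zero
```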

The examples above–good rules of thumb for practitioners, certain to be subject to quibbling by any academic reader–remind me of how wrong even the history of game theory is. As with many oversights by historians of science, the attribution of game theory’s invention credits the first theoretician (John von Neumann, who was smart enough to both practice and theorize), not the first practitioner (probably lost to history–but certainly by the 1600s, as Pascal’s Wager lines up better with “game theory in the wild” in that he used infinite payoffs and actually did become religious). Practitioners, I would ignore the conventional history, theory, actual math, and long papers. Focus on easily used principles and heuristics that capture uncertainty, unboundedness, and asymmetries. Some examples:

  • Principle: Prediction is hard. Don’t do it if you can help it.
  • Heuristic: Bounded vs. Unbounded. Magnitude is easier to measure (or at least cap) than likelihood is.

  • Principle: Every variable introduces more complexity and uncertainty.
  • Heuristic: Make decisions for one really good reason. If your best reason is not enough, don’t depend on accumulation.

  • Principle: One-time experiments don’t optimize.
  • Heuristic: If you actually want to find useful methods, iterate.

  • Principle: Anything that matters (power, utility, etc.) tends to be unequally distributed.
  • Heuristic: Ignore the middle. Either make one very rich person very happy (preferred) or make most people at least a little happier. Or pull a barbell strategy if you can.

  • The Academic Certainty Principle: Mere observation of reality by academics inevitably means they don’t get it. (Actually a riff on observer effects, not Heisenberg, but the name is catchier this way.)
  • Heuristic: In game theory as in all academic ideas, if you think an academic stumbled upon a good practice, try it–but assume you will need trial and error to get it right.

  • Principle: Since any action has costs, ‘infinite’ payoffs, in reality, come from dividing by zero.
  • The via negativa: Your base assumption should be inaction, followed by action to eliminate cost. Be very skeptical of “why not” arguments.

So, in summary, most specific game theories are broken because they preference math (finite, tidy, linear) over practice (interconnected, guess-based, asymmetric). That does not mean you can’t use game theory in the wild, it just means that you should focus on structure over math, unbounded/infinite payoffs over solvable games, feedback loops over causal arrows, inaction over action, extremes over moderates, and rules of thumb over quibbles.

Good luck!

Why the US is behind in FinTech, in two charts

The US is frankly terrible at innovation in banking. When Kenya and its neighbors have faster adoption of mobile banking–as they have had since at least 2012–it is time to reconsider our approach.

Here is the problem: we made new ideas in banking de facto illegal. Especially since the 2008 financial crisis, regulatory bodies (especially the CFPB) have piled on a huge amount of potential liability that scares away any new entrant. Don’t believe me? Let’s look at the data:

[Chart: new bank creation in the US]

Notice anything about new bank creation in the US after 2008?

A possible explanation, in a “helpful resource” provided to banking regulators and lawyers for banks:

[Chart: US banking regulatory complexity]

This shows 8 federal agencies reporting to the FSOC, plus independent regulators for fintech (OFAC/FinCEN). Also, the “helpful” chart notes state regulations just as an addendum in a circle…probably because covering them would take 50 more, equally complex and contradictory, charts.

So, my fellow citizens, don’t innovate in banking. No one else is, and they are probably right not to.

The Blind Entrepreneur

Entrepreneurs usually make decisions with incomplete information, in disciplines where we lack expertise, and where time is vital. How, then, can we be expected to make decisions that lead to our success, and how can other people judge our startups’ potential value? And even if there are heuristics for startup value, how can they cross fields?

The answer, to me, comes from a generalizable system for improvement and growth that has proven itself–the blind watchmaker of evolution. In this paradigm, the crucial method by which genes promulgate themselves is not predicting their environments, but promiscuity and opportunism in a random, dog-eat-dog world. By this, I mean that successful genes free-ride on or resonate with other genes that promote reproductive success (promiscuity) and select winning strategies by experimenting in the environment and letting reality determine which gene-pairings to try more often (opportunism). Strategies that are either robust or anti-fragile usually outperform fragile and deleterious strategies, and strategies that exist within an evolutionary framework that enables rapid testing, learning, mixing, and sharing (such as sexual reproduction or lateral gene transfer paired with fast generations) outperform those that do not (such as cloning), as shown by the Red Queen hypothesis.

OK, so startups are survival/reproductive vehicles and startup traits/methods are genes (or memes, in the Selfish Gene paradigm). With analogies, we should throw out what is different and keep what is useful, so what do we need from evolution?

First, one quick note: we can’t borrow the payout calculator exactly. Reproductive success is where a gene makes more of itself, but startups don’t make more of themselves; for startups, the best metric is probably money. Other than that, what adaptations are best to adopt? Or, in the evolutionary frame, what memes should we imbue in our survival vehicles?

Traits to borrow:

  • Short lives: long generations mean the time between trial and error is too long. Short projects, short-term goals, and concrete exits.
  • Laziness: energy efficiency is far more important than #5 on your priority list.
  • Optionality: when all things are equal, more choices = more chances at success.
  • Evolutionarily Stable Strategies: also called “don’t be a sucker.”
  • React, don’t plan: prediction is difficult or even impossible, but being quick to jump into the breach has the same outcome. Could also be called “prepare, but don’t predict.”
  • Small and many: big investments take a lot of energy and effectively become walking targets. Make small and many bets on try-outs and then feed those that get traction (see the sketch after this list). Note–this is also how to run a military!
  • Auftragstaktik: should be obvious, central planning never works. Entrepreneurs should probably not make any more decisions than they have to.
  • Resonance: I used to call this “endogenous positive feedback loops,” but that doesn’t roll off the tongue. In short, pick traits that make your other traits more powerful–and even better if all of your central traits magnify your other actions.
  • Taking is better than inventing: It’s not a better startup if it’s all yours. It’s a better startup if you ruthlessly pick the best idea.
  • Pareto distributions (or really, power laws): Most things don’t really matter. Things that matter, matter a lot.
  • Finite downside, infinite upside: Taleb calls this “convexity”. Whenever presented with a choice that has one finite and one infinite potential, forget about predicting what will happen–focus on the impact’s upper bound in both directions. It goes without saying–avoid infinite downsides!
  • Don’t fall behind (debt): The economy is a Red Queen: anyone carrying anything heavy will continually fall behind. Debt is also the most likely way companies die.
  • Pay it forward to your future self: squirrels bury nuts; you should build generic resources as well.
  • Don’t change things: Intervening takes energy and hurts diversity.
  • Survive: You can’t win if you’re not in the game. More important than being successful is being not-dead.
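
To make “small and many” concrete, here is a minimal simulation (my own numbers: each bet is a long shot with an assumed 10% hit rate, and expected value is identical across strategies):

```python
import random

def total_wipeout_prob(n_bets, p_win=0.10, trials=100_000):
    """Probability that every bet in a portfolio of n_bets equal,
    independent long-shot bets fails."""
    wipeouts = sum(
        all(random.random() > p_win for _ in range(n_bets))
        for _ in range(trials)
    )
    return wipeouts / trials

print(total_wipeout_prob(1))   # ~0.90: one big bet usually dies outright
print(total_wipeout_prob(20))  # ~0.12 (0.9**20): same capital, 20 small bets
# Same expected value, but 'small and many' nearly eliminates the
# walking-target outcome of losing everything at once.
```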

When following these guidelines, there are two other differences between entrepreneurs and genes. One, genes largely exist in an amoral state, whereas your business is vital to your own life and, if you picked a worthwhile idea, to society. Two, unlike evolution, you actually have goals and are trying to achieve something beyond replication, beyond even money. Therefore, you do not need to take your values from evolution. However, if you ignore its lessons, you close your eyes to reality and are truly blind.

Our “blind” entrepreneur, then, can still pick goals and construct what she sees as her utility. But to achieve the highest utility, once defined, she will create unknowable and unpredictable risk of her idea’s demise if she does not learn to grow the way that the blind watchmaker does.

Broken incentives in medical research

Last week, I sat down with Scott Johnson of the Device Alliance to discuss how medical research is communicated only through archaic and disorganized methods, and how the root of this is the “economy” of Impact Factor, citations, and tenure-seeking as opposed to an exercise in scientific communication.

We also discussed a vision of the future of medical publishing, where the basic method of communicating knowledge was no longer uploading a PDF but contributing structured data to a living, growing database.

You can listen here: https://www.devicealliance.org/medtech_radio_podcast/

As background, I recommend the recent work by Patrick Collison and Tyler Cowen on broken incentives in medical research funding (as opposed to publishing), as I think their research on funding shows that a great slow-down in medical innovation has resulted from systematic errors in organizing knowledge gathering. Mark Zuckerberg actually interviewed them about it here: https://conversationswithtyler.com/episodes/mark-zuckerberg-interviews-patrick-collison-and-tyler-cowen/.

Launching our COVID-19 visualization

I know everyone is buried in COVID-19 news, updates, and theories. To me, that makes it difficult to cut through the opinions and see the evidence that should actually guide physicians, policymakers, and the public.

To me, the most important thing is the ability to find the answer to my research question easily and to know that this answer is reasonably complete and evidence-driven. That means getting organized access to the scientific literature. Many sources (including a National Library of Medicine database) present thousands of articles, but the organization is the piece that is missing.

That is why I launched StudyViz, a new product that enables physicians to build an updatable visualization of all studies related to a topic of interest. Then, my physician collaborators built just such a visual for COVID-19 research, presenting a sunburst diagram that users can navigate to identify their research question of interest.

[Image: StudyViz sunburst diagram]

For instance, if you are interested in the impact of COVID-19 on pregnant patients, just go to “Subpopulations” and find “Pregnancy” (or neonates, if that is your concern). We nested the tags so that you can “drill down” on your question, and so that related concepts are close to each other. Then, to view the studies themselves, just click on one to see its abstract with the key info (patients, interventions, and outcomes) highlighted:

[Image: abstract view with key information highlighted]

This is based on a complex concept hierarchy built by our collaborators, which is constantly evolving as the literature does:

[Image: concept hierarchy]

Even beyond that, we opened up our software to let any researchers who are interested build similar visuals on any disease state, as COVID-19 is not the only disease for which organizing and accessing the scientific literature is important!

We are seeking medical co-investigators–any physician interested in working with us can simply email contact@nested-knowledge.com or contact us on our website!

A History of Plagues

As COVID-19 continues to spread, fears and extraordinary predictions have also gone viral. As we face a new infectious threat, we do not know how the new traits of our societies worldwide, or of this novel coronavirus itself, will impact its spread. Though no two pandemics are equivalent, I thought it best to face this new threat armed with knowledge from past infectious episodes. The best inoculation against a plague of panic is to use evidence gained through billions of deaths, thousands of years, and a few vital breakthroughs to prepare our knowledge of today’s biological crises, social prognosis, and choices.

Below, I address three key questions: First, what precedents do we have for infections with catastrophic potential across societies? Second, what are the greatest killers and how do pandemics compare? Lastly, what are our greatest accomplishments in fighting infectious diseases?

As a foundation for understanding how threats like COVID-19 come about and how their hosts fight back, I recommend reading The Red Queen concerning the evolutionary impact and mechanisms of host-disease competition, and listening to Sam Harris’ “The Plague Years” podcast with Matt McCarthy from August 2019, which predated COVID-19 but had a strangely prophetic discussion of in-hospital strategies to mitigate drug resistance and their direct relation to evolutionary competition.

  • The Biggest Killers:

Infectious diseases plagued humanity throughout prehistory and history, with a dramatic decrease in the number of infectious disease deaths coming in the past 200 years. In 1900, the leading killers of people were (1) Influenza, (2) Tuberculosis, and (3) Intestinal diseases, whereas now we die from (1) Heart disease, (2) Cancer, and (3) Stroke, all chronic conditions. This graph shows not that humans have vanquished infectious disease as a threat, but that in the never-ending war of evolutionary one-upmanship, we have won battles consistently from 1920 forward. When paired with Jonathan Haidt’s Most Important Graph in the World, this vindicates humanity’s methods of scientific and economic progress toward human flourishing.

[Chart: death rates by cause over time]

However, if the CDC had earlier data, it would show a huge range of diseases that dwarf wars and famines and dictators as causes of death in the premodern world. If we look to the history of plagues, we are really looking at the history of humanity’s greatest killers.

The sources on the history of pandemics are astonishingly sparse/non-comprehensive. I created the following graphs only by combining evidence and estimates from the WHO, CDC, Wikipedia, Our World in Data, VisualCapitalist, and others (lowest estimates shown where ranges were presented) for both major historic pandemics and for ongoing communicable disease threats. This is not a complete dataset, and I will continue to add to it, but it shows representative death counts from across major infectious disease episodes, as well as the death rate per year based on world population estimates. See the end of this post for the full underlying data. First, the top 12 “plagues” in history:

[Chart: the top 12 plagues in history by death toll]

Note: blue=min, orange=max across the sources I examined. For ongoing diseases with year-by-year WHO evidence, like tuberculosis, measles, and cholera, I grouped mortality in 5-year spans (except AIDS, which does not have good estimates from the 1980s-90s, so I reported based on total estimated deaths).
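
For transparency, the rate column in these charts and in the full table at the end of the post reduces to a single formula: deaths, annualized over the inclusive year range, per 100,000 of the contemporary world population. A minimal sketch (the ~200 million world population for 541 AD is my assumption, consistent with the table’s output):

```python
def deaths_per_100k_per_year(deaths, start_year, end_year, world_population):
    """Annualized death rate per 100,000 people worldwide."""
    years = max(end_year - start_year + 1, 1)  # inclusive year range
    return deaths / years / world_population * 100_000

# Plague of Justinian: 25M deaths over 541-542, world population ~200M
print(deaths_per_100k_per_year(25_000_000, 541, 542, 200_000_000))  # 6250.0
```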

Now, let’s look at the plagues that were lowest on my list (numbers 55-66). Again, my list was not comprehensive, but this should provide context for COVID-19:

[Chart: plagues 55-66 by death toll, including COVID-19]

As we can see, COVID-19, with its 11,400 deaths, recently passed Ebola to take the 61st place (out of 66) on our list of plagues. Note again that several ongoing diseases were recorded in 5-year increments, and COVID-19 still comes in under the death rates for cholera. Even more notably, it has 0.015% as many victims as the plague of the 14th century.

  • In Context of Current Infectious Diseases:

For recent and ongoing diseases, it is easier to compare year-by-year data. Adding UNAIDS to our sources, we found the following death rates across some of the leading infectious killers. Again, this is not comprehensive, but it helps put COVID-19 (the small red dot, so far in the first 3 months of 2020) in context:

[Chart: deaths per year from leading infectious diseases]

Note: darker segments of lines are my own estimates; full data at the bottom of the post. I did not include influenza due to the lack of good year-by-year sources, but a Lancet article estimated 291,000-645,000 deaths from influenza per year based on data from 1999-2015.

None of this is to say that COVID-19 is not a major threat to human health globally–it is, and precautions could save lives. However, it should show us that there are major threats to human health globally all the time, and that we must continue to fight them. These trendlines tend to be going in the right direction, but our war for survival has many foes and will have more emerge in the future, and we should expend our resources in fighting them rationally, based on the benefits to human health, not panic or headlines.

  • The Eradication List:

As we think about the way to address COVID-19, we should keep in mind that this fight against infectious disease builds upon work so amazing that most internet junkies approach new infectious diseases with fear of the unknown, rather than tired acceptance that most humans succumb to them. That is a recent innovation in the human experience, and the strategies used to fight other diseases can inform our work now to reduce human suffering.

While influenzas may be impossible to eradicate (in part due to an evolved strategy of constantly changing antigens), I wanted to direct everyone to an ever-growing monument to human achievement, the Eradication List. While humans have eradicated only a few infectious diseases, the amazing thing is that we can discuss which diseases may in fact disappear as threats through the work of scientists.

On that happy note, I leave you here. More History of Plagues to come, in Volume 2: Vectors, Vaccines, and Virulence!

| Disease | Start Year | End Year | Death Toll (low) | Death Toll (high) | Deaths per 100,000 People per Year (global) |
| --- | --- | --- | --- | --- | --- |
| Antonine Plague | 165 | 180 | 5,000,000 | 5,000,000 | 164.5 |
| Plague of Justinian | 541 | 542 | 25,000,000 | 100,000,000 | 6,250.0 |
| Japanese Smallpox Epidemic | 735 | 737 | 1,000,000 | 1,000,000 | 158.7 |
| Bubonic Plague | 1347 | 1351 | 75,000,000 | 200,000,000 | 4,166.7 |
| Smallpox (Central and South America) | 1520 | 1591 | 56,000,000 | 56,000,000 | 172.8 |
| Cocoliztli (Mexico) | 1545 | 1545 | 12,000,000 | 15,000,000 | 2,666.7 |
| Cocoliztli resurgence (Mexico) | 1576 | 1576 | 2,000,000 | 2,000,000 | 444.4 |
| 17th Century Plagues | 1600 | 1699 | 3,000,000 | 3,000,000 | 6.0 |
| 18th Century Plagues | 1700 | 1799 | 600,000 | 600,000 | 1.0 |
| New World Measles | 1700 | 1799 | 2,000,000 | 2,000,000 | 3.3 |
| Smallpox (North America) | 1763 | 1782 | 400,000 | 500,000 | 2.6 |
| Cholera Pandemic (India, 1817-60) | 1817 | 1860 | 15,000,000 | 15,000,000 | 34.1 |
| Cholera Pandemic (International, 1824-37) | 1824 | 1837 | 305,000 | 305,000 | 2.2 |
| Great Plains Smallpox | 1837 | 1837 | 17,200 | 17,200 | 1.7 |
| Cholera Pandemic (International, 1846-60) | 1846 | 1860 | 1,488,000 | 1,488,000 | 8.3 |
| Hawaiian Plagues | 1848 | 1849 | 40,000 | 40,000 | 1.7 |
| Yellow Fever | 1850 | 1899 | 100,000 | 150,000 | 0.2 |
| The Third Plague (Bubonic) | 1855 | 1855 | 12,000,000 | 12,000,000 | 1,000.0 |
| Cholera Pandemic (International, 1863-75) | 1863 | 1875 | 170,000 | 170,000 | 1.1 |
| Indian Smallpox | 1868 | 1907 | 4,700,000 | 4,700,000 | 9.8 |
| Franco-Prussian Smallpox | 1870 | 1875 | 500,000 | 500,000 | 6.9 |
| Cholera Pandemic (International, 1881-96) | 1881 | 1896 | 846,000 | 846,000 | 4.4 |
| Russian Flu | 1889 | 1890 | 1,000,000 | 1,000,000 | 41.7 |
| Cholera Pandemic (India and Russia) | 1899 | 1923 | 1,300,000 | 1,300,000 | 3.3 |
| Cholera Pandemic (Philippines) | 1902 | 1904 | 200,000 | 200,000 | 4.2 |
| Spanish Flu | 1918 | 1919 | 40,000,000 | 100,000,000 | 1,250.0 |
| Cholera (International, 1950-54) | 1950 | 1954 | 316,201 | 316,201 | 2.4 |
| Cholera (International, 1955-59) | 1955 | 1959 | 186,055 | 186,055 | 1.3 |
| Asian Flu | 1957 | 1958 | 1,100,000 | 1,100,000 | 19.1 |
| Cholera (International, 1960-64) | 1960 | 1964 | 110,449 | 110,449 | 0.7 |
| Cholera (International, 1965-69) | 1965 | 1969 | 22,244 | 22,244 | 0.1 |
| Hong Kong Flu | 1968 | 1970 | 1,000,000 | 1,000,000 | 9.4 |
| Cholera (International, 1970-74) | 1970 | 1974 | 62,053 | 62,053 | 0.3 |
| Cholera (International, 1975-79) | 1975 | 1979 | 20,038 | 20,038 | 0.1 |
| Cholera (International, 1980-84) | 1980 | 1984 | 12,714 | 12,714 | 0.1 |
| AIDS | 1981 | 2020 | 25,000,000 | 35,000,000 | 13.8 |
| Measles (International, 1985-89) | 1985 | 1989 | 4,800,000 | 4,800,000 | 19.7 |
| Cholera (International, 1985-89) | 1985 | 1989 | 15,655 | 15,655 | 0.1 |
| Measles (International, 1990-94) | 1990 | 1994 | 2,900,000 | 2,900,000 | 10.9 |
| Cholera (International, 1990-94) | 1990 | 1994 | 47,829 | 47,829 | 0.2 |
| Malaria (International, 1990-94) | 1990 | 1994 | 3,549,921 | 3,549,921 | 13.3 |
| Measles (International, 1995-99) | 1995 | 1999 | 2,400,000 | 2,400,000 | 8.4 |
| Cholera (International, 1995-99) | 1995 | 1999 | 37,887 | 37,887 | 0.1 |
| Malaria (International, 1995-99) | 1995 | 1999 | 3,987,145 | 3,987,145 | 13.9 |
| Measles (International, 2000-04) | 2000 | 2004 | 2,300,000 | 2,300,000 | 7.5 |
| Malaria (International, 2000-04) | 2000 | 2004 | 4,516,664 | 4,516,664 | 14.7 |
| Tuberculosis (International, 2000-04) | 2000 | 2004 | 7,890,000 | 8,890,000 | 25.7 |
| Cholera (International, 2000-04) | 2000 | 2004 | 16,969 | 16,969 | 0.1 |
| SARS | 2002 | 2003 | 770 | 770 | 0.0 |
| Measles (International, 2005-09) | 2005 | 2009 | 1,300,000 | 1,300,000 | 4.0 |
| Malaria (International, 2005-09) | 2005 | 2009 | 4,438,106 | 4,438,106 | 13.6 |
| Tuberculosis (International, 2005-09) | 2005 | 2009 | 7,210,000 | 8,010,000 | 22.0 |
| Cholera (International, 2005-09) | 2005 | 2009 | 22,694 | 22,694 | 0.1 |
| Swine Flu | 2009 | 2010 | 200,000 | 500,000 | 1.5 |
| Measles (International, 2010-14) | 2010 | 2014 | 700,000 | 700,000 | 2.0 |
| Malaria (International, 2010-14) | 2010 | 2014 | 3,674,781 | 3,674,781 | 10.6 |
| Tuberculosis (International, 2010-14) | 2010 | 2014 | 6,480,000 | 7,250,000 | 18.6 |
| Cholera (International, 2010-14) | 2010 | 2014 | 22,691 | 22,691 | 0.1 |
| MERS | 2012 | 2020 | 850 | 850 | 0.0 |
| Ebola | 2014 | 2016 | 11,300 | 11,300 | 0.1 |
| Malaria (International, 2015-17) | 2015 | 2017 | 1,907,872 | 1,907,872 | 8.6 |
| Tuberculosis (International, 2015-18) | 2015 | 2018 | 4,800,000 | 5,440,000 | 16.3 |
| Cholera (International, 2015-16) | 2015 | 2016 | 3,724 | 3,724 | 0.0 |
| Measles (International, 2019) | 2019 | 2019 | 140,000 | 140,000 | 1.8 |
| COVID-19 | 2019 | 2020 | 11,400 | 11,400 | 0.1 |
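
For transparency on the final column: the rate appears to be the low-end death toll spread over the outbreak's (inclusive) duration and normalized to the global population of the era. A sketch of that calculation, where the world-population figure is my own rough assumption:

```python
# Normalization used in the last column (sketch; population is an assumption):
#   rate = deaths / duration_in_years / global_population * 100,000
def deaths_per_100k_per_year(deaths, start_year, end_year, population):
    years = end_year - start_year + 1  # inclusive of both endpoints
    return deaths / years / population * 100_000

# Plague of Justinian: 25M low-end deaths, 541-542, assuming ~200M people alive
print(round(deaths_per_100k_per_year(25_000_000, 541, 542, 200_000_000), 1))
# -> 6250.0, matching the table
```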


Blank cells indicate years for which my sources did not report data.

| Year | Malaria | Cholera | Measles | Tuberculosis | Meningitis | HIV/AIDS | COVID-19 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1990 | 672,518 | 2,487 | 670,000 | | 1,903 | 310,000 | |
| 1991 | 692,990 | 19,302 | 550,000 | | 1,777 | 360,000 | |
| 1992 | 711,535 | 8,214 | 700,000 | | 2,482 | 440,000 | |
| 1993 | 729,735 | 6,761 | 540,000 | | 1,986 | 540,000 | |
| 1994 | 743,143 | 10,750 | 540,000 | | 3,335 | 620,000 | |
| 1995 | 761,617 | 5,045 | 400,000 | | 4,787 | 720,000 | |
| 1996 | 777,012 | 6,418 | 510,000 | | 3,325 | 870,000 | |
| 1997 | 797,091 | 6,371 | 420,000 | | 5,254 | 1,060,000 | |
| 1998 | 816,733 | 10,832 | 560,000 | | 4,929 | 1,210,000 | |
| 1999 | 834,692 | 9,221 | 550,000 | | 2,705 | 1,390,000 | |
| 2000 | 851,785 | 5,269 | 555,000 | 1,700,000 | 4,298 | 1,540,000 | |
| 2001 | 885,057 | 2,897 | 550,000 | 1,680,000 | 6,398 | 1,680,000 | |
| 2002 | 911,230 | 4,564 | 415,000 | 1,710,000 | 6,122 | 1,820,000 | |
| 2003 | 934,048 | 1,894 | 490,000 | 1,670,000 | 7,441 | 1,965,000 | |
| 2004 | 934,544 | 2,345 | 370,000 | 1,610,000 | 6,428 | 2,003,000 | |
| 2005 | 927,109 | 2,272 | 375,000 | 1,590,000 | 6,671 | 2,000,000 | |
| 2006 | 909,899 | 6,300 | 240,000 | 1,550,000 | 4,720 | 1,880,000 | |
| 2007 | 895,528 | 4,033 | 170,000 | 1,520,000 | 7,028 | 1,740,000 | |
| 2008 | 874,087 | 5,143 | 180,000 | 1,480,000 | 4,363 | 1,630,000 | |
| 2009 | 831,483 | 4,946 | 190,000 | 1,450,000 | 3,187 | 1,530,000 | |
| 2010 | 788,442 | 7,543 | 170,000 | 1,420,000 | 2,198 | 1,460,000 | |
| 2011 | 755,544 | 7,781 | 200,000 | 1,400,000 | 3,726 | 1,400,000 | |
| 2012 | 725,676 | 3,034 | 150,000 | 1,370,000 | 3,926 | 1,340,000 | |
| 2013 | 710,114 | 2,102 | 160,000 | 1,350,000 | 3,453 | 1,290,000 | |
| 2014 | 695,005 | 2,231 | 120,000 | 1,340,000 | 2,992 | 1,240,000 | |
| 2015 | 662,164 | 1,304 | 150,000 | 1,310,000 | | 1,190,000 | |
| 2016 | 625,883 | 2,420 | 90,000 | 1,290,000 | | 1,170,000 | |
| 2017 | 619,825 | | 100,000 | 1,270,000 | | 1,150,000 | |
| 2018 | | | | 1,240,000 | | | |
| 2019 | | | | | | | |
| 2020 | | | | | | | 16,514 |
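
As a sanity check, the five-year malaria buckets in the historical table above are exact sums over this yearly table:

```python
# Malaria 1990-94, summed from the yearly table above
malaria_1990_94 = [672_518, 692_990, 711_535, 729_735, 743_143]
print(f"{sum(malaria_1990_94):,}")  # -> 3,549,921, matching "Malaria (International, 1990-94)"
```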

Broken Incentives in Medical Innovation

I recently listened to Mark Zuckerberg interviewing Tyler Cowen and Patrick Collison about their thesis that the process of using scientific research to advance major development goals (e.g., extending the average human lifespan) has stagnated. It is a fascinating discussion that fundamentally questions how scientific research is currently conducted.

Their conversation also made me consider more deeply the incentives in my own industry, medical R&D, that have shaped the practices Cowen and Collison find so problematic. While there are many proposed reasons for the difficulty of maintaining a breakneck pace of technological progress (“all the easy ideas are already done,” “the American education system fails badly on STEM,” etc.), I think structural causes are major contributors to the great slowdown in medical progress. See my full discussion here!

The open secrets of what medicine actually helps

One of the things that most surprised me when I joined the medical field was how variable the average patient benefit is across therapies. Obviously, Alzheimer’s treatments are less helpful than syphilis treatments, but even within a single treatment category, therapies with similar costs, materials, and public reputations can differ enormously in actual efficacy.

What worries me is that differentiating these therapies–and therefore deciding which therapies to use and pay for–is not prioritized in medical practice, either by the public or within the medical establishment.

I wrote about this on my company’s blog; the post is simply a comment on the most surprising dichotomy I learned about: stenting (no benefit shown for most patients!!) versus clot retrieval during strokes (amazing benefits, including double the odds of a good neurological outcome). Amazingly, the former is a far more common procedure, while the latter is underprovided in rural areas and in most countries outside the US, EU, Japan, and Korea. Read more here: https://about.nested-knowledge.com/2020/01/27/not-all-minimally-invasive-procedures-are-created-equal/.
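
A quick note on that “double the odds” claim, since odds are often misread as probabilities: doubling the odds of a good outcome does not double its probability. A toy illustration with made-up numbers (not the actual trial data):

```python
# Illustration only: hypothetical baseline rate, not actual stroke-trial data.
def odds(p):
    return p / (1 - p)

def prob(o):
    return o / (1 + o)

p_without = 0.30                    # hypothetical: good outcome without clot retrieval
p_with = prob(2 * odds(p_without))  # "double the odds of a good neurological outcome"
print(f"{p_without:.0%} -> {p_with:.0%}")  # 30% -> 46%, not 60%
```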

There is no Bloomberg for medicine

When I began working in medical research, I was shocked to find that no one in the medical industry has actually collected and compared all of the clinical outcomes data that have been published. With Big Data in Healthcare such a major initiative, it was incomprehensible to me that the highest-value data–the data directly used to clear therapies, recommend them to the medical community, and assess their efficacy–were being managed in the following way:

  1. A physician completes a study, then spends up to a year writing it up and submitting it;
  2. A journal sits on the study for months, then publishes it (in some cases), without ensuring that the data it reports match those of similar studies;
  3. Oh, and by the way, the journal does not make the data available in a structured format!
  4. Then, if you want to see how that one study compares to related studies, you must either find a recent, comprehensive, on-point meta-analysis (a long shot, in my experience) or comb the literature and extract the data by hand.
  5. That’s it.

This strikes me as mismanagement of data that are relevant to life-changing healthcare decisions. Effectively, no one in the medical field has anything like what the financial industry has had for decades: the Bloomberg terminal, which presents comprehensive, continuously updated information pulled from centralized repositories. If we can do it for stocks, we can do it for medical studies–and in fact, that is what I am trying to do. I recently wrote an article on the topic for the Minneapolis-St Paul Business Journal, calling for the medical community to support a centralized, constantly updated, data-centric platform that would enable not only physicians but also insurers, policymakers, and even patients to examine the actual scientific consensus, and the data that support it, in a single interface.

Read the full article at https://www.bizjournals.com/twincities/news/2019/12/27/there-is-no-bloomberg-for-medicine.html!
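
To make “structured format” a little less abstract, here is a minimal sketch of the kind of record such a platform might store for each reported outcome; every field name here is my own illustration, not Nested Knowledge’s actual schema:

```python
# Hypothetical record for one outcome in one study arm -- illustrative only.
from dataclasses import dataclass

@dataclass
class OutcomeRecord:
    study_id: str            # e.g., a DOI or trial-registry number
    therapy: str             # intervention evaluated
    outcome: str             # standardized outcome name
    n_patients: int          # patients in the arm
    n_events: int            # patients achieving the outcome
    follow_up_months: float  # follow-up period

    @property
    def event_rate(self):
        return self.n_events / self.n_patients

# With records like this pooled across studies, comparing therapies becomes
# one query instead of a hand-built meta-analysis.
rec = OutcomeRecord("10.1000/example", "clot retrieval",
                    "good neurological outcome", 120, 55, 3.0)
print(f"{rec.event_rate:.0%}")  # -> 46%
```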