Game theory in the wild

Game theory is an amazing way to simulate reality, and I strongly recommend that any business leader educate herself on its underlying concepts. However, I have found that the way it is constructed in economics and political science papers has limited connection to the real world–apart from nuclear weapons strategy, of course.

If you are not a mathematician or economist, you don’t really have time to assign exact payoffs to outcomes or calculate an optimal strategy. Instead, you can either guess, or you can use the framework of game theory–but none of the math–to make rapid decisions that adhere to its principles, and thus avoid being a sucker (at least some of the time).

As Yogi Berra didn’t say, “In theory, there is no difference between practice and theory. In practice, there is.” As a daily practitioner of game theory, here are some of the assumptions I had to throw out to make it actually work:

  • Established/certain boundaries on utility: Lots of games bound utility (often from 0 to 1, or -1 to 1, etc., for each individual). Throw away those games: they privilege easier math over the representation of random, infinite realities, where outcomes are more uncertain and tend to be unbounded.
  • Equating participants: Similar to the above, most games give the same utility boundaries to all participants, when in reality they always vary. I honestly think that game theorists would model out the benefits of technology on the assumption that a Sumerian peasant in 3000 BC and an American member of the service economy in 2020 can have equivalent utility. That is dumb.
  • Unchanging calculations: In part because of the uncertainty and asymmetries mentioned above, no exact representation of a game sticks around–instead, the equation constantly shifts as participants change and utility boundaries move (up with new tech, down with new regs, etc.). That is why the math is subordinate to structure: if you are right about the participants and the pathways, and have an OK gut estimate of the payoff magnitudes, you can decide rapidly and then shift your equation as the world changes.
  • Minimal feedback/second-order effects: Some games have signal-response, but it is hard to abstract the fact that all decisions enter a complex milieu of interacting causes and effects where the direction of the arrows is hard to map. Since you can’t model them, just try to guess: what will the response to the game’s outcome be? Focus on feedback loops–they hold the secrets to unbounded long-term utilities.
  • The game ends: Obviously, since games are abstractions, it makes sense to tie them up nicely with one set of inputs and then a final set of outputs. In reality, there is really only one game, and each little representation is a snapshot of life. That means many games forget that the real goal is to stay in the game.

These examples–good rules of thumb for practitioners, certain to be subject to quibbling by any academic reader–remind me of how wrong even the history of game theory is. As with many oversights by historians of science, the attribution of game theory’s invention credits the first theoretician (John von Neumann, who was smart enough to both practice and theorize), not the first practitioner (probably lost to history, but certainly active by the 1600s: Pascal’s Wager lines up better with “game theory in the wild,” in that Pascal used infinite payoffs and actually did become religious). Practitioners, I would ignore the conventional history, the theory, the actual math, and the long papers. Focus on easily used principles and heuristics that capture uncertainty, unboundedness, and asymmetries. Some examples:

  • Principle: Prediction is hard. Don’t do it if you can help it.
  • Heuristic: Bounded vs. unbounded. Magnitude is easier to measure (or at least cap) than likelihood is. (See the sketch after this list.)

  • Principle: Every variable introduces more complexity and uncertainty.
  • Heuristic: Make decisions for one really good reason. If your best reason is not enough, don’t depend on accumulation.

  • Principle: One-time experiments don’t optimize.
  • Heuristic: If you actually want to find useful methods, iterate.

  • Principle: Anything that matters (power, utility, etc.) tends to be unequally distributed.
  • Heuristic: Ignore the middle. Either make one very rich person very happy (preferred) or make most people at least a little happier. Or pull a barbell strategy if you can.

  • The Academic Certainty Principle: Mere observation of reality by academics inevitably means they don’t get it. (Actually a riff on observer effects, not Heisenberg, but the name is catchier this way.)
  • Heuristic: In game theory as in all academic ideas, if you think an academic stumbled upon a good practice, try it–but assume you will need trial and error to get it right.

  • Principle: Since any action has costs, ‘infinite’ payoffs, in reality, come from dividing by zero.
  • The via negativa: Your base assumption should be inaction, followed by action to eliminate cost. Be very skeptical of “why not” arguments.
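
To make the bounded/unbounded heuristic concrete, here is a minimal sketch (all numbers invented for illustration): with a fat-tailed venture, sampling gives wildly unstable estimates of the likelihood-weighted payoff, but the downside magnitude is capped by construction.

```python
import random

def venture_payoff():
    # A crude fat-tailed venture: 99% of attempts lose one stake,
    # 1% pay off on a power-law scale. All numbers are made up.
    if random.random() < 0.99:
        return -1.0
    return 100 * random.paretovariate(1.1)

# The likelihood-weighted average is unstable even with many samples...
for trial in range(3):
    draws = [venture_payoff() for _ in range(10_000)]
    print(f"trial {trial}: estimated mean payoff = {sum(draws) / len(draws):.1f}")

# ...but the downside is capped by construction: no attempt loses more
# than one stake, no matter how wrong your probability estimate is.
```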

So, in summary, most specific game theories are broken because they privilege math (finite, tidy, linear) over practice (interconnected, guess-based, asymmetric). That does not mean you can’t use game theory in the wild; it just means that you should focus on structure over math, unbounded/infinite payoffs over solvable games, feedback loops over causal arrows, inaction over action, extremes over moderates, and rules of thumb over quibbles.

Good luck!

Why the US is behind in FinTech, in two charts

The US is frankly terrible at innovation in banking. When Kenya and its neighbors have seen faster adoption of mobile banking than the US–as they have since at least 2012–it is time to reconsider our approach.

Here is the problem: we made new ideas in banking de facto illegal. Especially since the 2008 financial crisis, regulatory bodies (especially the CFPB) have piled on a huge amount of potential liability that scares away any new entrant. Don’t believe me? Let’s look at the data:

[Chart: new bank creation in the US]

Notice anything about new bank creation in the US after 2008?

A possible explanation, in a “helpful resource” provided to banking regulators and lawyers for banks:

[Chart: US banking regulatory complexity]

The chart shows 8 federal agencies reporting to the FSOC, plus independent regulators relevant to fintech (OFAC/FinCEN). Also, the “helpful” chart notes state regulations merely as an addendum in a circle…probably because covering them would take 50 more, possibly complex and contradictory, charts.

So, my fellow citizens, don’t innovate in banking. No one else is–and they are probably right.

The Blind Entrepreneur

Entrepreneurs usually make decisions with incomplete information, in disciplines where we lack expertise, and where time is vital. How, then, can we be expected to make decisions that lead to our success, and how can other people judge our startups’ potential value? And even if there are heuristics for startup value, how can they cross fields?

The answer, to me, comes from a generalizable system for improvement and growth that has proven itself–the blind watchmaker of evolution. In this system, the crucial method by which genes propagate themselves is not predicting their environments, but promiscuity and opportunism in a random, dog-eat-dog world. By this, I mean that successful genes free-ride on or resonate with other genes that promote reproductive success (promiscuity) and select winning strategies by experimenting in the environment and letting reality determine which gene-pairings to try more often (opportunism). Strategies that are either robust or antifragile usually outperform fragile and deleterious strategies, and strategies that exist within an evolutionary framework enabling rapid testing, learning, mixing, and sharing (such as sexual reproduction or lateral gene transfer paired with fast generations) outperform those that do not (such as cloning), as shown by the Red Queen hypothesis.

OK, so startups are survival/reproductive vehicles and startup traits/methods are genes (or memes, in the Selfish Gene paradigm). With analogies, we should throw out what is different and keep what is useful, so what do we need from evolution?

First, one quick note: we can’t borrow the payout calculator exactly. Reproductive success is where a gene makes more of itself, but startups don’t make more of themselves. For startups, the best metric is probably money. Other than that, what adaptations are best to adopt? Or, in the evolutionary frame, what memes should we imbue in our survival vehicles?

Traits to borrow:

  • Short lives: long generations mean the time between trial and error is too long. Short projects, short-term goals, and concrete exits.
  • Laziness: energy efficiency is far more important than whatever is #5 on your priority list.
  • Optionality: when all things are equal, more choices = more chances at success.
  • Evolutionarily Stable Strategies: also called “don’t be a sucker.”
  • React, don’t plan: prediction is difficult or even impossible, but being quick to jump into the breach has the same outcome. Could also be called “prepare, but don’t predict.”
  • Small and many: big investments take a lot of energy and effectively become walking targets. Make small and many bets on try-outs and then feed those that get traction (see the sketch after this list). Note: this is also how to run a military!
  • Auftragstaktik: this should be obvious–central planning never works. Entrepreneurs should probably not make any more decisions than they have to.
  • Resonance: I used to call this “endogenous positive feedback loops,” but that doesn’t roll off the tongue. In short, pick traits that make your other traits more powerful–and even better if all of your central traits magnify your other actions.
  • Taking is better than inventing: It’s not a better startup if it’s all yours. It’s a better startup if you ruthlessly pick the best idea.
  • Pareto distributions (or really, power laws): Most things don’t really matter. Things that matter, matter a lot.
  • Finite downside, infinite upside: Taleb calls this “convexity.” Whenever presented with a choice that has one finite and one infinite potential, forget about predicting what will happen–focus on the impact’s upper bound in both directions. It goes without saying: avoid infinite downsides!
  • Don’t fall behind (debt): The economy is a Red Queen; anyone carrying anything heavy will continually fall behind. Debt is also the most likely way companies die.
  • Pay it forward to your future self: squirrels bury nuts; you should build generic resources as well.
  • Don’t change things: Intervening takes energy and hurts diversity.
  • Survive: You can’t win if you’re not in the game. More important than being successful is being not-dead.
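
A toy illustration of “small and many” and “finite downside” together, with made-up numbers rather than a model of any real venture: both strategies risk the same total bankroll on the same convex bet, but splitting it into many small tries makes ruin far less likely.

```python
import random

def convex_bet(stake):
    # A hypothetical convex bet: lose the stake 90% of the time,
    # win 20x the stake 10% of the time. Numbers are illustrative.
    return -stake if random.random() < 0.9 else 20 * stake

def survival_rate(stakes, bankroll=100.0, runs=10_000):
    survived = 0
    for _ in range(runs):
        wealth = bankroll
        for stake in stakes:
            if wealth <= 0:
                break  # ruin: out of the game, no more tries
            wealth += convex_bet(min(stake, wealth))
        if wealth > 0:
            survived += 1
    return survived / runs

# Same total exposure, very different odds of staying in the game:
print("one big bet:      ", survival_rate([100.0]))
print("twenty small bets:", survival_rate([5.0] * 20))
```

The point is structural, not numerical: the single big bet dies on one bad draw, while many small bets almost always survive long enough for a rare win to land.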

When following these guidelines, there are two other differences between entrepreneurs and genes. One, genes largely exist in an amoral state, whereas your business is vital to your own life and, if you picked a worthwhile idea, to society. Two, unlike evolution, you actually have goals and are trying to achieve something beyond replication, beyond even money. Therefore, you do not need to take your values from evolution. However, if you ignore its lessons, you close your eyes to reality and are truly blind.

Our “blind” entrepreneur, then, can still pick goals and construct what she sees as her utility. But to achieve the highest utility, once defined, she will create unknowable and unpredictable risk of her idea’s demise if she does not learn to grow the way that the blind watchmaker does.

Broken incentives in medical research

Last week, I sat down with Scott Johnson of the Device Alliance to discuss how medical research is communicated only through archaic and disorganized methods, and how the root of this is the “economy” of Impact Factor, citations, and tenure-seeking as opposed to an exercise in scientific communication.

We also discussed a vision of the future of medical publishing, where the basic method of communicating knowledge was no longer uploading a PDF but contributing structured data to a living, growing database.

You can listen here: https://www.devicealliance.org/medtech_radio_podcast/

As background, I recommend the recent work by Patrick Collison and Tyler Cowen on broken incentives in medical research funding (as opposed to publishing), as I think their research on funding shows that a great slow-down in medical innovation has resulted from systematic errors in organizing knowledge gathering. Mark Zuckerberg actually interviewed them about it here: https://conversationswithtyler.com/episodes/mark-zuckerberg-interviews-patrick-collison-and-tyler-cowen/.

Launching our COVID-19 visualization

I know everyone is buried in COVID-19 news, updates, and theories. To me, that makes it difficult to cut through the opinions and see the evidence that should actually guide physicians, policymakers, and the public.

The most important thing is the ability to find the answer to my research question easily and to know that this answer is reasonably complete and evidence-driven. That means getting organized access to the scientific literature. Many sources (including a National Library of Medicine database) present thousands of articles, but the organization is the piece that is missing.

That is why I launched StudyViz, a new product that enables physicians to build an updatable visualization of all studies related to a topic of interest. My physician collaborators then built just such a visual for COVID-19 research: a sunburst diagram that users can navigate to find their research question of interest.

[Figure: StudyViz sunburst diagram]

For instance, if you are interested in the impact of COVID-19 on pregnant patients, just go to “Subpopulations” and find “Pregnancy” (or neonates, if that is your concern). We nested the tags so that you can “drill down” on your question and so that related concepts sit close to each other. Then, to view the studies themselves, just click on one to see its abstract with the key information (patients, interventions, and outcomes) highlighted:

[Figure: abstract with key information highlighted]

This is based on a complex concept hierarchy built by our collaborators, one that constantly evolves as the literature does:

[Figure: concept hierarchy]

Even beyond that, we opened up our software to let any researchers who are interested build similar visuals on any disease state, as COVID-19 is not the only disease for which organizing and accessing the scientific literature is important!

We are seeking medical co-investigators–any physician interested in working with us can simply email contact@nested-knowledge.com or contact us through our website!

A History of Plagues

As COVID-19 continues to spread, fears and extraordinary predictions have also gone viral. Facing a new infectious threat, we do not yet know how the new traits of our worldwide societies, or of this novel coronavirus itself, will impact its spread. Though no two pandemics are equivalent, I thought it best to face this new threat armed with knowledge from past infectious episodes. The best inoculation against a plague of panic is to use evidence gained through billions of deaths, thousands of years, and a few vital breakthroughs to prepare our knowledge of today’s biological crises, social prognosis, and choices.

Below, I address three key questions: First, what precedents do we have for infections with catastrophic potential across societies? Second, what are the greatest killers and how do pandemics compare? Lastly, what are our greatest accomplishments in fighting infectious diseases?

As a foundation for understanding how threats like COVID-19 come about and how their hosts fight back, I recommend reading The Red Queen, concerning the evolutionary impact and mechanisms of host-disease competition, and listening to Sam Harris’ “The Plague Years” podcast with Matt McCarthy from August 2019, which predated COVID-19 but had a strangely prophetic discussion of in-hospital strategies to mitigate drug resistance and their direct relation to evolutionary competition.

  • The Biggest Killers:

Infectious diseases plagued humanity throughout prehistory and history, with a dramatic decrease in the number of infectious disease deaths coming in the past 200 years. In 1900, the leading killers were (1) Influenza, (2) Tuberculosis, and (3) Intestinal diseases, whereas now we die from (1) Heart disease, (2) Cancer, and (3) Stroke, all chronic conditions. This graph shows not that humans have vanquished infectious disease as a threat, but that in the never-ending war of evolutionary one-upmanship, we have won battles consistently from 1920 forward. When paired with Jonathan Haidt’s Most Important Graph in the World, this vindicates humanity’s methods of scientific and economic progress toward human flourishing.

[Chart: death rates by leading cause over time]

However, if the CDC had earlier data, it would show a huge range of diseases that dwarf wars and famines and dictators as causes of death in the premodern world. If we look to the history of plagues, we are really looking at the history of humanity’s greatest killers.

The sources on the history of pandemics are astonishingly sparse and non-comprehensive. I created the following graphs only by combining evidence and estimates from the WHO, CDC, Wikipedia, Our World in Data, Visual Capitalist, and others (lowest estimates shown where ranges were presented), covering both major historic pandemics and ongoing communicable disease threats. This is not a complete dataset, and I will continue to add to it, but it shows representative death counts from major infectious disease episodes, as well as the death rate per year based on world population estimates. See the end of this post for the full underlying data. First, the top 12 “plagues” in history:

[Chart: the top 12 plagues in history by death toll]


Note: blue = min, orange = max across the sources I examined. For ongoing diseases with year-by-year WHO evidence, like tuberculosis, measles, and cholera, I grouped mortality into 5-year spans (except AIDS, which lacks good estimates from the 1980s-90s, so I reported total estimated deaths).

Now, let’s look at the plagues that were lowest on my list (numbers 55-66). Again, my list is not comprehensive, but this should provide context for COVID-19:

[Chart: plagues 55-66 on the list, including COVID-19]

As we can see, COVID-19’s 11,400 deaths recently passed Ebola’s to take 61st place (out of 66) on our list of plagues. Note again that several ongoing diseases were recorded in 5-year increments, and COVID-19 still comes in under the death rates for cholera. Even more notably, it has 0.015% as many victims as the plague of the 14th century.

  • In Context of Current Infectious Diseases:

For recent/ongoing diseases, it is easier to compare year-by-year data. Adding UNAIDS to our sources, we found the following death rates for some of the leading infectious causes of death. Again, this is not comprehensive, but it helps put COVID-19 (the small red dot, so far in the first 3 months of 2020) in context:

[Chart: deaths by year for leading infectious diseases]

Note: darker segments of lines are my own estimates; full data at the bottom of the post. I did not include influenza due to the lack of good year-by-year sources, but a Lancet article estimated 291,000-645,000 influenza deaths per year based on data from 1999-2015.

None of this is to say that COVID-19 is not a major threat to human health globally–it is, and precautions could save lives. However, it should show us that there are major threats to human health globally all the time, and we must continue to fight them. The trendlines tend to be going in the right direction, but our war for survival has many foes and will see more emerge in the future, and we should expend our resources in fighting them rationally, based on the benefits to human health, not panic or headlines.

  • The Eradication List:

As we think about how to address COVID-19, we should keep in mind that this fight against infectious disease builds upon work so amazing that most internet junkies approach new infectious diseases with fear of the unknown rather than tired acceptance that most humans succumb to them. That attitude is a recent innovation in the human experience, and the strategies used to fight other diseases can inform our work now to reduce human suffering.

While influenzas may be impossible to eradicate (in part due to an evolved strategy of constantly changing antigens), I wanted to direct everyone to an ever-growing monument to human achievement, the Eradication List. While humans have eradicated only a few infectious diseases, the amazing thing is that we can discuss which diseases may in fact disappear as threats through the work of scientists.

On that happy note, I leave you here. More History of Plagues to come, in Volume 2: Vectors, Vaccines, and Virulence!

Disease Start Year End Year Death Toll (low) Death Toll (high) Deaths per 100,000 people per year (global)
Antonine Plague 165 180 5,000,000 5,000,000 164.5
Plague of Justinian 541 542 25,000,000 100,000,000 6,250.0
Japanese Smallpox Epidemic 735 737 1,000,000 1,000,000 158.7
Bubonic Plague 1347 1351 75,000,000 200,000,000 4,166.7
Smallpox (Central and South America) 1520 1591 56,000,000 56,000,000 172.8
Cocoliztli (Mexico) 1545 1545 12,000,000 15,000,000 2,666.7
Cocoliztli resurgence (Mexico) 1576 1576 2,000,000 2,000,000 444.4
17th Century Plagues 1600 1699 3,000,000 3,000,000 6.0
18th Century Plagues 1700 1799 600,000 600,000 1.0
New World Measles 1700 1799 2,000,000 2,000,000 3.3
Smallpox (North America) 1763 1782 400,000 500,000 2.6
Cholera Pandemic (India, 1817-60) 1817 1860 15,000,000 15,000,000 34.1
Cholera Pandemic (International, 1824-37) 1824 1837 305,000 305,000 2.2
Great Plains Smallpox 1837 1837 17,200 17,200 1.7
Cholera Pandemic (International, 1846-60) 1846 1860 1,488,000 1,488,000 8.3
Hawaiian Plagues 1848 1849 40,000 40,000 1.7
Yellow Fever 1850 1899 100,000 150,000 0.2
The Third Plague (Bubonic) 1855 1855 12,000,000 12,000,000 1,000.0
Cholera Pandemic (International, 1863-75) 1863 1875 170,000 170,000 1.1
Indian Smallpox 1868 1907 4,700,000 4,700,000 9.8
Franco-Prussian Smallpox 1870 1875 500,000 500,000 6.9
Cholera Pandemic (International, 1881-96) 1881 1896 846,000 846,000 4.4
Russian Flu 1889 1890 1,000,000 1,000,000 41.7
Cholera Pandemic (India and Russia) 1899 1923 1,300,000 1,300,000 3.3
Cholera Pandemic (Philippines) 1902 1904 200,000 200,000 4.2
Spanish Flu 1918 1919 40,000,000 100,000,000 1,250.0
Cholera (International, 1950-54) 1950 1954 316,201 316,201 2.4
Cholera (International, 1955-59) 1955 1959 186,055 186,055 1.3
Asian Flu 1957 1958 1,100,000 1,100,000 19.1
Cholera (International, 1960-64) 1960 1964 110,449 110,449 0.7
Cholera (International, 1965-69) 1965 1969 22,244 22,244 0.1
Hong Kong Flu 1968 1970 1,000,000 1,000,000 9.4
Cholera (International, 1970-75) 1970 1974 62,053 62,053 0.3
Cholera (International, 1975-79) 1975 1979 20,038 20,038 0.1
Cholera (International, 1980-84) 1980 1984 12,714 12,714 0.1
AIDS 1981 2020 25,000,000 35,000,000 13.8
Measles (International, 1985) 1985 1989 4,800,000 4,800,000 19.7
Cholera (International, 1985-89) 1985 1989 15,655 15,655 0.1
Measles (International, 1990-94) 1990 1994 2,900,000 2,900,000 10.9
Cholera (International, 1990-94) 1990 1994 47,829 47,829 0.2
Malaria (International, 1990-94) 1990 1994 3,549,921 3,549,921 13.3
Measles (International, 1995-99) 1995 1999 2,400,000 2,400,000 8.4
Cholera (International, 1995-99) 1995 1999 37,887 37,887 0.1
Malaria (International, 1995-99) 1995 1999 3,987,145 3,987,145 13.9
Measles (International, 2000-04) 2000 2004 2,300,000 2,300,000 7.5
Malaria (International, 2000-04) 2000 2004 4,516,664 4,516,664 14.7
Tuberculosis (International, 2000-04) 2000 2004 7,890,000 8,890,000 25.7
Cholera (International, 2000-04) 2000 2004 16,969 16,969 0.1
SARS 2002 2003 770 770 0.0
Measles (International, 2005-09) 2005 2009 1,300,000 1,300,000 4.0
Malaria (International, 2005-09) 2005 2009 4,438,106 4,438,106 13.6
Tuberculosis (International, 2005-09) 2005 2009 7,210,000 8,010,000 22.0
Cholera (International, 2005-09) 2005 2009 22,694 22,694 0.1
Swine Flu 2009 2010 200,000 500,000 1.5
Measles (International, 2010-14) 2010 2014 700,000 700,000 2.0
Malaria (International, 2010-14) 2010 2014 3,674,781 3,674,781 10.6
Tuberculosis (International, 2010-14) 2010 2014 6,480,000 7,250,000 18.6
Cholera (International, 2010-14) 2010 2014 22,691 22,691 0.1
MERS 2012 2020 850 850 0.0
Ebola 2014 2016 11,300 11,300 0.1
Malaria (International, 2015-17) 2015 2017 1,907,872 1,907,872 8.6
Tuberculosis (International, 2015-18) 2015 2018 4,800,000 5,440,000 16.3
Cholera (International, 2015-16) 2015 2016 3,724 3,724 0.0
Measles (International, 2019) 2019 2019 140,000 140,000 1.8
COVID-19 2019 2020 11,400 11,400 0.1
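
For anyone who wants to check or extend the rate column above, the arithmetic is simple; a minimal sketch follows, where the world-population figure for each era is my own rough assumption backed out from common historical estimates.

```python
def deaths_per_100k_per_year(deaths, start_year, end_year, world_pop):
    # Inclusive year count; single-year outbreaks count as one year.
    years = end_year - start_year + 1
    return deaths / years / world_pop * 100_000

# Example: the Antonine Plague row, assuming a world population of
# roughly 190 million in the 2nd century (a back-of-envelope figure).
print(round(deaths_per_100k_per_year(5_000_000, 165, 180, 190_000_000), 1))
# -> 164.5, matching the table above
```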


(In the table below, a dash marks a year with no estimate in my dataset.)

Year Malaria Cholera Measles Tuberculosis Meningitis HIV/AIDS COVID-19
1990 672,518 2,487 670,000 – 1,903 310,000 –
1991 692,990 19,302 550,000 – 1,777 360,000 –
1992 711,535 8,214 700,000 – 2,482 440,000 –
1993 729,735 6,761 540,000 – 1,986 540,000 –
1994 743,143 10,750 540,000 – 3,335 620,000 –
1995 761,617 5,045 400,000 – 4,787 720,000 –
1996 777,012 6,418 510,000 – 3,325 870,000 –
1997 797,091 6,371 420,000 – 5,254 1,060,000 –
1998 816,733 10,832 560,000 – 4,929 1,210,000 –
1999 834,692 9,221 550,000 – 2,705 1,390,000 –
2000 851,785 5,269 555,000 1,700,000 4,298 1,540,000 –
2001 885,057 2,897 550,000 1,680,000 6,398 1,680,000 –
2002 911,230 4,564 415,000 1,710,000 6,122 1,820,000 –
2003 934,048 1,894 490,000 1,670,000 7,441 1,965,000 –
2004 934,544 2,345 370,000 1,610,000 6,428 2,003,000 –
2005 927,109 2,272 375,000 1,590,000 6,671 2,000,000 –
2006 909,899 6,300 240,000 1,550,000 4,720 1,880,000 –
2007 895,528 4,033 170,000 1,520,000 7,028 1,740,000 –
2008 874,087 5,143 180,000 1,480,000 4,363 1,630,000 –
2009 831,483 4,946 190,000 1,450,000 3,187 1,530,000 –
2010 788,442 7,543 170,000 1,420,000 2,198 1,460,000 –
2011 755,544 7,781 200,000 1,400,000 3,726 1,400,000 –
2012 725,676 3,034 150,000 1,370,000 3,926 1,340,000 –
2013 710,114 2,102 160,000 1,350,000 3,453 1,290,000 –
2014 695,005 2,231 120,000 1,340,000 2,992 1,240,000 –
2015 662,164 1,304 150,000 1,310,000 – 1,190,000 –
2016 625,883 2,420 90,000 1,290,000 – 1,170,000 –
2017 619,825 – 100,000 1,270,000 – 1,150,000 –
2018 – – – 1,240,000 – – –
2019 – – – – – – –
2020 – – – – – – 16,514

Broken Incentives in Medical Innovation

I recently listened to Mark Zuckerberg interviewing Tyler Cowen and Patrick Collison concerning their thesis that the process of using scientific research to advance major development goals (e.g., extending the average human lifespan) has stagnated. It is a fascinating discussion that fundamentally questions the practice of scientific research as it is currently conducted.

Their conversation also made me consider more deeply the incentives in my industry, medical R&D, that have shaped the practices Cowen and Collison find so problematic. While there are many proposed reasons for the difficulty of maintaining a breakneck pace of technological progress (“all the easy ideas are already done,” “the American education system fails badly on STEM,” etc.), I think that structural causes are major contributors to the great slowdown in medical progress. See my full discussion here!

The open secrets of what medicine actually helps

One of the things that I was most surprised by when I joined the medical field was how variable the average patient benefit was for different therapies. Obviously, Alzheimer’s treatments are less helpful than syphilis ones, but even within treatment categories, there are huge ranges in actual efficacy for treatments with similar cost, materials, and public conception.

What worries me is that actually differentiating these therapies–and therefore deciding which therapies, ultimately, to use and pay for–is not prioritized in medical practice, either by the public or within the medical establishment.

I wrote about this on my company’s blog, framed as a comment on the most surprising dichotomy I learned about: stenting (no benefit shown for most patients!!) vs. clot retrieval during strokes (amazing benefits, including double the odds of a good neurological outcome). Amazingly, the former is a far more common procedure, and the latter is underprovided in rural areas and in most countries outside of the US, EU, Japan, and Korea. Read more here: https://about.nested-knowledge.com/2020/01/27/not-all-minimally-invasive-procedures-are-created-equal/.

There is no Bloomberg for medicine

When I began working in medical research, I was shocked to find that no one in the medical industry has actually collected and compared all of the clinical outcomes data that has been published. With Big Data in Healthcare as such a major initiative, it was incomprehensible to me that the highest-value data–the data that is directly used to clear therapies, recommend them to the medical community, and assess their efficacy–were being managed in the following way:

  1. Physician completes study, and then spends up to a year writing it up and submitting it,
  2. Journal sits on the study for months, then publishes (in some cases), but without ensuring that it matches similar studies in the data it reports.
  3. Oh, by the way, the journal does not make the data available in a structured format!
  4. Then, if you want to see how that one study compares to related studies, you have to either find a recent, comprehensive, on-point meta-analysis (which is a very low chance in my experience), or comb the literature and extract the data by hand.
  5. That’s it.

This strikes me as mismanagement of data that are relevant to life-changing healthcare decisions. Effectively, no one in the medical field has anything like what the financial industry has had for decades–the Bloomberg terminal, which presents comprehensive, continuously updated information by pulling data from centralized repositories. If we can do it for stocks, we can do it for medical studies, and in fact that is what I am trying to do. I recently wrote an article on the topic for the Minneapolis-St. Paul Business Journal, calling for the medical community to support a centralized, constantly updated, data-centric platform to enable not only physicians but also insurers, policymakers, and even patients to examine the actual scientific consensus, and the data that support it, in a single interface.

Read the full article at https://www.bizjournals.com/twincities/news/2019/12/27/there-is-no-bloomberg-for-medicine.html!

Changing the way doctors see data

Over the past four years, my brother and I have grown a business that helps doctors publish data-driven articles, from just the two of us to over 30 experienced researchers. However, along the way, we noticed that data management in medical publication was decades behind other fields–in fact, the vital clinical outcomes from major trials are generally published as singular PDFs with no structured data and are compared to existing studies only in nonsystematic, nonupdatable publications. Effectively, medicine has no central method for sharing or comparing patient outcomes across therapies, and I think that it is our responsibility as researchers to present these data to the medical community.

Based on our internal estimates, there are >3 million published clinical outcomes studies (with over 200 million individual datapoints) that need to be abstracted, structured, and compared through a central database. We recognized that this is a monumental task, and we have therefore focused on automating and scaling research processes that have been, until today, entirely manual. Only after a year of intensive work have we found a path toward creating a central database for all published patient outcomes, and we are excited to debut our technology publicly!

Keith recently presented our venture at a Mayo Clinic-hosted event, Walleye Tank (a Shark Tank-style competition of medical ventures), and I think that it is an excellent fast-paced introduction to a complex issue. Thanks also to the Mayo Clinic researchers for their interesting questions! You can see his two-minute presentation and the Q&A here. We would love to get more questions from the economic/data science/medical communities, and will continue putting our ideas out there for feedback!

My Startup Experience

Over the past 4 years, I have had a huge transition in my life–from history student to law student to serial medical entrepreneur. My academic work taught me a great deal about the value we can create if we find an unmet need in the world, form an idea that fills that need, and then use technology, personal networks, and hard work to make it real. While startups obviously tackle any new problem under the sun, to me they are the mechanism to bring about a positive change–and, along the way, to gather the resources to scale that change across the globe.

I am still very far from reaching that goal, but my family and cofounders have several visions of how to improve not only how patients are treated but also how we build the knowledge base that physicians, patients, and researchers can use to inform care and innovation. My brother/cofounder and I were recently on an entrepreneurship-focused podcast, and we got the chance to discuss our experience, our vision, and our companies. I hope this can be a springboard for more discussions about how companies are a unique agent of advancing human flourishing, and about the history and philosophy of entrepreneurship, technology, and knowledge.

You can listen here: http://rochesterrising.org/podcast/episode-151-talking-medical-startups-with-keith-and-kevin-kallmes. Heartfelt thanks to Amanda Leightner and Rochester Rising for a great conversation!

Thank you!

Kevin Kallmes

The Myth of the Nazi War Machine

Nazism and fascism, in the popular imagination, are associated with evil, immoral, inhumane treatment of conquered groups and their own subjects alike. These evil actions loom even larger because the thought of an entire society dedicated to military industry, extending its reach across and beyond Europe, inspires ghastly fears not only of evil intent but also of astonishing military might: the technological wonder of the V2 rocket, the deadly and ever-present U-boat threat, and the German “Royal Tiger” tank, so well armored that Sherman-fired shells literally bounced off of it. This vision of the Nazis as conquering through technological and industrial superiority is not just a mistake of modern historians; it is based on the Allies’ overestimation of their foes and on the disastrously overconfident messaging of the Germans, Italians, and Japanese that their technology, industrial power, and elan gave them even a chance of victory. Hitler’s miscalculation in extrapolating from his successes in Poland and France to assuming his alliance could overwhelm the combined defenses of over 1.5 billion people represents the most astonishing delusion in military history.

The inspiration for this comes from Victor Davis Hanson’s fascinating economic and industrial history, The Second World Wars. One of his major arguments is that the Axis leaders lost because their commitment to their ideology became a fantasy of abilities that directly contradicted reality–both their own abilities and their opponents’. I heartily recommend the book and this shorter interview where he lays out its central concepts. My major takeaway was that this fantasy has spread beyond the minds of Hitler, Tojo, and Mussolini: the vision of a vast industrial empire looming over the world is now imprinted on our memory of World War II. I think it is past time that we recognize Nazism as not only immoral but also incompetent. Below, I share some astonishing statistics that show beyond a shadow of a doubt that the modern concept of Nazi military might is a myth.

  1. The Allies rode in cars, the Germans rode horses. In 1939, the only transportation available to 85% of German infantry other than walking was horses. By 1945…it was still 85%. In total, the US and UK produced almost 4 million general-use vehicles, compared to 160,000 German vehicles. That is a 25-fold advantage. The Allies also had 1 million infantry-supporting artillery compared to less than 100,000 for all of the Axis.
  2. Where were the supplies? The Allies had 46 million tonnes of merchant shipping vessels to the Axis’ 5 million, five times as much aluminum (key for engines and planes), and by 1943 had cut off all German access to rare metals such as tungsten, one of the key metals used in munitions, manufacturing, and electronics. The US supplied Britain and the USSR through the Lend-Lease Act with almost $700 billion (inflation-adjusted 2019 dollars) in supplies throughout the war, which is roughly double the entire German annual GDP in 1939.
  3. The Allies swam to victory on a sea of oil. Though Rommel came within a battle of accessing the British Middle-Eastern oil fields, the Axis still had astonishingly little fuel (which they needed to power their King Tiger, which drank a gallon of gas every 700 yards, the vast Luftwaffe that put over 130,000 planes into action, and their gigantic battleship Bismarck). The Axis as a whole used 66 million metric tonnes of oil, while the Allies used a billion. A 15X advantage.
  4. The panzers were neither numerous nor technologically superior. The Mark 1 and 2 panzers that conquered France were actually less numerous and less technologically advanced than France’s tanks. While blitzkrieg and elan overwhelmed the French, even the Mark 4–the most commonly used panzer in the late war–underperformed Shermans in infantry support and reliability and was considered inferior to the Soviet T34 even by Hitler himself. Even including the outmoded Czech tanks repurposed by the Germans, they fielded only 67,000 tanks on all fronts to face 270,000 Allied tanks (with no help from Italy, with a pitiful 3,300 tanks, while Japan largely ignored mobile land armor and created only 4,500 tanks). The environment of ideological zeal in Germany prevented a military researcher from telling Hitler the true tank numbers of the Soviets, as Hitler himself acknowledged later in the war by repeating that if he had known the true number of T34s he faced, he would never have invaded. The US and USSR deployed massive numbers of upgraded Shermans and workhorse T34s, while Germany sank huge investments into specialized and scary duds like the Royal Tiger, which took 300,000 man-hours to build–ten times as much as a Sherman. Only 1,300 Royal Tigers were ever produced, and their 70 tonnes of weight, constant mechanical issues, and cost undercut their supremacy in tank-on-tank duels. The US and Britain used precision bombing to inflict major tank losses on Germany, and while German tanks outfought Soviet tanks roughly 4:1, by 1945 the Soviets still had 25,000 tanks against the Germans’ 6,000.
  5. Collaboration helps both tech and strategy. The Allies worked together–the Sherman’s underpowered 75mm (corrected) could be upgraded with a British gun because of interoperability of parts, and the US and Brits delivered over 12,000 tanks and 18,000 planes to the Soviets under Lend-Lease; the Germans did not even have replaceable parts for their own tanks, and the Germans never helped their Italian allies (who had lost a land invasion even to the collapsing French) develop industrial capabilities. Bletchley Park gave advance warning to US merchant convoys, but the Italians and Japanese found out that Hitler had invaded the USSR only after troops had crossed into Ukraine.
  6. Fascism is not industrially sound. Even though the Nazis put an astonishing 75% of their GDP toward the military by 1944, and despite taking on unsustainable debt to sustain their production, their GDP in 1939 was $384 billion, roughly equal to the Soviets’ and $100 billion less than the UK and France combined. By the end of the war, this had fallen to $310 billion, compared to a whopping $1.4 trillion US GDP. However, even these numbers do not fully represent how non-mechanized, non-scalable, and non-industrial Germany was even under military dictatorship. While German science and engineering had been pre-eminent before World War I, the central control and obsession with infeasible, custom projects before and during the war meant that the Germans could mobilize a lower percentage of their population for wartime production than their opponents, not to mention that their GDP per capita was half that of the US; and yet the Axis still took on opponents with productive populations five times their size.
  7. The V2 was a terrible investment. After losing the Battle of Britain (largely because of inferior training, radar, and plane production), the Nazis tried to use ballistic missiles to bomb the Brits into submission. The less technologically sophisticated V1 delivered a respectable 1,000 kg of explosives, but despite launching over 10,000, by mid-1944 the British countermeasures stopped 80% of these, and many misfired, failed to explode, or had guidance system malfunctions. The V2 was more sophisticated, but was never mass produced: only 3,000 were launched, and more Nazis were killed as part of the development of the rocket than Brits by their launch. The V1 and V2 programs combined cost 50% more than the Manhattan project, and even compared to the US’s most expensive bombing program (developing the B29), the cost-per-explosives-delivered was thirty times higher for the V2.
  8. The Luftwaffe was completely overmatched even by the RAF alone. Before the Battle of Britain, the Luftwaffe (2,500 planes) outnumbered the RAF (about 1,500), and the RAF was flying more outdated Hurricanes than newer Spitfires; however, the Brits scaled up training and production and even put novel innovations into their manufacturing within the 3 months of the battle.
  9. The Germans underestimated the scalability of their opponents’ production. By the end of the war, the Brits had manufactured 177,000 planes, 44,000 more than Germany. Crucially, though they started the war with far fewer experienced pilots, the Brits used this production advantage to train their pilots far better (in fact, the Brits had over 40,000 training aircraft). The US was similarly underprepared in terms of both aircraft production and training, but within a year had increased production from one B-24 every two weeks in 1940 to one every two hours in 1942. The US manufactured almost 300,000 planes by the end of the war, with far superior bombers (the fighter-resistant B-17 and the giant, sophisticated B-29 Super Fortress). However, German air force personnel still needed to be more numerous than America’s or Britain’s because of the lack of mechanization.
  10. The Germans could not replace their pilots. By early 1945, the Germans were losing 30% of their pilots every month, even after giving up on bombing campaigns because of high pilot and plane attrition. They never scaled training and were sending completely green pilots against well-trained Allied opponents who had numerical, technological, and experience superiority by 1943 and air supremacy by 1944.
  11. The Germans did not deploy new air technologies to their advantage. While the jet engine and V2 rockets would revolutionize air power after the war, they did not impact the outcome of the war except to drain German R&D. Germany also failed to develop a functional heavy bomber, did not update their fighters’ technology during the war, never fully or effectively deployed radar, and never matched the Allies’ anti-aircraft defenses.
  12. The Allies could win through strategic bombing, but the reverse was not true. Both sides targeted industry and killed civilians en masse in strategic campaigns, but Germany never had the ability to strategically reduce their enemies’ production. Though Germany dropped 760,000 tonnes of ordnance on the Soviets and systematically destroyed production west of the Urals, the Soviets moved their industry to the East and continued outproducing their opponents with respect to tanks, vehicles, artillery, machine guns, and munitions. The Germans never produced a functional 4-engine bomber, so they could not use strategic bombing to undercut industry beyond this; the Blitz killed 40,000 civilians and destroyed over a million homes, but never developed into a threat against British military production. This also cost the Luftwaffe over 2,200 planes and 3,500 of their best pilots. However, nearly every major German and Japanese city was reduced by an unbelievable 3.5 million tonnes of ordnance dropped by the Allies, which killed over 700,000 German and Japanese civilians and destroyed the majority of both empires’ military production.
  13. The U-boat campaign became a colossal failure by 1943. Though the unrestricted submarine warfare of 1940-41 was sinking enough merchant vessels to truly threaten British supplies, Allied countermeasures–code-cracking, sonar, depth charges, Hedgehogs, Squids, and the use of surface aircraft to screen fleets–systematically destroyed the U-boats, which had losses of over 80% by the end of the war. In fact, the Germans barely managed to exceed the total merchant losses they had inflicted in World War I, and in May-June 1943 they sank only two ships for every U-boat lost, ending the Battle of the Atlantic in just two disastrous months. The US was producing ships and supplies so quickly and in such vast quantities that the U-boats needed to sink 700,000 tonnes of shipping every month just to keep up with this production, a figure they reached in only one month (November 1942); monthly sinkings fell to less than a tenth of that by early 1943.
  14. The US actually waged a successful submarine campaign. Unlike the Germans, the US completely neutered the Japanese merchant fleet using submarines, which also inflicted over 55% of total Japanese fleet losses during the war, with minimal losses of submarine crews. Using just 235 submarines, the US sank 1,000 ships, compared to roughly 2,000 sunk by Germany (which cost almost 800 U-boat losses).
  15. Naval war had changed, and only the US responded. After the sinking of the HMS Prince of Wales near Singapore, all nations should have recognized that naval air forces were the new way to rule the waves. And yet, the Germans only ever built a single aircraft carrier despite their need to support operations in North Africa, and built the Tirpitz, a gigantic Bismarck-class battleship (that cost as much as 20 submarines), which barely participated in any offensive action before being destroyed by successive air raids. Germany never assembled a fleet capable of actually invading Britain, so even if they had won the Battle of Britain, there were no serious plans to actually conquer the island. Japan recognized the importance of aircraft carriers, and built 18, but the US vastly overmatched them with at least 100 (many of them more efficient light carriers), and Japan failed to predict how naval air supremacy would effectively cut them off from their empire and enable systematic destruction of their homeland without a single US landing on Japanese home soil.
  16. The Nazis forgot blitzkrieg. The rapid advances of Germany in 1939 are largely attributable to a decentralized command structure that enabled leaders at the front to respond flexibly based on mission-driven instructions rather than bureaucracy. However, as early as Dunkirk (when Hitler himself held back his tank forces out of fear), the command structure had already shifted toward top-down bureaucracy that drummed out gifted commanders and made disastrous blunders through a plodding focus on besieging Sevastopol and Stalingrad rather than chasing the reeling Soviets. Later, the inflexibility of defenses and “no-retreat” commands that allowed the encirclement of key German forces replayed in reverse the inflexibility of the Maginot Line and Stalin’s early mistakes, showing that the fascist system prevented learning from one’s enemy and even robbed the Germans of their own institutional advantages over the course of the war.
  17. Even the elan was illusory. Both Germany and Japan knew they were numerically inferior and depended on military tradition and zeal to overcome this. While German armies generally went 1:1 or better (especially in 1941 against the Soviets, when they killed or captured 4 million badly-led, outdated Soviet infantry), even the US–fighting across an ocean, with green infantry and on the offensive against the dug-in Germans–matched the Germans in commitment to war and inflicted casualties at 1:1. At the darkest hour, alone against the entire continent and while losing their important Pacific bases one by one, the Brits threw themselves into saving themselves and the world from fascists; only secret police and brute force kept the Nazis afloat once the tide had turned. The German high command was neutered by the need for secrecy and the systematic replacement of talented generals with loyal idiots, and the many mutinies, surrenders, and assassination attempts by Nazi leaders show that the illusory unity of fascism was in fact weaker under pressure than the commitment and cooperation of democratic systems.
  18. The Nazis never actually had plans that could win an existential war. Blitzkrieg scored some successes against the underprepared Poles and demoralized French, but these major regional victories were fundamentally of a different character than the conflicts the Nazis proceeded to start. While the Germans did take over a million square miles from the Soviets while destroying a 4-million-strong army, the industry was eventually transferred beyond the Urals, and the Soviets replenished their army with a further 30 million men over the following 4 years. But most of all, even if Hitler had somehow achieved what Napoleon himself could not, neither he nor Tojo had any ability to attack Detroit, so an implacable, distant foe was able to rain down destruction without ever facing a threat on home soil. The Nazis simply did not have the technology, money, or even the plans to conquer their most industrially powerful opponent, and perhaps the greatest tragedy of the entire war is that 60 million people died to prove something that was obvious from the start.

Overall, the Nazis failed to recognize how air and naval air superiority would impact the war effort, still believed that infantry zeal could overcome technological superiority, could not keep pace with the scale of the Allies’ industry or the speed of their technological advances, spent inefficiently on R&D duds, never solved crucial resource issues, and sacrificed millions of their own subjects in no-retreat disasters. Fooled by their early success, delusions of grandeur, and belief in their own propaganda, Hitler and his collaborators not only instituted a morally repugnant regime but destroyed themselves. Fascism is a scary ideology that promises great power for great personal sacrifice, but while the sacrifice was real, the power was illusory: as a system, it actually underperformed democracy technologically, strategically, industrially, and militarily in nearly every important category. Hopefully, this diametrical failure is evidence enough for even those who are morally open to fascism to discard it as simply unworkable. And maybe, if we dispel the myth of Nazi industry, we can head off any future experiments in fascism and give due recognition to the awe-inspiring productivity of systems that recognize the value of liberty.

This is in no way exhaustive, and in the interest of space I have not included the analogous Italian and Japanese military delusions and industrial shortcomings in World War II. I hope that this shortlist of facts inspires you to learn more and tell posterity that fascism is not only evil but delusional and incompetent.

All facts taken from The Second World Wars, Wikipedia, or general internet trawling.

Thoughts on Time from a College Library

Note: This was written by my brother Keith, and he did not originally post it online but sent it to our family members. For being a younger brother, he brings a hell of a lot of wisdom to the table, and I think this thought-provoking epistle deserves to be shared more widely. I am publishing it here, with permission:

From Keith:

I learn a great deal from my family.  The facts, figures, and articles that commonly result from discussing and arguing with each other are a reward in and of themselves.  As might be expected, many of these experiences and facts are soon forgotten, making way for new debates.  Once in a while, however, when discussing a topic, we–or I–stumble upon an insight which radically changes, clarifies, or re-enforces my understanding.

In recent months, I had two routine, incidental, and unrelated conversations, one with my brother, and the other with my sister.  The conversation with my sister did not start during some contentious economic debate, but when we were eating dinner together.  Offhand, my sister said to me:  “Keith, I have really come to appreciate the ideas from your econ classes you told me about, like opportunity cost, especially the opportunity cost of time spent on one task being a loss of all other possible actions.  When I applied those ideas to my everyday life, I saw a marked improvement, because I had become more efficient, simply from valuing my time appropriately.”  We often complain that few people these days recognize that econ is not a theory of how society works but a demonstration of how math can represent human reality at any level.  This is one case where there are real, personal benefits from understanding the math of a limited lifespan.

My second recent conversation of note did not concern this day and age, in fact, it concerned the ideas of a wealthy 2000-year-old Roman by the name of Seneca.  My brother had recently been translating his Epistulae morales ad Lucilium (literally “Moral letters to Lucilius” in Latin, courtesy of Wikipedia), and had stumbled upon Roman intellectual gold.  Any attempt of mine to summarize the ideas in the letter would be less than adequate, so I shall copy it here.  I know that it is long, and rather Latin-ish, but I would encourage anyone to take the time to read it, if only because reading it will pay your time back, with interest:

Greetings from Seneca to his friend Lucilius.

Continue to act in the way you described, my dear Lucilius: set yourself free for your own sake; gather and save your time, which till lately has been forced from you, or stolen away, or has merely slipped from your hands. Make yourself believe the truth of my words, that certain moments are torn from us, that some are gently removed, and that others glide beyond our reach. The most disgraceful kind of loss, however, is that due to carelessness. Furthermore, if you will pay close heed to the problem of lost time, you will find that the largest portion of our life passes while we are doing ill, a goodly share while we are doing nothing, and the whole while we are doing that which is not to the purpose. What man can you show me who places any value on his time, who reckons the worth of each day, who understands that he is dying daily? For we are mistaken when we look forward to death; the major portion of death has already passed. Whatever years lie behind us are in death’s hands.

Therefore, Lucilius, do as you write me that you are doing: hold every hour in your grasp. Lay hold of today’s task, and you will not need to depend so much upon to-morrow’s. While we are postponing, life speeds by. Nothing, Lucilius, is ours, except time. We were entrusted by nature with the ownership of this single thing, so fleeting and slippery that anyone who will can oust us from possession. What fools these mortals be! They allow the cheapest and most useless things, which can easily be replaced, to be charged in the reckoning, after they have acquired them; but they never regard themselves as in debt when they have received some of that precious commodity: time! And yet time is the one loan that even a grateful recipient cannot repay.

You may desire to know how I, who preach to you so freely, am practising. I confess frankly: my time account balances, as you would expect from one who is free-handed but careful. I cannot boast that I waste nothing, but I can at least tell you what I am wasting, and the cause and manner of the loss; I can give you the reasons why I am a poor man. My situation, however, is the same as that of many who are reduced to slender means through no fault of their own: everyone forgives them, but no one comes to their rescue.

What is the state of things, then? It is this: I do not regard a man as poor, if the little which remains is enough for him. I advise you, however, to keep what is really yours; and you cannot begin too early.  For, as our ancestors believed, it is too late to spare when you reach the dregs of the cask. Of that which remains at the bottom, the amount is slight, and the quality is vile.  

Farewell.

After listening to my brother dictate the whole of this letter, I felt genuine chills. The truth it contains is so blatant that a simple calculation could yield the same result: life is made up of a limited number of hours; therefore, life is time. Whenever you work, you are giving up your time for money (hence the old adage that time is money). This means that whenever you waste time, or money, you are wasting your life, and wasted life is death. This single fact horrifies me every day because, like most every other human, I waste an obscene amount of time: time watching a movie I have already seen, trolling through Facebook without really reading any of the posts, or having the same argument all over again. Rarely, when I am doing these things, do I think about what else I could be doing.
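
That simple calculation is worth actually running. Here is a minimal back-of-envelope sketch in Python–every number in it is an assumption of mine for illustration, not a fact:

```python
# A back-of-envelope "time account"; all numbers are illustrative assumptions.
LIFESPAN_YEARS = 80            # assumed lifespan
HOURS_PER_YEAR = 365 * 24      # 8,760 hours in a year

total_hours = LIFESPAN_YEARS * HOURS_PER_YEAR   # ~700,800 hours in a life
waking_hours = total_hours * 16 / 24            # ~467,200, assuming 8 hours of sleep a night

idle_hours_per_day = 2                          # assumed daily re-watching, scrolling, re-arguing
idle_share = idle_hours_per_day / 16            # fraction of waking hours spent idle

print(f"Waking hours in a life: {waking_hours:,.0f}")
print(f"Lost at 2 idle hours/day: {idle_share * waking_hours:,.0f} ({idle_share:.0%} of waking life)")
```

At two idle hours a day, roughly 58,000 hours–a full decade’s worth of waking time–quietly vanish.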

Therein lies the link, which most will have already seen, between my two conversations. Our time is not free. Every moment we spend sleeping, eating, studying, etc., has a cost–an opportunity cost–and once it has been spent, if it was not truly the best way to spend it, then some small part of our life has been lost without reward.

I see this nearly everywhere: students doze off in class or idly check their email or texts, and when “studying” in the library, they spend the majority of their time effectively idle. Writing this, I am in a college library, and with sample size n=11, I may, without prying too much, say that ~7/11ths of my fellow computer users are not doing what they came intending to do. They are wasting time they will not get back.

And so I say to you, whoever you may be reading this (perhaps idly), much the same as what Seneca might say to you, only less eloquently and more directly: value your time. Do not waste it. Work on being efficient not for the sake of productivity, but for the sake of leisure, for we all have our jobs to do, and if we get them done faster, there is more time for enjoyment. If you spent less time complaining, you might spend that time actively addressing your problems, solving them rationally and thus eliminating your cause for complaint.

Vale.

Hyperinflation and trust in Ancient Rome

Since it hit 1,000,000% in 2018, Venezuelan hyperinflation has not only continued but accelerated. Venezuela’s annual inflation recently hit 10 million percent, as predicted by the IMF; prices rose so quickly that the Venezuelan government struggled to print its constantly-inflating money fast enough. This may seem unbelievable, but peak rates of monthly inflation were higher still in Zimbabwe (80 billion percent/month) in 2008 and in Yugoslavia (313 million percent/month) in 1994, and in Hungary inflation reached an astonishing 41.9 quadrillion percent per month in 1946.

The continued struggles to reverse hyperinflation in Venezuela follow a script that has played out dozens of times, mostly in the 20th century: “resetting” the currency with fewer zeroes, returning to barter, and turning to other countries’ currencies for transactions and for storing value. Hyperinflation’s consistent characteristics–its roots in discretionary/fiat money, large fiscal deficits, and imminent solvency crises–are outlined in an excellent in-depth book by Peter Bernholz covering 30 episodes of hyperinflation. I recommend the book (and the Wikipedia page on hyperinflations) to anyone interested in this recurrent phenomenon.

However, I want to focus on one particular inflationary episode that receives too little attention as a case study in how value can be robbed from a currency: the 3rd-century AD Roman debasement and inflation. It was an iterative experiment by Roman emperors in reducing the precious-metal content of their coins, driven largely by the financial needs of the army and of countless usurpers, and it holds some very interesting lessons for leaders facing uncontrollable inflation.

The Ancient Roman Currency

The Romans inherited a world of many currencies, largely based on Greek precedents in weights and measures, and over hundreds of years they iteratively increased imperial power by taking over municipal mints and having them strike the gold (aureus) and silver (denarius) coins of the emperor (copper/bronze coins also circulated, but they had negligible value and less centralized minting). Minting was intimately related to army leadership: mints tended to follow armies to the front, and the major method of distributing new currency was payment of the Roman army. Under Nero, the aureus was 99% gold and the denarius was 97% silver, matching the low debasement of eastern/Greek currencies and holding a commodity value roughly commensurate with its value as a currency.

The Crisis of the Third Century

However, a major plague beginning in the 160s AD, followed by auctions of the imperial seat, major military setbacks, usurpations, the loss of gold from the mines in Dacia and of silver from conquest, and high bread-dole costs drove emperors from 160-274 AD to iteratively debase their coinage (by reducing the size and purity of gold coins, and by cutting the silver content of coins from 97% to <2%). A major bullion shortage (of both gold and silver) and the demands of the army and imperial maintenance created a situation in which a major government with fiscal deficits, huge costs of appeasing the army and urban populace, and diminishing faith in its leaders’ abilities chose to vastly increase the monetary volume. This not only fits Bernholz’s theory of the causes of hyperinflations but also parallels the high deficits and diminishing public credit of the Maduro regime.

Inflation and debasement

[Figure 1: the exchange-adjusted decline in the gold and silver content of imperial coinage, 160-274 AD]

Unlike modern economies, the Romans did not have paper money; to “print” money, they had to debase their coins. Whether the emperor or his subjects understood that the value coins represented went beyond their commodity value has been hotly debated in academic circles, and the debasement of the 3rd century may be the best “test” of whether they understood value as commodity-based or as a representation of social trust in the issuing body and in other users of the currency.

[Figure 2: prices of wine, wheat, and donkeys in Roman Egypt, 160-274 AD, as attested by papyri]

The silver content of coins decreased by over 95% from 160-274 AD (gold content decreased more slowly, at an exchange-adjusted rate shown in Figure 1), yet inflation over this period was only slightly over 100% (see Figure 2, which shows the prices of wine, wheat, and donkeys in Roman Egypt over that period, as attested by papyri). If inflation had followed the commodity value of the coins, it would have been roughly 2,000%, since the coins of 274 AD had 1/20th of the commodity value of the coins of 160 AD. This is a major gap that can only be explained by some other source of currency value, namely fiat.
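
To make the gap concrete, here is the arithmetic as a short Python sketch (the 1/20th commodity-value figure and the ~100% attested inflation are the numbers cited above; the rest is just rearrangement):

```python
# Commodity-implied vs. attested inflation, 160-274 AD (figures as cited in the text).
commodity_value_274 = 1 / 20    # a coin of 274 AD held ~1/20th the metal value of one from 160 AD

# If prices had tracked metal content, the price level would have risen ~20x:
implied_inflation = (1 / commodity_value_274 - 1) * 100    # ~1,900%, i.e. "roughly 2,000%"
attested_inflation = 100                                   # attested prices only roughly doubled

print(f"Commodity-implied inflation: ~{implied_inflation:,.0f}%")
print(f"Attested inflation:          ~{attested_inflation}%")

# The difference between a 20x and a 2x price level is the share of the coins'
# purchasing power that rested on fiat/social trust rather than on metal.
```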

Effectively, the gradual debasement was not met with blithe ignorance of the reduced silver content (Gresham’s Law continued to influence hoards into the early 3rd century), but the inflation of prices also did not match the change in commodity value–in fact, it lagged behind it for over a century. This shows the influence of market forces (as monetary volume increased, so did prices), but it soundly punctures the idea that coins at the time were simply a convenient way to store silver: the value of the coins lay in trust in the emperor and in the community’s recognition of the value of imperial currency. Especially as non-imperial silver and gold currencies disappeared, the emperor no longer had to maintain equivalence with eastern currencies, and despite enormous military and prestige-related setbacks (including an emperor captured by the Persians and a single year in which six emperors were recognized, some for less than a month), trade within the empire continued without major price shocks following any specific event. This shows that trust in the emperors’ solvency and currency management, and trust that merchants and other members of the market would recognize coin values during exchanges, was maintained throughout the Crisis of the Third Century.

Imperial communication through coinage

This idea that fiat and social trust maintained coin values above commodity values is bolstered by the fact that coins were a major method of communicating imperial will, trust, and power to subjects. Even as Roman coins began to be rejected in trade with outsiders, legal records from Egypt show that the official values of coins were accepted within the army and bureaucracy (including a 1:25 aureus-to-denarius ratio) so long as the coins depicted an emperor who was not considered a usurper. Amazingly, even the two major portions of the empire that split off–the Gallic Empire and the Palmyrene Empire–continued to represent their affiliation with the Roman emperor through their coins: their leaders minted coins with their own face on one side and the Roman emperor (their foe, but the trusted face behind Roman currency) on the other, and they imitated the symbols and imperial language of Roman coinage. Despite this, and despite the fact that Roman coins were more debased (lower in commodity value) than Gallic ones, Roman coins tended to be accepted in Gaul while the reverse was not always true.

Interestingly, the aureus, which was used primarily by the upper social strata and to pay soldiers, saw far less debasement than the more “common” silver coins (which were so heavily debased that the denarius was replaced with the antoninianus, a coin with barely more silver that was nominally worth twice as much, to maintain the 1:25 gold-to-silver rate). This may show that the army and upper social strata were either suspicious enough of emperors, or powerful enough, that they had to be appeased with more “commodity backing.” This differential debasement of a bimetallic coinage is possibly unique in history in the magnitude of the gap between the nominal and commodity values of two interchangeable coins, and it may show that trust in imperial fiat was incomplete, and perhaps varied across social hierarchies.

Collapse following Reform

In 274 AD, after reconquering both the Gallic and Palmyrene Empires, with an excellent reputation across the empire and in the fourth year of his reign (long by 3rd-century standards), the emperor Aurelian decided that the debasement of his currency was against imperial interests. He doubled the amount of silver in a new coin that replaced the antoninianus, and he bumped up the gold content of the aureus. Alongside this reform, and because of the demands of ever-larger bread doles to the urban poor, Aurelian also took far more taxes in kind and far fewer in money. Given that this was an imperial reform to increase the value of the currency (at least as measured by its silver/gold content), shouldn’t it logically have led to deflation, or at least have halted the measured inflation of the previous century?

In fact, the opposite occurred. It appears that between 274 AD and 275 AD, under a stable emperor who had brought unity and peace and who had restored some commodity value to the imperial coinage, the currency lost over 90% of its purchasing power (equivalent to roughly 1,000% inflation) in several months. After a century in which prices had risen only gradually despite debasement (at rates that were nevertheless unprecedentedly high for the time), the currency simply collapsed in value. How could a currency reform that restricted the monetary volume have such a paradoxical effect?
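
The equivalence between a purchasing-power collapse and an inflation figure is worth spelling out; here is a one-line sketch (the >90% figure is the one cited above):

```python
# Purchasing-power loss -> equivalent inflation: prices rise by L / (1 - L).
L = 0.90                         # >90% collapse in purchasing power, per the text
inflation = L / (1 - L) * 100    # = 900%, i.e. roughly the ~1,000% cited above
print(f"{L:.0%} loss of purchasing power = prices x{1 / (1 - L):.0f} = {inflation:.0f}% inflation")
```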

Explanation: Social trust and feedback loops

In a paper I published earlier this summer, I argue that this paradoxical collapse occurred because Aurelian’s reform was a blaring signal from the emperor that he did not trust the fiat value of his own currency. Though he was promising to increase the commodity value of coins, he was also stating implicitly (and, by refusing to accept taxes in coin, explicitly) that the fiat value maintained throughout the 3rd century by his predecessors would no longer be recognized by the imperial bureaucracy in its transactions. For army payment and all other transactions, the social trust–in the emperor and in other market members–that had undergirded the value of money would now be ignored by the issuing body itself. Once the issuer (and a major market actor) abandoned fiat and declared that newly minted coins would have better commodity value than previous coins, the market rationally answered by repricing coins toward their commodity value and abandoning the idea of fiat.

Furthermore, not only were taxes taken in kind rather than in coin, but there was a widespread return to barter as people tried to avoid holding coins as a store of value. This pushed up the velocity of money: having abandoned coins as a store of value, people paid higher and higher prices for commodities just to get rid of their currency. The demonetization/return to barter also shrank the share of the market transacted in currency, meaning that ever more coins (mostly aureliani, the new coin, and antoniniani) were chasing fewer goods. Under the quantity theory of money, the high velocity would also contribute to inflation. The result was an unholy feedback loop: decreasing value caused distrust, which caused demonetization and higher velocity, which led to further decreases in value and more distrust in coins as stores of value, and the cycle continued until all fiat value had been driven out of Roman coinage.
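
A stylized way to see how these forces compound: under the quantity-theory identity M·V = P·Q, the price level is P = M·V/Q, so rising velocity (V) and a shrinking monetized economy (Q) multiply together. The toy loop below is purely illustrative–the 1.5x and 0.8x step sizes are my assumptions, not estimates from Roman data:

```python
# Toy model of the distrust feedback loop via the quantity theory of money (M*V = P*Q).
# Step sizes are arbitrary assumptions, chosen only to show the compounding effect.
M = 1.0   # monetary volume (held fixed here to isolate the trust effects)
V = 1.0   # velocity of money
Q = 1.0   # volume of goods still transacted in coin

price_level = M * V / Q
for step in range(5):
    V *= 1.5          # distrust: people spend coins faster rather than hold them
    Q *= 0.8          # demonetization: barter shrinks the coin-using economy
    new_price_level = M * V / Q
    step_inflation = (new_price_level / price_level - 1) * 100
    print(f"step {step + 1}: price level x{new_price_level:.1f}, inflation this step {step_inflation:.0f}%")
    price_level = new_price_level
```

Even with the money supply frozen, five turns of this loop leave prices more than twenty times higher, which is why the cycle, once started, was so hard to stop.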

Aftermath

This was followed by Aurelian’s assassination, and there were several further monetary collapses from 275 AD onward as successive emperors attempted, without success, to recreate the debased/fiat system of their predecessors. This continued through the reign of Diocletian, whose major reforms swept away the previous coinage and included the famous (and famously failed) Edict on Maximum Prices. Inflation remained a problem through 312 AD, when Constantine re-instituted commodity-based currencies, largely by seizing the assets of rich competitors and liquidating them to fund his army and public donations. The impact of that sort of private seizure is a topic for another time, but the major lesson of the aftermath is that fiat, once abandoned, is difficult to restore, because the very trust on which it was based has been undermined. While later 4th-century emperors managed to debase again without major inflationary consequences, and Byzantine emperors did the same to some extent, the Roman currency was never again divorced from its commodity value, and fiat currency would have to wait centuries for the next major experiment.

Lessons for Today?

While this all makes for interesting history, is it relevant to today’s monetary systems? The sophistication of modern markets and communication renders some of the signalling discussed above rather archaic and quaint, but the core principles stand:

  1. Fiat currencies are based on social trust in other market actors, but also on the solvency and rule-based systems of the issuing body.
  2. Expansions in monetary volume can lead to inflation, but slow transitions away from commodity value are possible even for a distressed government.
  3. Undermining a currency can have different impacts across social strata and certainly across national borders.
  4. Central abandonment of past promises by an issuer can cause an inflationary collapse of its currency through demonetization, increased velocity, and distrust, regardless of intention.
  5. Once rapid inflation begins, it is sustained by feedback loops that make it hard to stop.

The situation in Venezuela continues to offer lessons to issuing bodies about how to manage hyperinflations, but the major lesson is that such cycles should be avoided at all costs because of the difficulty of reversing them. Modern governments and independent currency issuers (cryptocurrencies, stablecoins, etc.) should study how previous currencies built trust and recognition of value in their early stages, and how these can be destroyed by a single action against the promised and perceived value of a currency.

Inventions that didn’t change the world

Have you ever learned about an amazing invention–whether it was the Baghdad battery or the ancient Roman steam engine or Chinese firecrackers–and wondered why it didn’t do more to change the world? In this podcast, we examine a selection of curiosities and explore hypotheses for why their inventors didn’t use them to full effect.

We move VERY quickly through a range of fascinating examples and hypotheses, and therefore leave a lot up to discussion. We hope to see your thoughts, feedback, and additions in the comments section!

For any invention that you want to learn more about, see the links below:

Knossos’ toilets

In the 2nd millennium BC, a “palace” at Knossos (now thought to have been a building that served as an administrative, trade, and gathering hub) had toilets flushed with running water. Much like the Roman Cloaca Maxima, this was likely a HUGE public-health benefit, but it basically died out. Does this show that military protection/staving off the “Dark Ages” was the only way to maintain amazing inventions?

Link: http://www.nature.com/news/the-secret-history-of-ancient-toilets-1.19960

The Nimrud lens

Whether it was a fire-starter, a magnifying glass, or (for some overeager astronomy enthusiasts) part of an early telescope, the Neo-Assyrian ground-crystal Nimrud lens is an invention thousands of years out of place. While the Egyptians, Greeks, and Romans all used lenses of different sorts, and glass-blowing was certainly popular by the 1st century BC in Roman Egypt, no glass lenses were made until the Middle Ages, and the potential scientific and engineering uses of lenses–which can hardly be overstated, even in their 16th-to-18th-century applications–had to wait another couple of millennia. Many devices like the Baghdad battery and the Antikythera mechanism are heralded for their possible engineering genius, but this seems like a simple invention with readily available applications that simply disappeared from the historical record.

https://en.wikipedia.org/wiki/Nimrud_lens

Hero of Alexandria’s steam engine

In the 1st century AD, Hero of Alexandria was a master of simple machines (mostly used for stage plays) who also invented a force pump, a wind-powered machine, and even an early vending machine. However, he is likely most famous for his aeolipile, a rotating steam engine that used heated water to spin an axle. Its best-attested uses were religious devotion and party tricks.

https://en.wikipedia.org/wiki/Aeolipile

The ancient mechanical reaper

The ancient Gallo-Romans (or just the Gauls) invented a novel way of harvesting grain: rather than using sickles or scythes, they used a mechanical reaper, 1,700 years before Cyrus McCormick’s reaper more than tripled the productivity of American farmers. This antiquated device literally put the cart before the oxen and required two men to operate: one to drive the beasts, and another to knock the ears off the stalks (it was obviously far less sophisticated than McCormick’s). The invention did not survive the Völkerwanderung period.

http://www.gnrtr.com/Generator.html?pi=208&cp=3

http://reapertakethewheel.blogspot.com/2013/03/impacts-of-invention.html

Note: the horse collar (which allowed horses to be used for plowing) was invented between 1600 and 1400 BC in both China AND the Levant, but it was not applied widely in Europe until around 1000 AD. https://en.wikipedia.org/wiki/Horse_collar.

Inoculation

Madhav, an Indian doctor, compiled hundreds of cures in his Nidana, including an inoculation against smallpox that showed an understanding of disease transmission (he would take year-old smallpox-infected flesh and touch it to a recently made cutaneous wound). However, the next 13 centuries saw neither the development of an Indian medical understanding of viruses or bacteria nor even the wider adoption of techniques like this one. https://books.google.com/books?id=Hkc3QnbagK4C&pg=PA105&lpg=PA105&dq=madhav+indian+smallpox+inoculation&source=bl&ots=4RFPuvbf5Y&sig=iyDaNUs4u5N7xHH6-pvlbAY9fcQ&hl=en&sa=X&ved=0ahUKEwic8e-1-JXVAhUp6IMKHfw3DLsQ6AEIOjAD#v=onepage&q=madhav%20indian%20smallpox%20inoculation&f=false

At least, thank god, their methods of giving nose jobs to those who had had their noses cut off as a punishment survived: https://en.wikipedia.org/wiki/History_of_rhinoplasty

The Chinese:

A list of all Chinese inventions:

https://en.wikipedia.org/wiki/List_of_Chinese_inventions#Four_Great_Inventions

Gunpowder

Gunpowder was discovered by Chinese alchemists searching for the elixir of life (irony, no?).

https://www.thoughtco.com/invention-of-gunpowder-195160

https://en.wikipedia.org/wiki/Four_Great_Inventions

(maybe a good corollary would be Greek fire, which was used effectively in naval warfare by the Byzantines, but which was not improved upon and the recipe of which is still secret: https://en.wikipedia.org/wiki/Greek_fire)

Printing

The Chinese invented printing possibly as early as the 6th century. However, unlike the explosion of literacy seen in much of Europe after the printing press (particularly in Protestant Europe–see our last podcast), the Chinese masses never learned to read. In fact, in 1950 fewer than 20% of Chinese citizens were literate, compared to literacy rates as high as 90% in some European societies (e.g., the male population of Sweden) within a few centuries of the printing press’s introduction. Why? There may be several reasons–cultural, religious, political–but in our opinion, it comes down to the characters: 100,000 blocks were needed to create a single set.

http://www.nytimes.com/2001/02/12/news/chinas-long-but-uneven-march-to-literacy.html

https://en.wikipedia.org/wiki/History_of_printing_in_East_Asia

They also invented pulped paper by the 2nd century BC: https://en.wikipedia.org/wiki/List_of_Chinese_inventions.

The compass

Invented by 200 BC for divination and used for navigation by the Song dynasty; despite this, and despite the availability of easily colonizable islands within easy sailing distance, the Chinese did not colonize Indonesia, Polynesia, or Oceania, while the Europeans did so within a century of developing the technology and first sailing there.

https://en.wikipedia.org/wiki/History_of_the_compass.

The rudder

While the Chinese did not invent the rudder itself, they invented the “medial, axial, and vertical” sternpost rudder in the 1st century AD–almost 1,000 years before it became standard in Europe in the 11th century.

Natural gas

The Chinese discovered “fire wells” (natural gas near the surface) and erected shrines to worship at them.

https://link.springer.com/referenceworkentry/10.1007%2F978-1-4020-4425-0_9568

They even understood its potential as a fuel, but they never developed it beyond primitive burning and bamboo piping, despite having advanced techniques for mining it by the 1st century BC.

Chinese miscellany:

Hydraulic-powered fan: https://en.wikipedia.org/wiki/Fan_(machine)#History

Cupola furnace for smelting and molding iron: https://en.wikipedia.org/wiki/Cupola_furnace.

Coke as a fuel source: https://en.wikipedia.org/wiki/Coke_(fuel).

Belt-drive spinning wheel: https://en.wikipedia.org/wiki/Spinning_wheel.

The pre-Columbian wheel

The pre- and early Mayans had toys that used primitive wheels, but they did not use wheels for any labor-saving purpose (even their gods were depicted carrying loads on their backs). This may have been because scaling up met with mechanical difficulties, but the potential utility of the wheel, with a bit of investment, sat unrealized for centuries.

https://tcmam.wordpress.com/2010/11/11/did-pre-columbian-mesoamericans-use-wheels/

The Tucker:

http://www.smithsonianmag.com/history/the-tucker-was-the-1940s-car-of-the-future-135008742/

The following book contained some of our hypotheses:

https://books.google.com/books?id=ynejM1-TATMC&pg=PA399&lpg=PA399&dq=roman+and+greek+labor-saving+devices&source=bl&ots=BI6GVGTrxC&sig=8ZJqirOVUyjH7TNq0fcW6UUPn1k&hl=en&sa=X&ved=0ahUKEwj55O7395XVAhVqwYMKHSb2Dy4Q6AEIKTAB#v=onepage&q=roman%20and%20greek%20labor-saving%20devices&f=false

 

The rest of our hypotheses were amalgamated from our disparate classes in economics and history; none of them are original to us, and none are uncommon in academic circles. Thanks for listening!