Broken incentives in medical research

Last week, I sat down with Scott Johnson of the Device Alliance to discuss how medical research is communicated only through archaic and disorganized methods, and how the root of this is an “economy” of Impact Factor, citations, and tenure-seeking rather than a genuine exercise in scientific communication.

We also discussed a vision of the future of medical publishing, where the basic method of communicating knowledge was no longer uploading a PDF but contributing structured data to a living, growing database.

You can listen here: https://www.devicealliance.org/medtech_radio_podcast/

As background, I recommend the recent work by Patrick Collison and Tyler Cowen on broken incentives in medical research funding (as opposed to publishing); I think their work shows that a great slowdown in medical innovation has resulted from systematic errors in how we organize knowledge gathering. Mark Zuckerberg actually interviewed them about it here: https://conversationswithtyler.com/episodes/mark-zuckerberg-interviews-patrick-collison-and-tyler-cowen/.

Launching our COVID-19 visualization

I know everyone is buried in COVID-19 news, updates, and theories. To me, that makes it difficult to cut through the opinions and see the evidence that should actually guide physicians, policymakers, and the public.

To me, the most important thing is the ability to find the answer to my research question easily and to know that this answer is reasonably complete and evidence-driven. That means getting organized access to the scientific literature. Many sources (including a National Library of Medicine database) present thousands of articles, but organization is the piece that is missing.

That is why I launched StudyViz, a new product that enables physicians to build an updatable visualization of all studies related to a topic of interest. My physician collaborators then built just such a visual for COVID-19 research: a sunburst diagram that users can navigate to find their research question of interest.

StudyViz sunburst

For instance, if you are interested in the impact of COVID-19 on pregnant patients, just go to “Subpopulations” and find “Pregnancy” (or neonates, if that is your concern). We nested the tags so that you can “drill down” on your question, and so that related concepts are close to each other. Then, to view the studies themselves, just click on them to see an abstract with the key info (patients, interventions, and outcomes) highlighted:

Abstract

This is based on a complex concept hierarchy, built by our collaborators, that is constantly evolving as the literature does:

Hierarchy
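For readers who want to tinker, here is a minimal sketch of how a nested tag hierarchy like this can be rendered as a drill-down sunburst with off-the-shelf tools. This is not StudyViz code–the tags, study counts, and use of the plotly library are purely illustrative:

```python
# Toy example: render a nested concept hierarchy as a drill-down sunburst.
# The hierarchy and study counts below are invented for illustration only.
import plotly.express as px

records = [
    # (top-level tag, sub-tag, leaf tag, number of tagged studies)
    ("Subpopulations", "Pregnancy", "Maternal outcomes", 12),
    ("Subpopulations", "Pregnancy", "Vertical transmission", 8),
    ("Subpopulations", "Neonates", "Outcomes", 5),
    ("Interventions", "Antivirals", "Remdesivir", 20),
    ("Interventions", "Antivirals", "Lopinavir/ritonavir", 9),
]

fig = px.sunburst(
    {
        "level1": [r[0] for r in records],
        "level2": [r[1] for r in records],
        "level3": [r[2] for r in records],
        "studies": [r[3] for r in records],
    },
    path=["level1", "level2", "level3"],  # nesting order = drill-down order
    values="studies",                     # wedge size = number of studies under the tag
)
fig.show()
```

Clicking a wedge zooms into that branch of the hierarchy, which is the same “drill down” interaction described above.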

Even beyond that, we opened up our software to let any researchers who are interested build similar visuals on any disease state, as COVID-19 is not the only disease for which organizing and accessing the scientific literature is important!

We are seeking medical coinvestigators–any physician interested in working with us can simply email contact@nested-knowledge.com or contact us through our website!

A History of Plagues

As COVID-19 continues to spread, fears and extraordinary predictions have also gone viral. In facing a new infectious threat, we confront unknowns about how the traits of our societies worldwide, and of this novel coronavirus itself, will shape its spread. Though no two pandemics are equivalent, I thought it best to face this new threat armed with knowledge from past infectious episodes. The best inoculation against a plague of panic is to use evidence gained through billions of deaths, thousands of years, and a few vital breakthroughs to inform our understanding of today’s biological crisis, its social prognosis, and our choices.

Below, I address three key questions: First, what precedents do we have for infections with catastrophic potential across societies? Second, what are the greatest killers and how do pandemics compare? Lastly, what are our greatest accomplishments in fighting infectious diseases?

As a foundation for understanding how threats like COVID-19 come about and how their hosts fight back, I recommend reading The Red Queen, concerning the evolutionary impact and mechanisms of host-disease competition, and listening to Sam Harris’ “The Plague Years” podcast with Matt McCarthy from August 2019, which predated COVID-19 but had a strangely prophetic discussion of in-hospital strategies to mitigate drug resistance and their direct relation to evolutionary competition.

  • The Biggest Killers:

Infectious diseases plagued humanity throughout prehistory and history, with a dramatic decrease in the number of infectious disease deaths coming only in the past 200 years. In 1900, the leading killers of people were (1) Influenza, (2) Tuberculosis, and (3) Intestinal diseases, whereas now we die from (1) Heart disease, (2) Cancer, and (3) Stroke, all chronic conditions. This graph shows not that humans have vanquished infectious disease as a threat, but that in the never-ending war of evolutionary one-upmanship, we have won battles consistently from 1920 onward. When paired with Jonathan Haidt’s Most Important Graph in the World, this vindicates humanity’s methods of scientific and economic progress toward human flourishing.

Death rates

However, if the CDC had earlier data, it would show a huge range of diseases that dwarf wars and famines and dictators as causes of death in the premodern world. If we look to the history of plagues, we are really looking at the history of humanity’s greatest killers.

The sources on the history of pandemics are astonishingly sparse/non-comprehensive. I created the following graphs only by combining evidence and estimates from the WHO, CDC, Wikipedia, Our World in Data, VisualCapitalist, and others (lowest estimates shown where ranges were presented) for both major historic pandemics and for ongoing communicable disease threats. This is not a complete dataset, and I will continue to add to it, but it shows representative death counts from across major infectious disease episodes, as well as the death rate per year based on world population estimates. See the end of this post for the full underlying data. First, the top 12 “plagues” in history:

Capture disease top 12

 

Note: blue=min, orange=max across the sources I examined. For ongoing diseases with year-by-year WHO evidence, like tuberculosis, measles, and cholera, I grouped mortality in 5-year spans (except AIDS, which does not have good estimates from the 1980s-90s, so I reported based on total estimated deaths).
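For those who want to check my arithmetic, the “deaths per 100,000 people per year” column in the data tables at the end of this post follows from a simple calculation like the sketch below. The world-population figure is an input you have to supply from historical estimates; the 200 million used here is an assumption for the mid-6th century, chosen to illustrate the method:

```python
def deaths_per_100k_per_year(death_toll, start_year, end_year, world_population):
    """Annualize an epidemic's death toll and scale it to a rate per 100,000 people."""
    years = max(end_year - start_year + 1, 1)   # count single-year episodes as one year
    deaths_per_year = death_toll / years
    return deaths_per_year / world_population * 100_000

# Illustrative check against the Plague of Justinian row (25,000,000 deaths, 541-542 AD),
# assuming a world population of roughly 200 million at the time:
print(round(deaths_per_100k_per_year(25_000_000, 541, 542, 200_000_000), 1))  # ~6250.0
```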

Now, let’s look at the plagues that were lowest on my list (numbers 55-66). Again, my list was not comprehensive, but this should provide context for COVID-19:

Capture covid

As we can see, COVID-19, with 11,400 deaths so far, recently passed Ebola to take the 61st place (out of 66) on our list of plagues. Note again that several ongoing diseases were recorded in 5-year increments, and COVID-19 still comes in under the death rates for cholera. Even more notably, it has 0.015% as many victims as the plague of the 14th Century.

  • In Context of Current Infectious Diseases:

For recent/ongoing diseases, it is easier to compare year-by-year data. Adding UNAIDS to our sources, we found the following death rates across some of the leading infectious killers. Again, this is not comprehensive, but it helps put COVID-19 (the small red dot, covering only the first 3 months of 2020 so far) in context:

Capture diseases by year

Note: darker segments of lines are my own estimates; full data at bottom of the post. I did not include influenza due to the lack of good year-by-year sources, but a Lancet article estimated, based on data from 1999-2015, that 291,000-645,000 influenza deaths per year is to be expected.

None of this is to say that COVID-19 is not a major threat to human health globally–it is, and precautions could save lives. However, it should show us that there are major threats to human health globally all the time, and that we must continue to fight them. These trendlines tend to move in the right direction, but our war for survival has many foes, more will emerge in the future, and we should expend our resources to fight them rationally, based on the benefits to human health rather than on panic or headlines.

  • The Eradication List:

As we think about the way to address COVID-19, we should keep in mind that this fight against infectious disease builds upon work so amazing that most internet junkies approach new infectious diseases with fear of the unknown, rather than tired acceptance that most humans succumb to them. That is a recent innovation in the human experience, and the strategies used to fight other diseases can inform our work now to reduce human suffering.

While influenzas may be impossible to eradicate (in part due to an evolved strategy of constantly changing antigens), I wanted to direct everyone to an ever-growing monument to human achievement, the Eradication List. While humans have eradicated only a few infectious diseases, the amazing thing is that we can discuss which diseases may in fact disappear as threats through the work of scientists.

On that happy note, I leave you here. More History of Plagues to come, in Volume 2: Vectors, Vaccines, and Virulence!

Disease Start Year End Year Death Toll (low) Death Toll (high) Deaths per 100,000 people per year (global)
Antonine Plague 165 180 5,000,000 5,000,000 164.5
Plague of Justinian 541 542 25,000,000 100,000,000 6,250.0
Japanese Smallpox Epidemic 735 737 1,000,000 1,000,000 158.7
Bubonic Plague 1347 1351 75,000,000 200,000,000 4,166.7
Smallpox (Central and South America) 1520 1591 56,000,000 56,000,000 172.8
Cocoliztli (Mexico) 1545 1545 12,000,000 15,000,000 2,666.7
Cocoliztli resurgence (Mexico) 1576 1576 2,000,000 2,000,000 444.4
17th Century Plagues 1600 1699 3,000,000 3,000,000 6.0
18th Century Plagues 1700 1799 600,000 600,000 1.0
New World Measles 1700 1799 2,000,000 2,000,000 3.3
Smallpox (North America) 1763 1782 400,000 500,000 2.6
Cholera Pandemic (India, 1817-60) 1817 1860 15,000,000 15,000,000 34.1
Cholera Pandemic (International, 1824-37) 1824 1837 305,000 305,000 2.2
Great Plains Smallpox 1837 1837 17,200 17,200 1.7
Cholera Pandemic (International, 1846-60) 1846 1860 1,488,000 1,488,000 8.3
Hawaiian Plagues 1848 1849 40,000 40,000 1.7
Yellow Fever 1850 1899 100,000 150,000 0.2
The Third Plague (Bubonic) 1855 1855 12,000,000 12,000,000 1,000.0
Cholera Pandemic (International, 1863-75) 1863 1875 170,000 170,000 1.1
Indian Smallpox 1868 1907 4,700,000 4,700,000 9.8
Franco-Prussian Smallpox 1870 1875 500,000 500,000 6.9
Cholera Pandemic (International, 1881-96) 1881 1896 846,000 846,000 4.4
Russian Flu 1889 1890 1,000,000 1,000,000 41.7
Cholera Pandemic (India and Russia) 1899 1923 1,300,000 1,300,000 3.3
Cholera Pandemic (Philippines) 1902 1904 200,000 200,000 4.2
Spanish Flu 1918 1919 40,000,000 100,000,000 1,250.0
Cholera (International, 1950-54) 1950 1954 316,201 316,201 2.4
Cholera (International, 1955-59) 1955 1959 186,055 186,055 1.3
Asian Flu 1957 1958 1,100,000 1,100,000 19.1
Cholera (International, 1960-64) 1960 1964 110,449 110,449 0.7
Cholera (International, 1965-69) 1965 1969 22,244 22,244 0.1
Hong Kong Flu 1968 1970 1,000,000 1,000,000 9.4
Cholera (International, 1970-75) 1970 1974 62,053 62,053 0.3
Cholera (International, 1975-79) 1975 1979 20,038 20,038 0.1
Cholera (International, 1980-84) 1980 1984 12,714 12,714 0.1
AIDS 1981 2020 25,000,000 35,000,000 13.8
Measles (International, 1985) 1985 1989 4,800,000 4,800,000 19.7
Cholera (International, 1985-89) 1985 1989 15,655 15,655 0.1
Measles (International, 1990-94) 1990 1994 2,900,000 2,900,000 10.9
Cholera (International, 1990-94) 1990 1994 47,829 47,829 0.2
Malaria (International, 1990-94) 1990 1994 3,549,921 3,549,921 13.3
Measles (International, 1995-99) 1995 1999 2,400,000 2,400,000 8.4
Cholera (International, 1995-99) 1995 1999 37,887 37,887 0.1
Malaria (International, 1995-99) 1995 1999 3,987,145 3,987,145 13.9
Measles (International, 2000-04) 2000 2004 2,300,000 2,300,000 7.5
Malaria (International, 2000-04) 2000 2004 4,516,664 4,516,664 14.7
Tuberculosis (International, 2000-04) 2000 2004 7,890,000 8,890,000 25.7
Cholera (International, 2000-04) 2000 2004 16,969 16,969 0.1
SARS 2002 2003 770 770 0.0
Measles (International, 2005-09) 2005 2009 1,300,000 1,300,000 4.0
Malaria (International, 2005-09) 2005 2009 4,438,106 4,438,106 13.6
Tuberculosis (International, 2005-09) 2005 2009 7,210,000 8,010,000 22.0
Cholera (International, 2005-09) 2005 2009 22,694 22,694 0.1
Swine Flu 2009 2010 200,000 500,000 1.5
Measles (International, 2010-14) 2010 2014 700,000 700,000 2.0
Malaria (International, 2010-14) 2010 2014 3,674,781 3,674,781 10.6
Tuberculosis (International, 2010-14) 2010 2014 6,480,000 7,250,000 18.6
Cholera (International, 2010-14) 2010 2014 22,691 22,691 0.1
MERS 2012 2020 850 850 0.0
Ebola 2014 2016 11,300 11,300 0.1
Malaria (International, 2015-17) 2015 2017 1,907,872 1,907,872 8.6
Tuberculosis (International, 2015-18) 2015 2018 4,800,000 5,440,000 16.3
Cholera (International, 2015-16) 2015 2016 3,724 3,724 0.0
Measles (International, 2019) 2019 2019 140,000 140,000 1.8
COVID-19 2019 2020 11,400 11,400 0.1

 

Year Malaria Cholera Measles Tuberculosis Meningitis HIV/AIDS COVID-19
1990 672,518 2,487 670,000 – 1,903 310,000 –
1991 692,990 19,302 550,000 – 1,777 360,000 –
1992 711,535 8,214 700,000 – 2,482 440,000 –
1993 729,735 6,761 540,000 – 1,986 540,000 –
1994 743,143 10,750 540,000 – 3,335 620,000 –
1995 761,617 5,045 400,000 – 4,787 720,000 –
1996 777,012 6,418 510,000 – 3,325 870,000 –
1997 797,091 6,371 420,000 – 5,254 1,060,000 –
1998 816,733 10,832 560,000 – 4,929 1,210,000 –
1999 834,692 9,221 550,000 – 2,705 1,390,000 –
2000 851,785 5,269 555,000 1,700,000 4,298 1,540,000 –
2001 885,057 2,897 550,000 1,680,000 6,398 1,680,000 –
2002 911,230 4,564 415,000 1,710,000 6,122 1,820,000 –
2003 934,048 1,894 490,000 1,670,000 7,441 1,965,000 –
2004 934,544 2,345 370,000 1,610,000 6,428 2,003,000 –
2005 927,109 2,272 375,000 1,590,000 6,671 2,000,000 –
2006 909,899 6,300 240,000 1,550,000 4,720 1,880,000 –
2007 895,528 4,033 170,000 1,520,000 7,028 1,740,000 –
2008 874,087 5,143 180,000 1,480,000 4,363 1,630,000 –
2009 831,483 4,946 190,000 1,450,000 3,187 1,530,000 –
2010 788,442 7,543 170,000 1,420,000 2,198 1,460,000 –
2011 755,544 7,781 200,000 1,400,000 3,726 1,400,000 –
2012 725,676 3,034 150,000 1,370,000 3,926 1,340,000 –
2013 710,114 2,102 160,000 1,350,000 3,453 1,290,000 –
2014 695,005 2,231 120,000 1,340,000 2,992 1,240,000 –
2015 662,164 1,304 150,000 1,310,000 – 1,190,000 –
2016 625,883 2,420 90,000 1,290,000 – 1,170,000 –
2017 619,825 – 100,000 1,270,000 – 1,150,000 –
2018 – – – 1,240,000 – – –
2019 – – – – – – –
2020 – – – – – – 16,514

Broken Incentives in Medical Innovation

I recently listened to Mark Zuckerberg interviewing Tyler Cowen and Patrick Collison concerning their thesis that the process of using scientific research to advance major development goals (e.g. extending the average human lifespan) has stagnated. It is a fascinating discussion that fundamentally questions the practice of scientific research as it is currently conducted.

Their conversation also made me consider more deeply the incentives in my industry, medical R&D, that have shaped the practices that Cowen and Collison find so problematic. While there are many reasons for the difficulties in maintaining a breakneck pace of technological progress (“all the easy ideas are already done,” “the American education system fails badly on STEM,” etc), I think that there are structural causes that are major contributors to the great slowdown in medical progress. See my full discussion here!

The open secrets of what medicine actually helps

One of the things that I was most surprised by when I joined the medical field was how variable the average patient benefit was for different therapies. Obviously, Alzheimer’s treatments are less helpful than syphilis ones, but even within treatment categories, there are huge ranges in actual efficacy for treatments with similar cost, materials, and public conception.

What worries me about this is that, not only in public but within the medical establishment, actually differentiating these therapies–and therefore deciding which therapies, ultimately, to use and pay for–is not prioritized in medical practice.

I wrote about this on my company’s blog, framing it as a comment on the most surprising dichotomy I learned about–that between stenting (no benefit shown for most patients!!) and clot retrieval during strokes (amazing benefits, including double the odds of a good neurological outcome). Amazingly, the former is a far more common procedure, and the latter is underprovided in rural areas and in most countries outside of the US, EU, Japan, and Korea. Read more here: https://about.nested-knowledge.com/2020/01/27/not-all-minimally-invasive-procedures-are-created-equal/.

There is no Bloomberg for medicine

When I began working in medical research, I was shocked to find that no one in the medical industry has actually collected and compared all of the clinical outcomes data that has been published. With Big Data in Healthcare as such a major initiative, it was incomprehensible to me that the highest-value data–the data that are directly used to clear therapies, recommend them to the medical community, and assess their efficacy–were being managed in the following way:

  1. Physician completes study, and then spends up to a year writing it up and submitting it,
  2. Journal sits on the study for months, then publishes (in some cases), but without ensuring that it matches similar studies in the data it reports.
  3. Oh, by the way, the journal does not make the data available in a structured format!
  4. Then, if you want to see how that one study compares to related studies, you have to either find a recent, comprehensive, on-point meta-analysis (which is a very low chance in my experience), or comb the literature and extract the data by hand.
  5. That’s it.

This strikes me as mismanagement of data that are relevant to life-changing healthcare decisions. Effectively, no one in the medical field has anything like what the financial industry has had for decades–the Bloomberg terminal, which presents comprehensive information on an updatable basis by pulling data from centralized repositories. If we can do it for stocks, we can do it for medical studies, and in fact that is what I am trying to do. I recently wrote an article on the topic for the Minneapolis-St Paul Business Journal, calling for the medical community to support a centralized, constantly updated, data-centric platform that enables not only physicians but also insurers, policymakers, and even patients to examine the actual scientific consensus, and the data that support it, in a single interface.
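To make “structured” concrete, here is a minimal sketch of the kind of record such a platform might store for a single study arm. The field names and types are my own invention for illustration, not our actual schema:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class OutcomeResult:
    name: str        # e.g. "good neurological outcome (mRS 0-2)"
    timepoint: str   # e.g. "90 days"
    events: int      # patients who experienced the outcome
    total: int       # patients assessed for the outcome

@dataclass
class StudyArm:
    study_id: str    # e.g. a PubMed ID or DOI
    population: str  # e.g. "acute ischemic stroke, large-vessel occlusion"
    intervention: str
    n_patients: int
    outcomes: List[OutcomeResult] = field(default_factory=list)

# With every published study abstracted into records like these, comparing therapies
# becomes a query over structured fields rather than a manual crawl through PDFs.
```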

Read the full article at https://www.bizjournals.com/twincities/news/2019/12/27/there-is-no-bloomberg-for-medicine.html!

Changing the way doctors see data

Over the past four years, my brother and I have grown a business that helps doctors publish data-driven articles, from just the two of us to a team of over 30 experienced researchers. However, along the way, we noticed that data management in medical publication was decades behind other fields–in fact, the vital clinical outcomes from major trials are generally published as singular PDFs with no structured data, and are analyzed in comparison to existing studies only in nonsystematic, nonupdatable publications. Effectively, medicine has no central method for sharing or comparing patient outcomes across therapies, and I think that it is our responsibility as researchers to present these data to the medical community.

Based on our internal estimates, there are >3 million published clinical outcomes studies (with over 200 million individual datapoints) that need to be abstracted, structured, and compared through a central database. We recognized that this is a monumental task, and we therefore have focused on automating and scaling research processes that have been, through today, entirely manual. Only after a year of intensive work have we found a path toward creating a central database for all published patient outcomes, and we are excited to debut our technology publicly!

Keith recently presented our venture at a Mayo Clinic-hosted event, Walleye Tank (a Shark Tank-style competition of medical ventures), and I think that it is an excellent fast-paced introduction to a complex issue. Thanks also to the Mayo Clinic researchers for their interesting questions! You can see his two-minute presentation and the Q&A here. We would love to get more questions from the economic/data science/medical communities, and will continue putting our ideas out there for feedback!

My Startup Experience

Over the past 4 years, I have had a huge transition in my life–from history student to law student to serial medical entrepreneur. My academic work taught me the value we can create if we find an unmet need in the world, develop an idea that fills that need, and then use technology, personal networks, and hard work to bring something new into being. While startups obviously tackle any new problem under the sun, to me they are the mechanism to bring about a positive change–and, along the way, to gain the resources to scale that change across the globe.

I am still very far from reaching that goal, but my family and cofounders have several visions of how to improve not only how patients are treated but also how we build the knowledge base that physicians, patients, and researchers can use to inform care and innovation. My brother/cofounder and I were recently on an entrepreneurship-focused podcast, and we got the chance to discuss our experience, our vision, and our companies. I hope this can be a springboard for more discussions about how companies are a unique agent of advancing human flourishing, and about the history and philosophy of entrepreneurship, technology, and knowledge.

You can listen here: http://rochesterrising.org/podcast/episode-151-talking-medical-startups-with-keith-and-kevin-kallmes. Heartfelt thanks to Amanda Leightner and Rochester Rising for a great conversation!

Thank you!

Kevin Kallmes

The Myth of the Nazi War Machine

Nazism and fascism, in the popular imagination, are associated with evil, immoral, inhumane treatment of conquered groups and their own subjects alike. These evil actions loom even larger because the thought of an entire society dedicated to military industry, extending its reach across and beyond Europe, inspires ghastly fears not only of evil intent but also of astonishing military might that could overwhelm the Allies: the technological wonder of the V2 rocket, the deadly and ever-present U-boat threat, and the German “Royal Tiger” tank so well armored that Sherman-fired shells literally bounced off of it. This vision of the Nazis as conquering through technological and industrial superiority is not just a mistake of modern historians; it rests on the Allies’ wartime overestimation of their foes and on the disastrously overconfident messaging of the Germans, Italians, and Japanese that their technology, industrial power, and elan gave them even a chance of victory. Hitler’s miscalculation in extrapolating his successes in Poland and France to the assumption that his alliance could overwhelm the combined defenses of over 1.5 billion people represents the most astonishing delusion in military history.

The inspiration for this comes from Victor Davis Hanson’s fascinating economic and industrial history, The Second World Wars. One of his major arguments is that the Axis leaders lost because their commitment to their ideology hardened into a fantasy about their own abilities that directly contradicted reality–both their actual capacities and those of their opponents. I heartily recommend the book and this shorter interview where he lays out the book’s central concepts. My major takeaway was that this fantasy has spread beyond the minds of Hitler, Tojo, and Mussolini, and the vision of a vast industrial empire looming over the world is now imprinted on our memory of World War II. I think it is past time that we recognize Nazism as not only immoral but also incompetent. Below, I hope to share some astonishing statistics that show beyond a shadow of a doubt that the modern concept of Nazi military might is a myth.

  1. The Allies rode in cars, the Germans rode horses. In 1939, the only transportation available to 85% of German infantry, other than walking, was horses. By 1945…it was still 85%. In total, the US and UK produced almost 4 million general-use vehicles, compared to 160,000 German vehicles. That is a 25-fold advantage. The Allies also had 1 million infantry-supporting artillery pieces compared to fewer than 100,000 for all of the Axis.
  2. Where were the supplies? The Allies had 46 million tonnes of merchant shipping vessels to the Axis’ 5 million, five times as much aluminum (key for engines and planes), and by 1943 had cut off all German access to rare metals such as tungsten, one of the key metals used in munitions, manufacturing, and electronics. The US supplied Britain and the USSR through the Lend-Lease Act with almost $700 billion (inflation-adjusted 2019 dollars) in supplies throughout the war, which is roughly double the entire German annual GDP in 1939.
  3. The Allies swam to victory on a sea of oil. Though Rommel came within a battle of accessing the British Middle-Eastern oil fields, the Axis still had astonishingly little fuel (which they needed to power their King Tiger, which drank a gallon of gas every 700 yards; the vast Luftwaffe, which put over 130,000 planes into action; and their gigantic battleship Bismarck). The Axis as a whole used 66 million metric tonnes of oil, while the Allies used a billion. A 15X advantage.
  4. The panzers were neither numerous nor technologically superior. The Mark 1 and 2 panzers that conquered France were actually less numerous and less technologically advanced than France’s tanks. While blitzkrieg and elan overwhelmed the French, even the Mark 4–the most commonly used panzer in the late war–underperformed Shermans in infantry support and reliability and was considered inferior to the Soviet T34 by Hitler himself. Even including the outmoded Czech tanks repurposed by the Germans, they fielded only 67,000 tanks on all fronts to face 270,000 Allied tanks (with no help from Italy, with a pitiful 3,300 tanks, while Japan largely ignored mobile land armor and created only 4,500 tanks). The environment of ideological zeal in Germany prevented a military researcher from telling Hitler about the true tank numbers of the Soviets, as Hitler himself recognized later in the war by repeating that if he had known the true number of T34s he faced, he would never have invaded. The US and USSR deployed massive numbers of upgraded Shermans and the workhorse T34s, while Germany sank huge investments into specialized and scary duds like the Royal Tiger–300,000 man-hours each, ten times as much as a Sherman. Only 1,300 Royal Tigers were ever produced, and their 70 tonnes of weight, constant mechanical issues, and cost undercut their supremacy in tank-on-tank duels. The US and Britain used precision bombing to inflict major tank losses on Germany, and while German tanks outfought Soviet tanks roughly 4:1, by 1945 the Soviets still had 25,000 tanks against the Germans’ 6,000.
  5. Collaboration helps both tech and strategy. The Allies worked together–the Sherman’s underpowered 75mm (corrected) could be upgraded with a British gun because of interoperability of parts, and the US and Brits delivered over 12,000 tanks and 18,000 planes to the Soviets under Lend-Lease; the Germans did not even have replaceable parts for their own tanks, and the Germans never helped their Italian allies (who had lost a land invasion even to the collapsing French) develop industrial capabilities. Bletchley Park gave advance warning to US merchant convoys, but the Italians and Japanese found out that Hitler had invaded the USSR only after troops had crossed into Ukraine.
  6. Fascism is not industrially sound. Even though the Nazis put an astonishing 75% of their GDP toward the military by 1944 and despite taking on unsustainable debt to sustain their production, their GDP in 1939 was $384 billion, roughly equal to the Soviets and $100 billion less than the UK and France combined. By the end of the war, this fell to $310 billion, compared to a whopping $1.4 trillion US GDP. However, even these numbers do not fully represent how non-mechanized, non-scalable, and non-industrial Germany was even under military dictatorship. While German science and engineering had been pre-eminent pre-WW I, the central control and obsession with infeasible, custom projects before and during the war meant that the Germans had a lower percentage of their population that could be mobilized for wartime production than their opponents, not to mention that their GDP per capita was half of that of the US, and yet the Axis still took on opponents that had productive populations five times their size.
  7. The V2 was a terrible investment. After losing the Battle of Britain (largely because of inferior training, radar, and plane production), the Nazis tried to use ballistic missiles to bomb the Brits into submission. The less technologically sophisticated V1 delivered a respectable 1,000 kg of explosives, but despite launching over 10,000, by mid-1944 the British countermeasures stopped 80% of these, and many misfired, failed to explode, or had guidance system malfunctions. The V2 was more sophisticated, but was never mass produced: only 3,000 were launched, and more Nazis were killed as part of the development of the rocket than Brits by their launch. The V1 and V2 programs combined cost 50% more than the Manhattan project, and even compared to the US’s most expensive bombing program (developing the B29), the cost-per-explosives-delivered was thirty times higher for the V2.
  8. The Luftwaffe was completely overmatched even by the RAF alone. Before the Battle of Britain, the Luftwaffe (2,500 planes) outnumbered the RAF (about 1,500), and the RAF was flying more of the outdated Hurricane than the newer Spitfire; however, the Brits scaled up training and production and even put novel innovations into their manufacturing within the 3 months of the battle.
  9. The Germans underestimated the scalability of their opponents’ production. By the end of the war, the Brits manufactured 177,000 planes, 44,000 more than Germany. Crucially, though they started the war with far fewer experienced pilots, the Brits used this production advantage to train their pilots far better (in fact, the Brits had over 40,000 training aircraft). The US was similarly underprepared in terms of both aircraft production and training, but within a year had increased production from one B-24 every two weeks in 1940 to one every two hours in 1942. The US manufactured almost 300,000 planes by the end of the war, with far superior bombers (the fighter-resistant B-17 and the giant, sophisticated B-29 Superfortress). However, German air force personnel still needed to be more numerous than those of either the US or Britain because of the lack of mechanization.
  10. The Germans could not replace their pilots. By early 1945, the Germans were losing 30% of their pilots every month, even after giving up on bombing campaigns because of high pilot and plane attrition. They never scaled training and were sending completely green pilots against well-trained Allied opponents who had numerical, technological, and experience superiority by 1943 and air supremacy by 1944.
  11. The Germans did not deploy new air technologies to their advantage. While the jet engine and V2 rockets would revolutionize air power after the war, they did not impact the outcome of the war except to drain German R&D. Germany also failed to develop a functional heavy bomber, did not update their fighters’ technology during the war, never fully or effectively deployed radar, and never matched the Allies’ anti-aircraft defenses.
  12. The Allies could win through strategic bombing, but the reverse was not true. Both sides targeted industry and killed civilians en masse in strategic campaigns, but Germany never had the ability to strategically reduce their enemies’ production. Though Germany dropped 760,000 tonnes of ordnance on the Soviets and systematically destroyed production west of the Urals, the Soviets moved their industry to the East and continued outproducing their opponents with respect to tanks, vehicles, artillery, machine guns, and munitions. The Germans never produced a functional 4-engine bomber, so they could not use strategic bombing to undercut industry beyond this; the Blitz killed 40,000 civilians and destroyed over a million homes, but never developed into a threat against British military production. This also cost the Luftwaffe over 2,200 planes and 3,500 of their best pilots. However, nearly every major German and Japanese city was reduced by an unbelievable 3.5 million tonnes of ordnance dropped by the Allies, which killed over 700,000 German and Japanese civilians and destroyed the majority of both empires’ military production.
  13. The U-boat campaign became a colossal failure by 1943. Though the unrestricted submarine warfare of 1940-41 was sinking enough merchant vessels to truly threaten British supplies, Allied countermeasures–code-cracking, sonar, depth charges, Hedgehogs, Squids, and the use of surface aircraft to screen fleets–systematically destroyed the U-boats, which had losses of over 80% by the end of the war. In fact, the Germans barely managed to exceed the total merchant losses inflicted in World War I, and in May-June 1943 only sank two ships for every U-boat lost, ending the Battle of the Atlantic in just two disastrous months. The US was producing ships and supplies so quickly and in such vast quantities that the U-boats needed to sink 700,000 tonnes of shipping every month just to keep up with this production, which they did in only one month (November 1942); this number sank to less than a tenth of that by early 1943.
  14. The US actually waged a successful submarine campaign. Unlike the Germans, the US completely neutered the Japanese merchant fleet using submarines, which also inflicted over 55% of total Japanese fleet losses during the war, with minimal losses of submarine crews. Using just 235 submarines, the US sank 1,000 ships, compared to roughly 2,000 sunk by Germany (which cost almost 800 U-boat losses).
  15. Naval war had changed, and only the US responded. After the sinking of the HMS Prince of Wales near Singapore, all nations should have recognized that naval air forces were the new way to rule the waves. And yet, the Germans only ever built a single aircraft carrier despite their need to support operations in North Africa, and built the Tirpitz, a gigantic Bismarck-class battleship (that cost as much as 20 submarines), which barely participated in any offensive action before being destroyed by successive air raids. Germany never assembled a fleet capable of actually invading Britain, so even if they had won the Battle of Britain, there were no serious plans to actually conquer the island. Japan recognized the importance of aircraft carriers, and built 18, but the US vastly overmatched them with at least 100 (many of them more efficient light carriers), and Japan failed to predict how naval air supremacy would effectively cut them off from their empire and enable systematic destruction of their homeland without a single US landing on Japanese home soil.
  16. The Nazis forgot blitzkrieg. Germany’s rapid advances in 1939 are largely attributable to a decentralized command structure that enabled leaders on the front to respond flexibly based on mission-driven instructions rather than bureaucracy. However, as early as Dunkirk (when Hitler himself held back his tank forces out of fear), the command structure had already shifted toward top-down bureaucracy that drummed out gifted commanders and made disastrous blunders through plodding sieges of Sevastopol and Stalingrad rather than pursuit of the reeling Soviets. Later, the inflexibility of defenses and “no-retreat” commands that allowed encirclement of key German forces replayed in reverse the inflexibility of the Maginot Line and Stalin’s early mistakes, showing that the fascist system prevented learning from one’s enemy and even robbed the Germans of their own institutional advantages over the course of the war.
  17. Even the elan was illusory. Both Germany and Japan knew they were numerically inferior and depended on military tradition and zeal to overcome this. While German armies generally went 1:1 or better (especially in 1941 against the Soviets, when they killed or captured 4 million badly-led, outdated Soviet infantry), even the US–fighting across an ocean, with green infantry and on the offensive against the dug-in Germans–matched the Germans in commitment to war and inflicted casualties at 1:1. At the darkest hour, alone against the entire continent and while losing their important Pacific bases one by one, the Brits threw themselves into saving themselves and the world from fascists; only secret police and brute force kept the Nazis afloat once the tide had turned. The German high command was neutered by the need for secrecy and the systematic replacement of talented generals with loyal idiots, and the many mutinies, surrenders, and assassination attempts by Nazi leaders show that the illusory unity of fascism was in fact weaker under pressure than the commitment and cooperation of democratic systems.
  18. The Nazis never actually had plans that could win an existential war. Blitzkrieg scored some successes against the underprepared Poles and demoralized French, but these major regional victories were fundamentally of a different character than the conflicts the Nazis proceeded to start. While the Germans did take over a million square miles from the Soviets while destroying a 4-million-strong army, Soviet industry was eventually transferred beyond the Urals, and over 4 years the Soviets replenished their army with a further 30 million men. But most of all, even if Hitler had somehow achieved what Napoleon himself could not, neither he nor Tojo had any ability to attack Detroit, so an implacable, distant foe was able to rain down destruction without ever facing a threat on home soil. The Nazis simply did not have the technology, money, or even the plans to conquer their most industrially powerful opponent, and perhaps the greatest tragedy of the entire war is that 60 million people died to prove something that was obvious from the start.

Overall, the Nazis failed to recognize how air and naval air superiority would impact the war effort, still believed that infantry zeal could overcome technological superiority, could not keep pace with the scale of the Allies’ industry or the speed of their technological advances, spent inefficiently on R&D duds, never solved crucial resource issues, and sacrificed millions of their own subjects in no-retreat disasters. Fooled by their early success, delusions of grandeur, and belief in their own propaganda, Hitler and his collaborators not only instituted a morally repugnant regime but destroyed themselves. Fascism is a scary ideology that promises great power for great personal sacrifice, but while the sacrifice was real, the power was illusory: as a system, it actually underperformed democracy technologically, strategically, industrially, and militarily in nearly every important category. Hopefully, this comprehensive failure is evidence enough for even those who are morally open to fascism to discard it as simply unworkable. And maybe, if we dispel the myth of Nazi industry, we can head off any future experiments in fascism and give due recognition to the awe-inspiring productivity of systems that recognize the value of liberty.

This is in no way exhaustive, and in the interest of space I have not included the analogous Italian and Japanese military delusions and industrial shortcomings in World War II. I hope that this shortlist of facts inspires you to learn more and tell posterity that fascism is not only evil but delusional and incompetent.

All facts taken from The Second World Wars, Wikipedia, or general internet trawling.

Thoughts on Time from a College Library

Note: This was written by my brother Keith, and he did not originally post it online but sent it to our family members. For being a younger brother, he brings a hell of a lot of wisdom to the table, and I think this thought-provoking epistle deserves to be shared more widely. I am publishing it here, with permission:

From Keith:

I learn a great deal from my family.  The facts, figures, and articles that commonly result from discussing and arguing with each other are a reward in and of themselves.  As might be expected, many of these experiences and facts are soon forgotten, making way for new debates.  Once in a while, however, when discussing a topic, we–or I–stumble upon an insight which radically changes, clarifies, or reinforces my understanding.

In recent months, I had two routine, incidental, and unrelated conversations, one with my brother, and the other with my sister.  The conversation with my sister did not start during some contentious economic debate, but when we were eating dinner together.  Offhand, my sister said to me:  “Keith, I have really come to appreciate the ideas from your econ classes you told me about, like opportunity cost, especially the opportunity cost of time spent on one task being a loss of all other possible actions.  When I applied those ideas to my everyday life, I saw a marked improvement, because I had become more efficient, simply from valuing my time appropriately.”  We often complain that few people these days recognize how econ is not a theory of how society works but of how math can represent human reality at any level. This is one case where there are real, personal benefits from understanding the math of limited lifespan.

My second recent conversation of note did not concern this day and age; in fact, it concerned the ideas of a wealthy 2000-year-old Roman by the name of Seneca.  My brother had recently been translating his Epistulae morales ad Lucilium (literally “Moral letters to Lucilius,” courtesy of Wikipedia), and had stumbled upon Roman intellectual gold.  Any attempt of mine to summarize the ideas in the letter would be less than adequate, so I shall copy it here.  I know that it is long, and rather Latin-ish, but I would encourage anyone to take the time to read it, if only because reading it will pay your time back, with interest:

Greetings from Seneca to his friend Lucilius.

Continue to act in the way you described, my dear Lucilius: set yourself free for your own sake; gather and save your time, which till lately has been forced from you, or stolen away, or has merely slipped from your hands. Make yourself believe the truth of my words, that certain moments are torn from us, that some are gently removed, and that others glide beyond our reach. The most disgraceful kind of loss, however, is that due to carelessness. Furthermore, if you will pay close heed to the problem of lost time, you will find that the largest portion of our life passes while we are doing ill, a goodly share while we are doing nothing, and the whole while we are doing that which is not to the purpose. What man can you show me who places any value on his time, who reckons the worth of each day, who understands that he is dying daily? For we are mistaken when we look forward to death; the major portion of death has already passed. Whatever years be behind us are in death’s hands.

Therefore, Lucilius, do as you write me that you are doing: hold every hour in your grasp. Lay hold of today’s task, and you will not need to depend so much upon to-morrow’s. While we are postponing, life speeds by. Nothing, Lucilius, is ours, except time. We were entrusted by nature with the ownership of this single thing, so fleeting and slippery that anyone who will can oust us from possession. What fools these mortals be! They allow the cheapest and most useless things, which can easily be replaced, to be charged in the reckoning, after they have acquired them; but they never regard themselves as in debt when they have received some of that precious commodity: time! And yet time is the one loan that even a grateful recipient cannot repay.

You may desire to know how I, who preach to you so freely, am practising. I confess frankly: my time account balances, as you would expect from one who is free-handed but careful. I cannot boast that I waste nothing, but I can at least tell you what I am wasting, and the cause and manner of the loss; I can give you the reasons why I am a poor man. My situation, however, is the same as that of many who are reduced to slender means through no fault of their own: everyone forgives them, but no one comes to their rescue.

What is the state of things, then? It is this: I do not regard a man as poor, if the little which remains is enough for him. I advise you, however, to keep what is really yours; and you cannot begin too early.  For, as our ancestors believed, it is too late to spare when you reach the dregs of the cask. Of that which remains at the bottom, the amount is slight, and the quality is vile.  

Farewell

After listening to my brother dictate the whole of this letter, I felt genuine chills.  The truth it contains is so blatant, a simple calculation could yield the same result:  life is made up of a limited number of hours, therefore life is time.  Whenever you work, you are giving up your time for money (hence the old adage that time is money).  This means that whenever you waste time, or money, you are wasting your life, and wasted life is death.  This single fact horrifies me every day, because, like most every other human, I waste an obscene amount of time.  Time watching a movie I have already seen, trolling through Facebook without really reading any of the posts, or having the same argument all over again:  rarely, when I am doing these things, do I think about what else I could be doing.

Therein lies the link, which most will have already seen, between my two conversations.  Our time is not free.  Every moment we spend sleeping, eating, studying, etc., has a cost–an opportunity cost–and once it has been spent, if it was not truly the best way to spend it, then some small part of your life has been lost without reward.

I see this nearly everywhere:  students doze off in class or idly check their email or texts; when “studying” in the library, they spend a majority of the time effectively idle.  Writing this, I am in a college library, and with sample size n=11, I may, without prying too much, say that ~7/11ths of my fellow computer users are not doing what they came intending to do.  They are wasting time they will not get back.

And so I say to you, whoever you may be reading this (perhaps idly), much the same as what Seneca might say to you, only I will say it less eloquently, and more directly:  value your time.  Do not waste it.  Work on being efficient not for the sake of productivity, but for the sake of leisure, for we all have our jobs to do, and if we get them done faster then there is more time for enjoyment.  If you spent less time complaining, you might spend that time actively addressing your problems, solving them rationally and thus eliminating your cause for complaint.

Vale.

Hyperinflation and trust in Ancient Rome

Since it hit 1,000,000% in 2018, Venezuelan hyperinflation has actually been not only continuing but accelerating. Recently, Venezuela’s annual inflation hit 10 million percent, as predicted by the IMF; the inflation jumped so quickly that the Venezuelan government actually struggled to print its constantly-inflated money fast enough. This may seem unbelievable, but peak rates of monthly inflation were actually higher than this in Zimbabwe (80 billion percent/month) in 2008, Yugoslavia (313 million percent/month) in 1994, and in Hungary, where inflation reached an astonishing 41.9 quadrillion percent per month in 1946.

The continued struggles to reverse hyperinflation in Venezuela follow a pattern that has played out dozens of times, mostly in the 20th century, including attempts to “reset” the currency with fewer zeroes, a return to barter, and a turn to other countries’ currencies for transactions and storing value. Hyperinflation’s consistent characteristics, including its roots in discretionary/fiat money, large fiscal deficits, and imminent solvency crises, are outlined in an excellent in-depth book covering 30 episodes of hyperinflation by Peter Bernholz. I recommend the book (and the Wikipedia page on hyperinflations) to anyone interested in this recurrent phenomenon.

However, I want to focus on one particular inflationary episode that I think receives too little attention as a case study in how value can be robbed from a currency: the 3rd Century AD Roman debasement and inflation. This involved an iterative experiment by Roman emperors in reducing the valuable metal content in their coins, largely driven by the financial needs of the army and countless usurpers, and has some very interesting lessons for leaders facing uncontrollable inflation.

The Ancient Roman Currency

The Romans encountered a system with many currencies, largely based on Greek precedents in weights and measures, and iteratively increased imperial power over hundreds of years by taking over municipal mints and having them create the gold (aureus) and silver (denarius) coins of the emperor (copper/bronze coins were also circulated but had negligible value and less centralization of minting). Minting was intimately related to army leadership, as mints tended to follow armies to the front and the major method of distributing new currency was through payment of the Roman army. Under Nero, the aureus was 99% gold and the denarius was 97% silver, matching the low debasement of eastern/Greek currencies and holding a commodity value roughly commensurate with its value as a currency.

The Crisis of the Third Century

However, a major plague in 160 AD, followed by auctions of the imperial seat, major military setbacks, usurpations, the loss of gold from mines in Dacia and of silver from conquest, and high bread-dole costs, drove emperors from 160-274 AD to iteratively debase their coinage (by reducing the size and purity of gold coins and by reducing the silver content of coins from 97% to <2%). A major bullion shortage (of both gold and silver) and the demands of the army and imperial maintenance created a situation in which a major government with fiscal deficits, huge costs of appeasing the army and urban populace, and diminishing faith in its leaders’ abilities chose to vastly increase the monetary volume. This not only reflects Bernholz’s theories of the causes of hyperinflations but also parallels the high deficits and diminishing public credit of the Maduro regime.

Inflation and debasement

Figure 1 for Fiat paper

Unlike modern economies, the Romans did not have paper money, and that meant that to “print” money they had to debase their coins. Whether the emperor or his subjects understood that the value coins represented went beyond their commodity content has been hotly debated in academic circles, and the debasement of the 3rd century may be the best “test” of whether they understood value as commodity-based or as a representation of social trust in the issuing body and other users of the currency.

Figure 2 for Fiat paper

The silver content of coins decreased by over 95% (gold content decreased more slowly, at an exchange-adjusted rate shown in Figure 1) from 160-274 AD, yet inflation over this period was only slightly over 100% (see Figure 2, which shows the prices of wine, wheat, and donkeys in Roman Egypt over that period as attested by papyri). If inflation had followed the commodity value of the coins, it would have been roughly 2,000%, as the coins of 274 AD had 1/20th the commodity value of the coins of 160 AD. This is a major gap that can only be filled by some other method of maintaining currency value, namely fiat.
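Stated as back-of-the-envelope arithmetic, using the rounded ratios quoted above (S for silver content per coin and P for the price level are my own shorthand, not notation from the paper):

```latex
\[
\text{If prices tracked silver content: } \frac{P_{274}}{P_{160}} \approx \frac{S_{160}}{S_{274}} \approx 20
  \;\Rightarrow\; \text{implied inflation} \approx 1{,}900\%\text{ (the ``roughly 2,000\%'' above)}
\]
\[
\text{Observed in the papyri: } \frac{P_{274}}{P_{160}} \approx 2
  \;\Rightarrow\; \text{inflation} \approx 100\%
\]
```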

Effectively, the gradual debasement was not simply ignored by coin users (Gresham’s Law continued to influence hoards into the early 3rd Century), but the inflation of prices also did not match the change in commodity value, and in fact lagged behind it for over a century. This shows the influence of market forces (as monetary volume increased, so did prices), but it soundly punctures the idea that coins at the time were simply a convenient way to store silver–the value of the coins lay in trust in the emperor and in the community’s recognition of the value of imperial currency. Especially as non-imperial silver and gold currencies disappeared, the emperor no longer had to maintain an equivalence with eastern currencies, and despite enormous military and prestige-related setbacks (including an emperor being captured by the Persians and a single year in which 6 emperors were recognized, sometimes for less than a month), trade within the empire continued without major price shocks following any specific event. This shows that trust in the solvency and currency management of emperors, and trust in merchants and other market participants to recognize coin values during exchanges, was maintained throughout the Crisis of the Third Century.

Imperial communication through coinage

This idea that fiat and social trust maintained the higher-than-commodity value of coins is bolstered by the fact that coins were a major method of communicating imperial will, trust, and power to subjects. Even as Roman coins began to be rejected in trade with outsiders, legal records from Egypt show that the official values of coins were accepted within the army and bureaucracy (including a 1:25 ratio of aureus-to-denarius value) so long as they depicted an emperor who was not considered a usurper. Amazingly, even after two major portions of the empire split off–the Gallic Empire and the Palmyrene Empire–both continued to represent their affiliation with the Roman emperor through their coins, with leaders minting coins bearing their own face on one side and that of the Roman emperor (their foe, but the trusted face behind Roman currency) on the other, and imitating the symbols and imperial language of Roman coinage. Despite this, and despite the fact that the Roman coins were more debased (lower in commodity value) than the Gallic ones, the Roman coins tended to be accepted in Gaul, but the reverse was not always true.

Interestingly, the aureus, which was used primarily by the upper social strata and to pay soldiers, saw far less debasement than the more “common” silver coins (which were so heavily debased that the denarius was replaced with the antoninianus, a coin with barely more silver but that was supposed to be twice as valuable, to maintain the nominal 1:25 gold-to-silver rate). This may show that the army and upper social strata were either suspicious enough of emperors, or powerful enough, that they had to be appeased with more “commodity backing.” This differential bimetallic debasement is possibly a singular event in history in the magnitude of the difference in nominal vs. commodity value between two interchangeable coins, and it may show that trust in imperial fiat was incomplete and may even have differed across social hierarchies.

Collapse following Reform

In 274 AD, after reconquering both the Gallic and Palmyrene Empires, with an excellent reputation across the empire and in the fourth year of his reign (which was long by 3rd Century standards), the emperor Aurelian recognized that the debasement of his currency was against imperial interests. He decided to double the amount of silver in a new coin to replace the antoninianus, and bumped up the gold content of the aureus. Alongside this reform, and because of the demands of ever-larger bread doles to the urban poor, Aurelian also took far more taxes in kind and far fewer in money. Given that this represented an imperial reform to increase the value of the currency (at least concerning its silver/gold content), shouldn’t it logically have led to deflation, or at least halted the measured inflation of the previous century?

In fact, the opposite occurred. It appears that between 274 AD and 275 AD, under a stable emperor who had brought unity and peace and who had restored some commodity value to the imperial coinage, the currency lost over 90% of its purchasing power (equivalent to roughly 1,000% inflation) in several months. After a century in which inflation was roughly 3% per year despite debasement (a rate that was unprecedentedly high at the time), the currency simply collapsed in value. How could a currency reform that restricted the monetary volume have such a paradoxical effect?

Explanation: Social trust and feedback loops

In a paper I published earlier this summer, I argue that this paradoxical collapse occurred because Aurelian’s reform was a blaring signal from the emperor that he did not trust the fiat value of his own currency. Though he promised to increase the commodity value of the coinage, he was also implicitly stating (and, by refusing to accept taxes in coin, explicitly stating) that the fiat value maintained throughout the 3rd Century by his predecessors would no longer be recognized by the imperial bureaucracy in its own transactions. For army pay and every other official transaction, the social trust in the emperor and in other market participants that had undergirded the value of money was now being ignored by the issuing body itself. Once the issuer (and a major market actor) abandoned fiat and declared that newly minted coins would have better commodity value than previous issues, the market rationally responded by repricing coins toward their commodity value and abandoning the idea of fiat.

Furthermore, not only were taxes taken in kind rather than in coin, but there was a widespread return to barter as people tried to avoid holding coins as a store of value. This pushed up the velocity of money (people spent coins as fast as they could, paying ever higher prices for commodities to be rid of their currency). The demonetization and return to barter also shrank the share of the market transacted in currency, meaning that even more coins (mostly the new aureliani and the old antoniniani) were chasing fewer goods. Under the Quantity Theory of Money, the higher velocity would also contribute to inflation, and an unholy feedback loop took hold: falling value caused distrust, distrust caused demonetization and still higher velocity, and these in turn caused further falls in value and more distrust in coins as stores of value, until all fiat value was driven out of Roman coinage.
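
For readers who want the mechanics spelled out, here is a minimal sketch of that argument using the quantity equation (MV = PQ). The index values are hypothetical and only illustrate the direction of each channel described above, not measured Roman quantities:

    # Quantity equation: M * V = P * Q, so the price level is P = (M * V) / Q.
    def price_level(money_stock, velocity, goods_traded_for_coin):
        return (money_stock * velocity) / goods_traded_for_coin

    before = price_level(1.0, 1.0, 1.0)   # index everything to 1.0 before the reform
    # After the reform, per the argument above: more coins in circulation (M up),
    # coins spent faster as people refuse to hold them (V up),
    # and a smaller share of trade done in coin at all as barter returns (Q down).
    after = price_level(1.2, 2.0, 0.5)
    print(after / before)  # 4.8 -> each channel multiplies the others: the feedback loop in numbers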

Aftermath

This was followed by Aurelian’s assassination, and there were several monetary collapses from 275 AD onward as successive emperors attempted, without success, to recreate the debased fiat system of their predecessors. This continued through the reign of Diocletian, whose major reforms swept away the previous coinage and included the famous (and famously failed) Edict on Maximum Prices. Inflation continued to be a problem through 312 AD, when Constantine re-instituted commodity-based currencies, largely by seizing the assets of rich competitors and liquidating them to fund his army and public donations. The impact of that sort of private seizure is a topic for another time, but the major lesson of the aftermath is that fiat, once abandoned, is difficult to restore, because the very trust on which it was based has been undermined. While later 4th Century emperors managed to debase again without major inflationary consequences, and Byzantine emperors did the same to some extent, Roman currency was never again divorced from its commodity value, and fiat currency would have to wait centuries for the next major experiment.

Lessons for Today?

While this all makes for interesting history, is it relevant to today’s monetary systems? The sophistication of modern markets and communication renders some of the signalling discussed above rather archaic and quaint, but the core principles stand:

  1. Fiat currencies are based on social trust in other market actors, but also on the solvency and rule-based systems of the issuing body.
  2. Expansions in monetary volume can lead to inflation, but slow transitions away from commodity value are possible even for a distressed government.
  3. Undermining a currency can have different impacts across social strata and certainly across national borders.
  4. Central abandonment of past promises by an issuer can cause an inflationary collapse of its currency through demonetization, increased velocity, and distrust, regardless of intention.
  5. Once rapid inflation begins, it triggers feedback loops that accelerate inflation and are hard to stop.

The situation in Venezuela continues to offer lessons to issuing bodies about how to manage hyperinflations, but the major lesson is that such cycles should be avoided at all costs because of the difficulty of reversing them. Modern governments and independent currency issuers (cryptocurrencies, stablecoins, etc.) should study how previous currencies built trust and recognition of value in their early stages, and how these can be destroyed by a single action against the promised and perceived value of a currency.

Inventions that didn’t change the world

Have you ever learned about an amazing invention–whether it was the Baghdad battery or the ancient Roman steam engine or Chinese firecrackers–and wondered why it didn’t do more to change the world? In this podcast, we examine a selection of curiosities and explore hypotheses for why their inventors didn’t use them to full effect.

We move VERY quickly through a range of fascinating examples and hypotheses, and therefore leave a lot up to discussion. We hope to see your thoughts, feedback, and additions in the comments section!

For any invention that you want to learn more about, see the links below:

Knossos’ toilets

In the 2nd millennium BC, a “palace” at Knossos (now thought to be a building that served as an administrative, trade, and gathering hub) had toilets flushed with running water. Much like the Roman Cloaca Maxima, this was likely a HUGE public-health benefit, yet the technology essentially died out. Does this show that military protection and staving off the “Dark Ages” were the only ways to preserve amazing inventions?

Link: http://www.nature.com/news/the-secret-history-of-ancient-toilets-1.19960;

The Nimrud lens

Whether it was a fire-starter, a magnifying glass, or (for some overeager astronomy enthusiasts) part of a telescope, the Neo-Assyrian ground-crystal Nimrud lens is an invention thousands of years out of place. While the Egyptians, Greeks, and Romans all used lenses of different sorts, and glass-blowing was certainly popular by the 1st century BC in Roman Egypt, no glass lenses were made until the Middle Ages, and the potential scientific and engineering uses of lenses (which can hardly be overstated, even in their 16th-to-18th-century applications) had to wait another couple of millennia. Many devices like the Baghdad battery and the Antikythera mechanism are heralded for their possible engineering genius, but this seems like a simple invention with readily available applications that simply disappeared from the historical record.

https://en.wikipedia.org/wiki/Nimrud_lens

Hero of Alexandria’s steam engine

In the 1st century AD, Hero was a master of simple machines (mostly used for stage plays) and also invented a force pump, a wind-powered machine, and even an early vending machine. However, he is likely most famous for his aeolipile, a rotating steam engine that used heated water to spin an axle. The best-attested uses of it were devotion to the divine and party tricks.

https://en.wikipedia.org/wiki/Aeolipile

The ancient mechanical reaper

Ancient Gallo-Romans (or just Gauls) invented a novel way of harvesting grain: rather than using sickles or scythes, they used a mechanical reaper, 1,700 years before Cyrus McCormick more than tripled the productivity of American farmers. This antiquated device literally put the cart before the oxen and required two men to operate: one to drive the beasts and another to knock the ears off the stalks (the reaper was obviously far less sophisticated than McCormick’s). The invention did not survive the Völkerwanderung period.

http://www.gnrtr.com/Generator.html?pi=208&cp=3

http://reapertakethewheel.blogspot.com/2013/03/impacts-of-invention.html

Note: the horse collar (which allowed horses to be used for plowing) was invented between 1600 and 1400 BC in both China and the Levant, but was not applied widely in Europe until around 1000 AD. https://en.wikipedia.org/wiki/Horse_collar.

Inoculation

Madhav, an Indian doctor, compiled hundreds of cures in his Nidana, including an inoculation against smallpox that showed an understanding of disease transmission (he would take year-old smallpox-infected flesh and touch it to a recently made cutaneous wound). However, the next 13 centuries saw no further Indian development of an understanding of viruses or bacteria, nor even widespread copying of the technique. https://books.google.com/books?id=Hkc3QnbagK4C&pg=PA105&lpg=PA105&dq=madhav+indian+smallpox+inoculation&source=bl&ots=4RFPuvbf5Y&sig=iyDaNUs4u5N7xHH6-pvlbAY9fcQ&hl=en&sa=X&ved=0ahUKEwic8e-1-JXVAhUp6IMKHfw3DLsQ6AEIOjAD#v=onepage&q=madhav%20indian%20smallpox%20inoculation&f=false

At least, thank god, their methods of giving nose jobs to those who had had their noses cut off as a punishment survived: https://en.wikipedia.org/wiki/History_of_rhinoplasty

The Chinese:

List of all Chinese inventions:

https://en.wikipedia.org/wiki/List_of_Chinese_inventions#Four_Great_Inventions

Gunpowder

Gunpowder was discovered by Chinese alchemists attempting to find the elixir of life (irony, no?).

https://www.thoughtco.com/invention-of-gunpowder-195160

https://en.wikipedia.org/wiki/Four_Great_Inventions

(Maybe a good parallel would be Greek fire, which was used effectively in naval warfare by the Byzantines, but which was never improved upon and the recipe of which remains a mystery: https://en.wikipedia.org/wiki/Greek_fire)

Printing

The Chinese invented woodblock printing possibly as early as the 6th century. However, unlike the explosion of literacy seen in much of Europe (particularly Protestant Europe; see our last podcast) after the printing press, the Chinese masses never learned to read. In fact, in 1950 fewer than 20% of Chinese citizens were literate, whereas some European societies (Sweden’s male population, for example) reached literacy rates as high as 90% within a few centuries of the introduction of the printing press. Why? There may be several reasons (cultural, religious, political), but in our opinion the biggest obstacle was the characters themselves: 100,000 blocks were needed to create a single set.

http://www.nytimes.com/2001/02/12/news/chinas-long-but-uneven-march-to-literacy.html

https://en.wikipedia.org/wiki/History_of_printing_in_East_Asia

They also invented pulped paper by the 2nd century BC: https://en.wikipedia.org/wiki/List_of_Chinese_inventions.

The compass

Invented by 200 BC for divination and used for navigation by the Song dynasty. Despite this, and despite the availability of easily colonizable islands within easy sailing distance, the Chinese did not colonize Indonesia, Polynesia, or Oceania, while the Europeans did so within a century of developing the technology and first sailing there.

https://en.wikipedia.org/wiki/History_of_the_compass.

The rudder

While the Chinese did not invent the rudder itself, they invented the “medial, axial, and vertical” sternpost rudder that would later become standard in Europe, almost 1,000 years before it was used there (1st century AD versus the 11th century).

Natural gas

The Chinese discovered “fire wells” (natural gas near the surface) and erected shrines to worship there.

https://link.springer.com/referenceworkentry/10.1007%2F978-1-4020-4425-0_9568

They even understood its potential as a fuel, but never developed it beyond primitive burning and bamboo piping, despite having advanced techniques for extracting it by the 1st century BC.

Chinese miscellanea:

Hydraulic powered fan: https://en.wikipedia.org/wiki/Fan_(machine)#History

Cupola furnace for smelting and molding iron: https://en.wikipedia.org/wiki/Cupola_furnace.

Coke as a fuel source: https://en.wikipedia.org/wiki/Coke_(fuel).

Belt-drive spinning wheel: https://en.wikipedia.org/wiki/Spinning_wheel.

The Precolumbian wheel

The early Maya and their predecessors had toys that used primitive wheels, but they did not use wheels for any labor-saving purpose (even their gods were depicted carrying loads on their backs). This may have been because scaling up ran into mechanical difficulties, but the potential utility of wheels, which needed only a bit of investment, sat unrealized for centuries.

https://tcmam.wordpress.com/2010/11/11/did-pre-columbian-mesoamericans-use-wheels/

The Tucker:

http://www.smithsonianmag.com/history/the-tucker-was-the-1940s-car-of-the-future-135008742/

The following book contained some of our hypotheses:

https://books.google.com/books?id=ynejM1-TATMC&pg=PA399&lpg=PA399&dq=roman+and+greek+labor-saving+devices&source=bl&ots=BI6GVGTrxC&sig=8ZJqirOVUyjH7TNq0fcW6UUPn1k&hl=en&sa=X&ved=0ahUKEwj55O7395XVAhVqwYMKHSb2Dy4Q6AEIKTAB#v=onepage&q=roman%20and%20greek%20labor-saving%20devices&f=false

The rest of our hypotheses were amalgamated from our disparate classes in economics and history; none of them is original to us or uncommon in academic circles. Thanks for listening!

The Deleted Clause of the Declaration of Independence

As a tribute to the great events that occurred 241 years ago, I wanted to recognize the importance of the unity of purpose behind supporting liberty in all of its forms. While an unequivocal statement of natural rights and the virtues of liberty, the Declaration of Independence also came close to bringing another vital aspect of liberty to the forefront of public attention. As has been addressed in multiple fascinating podcasts (Joe Janes, Robert Olwell), a censure of slavery and of George III’s connection to the slave trade appeared in the first draft of the Declaration.

Thomas Jefferson, often criticized for the inherent contradiction between his high morals and his active participation in slavery, was a major contributor to the popularization of classical liberal principles. Many have pointed to the hypocrisy of a man who owned over 180 slaves, fathered children with them, and did not free them in his will (because of his debts). Even given his personal slaveholding, Jefferson made his moral stance on slavery quite clear through his famous efforts toward ending the transatlantic slave trade, which exemplify early steps in abolishing the repugnant institution of chattel slavery in America and in applying classically liberal principles to all humans. However, abolition might have come far sooner, avoiding decades of appalling misery and its long-reaching effects, had his (hypocritical but principled) position been adopted from the day of the USA’s first taste of political freedom.

This is the text of the deleted Declaration of Independence clause:

“He has waged cruel war against human nature itself, violating its most sacred rights of life and liberty in the persons of a distant people who never offended him, captivating and carrying them into slavery in another hemisphere or to incur miserable death in their transportation thither.  This piratical warfare, the opprobrium of infidel powers, is the warfare of the Christian King of Great Britain.  Determined to keep open a market where Men should be bought and sold, he has prostituted his negative for suppressing every legislative attempt to prohibit or restrain this execrable commerce.  And that this assemblage of horrors might want no fact of distinguished die, he is now exciting those very people to rise in arms among us, and to purchase that liberty of which he has deprived them, by murdering the people on whom he has obtruded them: thus paying off former crimes committed against the Liberties of one people, with crimes which he urges them to commit against the lives of another..”

The Second Continental Congress, swayed by the hardline votes of South Carolina and the desire to avoid alienating potential sympathizers in England, slaveholding patriots, and the Northern harbor cities complicit in the slave trade, dropped this vital statement of principle.

The removal of the anti-slavery clause of the Declaration was not the only time Jefferson’s efforts might have led to an early end of the “peculiar institution.” Economist and cultural historian Thomas Sowell notes that Jefferson’s 1784 anti-slavery bill, which had the votes to pass but failed because a single ill legislator was absent from the floor, would have barred the expansion of slavery to any newly admitted states years before the Constitution’s infamous three-fifths compromise. One wonders whether America would have seen a secessionist movement or a Civil War, and how the economies of states from Alabama and Florida to Texas would have developed without slave labor, which in some states and counties constituted the majority of the population.

These ideas form a core moral principle for most Americans today, but they are not hypothetical or irrelevant to modern debates about liberty. Though America and the broader Western world have brought the slavery debate to an end, the wider world has not; though every country has officially made enslavement a crime (true only since 2007), many people at the highest levels of government aid and abet the practice. Some 30 million individuals around the world suffer under the same types of chattel slavery seen millennia ago, including in nominal US allies in the Middle East. The debate between the pursuit of non-intervention as a form of freedom and the defense of the liberty of others as a form of freedom has been consistently important since the 1800s (or arguably earlier), and I think it is vital that these discussions continue in the public forum. I hope that this 4th of July reminds us that liberty is not just a distant concept, but a set of values that requires constant support, intellectual nurturing, and pursuit.

The Old Deluder Satan Act: Literacy, Religion, and Prosperity

So, my brother (Keith Kallmes, a graduate of the University of Minnesota in economics and history) and I have decided to start podcasting some of our ideas. The topics we hope to discuss range from ancient coinage to modern medical ethics, all against a general background of economic history. I have posted here our first episode, on the Old Deluder Satan Act. This early American legislation, passed by the Massachusetts Bay colonists, displays some of the key values that we posit as causes of New England’s principal role in the Industrial Revolution. The episode:

We hope you enjoy this 20-minute discussion of the history of literacy, religion, and prosperity, and we are also happy to get feedback, episode suggestions, and further discussion in the comments below. Lastly, we have included links to some of the sources cited in the podcast.


Sources:

The Legacy of Literacy: Continuity and Contradictions in Western Culture, by Harvey Graff

Roman literacy evidence based on inscriptions discussed by Dennis Kehoe and Benjamin Kelly

Mark Koyama’s argument

European literacy rates

The Agricultural Revolution and the Industrial Revolution: England, 1500-1912, by Gregory Clark

Abstract of Becker and Woessman’s “Was Weber Wrong?”

New England literacy rates

(Also worth a quick look: the history of English Protestantism, the Puritans, the Green Revolution, and Weber’s influence, as well as an alternative argument for the cause of increased literacy)

Paradoxical Geniuses: “Let us burn the ships”

In 1519, Hernán Cortés landed 500 men in 11 ships on the coast of the Yucatan, knowing that he was openly disobeying the governor of Cuba and that he faced unknown numbers of potential enemies in an unknown land. Regardless of the moral implications, what happened next was strategically extraordinary: he and his men formed a local alliance and, despite having to beat a desperate retreat on La Noche Triste, conquered the second-largest empire in the New World. As the expeditionary force landed, Cortés made a tactically irrational decision: he scuttled all but one of his ships. In doing so, he hamstrung his own maneuverability, scouting, communication, and supply lines, but he gained one incredible advantage: the complete commitment of his men to the mission, for as Cortés himself said, “If we are going home, we are going in our foes’ ships.” This choice highlights the difference between logic and economists’ concept of “rationality”: the seemingly illogical destruction of one’s own powerful and expensive tools creates a credible commitment that can overcome a serious problem in warfare, that of desertion or cowardice. While Cortés certainly increased the risk to his own life and the lives of his men, the powerful psychology of being trapped by necessity brought out the very best of the fighting spirit in his men and led to his dramatic victory.

This episode is far from unique in the history of warfare; the tactic was not only employed by individual leaders as a method of ensuring commitment, but actually underlay the seemingly crazy (or at least overly risky) cultural practices of several ancient groups. The pervasiveness of these psychological strategies suggests that, whether each case arose from a genius decision or an accident of history, they conferred a substantial advantage on their practitioners. (If you are interested in how rational choices are revealed in the history of warfare, please also feel free to read an earlier blog post about hostage exchanges and ransoming practices!) I have collected some of the most interesting examples I know of, but the following is certainly not an exhaustive list, and I encourage readers to mention other episodes in the comments:

  • Julian the Apostate
    • Julian the Apostate is most famous for his attempt to reverse Constantine the Great’s Christianization of the Roman Empire, but he was also an ambitious general whose audacity gained him an incredible victory over Germanic invaders against steep odds. He wanted to reverse the stagnation of Roman interests on the Eastern front, where the Sasanian empire had been challenging the Roman army since the mid-3rd century. Having gathered an overwhelming force, he marched to the Euphrates river and took ships from there toward the Sasanian capital, while the Sasanians used scorched-earth tactics to slow his advance. When Julian found the capital (Ctesiphon) undefended, he worried that his men would want to loot it and return homeward, continuing the status quo of raiding and retreating. To prevent this, in a move much like that of Cortés, he set fire to his ships and forced his men to press on. In his case, this did not end in stunning victory: Julian overextended his front, was killed, and the campaign was lost. His death shows the very real risks involved in this bold strategy.
  • Julius Caesar
    • Julian may have taken his cue from a vaunted figure of Roman history. Dramatized perfectly by HBO, the great Roman general and statesman Julius Caesar made a huge gamble by taking on the might of the Roman Senate. Despite being heavily outnumbered (over 2 to 1 in infantry and as much as 5 to 1 in cavalry), Caesar committed to a decisive battle against his rival Pompey in Greece. While Pompey’s troops had the option of retreating, Caesar relied on the fact that his legionaries had their backs to the Mediterranean, effectively trapping them and giving them no opportunity to rout. While Caesar also out-thought Pompey tactically (he used a cunning deployment of reserves to stymie a cavalry charge and break Pompey’s left flank), the key to his victory was that Pompey’s numerically superior force ran first; Pompey met his grisly end shortly thereafter in Egypt, and Caesar went on to gain power over all of Rome.
  • Teutones
    • The impact of the Teutones on Roman cultural memory proved so enduring that “Teutonic” is still used today to refer to Germanic peoples, despite the fact that the Teutones themselves were of unknown linguistic origin (they could very well have been Celtic). The Teutones and their allies, the Cimbri, smashed better-trained and better-equipped Roman armies several times in a row; later Roman authors said they were possessed by the Furor Teutonicus, as they seemed to show an irrational lack of fear and never fled before the enemy. Like many Celtic and Germanic peoples of Northern Europe, the Teutones had a peculiar cultural practice that gave their men an incentive in battle: all of the tribe’s women, children, and supplies were drawn up on wagons behind the men before battles, and the women would take up axes to kill any man who attempted to flee. In doing so, they solved the collective action problem that plagued ancient armies, in which a few men running could quickly turn into a rout. If you ran, not only would you die, but your wife and children would as well, and this psychological edge allowed a roving tribe to keep the powerful Roman Republic in jeopardy for a decade.
  • The Persian emperors
    • The earliest recorded example of paradoxical risk as a battle custom is the Persian imperial practice of bringing the women, children, and treasure of the emperor and noble families to the war-camp. This seems a needless and reckless risk, as it would turn a defeat into a disaster through the loss of family and fortune. However, the case is comparable to that of the Teutones: it demonstrated the credible commitment of the emperor and nobles to victory, and this raising of the stakes incentivized bravery. While the Persians did conquer much of the known world under the nearly mythical leadership of Cyrus the Great, the strategy backfired for the last Achaemenid emperor: when Darius III confronted Alexander the Great at Issus, Alexander’s crack hypaspist troops routed Darius’ flank, and Darius himself fled! The imperial family and a great hoard of silver fell into Alexander’s hands, and he would go on to conquer the entirety of the Persian empire.

These examples show the diversity of cultural and personal illustrations of rational choice theory and psychological warfare that typified some of the most successful military leaders and societies. As the Roman military writer Vegetius stated, “an adversary is more hurt by desertion than slaughter.” Creating unity of purpose is by no means an easy task, and balancing the threat of death in frontline combat against the threat of death during a rout was a problem that plagued leaders from the earliest recorded histories onward. (In ancient Greek battles, there were few casualties along the line of battle; the majority of casualties occurred during flight from the battlefield. This made each soldier’s game-theoretic choice an interesting balance: he might die on the line but would live if ONLY he ran away, yet faced a much higher risk of death if a critical mass of troops ran away. Perhaps this will be fodder for a future post? A toy sketch of the payoffs follows below.) This was a salient and even vital issue for leaders to overcome, and despite the high risks that led to the fall of both Julian and Darius, forcing credible commitment to battle is a fascinating strategy with good historical support for its success. The modern implications of credible commitment problems range from wedding rings to climate accords, but very few modern practices utilize the “illogical rationality” of intentionally destroying secondary options. I continue to wonder what genius, or what society, will come up with a novel application of this concept, and I look forward to seeing the results.
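
To make that parenthetical concrete, here is a minimal sketch of the soldier’s choice in Python. The probabilities and the rout threshold are hypothetical, chosen only to illustrate the structure of the collective action problem and how a commitment device changes it:

    # Hypothetical death risks for one soldier, given his own choice and how many others run.
    def death_risk(i_run, fraction_of_others_running, rout_threshold=0.3):
        line_holds = fraction_of_others_running < rout_threshold
        if line_holds:
            return 0.05 if i_run else 0.10  # a lone deserter slips away; holding the line is riskier for him
        return 0.60 if i_run else 0.80      # once the line breaks, nearly everyone is cut down in the rout

    # Running looks individually safer whether or not the others run...
    print(death_risk(True, 0.0), death_risk(False, 0.0))  # 0.05 vs 0.10
    print(death_risk(True, 0.5), death_risk(False, 0.5))  # 0.60 vs 0.80
    # ...so if every soldier reasons this way, the whole army ends up in the high-risk rout.
    # A commitment device (scuttled ships, axe-wielding wives) adds a near-certain cost to running,
    # making it rational for each soldier to stand even when others waver.
    def death_risk_with_commitment(i_run, fraction_of_others_running):
        return 0.95 if i_run else death_risk(False, fraction_of_others_running)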

P.S.–thanks to Keith Kallmes for the idea for this article and for helping to write it. Truly, it is his economic background that leads to many of these historical questions about rational choice and human ingenuity in the face of adversity.