I did not meet many of the postwar great thinkers of classical liberalism. There are two exceptions. In 2005 I had a chat with James Buchanan to ask him if I could translate the talk he gave to an audience of graduate students at the IHS summer seminar at the University of Virginia at Charlottesville. He agreed and I translated and published his ideas on ‘the soul of classical liberalism’ in a Dutch liberal periodical.
The other exception is Julian Simon. Though perhaps not in the same league as Buchanan, he was certainly a maverick thinker and a classical liberal great. A navy officer, businessman, and advertising expert who turned to academia, he is known, among many other things, for his arguments on population growth and immigration, and of course for his book The Ultimate Resource, in which he argues that raw materials keep getting cheaper, while human beings are the ultimate resource. He also won a famous wager against his critic Paul Ehrlich, betting that the inflation-adjusted prices of the raw materials Ehrlich could choose (in fact copper, chromium, nickel, tin, and tungsten) would decrease over the decade they agreed upon. But that is just the tip of the iceberg of this most interesting man. You should really read his autobiography, A Life Against the Grain, whenever you have the chance.
In 1995 a friend of mine and I founded the Dutch Benedictus de Spinoza Foundation, meant to bring together young people educated in (classical) liberalism. Simon agreed to be the speaker at our first public Spinoza lecture in 1996. If memory serves, he was on his way to or from a Mont Pelerin Society meeting in Vienna, and was willing to make a small detour. We spent two full days with him: touring The Hague, arranging an interview in a national paper, holding a formal dinner with Simon as guest of honor and speaker, and so forth. He was the most congenial guest one could wish for. He clearly did not want to be among the hot shots only. In fact he insisted that we visit ‘the worst neighborhood of the city’. So we went to one of the poorest parts of town, which he found delightful, not because of the (relative) poverty, but because of the multicultural experience and the multicultural food at the market. Another remarkable detail: in the half hour before we opened the lecture hall, he wished to take a nap on the floor right there!
In his autobiography he is open about the many rejected papers throughout his career, and he describes how difficult it is to convince academic colleagues of a point that goes against conventional wisdom. No matter how strong the counter-evidence, people will choose to ignore the new facts or insights and keep the author out of the inner circle for as long as possible. I must say it sounds familiar to me, as an author who has attempted to change the views of (classical) liberals and IR theorists on international relations and (classical) liberalism. Even the obvious fact that trade cannot possibly foster peace seems impossible to establish. Still, reading Simon one also learns never to give up; the truth shall be told, although there is no guarantee of success!
My music playlist has nearly stagnated for years and, depending on your age, maybe yours has too. Evidence suggests that, (partly) because of mind shenanigans, our musical taste does not quite expand past the age of 30. I think something similar goes for gaming. I am still fond of those (pc) games from my late teen – early adult years and stay happily ignorant of the newer ones. Those single-player games immersed you through substance rather than eye candy. Some in-game scenes remain pure gold after all these years. Like that dialogue, when one of my younger siblings was delving into a fictional setting resembling the Caribbean during the Golden Age of Piracy. (Escape from Monkey Island. I preferred RPGs. Nowadays, only books – like this one.)
At some point, the protagonist, a witty swashbuckler, visited the Second Bank of an island called Lucre. “What happened to the First Bank of Lucre?”, he inquired. “Nothing”, said the bank teller, “It was our public relations department’s idea. They felt that being called the ‘First’ bank didn’t project an image of experience”. At the time I thought of it as just a funny anachronism. Later, I recognized a jab at brand marketing practices and corporate-speak more generally. But the scheme of a “fledgling” first banking institution versus a “trustworthy” second one also has an almost exact real-world analogue.
Some kind of a theory
There is a rich discussion on the origins of money, its form and the proper control of it, as well as a few historical cases of either state or private currencies thriving – or failing. Hard. In the thick of it, we talk about two positions. On the one hand, the “economics textbook” approach proposes that money emerged in the realm of private economic relations, to minimize transaction costs and facilitate trade. (Francisco d’Anconia would approve.) Here be a decentralized, bottom-up acceptance of the medium of exchange. This view sits well with the classical liberal dichotomy between the civil and state spheres, which can be expanded to envision a very limited role for the state in monetary affairs. On the other hand, the “anthropological – historical” position articulates that trust in money comes mostly from the sovereign’s guarantee, marked by the sign of God and/or Emperor. This top-down explanation is more receptive to state control of money, rhyming with monetary power as a prerogative of the ruler and an expression of sovereignty.
Beginning with some important judicial decisions in the second half of the 19th century, the official assertion of state power over money came in the 20th century. Per the Permanent Court of International Justice, in 1929, “it is indeed a generally accepted principle that a state is entitled to regulate its own currency”. You know, the norm of modern national monetary monopolies. There was a time, though, when things were more colorful and less clear-cut. From the 13th century onward to the Golden Age of Piracy and beyond, it was only normal for different monies of various issuers to flow from one territory to the other. Reputable currencies required not only a resilient authority backing them, but also a nod from society and custom. This kind-of-synthesis of the two positions outlined above rang especially true in the case of the young Greek state in the 1830s – 1840s. (For this section I draw on the comprehensive “History of the Greek State 1830 – 1920”, by George B. Dertilis [the 2017 Crete University Press edition, in Greek. An extended version, under a different title, is forthcoming in English in 2021/22]. Btw, on Mar. 25 we celebrate 200 years since the Declaration of the Greek Revolution against Ottoman rule, an [underrated?] event with connotations of nationalism and liberal constitutionalism.)
Over there at the (Balkan) shore
As the new state needed to break free from all the institutions of the Ottoman Empire, its hastily assembled first Bank of Issue sought to introduce a new national currency (the Phoenix). The nascent state – impoverished, ravaged and cut off from international debt markets – reflected badly upon the Bank. The government tried to force the public’s trust via legislation. By decree, payments from/to the state coffers would include a mandatory percentage of the new banknotes (later the percentage was set at 100%). Revenue from state natural resources – present and future – would back the currency. The administrative magic did not do the trick. The public actively avoided the Phoenix banknotes, in favor of traditional silver/gold coins. Bank and currency failed to crowd out the foreign monies and ultimately went out of business. A few years later, the overall environment had improved somewhat and a more vigorous state established the second Bank of Issue. Another new national currency, the Drachma, was already circulating in – copper – coins along with the foreign ones.
The second Bank received an exclusive charter of issue and undertook the task of rolling out the Drachma banknotes (silver/gold coins would follow) and, in doing so, integrating the fragmented Greek countryside into a more cohesive national economy. Up until then, the local markets had operated as loosely hierarchical oligopolies. At the bottom of the chain, each small village or group of villages was dependent on a merchant-money lender who held monopsonistic power over the (tiny scale) agricultural production and, at the same time, monopolistic power in cash and credit. These rural businessmen depended on the respective merchant-money lender of the nearest town for brokerage. Next in line was the merchant-money lender of the nearest city, usually with access to international trade routes. You get the picture. These informal networks contained competition among neighboring lesser merchant-money lenders and promoted trade through a complex web of transactions (involving forward contracts, insurance premiums and bills of exchange, among others). (The official site for the anniversary features a fancy piece about the first attempts to establish a national bank as well. It includes a few names and dates, while noting the “exploitative” networks and the “primitive” credit system. I find its lack of nuance somewhat misleading.)
Becoming one with the forces
The Bank opted to tap and complement the existing disjointed market forces, in order to gently nudge them. It channeled its primary tool, lending in banknotes, to the local money markets – first to a limited number of large merchant-money lenders, later to the middle ones. (According to the Bank’s ledgers, these clients usually chose respectable job titles, such as “Banker” or “Broker”. Others, a bit blunter, went by the Greek equivalent of “Usurer”.) This lending – apart from being short-term, relatively safe and profitable – enabled the Bank to gradually assume a leading position, without the need to dive deep into the specifics of each end-user of the market. The soft, indirect entry into the century-old customary networks lowered the cost of money and contributed to the integration of the national economy. The transition was not always smooth, with the occasional episode (people switching from banknotes to metallic coins, the Bank returning the favor by aggressively cutting back lending, the government setting compulsory percentages etc. – you know the drill), but still, the stakeholders’ incentives aligned. Society at large recognized Bank and currency, with the system reaching a workable equilibrium.
The merchant-money lender of old was finally phased out by regular bank lending over the following decades. Further underpinned by a cozy relationship with the state (always a valuable client, usually a partner, sometimes even an opponent), the Bank acted as a quasi-central banking institution until 1928, when the charter was transferred to the newly founded Bank of Greece. The Drachma continued as official legal tender (albeit with numerous conversions) until the end of 2001.
[Note: this is a piece by Michalis Trepas, who you might recognize from the now-defunct NOL experiment “Be Our Guest.” Michalis is a newly-minted Notewriter, and this is the first of many more such pieces to come. -BC]
The Treasury and the Federal Reserve System have reached full accord with respect to debt-management and monetary policies to be pursued in furthering their common purpose to assure the successful financing of the Government’s requirements and, at the same time, to minimize monetization of the public debt.
– Joint announcement by the Secretary of the Treasury and the Chairman of the Board of Governors, and of the Federal Open Market Committee, of the Federal Reserve System, issued for release on Mar. 4, 1951
The Allied High Commission appreciates that these responsibilities [for the central bank] could not, without serious inconvenience, be given up so long as no legislation has been enacted establishing a competent Federal authority to assume them.
– Letter from the Allied High Commission to Chancellor Adenauer, Dated Mar. 6, 1951
A Financial Fable by Carl Barks, a short story starring Donald Duck and his duck relatives, was published in Mar. 1951. It featured concepts like supply/demand, money shocks, inflation and the ethics of productive labor, from a rather neoclassical perspective. Read today, it seems out of sync with the postwar paradigm of monetary policy subordinated to the activist state and, more generally, with what came to be known as the Golden Age. As you have probably already noticed, this March also marks the 70th anniversary of two more instances against the currents of the time. It was back then that the two main traditions of central bank independence – based on political consensus and judicial (“Chevron”) deference in the case of the US, based on written law and judicial review in the case of the Eurozone (read: Germany) – were (re)rooted. In the following lines, I offer an outline focused on institutional interplay, instead of the usual dramatis personae.
The first instance is the well-known Treasury – FED Accord. Its importance warrants a mention in nearly every institutional discussion of modern central bank independence. The FED implemented an interest-rate peg – kind of capping the yield curve – in 1942, to accommodate public debt management during World War II. The details were complicated, but we can still think of it as a convenient arrangement for the Executive. The policy continued into the early 50s, with the inflationary backdrop of the Korean War leading to tensions between a demanding Executive and an increasingly resistant central bank. Shortly after the dispute became more pronounced, reaching the media, the two institutions achieved a compromise. The austere paragraph cited above ended the interest-rate peg and prompted a shift of thinking within – and without – the central bank, on monetary policy and its independence from fiscal needs.
The second one is definitely more obscure, and as such deserves a little more detail. The Bank deutscher Länder (BdL) was established in 1948, in the Allied territory of occupied Germany. It integrated central banking institutions, old and new, in a decentralized fashion à la the US FED. Its creation underpinned the – generally successful – double reform of that year (a currency conversion with a simultaneous abolition of price controls), which reignited free market forces (and also initiated the de facto separation of the country). The Allied Banking Commission (ABC) supervised the BdL and retained the sole right to issue direct instructions, a choice more practical than doctrinal or ideological. As the ABC gradually allowed greater leeway to the central bank, while fending off even indirect German political interventions, the resulting institutional setting provided for a relatively independent BdL.
In late 1950, the Occupational Authority wanted out, and an orderly transfer of powers required legislation from the Federal Government. Things deadlocked around the draft of the central bank law, with the degrees of centralization and independence being the thorniest issues. The letter cited above, arriving after a few months of inertia, was the catalyst for action. The renewed negotiations concluded with the “Interim Law” of 10 Aug. 1951. The reformed BdL was made independent of instructions from the Federal Government, while at the same time assuming an obligation to support the government’s general economic policy – without prejudice to its monetary duties.
This institutional arrangement was akin to what the BdL itself had pushed for: a de jure formalization of its already de facto status. The central bank, after all, enjoyed a head start in reputation and experience versus the Federal Government. But it can also be traced to the position articulated by the free market-oriented majority in the German quasi-governmental bodies back in 1948, a unique blend of explicit independence from – and cooperation with – the government. The 1951 law effectively set the blueprint for the final central bank law, the Bundesbank Act of 1957. The underlying liberal creed echoed in the written report of the Chairman of the parliament’s Committee for Money and Credit:
The security of the currency… is the highest precondition for the retention of a market economy, and hence in the final analysis that of a free constitution for society and the state… [T]he note-issuing bank must be independent of these [political bodies] and subject only to the law.
The Financial Fable was the only story featuring Disney’s characters that made it into an important history of comic books published in 1971. Around that time, the postwar consensus on macroeconomic stabilization policy was reaching its peak. A rethinking of the tools and goals of monetary policy was already underway, taking it away from the still garbled understanding of the period. It took another decade or so for both sides of the Atlantic to recalibrate their respective monetary policies. The accompanying modern central bank independence, with its foundations set in 1951, became a more salient – and popular – aspect a bit later.
From an email I sent my principles of economics students:
Since we can’t have classes this week and the midterm is postponed a week, I felt chatty and wanted to share at least a few thoughts about why so many people are without power.
tl;dr: see the graph below. Prices are fixed. Supply shifts left, demand shifts right = instant shortages. This is not an easy problem to solve.
Issue #1 is that bad weather events increase demand – demand shifts to the right. Issue #2 is that energy prices are really sticky. We’ll be getting to this in March, but in energy markets we sign contracts with our energy providers that lock in the price of electricity for 1-2 years at a time. When demand increases, the price doesn’t! Further, some contracts allow us to smooth the bill out over 12 months, so if I need an extra $12 of electricity today, I don’t actually pay for it today: I’ll pay for it through a $1 higher electricity bill each month over a 12-month period. That does two things. a) It means that energy demand curves are really vertical – a change in price doesn’t change my electricity consumption much; and b) when demand increases, prices don’t. That ruins the market price signal that tells you and me to conserve electricity. Issue #3, of course, is that it is really amazingly expensive to increase electric capacity. That means that energy supply curves are also really vertical. Even if energy firms COULD raise prices, they can’t increase the quantity supplied in the short run. In the longer run, we have time to build more plants and add capacity, but in the short run we’re stuck with what we have.
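A toy linear model can make these mechanics concrete. Every number below is invented purely for illustration: both curves are steep (close to vertical), and the price is frozen by contract.

```python
# Hypothetical linear supply/demand with a contractually fixed price.
# Small slopes = nearly vertical curves: quantities barely react to price.

def quantity_demanded(price, intercept=100.0, slope=2.0):
    """Demand: Qd = intercept - slope * price (inelastic)."""
    return intercept - slope * price

def quantity_supplied(price, intercept=45.0, slope=1.0):
    """Supply: Qs = intercept + slope * price (capacity nearly fixed)."""
    return intercept + slope * price

P_FIXED = 20.0  # the contract price, locked in for 1-2 years

# Normal times: the fixed price sits slightly above equilibrium,
# leaving a small cushion of excess supply.
normal_gap = quantity_demanded(P_FIXED) - quantity_supplied(P_FIXED)

# Storm: demand shifts right (intercept up), supply shifts left
# (intercept down), but the price cannot move to ration demand.
storm_gap = (quantity_demanded(P_FIXED, intercept=130.0)
             - quantity_supplied(P_FIXED, intercept=20.0))

print(normal_gap)  # -5.0 -> small surplus, no shortage
print(storm_gap)   # 50.0 -> large shortage at the frozen price
```

Because the price cannot rise, nothing tells consumers to cut back; the 50-unit gap shows up as outages rather than as higher bills.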
The graph above shows the marginal cost of different types of energy. Some types are easy to turn on and off, but expensive (e.g. oil). Some are really, really hard to turn on and off at will (e.g. nuclear) but very cheap. And producing more energy than you need is bad. So you build enough cheap stuff that you know for 100% positive will always be needed, and then you build expensive stuff to handle changes in demand. That’s the short version, anyway. It means that producing a little extra electricity is really expensive and there is a hard limit to how much extra we can produce – eventually supply curves are completely vertical!
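This "cheap stuff always on, expensive stuff for the swings" logic is a merit-order dispatch, and it can be sketched in a few lines. The plant names, capacities, and costs below are all made-up numbers for illustration.

```python
# Merit-order dispatch sketch: serve demand from the cheapest plant up.
# All capacities (MW) and marginal costs ($/MWh) are invented.

PLANTS = [
    ("oil",     20, 90),  # expensive but easy to switch on/off
    ("nuclear", 40, 10),  # cheap but always-on baseload
    ("gas",     30, 35),
]

def dispatch(demand_mw):
    """Return (marginal cost of the last MWh served, unmet demand in MW)."""
    remaining, marginal_cost = demand_mw, 0
    for _, capacity, cost in sorted(PLANTS, key=lambda p: p[2]):
        if remaining <= 0:
            break
        remaining -= min(remaining, capacity)
        marginal_cost = cost
    return marginal_cost, max(remaining, 0)

print(dispatch(50))   # (35, 0): normal load, mid-cost gas sets the margin
print(dispatch(85))   # (90, 0): cold snap, expensive oil sets the margin
print(dispatch(100))  # (90, 10): beyond total capacity -> hard shortage
```

The jump in marginal cost between normal and peak load is why a little extra electricity is really expensive, and the unmet 10 MW in the last case is the completely vertical part of the supply curve.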
My friends on the right tend to send blame towards green energy. And they have a point! Renewables are temperamental – with too many clouds solar doesn’t do anything, and frozen blades can’t turn wind turbines. The impact of the storm is to shift energy supply curves to the left, and the more the grid relies on renewables, the bigger that shift is. The basic problem renewables have had is that it’s really difficult to STORE their energy for future use. If we could create really large energy reservoirs, we could store Texas’ abundant solar and wind energy for a literally-rainy day.
So we have supply curves shifting left at the same time demand curves are shifting right and prices can’t move … the final result is massive shortages! Now what could be done about that?
My friends on the left tend to blame deregulation. Sadly, not one of them is spelling out exactly what regulation they think would solve this problem. Let me be generous to them and imagine they mean the following: if the government ran (rather than regulated) the energy grid, they would build a greater capacity than we typically use.
And they have a point. Energy is like the opposite of the hotel industry. In the hotel industry, you don’t build the hotel based on AVERAGE, normal operations. In Stephenville, you build a hotel large enough to accommodate people who come for graduation. The cost of having unused rooms is fairly low – you still need to keep the room cool in case someone needs it, and you want to hire someone to dust it, but it just sits there most of the time. Then you rake in big money when demand suddenly increases. The energy industry is the opposite: it is very expensive to build capacity and it is also expensive to maintain it. Whether you are a private firm or a government, the money to maintain unused generators has to come from somewhere.
How do we afford that? In the market, energy prices are actually set a little bit higher than equilibrium so that supply > demand. That ensures we have plenty of electricity to handle normal, typical demand fluctuations. We pay for that excess capacity during the normal part of the year so that when temperatures are particularly high or extra low, the grid can handle it.
The government has a different problem, though. If electricity is publicly run, they will tend to set the price lower than the market would and make up the difference with taxes. That further divorces energy use from the price paid. We would have a higher quantity demanded at all times (wasteful). Add in that governments generally do a bad job running businesses (wasteful), and in order to have that excess capacity we would have to be willing to pay higher taxes (and lower energy bills) for many years to make up for the extra expense. Most governments, like most markets, will therefore tend to undersupply for an emergency because the voters don’t want to pay higher taxes and there is no such thing as a free lunch. So it’s not 100% clear that this would solve the problem. Europe has power outages that affect millions too.
Why? Healy and Malhotra: Governments respond to incentives, and voters give the wrong incentives: “Do voters effectively hold elected officials accountable for policy decisions? Using data on natural disasters, government spending, and election returns, we show that voters reward the incumbent presidential party for delivering disaster relief spending, but not for investing in disaster preparedness spending. These inconsistencies distort the incentives of public officials, leading the government to underinvest in disaster preparedness, thereby causing substantial public welfare losses. We estimate that $1 spent on preparedness is worth about $15 in terms of the future damage it mitigates. By estimating both the determinants of policy decisions and the consequences of those policies, we provide more complete evidence about citizen competence and government accountability.”
Bottom line: there isn’t an easy solution to weather events that happen once in a hundred years, whether it’s floods or hurricanes or … whatever this white, powdery substance is that’s blanketing my lawn. The basic problem is scarcity in a market where price signals don’t work (by design) at a time when supply shifts left and demand shifts right. To the extent climate change means more frequent extreme events, this will be a growing problem.
The most important historical question for understanding our rise from the muck to modern civilization is: how did we go from linear to exponential productivity growth? Let’s call that question “who started modernity?” People often look to the industrial revolution, which is certainly an acceleration of growth… but it is hard to say it caused the growth because it came centuries after the initial uptick. Historians also bring up the Renaissance, but this is misleading, due to the ‘written bias’ of focusing on books, not actions; the Renaissance was more like the window dressing of the Venetian commercial revolution of the 11th and 12th centuries, which is in my opinion the answer to “who started modernity.” However, despite being the progenitors of modern capitalism (which is worth a blog post in and of itself), Venice’s growth was localized and did not spread immediately across Europe; instead, Venice was the regional powerhouse that served as the example to copy. The Venetian model was also still proto-banking and proto-capitalism, with no centralized balance sheets, no widespread retail deposits, and a focus on Silk Road trade. Perhaps the next question is, “who spread modernity across Europe?” The answer to this question is far easier, and in fact can be centered to a huge degree on a single man, who was possibly the richest man of all time: Jakob Fugger.
Jakob Fugger was born to a family of textile traders in Augsburg in the 15th century and, after training in Venice, revolutionized banking and trading – the foundations on which investment, comparative advantage, and growth were built – as well as the relationships between commoners and aristocrats and the church’s view of usury; he even funded the exploration of the New World. He was the only banker alive who could call in a debt on the powerful Holy Roman Emperor, Charles V, mostly because Charles owed his power entirely to Fugger. Strangely, he is perhaps best known for his philanthropic innovations (founding the Fuggerei, some of the earliest recorded philanthropic housing projects, which are still in operation today); this should be easily outshone by:
His introduction of double entry bookkeeping to the continent
His invention of the consolidated balance sheet (bringing together the accounts of all branches of a family business)
His invention of the newspaper as an investment-information tool
His key role in the pope allowing usury (mostly because he was the pope’s banker)
His transformation of Maximilian from a paper emperor with no funding, little land, and no power to a competitor for European domination
His funding of early expeditions to bring spices back from Indonesia around the Cape of Good Hope
His trusted position as the only banker who the Electors of the Holy Roman Empire would trust to fund the election of Charles V
His complicated, mostly adversarial relationship with Martin Luther that shaped the Reformation and culminated in the German Peasants’ War, when Luther dropped his anti-capitalist rhetoric and Fugger-hating to join Fugger’s side in crushing a modern-era messianic figure
His involvement in one of the earliest recorded anti-trust lawsuits (where the central argument was around the etymology of the word “monopoly”)
His dissemination, for the first time, of trustworthy bank deposit services to the upper middle class
His funding of the military revolution that rendered knights unnecessary and bankers and engineers essential
His invention of the international joint venture in his Hungarian copper-mining dual-family investment, where marriages served in the place of stockholder agreements
His 12% annualized return on investment over his entire life (beating index funds for almost 5 decades without the benefit of a public stock market); he died the richest man in history.
The story of Fugger’s family–the story, perhaps, of the rise of modernity–begins with a tax record of his family moving to Augsburg, with an interesting spelling of his name: “Fucker advenit” (Fugger has arrived). His family established a local textile-trading family business, and even managed to get a coat of arms (despite their peasant origins) by making clothes for a nobleman and forgiving his debt.
As the 7th of 7 sons, Jakob Fugger was given the least important trading post in the area by his older brothers: Salzburg, a tiny mountain town that was about to have a change in fortune when miners hit the most productive vein of silver ever found by Europeans until the Spanish found Potosi (the Silver Mountain) in Peru. He then began his commercial empire by taking a risk that no one else would.
Sigismund, the lord of Salzburg, was sitting on top of a silver mine but still could not turn a profit, because he was trying to compete with the decadence of his neighbors. He took out loans to fund huge parties and then, to expand his power, made the strategic error of attacking Venice – the most powerful trading power of the era. This was the era when sovereigns could void debts, or any contracts, within their realm without major consequences, so lending to nobles was a risky endeavor, especially without the backing of a powerful noble to force repayment or address breach of contract.
Because of this risk of sovereign default, no other merchant or banker would lend to Sigismund for this venture; but where others saw only risk, Fugger saw opportunity. He saw that Sigismund was short-sighted and would constantly need funds; he also saw that Sigismund would sign any contract to get the funds to attack Venice. Fugger fronted the money, collateralized by near-total control of Sigismund’s mines – if only he could enforce the contract.
Thus, the Fugger empire’s first major investment was in securing (1) a long-term, iterated credit arrangement with a sovereign who (2) had access to a rapidly-growing industry and was willing to trade its profits for access to credit (to fund cannons and parties, in his case).
What is notable about Fugger’s supposedly crazy risk is that, while it depended on enforcing a contract against a sovereign who could nullify it with a word, he still set himself up for a consistent, long-term benefit that could be squeezed from Sigismund so long as he continued to offer credit. This way, Sigismund could not nullify earlier contracts but instead recognized them in return for ongoing loan services; thus, Fugger defused the urge toward betrayal by turning the one-shot prisoner’s dilemma of default into an iterated game. He did not demand immediate repayment, but rather set up a consistent revenue stream and established himself as Sigismund’s crucial creditor. Sigismund kept wanting finer things – and kept borrowing from Fugger to get them, meaning he could not default on the original loan that gave Fugger control of the mines’ income. Fugger countered asymmetrical social relationships with asymmetric contract terms, and countered the desire for default by becoming essential.
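The iterated-game logic here can be sketched with invented payoffs: defaulting dominates a one-shot loan, but if default ends all future credit, a long enough relationship flips the sovereign's calculation.

```python
# Sketch of sovereign lending as a repeated game; all payoffs are invented.
REPAY_PROFIT = 10    # sovereign's per-period gain from honoring the deal
DEFAULT_GAIN = 40    # one-time windfall from voiding the contract
DISCOUNT = 0.9       # how much the sovereign values the next period

def value_of_repaying(horizon):
    """Discounted value of keeping the credit relationship alive."""
    return sum(REPAY_PROFIT * DISCOUNT ** t for t in range(horizon))

# One-shot loan: defaulting strictly dominates repaying.
print(DEFAULT_GAIN > value_of_repaying(1))   # True

# Repeated lending: if default cuts off all future credit (the lender's
# "grim trigger"), a ten-period relationship beats the one-off windfall.
print(value_of_repaying(10) > DEFAULT_GAIN)  # True
```

The specific numbers do not matter; what matters is that the continuation value of credit grows with the horizon, which is exactly what Fugger engineered by making himself Sigismund's only ongoing source of funds.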
Eventually, Fugger met Maximilian, a disheveled, religion-and-crown-obsessed nobleman who had been elected Holy Roman Emperor specifically because of his lack of power. The Electors wanted a paper emperor to preserve the freedom of their principalities; Maximilian was so weak that a small town once arrested and beat him for trying to impose a modest tax. Fugger, unlike others, saw opportunity, because he recognized that bringing paper trails (contracts or election outcomes) into line with real power relationships could align interests and set him up as the banker to emperors. When Maximilian came into conflict with Sigismund, Fugger refused any further loans to Sigismund, and Maximilian forced Sigismund to step down. Part of Sigismund’s surrender and Maximilian’s new treaty included recognizing Fugger’s ongoing rights over the Salzburg mines – a sure sign that Fugger had found a better patron and solidified his rights over the mine through his political maneuvering, by denying a loan to Sigismund and offering money instead to Maximilian. Once he had secured this cash cow, Fugger was certainly put in risky scenarios, but he didn’t seek out risk, and saw consistent yearly returns of 8% for several decades, followed by 16% in the last 15 years of his life.
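For a sense of what such figures compound to, a quick calculation helps; note that the 35/15-year split between the two phases below is my own assumption, used only to illustrate the arithmetic.

```python
# Compounding sketch; the 35/15-year phase split is a guessed illustration.
def growth(rate, years):
    """Multiple on starting capital after compounding at `rate` for `years`."""
    return (1 + rate) ** years

# A flat 12% annualized return over a roughly 50-year career:
print(round(growth(0.12, 50)))                      # ~289x starting capital

# The two phases as reported: decades at 8%, then 15 years at 16%:
print(round(growth(0.08, 35) * growth(0.16, 15)))   # ~137x
```

Either way, a few percentage points sustained over decades multiply capital by triple digits, which is the whole point of calling Fugger possibly the richest man of all time.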
From this point forward, Fugger was effectively the creditor to the Emperor throughout Maximilian’s life, and built a similar relationship: Maximilian paid for parties, military campaigns, and bought off Electors with Fugger funds. As more of Maximilian’s assets were collateralized, Fugger’s commercial empire grew; he gained not only access to silver but also property ownership. He was granted a range of fiefs, including Arnoldstein, a critical trade juncture where Austria, Italy, and Slovenia border each other; his manufacturing and trade led the town to be renamed, for generations, Fuggerau, or Place of Fugger.
These activities, which depended on lending to sovereigns, bring up a major question: how did Fugger get the money he lent to the Emperor? Early in his career, he noted that bank deposit services with branches in different cities were a huge boon to the rising middle-upper class; property owners and merchants did not have access to reliable deposit services, so Fugger created a network of small branches, all offering deposits at low interest rates, where he could grow his services on the dependability of moving and holding money for those near, but not among, society’s elites. This gave him a deep well of dispersed depositors, providing stable and dependable capital for his lending to sovereigns and funding his expanding mining empire.
Unlike modern financial engineers, who seem to focus on creative ways to go deeper in debt, Fugger’s creativity was mostly in ways that he could offer credit; he was most powerful when he was the only reliable source of credit to a political actor. So long as the relationship was ongoing, default risk was mitigated, and through this Fugger could control the purse strings on a wide range of endeavors. For instance, early in their relationship (after Maximilian deposed Sigismund and as part of the arrangement made Fugger’s interest in the Salzburg mines more permanent), Maximilian wanted to march on Rome as Charlemagne reborn and demand that the pope personally crown him; he was rebuffed dozens of times not by his advisors, but by Fugger’s denial of credit to hire the requisite soldiers.
Fugger also innovated in information exchange. Because he had a broad trading and banking business, he stood to lose a great deal if a region had a sudden shock (like a run on his banks) or gain if new opportunities arose (like a shift in silver prices). He took advantage of the printing press–less than 40 years after Gutenberg, and in a period when most writing was religious–to create the first proto-newspaper, which he used to gather and disseminate investment-relevant news. Thus, while he operated a network of small branches, he vastly improved information flow among these nodes and also standardized and centralized their accounting (including making the first centralized/combined balance sheet).
With this broad base of depositors and a network of informants, Fugger proceeded to change how war was fought and redraw the maps of Europe. Military historians have debated for decades the “military revolution” that shifted the weapons, organization, and scale of war, often centering on Swedish armies in the 1550s as its beginning. I would counter that the Swedes simply continued a trend the continent had begun in the late 1400s, in which:
Knights’ training became irrelevant, gunpowder took over
Logistics and resource planning were professionalized
Early mechanization of ship building and arms manufacturing, as well as mining, shifted war from labor-centric to a mix of labor and capital
Multi-year campaigns were possible due to better information flow, funding, professional organization
Armies, especially mercenary groups, ballooned in size
Continental diplomacy became more centralized and legalistic
Wars were fought by access to creditors more than access to trained men, because credit could multiply the recruitment/production for war far beyond tax receipts
Money mattered in war long before Fugger: Roman usurpers always took over the mints first, and Alexander showed how logistics and supply were more important than pure numbers. However, the 15th century saw a change: armies became about guns, mercenaries, technological development, investment, and above all credit, and Fugger was the single most influential creditor of European wars. After a trade dispute with the aging Hanseatic League over its monopoly of key trading ports, Fugger manipulated the cities into betraying each other–culminating in a war in which those funded by Fugger broke the monopolistic power of the League. Later, because he had a joint venture with a Hungarian copper miner, he pushed Charles V into an invasion of Hungary that resulted in the creation of the Austro-Hungarian Empire. These are but two examples of Fugger destroying political entities; every Habsburg war fought from the rise of Maximilian through Fugger’s death in 1525 was funded in part by Fugger, giving him the power of the purse over such seminal conflicts as the Italian Wars, in which Charles V fought on the side of the Pope and Henry VIII against Francis I of France and Venice, culminating in a Habsburg victory.
Like the Rothschilds after him, Fugger gained hugely from a reputation for being ‘good for the money’; while other bankers did their best to take advantage of clients, he provided consistency and dependability. Like the Iron Bank of Braavos in Game of Thrones, Fugger was the dependable source for ambitious rulers–but with the constant threat of denying credit, or even of war, against any defaulter. His central role in manipulating political affairs via his banking is well attested by the election of Charles V in 1519. The powerful kings of Europe–Francis I of France, Henry VIII of England, and Frederick III of Saxony–all offered huge bribes to the Electors. Because these sums crossed half a million florins, the competition rapidly became one not for the interest of the Electors but for access to capital. The Electors actually stipulated that they would not take payment based on a loan from anyone except Fugger; since Fugger chose Charles, so did they.
Fugger also inspired great hatred among populists and religious activists; Martin Luther was a contemporary who called Fugger out by name as part of the problem with the papacy. The reason? Fugger was the personal banker to the Pope, who was pressured into rescinding the church’s previously negative view of usury. He also helped arrange the scheme to fund the construction of the new St. Peter’s basilica; in fact, half of the indulgence money that was putatively for the basilica was actually to pay off the Pope’s huge existing debts to Fugger. Thus, to Luther, Fugger was greed incarnate, and Fugger’s name became best known to the common man not for his innovations but for his connection to papal extravagance and greed. This culminated in the 1525 German Peasants’ War, which saw an even more radical Reformer and self-styled messianic figure lead hordes of hundreds of thousands to Fuggerau and many other fortified towns. Luther himself inveighed against these mobs for their radical demands, and Fugger’s funding brought swift military action that put an end to the war–but not to the Reformation or the hatred of bankers, which would explode violently throughout the next 100 years in Germany.
This brings me to my comparison: Fugger against all of the great wealth creators in history. What makes him stand head and shoulders above the rest, to me, is that his contributions cross so many major facets of society. Like Rockefeller, he used accounting and technological innovations to expand the distribution of a commodity (silver rather than oil), and he was also one of the OG philanthropists. Like the Rothschilds with their development of the government bond market and reputation-driven trust, Fugger’s balance-sheet inventions and trusted name provided infrastructural improvements to the flow of capital, trust in banks, and the literal tracking of transactions. However, no other capitalist had as central a role in religious change–both as the driving force behind allowing usury and as an anti-Reformation leader. Similarly, few other people had as great a role in the Age of Discovery: Fugger funded Portuguese spice traders in Indonesia, possibly bankrolled Magellan, and funded the expedition that founded Venezuela (named in honor of Venice, where he trained). Lastly, no other banker had as influential a role in political affairs: from dismantling the Hanseatic League, to deciding the election of 1519, to building the Habsburgs from paper emperors into the most powerful monarchs in Europe in two generations, Fugger was the puppeteer of Europe–and such an effective one that you have barely heard of him. Hence, Fugger was not only the greatest wealth creator in history but among the most influential people in the rise of modernity.
Fugger’s legacy can be seen in his balance sheet of 1527: he basically developed the method of using the balance sheet for central management; its only liabilities were widespread deposits from the upper-middle class (and his asset-to-debt ratio was in the range of 7-to-1, leaving an astonishingly large amount of equity for his family); and every important leader on the continent was literally in his debt. It also showed him to have over 1 million florins in personal wealth, making him one of the world’s first recorded millionaires. The title of this post was adapted from a self-description written by Jakob himself as his epitaph. As my title shows, I think it is fairer to credit his wealth creation than his wealth accumulation, since he revolutionized multiple industries and changed the history of capitalism, trade, European politics, and Christianity, mostly through his contribution to the credit revolution. However, the man himself worked until the day he died and took great pride in being the richest man in history.
I wrote an article a few years ago about hyperinflation in ancient Rome (and blogged about it here), arguing that social trust in issuing bodies was a foundation for monetary value long before modern institutions.
This passed my ‘gut check’: during a crisis, who blows their entire budget? It also passed my historical-precedent check, and not only because he researched the Spanish flu and medieval precedent: in the Roman case, inflation lagged decades behind the expanded monetary volume, and in fact arrived right as the civil wars that nearly brought the Empire to its knees came to an end.
So, in short, inflation-hawks, you are probably right to fear the dramatic expansion of the money supply; however, you won’t feel vindicated for potentially years to come. In an age where people look for causes today to become results tomorrow (EVERY DAY, the WSJ tells me “stocks moved up/down because MAJOR EVENT TODAY”), we need to lengthen our time horizons of analysis and recognize that, just maybe, the ramifications of today’s policies will not really be felt for years. Or, put in a more dire light, by the time we realize who is right, it will be too late to reassert social trust in monetary value, and the dollar will follow the denarius into histories of hyperinflations.
It looks to me (as I refresh tracking numbers) that the post office is still reeling after several months of attempted voter suppression. It also looks to me like even though Trump is on his way out, there is no reason to believe that someone just as terrible couldn’t come along at any point in the next 50 years and outdo him.
As far as the USPS goes, I think there’s a fairly simple solution that should make most people happy: split the USPS in two–a private for-profit firm that delivers junk mail and competes with UPS and Amazon, and a government agency that handles government business, including things like distributing ballots and census surveys.
But the USPS is just one small part of a much larger problem. When Trump II comes along, he’ll have more powers, including (very likely) a lot more power to mess with the health care sector. There are a lot of reasons I don’t like the idea of more government in health care, but this one should be terrifying to everyone.
This Atlantic article got me thinking. As an Indian national in the U.S., I would like to make a limited point about some (definitely not all) Indian Americans. In my interactions with some Indian Americans, the topic of India induces, if you will, a conflicting worldview. India—the developing political state—is often belittled in some very crude ways, with out-of-context recent Western parallels drawn by mostly uninformed but emboldened Indian Americans.
Just mention Indian current affairs, and some of these well-assimilated Indian Americans quickly toss out their culturally informed, empathetic, anti-racist, historically-contingent-privilege rhetoric to conveniently take on a sophisticated “self-made” persona, implying a person who ticked all the right boxes in life by making it in the U.S. This reflexive attitude reversal comes in handy to patronize Indians living in India. They often stereotype us as somehow lower in status, or at least less competent, owing to the lack of an advanced political state or an “American” experience—therefore deficient in better ways of living and a higher form of “humanistic” thinking.
This possibly unintentional but ultimately patronizing competence-downshift by a section of Indian Americans results in pejorative language sketching generalizations about Indian society, even as they recognize the same language as racist when applied to minorities of color in America.
In the last decade, I have learned that one must always take those who openly profess to be do-gooders, culturally conscious, anti-racist, and aware of their privileged Indian American status as a contingency of history with a bucketload of salt. Never take these self-congratulatory labels at face value. Discuss the topic of India with them to check whether Indian contexts are easily overlooked. If they are, then obviously these spectacular self-congratulatory labels are just that–skin-deep tags to fit into the dominant cultural narrative in the U.S.
Words of the economist Pranab Bardhan are worth highlighting: “Whenever you find yourself thinking that some behavior you observe in a developing country is stupid, think again. People behave the way they do because they are rational. And if you think they are stupid, it’s because you have failed to recognize a fundamental feature of their current economic environment.”
One of my favorite classics about why big businesses can’t always innovate is Clayton Christensen’s The Innovator’s Dilemma. It is one of the most misunderstood business books, since its central concept–disruption–has been misquoted and then popularized. Take the recent post on Investopedia that says, in its second sentence, that “Disruptive technology sweeps away the systems or habits it replaces because it has attributes that are recognizably superior.” This is the ‘hype’ definition used by non-innovators.
I think part of the misconception comes from thinking of disruption as major, public, technological marvels that are recognizable for their complexity or for even creating entire new industries. Disruptive innovations tend instead to be marginal, demonstrably simpler, worse on conventional scales, and start out by slowly taking over adjacent, small markets.
It recently hit me that you can identify disruption via Nassim Nicholas Taleb’s simple heuristics for recognizing when industry players are fragile. Taleb is my favorite modern philosopher, because he actually brought a new, universally applicable concept to the table, one that puts into words what people have been practicing implicitly–but without a term to use. Anti-fragility is the inverse of fragility, and actually helps you understand it better. Anti-fragile does not mean ‘resists breaking,’ which is more like ‘robust;’ instead, it means gains from chaos. Ford Pintos are fragile, Nokia phones are robust, but mechanical things are almost never anti-fragile. Bacterial species are anti-fragile to antibiotics, as trying to kill them makes them stronger. Anti-fragile things are usually organic, and usually made up of fragile things–the death of one bacterium makes the species more resistant.
Taleb has a simple heuristic for finding anti-fragility. I recommend you read his book to get the full picture, but the secret to this concept is a simple thought experiment. Take any concept (or thing), and identify how it works (or fails to work). Now ask, if you subject it to chaos–by that, I mean, if you try to break it–and slowly escalate how hard you try, what happens?
If it gets disproportionately harmed, it is fragile. E.g., traffic: as you add cars, time-to-destination gets worse slowly at first, then all of a sudden increases rapidly, and if you add enough, cars literally stop.
If it gets proportionately harmed or there is no effect, it is robust. Examples are easy, since most functional mechanical and electric systems are either fragile (such as Ford Pintos) or robust (Honda engines, Nokia phones, the Great Pyramids).
If it gets better, it is anti-fragile. Examples are harder here, since it is easier to destroy than to build (and anti-fragility usually arises from fragile elements, which gets confusing); bacterial resistance to antibiotics (or really, the function of evolution itself) is a great one.
The only real way to get anti-fragility outside of evolution is through optionality. Debt (obligation without choice) is fragile to any extraneous shock, so a ‘free option’–choice without obligation–is its opposite: pure anti-fragility. This is not just about literal ‘options’ in the market; anti-fragility takes a different form in every case, and though the face is different, the structure is the same. OK, get it? Maybe you do. I recommend coming up with your own example–if you are just free riding on mine, you don’t get it.
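Taleb’s thought experiment is essentially a convexity check, and it can be sketched in a few lines of code. This is my own toy formalization, not Taleb’s: escalate the stress, compare successive harm increments, and see which way the response curve bends. The three response functions are invented illustrations standing in for traffic, Nokia phones, and bacteria, not real models.

```python
# A toy version of Taleb's heuristic: stress a system at escalating
# intensity and watch how harm grows. The response functions below are
# invented illustrations, not real models.

def classify(response, stresses=(1.0, 2.0, 3.0)):
    """Super-linear harm -> fragile; proportional or no harm -> robust;
    improvement under stress -> anti-fragile."""
    low, mid, high = (response(s) for s in stresses)
    if high > low:
        return "anti-fragile"            # gains from escalating chaos
    d1, d2 = low - mid, mid - high       # successive harm increments
    return "fragile" if d2 > d1 + 1e-9 else "robust"

traffic = lambda cars: -cars**3          # time lost explodes as cars pile up
nokia = lambda drops: -drops             # damage merely proportional to abuse
bacteria = lambda dose: dose             # survivors resist the antibiotic

print(classify(traffic))    # fragile
print(classify(nokia))      # robust
print(classify(bacteria))   # anti-fragile
```

The point of the sketch is the second difference: fragility is accelerating harm (convex losses), robustness is linear or no harm, and anti-fragility is the curve bending upward.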
Anyway, back to Christensen. Taleb likes theorizing and leaves example-finding to you, while Christensen scrupulously documented what happened to hundreds of companies, and his concepts arose from his data; think of it as Christensen being Darwin, carefully measuring beaks and recognizing natural selection, where Taleb is Wallace, theorizing from his experience and the underlying math of reality. Except in this case, Taleb is not just talking about natural selection; he is also showing how mutation works, and giving a theory of evolution that is not restricted to biology.
I realized that you can actually figure out whether an innovation is disruptive using this heuristic. It takes some care, because people often look at the technology and ask if it is anti-fragile–which is a mistake. Technologies are inorganic, so usually robust or fragile. Industries are organic, strategies are organic, companies are organic. Many new strategies build on companies’ competencies or existing customer bases, and though they may meet the ‘hype’ definition above, they give upside to incumbents, and are thus not fragilizing. Disruption happens when a company has an exposure to a strategy that it has little to gain from, but that could cannibalize its market if it grows, as anti-fragile things are wont to do.
The question is: is a given incumbent company fragile with respect to a given strategy? Let’s start with some examples–first Christensen’s, then my own:
Were 3″ drive makers fragile with respect to using smaller drives in cars?
In my favorite Christensen anecdote, the CEO of a 3″ drive maker, whose company designed a smaller 1.8″ drive but couldn’t sell it to their PC or mainframe customers, complained that he did exactly what Christensen said and built smaller drives, and there was no market. Meanwhile, startups were selling 1.8″ drives like crazy–to car companies, for onboard computers.
Christensen notes that this was a tiny market, one that would be a 0.01% change on a big-company income statement, and a low-profit one at that. So, because these companies were big, they were fragile to low-margin, low-volume, fast-growing submarkets. Meanwhile, startups were unbelievably excited about selling small drives at a loss, just so that Honda would buy from them.
So, 3″ drive makers had everything to lose (the general drive market) and a blip to gain, whereas startups had everything to gain and nothing to lose. Note that disruptive technologies are not those that are hard to invent or that immediately revolutionize the industry. Big companies (as Christensen showed) are actually better at big changes and at invention. They are worse at recognizing the value of small changes and of jumps between industries.
Were book retailers fragile with respect to online book sales?
Yes, Amazon is my Christensen follow-on. Jeff Bezos, as documented in The Everything Store, gets disruption: he invented the ‘two-pizza meeting’, so he ‘gets’ smallness; he intentionally isolates his innovation teams, so he ‘gets’ the excitement of tiny gains and allows cannibalism; he started in a proof-of-concept, narrow, feasible discipline (books) with the knowledge that it would grow into the Everything Store if successful, so he ‘gets’ going from simple beginnings to large-scale, well, disruption.
The Everything Store reads like a manual on how to be disrupted. Barnes & Noble first said “We can do that whenever we want.” Then when Bezos got some traction, B&N said “We can try this out but we need to figure out how to do it using our existing infrastructure.” Then when Bezos started eating their lunch, B&N said “We need to get into online book sales,” but sold the way they did in stores, by telling customers what they want, not by using Bezos’ anti-fragile review system. Then B&N said “We need to start doing whatever Bezos does, and beat him by out-spending,” by which time he was past that and selling CDs and then (eventually) everything.
Book sellers were fragile because they had existing assets that had running costs; they were catering to customers with not just a book, but with an experience; they were in the business of selecting books for customers, not using customers for recommendations; they treasured partnerships with publishers rather than thinking of how to eliminate them.
Now, some rapid-fire. Think carefully, since it is easy to fall into the trap of thinking industry titans were stupid, not fragile, and it is easy to have false positives unless you use Taleb’s heuristic.
Car companies were fragile to electric sports cars, and Elon Musk was anti-fragile. Sure, he went up-market, which doesn’t follow Christensen’s down-market paradigm, but he found the small market that the Nissan Leaf missed.
NASA was fragile to modern, cheap, off-the-shelf space solutions, and…yet again…Elon Musk was anti-fragile.
Hedge funds were fragile to index funds, currently are fragile to copy trading, and I hope to god they break.
Lastly, some counter-examples, since it is always better to use the via negativa, and assuming you have additive knowledge is dangerous. If you disagree, prove me wrong, found a startup, and make a bajillion dollars by disrupting the big guys who won’t be able to find a market:
There is nothing disruptive about 5G.
Solar and wind are fragile and fragilizing.
What was wrong with WeWork’s business model? Double fragility–fixed contracts with building owners, flexible contracts with customers.
On a more optimistic note, cool tech can still be sustaining (as opposed to disruptive), like RoboAdvisors or induction stoves or 3D printed shoes.
Artificial intelligence and blockchain are not disruptive in any use you have heard of (though they may be in uses you haven’t heard of yet).
So, to summarize: if a company is fragile to a new strategy, the best it can do is try to robustify itself, since it has little upside. Many innovations give upside to incumbents at the marginal cost of R&D, and thus sustain them. Disruption happens when incumbents have little to gain from adopting a strategy, while startups have high exposure to the upside of adopting it, thanks to the potential growth hidden in small-market, incremental, or simplifying opportunities–which is, definitionally, anti-fragility to the strategy.
Now, I hope you have a tool for judging whether industrial incumbents are fragile. Rather than trying to predict the success or failure of any one company, you should just use Taleb’s heuristic–that will help you sort things into ‘hyped as disruptive’ vs. ‘actually probably disruptive.’ A last thought: if you found this wildly confusing, just remember that disruptive innovations tend to steal the jobs of incumbents. So, if an incumbent (say, a Goldman Sachs/Morgan Stanley veteran writing the definition of “disruptive” for Investopedia) is talking about a banking or trading technology, it is almost certainly not disruptive, since he would hardly tell you how to render himself superfluous. You will find out what is disruptive when he makes an apology video while wearing a nice watch and French cuffs.
The market for who wins the presidency closed this morning! But the Electoral College margin of victory market was still open and at 98 cents for the already certain outcome. Maxing out my position there would mean $17 for free! So I did, and the market dipped to 97 cents.
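For what it’s worth, the arithmetic behind that ‘free’ $17 is simple to sketch. I’m assuming a $0.98 price, a $1.00 settlement, and a per-market cap of about $850 (the cap is my assumption about the platform, not stated above):

```python
# Back-of-the-envelope for the 'free' money in a near-certain market.
# The $850 figure is an assumed per-market position limit.
price, payout, cap = 0.98, 1.00, 850.00

shares = cap / price                  # how many 98-cent shares the cap buys
profit = shares * (payout - price)    # each share pays $1.00 at settlement

print(round(shares))                  # ~867 shares
print(f"${profit:.2f}")               # ~$17.35 if the sure thing settles at $1
```

Two cents per share on roughly 867 shares is where the ‘$17 for free’ comes from–ignoring fees, which real platforms do charge.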
This truly is the dumbest jack in the box. We all know exactly what’s going to happen, and yet…
I have recently shifted my “person I am obsessed with listening to”: my new guy is George Hotz, an eccentric innovator who built a cell phone that can drive your car. His best conversations come from Lex Fridman’s podcast (in 2019 and 2020).
Hotz’s ideas call into question the efficacy of any ethical strategy to address ‘scary’ innovations. For instance, based on his experience playing “Capture the Flag” in hacking challenges, he noted that he never plays defense: a defender must cover all vulnerabilities and loses if he fails once, while an attacker only needs to find one vulnerability to win. Basically, in CTF, attacking is anti-fragile and defense is fragile.
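That asymmetry can be made concrete with a little probability. A toy model (my illustration, not Hotz’s): if each of n vulnerabilities is independently covered with probability p, the defender survives only if every single one holds, while the attacker wins on any miss.

```python
# The CTF asymmetry in one function: the defender must cover every hole,
# while the attacker needs just one miss. Numbers are illustrative only.

def defender_survives(p_patched: float, n_vulns: int) -> float:
    """Probability that every one of n independent vulnerabilities is covered."""
    return p_patched ** n_vulns

# Even a 95%-reliable defender is almost certain to lose at scale:
print(round(defender_survives(0.95, 1), 3))    # 0.95
print(round(defender_survives(0.95, 50), 3))   # 0.077
```

The defender’s odds decay exponentially in the number of attack surfaces; the attacker’s odds are the complement, so they grow with every new hole.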
Hotz’s work centers on reinforcement learning systems, which learn from AI errors in automated driving to iterate toward a model that mimics ‘good’ drivers. Along the way, he has been bombarded with questions about ethics and safety, and I was startled by the frankness of his answer: there is no way to guarantee safety, and Comma.ai still depends on human drivers to intervene to protect themselves. Hotz basically dismisses any system that claims to take an approach to “Level 5 automation” that is not learning-based and iterative, as driving in any condition, on any road, is an ‘infinite’ problem. Infinite problems have natural vulnerabilities to errors and are usually closer to impossible, whereas finite problems often have effective and world-changing solutions. Here are some of his ideas, and some of mine that spawned from his:
The Seldon fallacy: In short, 1) It is possible to model complex, chaotic systems with simplified, non-chaotic models; 2) Combining chaotic elements makes the whole more predictable. See my other post for more details!
Finite solutions to infinite problems: In Hotz’s words regarding how autonomous vehicles take in their environments, “If your perception system can be written as a spec, you have a problem”. When faced with any potential obstacle in the world, a set of plans–no matter how extensive–will never be exhaustive.
Trolling the trolley problem: Every ethicist looks at autonomous vehicles and almost immediately sees a rarity–a chance for an actual, direct application of a philosophical riddle! What if a car has to choose between running into several people or altering its path to hit only one? I love Hotz’s answer: we give the driver the choice. It is hard to solve the trolley problem, but not hard to notice it, so the software alerts the driver whenever one may occur–just like any other disengagement. To me, this takes the hot air out of the question, since it shows that, as with many ethical worries about robots, the problem is not unique to autonomous AIs but inherent in driving–and if you really are concerned, you can choose yourself which people to run over.
Vehicle-to-vehicle insanity: Some autonomous vehicle innovators promise “V2V” connections, through which all cars ‘tell’ each other where they are and where they are going, and thus gain tremendously from shared information. Hotz cautions (OK, he straight up said ‘this is insane’) that any V2V system depends, for the safety of each vehicle and rider, on 1) no communication errors and 2) no liars. V2V is just a gigantic target waiting for a black hat, and by connecting the vehicles, the potential damage inflicted is magnified thousands-fold. That is not to say cars should not connect to the internet (e.g., having Google Maps to inform on static obstacles is useful), just that the safety of passengers should never depend on a single system evading all errors and malfeasance.
Permissioned innovation is a contradiction in terms: As Hotz says, the only way forward in autonomous driving is incremental innovation. Trial and error. Now, there are less ethically worrisome ways to err–such as requiring a human driver who can correct the system. However, there is no future for innovations that must emerge fully formed before they are tried out. And, unfortunately, ethicists–whose only skin in the game is getting their voice heard over the other loud protesters–have an incentive to promote the precautionary principle, loudly chastise any innovator who causes any harm (like Uber’s first pedestrian fatality), and demand that ethical frameworks precede new ideas. I would argue back that ‘permissionless innovation’ leads to more inventions and long-term benefits, but others have done so quite persuasively. So I will just say that even the idea of ethics-before-inventions contradicts itself. If the ethicist could make such a framework effectively, the framework would include the invention itself–making the ethicist the inventor! Since, instead, what we get is ethicists hypothesizing as to what the invention will be, and then restricting those hypotheses, we end up with two potential outcomes: one, the ethicist hypothesizes correctly, bringing the invention within the realm of regulatory control, and thus kills it; two, the ethicist has a blind spot, and someone invents something in it.
“The Attention”: I shamelessly stole this one from video games. Gamers are very focused on optimal strategies, and rather than just focusing on cost-benefit analysis, gamers have another axis of consideration: “the attention.” Whoever forces their opponent to focus on responding to their own actions ‘has the attention,’ which is the gamer equivalent of the weather gauge. The lesson? Advantage is not just about outscoring your opponent, it is about occupying his mind. While he is occupied with lower-level micromanaging, you can build winning macro-strategies. How does this apply to innovation? See “permissioned innovation” above–and imagine if all ethicists were busy fighting internally, or reacting to a topic that was not related to your invention…
The Maginot ideology: All military historians shake their heads in disappointment at the Maginot Line, which Hitler easily circumvented. To me, the Maginot planners suffered from two fallacies: one, they prepared for the war of the past, solving a problem that was no longer extant. Second, they defended all known paths, and thus forgot that, on defense, you fail if you fail once, and that attackers tend to exploit vulnerabilities, not prepared positions. As Hotz puts it, it is far easier to invent a new weapon–say, a new ICBM that splits into 100 tiny AI-controlled warheads–than to defend against it, such as by inventing a tracking-and-elimination “Star Wars” defense system that can shoot down all 100 warheads. If you are the defender, don’t even try to shoot down nukes.
The Pharsalus counter: What, then, can a defender do? Hotz says he never plays defense in CTF–but what if that is your job? The answer is never easy, but should include some level of shifting the vulnerability to uncertainty onto the attacker (as with “the Attention”). As I outlined in my previous overview of Paradoxical genius, one way to do so is to intentionally limit your own options, but double down on the one strategy that remains. Thomas Schelling won the “Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel” for outlining this idea in The Strategy of Conflict, but more importantly, Julius Caesar himself pioneered it by deliberately backing his troops into a corner. As remembered in HBO’s Rome, at the seminal engagement of Pharsalus, Caesar said: “Our men must fight or die. Pompey’s men have other options.” However, he also made another underappreciated innovation, the idea of ‘floating’ reserves. He held back several cohorts of his best men to be deployed wherever vulnerabilities cropped up–thus enabling him to be reactive, and forcing his opponent to react to his counter. Lastly, Caesar knew that Pompey’s ace-in-the-hole, his cavalry, was made up of vain higher-class nobles, so he told his troops, instead of inflicting maximum damage indiscriminately, to focus on stabbing their faces and thus disfigure them. Indeed, Pompey’s cavalry did not flee from death, but did from facial scars. To summarize, the Pharsalus counter is: 1) create a commitment asymmetry, 2) keep reserves to fill vulnerabilities, and 3) deface your opponents.
Offensive privacy and the leprechaun flag: Another way to shift the vulnerability is to give false signals meant to deceive black hats. In Hotz’s parable, imagine that you capture a leprechaun. You know his gold is buried in a field, and you force the leprechaun to plant a flag where he buried it. However, when you show up to the field, you find it planted with thousands of flags over its whole surface. The leprechaun gave you a nugget of information–but it became meaningless in the storm of falsehood. This is a way that privacy may need to evolve in the realm of security; we will never stop all quests for information, but planting false (leprechaun) flags could deter black hats regardless of their information retrieval abilities.
The best ethics is innovation: When asked what his goal in life is, Hotz says ‘winning.’ What does winning mean? It means constantly improving one’s skills and information, while also seeking a purpose that changes the world in a way you are willing to dedicate yourself to. I think the important part of this is that Hotz does not say “create a good ethical framework, then innovate.” Instead, he is effectively saying the opposite–learn and innovate to build abilities, and figure out how to apply them later. The insight underlying this is that the ethics are irrelevant until the innovation is there, and once the innovation is there, the ethics are actually easier to nail down. Rather than discussing ‘will AIs drive cars morally,’ he is building the AIs and anticipating that new tech will mean new solutions to the ethical questions, not just the practical considerations. So, in summary: if you care about innovation, focus on building skills and knowledge bases. If you care about ethics, innovate.
Like some of my role models, I am inspired by Isaac Asimov’s vision. However, for years, the central ability at the heart of the Foundation series–‘psychohistory,’ which enables Hari Seldon, the protagonist, to predict broad social trends across an entire galaxy over thousands of years–has bothered me. Not so much because of its impact in the fictional universe of Foundation, but for how closely it matches the real-life ideas of predictive modeling. I truly fear that the Seldon Fallacy is spreading, building up society’s exposure to negative, unpredictable shocks.
The Seldon Fallacy: 1) It is possible to model complex, chaotic systems with simplified, non-chaotic models; 2) Combining chaotic elements makes the whole more predictable.
The first part of the Seldon Fallacy is the mistake of assuming reducibility, or more poetically, of NNT’s Procrustean Bed. As F.A. Hayek asserted, no predictive model can be less complex than the system it predicts, because of second-order effects and the accumulation of errors of approximation. Isaac Asimov’s central character, Hari Seldon, fictionally ‘proves’ the ludicrous fallacy that chaotic systems can be reduced to ‘psychohistorical’ mathematics: while unable to predict individuals’ actions precisely, Seldon can supposedly map out social forces with such clarity that he correctly predicts the fall of a 10,000-year empire. I hope you, reader, don’t believe that…so you don’t blow up the economy by betting a fortune on an economic prediction. Two famous thought experiments undermine this: the three-body problem and the damped, driven oscillator. If we can’t even model a system with three ‘movers’, because of second-order effects, how can we model interactions between millions of people? Basically, with no way to know which reductions in complexity are meaningful, Seldon cannot know whether, in laying his living system into a Procrustean bed, he has accidentally decapitated it. Now, to turn to the second portion of the fallacy: that big things are predictable even if their constituent elements are not.
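The error-accumulation point can be made concrete with a toy chaotic system. The sketch below is my own illustration (not Asimov’s or Hayek’s math): it iterates the logistic map at r = 4, a textbook example of chaos, and shows a microscopic difference in starting conditions blowing up to an order-one difference within a few dozen steps.

```python
# Sensitivity to initial conditions in the chaotic logistic map (r = 4).
# Two starting points differing by 1e-10 diverge to order-1 differences
# after a few dozen iterations, so any tiny approximation error in a
# model's initial state eventually swamps its prediction.

def logistic(x, r=4.0):
    return r * x * (1 - x)

def trajectory(x0, steps):
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic(xs[-1]))
    return xs

a = trajectory(0.3, 50)
b = trajectory(0.3 + 1e-10, 50)
gap = [abs(x - y) for x, y in zip(a, b)]
print(f"initial gap: {gap[0]:.1e}, largest gap over 50 steps: {max(gap):.3f}")
```

With only one variable and exact arithmetic in the update rule, the forecast still becomes worthless; a society of millions of interacting ‘movers’ is far worse.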
The second part of the Seldon Fallacy is the mistake of ‘the marble jar.’ Not all randomness is equal: drawing white and black marbles from a jar (with replacement) is fundamentally predictable, and the more marbles drawn, the more predictable the mix of marbles in the jar. Many models depend on this assumption or similar ones–that random events are distributed normally (in the Gaussian sense) in a way that increases the certainty of the model as the number of samples increases. But what if we are not observing independent events? What if they are not Gaussian? What if someone tricked you and tied some marbles together, so you can’t take out only one? What if one of them is attached to the jar, and by picking it up you inadvertently break the jar, spilling the marbles? Effectively, what if you are working not with a finite, reducible, Gaussian random system, but with an infinite, Mandelbrotian, real-world random system? What if the jar contains not marbles, but living things?
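The marble-jar contrast can be simulated directly. In this sketch (the distributions and parameters are my own choices for illustration), the running mean of Bernoulli ‘marble draws’ settles down as the law of large numbers promises, while the running mean of heavy-tailed Pareto draws keeps lurching long after the sample is ‘large’:

```python
import random

random.seed(42)

# Marble jar: each draw is black with probability 0.3. The running mean
# converges quickly and stays put. Fat tail: Pareto with alpha = 1.1
# (infinite variance), where one late extreme draw can still move the
# running mean substantially.

def running_mean(xs):
    total, means = 0.0, []
    for i, x in enumerate(xs, 1):
        total += x
        means.append(total / i)
    return means

n = 100_000
marbles = [1 if random.random() < 0.3 else 0 for _ in range(n)]
pareto = [random.paretovariate(1.1) for _ in range(n)]

m_means = running_mean(marbles)
p_means = running_mean(pareto)
print(f"marble running mean after {n} draws: {m_means[-1]:.3f} (true 0.300)")
swing = max(p_means[n // 2:]) - min(p_means[n // 2:])
print(f"Pareto running-mean swing over the second half: {swing:.3f}")
```

The marble jar rewards patience with certainty; the Mandelbrotian jar does not, no matter how many draws you take.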
I apologize if I lean too heavily on fiction to make my points, but another amazing author answers this question much more poetically than I could. Just in the ‘quotes’ from wise leaders in the introductions to his historical-fantasy series, Jim Butcher tells stories of the rise and fall of civilizations. First, on cumulative meaning:
“If the beginning of wisdom is in realizing that one knows nothing, then the beginning of understanding is in realizing that all things exist in accord with a single truth: Large things are made of smaller things.
Drops of ink are shaped into letters, letters form words, words form sentences, and sentences combine to express thought. So it is with the growth of plants that spring from seeds, as well as with walls built from many stones. So it is with mankind, as the customs and traditions of our progenitors blend together to form the foundation for our own cities, history, and way of life.
Be they dead stone, living flesh, or rolling sea; be they idle times or events of world-shattering proportion, market days or desperate battles, to this law, all things hold: Large things are made from small things. Significance is cumulative–but not always obvious.”
Second, on the importance of individuals as causes:
“The course of history is determined not by battles, by sieges, or usurpations, but by the actions of the individual. The strongest city, the largest army is, at its most basic level, a collection of individuals. Their decisions, their passions, their foolishness, and their dreams shape the years to come. If there is any lesson to be learned from history, it is that all too often the fate of armies, of cities, of entire realms rests upon the actions of one person. In that dire moment of uncertainty, that person’s decision, good or bad, right or wrong, big or small, can unwittingly change the world.
But history can be quite the slattern. One never knows who that person is, where he might be, or what decision he might make.
It is almost enough to make me believe in Destiny.”
If you are not convinced by the wisdom of fiction, put down your marble jar, and do a real-world experiment. Take 100 people from your community, and measure their heights. Then, predict the mean and distribution of height. While doing so, ask each of the 100 people for their net worth. Predict a mean and distribution from that as well. Then, take a gun, and shoot the tallest person and the richest person. Run your model again. Before you look at the results, tell me: which one do you expect shifted more?
I seriously hope you bet on the wealth model. Height, like marble-jar samples, is normally distributed. Wealth follows a power law, meaning that individual datapoints at the extremes have outsized impact. If you happen to live in Seattle and shoot a tech CEO, you may lower the group’s mean net worth by more than the average net worth of the other 99 people!
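A quick (bloodless) simulation of the thought experiment makes the asymmetry visible. The distributions and parameters here are made up for illustration–Normal for height, a Pareto power law for net worth:

```python
import random

random.seed(0)

# 100 simulated community members: heights ~ Normal(170 cm, 10 cm),
# net worths ~ power law (most are modest, a few are enormous).
heights = [random.gauss(170, 10) for _ in range(100)]
wealths = [50_000 * random.paretovariate(1.16) for _ in range(100)]

def mean(xs):
    return sum(xs) / len(xs)

def shift_after_removing_max(xs):
    """Relative change in the mean after removing the largest value."""
    trimmed = sorted(xs)[:-1]
    return abs(mean(trimmed) - mean(xs)) / mean(xs)

h_shift = shift_after_removing_max(heights)
w_shift = shift_after_removing_max(wealths)
print(f"mean height shifts by {h_shift:.2%}; mean net worth by {w_shift:.2%}")
```

Removing the tallest person barely nudges the height model; removing the richest can move the wealth model by orders of magnitude more, because a single extreme observation carries a large share of the total.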
So, unlike the Procrustean Bed (part 1 of the Seldon Fallacy), the Marble Jar (part 2 of the Seldon Fallacy) is not always a fallacy: some systems really are Gaussian, and for them the assumption holds. However, many consequential systems–including earnings, wars, governmental spending, economic crashes, bacterial resistance, inventions’ impacts, species survival, and climate shocks–are non-Gaussian, and thus the impact of a single individual action can blow up the model.
The crazy thing is, Asimov himself contradicts his own protagonist in what I consider his magnum opus. While the Foundation series keeps alive the myth of the predictive simulation, my favorite of his books–The End of Eternity (spoilers)–is a magnificent destruction of the concept of a ‘controlled’ world. For large systems, the book is a death knell even of predictability itself. The Seldon Fallacy–that a simplified, non-chaotic model can predict a complex, chaotic reality, and that size enhances predictability–is shown, through the adventures of Andrew Harlan, to be riddled with hubris and catastrophic risk. I cannot reduce Asimov’s complex ideas into a simple summary, for I may decapitate his central model; please read the book yourself. I will say, I hope you take to heart Asimov’s larger lesson on prediction: it is not only impossible, but undesirable. And please, let’s avoid staking any of our futures on today’s false prophets of predictable randomness.
In medicine, randomized controlled trials are the most highly regarded type of primary study, as they separately track treatment and control groups to determine whether an observed effect is actually caused by the intervention.
Bias, the constant bane of statisticians, can be minimized further by completing a blinded trial. In a single-blinded trial, the patient population is not informed which group they are in, to prevent knowledge of therapy from impacting results. Placebos are powerful, so blinding has helped identify dozens of therapies that are no better than sugar pills!
However, knowledge can contaminate studies in another way–through the physicians administering the therapies. Bias can be further reduced by double blinding, in which the physicians are also kept in the dark about which therapy was administered, so that their knowledge does not contaminate their reporting of results. In a double-blind trial, only the study administrators know which therapy is applied to each patient, and sometimes an independent lab is tasked with analysis to further limit bias.
Overall, these blinding mechanisms are meant to make us more certain that the results of a study are reflective of an intervention’s actual efficacy. However, medicine is not the only field where the efficacy of many interventions is impactful, highly debated, and worthy of study. Why, then, do we not have blinded studies in political economy?
We all know that randomized controlled trials are pretty much impossible in political economy. North/South Korea and West/East Germany were amazing accidental trials, but we can still hope that politicians and economists make policies that can at least be tracked to determine their ‘change from baseline’ even if we have no control group. Because of how easy it is to harm socioeconomic systems and sweep the ruinous results under the rug, I personally consider it unethical to intervene in a complex system without careful prior consideration, and straight up evil to do so without plans to track the impact of that intervention. So, how can politicians take an ‘evidence-based approach’ to their interventions?
I think that, in recent years, politicians–especially in the US, and especially liberals and COVID-reactionaries–have come up with an amazing new experimental method: the triple-blinded study. Examples include the ACA, the ARRA, and the recent $3 trillion stimulus package. In a triple-blinded study, politicians carefully draft bills so that they are (1) too long for anyone, especially the politicians themselves, to read; (2) filled with a mish-mash of dozens of strategies implemented simultaneously or delegated vaguely to administrative agencies; and (3) devoid of pre-specified metrics by which the policy will be judged, thus blinding everyone to any useful study of signal and response.
I am reminded of one of the most painful West Wing episodes ever made, in which President Bartlet is addressing an economic crisis and fielding dozens of suggestions from experts–without being able to choose among the candidate interventions. Donna, assistant to his Deputy Chief of Staff, tells a parable about how her grandmother would use ‘a little bit of this, a little bit of that’ to cure minor illnesses. Inspired, Bartlet adopts a policy of ALL suggested economic interventions, thus ensuring that we try everything–and learn nothing. I shudder to think that this strategy was ever broached publicly…and copied from fiction into reality.
In this way, politicians have cleverly enabled us to reduce the bias caused by any knowledge of the intervention or its impact. The patients (citizens), physicians (politicians), and study administrators (economists?) are all kept carefully in the dark so that none of them can know how a policy impacted the economy. Thus, anyone debating any of these topics is given the full freedom to invent whatever argument they want, cherry-pick any data they want, and continue peddling their politics without ever being called to task by the data.
Even more insanely, doctors are held not only to the standard of evidence-based medicine, but also to that of the precautionary principle–where passivity is preferred to action and novel methods are treated with special scrutiny. “Evidence-based policy,” on the other hand, is a buzzword rather than an actual practice aligned with RCTs, and any politician who actually followed the precautionary principle would be written off as ‘do-nothing.’ Thus, we carefully keep both evidence and the principle of ‘do no harm’ far from the realm of political action, and continue a general practice across politics of the blind making sure that they lead the blind.
In sum, political leaders, please ignore Donna. Stop intentionally blinding us to policy impacts. Stop doing triple-blinded studies with the future of our country. Sincerely, all data-hounds, ever.
Friedman: A business is obligated to maximize shareholder value, nothing more.
Everyone else: That’s crazy! Profit maximizing businesses roll over all sorts of other stakeholders and fail to live up to basic ethical standards.
This relates to a complaint I’ve made before. Markets are good at generating prices that reflect aggregate views on the relative scarcity/importance of various goods. Markets aren’t good at charity. To roll other things in there means a good old fashioned price is now a price plus an obligation to do some moral calculus in how we each interact with the complex adaptive system that is the world economy. It’s a recipe for disaster.
So what do we do? We recognize the gap between a world where Friedman’s advice is reasonable and the world we live in, then we figure out how to close that gap. That Friedman’s world doesn’t match ours says more about our world than it does about Friedman’s argument.
Rather than move Friedman’s starting point by trying to juggle competing demands of various stakeholders without markets, we should think about the legal framework these stakeholders are acting in.
If we refine our understanding of who has what rights to make what decisions, we’ll see that the reason profit maximizers (and vote maximizers) sometimes do bad things is that it’s the best choice available to them. The answer isn’t to say “businesses lobby government, therefore they shouldn’t respond to incentives!” It’s to say “therefore we should restrict opportunities to seek rents!”
Coase wasn’t trying to tell us that spillovers don’t matter. He was trying to tell us that transaction costs do matter and whenever they’re present, we need to be careful in allocating rights that have spillover effects. By the same token, we should think of Friedman’s advice as saying “in a perfect world, corporations should maximize profits, but the world needs work.”
I believe in gravity. I don’t believe in the flat earth conspiracy. But I haven’t done the work to verify either. Instead, I trust that some social process of “science” has done a reasonably good job of assembling and verifying the knowledge that keeps my house from collapsing or my car from exploding.
There are some areas where I’m qualified to hold an opinion. But honestly, it’s a pretty small set of things, subject to an infinity of caveats. The things I “know” are really things I believe because they were taught to me by sources I trust. It’s an imperfect system, but it works tolerably well, and it frees up my time to do things like working and having a life. I’m not going to “do my research” because that would mean not doing something with higher marginal benefit.
What Trumpians realize is that sowing distrust in sources of knowledge gives them an advantage in the marketplace of ideas. What’s worse is that they’re not wrong about the fundamental ambiguity of knowledge. I haven’t got enough time, energy, or inclination to verify that the sun will in fact rise again tomorrow. I can’t scientifically test the veracity of claims of what sorts of noodley appendages touch us all.
Do I know that Joe Biden is a better candidate than Trump? If I’m being honest, the answer is no. I’m not terribly comfortable with that, so I might decide against being honest. I know enough to verify that at least one of the candidates is a turd sandwich of a human being.
What I know for sure about this mess is that the problems are complex. Even a well funded team of experts with broad powers would have infinite problems sorting things out. And the sorts of people we try to put in power are less capable than well funded teams of experts with broad powers.
As always, I hope we learn a valuable lesson here. Complex systems are always going to confound our simple human sensibilities. Given the complexity of society, we should avoid aggregating so much power into the hands of politicians–especially when “the other guy” sometimes gets hold of that power.