Hyperinflation and trust in Ancient Rome

Since it hit 1,000,000% in 2018, Venezuelan hyperinflation has not only continued but accelerated. Venezuela’s annual inflation recently hit 10 million percent, as the IMF had predicted; prices jumped so quickly that the Venezuelan government struggled to print its constantly-inflating money fast enough. This may seem unbelievable, but peak rates of monthly inflation were higher still in Zimbabwe (80 billion percent per month) in 2008, in Yugoslavia (313 million percent per month) in 1994, and in Hungary, where inflation reached an astonishing 41.9 quadrillion percent per month in 1946.

The continued struggles to reverse hyperinflation in Venezuela follow a pattern that has played out dozens of times, mostly in the 20th century, including attempts to “reset” the currency with fewer zeroes, a return to barter, and a turn to other countries’ currencies for transactions and for storing value. Hyperinflation’s consistent characteristics, including its roots in discretionary/fiat money, large fiscal deficits, and imminent solvency crises, are outlined in an excellent in-depth book by Peter Bernholz covering 30 episodes of hyperinflation. I recommend the book (and the Wikipedia page on hyperinflations) to anyone interested in this recurrent phenomenon.

However, I want to focus on one particular inflationary episode that I think receives too little attention as a case study in how value can be robbed from a currency: the debasement and inflation of 3rd-century AD Rome. This was an iterative experiment by Roman emperors in reducing the precious-metal content of their coins, driven largely by the financial needs of the army and of countless usurpers, and it holds some very interesting lessons for leaders facing uncontrollable inflation.

The Ancient Roman Currency

The Romans encountered a system with many currencies, largely based on Greek precedents in weights and measures, and over hundreds of years they iteratively increased imperial power by taking over municipal mints and having them strike the emperor’s gold (aureus) and silver (denarius) coins (copper/bronze coins also circulated, but they had negligible value and their minting was less centralized). Minting was intimately related to army leadership: mints tended to follow armies to the front, and the major method of distributing new currency was payment of the Roman army. Under Nero, the aureus was 99% gold and the denarius was 97% silver, matching the low debasement of eastern/Greek currencies and holding a commodity value roughly commensurate with their value as currency.

The Crisis of the Third Century

However, a major plague in 160 AD, followed by auctions of the imperial seat, major military setbacks, usurpations, the loss of gold from the mines in Dacia and of silver from conquest, and the high cost of the bread dole, drove emperors from 160 to 274 AD to iteratively debase their coinage (by reducing the size and purity of gold coins and by reducing the silver content of coins from 97% to under 2%). A major bullion shortage (of both gold and silver), combined with the demands of the army and of imperial maintenance, meant that a government facing fiscal deficits, huge costs of appeasing the army and the urban populace, and diminishing faith in its leaders vastly increased the monetary volume. This not only reflects Bernholz’s theories of the causes of hyperinflations but also parallels the high deficits and diminishing public credit of the Maduro regime.

Inflation and debasement

[Figure 1]

Unlike modern economies, the Romans did not have paper money, which meant that to “print” money they had to debase their coins. Whether the emperor or his subjects understood that the value coins represented went beyond their commodity value has been hotly debated in academic circles, and the debasement of the 3rd century may be the best “test” of whether they understood value as commodity-based or as a representation of social trust in the issuing body and in other users of the currency.

[Figure 2]

The silver content of coins decreased by over 95% (gold content decreased more slowly, at an exchange-adjusted rate shown in Figure 1) from 160 to 274 AD, but inflation over this period was only slightly over 100% (see Figure 2, which shows the prices of wine, wheat, and donkeys in Roman Egypt over that period, as attested by papyri). If inflation had followed the commodity value of the coins, it would have been roughly 2,000%, since the coins of 274 AD had 1/20th of the commodity value of the coins of 160 AD. This is a major gap that can only be explained by some other means of maintaining currency value, namely fiat.
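To spell out the back-of-the-envelope arithmetic behind that comparison, here is a minimal sketch in Python using only the approximate figures quoted above (nothing here comes from the papyri themselves):

    # Commodity-implied vs. observed inflation, 160-274 AD (approximate figures from the text)
    silver_ratio = 20                              # a coin of 160 AD held ~20x the silver of a coin of 274 AD
    implied_inflation = (silver_ratio - 1) * 100   # ~1,900%, i.e. roughly 2,000%, if value tracked silver
    observed_inflation = 100                       # prices only roughly doubled over the same period
    print(implied_inflation, observed_inflation)   # 1900 vs 100

The ten-fold gap between those two price paths is the space that, on this argument, only fiat-style trust can fill.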

Effectively, the gradual debasement did not go unnoticed by users of the currency (Gresham’s Law continued to influence hoards into the early 3rd century), but the inflation of prices also did not match the change in commodity value, and in fact lagged behind it for over a century. This shows the influence of market forces (as monetary volume increased, so did prices), but it soundly punctures the idea that coins at the time were simply a convenient way to store silver: the value of the coins lay in trust in the emperor and in the community’s recognition of value in imperial currency. Especially as non-imperial silver and gold currencies disappeared, the emperor no longer had to maintain an equivalence with eastern currencies, and despite enormous military and prestige-related setbacks (including an emperor being captured by the Persians and a single year in which six emperors were recognized, sometimes for less than a month), trade within the empire continued without major price shocks following any specific event. This shows that trust in the solvency and currency management of emperors, and trust that merchants and other members of the market would recognize coin values during exchanges, was maintained throughout the Crisis of the Third Century.

Imperial communication through coinage

This idea that fiat and social trust maintained coin values above commodity value is bolstered by the fact that coins were a major method of communicating imperial will, trust, and power to subjects. Even as Roman coins began to be rejected in trade with outsiders, legal records from Egypt show that the official values of coins were accepted within the army and bureaucracy (including a 1:25 ratio of aureus-to-denarius value) so long as they depicted an emperor who was not considered a usurper. Amazingly, even the two major portions of the empire that split off, the Gallic Empire and the Palmyrene Empire, continued to represent their affiliation with the Roman emperor through their coins: their leaders minted coins with their own face on one side and the Roman emperor (their foe, but the trusted face behind Roman currency) on the other, and imitated the symbols and imperial language of Roman coinage. Despite this, and despite the fact that Roman coins were more debased (lower in commodity value) than Gallic ones, Roman coins tended to be accepted in Gaul while the reverse was not always true.

Interestingly, the aureus, which was used primarily by the upper social strata and to pay soldiers, saw far less debasement than the more “common” silver coins (which were so heavily debased that the denarius was replaced with the antoninianus, a coin with barely more silver that was supposed to be worth twice as much, to maintain the nominal 1:25 gold-to-silver rate). This may show that the army and upper social strata were either suspicious enough of emperors, or powerful enough, that they had to be appeased with more “commodity backing.” This differential bimetallic debasement is possibly unique in history in the magnitude of the gap between nominal and commodity value across two interchangeable coins, and it may show that trust in imperial fiat was incomplete and perhaps varied across social hierarchies.

Collapse following Reform

In 274 AD, after reconquering both the Gallic and the Palmyrene Empire, with an excellent reputation across the empire and in the fourth year of his reign (long by 3rd-century standards), the emperor Aurelian recognized that the debasement of his currency was against imperial interests. He decided to double the amount of silver in a new coin that would replace the antoninianus, and he bumped up the gold content of the aureus. Alongside this reform, and because of the demands of ever-larger bread doles to the urban poor, Aurelian also took far more taxes in kind and far fewer in money. Given that this was an imperial reform to increase the value of the currency (at least in terms of its silver and gold content), shouldn’t it logically have led to deflation, or at least halted the inflation measured over the previous century?

In fact, the opposite occurred. It appears that between 274 AD and 275 AD, under a stable emperor who had brought unity and peace and who had restored some commodity value to the imperial coinage, the purchasing power of the currency collapsed by over 90% (equivalent to roughly 1,000% inflation) within several months. After a century in which inflation was roughly 3% per year despite debasement (a rate that was unprecedentedly high at the time), the currency simply collapsed in value. How could a currency reform that restricted the monetary volume have such a paradoxical effect?
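For clarity on the conversion between purchasing power and inflation used here (my own arithmetic, not a figure from the sources): if a currency loses a fraction L of its purchasing power, prices rise by a factor of 1/(1 - L).

    # Converting a loss of purchasing power into an equivalent rate of inflation
    def implied_inflation(purchasing_power_loss):
        price_factor = 1 / (1 - purchasing_power_loss)   # how much prices must rise
        return (price_factor - 1) * 100                  # expressed as a percentage

    print(round(implied_inflation(0.90)))   # 900 -- a loss of "over 90%" approaches ~1,000% inflation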

Explanation: Social trust and feedback loops

In a paper I published earlier this summer, I argue that this paradoxical collapse occurred because Aurelian’s reform was a blaring signal from the emperor that he did not trust the fiat value of his own currency. Though he was promising to increase the commodity value of coins, he was also stating, implicitly (and explicitly, by no longer accepting taxes in coin), that the fiat value maintained throughout the 3rd century by his predecessors would no longer be recognized by the imperial bureaucracy in its transactions. For army payment and all other transactions, the social trust in the emperor and in other market participants that had undergirded the value of money would now be ignored by the issuing body itself. Once the issuer (and a major market actor) abandoned fiat currency and stated that newly minted coins would have better commodity value than previous coins, the market rationally answered by moving quickly toward the commodity value of the coins and abandoning the idea of fiat.

Furthermore, not only were taxes taken in kind rather than in coin, but there was a widespread return to barter as people transacting tried to avoid holding coins as a store of value. This pushed up the velocity of money, as people abandoned coins as a store of value and paid higher and higher amounts for commodities in order to get rid of their currency. The demonetization and return to barter also shrank the share of the market transacted in currency, meaning that ever more coins (mostly aureliani, the new coin, and antoniniani) were chasing fewer goods. Under the quantity theory of money, the high velocity of money also contributed to inflation, and an unholy feedback loop set in: decreasing value caused distrust, which caused demonetization and higher velocity, which led to further decreases in value and more distrust in coins as stores of value, until all fiat value was driven out of Roman coinage.
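A minimal sketch of the quantity-theory mechanism at work here, with purely illustrative numbers (chosen to show the direction of the effect, not to estimate anything about 274 AD):

    # Quantity theory of money: M * V = P * Q, so the price level P = M * V / Q.
    def price_level(M, V, Q):
        return M * V / Q

    before = price_level(M=100, V=5, Q=500)   # baseline: coin stock, velocity, monetized output
    after = price_level(M=100, V=8, Q=300)    # same coin stock, higher velocity, smaller monetized market
    print(after / before)                     # ~2.7x higher prices without any new minting

Even holding the number of coins fixed, rising velocity and a shrinking monetized market are enough to push prices sharply upward, which is the feedback loop described above.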

Aftermath

This was followed by Aurelian’s assassination, and there were several monetary collapses from 275 AD onward as successive emperors attempted, without success, to recreate the debased/fiat system of their predecessors. This continued through the reign of Diocletian, whose major reforms got rid of the previous coinage and included the famous (and famously failed) Edict on Maximum Prices. Inflation continued to be a problem through 312 AD, when Constantine re-instituted commodity-based currencies, largely by seizing the assets of rich competitors and liquidating them to fund his army and public donations. The impact of that sort of private seizure is a topic for another time, but the major lesson of the aftermath is that fiat, once abandoned, is difficult to restore, because the very trust on which it was based has been undermined. While later 4th-century emperors managed to debase again without major inflationary consequences, and Byzantine emperors did the same to some extent, the Roman currency was never again divorced from its commodity value, and fiat currency would have to wait centuries for the next major experiment.

Lessons for Today?

While this all makes for interesting history, is it relevant to today’s monetary systems? The sophistication of modern markets and communication renders some of the signalling discussed above rather archaic and quaint, but the core principles stand:

  1. Fiat currencies are based on social trust in other market actors, but also on the solvency and rule-based systems of the issuing body.
  2. Expansions in monetary volume can lead to inflation, but slow transitions away from commodity value are possible even for a distressed government.
  3. Undermining a currency can have different impacts across social strata and certainly across national borders.
  4. Central abandonment of past promises by an issuer can cause an inflationary collapse of its currency through demonetization, increased velocity, and distrust, regardless of intention.
  5. Once rapid inflation begins, it creates feedback loops that accelerate inflation and are hard to stop.

The situation in Venezuela continues to offer lessons to issuing bodies about how to manage hyperinflations, but the major lesson is that these cycles should be avoided at all costs because of the difficulty of reversing them. Modern governments and independent currency issuers (cryptocurrencies, stablecoins, etc.) should study how previous currencies built trust and recognition of value in their early stages, and how these can be destroyed by a single action against the promised and perceived value of a currency.

Atomistic? Moi?

I have written a brief paper entitled ‘Hayek: Postatomic Liberal’ intended for a collection on anti-rationalist thinkers. For the time being, the draft is available from SSRN and academia.edu. Here are a couple of snippets:

Hayek offers a way of fighting the monster of Rationalism while avoiding becoming an inscrutable monster oneself. The crucial move, and in this he follows Hume, is to recognize the non-rational origins of most social institutions, but to treat this neither as grounds for dismissing those institutions as unsound, nor as an excuse to retreat from reason altogether. Indeed, reason itself has non-rational, emergent origins but is nevertheless a marvelous feature of humanity. Anti-rationalist themes that appear throughout Hayek’s work include: an emphasis on learning by processes of discovery, trial and error, feedback and adaptation rather than knowing by abstract theorizing; and the notion that the internal processes by which we come to a particular belief or decision are more complex than either a scientific experimenter or our own selves in introspection can know. We are always, on some level, a mystery even to ourselves…

Departing from Cartesian assumptions of atomistic individualism, this account can seem solipsistic. When we are in the mode of thinking of ourselves essentially as separate minds that relate to others through interactions in a material world, then it feels important that we share that world and are capable of clear communication about it and ourselves in order to share a genuine connection with others. Otherwise, we are each in our separate worlds of illusion. From a Hayekian skeptical standpoint, the mind’s eye can seem to be a narrow slit through which shadows of an external world make shallow, distorted impressions on a remote psyche. Fortunately, this is not the implication once we dispose of the supposedly foundational subject/object distinction. We can recognize subjecthood as an abstract category, a product of a philosophy laden with abstruse theological baggage… During most of our everyday experience, when we are not primed to be so self-conscious and self-centered, the phenomenal experience of ourselves and the environment is more continuous, flowing and irreducibly social in the sense that the categories that we use for interacting with the world are constituted and remade through interactions with many other minds.

The mythology of Lochner v. New York

In the highly competitive world of most misunderstood Supreme Court decisions, Lochner v. New York sits high on the list. The reason is simple enough: it has undergone a transcendent ascent to the world of abstraction, where it now embodies the platonic essence of a black-robed cadre of old, straight, white men hankering to smash the plebeian’s face in the dirt.

Yesterday, the Intelligencer–a publication of New York Magazine–dragged out these old tropes with the galumphing rhetoric typical of someone simply parroting a battered playbook with no real concern for its accuracy. The article is entitled, “Conservatives Want a ‘Republic’ to Protect Privileges.” Its basic premise is to push back against the anti-democratic tendencies of those who oppose direct, untrammeled democracy.

The article lists several “limitations on democracy to justify and even expand privilege.” The second references the conservative legal movement’s supposed attempt to resurrect the “Lochner era,” in order to protect the wealthy from democratic majorities.

First off, it’s wrong to say that the “conservative legal movement” wants to revive Lochner. Both progressive and conservative jurists are generally united in their rejection of Lochner. Robert Bork, a thoroughly majoritarian conservative, railed against the case, as did Justice Antonin Scalia. Granted, this is because the conservative legal movement, sadly, has largely embraced the progressive juridical project of the 1930s, which was devoted to weakening the judiciary in order to shove the New Deal down the nation’s throat.

Second, Lochner’s many detractors almost never grapple with the facts of the case. As a result, they frequently misunderstand it. Here’s what actually happened. In the early 1900s, New York enacted a nitpicky law that saddled bakeries with an avalanche of minute requirements–limits on ceiling heights, limits on the kind of floor, and the demand to whitewash the walls every three months, among other things. But the provision dealt with in Lochner was this: “No employee shall be required or permitted to work in a biscuit, bread or cake bakery or confectionary establishment more than 60 hours in one week or more than 10 hours in any one day.”

A Bavarian immigrant named Joseph Lochner who owned a Utica bakery was criminally indicted for violating this law. Aman Schmitter, another immigrant, lived with his family above the bakery and worked for Joseph. Aman happily worked over sixty hours a week in order to care for his family and increase his skills, and he said so in a sworn affidavit.

It is undisputed that New York’s law was not about health, safety, or protecting workers, though New York tried to say so at the time. Rather, New York passed the law at the behest of powerful bakeries and baker unions in a patent attempt to crush small, family-owned bakeries that relied upon flexible work schedules. It gets worse–the law intentionally targeted immigrant bakeries in particular, which tended to be of the small variety that leaned on overtime. The state’s legal brief contained a detestable line that progressives today would certainly associate with Trump: “there have come to [New York] great numbers of foreigners with habits which must be changed.” This is the law that progressives who hate Lochner are defending.

In a 5-4 decision, the Supreme Court thankfully struck down this law that was passed to serve the powerful and crush a weak immigrant population. Put that way, it seems startling that anyone today would wish to stand up for this piece of anti-immigrant, protectionist garbage.

But then again, Lochner is no longer about Lochner. It’s about rejecting a mythical “Lochner era.” Progressives believe that Lochner represented an entire ecosystem of turn-of-the-century jurisprudence in which corrupt judges were smothering the will of the people wholesale. Turns out that era never existed. Law professor David Bernstein has examined old court records concerning state exercises of their police power during that time period and found that there simply was no lengthy period in which courts were whack-a-moling every piece of social legislation that dared to lift its head.

To the extent that courts of that era did strike down social legislation under the liberty of contract, they did so not to serve the wealthy, but to protect weak minorities–which is of course why robust judicial review exists in the first place. For instance, the Illinois state supreme court struck down a deeply misogynistic law limiting women’s maximum work hours. The Court used the same liberty-of-contract reasoning as Lochner, arguing that women “are entitled to the same rights under the Constitution to make contracts with reference to their labor as are secured thereby to men.” And in Bailey v. Alabama, the wicked Lochner Court struck down a Jim Crow law that created a presumption of fraud when a worker quit after getting an advance payment. The law was aimed at penalizing black workers–an attempt essentially to revive peonage. Do progressives really want to own up to disagreeing with these “Lochner era” precedents? Somehow I doubt it.

Lochner did not, as Lochner’s enemies love to claim, replace the legislature’s judgment with the judgment of the Court. Instead, the Court was willing to look skeptically at the legislature’s motives and demand that the legislature do its work and show that a law burdening a basic right is necessary. The New York law failed that test spectacularly.

Of course, Lochner’s legacy does demand that courts counter democratic will when it conflicts with fundamental rights. Alexander Bickel famously called this the counter-majoritarian difficulty, something that has preoccupied the judiciary for a century. If you really care about minorities, though, you might consider Judge Janice Rogers Brown’s insight: “But the better view may be that the Constitution created the countermajoritarian difficulty in order to thwart more potent threats to the Republic: the political temptation to exploit the public appetite for other people’s money–either by buying consent with broad-based entitlements or selling subsidies, licensing restrictions, tariffs, or price fixing regimes to benefit narrow special interests.”

In any case, if progressives continue to take a Pollyanna view of unfettered democracy despite the evidence, they should at least bother to get the facts right on Lochner.

 

Thoughts on ‘For Method’

Our project hasn’t seen much public-facing action, but it’s still happening. For my part, I have (so far) read Lakatos’s lectures that were meant to form the basis for his joint project with Feyerabend.

Before I jump into it, let me start with my favorite quote:

The social sciences are on a par with astrology, it is no use beating about the bush. (Funny that I should be teaching at the London School of Economics!)

Imre Lakatos, For and Against Method, p. 107

These lectures were an entertaining evisceration of some old (and still prevalent) superstitions about the functioning of science, plus Lakatos’s own view on how science actually works. I think his picture (which I’ll describe below) is a pretty good one, but doesn’t actually solve the demarcation problem.

The Big Question (TBQ) is this: how do we separate good science from bad? Lakatos presents three main schools of thought (besides his own):

  1. Demarcationism — a set of schools of thought that share a belief in something like an objective answer to TBQ.
  2. Authoritarianism — the belief that there are some people who can identify good science, but can’t necessarily enunciate their positions.
  3. Anarchism — which argues (according to Lakatos) that there is no good or bad science.

He quickly rejects the various flavors of Demarcationism. These schools of thought are either logically impossible (e.g. inductivism), inconsistent with the history of science, and/or too subjective. They’re popular caricatures of science–cartoons with heroic scientists battling ignorance, limited only by funding. But they aren’t true.

For example, Falsificationism (which is alive and well, half a century later, in the minds of many practicing scientists) tells us that scientists are only swayed by disconfirmatory evidence. But in practice scientists tend to ignore anomalies (i.e. disconfirmatory evidence) in the hope that they’ll be explained away later–and they tend to be swayed by confirmatory evidence in spite of Falsificationist priors.

All told, Demarcationists run into the problem of not being able to come up with a theory that doesn’t make significant errors such as classifying Newton as bad science.

On the far side of the spectrum are the anarchists. Far from believing in any formula, criteria, or line in the sand, they say TBQ misses the point entirely. There is no such thing as “good” science or “bad” except from the perspective of whatever the current orthodoxy says. For the objective-truth-seeking philosopher, science ultimately boils down to “anything goes!”

For Lakatos, the anarchists have basically surrendered in the face of the demarcation problem. But it’s not clear to me that Lakatos hasn’t joined them. He’s got his progression criterion (more on that later), but can we really pin that down in any objective way? Motterlini seems to think Feyerabend thought Lakatos was really an anarchist after all, and I’m inclined to agree based on what (little) I’ve seen. Lakatos offers heuristics, but makes no guarantees that any formula will work reliably.

Let me come back to Authoritarianism after describing Lakatos’s theory of research programs.

A research program is (if I’m understanding this correctly) basically a mix of scientific framework and community. Austrian Economics is a research program composed of a common theoretical view (with some disagreements), a network of citations, and a social network across space and backwards through time. Austrian Econ contains smaller programs within it: entrepreneurship, political economy, history of thought, capital theory, etc.

Any given research program (RP) may look relatively “good”(ish) or “bad” at any given time, but the future is always uncertain. I wouldn’t bet money on it, but how am I to prove that astrology won’t turn out to be true at some point? It’s the Grue problem writ large.

What we can evaluate is whether an RP is “progressing” or “degenerating.” In the former case it’s gaining predictive power. In the latter case it’s turning into an ad hoc mess in the face of evidence.

It’s up to individual scientists to make the entrepreneurial [my word, not his] decision to invest some effort in whichever program they think is promising. The natural move would be to join a progressing RP. But there might be an opportunity to save a degenerating RP.

In other words, Lakatos wants to describe what science is doing, but he wants to avoid making value judgements about unknown futures. Rather than draw a demarcation line, he offers a way to ask whether an RP is going in the right direction (right now or retrospectively).

Let’s digress a minute and consider objective reality. Putting aside Cartesian skepticism, it seems reasonable to take the existence of an objective universe as a basic axiom. But just as surely, that objective universe has far more complications than humanity will ever be able to fully account for. The universe has more dimensions than us; what did you expect? In considering science’s ability to grasp objective reality, we have to understand that there’s always going to be some degree of (radical) uncertainty, even at the best of times.

“Good” science is that science that gets us closer to capital-T Truth. But we’ll never be in the omniscient position necessary to conclusively judge a bit of science as actually being good or not.

I think Lakatos and I share a sense that there is this objective reality that we can move towards. I think we also share an understanding that this objective reality is fundamentally inaccessible. I also share his position that the demarcationists are wrong. But I’m not ready to give up on the anarchists or the authoritarians.

Authoritarians basically argue that there is good and bad science and that they can identify which is which, even if they can’t explain how. Lakatos deals mostly with the uglier side of this school of thought, but misses a nicer side. That nicer version, ironically, includes him telling us things like astronomy is more valid than astrology. To be fair, he hedges by acknowledging that the future is always uncertain… maybe in 1000 years astrology switches from a degenerating body of knowledge to a progressive one.

Hayek’s notion of tacit knowledge applies to scientific knowledge. The tacit knowledge of scientists allows them to tell future scientists things like “don’t even bother with alchemy.”

Still, just because you know something, doesn’t mean it’s right. We all “know” that Roman soldiers spoke with English accents because that’s how they’ve always been portrayed in movies. Try imagining Gladiator with Italian accents; it doesn’t work!

Sometimes authorities give us useful advice like distinguishing between astronomy and astrology. But sometimes they turn out to be wrong (after encouraging us to pursue eugenics in the meantime).

Authority is a useful guidepost, and represents the (current) structure of knowledge. I am not willing to give up my own authority because when it comes to economics, I know it’s not a matter of “anything goes!”

Reading Lakatos, I can’t quite settle on a camp between the anarchists and authoritarians. The anarchists are literally correct, but the authoritarians are able to actually make bets on a reality I think exists.

We’re all in the position of the blind men and the elephant. When someone tells me an elephant is like a tree, I think it behooves me to a) accept that as evidence about what the world is like, and b) take it with a grain of salt. The bumper sticker version of my stance might be “the Truth is out there… and it’s bigger than you think.”

So what about Lakatos? It’s all a bit rusty at this point so please push back in the comments. But here’s my tl;dr:

  • Don’t trust anyone who tells you they’ve got the formula for “good science.”
  • The way science actually works (as opposed to the mythology we’re taught in high school science) is that RPs build up complex bodies of knowledge around a few core postulates. Normal science is concerned with attacking the knowledge that isn’t in that core.
  • Scientific progress (e.g. the shift from Newtonian to Einsteinian physics) isn’t an Occam process… we’re not eliminating anomalies, but changing the set of anomalies we deal with.
  • The mark of bad science is adding ad hoc theory that hand-waves away anomalies but doesn’t generalize to describing novel facts (if Nassim Taleb were in the audience, he’d be shouting “via negativa!” right now).

Financial History to the Rescue: The Harder Money Wins Out

This article is part of a series on bitcoin (and bitcoiners’) arguments about money and particularly financial history. See also:

(1): ‘On Bitcoiners’ Many Troubles’, Joakim Book, NotesOnLiberty (2019-08-13)
(2): ‘Rothbard’s First Impressions on Free Banking in Scotland Were Correct’, Joakim Book, AIER (2019-08-18)
(4): ‘Bitcoin’s Fixed Money Supply Is a Weakness’, Joakim Book, AIER (2019-08-28)

The great monetary economist and early Nobel Laureate John Hicks used to say that monetary theory “belongs to monetary history, in a way that economic theory does not always belong to economic history.”

Today I’m going to illustrate exactly that with respect to the Bitcoiner’s (mistaken) progressivism in another episode of Financial History to the Rescue.

In the game of monetary competition, the Bitcoin maximalists posit, the “harder” money always wins out. I’ve been uneasy with this statement, as (1) it isn’t clear to me what “harder” money (or money’s “hardness”) really means, and (2) it probably isn’t historically true. So we end up with something that’s false, or vague – or both! Clearly unsatisfactory. As I pointed out in my overview post to this series, financial and monetary history is almost always more nuanced than such simple generalizations allow.

Luckily enough, Saifedean Ammous, at the Soho Forum debate last week, inadvertently provided me with a usable definition – and I intend to use it to debunk the idea that money’s history is one of increasing hardness. Saif repeatedly claimed that monetary history, before the advent of central banking, showed us that the harder money always won out: whenever two monetary networks clashed (shells and silver; wampum and gold), the “harder” money won. The obvious implication is that Bitcoin, being the “hardest” money, will similarly win out. Right off the bat, there are some serious problems here.

First, it’s not altogether clear that such “This time is not different” arguments apply. Yes, economic history teaches us not to discount what seem to be long-standing or universally applicable phenomena – but also to take notice of the institutional setting in which they occur. Outcomes specific to, say, the Classical Gold Standard rarely generalize to our hyper-modern financial markets with inflation-targeting central banks.

Second, over the twentieth century we literally went from the hardest money (gold) to the “softest” money (central bank-created fiat paper money). Sure, you can argue that this was unfair or imposed upon us from above by wars and welfare states, but discounting it as irrelevant strikes me as cherry-picking. If the hardest money “lost” before, what makes you think that your new fancy money will win out this time around?

Then Saif returned to the topic of hardness and defined it as a money whose supply is “the hardest to increase.” Cowrie shells, wampum, gold, whales’ teeth, Rai stones, and the other early moneys that Jevons listed and discussed in 1875 all owe their hardness to a difficult, costly, and inconvenient process of extraction and/or production. Getting Rai stones from far-away islands, stringing beads together into extended strips of wampum, or digging gold out of inaccessible patches of the earth were all cumbersome and expensive processes. In Saif’s mind, this contributed to their hardness: their money stocks were simply difficult to expand – in jargon, their money supplies were inelastic.

The early-1600s Dutch Republic struggled with another problem. As the main financial centre of the time, Amsterdam saw countless hard moneys (coins) from all over the world in use. Estimates put the number of legally recognized kinds of coins at over a thousand – and there were presumably even more unrecognized coins. A prime setting for monetary competition: they were all pretty hard (Saif’s definition: difficult and costly to expand) commodity moneys, of various quality, origin, and recognition in trade.

Another feature of 17th century Amsterdam was the international environment of Bills of Exchange (circulating private credit notes). Briefly summarized, merchants across the world traded debts on Amsterdam bankers or traders, and rather than holding and transporting bullion across the world, they transported the debt of the most trustworthy and reliable Dutch financiers. As all such bills required a settlement medium in Amsterdam, trade on thin margins was very sensitive to fluctuations in prices between the commodity moneys in which their bills were denominated – and very sensitive to debasements and re-defined values by various European proto-governments.

In 1609, the City of Amsterdam created the Wisselbank (initially a 100% reserve exchange bank) specifically tasked with standardizing the coinage and insulating the bill market from currency fluctuations (by providing a ‘neutral’ unit of account for bill settlement). The Bank accepted deposits of any coin at the legally recognized rate (unrecognized coins at metal content) and delivered “high-quality Dutch trade coins” upon withdrawal. To fund itself, it charged a withdrawal fee of 1.5% but no internal transfer fee, which made holding currency at the Bank very expensive in the short term but very cheap in the long term. Merchants also avoided much of the withdrawal fee by simply trading balances with one another rather than depositing and withdrawing trade coins. In return for this cost-saving, sellers of bank balances would share a portion of the funds saved with the buyer through what’s known as the “agio”: the price of Bank money in terms of current money outside the Bank’s accounts. This price would fluctuate like any other market price and indicated the stance of liquidity demands.
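As a toy illustration of why balances circulated instead of being withdrawn, compare the two ways a merchant could turn Bank money into current coin; the 1.5% fee is from the account above, while the agio value is an assumed number used purely for illustration:

    # Toy comparison: withdraw coin from the Wisselbank vs. sell the balance at the agio
    balance = 1000.0          # bank guilders to convert into current money
    withdrawal_fee = 0.015    # the Bank's 1.5% withdrawal fee
    agio = 0.04               # assumed premium of Bank money over current money

    proceeds_if_withdrawn = balance * (1 - withdrawal_fee)   # 985.0 current guilders
    proceeds_if_sold = balance * (1 + agio)                  # 1040.0 current guilders
    print(proceeds_if_withdrawn, proceeds_if_sold)
    # With any agio above -1.5%, trading balances beats paying the withdrawal fee,
    # so deposits changed hands as a settlement medium rather than being redeemed for coin.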

In a classic example of Alchian’s monetary competition by transaction costs, Dutch merchants and financiers “outsourced” the screening and assaying of unfamiliar coins. They preferred settling their transactions through the (cheaper) medium that was deposits in the Bank.

And it gets worse for the bitcoiner’s story. In 1683, the Bank coupled its deposits with specific receipts for withdrawal; to gain access to coins, one was required both to hold balances and to purchase a receipt issued by the Bank (it also changed the pricing). Roughly speaking, the Bank became a fractional reserve bank (with capped withdrawals) overnight – and contrary to what the hardness argument would imply, the agio on Bank money rose to above par!

Two monetary historians, Stephen Quinn and William Roberds, summarize one of their many writings on the Wisselbank as follows:

“imaginary money on the Bank’s ledgers succeeded because it was more reliable than the real stuff. […] The most liquid asset in the economy was no longer coin, but a sort of ‘virtual banknote’ residing in Bank of Amsterdam accounts.”

Further,

“the evolution of the agio shows that the market valued irredeemable balances as if they were closely tied to backing trade coins” (my emphasis)

The story of the Amsterdam Wisselbank’s monetary experiments and innovations shows us that monetary adaptation depends on many more dimensions than “hardness.” Sometimes “hard” money is defeated by “soft” money because the softer money brings other benefits to its users – in this case, a cheap and reliable settlement medium.

The lesson for bitcoin-vs-fiat-vs-FinTech is pretty clear: hard money doesn’t always “win”; and sometimes “soft” money can better serve the needs of consumers in a free market.

What Albert Camus taught us about freedom

The French-Algerian author and philosopher Albert Camus is unarguably one of the most widely read and thought-provoking intellectuals of the 20th century. Although he mainly gained attention through his philosophical theory of the absurd, which he carefully and subtly embedded in his novels, Camus also contributed significant ideas to the development of freedom in the post-Second World War era. That is why I want to present five little-known things we can still learn from Albert Camus’ political legacy.

  • Oppose every form of totalitarianism

After the Second World War, socialism spread across Eastern Europe and was proclaimed the alternative to capitalism, which was regarded as one of the reasons for the rise of fascism in Germany. Socialism, on the other hand, was believed to bring about freedom for everybody in the end. Even though many intellectuals were at first attracted by the socialist ideology, Camus instantly saw the dangers of its predominant “the ends justify the means” narrative. He justifiably considered the vicious suppression of opposing views in the name of obtaining total freedom in the future an early warning sign of totalitarianism.

To achieve self-realization, an individual needs personal freedom, which is one of the first victims of totalitarian despotism. Thus, Camus vigorously fought against authoritarian proposals of both right and left, and for individual liberty, which led to his conclusion: “None of the evils that totalitarianism claims to cure is worse than totalitarianism itself”.

  • A diverse Europe

If one thing is certain, it is Camus’ unbroken love for Europe. However, his conception of Europe does not portray the continent as a possible site of collectively controlled industry, military power, or thought. Instead, he depicts Europe as an exciting intellectual battlefield of ideas, in which for 20 centuries people revolted “against the world, against the gods, and against themselves.” Thus, European people are unified through shared ideas and values rather than divided by borders.

That is why, as early as 1957, he forecast the emergence of an unideological Europe populated by free people and based on unity and diversity. Although he felt a strong love for his homeland France, he notes that expanding the realm he defines as “home” does not necessarily affect his love in a negative way. That is why he later even argued for the “United States of the world.”

  • Nihilism is not a solution

In “A Letter to a German Friend” Camus remarks on certain similarities between himself and the Nazis regarding their philosophical starting point: both reject any intrinsic, predetermined meaning in this world. From this premise, however, the Nazis derived the arbitrariness of moral categories such as “good” and “evil,” as well as humanity’s subjugation to its animal instincts. On that view, murder on behalf of an inhuman ideology becomes permissible.

Camus, by contrast, insisted that this nihilism leads to the self-abandonment of humanity. He argues instead that we must fight against the unfairness of the world by creating our own meaning of life in order to achieve happiness. If there is no deeper meaning in our existence, every person has to seek happiness in his or her own way. When we accept our destiny, even if it is devastating at first glance, as he describes in “The Myth of Sisyphus,” we can pursue our own goals and therefore fulfil our personal meaning of life.

  • Total artistic freedom

Considering his artistic background, Camus’ conception of the value of freedom is quite interesting. Classical liberals such as Locke and Mill regard freedom as the state of nature: man is born free, and thus freedom is the natural state of any person. For Camus, liberty is instead a necessary condition for fulfilling one’s personal conception of the meaning of life. That is why he particularly emphasizes the invaluable worth of liberty for humanity: when people are not free, they cannot pursue their own meaning of life and thus cannot achieve happiness in an unfair world.

Considering the immense personal value art had for Camus, it certainly forms a major component of his own equation for fulfilment, alongside other interests such as sport and love. Hence, it is not surprising that he was a lifelong supporter of total artistic freedom, which bars nobody from obtaining happiness through an individual perception of art. That is why he famously concludes: “Without freedom, no art; art lives only on the restraints it imposes on itself and dies of all others.”

  • Abrogate the death penalty

In the chilling essay “Reflections on the Guillotine,” Camus insists on the abolition of the death penalty. Apart from empirical arguments such as its low efficiency and non-existent deterrent effect, Camus also points out the general moral fragility of the death penalty: he is deeply worried by the state’s privilege of deciding over life and death. The death penalty exploits this privilege and is merely a form of revenge; far from preventing violence, it only triggers an unbearable spiral of it. As an alternative, he argues for hard labour for life as the maximum punishment.

Albert Camus was neither an anarcho-capitalist nor a libertarian. Nevertheless, he regarded individual freedom as an essential element of society and examined the inseparable relation between freedom and art. Every true work of art increases the inner freedom of its admirer, and thus free art gives scope for individual happiness. One can never solely serve the other – they presuppose each other. Because of his artistic and philosophical roots, Camus provides an unusual moral argument for individual liberty, which makes him worth reading even today.

The Case for Constructivism in IR Pt. 2

After a not-so-short break from blogging, during which I submitted my Bachelor’s thesis and took some much-needed vacation, I have finally gotten back to writing. Before opening up something new, I first need to finish my Case for Constructivism in IR.

In my first post, I described how constructivism emerged as a school of thought and how the key concept of anarchy is portrayed. In this part, I want to discuss power and the differences between moderate constructivism, radical constructivism and poststructuralism.

The social construction of… everything? Where to draw the line.

The connection between moderate constructivism and radical constructivism is more of a gradual transition than a sharp distinction. Scholars have further developed the idea of social constructivism and expanded it beyond the realm of the international system. Not only the international system but also states, tribes, and nations are socially constructed entities. Thus, taking “states” as given entities in the international system (as moderate constructivists do) neglects how national identities are constructed. Why do nations act so differently although they are subject to the same international system? The implications of these findings have been the subject of many influential works, notably Francis Fukuyama’s latest book “Identity” and Samuel Huntington’s “Clash of Civilizations”.

The most important component that radical constructivists brought into consideration was language. The linguistic turn induced by Ludwig Wittgenstein disrupted not only philosophy but all the social sciences. For decades, language had been portrayed as a neutral means of communication among humans that evolved from spontaneous order. Wittgenstein dismantled this image and explained why we so often suffer from linguistic confusion. Friedrich Kratochwil further applied Wittgenstein’s findings to the social sciences by dividing information into three categories: observational (“brute”), mental, and institutional facts. All three dimensions need to be taken into account in order to understand a message. The institutional setting of spoken words directly builds a bridge between speaking and acting (speech act theory). If I say “let’s nuke North Korea,” I might get a weird look on the street, but nothing significant will happen. If the president of the USA says the same, the institutional setting has changed, and we might have a problem with the real-world implications of this statement. The social construction of the institutional setting is highlighted by paying special attention to language as a means of human interaction. However, how far one can go in analyzing the results of a socially constructed language without losing sight of the bigger picture remains a difficult question.

While the radical constructivists first established a connection between language and physical action, the poststructuralists sought to uncover the immanent power structures within social constructs. Michel Foucault (one of the most prolific sociologists of the 20th century, with some neoliberal influence) brought discourse, and moreover discursive action, into perspective, whilst Derrida and Deleuze focused more on the deconstruction of written texts. Contrary to many poststructuralists, moderate constructivists avoid being constantly fooled by Maslow’s hammer: while it is irrefutable that power relations play a vital role in analyzing social structures, an exceedingly rigid focus on them conceals other driving forces, such as peaceful, non-hierarchical cooperation.

Why Constructivism at all?

Moderate constructivism puts special emphasis on the institutional setting in which certain behaviour is incentivized. This setting, however, is subject to permanent change and is perceived differently by every subjective actor in the international system. Thus, the driving problem of IR remains a coordination problem: instead of simple state interests aimed at maximizing each state’s share of the balance of power (as Hans Morgenthau, the father of modern IR theory, proclaimed), we must now coordinate different institutional settings in the international system, each resulting in a different understanding of key power resources. None of the traditional IR schools of thought hypothesizes that ontology may be subjective. Moderate constructivism manages to integrate a post-positivist research agenda without getting lost in the details of language games (like radical constructivists) or power analytics (like poststructuralists).