Monetary Progression and the Bitcoiner’s History of Money

In the world of cryptocurrencies there’s a vogue for a certain kind of monetary history, one that inevitably leads to bitcoin and thereby assures its users and zealots of the immense value of their endeavor. Don’t get me wrong – I laud most of what they do, and I’m very much looking forward to seeing where it’s all going. But their (mis)use of monetary history is quite appalling to somebody who studies these things, especially since this particular story is so crucial and fundamental to what bitcoiners see themselves advancing.

Let me sketch out some problems. Their history of money (see also Nick Szabo’s lengthy piece for a more eloquent example) goes something like this:

  • In the beginning, there was self-sufficiency, and the little trade that occurred took place through barter.
  • In a Mengerian process of increased saleability (Menger’s word is generally translated as ‘saleableness’ rather than ‘saleability’), some objects proved better and more convenient for trade than others, and those objects emerged as early primitive money. Accounts normally cherry-pick some of the most salient examples here: hides, cowrie shells, wampum or Rai stones.
  • Over time, precious metals won out as the best objects to use as money – initially silver, and gradually, as economies grew richer, gold overtook silver for large-scale payments.
  • In the early twentieth century, evil governments monopolized the production of money and, through increasingly global schemes, eventually cut the ties to hard money and put the world on a paper fiat standard, ensuring steady (and sometimes not-so-steady) inflation.
  • Rising up against this modern Goliath are the technologically savvy bitcoiners, thwarting the evil money-producing empires and launching their own revolutionary and unstoppable money; the only thing standing in its way to worldwide success is crooked bankers, backed by their evil governments, and propaganda about how useless and inept bitcoin is.

This progressively upward story is pretty compelling: better monies overtake worse ones, until one major player unfairly took over gold – the then-best money – replacing it with something inferior, a move the Davids of the crypto world now intend to reverse. I’m sure it’ll make a good movie one day. Too bad that it’s not true.

Virtually every step of this monetary account is mistaken.

First, governments have almost always defined – or at least seriously impacted – decisions over what money individuals have chosen to use. From the early Mesopotamian civilizations to the late-19th century Gold Standard that bitcoin is often compared to, various rulers were pretty much always involved. Angela Redish writes in her 1993 article ‘Anchors Aweigh’ that

under commodity standards – in practice – the [monetary] anchor was put in place not by fundamental natural forces but by decisions of human monetary authorities. (p. 778)

Governments ensured the push to gold in the 18th and 19th centuries, not a spontaneous, decentralized Mengerian process: Newton’s infamous underpricing of silver in 1717, initiating what’s known as the silver shortage; gold standard laws passed by states; and large-scale network effects at play in trading with merchants in those countries.

Secondly, Bills of Exchange – i.e., privately issued debt – rather than precious metals were the dominant international money from roughly 1500 to 1900. Aha! says the bitcoiner, but they were denominated in gold, or at least backed by gold, and so the precious metal was in fact the real outside money. Nope. Most bills of exchange were denominated in the major unit of account of the dominant financial centre of the time (from the 15th to the 20th century, progressively Bruges, Antwerp, Amsterdam and London), quite often using a ghost money – a reference to the purchasing power of a centuries-old coin or a social convention.

Thirdly, monetary history is, contrary to what bitcoiners might believe, not a steady upward race towards harder and harder money. Monetary functions such as the medium of exchange and the unit of account were seldom even united in one asset the way we tend to think about money today (one asset, serving two, three or four functions). Rather, many different currencies and units of account co-emerged, evolved, overtook one another in response to shifting market prices or government interventions, declined, disappeared or re-appeared as ghost monies. My favorite – albeit biased – example is early modern Sweden, with its copper-based trimetallism (copper, silver, gold), varying units of account, seven strictly separated coins and notes (for instance, both Stockholms Banco and what would later develop into Sveriges Riksbank had to keep accounts in all seven currencies, repaying deposits in the same currency as deposited), as well as governmental price controls for exports of copper, partly counteracting the effects of Gresham’s Law.

The two major mistakes I believe bitcoiners make in their selective reading of monetary theory and history are:

1) They don’t seem to understand that money supply is not the only dimension that money users value. The hardness of money – i.e., the difficulty of increasing its supply – as an anchor of the price level or of stability in purchasing power is one dimension of money’s quality, but far from the only one. Reliability, user experience (not for you tech nerds, but for normal people), storage and transaction costs, default risk, as well as network effects might be valued more highly from the consumers’ point of view.

2) Network effects: paradoxically, bitcoiners quibbling with proponents of other coins (Ethereum, Ripple, Dash etc.) seem very well aware of the network effects operating in money (see ‘winner-takes-all’ arguments). Unfortunately, they seem to opportunistically ignore the switching costs involved, both for individuals and for the monetary system as a whole. Even if bitcoin were a better money, one that could serve one or more of the functions of money better than our current monetary system, that would not be enough in the presence of pretty large switching costs. Bitcoin as money has to be sufficiently superior to warrant a switch – a point the toy sketch below makes concrete.
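
A minimal back-of-the-envelope sketch of that inequality (the numbers are hypothetical, chosen purely for illustration – this is my own gloss, not the bitcoiners’ argument):

```python
# Toy illustration (hypothetical numbers): a challenger money must be
# better by MORE than the switching cost, or rational users stay put.
value_incumbent = 100   # utility of the current monetary system
value_bitcoin   = 110   # utility of bitcoin as money, assumed somewhat higher
switching_cost  = 25    # re-pricing, new habits, new infrastructure, lost network

should_switch = (value_bitcoin - value_incumbent) > switching_cost
print(f"Better money? {value_bitcoin > value_incumbent}")   # True
print(f"Sufficiently better to switch? {should_switch}")    # False
```

Merely being better (110 vs 100) does not clear the bar; the margin of superiority has to exceed the cost of moving individuals and the whole system over.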

Bitcoiners love to invoke the history of money and its progression from inferior to superior money – a story in which bitcoin looks like the natural next step. Unfortunately, most of their accounts are lacking in theory, and definitely in history. The monetary economist and early Nobel Laureate John Hicks used to say that monetary theory “belongs to monetary history, in a way that economic theory does not always belong to economic history.”

Current disputes over bitcoin and central banking epitomize that completely.

Asking questions about women in the academy

Doing the economist’s job well, Nobel Laureate Paul Romer once quipped, “means disagreeing openly when someone makes an assertion that seems wrong.”

Following this inspirational guideline of mine in the constrained, hostile, and fairly anti-intellectual environment that is Twitter sometimes leads me astray. That the modern intellectual left is vicious we all know, even if only from observing them from afar. Accidentally engaging with them over the last twenty-four hours provided some hands-on experience for which I’m not sure I’m grateful. Admittedly, most interactions on Twitter lose all nuance, and (un)intentionally inflammatory tweets spin off even more anger from the opposite tribe. However, this episode was still pretty interesting.

It started with Noah Smith’s shout-out for economic history. Instead of taking the win for our often neglected and ignored field, some twitterstorians objected to the small number of women scholars highlighted in Noah’s piece. Fair enough: Noah did neglect a number of top economic historians (many of them women), as any brief, non-comprehensive overview of a field would.

His omission raised a question I’ve been hooked on for a while: why are the authors of the most important publications in my subfields (financial history, banking history, central banking) almost exclusively male?

Maybe, I offered tongue-in-cheek in the exaggerated language of Twitter, because the contributions of women aren’t good enough…?

This being the twenty-first century – and Twitter – that obviously meant “women are inferior – he’s a heretic! GET HIM!”. And so it began: diversity is important in its own right; there are scholarly entry gates guarded by men; your judgment of what’s important is subjective, duped, and oppressive; what I care about “is socially conditioned” and so cannot be trusted; indeed, there is no objectivity and all scholarly contributions are equally valuable.

Now, most of this is just standard postmodern relativism that I couldn’t care less about (though I am curious how the acolytes of this religion came to their supreme knowledge of the world, given that all information and judgments are socially conditioned – the attentive reader recognises the revival of Historical Materialism here). But the “unequal” outcome is worthy of attention, principally the questions of where to place the blame and which remedies might prove effective.

On a first-pass analysis we would ask about the sample. Is it really a reflection of gender oppression and sexist bias when the (top) outcomes in a field do not conform to 50:50 gender ratios? Of course not. There are countless perfectly reasonable explanations, from hangovers from decades past (when that was indeed the case), to the Greater Male Variability hypothesis, to women – for whatever reason – having been disproportionately interested in some fields rather than others, leaving those others to be annoyingly male.

  • If we believe that revolutionary, top academic contributions have a long production line – meaning that today’s composition of academics is determined by the composition of bright students, say, 30-40 years ago – we should not be surprised that the top 5% (or 10%, or whatever) of current academic output is predominantly male. Indeed, there have been many more men in these fields, for longer periods of time: chances are they would have produced the best work.
  • If we believe the Greater Male Variability hypothesis, we can model even a perfectly unbiased, equal-opportunity setting between men and women and still end up with the top contributions belonging to men. If higher-value research requires smarter people working harder, and those characteristics are more widely dispersed among men (as the Greater Male Variability hypothesis suggests), then it follows naturally that most top contributions would come from men – see the simulation sketch after this list.
  • In an extension of the insight above, it may be the case that women – for entirely non-malevolent reasons – have interests that diverge from men’s (establishing the precise reasons would be a task for psychology and evolutionary biology, which I’m highly unqualified to assess). Indeed, this is the entire foundation on which the value of diversity is argued: women (or other identity groups) have different enriching experiences, approach problems differently and can thus uncover research nobody thought to look at. If this is true, then why would we expect that superpower to be applied equally across all fields simultaneously? No: we’d expect to see some fields or some regions or some parts of society dominated by women before others, leaving other fields to be overwhelmingly male. Indeed, any society that values individual choice will unavoidably see differences in participation rates, academic outcomes and performance for precisely such individual-choice reasons.
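
The variability point can be made concrete with a tiny Monte Carlo sketch (my own illustration, with made-up parameters): two equally large groups share the same average, and only the spread differs.

```python
import random

# Toy simulation: same mean "quality of output" in both groups, equal
# group sizes, no bias anywhere in the model -- only the variance differs.
random.seed(42)

N = 100_000   # hypothetical scholars per group
TOP = 100     # the extreme tail of "top contributions" we inspect

high_var = [random.gauss(0.0, 1.15) for _ in range(N)]  # e.g. men, under GMV
low_var  = [random.gauss(0.0, 1.00) for _ in range(N)]

pooled = [(s, "high-var") for s in high_var] + [(s, "low-var") for s in low_var]
pooled.sort(reverse=True)  # best output first

share = sum(1 for _, group in pooled[:TOP] if group == "high-var") / TOP
print(f"Higher-variance group share of the top {TOP}: {share:.0%}")
```

With these made-up parameters the higher-variance group routinely takes three-quarters or more of the extreme tail, despite identical averages and a perfectly fair selection rule.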

Note that none of this excludes the possibility of spiteful sexist oppression, but it does mean that judging academic participation on the basis of survey responses, or on the fact that only 2 out of 11 economic historians cited in an op-ed were women, may be premature indeed.

In Defense of Not Having a Clue

Timely, both in our post-truth world and for my current thinking, Bobby Duffy of the British polling company IPSOS Mori recently released The Perils of Perception, stealing the subtitle I had (humbly enough) planned for years: Why We’re Wrong About Nearly Everything. Duffy and IPSOS’s Perils of Perception surveys are hardly unknown to an informed audience, but the book’s collection and succinct summary of the psychological literature behind our astonishingly uninformed opinions nevertheless provides much food for thought.

Producing reactions of chuckles, indignation, anger, and unseemly self-indulgent pride, Duffy takes me on a journey through the sometimes unbelievably large divergence between the state of the world and our polled beliefs about it. And we’re not primarily talking about unobservable things like “values” here; we’re almost always talking about objective, uncontroversial measures of things we keep pretty good track of: wealth inequality, the share of immigrants in society, medically defined obesity, the number of Facebook accounts, murder and unemployment rates. On subject after subject, people guess the most outlandish things: almost 80% of Britons believed that the number of deaths from terrorist attacks between 2002 and 2016 was about the same as or higher than in 1985-2000, when the actual figure was a reduction of 81% (p. 131); Argentinians and Brazilians seem to believe that roughly a third and a quarter of their populations, respectively, are foreign-born, when the actual numbers are in the low single digits (p. 97); American and British men believe that American and British women aged 18-29 have had sex as many as 23 times in the last month, when the real (admittedly self-reported) number is something like 5 times (p. 57).

We could keep adding astonishing misperceptions all day: Americans believe that more than every third person aged 25-34 lives with their parents (reality: 12%), but Britons are even worse, guessing almost half (43%) of this age bracket when reality is something like 14%; Australians on average believe that 32% of their population has diabetes (reality: more like 5%), and Germans (31% vs 7%), Italians (35% vs 5%), Indians (47% vs 9%) and Britons (27% vs 5%) are similarly mistaken.

The most fascinating misperception concerns Britain’s fraught relationship with inequality. Admittedly a confusing topic, where even top economists get their statistical analyses wrong, inequality makes more than just the British public go bananas. When asked how large a share of British household wealth is owned by the top 1% (p. 90), Britons on average answered 59%, when the reality is 23% (with French and Australian respondents similarly deluded: 56% against 23% for France, and 54% against 21% for Australia). The follow-up question is even more remarkable: asked what the distribution should be, the average response is in the low 20s – which, for most European countries, is where it actually is. In France, ironically enough given its current tax riots, the respondents’ reported ideal share of household wealth owned by the top 1% is higher than the actual share (27% vs 23%). Rather than inferring that the French favor upward redistribution, Duffy draws the correct conclusion:

“we need to know what people think the current situation is before we ask them what they think it should be […] not knowing how wrong we are about realities can lead us to very wrong conclusions about what we should do.” (p. 93)

Another of my favorite results concerns guesses about how prevalent teen pregnancies are in various countries. All of the 37 listed countries (p. 60) report numbers below around 3% (except South Africa and notable Latin American and South-East Asian outliers at 4-6%), but respondents on average quote absolutely insane numbers: Brazil (48%), South Africa (44%), Japan (27%), US (24%), UK (19%).

Note that there are many ways to trick people in surveys and to report statistics unfaithfully, and if you don’t believe my or Duffy’s account of the IPSOS data, go figure it out for yourself. Regardless, is the take-away lesson from the image presented really that people are monumentally stupid? Ignorant in the literal sense of the word (“uninstructed, untutored, untaught”), or even worse than ignorant, holding systematically and unidirectionally mistaken ideas about the world?

Let me confess to one very ironic reaction while reading the book, before arguing that it’s really not the correct conclusion.

Throughout Duffy’s entertaining work, learning about one extraordinarily silly response after another, the purring of my self-indulgent pride and my anger at others’ stupidity gradually increased. Glad, if nothing else, that I’m not as stupid as these people (and I’m not: I consistently do fairly well on most questions – at least for the countries I have some insight into: Sweden, UK, USA, Australia), all I want to do is slap them in the face with the truth, in a reaction not unlike that of the fact-checking initiatives and fact-providing journalists, editorial pages, magazines, and pundits after the Trump and Brexit votes. As intuitively seems right when people neither grasp nor have access to basic information – objective, undeniable facts, if you wish – a solution might be to bash them over the head with it or shower them with avalanches of data. Mixed metaphors aside, couldn’t we simply provide these rather statistically challenged and uninformed people with some extra data, force them to read, watch, and learn – hoping that in the process they will update their beliefs?

Frustratingly enough, the very same research that indicates people’s inability to understand reality also suggests that attempts at presenting them with contrary evidence run into what psychologists have aptly named ‘The Backfire Effect’. Like all force-feeding, forcing facts down the throats of factually resistant ignoramuses makes them double down on their convictions. My desire to cure them of their systematic ignorance is more likely to see them enshrine their erroneous beliefs further.

Then I realize my mistake: this is my field. Or at least a core interest of the field that is my professional career. It would be strange if I didn’t have a fairly informed idea about what I spend most waking hours studying. But the people polled by IPSOS are not economists, statisticians or data-savvy political scientists – a tenth of them can’t even do elementary percentages (p. 74) – they’re regular blokes and gals whose interest, knowledge and brainpower are focused on quite different things. If IPSOS had polled me on Premier League results, NBA records, chords or tunes in well-known music, the chemical components of a regular pen or even how to effectively iron my shirt, my responses would be equally clueless.

Now, here’s the difference and why it matters: the respondents in the surveys above are routinely required to have an opinion on things they evidently know less than nothing about. I’m not. They’re asked to vote for a government, assess its policies, form political opinions based on what they (mis)perceive the world to be, and make decisions on their pension plans and daily purchases. And quite a lot of them are poorly equipped to do that.

Conversely, I’m poorly equipped to repair literally anything, work a machine, run a home or apply my clumsy hands to any kind of creative or artful endeavour. Luckily for me, the world rarely requires me to. Division of Labor works.

What’s so hard about accepting an absence of knowledge? I literally know nothing about God’s plans, how my screen is lit up, how my car propels me forward or where to get food at 2 a.m. in Shanghai. What’s so wrong with extending the respectable position of “I don’t have a clue” to areas where you’re habitually expected to have one (politics, philosophy, the virtues of immigration, economics)?

Note that this is not a value judgment that knowledge and understanding of some fields are more important than those of others, but a charge against the societal institutions that (unnaturally) force us to have opinions. Why do I need a position on immigration? Why am I required (or “entitled”, if you believe it’s a useful duty) to select a government that passes laws and deals with questions I’m thoroughly unequipped to answer? Why ought I have a halfway reasonable idea about which team is likely to win next year’s Superbowl, Eurovision, or Miss USA?

Books like Duffy’s (or Rosling’s, or Norberg’s, or Pinker’s) are important, educational and entertaining to a T for someone like me. But we should remember that the implicit premium they place on certain kinds of knowledge (statistics and numerical memory, economics, history) is useful in very select areas of life – and rightly so. I have no knowledge of art, literature, construction, sports or chemistry, nor the aptness to repair or make a single thing. Why should I?

Similarly, there ought to be no reason for the Average Joe to know the extent of diabetes, immigration or wealth inequality in his country.

Innovation and the Failure of the Great Man Theory

We tend to think about innovation in terms of inventions, and particularly the inventors associated with them: Newton, Edison, Jobs, Archimedes, Watt, Arkwright. This Great Man Theory of technical innovation is, mostly implicitly, held by quite a few of us, celebrating these great men and their deeds.

Matt Ridley, the author of, among other works, The Rational Optimist and The Evolution of Everything, has spent a lot of time and effort in recent years arguing against this theory. In his recent Hayek Lecture to the British Institute of Economic Affairs he convincingly outlines his case: a great many innovations take place independently, at roughly the same time, by different people. The Great Man Theory leads us to believe that had it not been for Edison, we’d all be in the dark and humanity deprived of all the benefits that came with the innovation.

Not so. A great number of contemporary inventors came upon versions of the lightbulb (Ridley cites 21 or 23 of them, depending on whom you include) around the same time as Edison. The story can be repeated for most other great inventions we know of: the laws of thermodynamics, calculus, most metals, typewriting machines, jet engines, the ATM, oxygen. Indeed, the phenomenon is so common that it has its own term: simultaneous invention.

It seems, in complete contrast to the Great Man Theory, that history provides a certain problem, a sufficient number of people working on solving it at a certain time, and eventually similar inventions emerging around the same time. The process is, Ridley concludes, “gradual, incremental, collective yet inescapably inevitable […] it was bound to happen when it did”.

Interestingly enough, for those of us schooled in and fascinated by spontaneous orders and bottom-up social and economic phenomena, the Great Man Theory is remarkably similar to other mistaken beliefs about the world. It is a symptom of the same reasoning shortcomings that make us humans susceptible to zero-sum thinking, top-down organizing and “design-implies-a-designer”. Instead of grasping the deep insights of gains from trade, spontaneous order or evolution, we are tempted by the militaristically directed organizations that we believe we understand, rather than the emergent order of many independently acting individuals’ trials and errors.

Precisely this bias makes us susceptible to the mistake Mariana Mazzucato has become famous for wholeheartedly embracing: the idea that, whatever the innovation, government probably did it; that government innovation is productive – or at least more productive than is commonly presumed – and that societies can greatly benefit from ramping up government R&D spending. Never mind incentives, track records or statistical robustness.

Indeed, what Ridley points to is precisely that valuable and life-changing innovation cannot be directed. Admittedly, some innovation does occur in labs, but only a vanishingly small part. Mazzucato and other top-downers could have benefited greatly from listening to Ridley (or from reading his book The Rational Optimist, or Demsetz’s devastating 1969 article ‘Information and Efficiency’).

Coming full circle and espousing the Hayekian insights, Ridley notes that the price is everything. Specifically, it is the reduction in prices, rather than the ideas themselves, that matters for innovations to spread and be adopted. Very little happens in terms of adoption and transmission until prices start to fall dramatically (hint, hint, Bitcoin… or nuclear energy, or renewable energy…). As with the printing press and the steam engine, interesting things start to happen when prices fall – not because an innovation is particularly cool in some subsection of society.

Innovation is a deeply decentralized yet deeply collective process. We face similar challenges and occasionally come to similar conclusions – and history would in all likelihood have progressed exactly the same had we not had a Newton or an Edison or a Jobs.

The Factual Basis of Political Opinions

“Ideology is a menace,” says Paul Collier in his forthcoming book The Future of Capitalism, and I couldn’t agree more: ideology (and by extension morality) “binds and blinds”, as psychology professor Jonathan Haidt describes it, and ideology, especially the utopian dreams of dedicated rulers, is what allows – indeed accounts for – the darkest episodes of humanity. There is a strange dissonance among people for whom political positions, ideology and politics are supremely important:

  • They portray their positions as supported by facts and empirical claims about the world (or at least spit out such claims as if they believed they were)
  • At the same time, they believe that their “core values” and “ideological convictions” are immune to factual objections (“these are my values; this is my opinion”)

My purpose here is to illustrate that all political positions, at least in part, have their basis in empirically verifiable claims about the world. What political pundits fail to understand is not only that facts rule the world, but that facts also limit the range of positions one can plausibly take. You may read the following as an extension of “everyone is entitled to his or her own opinions, but not to his or her own facts”. Let me show you:

  • “I like ice-cream” is an innocent and unobjectionable opinion to have. Innocent because, hey, who doesn’t like ice-cream, and unobjectionable because there is no way we can verify whether you actually like ice-cream. We can’t effortlessly observe the reactions in your brain from eating ice-cream, and so we cannot even criticize such a position.
  • “Ice-cream is the best thing in the world”, again unobjectionable, but perhaps not so innocent. Intelligent people may very well disagree over value scales, and it’s possible that for this particular person, ice-cream ranks higher than other potential candidates (pleasure, food, world peace, social harmony, resurrection of dinosaurs etc).
  • “I like ice-cream because it cures cancer”. This statement, however, is neither unobjectionable nor innocent. First, you’re making a causal claim about curing cancer, for which we have facts and a fair amount of evidence weighing on the matter. Secondly, you’re making a value judgment on the kinds of things you like (namely those that cure cancer). Consequently, that would imply that you like other things that cure cancer.

Without being skilled in medicine, I’m pretty sure the evidence is overwhelmingly against this wonderful cancer-treating property of ice-cream, meaning that your causal claim is simply wrong. That means you have to update your position by either a) finding a new reason to like ice-cream, thus invoking some other empirical or causal statement we can verify, or reverting to the subjective statements of preference above, or b) reneging on your ice-cream position. There are no other alternatives.

Now, replace “ice-cream” above with “minimum wages” and “cancer” with “poverty” – or any other politically contested issue of your choice – and the fundamental point should be obvious: your “opinions” are not simply innocent statements of your unverifiable subjective preferences, but contain some factual basis. If political opinions, then, consist of subjective value preferences plus statements about the world and/or causal connections between things, you are no longer “entitled to your own opinions”. You may form your preferences any way you like – subject to them being internally consistent – but you cannot hold opinions that are based on incorrect observations or causal derivations about the world.

Let me invoke my national heritage to illustrate the point more clearly, using a recent discussion on Swedish television. The “inflammatory” Jordan Peterson, as part of his world tour, visited Norway and Sweden over the past weekend. On Friday he was a guest on Skavlan, one of the most viewed shows in either country (occasionally boasting more viewers than the big sporting events) and – naturally – discussed feminism and gender differences. After explaining the scientific evidence for biological gender differences* and the observed tendency of maximally (gender-)egalitarian societies to have the largest rather than the smallest gender differences in outcomes, Peterson concludes:

“there are only two reasons men and women differ. One is cultural, and the other is biological. And if you minimize the cultural differences, you maximize the biological differences… I know – everyone’s shocked when they hear this – this isn’t shocking news, people have known this in the scientific community for at least 25 years.”

After giving the example of diverging gender ratios among engineers and nurses, he elaborates on equality of opportunity, to which one of the other talk-show guests, Annie Lööf (MP and leader of the fourth-largest party, with 9% of the parliamentary seats), responds with feelings and personal anecdotes. Here’s the relevant segment transcribed (for context and clarity, I have slightly amended their statements):

Peterson: “One of the answers is that you maximize people’s free choice. […] If you maximize free choice, then you also maximize differences in choice between people – and so you can’t have both of those [maximal equality of opportunity and minimal differences along gender lines]”

Lööf: “because we are human beings [there will always be differences in choice]; I can’t see why it differs between me and Skavlan for instance; of course it differs in biological things, but not in choices. I think more about how we raise them [children], how we live and that education, culture and attitudes form a human being whether or not they are a girl or a boy.”

Peterson: “Yes, yes. That is what people who think that the differences between people are primarily culturally constructed believe, but it is not what the evidence suggests.”

Lööf: “Ok, we don’t agree on that”.

So here’s the point: this is not a dispute over preferences. Whether or not biology influences (even constitutes, to follow Pinker) the choices we make is not an “I like ice-cream” kind of dispute, where you can unobjectionably pick whatever flavor you like and the rest of us simply have to agree or disagree. This is a dispute over facts. Lööf’s position on gender differences and her desire to politically alter the outcomes of people’s choices are explicitly based on her belief that the behaviour of human beings is culturally predicated and thus malleable. If that causal and empirical proposition is incorrect (which Peterson suggests it is), she can no longer readily hold that position. Instead, what does she do? She says “Ok, we don’t agree”, as if the dispute were over ice-cream!

Political strategizing and virtue signalling aside, this perfectly illustrates the problem of political “opinions”: they are espoused as the outcome of enlightened, informed, fact-based reasoning, but when the empirical statements they rest on are disproved, they revert to being expressions of subjective preference – without a consequent diminution of their worth! Conservatives still gladly hum along to Trump’s protectionism, despite being overwhelmingly contradicted in the factual part of their opinion; progressives heedlessly champion rent control, believing that it helps the poor when it overwhelmingly hurts the poor. And both camps act as if the rest of us should pay attention or go out of our way to support them over what, at best, amount to “I like ice-cream” proposals.

Ideology is a menace, and political “opinions” are the forefront of that ideological menace.

____

*(For a comprehensive overview of the scientific knowledge of psychological differences between men and women, see Steven Pinker’s The Blank Slate – or Pinker’s well-viewed TED-talk outline.)

The Capitalist Peace: What Happened to the Golden Arches Theory?

Many are familiar with the Democratic Peace Theory, the idea that two democracies have never waged war against one another. The point is widely recognized as one of the major benefits of democracy, and the hand-in-hand development of more democracies and fewer, less devastating wars than in virtually any other period of human history is a tempting and enticing explanation.

Now, it is not overly difficult to come up with counter-examples to the Democratic Peace Theory, and indeed there’s an entire Wikipedia page dedicated to it. Here are some notable and obvious counters:

  • Yugoslavian wars of the 1990s
  • First Kashmir War between India and Pakistan (1947-49)
  • Various wars between Israel and its neighbors (1967, 1973, 2006 etc)
  • The Football War (1969)
  • Paquisha and Cenepa wars (1981, 1995)

Some people even include the First World War and various 18th- and 19th-century armed conflicts between major powers (the American War of Independence comes to mind), but then the question of when a country becomes a democracy naturally arises.

There’s another, equally enticing explanation, one that is also the main rationale underlying European Integration: the Capitalist Peace – or, in its more entertaining and relatable version, the Golden Arches Theory, as advanced by New York Times columnist Thomas Friedman in the mid-1990s:

No two countries that both have a McDonald’s have ever fought a war against one another.

Countries, frankly, “have too much to lose to ever go to war with one another.” As a proposition it seems reasonable – an extension of the phrase apocryphally attributed to Bastiat: “When goods don’t cross borders, soldiers will.” And not because your Big Mac meal comes with a side of peace-and-love or enhanced conflict-resolution skills, but because the presence of McDonald’s stores represents close economic interdependence and global supply chains. After all, if your suppliers, your customers or your colleagues consist of people on the other side of a potential military conflict, a war seems even less useful. Besides – paraphrasing Terry Anderson and Peter Hill in their superb The Not So Wild, Wild West – trading is cheaper than raiding. Even as adamant a critic as George Monbiot admits that a fair number of McDonald’s outlets “symbolised the transition” from poor and potentially trouble-making countries to richer and peace-loving ones.

Not unlike poor Thomas Malthus, whose convincing theory had been correct right up until the point he published it, reality rapidly decided to invalidate Friedman’s tongue-in-cheek explanation. Not long after it was published, the McDonald’s-ised nations of Pakistan and India decided to up their antics in the Kargil War, quickly undermining the theory’s near-flawless explanatory power. Not one to leave all the fun to others, Russia engaged in no fewer than two wars in the 2000s that undermine the Golden Arches Theory: the 2008 war with Georgia, and more recently the Crimean crisis. True to their namesake creation, McDonald’s pulled out of Crimea – just a tad too late to vindicate Friedman.

The Capitalist Peace, the academic extension of the general truism that trading is cheaper than raiding, thus came undone pretty quickly, thanks in part to our Russian friends. The updated version, the Dell Theory of Conflict Prevention, may unfortunately fall into the same trap as the Democratic Peace Theory: invoking ambiguous definitions that may ultimately collapse into mere tautologies.

Do You Have Silver?

In an episode of the Netflix medieval series The Last Kingdom, the protagonist Uthred, trying to purchase a sword from a blacksmith in a town he is just passing through, is instantly asked: “Do you have silver?”

In one scene, insignificant to the plot, the series creators neatly raised some fundamental questions in monetary economics, illustrating the relative use of credit and cash and the importance of finality.

For many centuries, the payment system available between people set severe constraints on what kinds of transactions they could – or dared to – engage in. There are two main ways of providing payment (with quite a few variations within these categories): cash or credit.

Cash (sometimes referred to as ‘money transactions’) refers to payments with direct finality; the economic chain is instantly settled and gives rise to no further economic relation. Examples here would be pure barter (where one object is traded for another) or commodity money (where an object is traded for a common medium of exchange, with history providing countless fascinating examples: cattle, hides, olive oil, feathers, pearls etc).

The other category, credit, involves trading someone else’s liability or incurring a new one. Modern credit cards easily come to mind: swiping the card settles the trade between the vendor and the customer only by creating two new (future) economic relations – a promise by the credit card company to transfer funds to the vendor, and a promise by the customer to pay the credit card company at the end of the month. The same features could be – and were – applied in many early societies: I give you some of my items, and you owe me; later I may transfer this “claim” to somebody else in the community in exchange for something I want, and instead of owing me, you owe them.
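
To make the finality distinction concrete, here is a minimal toy model (my own sketch, not from the series or any textbook): a cash payment ends the economic chain on the spot, while a credit payment closes the sale only by opening two new claims.

```python
from dataclasses import dataclass, field

@dataclass
class Ledger:
    # Open (debtor, creditor, amount) claims still awaiting settlement.
    claims: list = field(default_factory=list)

    def pay_cash(self, buyer: str, seller: str, amount: int) -> None:
        # Direct finality: value changes hands; no economic relation survives.
        print(f"{buyer} hands {amount} silver to {seller}; trade fully settled.")

    def pay_credit(self, buyer: str, seller: str, amount: int,
                   issuer: str = "CardCo") -> None:
        # The sale closes, but two forward-looking promises are created.
        self.claims.append((issuer, seller, amount))  # issuer owes the vendor
        self.claims.append((buyer, issuer, amount))   # buyer owes the issuer
        print(f"Sale closed; {len(self.claims)} claims now await settlement.")

ledger = Ledger()
ledger.pay_cash("Uthred", "Blacksmith", 5)    # chain ends here
ledger.pay_credit("Customer", "Vendor", 5)    # chain continues as debt
print(ledger.claims)
```

The point is not the bookkeeping itself but what the open claims require: trust, repeat dealings or enforcement – which is exactly where the trouble starts.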

Some of the difficulties of monetary economics are quickly revealed here. In order for credit to work, a sufficient level of trust, repeated dealings or enforcement mechanisms must exist. If the parties do not trust each other, are unlikely to trade again, or cannot socially or legally force each other into upholding the contract, they may refuse the deal up-front and lose the benefits of trade (they “backward induct”, in game-theory speak). Going through with the transaction nevertheless requires a different payment system: one with instant finality, such as cash provides. Here’s the conundrum that troubles monetary economists:

The frictions that are needed to make money essential typically make credit infeasible and environments where credit is feasible are ones where money is typically not essential (Ugolini, The Evolution of Central Banking, p. 169)

If we trust each other enough (or have enough repeat dealings and a system of keeping track of everyone’s debts), there is no need for cash. If there is need for cash, that means we do not trust each other (or can’t keep track/enforce debts), indicating the presence of “frictions” that make us reluctant to use credit at all.

Let’s go back to our Last Kingdom protagonist. It is clear that the two characters are strangers (no previous dealings, no trust), and since Uthred is simply passing through the village, the blacksmith has no reason to believe there will be repeat dealings. A credit transaction is thus clearly out of the question. Instead he directly asks for silver (cash), which initially seems to solve the problem. However, two further issues emerge:

  1. If all transactions were like this, the amount of cash everyone would have to carry around in the economy would be enormous. A common problem in medieval and even early modern societies was the lack of coins. If enough cash was simply not there and recourse to a credit system was unfeasible, we quickly realise how difficult transacting would be.
  2. Even if the customer had enough cash, the very reasons the parties were reluctant to use credit in the first place (no trust, no repeat dealings, no credible enforcement) harm their ability to transact in cash. How so? Because both parties can opportunistically defect from the agreement. If the sword is paid for up front, the blacksmith can take the money and run – since they are strangers and unlikely to meet again, the cost of cheating is comparatively low. If the sword is paid for at delivery, the customer can easily renege on payment once delivery is obtained.

Is there no way out?

Uthred and the blacksmith use a method most of us are familiar with – probably even used as kids: pay half up-front and half on delivery, with the possibility of a bonus payment (tip) at the end. It minimises risk while still unlocking the gains from trade, as the toy arithmetic below shows.
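
A toy payoff comparison (my numbers, purely illustrative) shows why the split works:

```python
# What can a cheater grab under each payment scheme?
PRICE = 100          # agreed price of the sword, in silver
SWORD_VALUE = 100    # what the finished sword is worth to the buyer

# All up-front: the blacksmith can abscond with the entire price.
smith_loot_upfront = PRICE                       # 100

# All on delivery: the customer can walk off without paying anything.
buyer_loot_on_delivery = SWORD_VALUE             # 100

# Half now, half on delivery: either cheater grabs only half as much.
smith_loot_split = PRICE / 2                     # 50: keeps the deposit, loses the sale
buyer_loot_split = SWORD_VALUE - PRICE / 2       # 50: sword minus silver already paid

print(smith_loot_upfront, buyer_loot_on_delivery)  # 100 100
print(smith_loot_split, buyer_loot_split)          # 50.0 50.0
```

Halving what defection pays, while leaving the full gains from trade available on honest completion, is often enough to tip two strangers into dealing.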

Good monetary economics does precisely that: it illustrates how monetary systems, including payment systems, can facilitate transactions and expand rather than limit the available gains from trade. It concerns itself with one of those spheres of (economic) life that we don’t notice until they break down. Try completing everyday transactions in a country with a small-change shortage, for a neat flashback to eighteenth-century Britain or America, or in a country impaired by hyperinflation or sanctions. Monetary economics, in essence, is fascinating in the complexity of otherwise quite mundane things. Thanks to The Last Kingdom team for illustrating that.