Confessions of a Fragilista: Talebian Redundancies and Insurance

I’ve been on a Taleb streak this year (here, here and here). Nassim Nicholas Taleb, that is, the options trader turned mathematician turned public intellectual (I even managed to get myself on his infamous blocklist after arguing back at him). Many years ago I read Fooled by Randomness, but for some reason it didn’t resonate with me; I wasn’t seeing the brilliance.

Last spring, upon reading former poker champion Annie Duke’s Thinking in Bets and physicist Leonard Mlodinow’s The Drunkard’s Walk, I plunged into Taleb land again, voraciously consuming Fooled, The Black Swan and Skin in the Game, followed by Antifragile just a few months ago.

Taleb is a strange creature: vastly productive and incredibly successful. Not quite everything he touches turns to gold, but it surely stirs up controversy. What he’s managed to do in his popular writing (collected in the Incerto series) is tie almost every aspect of human life into his One Big Idea (think Isaiah Berlin’s hedgehog): the role of randomness, risk and uncertainty in everyday life.

One theme that comes up again and again is the idea of redundancies: having several different and overlapping systems – back-ups to back-ups – that minimize the chance of fatally bad outcomes. The failure of any one of those systems will not result in the extremely bad event you’re trying to avoid.

Focusing primarily on survivability – “absorbing barriers” – through the handed-down wisdom of the Ancients and the Classics, the take-away lesson for Taleb in almost all areas of life is overlapping redundancies. Reality is complicated, and the distribution from which events are drawn is not a well-behaved Gaussian normal distribution but one with thick tails. How thick nobody knows, but wisdom in the presence of absorbing barriers suggests that extreme caution is a prudent long-term strategy.

Of course, in the short run, redundancy amounts to “wasted” resources. In chapter 4 of Fooled, Taleb relates a story from his option-trading days where a client angrily called him up about tail-risk insurance he had sold them. The catastrophic event against which the insurance protected had not taken place, and so the client felt cheated. This behavior, Taleb maintains quite correctly, is idiotic. After all, if an insurance company’s clients consist only of soon-to-be claimants, the company won’t exist for long (or it prices insurance at prohibitively high rates, undermining the business model).
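The arithmetic behind this is simple expected value: an insurer survives only if premiums collected across the whole pool exceed expected payouts. A minimal sketch (all numbers are hypothetical, chosen for illustration):

```python
# Hypothetical tail-risk policy: a 1-in-200 annual chance of a
# catastrophic loss of $1,000,000 per client.
p_catastrophe = 1 / 200
payout = 1_000_000

# Actuarially fair premium: the expected payout per client per year.
fair_premium = p_catastrophe * payout          # $5,000

# The insurer charges a loading on top to cover costs and stay solvent.
premium = fair_premium * 1.3                   # $6,500

# Across a pool of 10,000 clients, the expected annual result:
n_clients = 10_000
expected_claims = n_clients * p_catastrophe * payout   # $50 million
collected = n_clients * premium                        # $65 million
expected_profit = collected - expected_claims          # $15 million

print(f"fair premium: ${fair_premium:,.0f}")
print(f"expected profit on pool: ${expected_profit:,.0f}")
```

If every client were a soon-to-be claimant (probability near 1), the fair premium would approach the payout itself – the prohibitive pricing that undermines the business model.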

The same thing applies to one of his verbose rants about airline “efficiency,” a rather absurd episode illustrating “asymmetry” – the idea that downside risks are larger than upside gains. Consider a plane departing JFK for London, a trip scheduled to take 7 hours. Some things can happen to make the trip quicker (speedy departure, weather conditions, an available landing slot etc.), but only marginally; it would, for instance, not be possible to arrive in London after only an hour. In contrast, the asymmetry arises because there are many things that can delay the trip from mere minutes to infinity – again, weather events, mechanical failures, tech or communication problems.
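The asymmetry can be made concrete with a toy simulation (the distributional choices below are my assumptions, not Taleb’s): gains over the scheduled 7 hours are small and capped, while delays are rare but heavy-tailed.

```python
import random

random.seed(42)
SCHEDULED = 7 * 60  # scheduled JFK->London trip, in minutes

def simulated_trip_minutes():
    """One trip: small, bounded upside; long-tailed downside."""
    # Tailwinds, quick departure etc. can shave off at most ~20 minutes.
    gain = random.uniform(0, 20)
    # Delays: most trips have none, but when one hits it can be huge.
    if random.random() < 0.15:              # assumed 15% chance of a delay
        delay = random.expovariate(1 / 45)  # mean 45 min, occasionally hours
    else:
        delay = 0.0
    return SCHEDULED - gain + delay

trips = [simulated_trip_minutes() for _ in range(100_000)]
best, worst = min(trips), max(trips)
mean = sum(trips) / len(trips)

# The best case beats the schedule only marginally; the worst case
# exceeds it by hours -- that is the asymmetry.
print(f"best: {best:.0f} min, mean: {mean:.0f} min, worst: {worst:.0f} min")
```

The upside is bounded at roughly 20 minutes, while the downside has no practical cap – exactly the shape of Taleb’s argument.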

So, when airlines strive to make their services more efficient by minimizing turnaround time – Southwest’s legendary claim to fame – they hit Taleb’s antifragile asymmetry; getting rid of redundant time on the ground makes the process of on-loading and off-loading passengers fragile. Any little mistake can cause serious delays, delays that accumulate and domino their way through crowded airport networks.

Embracing redundancies would mean having more time between flights, with extra planes, extra mechanics and spare parts available at many airports. Clearly, airlines’ already brittle business model would crumble in a heartbeat.

The flipside of efficiency is Taleb’s redundancy. Without optimization, we constantly use more than we need, which effectively operates as a tax on all activity. Taleb would of course quibble with that, pointing out that the probability distribution of what “we need” must include Black Swan events that standard optimization arguments overlook.

That’s fine if one places as high a value on avoiding risk as Taleb does, and if the redundancies are indeed voluntarily paid for. If customers wanted to pay triple the money for airfares in order to avoid this or that delay, there would be a market for that – it just seems few people prefer paying that price to bearing the (low-probability) damage from delays.

Another example is the earthquake-proofing of buildings that Nate Silver discussed in The Signal and the Noise regarding the Gutenberg–Richter law (the reliably inverse relationship between the frequency and magnitude of earthquakes). Constructing buildings that can withstand a high-magnitude earthquake, say a one-in-three-hundred-year event, is something rich Californians or Japanese can afford – much less so a poor country like the Philippines. Yes, Taleb correctly argues, the poor country pays its earthquake expenses in a heightened risk of devastating damage.
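The Gutenberg–Richter law says that the annual number N of earthquakes of at least magnitude M falls off log-linearly: log10(N) = a − bM, with b typically near 1, so each whole magnitude step is roughly ten times rarer. A sketch with illustrative parameter values (a and b below are assumptions, not fitted to any real catalog):

```python
# Gutenberg-Richter: log10(N) = a - b*M, where N is the annual count of
# quakes of magnitude >= M. Parameters are illustrative, not fitted.
a, b = 4.0, 1.0

def annual_count(magnitude):
    """Expected number of quakes per year of at least this magnitude."""
    return 10 ** (a - b * magnitude)

for m in (5, 6, 7, 8):
    n = annual_count(m)
    print(f"M>={m}: ~{n:g} per year (one every {1 / n:g} years)")

# With b = 1, an M7 is ten times rarer than an M6 -- the inverse
# frequency-magnitude relationship Silver discusses.
ratio = annual_count(6) / annual_count(7)
```

With these numbers the one-in-three-hundred-year event the rich country builds against sits between M6 and M7 – rare enough that a poor country is tempted to skip the expense and carry the tail risk instead.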

Large redundancies, back-ups to back-ups, are great if you a) can afford them, and b) are risk-averse enough. Judging by his writing, Taleb is – ironically – far out along the right tail of risk aversion; most other people have more urgent needs to look after. That means occasionally “blowing up” and suffering hours and hours of airline delays, or collapsing buildings after an earthquake.

Taleb rarely considers these trade-offs, or the subjective value scales (or discount rates!) that differ between people. While Taleb may cherish his redundancies, most of us would rather trade them away for asymmetrically small gains.

Insurance is a relative assessment of price and risk. Keeping a reserve of redundancies is a subjective choice, not an objective necessity.

The Paradox of Prediction

In one of famous investor Howard Marks’ memos to clients of Oaktree Capital, the eccentric and successful fund manager hits on an interesting aspect of prediction markets and probability alike. In 1993 Marks wrote:

Being ‘right’ doesn’t lead to superior performance if the consensus forecast is also right. […] Extreme predictions are rarely right, but they’re the ones that make you big money.

Let’s unpack this.

In economics, the recent past is often a good indicator for the present: if GDP growth was 3% last quarter, it is likely around 3% the next quarter as well. Similarly, since CPI growth was 2.4% last year and 2.1% the year before, a reasonable forecast for CPI growth for 2019 is north of 2%.

If you forecast by extrapolation like this, you’d be right most of the time – but you wouldn’t make any money, neither in betting markets nor in financial markets. That is, Marks explains, because the consensus among forecasters is also hovering around extrapolations from the recent past (give or take some), and so buyers and sellers in these markets price assets accordingly. We don’t have to go as far as the semi-strong version of the Efficient Market Hypothesis, which claims that the best guesses from all publicly available information are already incorporated into the prices of securities, but the tendency is the same.
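A “persistence” forecast of this kind – just carrying the last observation forward – is trivial to state and, on smooth macro series, hard to beat. A sketch (the quarterly GDP figures are made up for illustration):

```python
# Hypothetical quarterly GDP growth rates, in percent.
gdp_growth = [2.9, 3.1, 3.0, 2.8, 3.2, 3.0, 2.9, 3.1]

# Persistence forecast: predict next quarter equals the last observed value.
forecasts = gdp_growth[:-1]   # forecasts for quarters 2..8
actuals = gdp_growth[1:]

errors = [abs(f - a) for f, a in zip(forecasts, actuals)]
mean_abs_error = sum(errors) / len(errors)

# On a stable series the naive forecast is "right" to within a few tenths
# of a point -- and precisely because anyone can do this, it is already
# priced in and earns nothing.
print(f"mean absolute error: {mean_abs_error:.2f} percentage points")
```

Being accurate in this sense and being paid for accuracy are two different things, which is Marks’ point.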

  • If you forecasted 5% GDP growth when most everyone else forecasted 3%, and the S&P500 increased by, say, 50% when everyone estimated +5%, you presumably made a lot more money than most – through, say, higher S&P500 exposure or insanely bullish leverage.
  • If you forecasted -5% GDP growth when most everyone else forecasted 3%, and the S&P500 fell 40% when everyone estimated +5%, you presumably made a lot more money than most by staying out of the S&P500 entirely (holding cash, bonds, gold etc.).

But if you look over time at the forecasts of people who predicted radically divergent outcomes, you’ll find that they predict radically divergent outcomes quite frequently – and so they are spectacularly wrong most of the time, since extrapolation is usually correct. But occasionally they do get it right. Hammering the point home, Marks says:

the fact that he was right once doesn’t tell you anything. The views of that forecaster would not be of any value to you unless he was right consistently. And nobody is right consistently in making deviant forecasts.

The forecasts that do make you serious money are those that radically deviate from the extrapolated past and/or current consensus. Once in a while – call it shocks, bubble mania or creative destruction – something large happens, and real-world outcomes land pretty far from the consensus predictions. If your forecast led you to act accordingly, and you happened to be right, you stand to make a lot of money.

Predicting the future development of markets thus puts us in an interesting position: the high-probability forecasts of an extrapolated recent past are fairly useless, since they cannot make an investor any money; the low-probability forecasts of radically deviant change can make you money, but there is no way to identify them among the quacks, charlatans and permabears. Indeed, the kind of people who accurately call radically deviant outcomes are the ones who frequently make such radically deviant projections, and whose track record of accurately forecasting the future is therefore close to zero.
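Marks’ point can be made concrete in a small Monte Carlo (the crash probability is an assumption for illustration): a permabear who predicts a crash every single year has a dismal hit rate, yet is guaranteed to have “called” every crash that occurs.

```python
import random

random.seed(7)
P_CRASH = 0.08   # assumed annual probability of a market crash
YEARS = 10_000   # simulate many forecaster-years

crashes = [random.random() < P_CRASH for _ in range(YEARS)]

# The permabear predicts "crash" every year: right only when one happens.
permabear_hit_rate = sum(crashes) / YEARS

# The consensus forecaster predicts "no crash" every year.
consensus_hit_rate = 1 - permabear_hit_rate

print(f"permabear right {permabear_hit_rate:.1%} of the time")
print(f"consensus right {consensus_hit_rate:.1%} of the time")
# The permabear is wrong roughly 92% of the time, yet after every crash
# he can truthfully say "I predicted this."
```

The consensus forecaster is right far more often; the permabear is the one quoted in the papers after the crash.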

Provocatively enough, Marks concludes that forecasting is not valuable, but I think the bigger lesson applies in a wider intellectual sense to everyone claiming to have predicted certain events (market collapses, financial crises etc).

No, you didn’t. You’re a consistently bullish over-optimist, a consistent doomsayer, or you got lucky; correctly calling 1 outcome out of 647 attempts is not indicative of your forecasting skills, and correctly calling 1 outcome on 1 attempt is called ‘luck’, even if it seems like an impressive feat. Indeed, once we realize that there are literally thousands of people doing this all the time, ex post there will invariably be somebody who *predicted* it.
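The “somebody will have predicted it” point is just the arithmetic of large numbers of guessers. If each of n independent forecasters has only a small chance p of calling a given rare event by luck, the chance that at least one of them does is 1 − (1 − p)^n, which approaches certainty quickly:

```python
def prob_someone_called_it(p, n):
    """P(at least one of n independent forecasters gets it right by luck)."""
    return 1 - (1 - p) ** n

# Reusing the 1-in-647 figure from above as the per-forecaster chance
# of calling the event by luck.
p = 1 / 647
for n in (100, 1000, 5000):
    print(f"n={n}: {prob_someone_called_it(p, n):.1%}")
```

With a few thousand pundits making calls, an ex post “successful prediction” is near-guaranteed – which is exactly why it carries no information.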

Stay skeptical.