Nightcap

  1. The strange relationship between virtue and violence Barbara King, Times Literary Supplement
  2. Nixon’s path to peace included bombing Cambodia Rick Brownell, Medium
  3. The suboptimality of the nation-state Branko Milanovic, globalinequality
  4. The threshold of land invasion Nick Nielsen, Grand Strategy Annex

Nightcap

  1. The intellectual distrust of democracy Jacob Levy, Niskanen
  2. Leave John Locke in the dustbin of history John Quiggin, Jacobin
  3. In defense of neoliberalism William Easterly, Boston Review
  4. The predated mind (our animal origins) Nick Nielsen, Grand Strategy Annex

Nightcap

  1. From the pussy hat to the liberty cap Marion Coutts, 1843
  2. Sweet waters grown salty Nathan Stone, Not Even Past
  3. A case for learning to read 17th century Dutch Julie van den Hout, JHIblog
  4. A danse macabre in Kermaria (Brittany) Kenan Malik, Pandaemonium

We have seen the algorithm and it is us.

The core assumption of economics is that people tend to do the thing that makes sense from their own perspective. Whatever utility function people are maximizing, it’s reasonable to assume (absent compelling arguments to the contrary) that a) they’re trying to get what they want, and b) they’re trying their best given what they know.

Which is to say: what people do is a function of their preferences and priors.
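As a toy illustration of that claim (every number and name here is invented for the sketch, not taken from any real model), an "economic agent" in this sense just picks the available action with the highest expected utility, where the utility table encodes preferences and the probabilities encode priors:

```python
# Toy model: behavior = f(preferences, priors).
# An agent picks the action with the highest expected utility,
# where the utility table encodes preferences and the
# probabilities encode priors. All numbers are invented.

def expected_utility(action, priors, utility):
    """Average the payoff of each outcome, weighted by belief."""
    return sum(p * utility[action][outcome] for outcome, p in priors.items())

def choose(actions, priors, utility):
    """Pick the action that makes sense from the agent's own perspective."""
    return max(actions, key=lambda a: expected_utility(a, priors, utility))

# Hypothetical decision: read an article or skip it, under
# uncertainty about whether it is informative or clickbait.
utility = {
    "read": {"informative": 10, "clickbait": -5},
    "skip": {"informative": 0,  "clickbait": 0},
}

skeptic  = {"informative": 0.2, "clickbait": 0.8}   # different priors...
believer = {"informative": 0.6, "clickbait": 0.4}

# ...so identical preferences produce different behavior.
print(choose(["read", "skip"], skeptic, utility))   # skip
print(choose(["read", "skip"], believer, utility))  # read
```

The point of the sketch: holding preferences fixed and changing only the priors flips the chosen action, which is exactly why whoever shapes the priors shapes the behavior.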

Politicians (and other marketers) know this; the political battle for hearts and minds is older than history. Where it gets timely is the role algorithms play in the Facebookification of politics.

The engineering decisions made by Facebook, Google, et al. shape the digital bubbles we form for ourselves. We’ve got access to infinite content online and it has to be sorted somehow. What we’ve been learning is that these decisions aren’t neutral because they implicitly decide how our priors will be updated.

This is a problem, but it’s not the root problem. Even worse, there’s no solution.

Consider one option: put you and me in charge of regulating social media algorithms. What will be the result? First we’ll have to find a way to avoid being corrupted by this power. Then we’ll have to figure out just what it is we’re doing. Then we’ll have to stay on top of all the people trying to game the system.

If we could perfectly regulate these algorithms we might do some genuine good. But we still won’t have eliminated the fundamental issue: free will.

Let’s think of this through an evolutionary lens. The algorithms that survive are those that are most consistent with users’ preferences (out of acceptable alternatives). Clickbait will (by definition) always have an edge. Confirmation bias isn’t going away any time soon. Thinking is hard and people don’t like it.
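To make that evolutionary framing concrete, here is a deliberately crude simulation (the items, click probabilities, and engagement numbers are all invented): each candidate ranking rule is scored by the clicks it generates, and the platform keeps the winner. The clickbait-favoring rule wins almost by construction, since "clickbait" just means the content most likely to be clicked:

```python
import random

random.seed(0)

# Each item has a click probability ("clickbait-ness") and a quality
# score. These numbers are invented for the sketch.
items = [{"click_p": random.random(), "quality": random.random()}
         for _ in range(100)]

def engagement(ranker, items, n_users=1000, feed_size=10):
    """Total clicks generated when users see the top of the ranked feed."""
    feed = sorted(items, key=ranker, reverse=True)[:feed_size]
    return sum(1 for _ in range(n_users)
                 for it in feed if random.random() < it["click_p"])

rankers = {
    "by_quality": lambda it: it["quality"],
    "by_clickbait": lambda it: it["click_p"],
}

# Selection step: the platform keeps whichever ranker maximizes engagement.
winner = max(rankers, key=lambda name: engagement(rankers[name], items))
print(winner)
```

No one at the hypothetical platform has to *want* clickbait; the selection pressure alone does the work, which is the sense in which the surviving algorithms are "most consistent with users' preferences."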

People will continue to choose news sources they find compelling and trustworthy. Their preferences and priors are not the same as ours, and they never will be. Highly educated people have been trying to make everyone else highly educated for generations and they haven’t succeeded yet.

A better approach is to quit this “Rock the Vote” nonsense and encourage more people to opt for benign neglect. Our problem isn’t that the algorithms make people into political hooligans; it’s that we keep trying to get them involved on the faulty assumption that people are Vulcan-like. Yes, regular people ought to be sensible and civically engaged, but ought does not imply can.

Nightcap

  1. The story of our species needs rewriting again Christopher Bae, Aeon
  2. Conjuring anthropology’s future Simon During, Public Books
  3. Picasso’s year of erotic torment Michael Prodger, New Statesman
  4. “What Have the Romans Ever Done For Us?” Robert Darby, Quillette

Deep Learning and Abstract Orders

It is well known that Friedrich Hayek once rejoiced at Noam Chomsky’s evolutionary theory of language, which held that the faculty of speech depends on a biological device with which human beings are endowed. There is no blank slate: our experience of the world relies on structures that precede experience itself.

Hayek would be delighted today to learn of recent findings on the importance of background knowledge in the arms race between human beings and Artificial Intelligence. When decisions must be made by trial and error inside a feedback system, humans are still ahead because they apply a framework of abstract patterns to interpret the connections among the system’s different elements. These patterns, acquired from previous experience in other closed systems, supply a semantic meaning for the new one. Thus humans outperform machines, which work as blank slates and take their information only from the closed system itself.

The report on the cited study closes with the commonplace question of what would happen if machines some day learn to handle abstract patterns of a higher degree of complexity, and thereby erase that relative human advantage.

As we have argued elsewhere, those abstract machines already exist: they are the legal codes and systems of law that equip their users with a set of patterns for interpreting controversies over human behaviour.

The question worth asking is not whether Artificial Intelligence will eventually surpass human beings, but which group of individuals will overcome the other: the one that uses the technology or the one that refuses to.

The answer seems obvious when “technology” means concrete machines, but it is less clear when the term is applied to abstract devices. I tried to weigh the latter problem when I outlined an imaginary arms race between policy wonks and lawyers.

Now we can extend these concepts to whole populations. Which nations will prevail over the others: the countries whose citizens are equipped with a set of abstract rules on which to base their decisions (the rule of law), or the despotic countries ruled by the whim of men?

The conclusion is obvious when the question is posed so starkly. The problem becomes subtler, though, when the disjunction is rule of law versus deliberate central planning.

The rule of law is the supplementary set of abstract patterns of conduct that gives sense to the events of social reality, allowing us to interpret human social action, including the actions of political authority.

Under central planning, those abstract patterns are replaced by a concrete model of society whose elements are defined by the authority (that, after all, is the main function of Thomas Hobbes’ Leviathan).

Superficially considered, the former – the rule of law as an abstract machine – looks irrational, while the latter – the Leviathan’s central planning – seems a rational construction of society. Our approach holds that, paradoxically, the more abstract the order of a society, the more rational the decisions and plans its individuals can undertake, since those decisions rest on the supplementary and general patterns provided by the law, whereas central planning offers individuals only a poorer set of concrete information, limiting their decisions to those based on expediency.

That is why we like to say that law is spontaneous. Not because nobody created it – in fact, someone did – but because law stands the test of time on its own, as the result of an evolutionary process in which populations that follow the rule of law outperform their rivals.

A long read

According to Instapaper this article at Wait But Why is a “139 minute read.” And it was time well spent.

It’s about a new Elon Musk venture, Neuralink, but there’s plenty of non-Musk material in there of interest. I’m agnostic on whether Elon Musk is or isn’t the next coming of the (anti-)Christ. What’s really interesting is the background material this article gives, building up a highly entertaining natural history of knowledge. The section below captures the main thrust of that story, but the whole thing is worth reading anyway.

[Image from the article: minimal tribal knowledge growth before language]


That leads into a discussion of how brains work, that “soft pudding you could scoop with a spoon.” Here are some excerpts:

I’m pretty sure that gaining control over your limbic system is both the definition of maturity and the core human struggle. It’s not that we would be better off without our limbic systems—limbic systems are half of what makes us distinctly human, and most of the fun of life is related to emotions and/or fulfilling your animal needs—it’s just that your limbic system doesn’t get that you live in a civilization, and if you let it run your life too much, it’ll quickly ruin your life.

And…

Which leads us to the creepiest diagram of this post: the homunculus.

The homunculus, created by pioneer neurosurgeon Wilder Penfield, visually displays how the motor and somatosensory cortices are mapped. The larger the body part in the diagram, the more of the cortex is dedicated to its movement or sense of touch. A couple interesting things about this:

First, it’s amazing that more of your brain is dedicated to the movement and feeling of your face and hands than to the rest of your body combined. This makes sense though—you need to make incredibly nuanced facial expressions and your hands need to be unbelievably dexterous, while the rest of your body—your shoulder, your knee, your back—can move and feel things much more crudely. This is why people can play the piano with their fingers but not with their toes.

Second, it’s interesting how the two cortices are basically dedicated to the same body parts, in the same proportions. I never really thought about the fact that the same parts of your body you need to have a lot of movement control over tend to also be the most sensitive to touch.

Finally, I came across this shit and I’ve been living with it ever since—so now you have to too. A 3-dimensional homunculus man.

Whether it all holds up is too far outside my area of specialization for me to say, but it’s certainly an entertaining read that fits with what I know about these topics (although the section on neurology could be made up for all I know).

From there it builds up to the moon shot idea Musk apparently has in mind: building the core technology for a high bandwidth mind-computer interface. This would be the ultimate logical extreme of a trend towards better interfaces that’s been going on since before punch cards. If you think over the natural history of knowledge, it becomes clear that this idea is ultimately just a few dozen steps further down a path we’ve been on for billions of years.

And the implications of taking it that far are profound. The pros and cons of that power are huge. Consider how much more powerful your brain is with paper and pencil than without. Or with a computer. Or with a computer with a GUI and a copy of Excel. Once you can plug into your computer Matrix-style, all those awesome hot keys that let you zip through your computer like a pro will look like roller skates next to a rocket sled. And the two-way link would mean we could genuinely exercise some self-control… for example, by running a computer program that zaps you when you eat too much chocolate cake.
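Purely as a thought experiment, that chocolate-cake program is just a threshold rule in a control loop. The `read_craving` and `zap` interfaces below are imaginary stand-ins for the hypothetical two-way link, and the readings are canned:

```python
# Thought experiment only: a closed-loop "self-control" program.
# read_craving() and zap() stand in for the imaginary two-way
# neural link; nothing here talks to real hardware.

def self_control_step(read_craving, zap, threshold=0.8):
    """Deliver a corrective signal whenever the measured impulse
    exceeds what the 'rational' self signed up for."""
    signal = read_craving()          # downlink: brain -> computer
    if signal > threshold:
        zap()                        # uplink: computer -> brain
        return "zapped"
    return "ok"

# Simulated run with canned readings instead of a real sensor.
readings = iter([0.3, 0.9])
log = [self_control_step(lambda: next(readings), lambda: None)
       for _ in range(2)]
print(log)  # ['ok', 'zapped']
```

The interesting design question is who sets `threshold`: the same limbic system the loop is supposed to police, or the deliberate self that wrote the program.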

Readers of this blog will already hear Hayek’s warning: such a device gives you a lot of power to manipulate a complicated thing in ways we may never be able to understand.

But this type of personal power might be a necessary bulwark against government or corporate power. Network externalities have already locked us into Google and Facebook. A Byzantine government has created rent-seeking opportunities that put enormous power in the hands of the politically connected. The NSA is terrifying. And machine learning will continue to improve, giving those entrenched players even more ability to understand and manipulate large numbers of people. (I’m not endorsing this forecast, just listing it as a possibility.)

In any case, even if Neuralink is just an April Fool’s joke I missed out on till now, this article provides a theory of knowledge that’s well worth reading.

In the near future I’ll argue why you need such a theory of knowledge. Stay tuned.