The core assumption of economics is that people tend to do the thing that makes sense from their own perspective. Whatever utility function people are maximizing, it’s reasonable to assume (absent compelling arguments to the contrary) that a) they’re trying to get what they want, and b) they’re trying their best given what they know.
Which is to say: what people do is a function of their preferences and priors.
Politicians (and other marketers) know this; the political battle for hearts and minds is older than history. Where it gets timely is the role algorithms play in the Facebookification of politics.
The engineering decisions made by Facebook, Google, et al. shape the digital bubbles we form for ourselves. We’ve got access to infinite content online and it has to be sorted somehow. What we’ve been learning is that these decisions aren’t neutral because they implicitly decide how our priors will be updated.
This is a problem, but it’s not the root problem. Even worse, there’s no solution.
Consider one option: put you and me in charge of regulating social media algorithms. What will be the result? First we’ll have to find a way to avoid being corrupted by this power. Then we’ll have to figure out just what it is we’re doing. Then we’ll have to stay on top of all the people trying to game the system.
If we could perfectly regulate these algorithms we might do some genuine good. But we still won’t have eliminated the fundamental issue: free will.
Let’s think of this through an evolutionary lens. The algorithms that survive are those that are most consistent with users’ preferences (out of acceptable alternatives). Clickbait will (by definition) always have an edge. Confirmation bias isn’t going away any time soon. Thinking is hard and people don’t like it.
People will continue to choose news options they find compelling and trustworthy. Their preferences and priors are not the same as ours and they never will be. Highly educated people have been trying to make everyone else highly educated for generations and they haven’t succeeded yet.
A better approach is to quit this “Rock the Vote” nonsense and encourage more people to opt for benign neglect. Our problem isn’t that the algorithms make people into political hooligans, it’s that we keep trying to get them involved under the faulty assumption that people are unnaturally Vulcan-like. Yes, regular people ought to be sensible and civically engaged, but ought does not imply can.
2 thoughts on “We have seen the algorithm and it is us.”
I’m tempted to argue that human beings don’t have a single utility function. One of the many interesting things coming out of a serious scientific [as opposed to dogmatic] examination of human decision making is that preferences are formed within cognitive frames. Experiments in behavioral economics show that people are risk-averse and, depending on how choices are framed, will make different choices even when expected values are identical. In other words, “…a) they’re trying to get what they want…” depends on cognitive framing. It’s not just an issue of information (given what they know); it’s an issue of what they want.