The Roots of Truth and the Roots of Knowledge

John Oliver raises a Hayekian point on the roots of knowledge:

Just because they believed you and you believed them, doesn’t make it true! This isn’t like Peter Pan where believing in fairies will keep Tinker Bell alive. This isn’t a magic thing Peter, she has Lou Gehrig’s Disease.

He’s rightly picking on Donald Trump, who has a) been a particularly bad epistemologist, and b) should be held to a higher standard because he’s the president.

But the truth is that we’re all in the same boat: we believe what we hear from what we believe are reputable sources (because we heard those sources were reputable from sources we believed to be reputable). Most of our knowledge we take on faith from other people. In essence, we can’t simply know the truth in a vacuum; we depend on the context created by our culture, language, and personal experience. It’s only by trusting others that we can stand on the shoulders of giants.

What’s so special about science is that the standards are higher than in other domains. Knowledge has been carefully curated over generations by fallible humans engaged in a particular subculture of society. To the extent science makes good predictions, it creates value in society, and to the extent it can verify and capture that value, its practitioners get funding and get taken (mostly) seriously by the educated public.

You might notice that there are many places where science can go wrong. And the history of science is replete with blind alleys and shameful episodes. But also glorious advances in our knowledge, capability, and humanity. The same is true of all areas of life that deal with knowledge, from politics and journalism to how you clean your kitchen. To the extent we see both competition and cooperation (in a variety of institutional forms) we will tend to see knowledge and truth converge. (I think.)

In this respect, we’re all, essentially, in the same boat. We should expect fallibility and adopt a humble attitude. As surely as I want to believe John Oliver’s portrayal of current events (most of the time), I’m not about to fly to DC to check things out for myself.

Because, this isn’t about belief, it can’t be… Faith and Fact aren’t like Bill Pullman and Bill Paxton. When you confuse them it actually matters. Real people get hurt when you make policy based on false information.

We face trade-offs when it comes to knowledge. Received wisdom might be correct enough to operate a bed and breakfast. But we’ve created real fragility in our political system by vesting so much power in the White House. It means the standard of truth has to be so high that not even a crazed billionaire hell-bent on becoming president (a segment of society usually celebrated for its levelheadedness!) can be trusted to meet it.

Let me sum up:

  1. Our knowledge is always based on the trust we place in others. As such we can be more or less certain about anything we might know. I am very certain (say, 1 − 10^-100) that gravity exists and keeps me rooted to the earth, but less certain (0.05) that I am organizing my bookshelves correctly.
  2. We can, and do, have different standards of truth in different areas of our lives. I don’t make any important decisions that don’t account for the severity of gravity. But I’m not going to sweat it if I put a new book on an inappropriate shelf.
  3. We absolutely need to hold our government to very high standards. Nuclear weapons are scary, but lesser powers also call for very high standards. The level of certainty I’d insist on for nukes is at least an order of magnitude higher than the level for regulating pollution. But the level of certainty for the latter is orders of magnitude higher than might be possible under alternative arrangements.
  4. At the same time, we have to accept our own fallibility, particularly when it comes to our ability to accurately know the truth. But that’s no reason to be nihilistic; it should inspire a striving for constant improvement in general (while making the appropriate trade-offs on the margin).

Hitching to Sweden at 17

Below is an excerpt from my book I Used to Be French: an Immature Autobiography. You can buy it on Amazon here.

Anyway, on that first hitching trip to Sweden, I had some hard times because I looked ordinary. Hitchhiking long distance, it turns out, serves to steel you against a childish sense of injustice that lingers long after childhood among some people. On one occasion, somewhere in the northern Netherlands, I had been standing on the side of the road for two hours when two Highlanders in full kilt regalia including furry sporran passed me and politely started signaling from behind me. (That is, they took their proper place in the queue.) Soon, they were followed by a guy in complete white Saharan outfit, with turban, white flowing robes, and all. The Highlanders were gone in ten minutes. Before the Saharan got picked up, he had the time to confirm that he lived in Paris, that he did not dress that way to work or university, and that he had bought the outfit for the specific purpose of hitchhiking to Sweden to meet girls. I felt under-dressed in my plain cotton pants and my short-sleeved blue shirt. Later, I made myself understand that my travel slowness was not a result of bad luck but of inferior knowledge and skills.

We can’t engineer our way out of this

Folks on the left have been getting more interested in science lately (though history tells us that might be something to worry about). They’re right to celebrate the incredible results of scientific progress–but scientific victory isn’t uniform across disciplines.

In some areas (including just about all areas on the cutting edge), scientists disagree with one another. It’s a big, complex world we live in, and we don’t understand it fully. That disagreement doesn’t mean we should discount science entirely, but it does mean we should be careful with it.

Imagine a world where engineers disagreed about the capabilities of their techniques and the strength of the materials they use. Some might be beholden to special interests (which gives me an idea for a public choice version of the Three Little Pigs); others might be dogmatic or superstitious. But even setting aside such systemic issues, we should be hesitant to try to get to the moon. That disagreement should tell us that we aren’t certain enough in our knowledge to make anyone but volunteers put their lives in the hands of those engineers.

Social scientists in particular frequently disagree with each other. Most are trying earnestly to apply the scientific way of thinking to understanding the social world, and it’s worth considering their viewpoints. But applying that knowledge should only be done in a decentralized way. Applying the incredible insights of behavioral economics from the top down is appealing, but it’s probably best to do it piecemeal.

Social engineering and social science are harder than physical engineering and the physical sciences. Part of the problem Western governments face is that they’re trying to engage in social engineering, and politicians keep promising greater degrees of it to improve the well-being of their constituents.

The trouble is two-fold: 1) those social engineering techniques aren’t good enough, even if they’re sometimes appealing. 2) The cost to decision makers of buying snake oil is too low in voting booths.

To my friends who are looking to the government to make things better: whether your hope is for government to help people be better versions of themselves, or to stop bad guys, we should push as much of that policy to the local level as possible. It might be nice if the whole country were more like Berkeley or Salt Lake City, but trying to make it happen at the national level is a recipe for conflict, disorder, and doing more harm than good. Keep policy local.

Summing up: the year of irrationality

Brandon says I’ve got one last chance to write his favorite post of the year. But it’s the end of a long semester and I’m brain dead, so I’m just going to free ride on his idea: a year end review. If I were to sum up the theme of this year in a word, that word would be irrational.

After 21 months of god awful presidential campaigning, we were finally left with a classic Kodos vs Kang election. The Democrats were certain that they could put forward any turd sandwich and beat Trump, but they ultimately lost out to populist outrage. Similar themes played out with Brexit, but I don’t know enough to comment.

Irrationality explains the Democrats, the Republicans, and the country as a whole. The world is complex, but big decisions have been made by simple people.

We aren’t equipped to manage the world’s complexity.

We aren’t made to have direct access to The Truth; we’re built to survive, so we get a filtered version of the truth that has tended to keep our ancestors out of trouble long enough to get laid. In other words, what seems sensible to each of us, may or may not be the truth. What we see with our own eyes may not be worth believing. We need more than simple observation to actually ferret out The Truth.

Our imperfect perceptions build on imperfect reasoning faculties to make imperfect folk economics. But what sounds sensible often overlooks important moving parts.

For every complex problem there is an answer that is clear, simple, and wrong.

Only a small minority of the population will ever have a strong grasp on any particularly complex thing. As surely as my mechanic will never become an expert in economics, I will never be able to do any real work on my car. The trouble arises when we expect me or my mechanic to try to run the country. The same logic applies to politicians, whose job (contrary to what your civics teacher thinks) is to get re-elected, not to be a master applied social scientist. (And as awful as democracy is, the alternative is just some other form of political competition… there is no philosopher king.)

But, of course, our imperfect perception and reasoning have gotten us this far. They’ve pulled us out of caves and onto the 100th floor of a skyscraper*. Because in many cases we get good enough feedback to learn a lot about how to accomplish things in our mysterious universe.

We’re limited in what we can do, but sometimes it’s worth trying something. The trouble is, I can do things that benefit me at your expense. And this is especially true in politics (also pollution–what they have in common is hot air!). But it’s not just the politicians who create externalities, it’s the electorate. The costs of my voting to outlaw gravity (the simplest way for me to lose a few pounds) are nil. But when too many of us share the same hare-brained idea, we can do some real harm. And many people share bad ideas that have real consequences.

Voting isn’t the only way to be politically engaged, and we face a similar problem in political discourse in general. A lot of Democrats are being sore losers about this election rather than learning and adapting. Trump promised he would have done the same had he lost. We’re basically doomed to have low-quality political discourse. It’s easy and feels (relatively) good to bemoan that the whole world is going to hell.

We’re facing rational irrationality. Everyone is simply counting on someone else to get their shit together, because each of us individually is more comfortable with our heads firmly up our asses.

It’s a classic tragedy of the commons and it should prompt us to find some way to minimize the harm of our lousy politics. We’ve been getting better at this over the centuries. Democracy means the levers of power can change hands peacefully. Liberalization has entailed extending civil and economic rights to a wider range of people. We need to continue in this vein. More freedom has allowed more peace and prosperity.


So what do we do? I’d argue that we should focus on general rules rather than trying to have flawed voters pick flawed politicians and hope for the best. I don’t mean “make all X following specifications a, b, and c.” I mean: if you’re mad, try to sue someone. We don’t need dense and exploitable regulations. We don’t need new commissions. We just need a way for people to deal with problems as they arise. Mind you, our court system (like the rest of our government) isn’t quite ready for a more sensible world. But we can’t be afraid to be a little Utopian when we’re planning for the long run. Anyway, let’s get back to my main point…

We live in an irrational world. And it makes sense that it’s that way; rationality is hard. We can see irrationality all around us, but we see it most where it’s cheapest: politics and Facebook. The trouble is, sometimes little harmless irrational acts add up to cause real harm. Let’s admit we’ve got a problem with irrationality in politics so we can get better.

*Although that’s only literally true in 17 cases.

Asking the Wrong Question

How do the United States and others achieve victory against Islamic State without empowering sectarian actors who will seek to poison the reconciliation that Iraq needs to hang together?

That’s the question posed by Craig Whiteside, an associate professor of Theater Security Decision Making for the Naval War College at the Naval Postgraduate School, over at War on the Rocks. Dr. Whiteside’s recommendations (“avoid all cooperation with sectarian militias, continue to target Islamic State with minimal collateral damage, patiently train and equip the security forces, ensure it’s done by Iraqis with subtle, behind the scenes help”) are just what you’d expect from a military strategist with a PhD, but his question highlights well what’s wrong with current thinking on non-state actors in Washington, and it also explains why central planning fails in areas other than managing an economy.

Whiteside’s line of thought is pretty standard, and it goes something like this: Islamic State is bad and Iraq is good. Islamic State is bad not because it lawlessly slaughters more people than Iraq (obviously not true, especially when you account for the Hussein regime), but because it is a non-state actor with political, economic, cultural, and military capabilities that threaten the existence of state actors. Hence his worry over how to defeat Islamic State while still keeping Iraq in one piece.

This is a terrible way to think about international relations and strategy, and it governs the logic of the republic’s finest thinkers.

Why not think about the situation in the Levant in the following way instead:

There is a “world order” of sorts that is composed of states. The states themselves have been patched together over the course of centuries. The world order itself has been patched together over the course of centuries.

Iraq is a state that was patched together by the UK and France, in accordance with the logic of the world order at the time. Thus, Iraq was able to become a legitimate member of organizations like the UN, FIFA, OPEC, etc. However, because Iraq was patched together by the world order rather than by the people of Iraq (acting through contentious factions), it can only ever “hang together” under a regime governed by a strong man.

The appearance of Islamic State in what is now Iraq is just an attempt by Iraqis to govern themselves. Islamic State is an attempt, made possible by the power vacuum left by the invasion and occupation of the US and its allies, to join the world order (hence the “state” in Islamic State). It’s a horrible attempt, which is just what you’d expect from a people who have likely never had a chance to experiment in self-governance. Nevertheless, people in what is now Iraq are trying to patch together their own states.

The world order should recognize these attempts instead of trying to maintain the status quo. Change can be a good thing. As an example, just compare the brutality of the Hussein regime, a legitimate state actor, with that of Islamic State. It’s not even a contest, especially in terms of people murdered.

Wouldn’t recognizing Whiteside’s “sectarian actors,” instead of seeking to isolate or destroy them, be a much better avenue to peace and prosperity in the region? Recognition by the world order, haphazardly and pragmatically patched together itself, would bestow responsibility onto non-state actors. It would signal a trust in the ability of Iraqis to govern themselves. It would help to rationalize diplomacy and trade in the region. And it would put an end to the vicious cycle of strong men in the Middle East.

Instead of asking what the US and its allies can do to eliminate violent non-state actors from the region, isn’t it time to start asking what the West can do, as equal partners, to facilitate more self-governance in the Levant?

That central planning suffers from a knowledge problem is a given in many elite economics circles today (even economists at the Federal Reserve recognize it), but I don’t think this argument has extended into other fields of thought or other bureaucracies yet. A fatal conceit indeed.

The selfish meme

There are two senses in which to consider the phrase.

  1. The sense in which memes enter or exit our minds.
  2. The sorts of behavior encouraged by our memes.

For those who don’t know what I’m talking about:

A meme (/ˈmiːm/ MEEM) is “an idea, behavior, or style that spreads from person to person within a culture”.

Richard Dawkins introduced the idea in his famous book The Selfish Gene. The bulk of the book examines the gene as the basic unit of analysis in evolutionary studies. He introduces the meme as a different form of replicator. Both genes and memes will only be reflected in the outcomes of biological and cultural evolution if they exhibit fitness, that is, if they are able to survive and replicate.

So the cultural traditions that helped hunter-gatherer societies survive droughts or harsh winters tended to survive and spread. Over time a culture accumulates this sort of practical, tacit knowledge. (Side note: this week’s Econtalk has Cesar Hidalgo, who does really interesting work trying to indirectly measure the presence of such tacit knowledge in market economies.) Culture, then, is made up of memes the same way organisms are made up of genes.

Looking at genes as the unit of analysis (as opposed to the organism) explains some otherwise mysterious behavior. It provides a plausible explanation of altruism: we care for our children more than anyone else because 50% of their genes are our genes. A nephew is still precious, but less important to us because the expected fraction of shared genes is 25%. A gene that prompts you to protect your children is likely to survive longer than one that doesn’t. (And genes that hang around with such kin-protecting genes are also more fit than their competition.) A gene that prompts you to be kind to neighbors makes sense when you live in small groups. But a gene that prompts you to be kind to total strangers might be a liability in a world where strangers are dangerous.
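That kin-selection arithmetic can be sketched in a few lines. Below is an illustrative toy, a minimal sketch of Hamilton’s rule, which says an altruism gene spreads when relatedness times benefit exceeds cost; the relatedness coefficients are standard textbook values for diploid species, and the benefit/cost numbers are hypothetical:

```python
# Toy illustration of kin selection (Hamilton's rule): an altruistic act
# is favored by selection when r * B > C, where r is genetic relatedness,
# B the fitness benefit to the relative, and C the cost to the actor.

RELATEDNESS = {
    "child": 0.5,     # you share an expected 50% of your variable genes
    "sibling": 0.5,
    "nephew": 0.25,
    "cousin": 0.125,
    "stranger": 0.0,
}

def altruism_favored(relative: str, benefit: float, cost: float) -> bool:
    """Return True if Hamilton's rule predicts the altruism gene spreads."""
    return RELATEDNESS[relative] * benefit > cost

# A costly act (cost 1) that adds 3 units of fitness to a relative is
# favored toward a child (0.5 * 3 > 1) but not toward a cousin.
print(altruism_favored("child", benefit=3, cost=1))    # True
print(altruism_favored("cousin", benefit=3, cost=1))   # False
```

The same inequality explains why the hypothetical "kind to strangers" gene struggles: with r = 0, no benefit is large enough to cover any positive cost.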

Cultural evolution certainly makes sense as a gradual mutation and merging of different cultural practices into what is called (and perceived as) a unique body of culture. Culture is a complex of knowledge, ideas, basic assumptions, and social interface protocols, and it’s deeply embedded in how we engage with the world. (Perhaps we can’t remember our infancy because we didn’t yet have a cultural lens through which to reference anything to anything else…) One thing I’m sure we’ve all noticed is that it can be almost painful to have to reject a cherished belief. It’s difficult even to see one of these memes challenged.

Genes can be small bits of genetic code, something simple like “make this enzyme when you get a chance.” But as a unit of replication, consider the smallest discrete chunk of genetic code that replicates itself. If a particular pattern isn’t fit, it leaves the gene pool, while fit collections of genetic instructions spread. So you might end up with long, complex strings of genetic material akin to a computer program. Initially simple scripts gather into successful collections of genes that work well together. The “produce stomach acid” gene works well with the “produce a stomach” gene, and soon the two are virtually inseparable. They’ve become a simple script: “Do this, then that, then maybe this other thing.” Scripts gather into multi-cellular organisms with different functions that can respond differently to different stimuli. Soon you’ve got a complex set of code as your replicating unit.

More complex genes are necessary to prompt more complex behavior. It’s worth noting that Dawkins’s theoretical framework sometimes looks like a hyper-rational economics model: an Evolutionarily Stable Strategy is a Nash equilibrium that is robust to invaders. Such strategies occupy a niche and survive. But this evolution happens in a context of increasing complexity. The system is learning*. It isn’t an instantaneous process**, but it is gradually becoming more sophisticated.
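The ESS idea can be made concrete with a toy simulation. This is a minimal sketch, not anything from Dawkins directly, of replicator dynamics in the classic Hawk-Dove game; the values of V and C and the positive-shift trick are my own illustrative choices:

```python
# An Evolutionarily Stable Strategy (ESS) found by replicator dynamics
# in the classic Hawk-Dove game. With resource value V and fighting
# cost C > V, the ESS is a mixed population with a fraction V/C of
# hawks: a Nash equilibrium that is robust to invaders.

V, C = 2.0, 4.0  # illustrative resource value and cost of losing a fight

PAYOFF = {
    ("hawk", "hawk"): (V - C) / 2,  # half the time you win, half you pay C
    ("hawk", "dove"): V,            # the dove backs down
    ("dove", "hawk"): 0.0,
    ("dove", "dove"): V / 2,        # share the resource peacefully
}

SHIFT = C  # add a constant so fitnesses stay positive; equilibria unchanged

def step(p_hawk: float) -> float:
    """One generation of discrete replicator dynamics on the hawk share."""
    f_hawk = SHIFT + p_hawk * PAYOFF[("hawk", "hawk")] + (1 - p_hawk) * PAYOFF[("hawk", "dove")]
    f_dove = SHIFT + p_hawk * PAYOFF[("dove", "hawk")] + (1 - p_hawk) * PAYOFF[("dove", "dove")]
    mean = p_hawk * f_hawk + (1 - p_hawk) * f_dove
    return p_hawk * f_hawk / mean  # strategies grow in proportion to fitness

p = 0.9  # start hawk-heavy; any interior starting point converges
for _ in range(500):
    p = step(p)
print(round(p, 3))  # settles near the ESS, V / C = 0.5
```

An all-hawk population is invadable by doves (hawks mostly fight each other at a loss), and an all-dove population is invadable by hawks, so the population settles at the mixed equilibrium where both strategies earn the same fitness.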

A complex gene will get bugs due to random mutation, but as long as it’s still generally fit, it will survive. And over time, more subtle and sophisticated programs identify new niches. And we get plant genes surviving by filling the “eat sunlight” niche and animals in the “eat plants” niche, and bacteria co-evolving with animals’ digestive systems.

Slowly working through this long, blind, random process, genes that survive this hostile environment develop behaviors that help them flourish (the “four F’s” of evolution: fighting, fleeing, feeding, and reproduction). Gradually they stumble into opportunities, and an important one was social behavior.

More and more complexity, round and round, until we start to get our first little bits of sentience. I’ve been watching a chinchilla hop around my apartment for a couple weeks now and I’m astonished by how much effort she puts into genuinely exploring her world. She tests objects for structural integrity and learns what she can and can’t jump on. She tests boxes with her teeth, I don’t know what for. She’s distinctly learning and not merely existing or surviving. She’s comfortable and does not know fear (I’ve seen her scare one particularly wussy cat). That sort of behavior requires a great deal of complexity which requires a great deal of genetic material.

I’m noticing as I write this that the biggest gene (i.e., discrete, replicated set of genetic code) must be the very large collection of genetic patterns that must come together for one’s offspring to simply be the same species.*** I’ve heard that humans and chimps share 94 percent of our genetic material. That overlap suggests that some even larger percentage is what makes us human at all. The difference between any two individuals, then, must lie in a very small portion of their total genetic makeup. This small portion is where genetic competition occurs in the arena of sexual reproduction.

In any case, our first memes (behaviors) seem to be transmitted biologically. Later, with more complex genes, we are able to replicate more complex behaviors. Eventually, we get complex enough to build up a sense of consciousness****.

A complex enough gene might have a subroutine that sets off an error, something like the pain our consciousness experiences when things are going poorly*. And likewise for a meme. Though more likely, the error is being returned by our psychology. (If our genes are assembly language, our psychology is the operating system, and culture is the mess of basic programs that make up our desktop environment.)

When we think of memes as self-replicating units, interesting questions arise: what sort of patterns will be robust to competition? Which will occupy what niches? What happens when incompatible memes come together in one mind? What sort of eusocial behaviors are possible? How much do our memes govern our behavior? (This is where nature and nurture overlap.)

Obviously one possibility is a “selfishness meme,” or a culture that hits an equilibrium of distrust. But there are many others, and how they combine matters. At this level we’re essentially asking questions about psychology, culture, and institutions; the fodder of all the social sciences comes together here. Different memes are transmitted in different ways (which is perhaps what defines the disciplines), but any of them may be complex enough to have a defense mechanism: activating other processes (including other memes, perhaps), making people feel anger and related emotions when someone questions their beliefs, even pushing people to fight for their memes with their lives.

*We’re computers, markets are computers, societies are computers, the ecosystem is a computer, Earth is just a big giant computer. It processes data and creates new data.

** The next Hayek rap should include the phrase “it’s spontaneous order, not instantaneous…”

*** I could imagine it as made up of some set of smaller genes in some complex, rather than one monolithic gene but I don’t have the language to communicate that idea concisely.

**** And it must be noted that this consciousness is built out of parts designed for the poop-and-panic machines that were our evolutionary ancestors. It’s like building a supercomputer* out of a truckload of Pez dispensers and a warehouse full of chainsaws. And yet, how else could it be done?

BC’s weekend reads

  1. Worldwide weeds
  2. The Mushroom That Explains the World
  3. …True Tales of Dharma, Demons, and Darwin
  4. From Spain to the New World via Florence and Vermont (be sure to scroll through the ‘comments’ thread)
  5. Time for Bolivians to Forget about the Sea (weak, but a good starting point for a discussion)
  6. Dissolution of the Templars