Computational Economics is the Right Perspective

Here’s a vastly oversimplified picture of mainstream economics: We pick some phenomenon, assume all the context into the background, then build a model that isolates only the variables specifically relevant to that phenomenon.

Once you’ve simplified the problem that way, you can usually build a formal mathematical model, make a few more (hopefully) reasonable assumptions, and derive some strong ceteris paribus claims about your chosen phenomenon.

That’s a reasonable enough approach, but it doesn’t shed much light on big-picture issues. I’m interested in root causes, and this “reduce things to their component parts” approach doesn’t give a view wide enough to find those roots.

How do we broaden our perspective? One approach is to return to the more “literary” style of the pre-Samuelson days. A bit of philosophy of science has me convinced that the primary flaw of such an approach is rhetorical. Both written and mathematical arguments leave some assumptions in the background, but the latter are more convincing to a generation of economists trained to be distrustful of natural language (and too trusting of algebra).

As a pluralist, I think we should use as many approaches as we can. Different schools of thought allow you to build different imaginary worlds in your mind. But the computational approach isn’t getting enough play. I’d go so far as to say that agent-based modeling is the right form of mathematics for social science.

What does this mean? In a nutshell, it means modeling processes, simulating those processes, and seeing how interactions between different agents lead to different sorts of outcomes.
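To make that concrete, here’s a minimal sketch in Python of the kind of model I have in mind: Schelling’s classic segregation model. Every parameter below (grid size, tolerance threshold, number of rounds) is an assumption invented purely for illustration, not anything from a particular paper.

```python
import random

# Schelling-style segregation on a wrap-around grid. Two types of agents
# ("A" and "B") plus empty cells (None); unhappy agents move to an empty cell.
SIZE, TOLERANCE, ROUNDS = 20, 0.3, 50

grid = {(x, y): random.choice(["A", "B", None])
        for x in range(SIZE) for y in range(SIZE)}

def unhappy(cell):
    """An agent is unhappy if too few of its occupied neighbors share its type."""
    me = grid[cell]
    neighbors = [grid[((cell[0] + dx) % SIZE, (cell[1] + dy) % SIZE)]
                 for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                 if (dx, dy) != (0, 0)]
    occupied = [n for n in neighbors if n is not None]
    return bool(occupied) and sum(n == me for n in occupied) / len(occupied) < TOLERANCE

for _ in range(ROUNDS):
    movers = [c for c in grid if grid[c] is not None and unhappy(c)]
    empties = [c for c in grid if grid[c] is None]
    random.shuffle(movers)
    for cell in movers:
        if not empties:
            break
        dest = empties.pop(random.randrange(len(empties)))
        grid[dest], grid[cell] = grid[cell], None
        empties.append(cell)  # the vacated cell is now available
```

Each agent follows a mild heuristic (“move if fewer than 30% of my neighbors are like me”), yet the grid reliably ends up sharply segregated. The macro outcome lives in the interactions, not in any agent’s intentions, and simulation is what lets you watch it happen.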

A common trope among Emergent Order folks is that ants are individually stupid but collectively brilliant. Neoclassical economics runs into the opposite problem: individually brilliant agents who get trapped in Prisoner’s Dilemmas.

Computational economics starts with models that are more like ants than homo economicus. Agents are essentially bundles of heuristics/strategies in an out-of-equilibrium world. But these competing (and cooperating) strategies can interact in interesting ways. Each agent is part of every other agent’s environment, so the mix of strategies is a function of the success of each strategy, which is in turn a function of the mix of strategies in the environment.
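That feedback loop fits in a few lines of code. Here’s a hedged sketch, again with invented numbers: a hawk–dove game in which each strategy’s payoff depends on the current population mix, and the next generation’s mix depends on those payoffs (with a little mutation, so the pool of strategies can change over time).

```python
import random

V, C = 2.0, 3.0            # value of the contested resource, cost of a fight
POP, GENERATIONS = 1000, 40

def payoff(me, other):
    """Standard hawk-dove payoffs for the row player."""
    if me == "hawk":
        return (V - C) / 2 if other == "hawk" else V
    return 0.0 if other == "hawk" else V / 2

population = ["hawk"] * (POP // 2) + ["dove"] * (POP // 2)

for gen in range(GENERATIONS):
    # Each agent's score depends on the mix of strategies it happens to meet.
    scores = [sum(payoff(s, random.choice(population)) for _ in range(10))
              for s in population]
    # Reproduce roughly in proportion to success (scores shifted to be
    # positive), with a 1% mutation rate.
    floor = min(scores)
    weights = [sc - floor + 0.01 for sc in scores]
    population = [random.choice(["hawk", "dove"]) if random.random() < 0.01 else s
                  for s in random.choices(population, weights, k=POP)]
    if gen % 10 == 0:
        print(gen, population.count("hawk") / POP)
```

Neither strategy is “best” in isolation: hawks thrive among doves and suffer among other hawks, so the hawk share tends to hover around V/C (about two-thirds here) rather than going to zero or one. Success is a function of the mix, and the mix is a function of success.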

In essence, computational economics starts from what the mainline economists have long recognized: human society is a complex, interwoven, recursive process. The world is, essentially, a sort of meta-computer with a complex web of programs interacting and evolving. We don’t need to assume any sort of deus ex machina (that’s a bit of an overstatement, but we haven’t got time to explore it this week); we just need replicating entities that can change over time.

Such a view, to my mind, provides an end run around rationality assumptions: it can explain the brilliance of entrepreneurship (without making heroes out of the merely lucky) as well as the folly unearthed by behavioral economics (without the smugness). We’ve always known it. It’s all just evolution. But the methodology hasn’t made its way into the mainstream of economics. If there are any undergrads reading this on their way to a PhD program, let me know in the comments so I can point you in some interesting directions!

“Cybernetics in the Service of Communism”

In October 1961, just in time for the opening of the XXII Party Congress, a group of Soviet mathematicians, computer specialists, economists, linguists, and other scientists interested in mathematical modeling and computer simulation published a collection of papers called “Cybernetics in the Service of Communism”. In that collection they offered a wide variety of applications of computers to problems in science and in the national economy.

That passage is from this video interview with MIT lecturer (and historian) Vyacheslav Gerovitch, conducted by the website Serious Science. The interview is only 15 minutes long.

Libertarian IQ

I recently stumbled across an absolutely fascinating old essay from the early 1990s written by a libertarian activist. The activist, Stuart Reges, is a computer scientist currently at the University of Washington, and the essay is on the connection between intelligence and libertarianism.

Suffice it to say, many people cannot understand libertarianism simply because they cannot think in abstractions the way that libertarians seem to do. Computer programmers are another group characterized by high intelligence, and in his essay Mr. Reges draws an important connection between the two, with logic as the thread that ties them together. He writes:

The student in my hypothetical story displays the classic mistake of treating symptoms rather than solving problems. The student knows the program doesn’t work, so he tries to find a way to make it appear to work a little better. As in my example, without a proper model of computation, such fixes are likely to make the program worse rather than better. How can the student fix his program if he can’t reason in his head about what it is supposed to do versus what it is actually doing? He can’t. But for many people (I dare say for most people), they simply do not think of their program the way a programmer does. As a result, it is impossible for a programmer to explain to such a person how to find the problem in their code. I’m convinced after years of patiently trying to explain this to novices that most are just not used to thinking this way while a small group of other students seem to think this way automatically, without me having to explain it to them.

Let me try to start relating this to libertarian philosophy. Just as programmers have a model of computation, libertarians have what I call a model of interaction. Just as a programmer can “play computer” by simulating how specific lines of code will change program state, a libertarian can “play society” by simulating how specific actions will change societal state. The libertarian model of interaction cuts across economic, political, cultural, and social issues. For just about any given law, for example, a libertarian can tell you exactly how such a law will affect society (minimum wage laws create unemployment by setting a lower-bound on entry-level wages, drug prohibition artificially inflates drug prices which leads to violent turf wars, etc.). As another example, for any given social goal, a libertarian will be able to tell you the problems generated by having government try to achieve that goal and will tell you how such a goal can be achieved in a libertarian society.

I believe this is qualitatively different from other predictive models because of the breadth of the model and the focus on transitions (both of which are also true of programming).

Indeed. I should note here that ‘libertarian’ in Reges’s definition means libertarian, not Ron Paul Republican, self-declared Austrian economist, or dedicated follower of some dead economist. Those people give the rest of us a bad name by hiding behind the libertarian moniker to make flawed arguments and baseless assertions, knowing full well that if they made the exact same arguments as conservatives, nobody would take them seriously.

You can read the essay in its entirety below the fold.