Here’s a vastly oversimplified picture of mainstream economics: you pick some phenomenon, push all the context into the background, then build a model that isolates only the variables specifically relevant to that phenomenon.
Once you’ve simplified the problem that way, you can usually build a formal mathematical model, make a few more (hopefully) reasonable assumptions, and make some strong ceteris paribus claims about your chosen phenomenon.
That’s a reasonable enough approach, but it doesn’t shed much light on big picture issues. I’m interested in root causes, and this “reduce things to their component parts” approach doesn’t give enough of a big picture to find those roots.
How do we broaden our perspective? One approach is to return to the more “literary” approach of the pre-Samuelson days. A bit of philosophy of science has me convinced that the primary flaw of such an approach is rhetorical. Written and mathematical arguments both leave some assumptions in the background, but the latter are more convincing to a generation of economists trained to be distrustful of natural language (and too trusting of algebra).
As a pluralist, I think we should use as many approaches as we can. Different schools of thought allow you to build different imaginary worlds in your mind. But the computational approach isn’t getting enough play. I’d go so far as to say that agent-based modeling is the right form of mathematics for social science.
What does this mean? In a nutshell, it means modeling processes, simulating those processes, and seeing how interactions between different agents lead to different sorts of outcomes.
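That recipe fits in a few dozen lines. Here’s a toy Schelling-style segregation model as an illustration (my own example, not anything from the literature I’m summarizing above; the grid size, thresholds, and population shares are all arbitrary choices for the sketch). Each agent follows one heuristic — move if fewer than 30% of your neighbours are your type — and the interaction of those individual moves produces a collective outcome no single agent was aiming at.

```python
import random

random.seed(0)

SIZE = 20          # 20x20 grid
EMPTY_FRAC = 0.1   # share of empty cells (assumed)
THRESHOLD = 0.3    # an agent is content if >= 30% of neighbours match it

def make_grid():
    """Randomly fill the grid with two agent types and some empty cells."""
    cells = []
    for _ in range(SIZE * SIZE):
        r = random.random()
        cells.append(None if r < EMPTY_FRAC else ('A' if r < 0.55 else 'B'))
    return cells

def neighbours(grid, i):
    """Occupied cells in the Moore neighbourhood of cell i."""
    x, y = i % SIZE, i // SIZE
    out = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if dx == dy == 0:
                continue
            nx, ny = x + dx, y + dy
            if 0 <= nx < SIZE and 0 <= ny < SIZE:
                out.append(grid[ny * SIZE + nx])
    return [c for c in out if c is not None]

def unhappy(grid, i):
    ns = neighbours(grid, i)
    if not ns:
        return False
    same = sum(1 for c in ns if c == grid[i])
    return same / len(ns) < THRESHOLD

def step(grid):
    """One sweep: every unhappy agent jumps to a random empty cell."""
    movers = [i for i, c in enumerate(grid) if c and unhappy(grid, i)]
    empties = [i for i, c in enumerate(grid) if c is None]
    random.shuffle(movers)
    for i in movers:
        if not empties:
            break
        j = empties.pop(random.randrange(len(empties)))
        grid[j], grid[i] = grid[i], None
        empties.append(i)
    return len(movers)

def similarity(grid):
    """Average share of an agent's neighbours that match its own type."""
    scores = []
    for i, c in enumerate(grid):
        if c is None:
            continue
        ns = neighbours(grid, i)
        if ns:
            scores.append(sum(1 for n in ns if n == c) / len(ns))
    return sum(scores) / len(scores)

grid = make_grid()
n_agents = sum(1 for c in grid if c is not None)
initial_sim = similarity(grid)
print(f"initial similarity: {initial_sim:.2f}")
for t in range(50):
    if step(grid) == 0:   # stop once every agent is content
        break
print(f"final similarity:   {similarity(grid):.2f}")
```

Typically the average same-type-neighbour share starts near 0.5 and ends well above the 0.3 any individual demands — macro segregation from mildly tolerant micro preferences, which is exactly the kind of “interactions lead to outcomes” result agent-based models are after.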
A common trope among Emergent Order folks is how ants are individually stupid but collectively brilliant. Neoclassical economics runs into the opposite problem: individually brilliant agents who get trapped in Prisoners’ Dilemmas.
Computational economics starts with models that are more like ants than homo economicus. Agents are essentially bundles of heuristics/strategies in an out-of-equilibrium world. But these competing (and cooperating) strategies can interact in interesting ways. Each agent is part of every other agent’s environment, so the mix of strategies is a function of the success of those strategies, which is in turn a function of the mix of strategies in the environment.
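That feedback loop — success depends on the mix, the mix shifts toward success — is easy to simulate directly. Here’s a minimal sketch (again my own illustration; the payoff numbers and the imitate-if-better rule are assumptions, not anyone’s canonical model) using the classic hawk–dove game: each agent plays one round against a random partner, then copies the strategy of a randomly chosen agent who scored better.

```python
import random

random.seed(1)

V, C = 4.0, 6.0  # value of the contested resource, cost of a fight (assumed)

def payoff(me, other):
    """Standard hawk-dove payoffs."""
    if me == 'H':
        return (V - C) / 2 if other == 'H' else V
    return 0.0 if other == 'H' else V / 2

N = 1000
pop = ['H'] * (N // 10) + ['D'] * (N - N // 10)  # start hawk-scarce

for generation in range(200):
    random.shuffle(pop)
    scores = []
    for a, b in zip(pop[::2], pop[1::2]):  # random pairwise matching
        scores.append((payoff(a, b), a))
        scores.append((payoff(b, a), b))
    # Imitation: each agent compares itself to one random agent and
    # copies that agent's strategy if it scored strictly better.
    new_pop = []
    for score, strat in scores:
        rival_score, rival_strat = random.choice(scores)
        new_pop.append(rival_strat if rival_score > score else strat)
    pop = new_pop

hawk_frac = pop.count('H') / len(pop)
# V/C = 2/3 is the theoretical mixed equilibrium for these payoffs
print(f"hawk share after 200 generations: {hawk_frac:.2f}")
```

When hawks are rare they prosper and spread; as they spread their fights get costlier and doves claw back share — so the population drifts toward the interior mix rather than settling on either pure strategy. No agent optimizes anything; the “equilibrium” lives in the population, not in anyone’s head.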
In essence, computational economics starts from what the mainline economists have long recognized: human society is a complex, interwoven, recursive process. The world is, essentially, a sort of meta-computer with a complex web of programs interacting and evolving. We don’t need to assume any sort of deus ex machina (that’s a bit of an overstatement, but we haven’t got time to explore it this week); we just need replicating entities that can change over time.
Such a view, to my mind, provides an end run around rationality assumptions that can explain the brilliance of entrepreneurship (without making heroes out of the merely lucky) as well as the folly unearthed by behavioral economics (without the smugness). We’ve always known it. It’s all just evolution. But the methodology hasn’t made its way into the mainstream of economics. If there are any undergrads reading this on their way to a PhD program, let me know in the comments so I can point you in some interesting directions!