Nightcap

  1. Excellent analysis of Trump’s impeachment and acquittal Greg Weiner, Law & Liberty
  2. Chinese encounters with the rest of the world Henrietta Harrison, TLS
  3. Moctezuma’s empire has fallen, but so too has the Spanish Ben Ehrenreich, Guardian
  4. Boundary conditions for emergent complexity Nick Nielsen, Grand Strategy Annex

Computational Economics is the Right Perspective

Here’s a vastly oversimplified picture of mainstream economics: We pick some phenomenon, assume all the context into the background, then build a model that isolates only the variables specifically relevant to that phenomenon.

Once you’ve simplified the problem that way, you can usually build a formal mathematical model, make a few more (hopefully) reasonable assumptions, and make some strong ceteris paribus claims about your chosen phenomenon.

That’s a reasonable enough approach, but it doesn’t shed much light on big picture issues. I’m interested in root causes, and this “reduce things to their component parts” approach doesn’t give enough of a big picture to find those roots.

How do we broaden our perspective? One approach is to return to the more “literary” approach of the pre-Samuelson days. A bit of philosophy of science has me convinced that the primary flaw of such an approach is rhetorical. Written and mathematical arguments leave some assumptions in the background, but the latter is more convincing to a generation of economists trained to be distrustful of natural language (and too trusting of algebra).

As a pluralist, I think we should use as many approaches as we can. Different schools of thought allow you to build different imaginary worlds in your mind. But the computational approach isn’t getting enough play. I’d go so far as to say that agent-based modeling is the right form of mathematics for social science.

What does this mean? In a nutshell, it means modeling processes, simulating those processes, and seeing how interactions between different agents lead to different sorts of outcomes.

A common trope among Emergent Order folks is how ants are individually stupid but collectively brilliant. Neoclassical economics runs into the opposite problem: individually brilliant individuals who get trapped in Prisoners’ Dilemmas.

Computational economics starts with models that are more like ants than homo economicus. Agents are essentially bundles of heuristics/strategies in an out-of-equilibrium world. But these competing (and cooperating) strategies can interact in interesting ways. Each agent is a part of all the other agents’ environment, so the mix of strategies is a function of the success of the strategies which is a function of the mix of strategies in the environment.
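To make that dynamic concrete, here is a minimal sketch of the kind of out-of-equilibrium, strategy-mixing process described above: a toy hawk-dove imitation model. The payoff numbers, strategy names, and imitation rule are my own assumptions for illustration, not a model taken from the literature.

```python
import random

# Each agent is just a bundle of one strategy; payoffs depend on who it
# happens to meet, and successful strategies spread by imitation. The mix
# of strategies is thus a function of their success, which is itself a
# function of the mix -- the recursion described in the text.

PAYOFFS = {  # row player's payoff in a toy hawk-dove game (assumed values)
    ("hawk", "hawk"): -1,
    ("hawk", "dove"): 3,
    ("dove", "hawk"): 0,
    ("dove", "dove"): 2,
}

def step(population, rng):
    """One round: random pairwise encounters, then imitation of success."""
    rng.shuffle(population)
    scores = [0.0] * len(population)
    for i in range(0, len(population) - 1, 2):
        a, b = population[i], population[i + 1]
        scores[i] += PAYOFFS[(a, b)]
        scores[i + 1] += PAYOFFS[(b, a)]
    # Each agent compares itself with a random other and copies the
    # strategy of whichever scored higher -- success is purely relative
    # to the current environment of other strategies.
    new_pop = population[:]
    for i in range(len(population)):
        j = rng.randrange(len(population))
        if scores[j] > scores[i]:
            new_pop[i] = population[j]
    return new_pop

rng = random.Random(42)
pop = ["hawk"] * 50 + ["dove"] * 50
for _ in range(200):
    pop = step(pop, rng)
print(pop.count("hawk"), pop.count("dove"))  # final strategy mix after 200 rounds
```

No agent here is "rational" in the neoclassical sense; each is a dumb heuristic, yet the population-level mix is an emergent outcome of their interactions.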

In essence, computational economics starts from what the mainline economists have long recognized: human society is a complex, interwoven, recursive process. The world is, essentially, a sort of meta computer with a complex web of programs interacting and evolving. We don’t need to assume any sort of deus ex machina (that’s a bit of an overstatement, but we haven’t got time to explore it this week); we just need replicating entities that can change over time.

Such a view, to my mind, provides an end run around rationality assumptions that can explain the brilliance of entrepreneurship (without making heroes out of the merely lucky) as well as the folly unearthed by behavioral economics (without the smugness). We’ve always known it. It’s all just evolution. But the methodology hasn’t made its way into the mainstream of economics. If there are any undergrads reading this on their way to a PhD program, let me know in the comments so I can point you in some interesting directions!

Nightcap

  1. James Madison won the shutdown Greg Weiner, Law & Liberty
  2. A Marxist defense of Venezuela Louis Proyect, Unrepentant Marxist
  3. Emergent complexity in a multi-planetary ecology Nick Nielsen, The View from Oregon
  4. Christian martyrs, marriage, and the Middle East Christian Sahner, Aeon

No Country for Creative Destruction

Imagine a country whose inhabitants reject every unpleasant byproduct of innovation and competition.

This country would be Frédéric Bastiat’s worst nightmare: in order to avoid the slightest maladies expected to emerge from creative destruction, all its advantages would remain unseen forever.

Nevertheless, that inability to acknowledge the unintended favourable consequences of competition is not conditioned by any type of censorship, but by a sort of self-imposed moral blindness: the metaphysical belief that “being” is good and “becoming” is bad. A whole people inspired by W. B. Yeats, they want to be gathered into the artifice of eternity.

In this imaginary country, which would deserve a place in “The Universal History of Infamy” by J.L. Borges, people cultivate a curious strain of meritocracy, an Orwellian one: they praise stagnation for its stability and derogate growth because of the stubborn and incorruptible conviction that life in society is a zero-sum game.

Since growth is an unintended consequence of creative destruction, they reason, there must be no moral merit to be recognised in such dumb luck. Stagnation, on the other hand, is the unequivocal signal of good deeds toward the unlucky, who otherwise could suffer the obvious losses coming from every innovation.

In this fantastic country, Friedrich Nietzsche and his successors are well read: everybody knows that, in the Eternal Return, the whole chance is played at each throw of the dice. So, they conclude, “if John Rawls asked us to choose between growth or stagnation, we would shout at him: Stagnation!!!”

But the majority of the inhabitants of “Stagnantland” are not the only ones to blame for their devotion to quietness. The few and exceptional proponents of creative destruction who live in Stagnantland are mostly keen on the second term of the concept. That is why some love to say, from time to time, “we are all stagnationists” – the few contrarians are just Kalki’s devotees.

These imaginary people love to spend their vacations abroad, particularly in a legendary island named “Revolution”. Paradoxically, in Revolution Island the Revolutionary government found a way to avoid any kind of counter-revolutionary innovation. It is not necessary to mention that Revolution Island is, by far, Stagnantlanders’ favourite holiday destination.

They show their photos from their last vacation in Revolution Island and proudly stress: “Look: they left the buildings as they were back in 1950!!! Awesome!!!” If you dare to point out that the picture resembles a city at war, that the 1950 buildings lack any maintenance or refurbishment, they will not get irritated. They will simply smile at you and reply smugly: “but they are happy!”

Actually, for Stagnantlanders, as for many others, ignorance is bliss, but their governments do not need to resort to such rudimentary devices as censorship and spying to prevent people from being informed about the innovations and discoveries occurring in other countries, as Revolution Island’s rulers sadly do. Stagnantlanders simply reject any innovation as an article of faith!

Notwithstanding, they allow themselves some guilty pleasures: they love to use smartphones brought in by ant smuggling and to watch contemporary foreign films which, despite being realistic, show a dystopian future to them.

As everything has deteriorated, progress is always a going back to an ancient and glorious time. In Stagnantland, things are not created, but restored. As with Parmenides, they do not believe in movement, but if there has to be an arrow of time, you had better point it to the past.

Moreover, Stagnantland is an imaginary country because it lacks not only duration but territory as well. As a matter of fact, no man inhabits Stagnantland; it is rather stagnation that inhabits the hearts of Stagnantlanders. That is how, from dusk to dawn, any territory could be fully conquered by the said sympathy for stagnation.

Nevertheless, if we scrutinise the question with due diligence, we will discover that stagnation is not an ineluctable future, but our common past. Human beings appeared much earlier than civilisation. So, all those generations must have been doing something before agriculture, commerce, and institutions.

Before the concept of creative destruction was formulated by Joseph Schumpeter, a prior conception was needed about how people are conditioned by institutions: Bernard Mandeville pointed out how private vices might turn into public benefits, if politicians arranged the correct set of incentives. The main issue, thus, should be the process of discovery of such institutions.

That is why the said aversion to competition and innovation is hardly a problem of a misguided sense of justice, but mostly a matter of what we could coin as “bounded imagination”: the difficulty of reason in dealing with complex phenomena. Don’t you think so, Horatio?

Nightcap

  1. Hayek and liberal dictatorship Matthew McManus, Areo
  2. Rule of Law: the case of open texture of language and complexity Federico Sosa Valle, NOL
  3. How the Germans finally caught up with the West Wolfgang Streeck, London Review of Books
  4. Rebuilding Europe after World War II Barry Stocker, NOL

Rule of Law: the case of open texture of language and complexity

This article by Matt McManus (@MattPolProff) recently published at Quillette made me remember H.L.A. Hart’s theory of law and the problems derived from the open texture of language, a concept borrowed by him from Friedrich Waismann, an Austrian mathematician and philosopher of the Vienna Circle. Many authors would rather distinguish “open texture” from vagueness: the latter is a properly linguistic matter, while the former is related to the dynamics of experience. As Kyle Wallace summarized the problem: “certain expressions are open textured simply because there is always the possibility that in some new experience we may be uncertain whether or not the new expression is applicable.”

However, Brian Bix, in his “H.L.A. Hart and the ‘open texture’ of language,” argues that, despite the concept of “open texture” being a loan from Waismann’s philosophy, the use Hart gave to the term is not derogatory at all. From Hart’s point of view, the “open texture” of the law is rather an advantage, since it endows the judges with a discretionary power to adjust the text of the law to changing experience.

Concerning individual liberty, the laudatory qualification of the open texture of the law made by Hart and Bix might be shared by the jurists of the Common Law tradition, but it would hardly be accepted by anyone from the Civil Law system. According to the former, any discretionary power granted to the judges helps prevent the political power from menacing individual liberties, while, following the latter, the written word of the law, passed by a legislative assembly according to constitutional proceedings, is the main guarantee of individual rights.

But the subject of the open texture of the language of the law acquires a new dimension when it is related to the coordination problem derived from the limits to knowledge in society. As it was distinguished by F. A. Hayek in the last chapter of Sensory Order, we could talk about two types of limits to knowledge: the relative and the absolute. The relative limit to knowledge depends upon the sharpness of our instruments used to gather information, whereas the absolute limit to knowledge is sealed by the increasing degrees of abstraction that constitute every classification system. Since every new experience demands the rearrangement of the current system of classification we use to order our perception of reality, the description of this feedback process requires a supplementary system of classification of a higher level of complexity. The progress of the subject of knowledge into higher levels of abstraction reaches an unconquerable limit when he is tasked with the full study of himself.

Thus, we could ascertain that the judiciary function would be enough to solve the problems that could arise from the open texture of law, since the judge pronounces the content of the law not in general terms, but in concrete definitions in order to solve a case. In this labour, the judge not only applies the positive law, but might “discover” abstract principles that become relevant in light of the new experiences that begot the controversy over the content of the law he is called to solve. This function of “immanent critique” of the positive law by the judiciary system is well discussed by F. A. Hayek in the fifth chapter of his Law, Legislation and Liberty. Since the judiciary function solves in every concrete case the coordination problem derived from the fragmentation of knowledge in society, the open texture of the law does not make it opaque to the citizens.

That notwithstanding, the open texture of the law remains a systemic limit on the ability of legislative assemblies to define the whole content of the law. Thus, since the whole content of the law can only be achieved in a given concrete case by a judge solving a particular controversy, every central planner would have to accomplish his model of society not through decisions based on principles, but on expediency. Central planning and the rule of law will always be set to collide. In this sense, the concept of the open texture of the law might work as a powerful argument for the impossibility of any central plan being carried out, sooner or later, under the rule of law.

Nightcap

  1. Thinking about the Holodomor Flagg Taylor, Law & Liberty
  2. Neoliberalism is not dead Scott Sumner, EconLog
  3. Social generativity and complexity Daniel Little, Understanding Society
  4. We don’t need the UN to regulate baby formula Ryan McMaken, Mises Wire

RCH: Imperialism and the Panama Canal

Folks, my latest over at RealClearHistory is up. An excerpt:

The political ramifications for Washington essentially stealing a province from Colombia were huge. The United States had just seized a number of overseas territories from Spain in 1898, and the imperial project was frowned upon by numerous factions for various reasons. The U.S. foray into imperialism led to governance issues in the Caribbean, where Washington found itself supporting anti-democratic autocrats, and confronting outright ethical problems in the Philippines, where the United States Army was ruthlessly putting down a revolt against its rule. So acquiring a “canal zone” in a country that was baited into leaving another country was scandalous, especially since Colombia’s reluctance to cooperate with France and the U.S. was viewed as democratic (the Colombian Senate refused to ratify several canal-related treaties with France and the U.S.), and the two Western powers were supposedly the torchbearers of democracy. To make matters worse, many elites in Panama, after agreeing to secede in exchange for protection from Colombia, felt betrayed by the terms of the Panama Canal Zone, which granted the United States sole control over the zone in perpetuity.

Please, read the rest.

Red Lobsters and Black Swans

Back in 2007, Nassim Nicholas Taleb estimated that, in the following years, the rate of irruption of highly improbable events that change the way we perceive reality would be on the increase. Using his terminology, we would swiftly drift from Mediocristan out to Extremistan. People would have to deal with black swans more often and adapt to the new scenario.

The sudden spreading of Jordan Peterson’s lobsters might be a confirmation of Taleb’s surmise (in Extremistan, the term “surmise” has no derogatory connotation). “Stand up straight with your shoulders back” is a piece of advice aimed at people who feel overwhelmed by a state of affairs, both personal and public, whose complexity they can hardly grasp. In Taleb’s terms, Jordan Peterson wants to prepare you for a world in which the Black Swans are the underlying reality.

Our quantitative patterns about reality - both physical and social - contribute to preserving fixed relationships among the terms that build up our world and subjectivity - while every now and then the “untimely” bursts into our sense of reality. The Nietzschean “untimely” had always been there, out of the reach of our horizon of perception, but ready to appear suddenly and unexpectedly, like the plague in Thebes.

Nevertheless, perhaps there is no underlying chaotic reality, but a Hofstadter’s braid, where Apollo and Dionysus are intertwined: simple and complex phenomena, back to back, the beautiful and the sublime. On one side, the train of events represented by a correlative train of thoughts; on the reverse, a plane of unarticulated notions that are inherent to those representations.

In this sense, the matrix of Taleb’s Black Swans might not inhabit the undertow of our perceptions, but stand above them, in a plane of a higher degree of complexity. Each new event triggers our brain to readjust our system of classifications. But this readjustment, in turn, triggers a reconfiguration in the said plane of unarticulated notions that give support to our set of representations. In principle, an arrangement of such events would remain stable, but sometimes some unintended consequences could arise. That is the dynamic of events that Friedrich Hayek had once tried to convey with his concept of spontaneous or abstract order.

Peterson’s Red Lobsters try to make us reflect on the edge of our common patterns of conduct, whereas Taleb’s Black Swans incite us to perform the speculative activity of throwing hypotheses over the singularity of the abstract order, so as to anticipate any unintended consequences of our individual or collective behaviour. Notwithstanding the huge differences there might be between them, what deserves our main attention is the acknowledgement that the unplanned, the unexpected, and the uncertain are not alien forces, but the inherent articulation of the patterns of events that constitute the matters we have to deal with.

On why complexity from simple rules is counterintuitive

“… normally we start from whatever behavior we want to get, then try to design a system that will produce it. Yet to do this reliably, we have to restrict ourselves to systems whose behavior we can readily understand and predict–for unless we can foresee how a system will behave, we cannot be sure that the system will do what we want.

“But unlike engineering, nature operates under no such constraint. So there is nothing to stop systems like those at the end of the previous section from showing up. And in fact one of the important conclusions of this book is that such systems are actually very common in nature.

“But because the only situations in which we are routinely aware both of the underlying rules and overall behavior are ones in which we are building things or doing engineering, we never normally get any intuition about systems like the ones at the end of the previous section.”

Stephen Wolfram

The deeper you dig into math and computer science, the more Hayekian things look. The impossibility of economic calculation under socialism has important counterparts in Gödel and Turing/Church.
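Wolfram’s point is easy to reproduce. The sketch below runs his Rule 30, the standard elementary cellular automaton example (my choice of rule and grid size, not anything specific to this post): each cell’s next state depends only on itself and its two neighbours, yet the pattern that unfolds from a single live cell looks effectively random.

```python
# Elementary cellular automaton, Wolfram's Rule 30. The entire "program"
# is an 8-entry lookup table: the 3-bit neighbourhood (left, self, right)
# indexes into the bits of the rule number 30 = 0b00011110.

RULE = 30
WIDTH, STEPS = 63, 24

def next_row(row):
    # Wrap around at the edges; each cell reads its 3-cell neighbourhood.
    return [
        (RULE >> (row[(i - 1) % WIDTH] * 4 + row[i] * 2 + row[(i + 1) % WIDTH])) & 1
        for i in range(WIDTH)
    ]

row = [0] * WIDTH
row[WIDTH // 2] = 1          # start from a single live cell in the middle
for _ in range(STEPS):
    print("".join("#" if c else "." for c in row))
    row = next_row(row)
```

The rule fits on one line; the behavior does not. That asymmetry between the simplicity of the underlying rule and the complexity of the overall behavior is exactly what the quoted passage says engineering intuition never trains us to expect.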

What if we have already been ruled by an Intelligent Machine – and we are better off being so?

Common people and even reputed scientists, such as Stephen Hawking, have been worrying about the very menace of machines provided with Artificial Intelligence that could rule the whole human race to the detriment of our liberty and welfare. This fear has two inner components: the first, that Artificial Intelligence will outshine human intellectual capabilities; and the second, that the Intelligent Machines will be endowed with their own volition.

Obviously, it would be an evil volition or, at least, a very egotistic one. Or maybe the Intelligent Machines will not necessarily be evil or egotistic, but only as fearful of humans as humans are of machines – although more powerful. Moreover, with their morality depending on a multiplicity of reasonings we cannot grasp, we could not ascertain whether their superior intelligence (as we suppose the feared machines would be enabled with) is good or evil, or just more complex than ours.

Nevertheless, there is still a third assumption which accompanies all the warnings about the perils of thinking machines: that they are a physical shell inhabited by an Artificial Intelligence. Inspired by Gilbert Ryle’s critique of Cartesian Dualism, we can state that the belief in Intelligent Machines provided with an autonomous volition rests upon the said assumption of an intelligence independent from its physical body: a self-conscious being whose thoughts are fully independent from the sensory apparatus of its body and whose sensations are fully independent from the abstract classification by which its mind operates.

The word “machine” evokes a physical device. However, a machine might as well be an abstract one. Abstract machines are thought experiments composed of algorithms which deliver an output from an input of information which, in turn, could be used as an input for another circuit. These algorithms can emulate a decision-making process, providing a set of consequences for a given set of antecedents.
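As a toy illustration of that paragraph (the machine names and rules here are invented purely for the example), here are two abstract machines where the output of the first circuit serves as the input to the second:

```python
# Two abstract machines: each is just a rule mapping an input (plus
# internal state) to an output. Neither is tied to any physical device.

def make_counter():
    """Machine 1: emits the running count of 1-bits it has seen."""
    state = {"ones": 0}
    def step(bit):
        state["ones"] += bit
        return state["ones"]
    return step

def make_parity():
    """Machine 2: consumes the counter's output and reports even/odd."""
    def step(count):
        return "even" if count % 2 == 0 else "odd"
    return step

counter, parity = make_counter(), make_parity()
stream = [1, 0, 1, 1, 0]
# The output of one abstract machine becomes the input of the next circuit.
results = [parity(counter(bit)) for bit in stream]
print(results)  # ['odd', 'odd', 'even', 'odd', 'odd']
```

The decision process emulated here is trivial, but the structure - rules composed with rules, state carried forward, no volition anywhere - is the one the text goes on to compare with a legal system.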

In fact, all recent cybernetic innovations are the result of the merging of abstract machines with physical ones: machines that play chess, drive cars, recognize faces, and so on. Since they do not have an autonomous will, and the sensory data they produce are determined by their algorithms, whose output, in turn, depends on the limitations of their hardware, people are reluctant to call their capabilities “real intelligence.” Perhaps the reason for that reluctance is that people are expecting automata which fulfil the Cartesian Dualist paradigm of a thinking being.

But what if an automaton enabled with an intelligence superior to ours has already existed and is ruling at least part of our lives? We do not know of any being of that kind, if by a ruling intelligent machine we mean a self-conscious and will-driven one. But those who are acquainted with the notion of law as a spontaneous and abstract order will not find any major difficulty in grasping the analogy between the algorithms that form an abstract machine and the general and abstract laws that compound a legal system.

The first volume of Law, Legislation, and Liberty by Friedrich A. Hayek, subtitled “Rules and Order” (1973), remains to this day the most complete account of the law seen as an autonomous system, which adapts itself to the changes in its environment through a process of negative feedback that brings about marginal changes in its structure. Abstract and general notions of rights and duties are well known by the agents of the system, and that allows everyone to form expectations about the behaviour of each other. When a conflict between two agents arises, a judge establishes the correct content of the law to be applied to the given case.

Although our human intelligence - using its knowledge of the law - is capable of determining the right decision in each concrete controversy between two given agents, the system of the law as a whole achieves a higher degree of complexity than any human mind might reach. Whereas our knowledge of a given case depends on acquiring more and more concrete data, our knowledge of the law as a whole is related to more and more abstract degrees of classification. Thus, we cannot fully predict the complete chain of consequences of a singular decision upon the legal system as a whole. This last characteristic of the law does not mean its power of coercion is arbitrary. As individuals, we are endowed with enough information about the legal system to design our own plans and to form correct expectations about other people’s behaviour. Thus, legal constraints do not interfere with individual liberty.

On the other hand, the absolute boundary to the knowledge of the legal system as a whole works as a limitation on the political power over the law and, thence, over individuals. But, after all, that is what the concept of the rule of law is about: we are much better off being ruled by an abstract and impersonal entity, more complex than the human mind, than by the self-conscious - but discretionary - rule of man. Perhaps law is not at all an automaton which rules our lives, but we can ascertain that law - as a spontaneous order - prevents other men from doing so.

Freedom of Conscience and the Rule of Law

Of course the concept of “freedom of conscience” was forged in Europe by Spinoza, Locke, Voltaire, John Stuart Mill, and many other philosophers. But the freedom of conscience as an individual right that belongs to the set of characteristics which defines the rule of law is an American innovation, which later spread to Latin America and to the Old Continent.

This reflection comes from the dispute which has arisen in Notes On Liberty about the Protestant Reformation and freedom of conscience. Now, my intention is not to mediate between Mark and Bruno, but to bring to the Consortium a new line of debate. What I would like to polemicize is what defines which rights are to be protected by the rule of law. In this sense, might we regard a political regime that bans freedom of conscience as based on the rule of law? I am sure that no one would dare to do so. But, instead, would anyone dare to state that the unification of language in a given country hurts the rule of law? I am afraid that almost nobody would.

Nevertheless, this is a polemical question. For example, the current Catalan independence movement has the Catalan language as one of its main claims, so tracing the genealogy of the rights that constitute the concept of the rule of law is a meaningful task —and this is why the controversy over the Protestant Reformation and the origin of Freedom of Conscience at NOL is so interesting.

Before the Protestant Reformation, the theological, philosophical, scientific, and political language of Europe was unified in Latin. On the other hand, the languages used by the common people were utterly fragmented. A multiplicity of dialects were spoken all over Europe. The Catholic Kings of Spain, for example, unified their kingdom under the same religion, but they did not touch the local dialects. A very similar situation might be found in the rest of Europe: kingdoms with one religion and several dialects.

There was a strong reason for this to be so. Bibles in the vernacular had existed before, but the literacy rate was so low that the speed of evolution and fragmentation of the dialects left those translations obsolete and incomprehensible. Since producing books was extremely costly (this was before the invention of the printing press), the best language in which to write and copy books and constitutional documents was Latin.

The Evangelical movement, which emerged out of the Protestant Reformation, meant that the final authority in religion was no longer the Papacy but the biblical text. What changed was the coordination problem. Formerly, the reference was the local bishop, who was linked to the Bishop of Rome. (Although with the Counter-Reformation, in some cases, like Spain, the bishops were appointed by the king, a privilege obtained in exchange for remaining loyal to the Pope.) On the other hand, in the Reformation countries, the text of the Bible as final authority on theological matters demanded the full command of an ability not so widespread until that moment: literacy.

It is well known that the Protestant Reformation and the invention of printing expanded the translations of the Bible into the vernacular. But it always goes completely unnoticed that by that time the concept of a national language hardly existed. In the Reformation countries, the consolidation of a national language was determined by the particular vernacular chosen for the translation of the Bible.

Evidently, the extension of a common language among the subjects of a given kingdom brought great benefits to its governance, since the tendency was followed by the monarchies of France and Spain. The former extended Parisian French over the local patois and, in 18th-century Spain, the Bourbon Reforms imposed Castilian as the national Spanish language. The absolute kings, each of whom had inherited a territory unified by a single religion, sowed the seeds of national states aggregated by a common language. Moreover, Catholicism became more dependent on the absolute kings than on Rome —and that is why Bruno finds some Catholics arguing for the separation of Church and state.

Meanwhile, in the New World, the Thirteen Colonies were receiving European immigrants motivated mostly by the lack of religious tolerance in their respective countries of origin. The immigrants arrived carrying with them all kinds of variants of Christian confessions and developed new and unexpected ones. All those religions and sects had a common reference: the King James Bible.

My thesis is that it was the substitution of language for religion as the factor of cohesion and mechanism of social control that made possible the development of the freedom of conscience. The political power gave up policing what was inside the minds of its subjects in favour of a more economical device: language. Think what you wish, believe what you wish, read what you wish, write what you wish, say what you wish, as long as I understand what you do and you can understand what I mean.

Moreover, an official language became a tool of accountability and a means of knowing the rights and duties of an individual before the state. The Magna Carta (1215) was written in Medieval Latin while the Virginia Declaration of Rights (1776), in English. Both documents were written in the language that was regarded as proper in their respective time. Nevertheless, the language which is more convenient to the individual for the defense of his liberties is quite obvious.

Often, the disputes over the genealogy of rights and institutions go around two poles: ideas and matter. I think it is high time to go along the common edge of both of them: the unintended consequences, the “rural nomos,” the complex phenomena. In this sense, but only in this sense, tracing the genealogy – or, better, the “nomology” – of the freedom of conscience as an intended trait of the concept of “rule of law” is worth our efforts.

Summing up: the year of irrationality

Brandon says I’ve got one last chance to write his favorite post of the year. But it’s the end of a long semester and I’m brain dead, so I’m just going to free ride on his idea: a year end review. If I were to sum up the theme of this year in a word, that word would be irrational.

After 21 months of god-awful presidential campaigning, we were finally left with a classic Kodos vs Kang election. The Democrats were certain that they could put forward any turd sandwich and beat Trump, but they ultimately lost out to populist outrage. Similar themes played out with Brexit, but I don’t know enough to comment.

Irrationality explains the Democrats, the Republicans, and the country as a whole. The world is complex, but big decisions have been made by simple people.

We aren’t equipped to manage the world’s complexity.

We aren’t made to have direct access to The Truth; we’re built to survive, so we get a filtered version of the truth that has tended to keep our ancestors out of trouble long enough to get laid. In other words, what seems sensible to each of us, may or may not be the truth. What we see with our own eyes may not be worth believing. We need more than simple observation to actually ferret out The Truth.

Our imperfect perceptions build on imperfect reasoning faculties to make imperfect folk economics. But what sounds sensible often overlooks important moving parts.

As H.L. Mencken put it, “For every complex problem there is an answer that is clear, simple, and wrong.”

Only a small minority of the population will ever have a strong grasp on any particularly complex thing. As surely as my mechanic will never become an expert in economics, I will never be able to do any real work on my car. The trouble arises when we expect me or my mechanic to try to run the country. The same logic applies to politicians, whose job (contrary to what your civics teacher thinks) is to get re-elected, not to be a master applied social scientist. (And as awful as democracy is, the alternative is just some other form of political competition… there is no philosopher king.)

But, of course, our imperfect perception and reasoning have gotten us this far. They’ve pulled us out of caves and onto the 100th floor of a skyscraper*. Because in many cases we get good enough feedback to learn a lot about how to accomplish things in our mysterious universe.

We’re limited in what we can do, but sometimes it’s worth trying something. The trouble is, I can do things that benefit me at your expense. And this is especially true in politics (also pollution–what they have in common is hot air!). But it’s not just the politicians who create externalities, it’s the electorate. The costs of my voting to outlaw gravity (the simplest way for me to lose a few pounds) are nil. But when too many of us share the same hare-brained idea, we can do some real harm. And many people share bad ideas that have real consequences.

Voting isn’t the only way to be politically engaged, and we face a similar problem in political discourse in general. A lot of Democrats are being sore losers about this election rather than learning and adapting. Trump promised he would have done the same had he lost. We’re basically doomed to have low-quality political discourse. It’s easy and feels (relatively) good to bemoan that the whole world is going to hell.

We’re facing rational irrationality. Everyone is simply counting on someone else to get their shit together, because each of us individually is more comfortable with our heads firmly up our asses.

It’s a classic tragedy of the commons and it should prompt us to find some way to minimize the harm of our lousy politics. We’ve been getting better at this over the centuries. Democracy means the levers of power can change hands peacefully. Liberalization has entailed extending civil and economic rights to a wider range of people. We need to continue in this vein. More freedom has allowed more peace and prosperity.
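The rational-irrationality logic above can be sketched as a toy calculation (every number here is an illustrative assumption, not an estimate): getting informed has a real private cost, while the payoff from a better collective decision is both diluted across the whole electorate and discounted by the tiny chance that any one vote is pivotal.

```python
# Toy model of "rational irrationality" (all numbers are illustrative).
N = 1_000                # electorate size
INFO_COST = 5.0          # private cost of getting informed before voting
SOCIAL_GAIN = 100_000.0  # total gain to society from the better policy

stake = SOCIAL_GAIN / N               # any one voter's share of that gain
p_pivotal = 1.0 / N                   # rough odds a single vote decides it
expected_benefit = p_pivotal * stake  # what being informed is worth to *you*

print(f"individually worth it? {expected_benefit > INFO_COST}")  # False
print(f"socially worth it?     {SOCIAL_GAIN > N * INFO_COST}")   # True
```

With these numbers, each voter rationally stays ignorant even though an informed electorate would come out far ahead, which is exactly the tragedy-of-the-commons structure described above.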


So what do we do? I’d argue that we should focus on general rules rather than trying to have flawed voters pick flawed politicians and hope for the best. I don’t mean “make all X following specifications a, b, and c.” I mean, if you’re mad, try to sue someone. We don’t need dense and exploitable regulations. We don’t need new commissions. We just need a way for people to deal with problems as they arise. Mind you, our court system (like the rest of our government) isn’t quite ready for a more sensible world. But we can’t be afraid to be a little Utopian when we’re planning for the long run. Now, back to my main point…

We live in an irrational world. And it makes sense that it’s that way; rationality is hard. We can see irrationality all around us, but we see it most where it’s cheapest: politics and Facebook. The trouble is, sometimes little harmless irrational acts add up to cause real harm. Let’s admit we’ve got a problem with irrationality in politics so we can get better.


*Although that’s only literally true in 17 cases.

The Homo Economicus is “The Body” of the Agent

The model of the decision-making agent known as homo economicus is a trivial truth, but not a misconception. All agents are supposed to maximize the utility of their resources – that is true in every geography and in every age. But precisely because it is a tautology, it is a mistake to treat the deductions drawn from this model alone as descriptions of a particular reality. As Wittgenstein pointed out in his Tractatus, tautologies convey no relevant information about any particular world, only about every possible world. The error consists in qualifying a note common to every possible situation as a distinctive characteristic of a particular set of events. To say that every agent acts to maximize his utility is true, but to claim that this observation tells us something about a particular world that distinguishes it from every other possible world is the most widespread misconception about the use of the rational agent model.
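The tautology point can be made concrete with a toy script (the `rationalize` helper below is hypothetical, invented purely for this illustration): given a record of choices like the acyclic one here, we can always construct a utility function after the fact that “explains” them, which is exactly why “agents maximize utility” by itself rules nothing out about any particular world.

```python
def rationalize(observed):
    """Build a utility table that makes every observed pick a maximizer.

    `observed` is a list of (menu, pick) pairs. For acyclic choice data
    like the example below this always succeeds, illustrating that
    utility maximization fits the record rather than constraining it.
    """
    utility = {}
    for menu, pick in observed:
        for item in menu:
            utility.setdefault(item, 0)
        # Bump the chosen item above everything else in its menu.
        utility[pick] = max(utility[item] for item in menu) + 1
    return utility

observed = [(("apple", "banana"), "apple"),
            (("banana", "cherry"), "cherry")]
u = rationalize(observed)
# Every observed pick now maximizes the constructed utility over its menu.
print(all(u[pick] == max(u[i] for i in menu) for menu, pick in observed))  # True
```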

In another post, I mentioned the importance of having a body in order to develop a personal identity. In line with Hayek’s Sensory Order, the body equips us with the most elementary system of classification, which makes our perceptions possible or – to express it in a more radical empiricist strain – brings experience about. Upon this system of classification, more abstract layers, or degrees, of classification will sediment. Our knowledge is expressed at a level of abstraction that our mind can handle, whereas the law, the market, and language are examples of phenomena that may achieve increasing degrees of complexity, marking blurred boundaries to our knowledge. The former are named “simple phenomena” and the latter “complex phenomena” in the Hayekian terminology. Our personal identity continuously develops on that stratum set between the simple and the complex phenomena.

In this sense, we need the conclusions brought about by the rational agent model as the stem upon which further strata of increasingly complex analysis are laid. Paradoxically, the more particular the social reality we seek to describe, the more abstract our layer of analysis has to be. Nevertheless, it would be impossible for us to conceive any image of social experience if we lacked the fixed point of the rational agent model.

Max Weber’s ideal types could be interpreted as instances of social arrangements based on the rational agent model, which incorporate particular – and abstract – characteristics depending on the historical circumstances under scrutiny. At the bottom of both adventurous capitalism and traditional society, beneath successive strata of different social and institutional designs, we will find an agent who maximizes utility. Perhaps that is why the term “rational capitalism” is so controversial: if rationality concerns subjective reason, then “rational capitalism” involves a circular definition.

Along these lines, I hope this quick reflection might shed some light upon the old discussion about instrumental rationality and substantive reason. Since instrumental rationality is common to every possible world, we might look for the substantive reason that gives order to this actual world in the increasing layers of complexity, which reach degrees of abstraction superior to subjective reason – although that does not mean we will ever be able to find it.

A metaphor for the Socialist Calculation Debate

This week’s episode of EconTalk was fantastic, and in particular drew an important parallel between the complexity of the human brain and the complexity of market economies. The guest was discussing radical nanotechnology (basically the idea that engineers could outdo bacteria by applying good design principles in place of random mutation and natural selection), and Russ pointed out that the logic is basically the same as in socialism. Radical nanotechnology runs into a fundamental problem as long as it ignores the emergent processes occurring at the molecular/cellular level.

Later, the guest discusses the issue of artificial intelligence and points out that the fundamental unit of biological computing is not the neuron (which we simulate on computers using neural networks), but the molecule. In other words, natural intelligence is the outcome of a complex process that isn’t simple enough for us to easily replicate on a computer.
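For contrast, here is roughly what the artificial counterpart of a neuron amounts to: a weighted sum pushed through a squashing function (the input and weight values below are arbitrary, chosen only for illustration). That one line of arithmetic stands in for a biological cell that is itself the product of molecular-scale processes.

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum, then a sigmoid squash.

    This small function is the entire unit that neural networks stack up,
    standing in for a biological cell built from interacting molecules.
    """
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

print(round(neuron([1.0, 0.5], [0.8, -0.2], 0.1), 3))  # 0.69
```

The gap between this arithmetic and an actual neuron is the same kind of gap the post draws between a central planner’s model and the market it tries to replace.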

All that in mind, the idea of socialism* is like the idea that we could replace a brain with a pocket calculator. Yes, the idea is to get a very powerful calculator, but the problem is that it’s replacing a computer that’s far more complex and sophisticated.


* i.e. centralized control of the means of production… socialism has nothing to do with sharing (you’re thinking “egalitarianism”) and everything to do with control, and particularly the attempt to rationalize complex systems.