Someone (I don’t remember who) said that International Relations is the academic discipline of disagreement. Internationalists disagree about almost everything, beginning with how to view their object of study. That said, the discipline of International Relations has historically been dominated by two theoretical schools, Realism and Liberalism. Other, smaller schools, such as Constructivism and the English School, also have significant influence. With that in mind, I believe it might be useful to post something here about the theory of International Relations.
Although the chronology is highly disputed, it can be argued that Realism is the first theory of International Relations, going back to Thucydides in Ancient Greece or to Machiavelli in late medieval/early modern Europe. In any case, Realism is arguably the most influential theory of International Relations, partly because of its influence on actual statecraft (as opposed to academic thinking). Realists come in many shapes and colors, but I believe that most of them share some core characteristics:
The first thing that most (or in this case, all) Realists believe is that the international system is anarchic. Actually, this is something that virtually every student of International Relations believes, because… it is! When we say that the international system is anarchic, we are not saying that it is a mess or a state of permanent war. In international relations, the definition of anarchy is simpler: it means that there is no formal hierarchy of power between countries. Of course, countries have a clear hierarchy of power, with some being much more powerful than others. However, all countries are formally sovereign and independent. Each country recognizes itself as its own ultimate authority.
A second thing that Realists believe is that countries (or, in more technical vocabulary, states) are the main actors of international relations. Although we can speak of international corporations and international institutions, in the end, the actors that really matter are countries, especially great powers. That is so mainly because they have military capabilities. Coca-Cola may have lots of money, but not an army.
Finally, Realists believe that countries have a relationship of competition. They tend to see each other as potential enemies. Maybe not actual enemies, but certainly potential ones. Because of that, countries have to defend themselves against one another.
There are many more characteristics we could add to this list, but I believe these are the essential points of realist thinking in International Relations. Realists call themselves realists because they believe they see reality as it is, not in an idealized manner. I tend to agree. I believe that history shows that, unfortunately, international relations work in a realist way. And this is something that, I believe, is key for at least many realists, and that is too often misunderstood: realists are not saying that international relations should be this way. They are saying that, sadly, they are this way. If you analyze international relations objectively, you will find that countries (even the ones you like) and politicians (even the ones you believe are so nice) act in very selfish ways.
Realists are accused of leaving little or no room for change. Is this a fair assessment? I wish it were not! But most other schools of International Relations fail to present plausible ways in which the international system could be improved, leading to more peace and prosperity for all.
Imagine a country whose inhabitants reject every unpleasant byproduct of innovation and competition.
This country would be Frédéric Bastiat’s worst nightmare: in order to avoid the slightest maladies expected to emerge from creative destruction, all of its advantages would remain unseen forever.
Nevertheless, this inability to acknowledge the unintended favourable consequences of competition is not conditioned by any kind of censorship, but by a sort of self-imposed moral blindness: the metaphysical belief that “being” is good and “becoming” is bad. A whole people inspired by W. B. Yeats, they want to be gathered into the artifice of eternity.
In this imaginary country, which would deserve a place in “A Universal History of Infamy” by J. L. Borges, people cultivate a curious strain of meritocracy, an Orwellian one: they praise stagnation for its stability and disparage growth out of the stubborn and incorruptible conviction that life in society is a zero-sum game.
Since growth is an unintended consequence of creative destruction, they further reason, there must be no moral merit to be recognised in such dumb luck. Stagnation, on the other hand, is the unequivocal sign of good deeds done for the unlucky, who would otherwise suffer the obvious losses brought by every innovation.
In this fantastic country, Friedrich Nietzsche and his successors are well read: everybody knows that, in the Eternal Return, the whole of chance is played at each throw of the dice. So, they conclude, “if John Rawls asked us to choose between growth and stagnation, we would shout at him: Stagnation!!!”
But the majority of the inhabitants of “Stagnantland” are not the only ones to blame for their devotion to quietness. The few and exceptional proponents of creative destruction who live in Stagnantland are mostly keen on the second term of the concept. That is why some love to say, from time to time, “we are all stagnationists” – the few contrarians are just Kalki’s devotees.
These imaginary people love to spend their vacations abroad, particularly on a legendary island named “Revolution.” Paradoxically, on Revolution Island the Revolutionary government has found a way to prevent any kind of counter-revolutionary innovation. Needless to say, Revolution Island is, by far, Stagnantlanders’ favourite holiday destination.
They show their photos from their last vacation on Revolution Island and proudly stress: “Look: they left the buildings as they were back in 1950!!! Awesome!!!” If you dare to point out that the picture resembles a city at war, that the 1950 buildings lack any maintenance or refurbishment, they will not get irritated. They will simply smile at you and reply smugly: “but they are happy!”
Actually, for Stagnantlanders, as for many others, ignorance is bliss; but their governments do not need to resort to such rudimentary devices as censorship and spying to keep people uninformed about the innovations and discoveries occurring in other countries, as Revolution Island’s rulers sadly do. Stagnantlanders simply reject any innovation as an article of faith!
Notwithstanding, they allow themselves some guilty pleasures: they love to use smartphones brought in by ant smuggling, and to watch contemporary foreign films which, despite being realistic, look to them like a dystopian future.
As everything is deteriorating, progress is always a going-back to an ancient and glorious time. In Stagnantland, things are not created but restored. Like Parmenides, they do not believe in movement; but if there has to be an arrow of time, you had better point it to the past.
Moreover, Stagnantland is an imaginary country because it lacks not only duration but territory as well. As a matter of fact, no one inhabits Stagnantland; it is rather stagnation that inhabits the hearts of Stagnantlanders. That is how, from dusk to dawn, any territory could be fully conquered by the said sympathy for stagnation.
Nevertheless, if we scrutinise the question with due diligence, we will discover that stagnation is not an ineluctable future, but our common past. Human beings appeared long before civilisation. So, all those generations must have been doing something before agriculture, commerce, and institutions.
Before the concept of creative destruction was formulated by Joseph Schumpeter, a prior conception was needed of how people are conditioned by institutions: Bernard Mandeville pointed out how private vices might turn into public benefits, if politicians arranged the correct set of incentives. The main issue, thus, should be the process of discovery of such institutions.
That is why the said aversion to competition and innovation is hardly a problem of a misguided sense of justice, but mostly a matter of what we could coin “bounded imagination”: the difficulty of reason in dealing with complex phenomena. Don’t you think so, Horatio?
In the last few days, the economics blogosphere (and twitterverse) has been discussing this paper in the Journal of Economic Psychology. Simply put, the article argues that economists discount “bad journals,” so that a researcher with ten articles in low- and mid-ranked journals will be valued less than a researcher with two or three articles in highly ranked journals.
Some economists (see notably Jared Rubin here) made insightful comments about this article. However, there is one comment by Trevon Logan that gives me a chance to make a point I have been mulling over for some time. As I do not want to paraphrase Trevon, here is the part of his comment that interests me:
“many of us (note: I assume he refers to economists) simply do not read and therefore outsource our scholarly opinions of others to editors and referees who are an extraordinarily homogeneous and biased bunch”
There are two interrelated components to this comment. The first is that economists tend not to read each other’s work in detail. The second is that economists tend to delegate this task to gatekeepers of knowledge, in this case the editors of top journals. Why do economists act this way? More precisely, what are the incentives to act this way? After all, as Adam Smith once remarked, the professors at Edinburgh and Oxford were of equal skill, but the former produced the best seminars in Europe because their incomes depended on registrations and tuition, while the latter relied on long-established endowments. Same skills, different incentives, different outcomes.
My answer is this: the competition that existed in the field of economics in the 1960s-1980s has disappeared. In those days, the top universities such as Princeton, Harvard, MIT and Yale were a more or less homogeneous group in terms of their core economics. Let’s call them the “incumbents.” They faced strong contests from UCLA, Chicago, Virginia and Minnesota. These challengers attacked the core principles of what was seen as the orthodoxy in antitrust (see the works of Harold Demsetz, Armen Alchian, Henry Manne), macroeconomics (the Lucas critique, the islands model, New Classical economics), political economy (see the works of James Buchanan, Gordon Tullock, Elinor Ostrom, Albert Breton, Charles Plott) and microeconomics (Ronald Coase). These challenges forced the discipline to incorporate many of their insights into the literature. The best example would be the New Keynesian synthesis formulated by Mankiw in response to the works of people like Ed Prescott and Robert Lucas. In those days, “top” economists had to respond to articles published in “lower-ranked” journals such as Economic Inquiry, the Journal of Law and Economics and Public Choice (all of which rose because they were bringing competition – consider that Ronald Coase published most of his great pieces in the JL&E).
In that game, economists were checking one another and imposing discipline upon each other. More importantly, to paraphrase Gordon Tullock in his Organization of Inquiry, their curiosity was subjected to social guidance generated from within the community:
“He (the economist) is normally interested in the approval of his peers and hence will usually consciously shape his research into a project which will pique other scientists’ curiosity as well as his own.”
Is there such a game today? If in 1980 one could easily answer “Chicago” to the question “which economics department challenges Harvard in terms of research questions and answers,” things are not so clear today. As research needs to happen within a network where the marginal benefits may increase with size (up to a point), where are the competing networks in economics?
And here is my point: absent this competition (well, I should not say absent; it is more precise to speak of weaker competition), there is no incentive to read, to mine other fields for insights, or to accept challenges. It is far more reasonable, in such a case, to divest oneself of this burden and delegate the task to editors. This only reinforces the problem, as the gatekeepers get to limit the chances that a viable competing network will emerge.
So, when Trevon bemoans (rightfully) the situation, I answer that maybe it is time we considered whether we act this way because the incentives have numbed our critical minds.
I am generally skeptical of “accepted wisdom” on many policy debates. People involved in policy-making are generally politicians who carefully craft justifications (i.e. cover stories) where self-interest and common good cannot be disentangled easily. These justifications can easily become “accepted wisdom” even if incorrect. I am not saying that “accepted wisdom” is without value or that it is always wrong, but more often than not it is accepted at face value without question.
My favorite example is “antitrust.” In the United States, the Sherman Act (the antitrust bill) was first introduced in 1889 (passed in 1890). The justification often given is that it was meant to promote competition as proposed by economists. However, as often pointed out, the bill was passed well before the topic of competition in economics had been unified into a theoretical body. It was also rooted in protectionist motives. Moreover, the bill was passed after the industries most affected saw prices fall faster than the overall price level and output grow faster than overall output (see here, here, here, here and here). Combined, these elements should give pause to anyone willing to cite the “accepted wisdom.”
More recently, economist Patrick Newman provided further reason for caution in an article in Public Choice. Interweaving political history and biographical details about Senator John Sherman (he of the Sherman Act), Newman tells a fascinating story about the self-interested reasons behind the introduction of the act.
In 1888, John Sherman failed to obtain the Republican presidential nomination – a failure he blamed on the governor of Michigan, Russell Alger. Out of malice and a desire for vengeance, Sherman defended his proposal by citing Alger as the ringmaster of one of the “trusts.” Alger, himself a presidential hopeful for the 1892 cycle, was politically crippled by the attack (even though it appears to have been untrue). Obviously, this was not the sole reason for the Act (Newman highlights the nature of the Republican coalition, which would have demanded such an act). However, once Alger was fatally wounded, Sherman appears to have lost interest in the Act and left others to push it through.
As such, the passage of the bill was partly motivated by political self-interest (thus illustrating the key point of behavioral symmetry that underlies public choice theory). Entangled in the “accepted wisdom” is a wicked tale of revenge between politicians. At such a sight, it is hard not to be cautious with regard to “accepted wisdom.”
The implications of the chosen terms (“existential crisis,” “decisive leadership,” “political flaw”) are not casual. It looks as if the crypto-currency had carried from the beginning the germ of its own destruction. As in an Escher drawing, Bitcoin has unraveled its political strand, and its whole existence now depends upon a moment of decision by the sovereign: the assembly of miners. The decisionist narrative would be fulfilled if the political decision had to be taken by acclamatio instead of by voting.
Nevertheless, a decision by acclamation would still be possible: those who want “Bitcoin Core” might follow one direction, and the others, who choose “Bitcoin Unlimited,” might follow their own way. After all, no existential crisis can be solved by voting.
So, which is inside of which? Is the market framed in a system depending upon a political decision of the sovereign? Or does every decision need to be taken inside a spontaneous framework of rules?
We are used to praising Bitcoin for its independence from any political factor: Bitcoin’s supply depends on a set of rules that allows the public to form expectations about its value with a high probability of proving correct.
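The rule-bound supply mentioned above can be made concrete with a small sketch. The parameters are the well-known ones from the Bitcoin protocol (an initial block subsidy of 50 BTC, halved every 210,000 blocks, with amounts tracked in indivisible satoshis); the function names are my own, chosen for illustration.

```python
SATOSHIS_PER_BTC = 100_000_000   # smallest unit of account
HALVING_INTERVAL = 210_000       # blocks between subsidy halvings
INITIAL_SUBSIDY = 50 * SATOSHIS_PER_BTC

def total_supply_btc() -> float:
    """Sum the subsidies of all blocks that will ever be mined.

    Each era of 210,000 blocks pays a fixed subsidy; integer
    division by 2 eventually drives the subsidy to zero, which
    is what caps the total supply just under 21 million BTC.
    """
    subsidy = INITIAL_SUBSIDY
    supply = 0
    while subsidy > 0:
        supply += subsidy * HALVING_INTERVAL
        subsidy //= 2  # protocol halves the reward each era
    return supply / SATOSHIS_PER_BTC

print(total_supply_btc())  # just below 21,000,000
```

Because every term of this sum is fixed in advance by the rules, anyone can compute the supply at any future block height, which is precisely what allows the public to form reliable expectations about scarcity.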
Taken in isolation, Bitcoin emulates the market. Nevertheless, being independent of political institutions is not enough for being “the market.” The attractiveness of Bitcoin is that it operates in an open system of currency competition. In this system, there are many other crypto-currencies, and there might be several variants of Bitcoin as well – in esse or in posse.
Imagine, for example, that Bitcoin effectively splits into Bitcoin Core and Bitcoin Unlimited. Which of the two will prevail over the other? It does not matter. What really matters is that there will be several variants of currency in competition. The factors that determine the selection of the prevailing currency belong to a higher level of abstraction that imposes an absolute limit on our knowledge.
So, is Bitcoin in an existential crisis? Does a political decision need to be made? Maybe.
But that does not imply that “The Political” will take over the reins of the crypto-currency market. On the contrary, opposing political decisions are the linkages of which the spontaneous selection process (in this case, of currencies) is made. In this sense, “Bitcoin Core” and “Bitcoin Unlimited” are attributes of a competitive system, and the final prevalence of one variant among the alternatives will be the result not of a deliberate decision but of an abstract process of evolution.