There is a thin line between the abstract model of "natural selection of institutions," its instantiation in an imaginary example that interprets it, and the application of that theory to interpret historical experience. The latter does not test the model; rather, it is the result of organizing the record of events around this interpretive model. The instantiation in an imaginary example is a visualization that allows us to identify the inconsistencies in the model (if there are any) and to test general predictions about the behaviour of the variables. Such interpretations of the model assume that the rest of the variables remain unchanged, that is, the ceteris paribus condition.
If the abstract model has no inconsistencies, i.e. if contradictory events do not arise in its imaginary interpretation, and its explanatory or predictive power is nevertheless contradicted by experience, this does not imply a refutation. On the contrary, it is an indicator that another set of events is acting to neutralize the effects of the process described by the theory. In this case, although the theory does not achieve results in terms of explanation and prediction, it does fulfil a heuristic function: it inspires new lines of research and discovery.
One such line is, for example, how politics plays out in the process of natural selection of social habits and practices. As the Public Choice School has indicated, regulations on economic activity that affect the distribution of corporate profits, assign monopolies, restrict imports, or intervene in the credit and capital markets to favour certain activities over others, among many other cases of economic dirigisme, encourage the development of the practices known as "lobbying." Investing in human capital and new technologies carries an opportunity cost that will never be assumed if higher yields can be obtained by influencing government decisions that protect the producer from competition or allow it to sell at a price above the market price. Therefore, if experience indicates a low capacity for innovation, lack of initiative, and stagnation, it is most appropriate to focus the observation on which incentives are effectively at work in that country.
The counterpart of logical models is empirical models. The latter consist of abstractions of elements that occur in reality, highlighting their common notes to obtain various classifications of such elements; they are a simplified scheme of perceived reality. However, any system of abstraction of the common notes of a set of objects requires a prior conceptualization of such notes as defining a set or class. In order to classify the diverse populations of countries, for example, it is first necessary to be in possession of the notion of population.
On the other hand, abstract notions are not necessarily formed by a deliberate operation of consciousness, but by the perception of series of events that repeat and differentiate themselves from one another, generating in the cognitive apparatus an association of diverse stimuli. Out of habit arises the expectation that the appearance of a particular event or series of events will be followed by one determined range of events and not by another range of events of various kinds. It is on these spontaneous classifications, articulated around the repetition of events, their differential in the stimulus system of the nervous apparatus, and the predisposition generated by the habit of expecting certain consequent events and ruling out others, that the consciousness and the cognitive apparatus of the knowing subject are formed.
But, likewise, those "spontaneous classifications" allow the appearance of an abstract set of functionally related notions whose ordering does not depend on a deliberate decision. Such are the cases of norms of empirical observance and of what Douglass North called "informal institutions." The value of Friedrich Hayek's contribution in Law, Legislation and Liberty consists in showing that both positive legal norms (deliberately created by the legislator) and the informal institutions that condition our conduct depend for their enunciation on that abstract order of notions which arises from pure experience.
These logical models, abstract as they are, that make up the consciousness and the cognitive apparatus of the subjects are in permanent trial-and-error testing and, therefore, in continuous reformulation. It is a kind of negative feedback process in which the frustration of an expectation corrects the interpretative scheme of reality that the individual holds, in a process of continuous readjustment. From the invariant reiteration of a certain series of events, a structure is formed that serves as a parameter for ordering other events of lower frequency or more erratic behaviour.
To the extent that the subject continues this experimentation, the spontaneous classification system that makes up its consciousness becomes more complex, incorporating new ranges of events, adjusting their frequencies, and incorporating new structures. These are the relative limits of knowledge: they depend on experimentation and on the readjustment of the abstract patterns that allow the subject to classify the events of reality.
However, knowledge can also grow in another direction: consciousness can focus not on the events that come from its perceptions but on the analysis of the classifications themselves. In this activity, the abstract classification schemes that had been shaped by habit are not applied to reality; instead, consciousness reflects on these classifications and extends and reformulates them, not in terms of experience, but by virtue of abstract speculation. This is the task of deliberately shaping the logical models to be applied to the interpretation of reality.
The elaboration of a legal theory (about representation, for example), the description of a market structure (monopolistic competition, for example), or the outline of a sociological explanation (through the statement of ideal types, to cite one case) are situations in which the subject of knowledge does not experiment on events but reformulates the classificatory systems that until then had arisen spontaneously. Knowledge in this case does not grow in specificity, but increases in levels of abstraction.
These are the cases in which the historian questions not only the interpretative frameworks he uses, but also the conditions that underlie those frameworks. The philosophy of science has explored them as scientific paradigms (Thomas Kuhn), research programs (Imre Lakatos), or grand narratives (Jean-François Lyotard). The common denominator of these three concepts is that they lack an "author": they are inferences, veritable conjectures that we make about the tacit framework within which a given scientific community develops.
Many interpret these currents of the philosophy of science, diverse as they are, as relativistic, since they lend themselves to the claim that the statements of science are conditioned by the historical circumstances that serve as their frame of legitimation. There would be no truth in itself, only truth enunciated within a frame of reference. Another way to see it is to interpret these scientific communities, structured around a set of practices, procedures, and validation rules whose origin is mainly spontaneous, as a sort of "abstract discovery machines."
In general, a series of physical devices arranged in a process of transforming inputs into outputs is called a machine. But such physical devices are organized according to an abstract plan that assigns them functions within a certain process. When this plan can be worked through by mental operations alone, without resorting to the construction of the physical machine, and those mental operations yield verifiable results, we are faced with an abstract machine. In recent times, the term "algorithm" has also come to be used for an information process that does not depend on the free will of the researchers but consists in following an automatic procedure.
Along this line, Friedrich Hayek characterized competition as a discovery process, that is, as an abstract machine that processes data and yields results that describe reality. In fact, discovery would be the one function of a system of free competition that gives it a differential over the rest of the systems. A monopoly whose margins of profitability were controlled, either by a maximum price or by a tax on profits, would be more efficient in the production of a given good than a set of small producers without market power and without scale: the monopolist's scale allows greater technological efficiency than small producers competing with each other, and the economic inefficiency can be addressed through regulatory or tax tools. Where a system of free competition is incomparably superior, however, is in the discovery process driven by its own dynamics. It is the benefits that innovation brings, as an unanticipated consequence of a system of free competition, that far exceed all the supposed advantages of a regulated system.
It is this innovation, produced most of the time involuntarily by an institutional system of free competition (what Acemoglu & Robinson call "inclusive economic institutions"), that allowed Hayek to characterize competition as a discovery process; in other words, as an abstract innovation machine.
This characterization of innovation processes through institutions that function as algorithms that produce new knowledge can also be extended to scientific communities and to the evolutionary process of legal norms.
- Oakeshott versus the American Originalists David Glasner, Uneasy Money
- The limits of common sense Charlotte Allen, Law & Liberty
- Woah. Catherine Kim, Vox
- Getting to definitions and then getting beyond them Nick Nielsen, Grand Strategy Annex
The core message of a number of books I’ve recently had the great pleasure to read has been fairly simple. Have a look. Check it out. Put your numbers in perspective. In a world awash with statistics and cognitive biases imploring us to cheer mindlessly for our own team, having the skill and wherewithal to step back and carefully ask: “can this really be so?” is golden.
One of the most profound pieces of advice from the late celebrity professor and YouTube phenomenon Hans Rosling for countering misinformation about the state of the world is precisely this: put all numbers in perspective. Never accept unaccompanied numbers – never believe the numerator without checking the denominator. What matters, as Bryan Caplan never ceases to emphasize as the GMU Economics creed, "are statistics, not emotions – and arguments, not stories."
But a statistic may never be left alone, Rosling maintains; it must always be compared to other relevant numbers. What share of its total category does this statistic represent? What was it last year, or 5 or 10 or 20 years ago? Is there some self-evident change in associated behavior that is relevant, or that ought to explain it? A century ago street cars used to kill and injure hundreds of people every year, but since very few American cities make use of street cars today, the casualty count is fortunately much lower. If we keep in mind that miles travelled by car far outnumber miles travelled by street car, reporting the number of street car deaths – while probably correct – entirely misses the point when discussing traffic safety. In How Not To Be Wrong, mathematics professor Jordan Ellenberg quipped:
Dividing one number by another is mere computation; knowing what to divide by what is mathematics.
Here’s another example. If I told you about 23 000 individual deaths and spent a brief 10 seconds on each of them, going through the list would take me almost three days. On a personal level like that, 23 000 deaths is an absurd, insane, catastrophe-style event that few people are emotionally equipped to handle – essentially the size of my hometown, wiped out in a single year. If I told you those 23 000 deaths were due to antibiotic-resistant diseases in the U.S. last year, the pandemic scenarios working through your mind quickly escalate. That many! Let’s find the nearest bunker!
If I then told you that cancer and heart diseases (each!) claim the lives of about 20x that, the fear of lethal apocalyptic germs consuming the world ought to quickly recede. Oh.
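The arithmetic behind this perspective shift can be sketched in a few lines. The figures come straight from the text (23 000 deaths, 10 seconds each, "about 20x" for cancer and heart disease); everything else is just division.

```python
# Putting a raw death count in perspective, using the figures from the text.
deaths_amr = 23_000      # antibiotic-resistant infections, U.S., one year
seconds_each = 10        # a brief mention per person

days_to_read = deaths_amr * seconds_each / (60 * 60 * 24)
print(f"Reading the list aloud: {days_to_read:.1f} days")  # ~2.7 days

# Cancer and heart disease each claim roughly 20x as many lives,
# which shrinks the apparent "apocalypse" to a fraction of either.
deaths_cancer_scale = 20 * deaths_amr
print(f"As a share of the larger killer: {deaths_amr / deaths_cancer_scale:.0%}")
```

The same number looks enormous next to one intuition (a wiped-out hometown) and modest next to another denominator (the leading causes of death); the calculation is trivial, but choosing the comparison is the whole game.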
Here’s another example. It is entirely correct to point out that the number of people killed in worldwide airplane accidents in 2018 (556 people) was much higher than the year before (44 people) and the year before that (325 people). Would one be excused for believing that air travel is getting more risky and dangerous? Forbes, for instance, ran a roughly accurate story claiming that airline fatalities increased by 900%.
Not in the slightest. The number of fatalities from air travel has been falling for decades, all while the number of flights and miles travelled has increased exponentially, meaning that the per-flight, per-mile, or per-passenger risk of death has kept dropping. Not to mention that alternative modes of travelling, like driving, are orders of magnitude more dangerous.
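A minimal sketch of the rate calculation makes the point concrete. The fatality counts are the ones quoted above; the annual flight totals are hypothetical placeholders (not real statistics), there only to show how dividing by a denominator changes the picture that the raw counts paint.

```python
# Fatality counts from the text; flight totals are HYPOTHETICAL,
# purely to illustrate the per-flight rate calculation.
fatalities = {2016: 325, 2017: 44, 2018: 556}
flights_millions = {2016: 35.0, 2017: 36.0, 2018: 37.0}  # hypothetical

for year in sorted(fatalities):
    rate = fatalities[year] / flights_millions[year]
    print(f"{year}: {fatalities[year]:>3} deaths, "
          f"{rate:.1f} per million flights")
```

The raw 2017-to-2018 jump looks like a 900%-style explosion, but once the (growing) denominator is in place, each year's figure is a tiny per-flight risk, and the long-run trend in that risk is what actually answers the safety question.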
While Rosling teaches us to figure out what the base rate is, i.e. putting our statistic into appropriate perspective, one of Philip Tetlock’s tricks for becoming a ‘Superforecaster’ is to use Bayesian updating of one’s beliefs. This picks up precisely where Rosling’s idea left off. Once we know where to start, we have to amass more information, numbers and observations from other points of view – Bayesian updating is a popular method to incorporate and synthesize new information with the old.
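One round of the Bayesian updating that Tetlock recommends can be sketched as follows. The function name and all the numbers are illustrative, not drawn from Tetlock's book: start from a base rate (Rosling's step), then revise it when a new piece of evidence arrives.

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior probability of a hypothesis after one piece of evidence.

    prior:               base-rate probability of the hypothesis
    likelihood_if_true:  P(evidence | hypothesis is true)
    likelihood_if_false: P(evidence | hypothesis is false)
    """
    numerator = prior * likelihood_if_true
    denominator = numerator + (1 - prior) * likelihood_if_false
    return numerator / denominator

# Illustrative numbers: a 5% base rate, and evidence that is 8x more
# likely if the hypothesis is true than if it is false.
posterior = bayes_update(prior=0.05, likelihood_if_true=0.8,
                         likelihood_if_false=0.1)
print(f"{posterior:.2f}")  # ~0.30
```

Even strong-looking evidence only lifts a 5% base rate to about 30% here, which is the forecaster's discipline in miniature: the denominator (how often the evidence appears anyway) restrains the jump, just as Rosling's base rate restrains the starting point.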
In short “Calculation, like logic, is your friend” (Landsburg 2018: 44). Statistics matter and numbers can deceive. In order to better understand our realities and see through mistakes that others make – either intentionally to deceive or persuade, or unintentionally through ignorance – we must embrace the core message of people like Ellenberg, Tetlock, Duffy, Rosling or Pinker.
Always Be Comparing Thy Numbers. Never accept an unaccompanied statistic. Never trust numerators without denominators.
Irfan Khawaja has a good argument on Yoram Hazony’s new book on nationalism, which is being thoroughly and thoughtfully dissected by Arnold Kling:
Does anyone understand the point that Kling and/or Hazony are making about the relation between legitimacy based on voluntary acceptance, and consent? On the one hand, the claim is that in a legitimate government, we obey the law “voluntarily”; on the other hand, the claim is that we do not consent to government. How can we not consent to government if we obey it voluntarily? Coming the other way around: how can we obey it voluntarily if we don’t consent to it? Even if Hazony wants to broaden consent beyond the Lockean account, that’s still a broadening of the conditions of consent, not a nullification of the role of consent. The combination of claims that Kling attributes to Hazony does not seem coherent.
As a reminder, this is not a philosophical argument. Well, it is but it isn’t. I suspect this is about Israel and Palestine as much as it is about logical rigor. Stay tuned, and don’t be shy about having your say!
- good update on the mayhem in the Middle East
- as good as that update is, though: Iraq, Saudi Arabia to reopen border crossings after 27 years
- great read on Russia’s Far East and Russia’s travel writing genre
- in Russia, Lutheranism (Protestantism) is considered a “traditional” religion (h/t NEO)
- how social is reason?
It is not clear what the policy consequences are regarding those who lose out due to competition. If we are free to choose our friends, there will be losers who lose someone’s friendship. Should we be forced to stay friends with those we no longer like? If not, then such a loss has no policy implication. Such incidental injuries have less damaging consequences than a law that prohibits ending a friendship. Thus the deontological and consequential effects are complements: Likewise, the consequences of prohibiting economic competition are worse than the losses due to competition. And entrepreneurs should know that the system is a profit and loss system, and anyone in business is vulnerable to losses. The losses due to competition are not torts and they are not coercive harms. They are injuries not deliberately inflicted but incidental to individuals and firms pursuing their happiness.
This is in response to my short post on the ethical divide within libertarianism between deontologists and consequentialists. I don’t think there is too much that we disagree on here; indeed, it seems as if we are complimenting each other quite nicely (if I do say so myself!).
My one quibble is more of a question than a quibble: although we cannot predict who will lose out to competition in markets, shouldn’t we be able to make some solid inferences? For example, if the US and Europe were to abolish subsidies to farmers and open up their markets to foreign competition, it stands to reason that Western farmers would lose out, at least in the short run.
The logic behind Dr Foldvary’s comment is relatively clear: abolish protectionist subsidies (which are aggressive legislative acts perpetrated against Western consumers and foreign farmers) and this paves the way for non-aggression. Not only is this logic clear, it is irrefutable. It also shows how deontology and consequentialism are complementary. However, logic and facts are not very useful when it comes to persuading the public. Philosophically this argument makes perfect sense, and politically and rhetorically Dr Foldvary makes it work, but in the general public sphere (especially the internet) the appeal to deontology has earmarked liberalization for disaster.
I suppose, if we follow Jacob Huebert’s line of reasoning, that the politics and the rhetoric of our ideas should not matter, but on the other hand we live in a world where even in the West libertarians have become a minority. The world will continue to liberalize as long as libertarians continue to be as lucid as Dr Foldvary, but I fear that men of his caliber are in very short supply today.