Some Thoughts on State Capacity

State capacity is an important topic and the subject of much recent attention in both development economics and economic history. Together with Noel Johnson I’ve recently written a survey article on the topic (here). At the same time, many libertarians and classical liberals are uncomfortable with the concept (see here and here). I think these criticisms are useful but misplaced. Addressing them will hopefully move the debate forward in a productive fashion.

Here I will just focus on one issue: the argument recently made by Alex Salter that state capacity is a black box. Alex notes correctly that we have a detailed and convincing theory for how markets can lead to economic growth (by directing resources to their most efficient use). In contrast, according to Alex:

“State capacity, by itself, addresses neither the information issue nor the incentive issue. While governance institutions obviously began centralizing at the beginning of the modern era, this is just a morphological description of what happened to institutions. On its own, that’s insufficient as a causal explanation”.

I think Alex and other critics are on the wrong track here. State capacity is not an alternative to markets as an explanation for economic growth. The relevant question is what impeded market development before, say, 1700, and what enabled the growth of markets after around 1700. The evidence provided by a body of research suggests that prior to 1700 market development was impeded by political fragmentation both within and between states. Critics of the state capacity argument should engage with this literature.

A second claim Alex makes is that we lack a theory for why the more centralized states that arose after 1700 were less rent-seeking and predatory than their weaker and more internally fragmented predecessors. But in fact we have a fairly good understanding of many of the mechanisms responsible for the demise of the more costly forms of rent-seeking that characterized medieval and early modern Europe. This understanding is based on the work of James Buchanan and Mancur Olson.

The basic argument is this. Medieval and early modern states were mostly devices for rent-extraction and rent-seeking, but this rent-extraction and rent-seeking was largely decentralized. Rulers collected taxes through a variety of costly and inefficient means (such as selling monopolies) and then spent the revenue on costly wars.

Decentralized rent-extraction was costly and inefficient. For example, it is well known that weights and measures varied from place to place in preindustrial Europe. What is less well known is that there were institutional reasons for this: each local lord wanted to use his own measures in order to extract more surplus from the peasants who were forced to grind their grain at his mill. Local cities similarly used their own systems of weights and measures to extract surplus from traveling merchants. This benefited each local lord and city authority but imposed a large deadweight loss on the economy at large.

The logic of internal tariffs was similar. Each local lord or city would choose its internal tariffs in order to maximize its own income. But we know from elementary microeconomics that in this setting each local authority will set these tariffs “too high,” because it does not take into account the effect of its tariff on the revenues of its neighbors, who in turn set their own tariffs too high.

When early modern European rulers invested in state capacity, they sought to abolish or restrict such internal tariffs, to impose uniform taxes, and to standardize weights and measures. This reduced deadweight loss: when the king set the tax rate, he took into account the revenue from his entire realm and thereby internalized the externality described above. The reasoning is identical to the argument that a single integrated monopolist is preferable to separate upstream and downstream monopolists. When it comes to a public bad (like rent-seeking), a monopolist is preferable to competition.
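
The logic can be made concrete with a minimal numeric sketch. It assumes linear demand for shipments that must cross two territories; the functional form and all numbers are my illustration, not drawn from the literature. Independent toll-setters stack tariffs above what a single revenue-maximizing ruler would charge:

```python
# Toy "tariff stacking" model, assuming linear demand Q = a - b*T,
# where T is the sum of all tolls. Parameter values are illustrative.
a, b = 100.0, 1.0

# Decentralized case: two lords each toll goods that must cross both
# territories. Lord i maximizes t_i * (a - b*(t_i + t_j)); the symmetric
# Nash equilibrium solves t = (a - b*t) / (2*b), i.e. t = a / (3*b).
t_each = a / (3 * b)
total_toll_nash = 2 * t_each
trade_nash = a - b * total_toll_nash
revenue_nash = total_toll_nash * trade_nash

# Centralized case: one king maximizes T * (a - b*T) over his whole realm.
total_toll_king = a / (2 * b)
trade_king = a - b * total_toll_king
revenue_king = total_toll_king * trade_king

print(f"two lords: toll {total_toll_nash:.1f}, trade {trade_nash:.1f}, revenue {revenue_nash:.1f}")
print(f"one king:  toll {total_toll_king:.1f}, trade {trade_king:.1f}, revenue {revenue_king:.1f}")
# two lords: toll 66.7, trade 33.3, revenue 2222.2
# one king:  toll 50.0, trade 50.0, revenue 2500.0
```

The two lords jointly charge a higher toll, choke off more trade, and collect less revenue than the king does; centralization is an improvement even though the king remains a revenue-maximizer.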

Political Decentralization and Innovation in early modern Europe

My full review of Joel Mokyr’s A Culture of Growth is forthcoming in the Independent Review. Unfortunately, it won’t be out until the Winter 2017 issue is released so here is a preview. Specifically, I want to discuss one of the main themes of the book and my review: the role of political decentralization in the onset of economic growth in western Europe.

This argument goes back to Montesquieu and David Hume. It is discussed in detail in my paper “Unified China; Divided Europe” (forthcoming in the International Economic Review and available here). But though many writers have argued that fragmentation was key to Europe’s eventual rise, these arguments are often underspecified, fail to explain the relevant mechanisms, or do not discuss counter-examples. Mokyr, however, has an original take on the argument which is worth emphasizing and considering in detail.

Mokyr focuses on how the competitive nature of the European state system provided dynamic incentives for economic growth and development. This argument is different from the classic one, according to which political competition led to fiscal competition, lower taxes, and better protection of property rights (see here). That argument rests on a faulty analogy between competition in the marketplace and competition between states. The main problem it encounters is that while firms can only attract customers by offering lower prices (lower taxes) or better products (better public goods), states can compete with violence. Far from being rewarded for their competitiveness, low-tax states like the Polish-Lithuanian Commonwealth were crushed in the high-pressure competitive environment that characterized early modern Europe. The notion that competition produced low taxes is also falsified by the well-established finding that taxes were much higher in early modern Europe than elsewhere in the world.

It is also not the case that political fragmentation is always and everywhere good for economic development. India was fragmented for much of its history. Medieval Ireland was fragmented into countless chiefdoms prior to the English conquest. Perhaps we can distinguish between low-intensity fragmented state systems, which tended not to generate competitive pressure (such as medieval Ireland or South-East Asia), and high-intensity fragmented state systems (such as early modern Europe or Warring States China). But even then it is not clear that a highly competitive and fragmented state system will be good for growth. In general, political fragmentation raised barriers to trade and impeded market integration. Moreover, a competitive state system means more conflict, or more resources spent deterring conflict. For this reason political fragmentation tends to result in wasteful military spending. It can be easily shown, for instance, that a much higher proportion of the population spent their lives in the economically wasteful activity of soldiering in fragmented medieval and early modern Europe than did in either the Roman Empire or imperial China (see Ko, Koyama, Sng, 2018).

Innovation and Decentralization

What then is Mokyr’s basis for claiming that political fragmentation was crucial for the onset of modern growth? Essentially, for Mokyr the upside of Europe’s political divisions was dynamic. It was the conjunction of political fragmentation with a thriving trans-European intellectual culture that was crucial for the eventual transition to modern growth. The political divisions of Europe meant that innovative and heretical thinkers had an avenue of escape from oppressive political authorities. This escape valve prevented the ideas and innovations of the Renaissance and Reformation from being crushed once the Counter-Reformation became ascendant in southern Europe after 1600. Giordano Bruno was burned in Rome, but in general heretical and subversive thinkers could escape the Inquisition by judiciously moving across borders.

Political fragmentation enabled thinkers from Descartes and Bayle to Voltaire and Rousseau to flee France. It also allowed Hobbes to escape to Paris during the English Civil War and Locke to wait out the anger of Charles II in the Netherlands. Also important was the fact that the political divisions of Europe meant that no writer or scientist was dependent on the favor of a single, all-powerful monarch. A host of different patrons were available and willing to compete to attract the best talents. Christina of Sweden sponsored Descartes. Charles II hired Hobbes as a mathematics teacher for a while. Leibniz was the adornment of the House of Hanover.

The other important point Mokyr stresses is Europe’s cultural unity and interconnectedness. As I conclude in my review, Mokyr’s argument is that

“the cultural unity of Europe meant that the inventors, innovators, and tinkers in England and the Dutch Republic could build on the advances of the European-wide Scientific Revolution. Europe’s interconnectivity due to the Republic of Letters helped to give rise to a continent-wide Enlightenment Culture. In the British Isles, this met a response from apprentice trained and skilled craftsmen able to tinker with and improve existing technologies.  In contrast, political fragmentation in the medieval Middle East or pre-modern India does not seem to have promoted innovation, whereas the political unity of Qing China produced an elite culture that was conservative and that stifled free thinking”.

It is this greater network connectivity that needs particular emphasis and should be the focus of future research into the intellectual origins of growth in western Europe. At present we can only speculate on its origins. The printing press certainly deserves mention, as it was the key innovation that helped the diffusion of ideas. Mokyr also points to the postal system as a crucial institutional development that enabled rapid communication across political boundaries. Other factors include the development of a nascent European identity and what Chris Wickham calls, in his recent book on medieval Europe, “the late medieval public sphere” (Wickham, 2016). These developments were important but understudied complements to the fragmented nature of the European state system so frequently highlighted in the literature.

AI: Bootleggers and Baptists Edition

“Elon Musk Is Wrong about Artificial Intelligence and the Precautionary Principle” – Reason.com via @nuzzel

(disclaimer: I haven’t dug any deeper than reading the above-linked article.)

Apparently Elon Musk is afraid enough of the potential downsides of artificial intelligence to declare it “a rare case where we should be proactive in regulation instead of reactive. By the time we are reactive in AI regulation, it is too late.”

Like literally everything else, AI does have downsides. And, like anything that touches so many areas of our lives, those downsides could be significant (even catastrophic). But the most likely outcome of regulating AI is that people already investing in that space (i.e. Elon Musk) would set the rules of competition in the biggest markets. (A more insidious possible outcome is that those who would use AI for bad would be left alone.) To me this looks like a classic Bootleggers and Baptists story.

The Economics of Hard Choices

In economics, there are two types of numbers that we use. Cardinal numbers express amounts. For example, “one”, “two”, “three”, etc. are all cardinal numbers. You can add them, subtract them, or even raise them to an exponent.

Money prices are cardinal, which is why you can calculate precise profits and losses.

On the other hand, ordinal numbers express ranks. For example, “first”, “second”, “third”, etc. are all ordinal numbers. It doesn’t really make sense to talk about adding (or subtracting or exponentiating) ranks.

Almost all economists believe that utility is ordinal. This means your preferences are ranked: first most preferred, second most preferred, and so on. Here is a made-up value scale:

1st. Having a slice of pizza
2nd. Having $2 in cash
3rd. Having a cyanide pill

Someone with the above preferences would give $2 in cash in order to get a slice of pizza, but would rather keep their $2 than have a cyanide pill. By the same principle, they would also prefer a slice of pizza to a cyanide pill.
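
Here is a minimal sketch of the idea, with preferences represented as an ordered Python list. The ranking is the made-up example above, and position on the list is the only information the code ever uses:

```python
# Ordinal preferences: a ranked list where only position matters.
value_scale = [
    "a slice of pizza",  # 1st: most preferred
    "$2 in cash",        # 2nd
    "a cyanide pill",    # 3rd
]

def choose(option_a: str, option_b: str) -> str:
    """Pick whichever option sits higher (earlier) on the value scale."""
    return min(option_a, option_b, key=value_scale.index)

print(choose("$2 in cash", "a slice of pizza"))  # -> a slice of pizza
print(choose("$2 in cash", "a cyanide pill"))    # -> $2 in cash
```

Note that nothing here supports adding or scaling ranks; the only question the list can answer is “which comes first?”, and transitivity falls out of the ordering for free.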

This is in contrast to cardinal utility, which requires the existence of something like “utils”. It’s just as nonsensical to say that “Sally gets twice as many utils from her first preferred good as from the next best thing” as it is to say “I like my first best friend twice as much as my second best friend.”

Usually, this is where most discussions of ordinality as it applies to economics end. But I believe I have a new extension of this concept that affects utility theory.

A New Perspective on Ordinal Preferences

Some people are dissatisfied with the ordinal approach to utility. “Sure, I prefer pizza over cyanide,” they’ll say, “but I really, really prefer pizza. You can’t show this intensity of preferences ordinally!” In other words, they believe the ordinal approach is lacking something real that a cardinal approach could approximate.

Well, it’s true that in a specific moment when I observe you choosing pizza over cyanide, I can’t really tell “how much” you preferred it.

But one way I can model it is that in your mind, you have a value scale of all the things you wanted in that moment, and that the thing you “really, really” wanted is ranked “much, much” higher relative to the other thing.

Let’s say pizza was first on your value scale, and cyanide was 1000th. So while it’s wrong to say you preferred pizza “one thousand times” as much as cyanide, it would be correct to say you would have preferred 999 other things to cyanide.

In other words, you would rather have any one of these 999 other things instead of cyanide—with pizza being chief among them. This is the sense in which you “really, really” prefer pizza to cyanide. We’ve been able to express the “intensity” sentiment without resorting to cardinal numbers.

Let’s extend the example. If your choice was between pizza and sushi, and sushi was your 2nd ranked good, then we can say several equivalent things: (i) you’re closer to indifference (i.e., viewing them as the same good) between pizza and sushi than pizza and cyanide; (ii) your preference for pizza over cyanide is stronger than your preference for pizza over sushi; (iii) you prefer pizza less intensely to sushi than to cyanide; and (iv) it’s easier for you to choose between pizza and cyanide than it is to choose between pizza and sushi.
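
Continuing the sketch above, “intensity” can be read off as nothing more than rank distance—the number of alternatives separating two goods on the scale. The specific ranks below are hypothetical:

```python
# "Intensity" as rank distance, not as a cardinal quantity.
rank = {"pizza": 1, "sushi": 2, "cyanide": 1000}  # hypothetical ranks

def things_preferred_over(good: str) -> int:
    """How many other goods outrank this one on the value scale."""
    return rank[good] - 1

def rank_gap(a: str, b: str) -> int:
    """How far apart two goods sit; a small gap means a 'harder' choice."""
    return abs(rank[a] - rank[b])

print(things_preferred_over("cyanide"))  # 999
print(rank_gap("pizza", "sushi"))        # 1   -> near indifference
print(rank_gap("pizza", "cyanide"))      # 999 -> an easy choice
```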

Of course, we don’t walk around with an exhaustive list of all the goods we could possibly want at any time. This fact may make it virtually impossible to empirically test this account of psychology. But this way of thinking about “intensity of preferences” is at least consistent with ordinal preferences, meaning we can better understand this phenomenon using mental tools we’re already familiar with.

I believe this way of thinking is also useful in interpreting “hard choices”. Everyone is familiar with being in a situation where you don’t know what to choose between two seemingly attractive alternatives. An easily relatable example might be choosing a drink at the self-serve cola machines that are in most fast food restaurants now. You might really like both Vanilla Coke and Cherry Coke, but you can only choose one. You feel as though you like both of them equally, and this is what makes it a hard choice.


In other words, I’m proposing that this is a hard choice because these two goods are positioned very close together on your value scale—so close, in fact, that you have a difficult time determining which outranks the other.

Applications of this framework

You might object. “This might be fun to think about, it might even be a contribution to psychology. But what implication does this have for economics—that is, the science of human action?”

My answer is this: when presented with a difficult choice, a person will choose to wait instead of choosing instantly. Waiting allows them to collect more information, deliberate more, and consider other options.

(More formally: a person facing a choice between two goods that are indistinguishably close on their value scale will not choose either good in the present period; instead, they will postpone the choice to a future period where they expect to have a larger information set.)
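
Here is a sketch of that rule, with the caveat that the “resolution” threshold below is my assumption—a stand-in for how finely the agent can distinguish nearby ranks in the present period:

```python
# Postponement rule: if two goods are too close on the value scale to
# tell apart, the chosen action in this period is to wait.
def act(rank_a: int, rank_b: int, resolution: int = 1) -> str:
    # `resolution` is an assumed parameter: the smallest rank gap the
    # agent can reliably distinguish in the present period.
    if abs(rank_a - rank_b) <= resolution:
        return "wait (gather information, deliberate, consider other options)"
    return "choose the higher-ranked good now"

print(act(1, 1000))  # easy choice -> choose now
print(act(12, 13))   # hard choice -> wait
```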

How can we apply this framework to the real world? Before we begin, note well that hesitation is itself an action. And as an action, it has a place on the individual’s value scale. For an entrepreneur, this has at least two implications.

First, the entrepreneur’s consumers may be hesitating because they can’t choose between the goods on sale. Think back to the cola example: it’s possible that you’re in such a rush that the hesitation is not worth your time, and you choose not to buy cola at all.

Second, I believe this approach can shed new light on the issue of so-called “transfer pricing”. Transfer pricing typically comes up in the context of the tax ramifications for a firm buying or selling assets from a subsidiary. But we can generalize the concept by considering a very large firm that buys and sells the inputs for its final product from itself, either because it has grown so large that it has merged with all of its competitors and suppliers, or because it holds a government monopoly under which no other firm is allowed to produce that input. In short, if a firm has monopolized the production of its inputs to the extent that no market prices exist for them, how should it calculate its own costs (and therefore profits)?

Murray Rothbard was the first to observe that a firm that has grown so large that this becomes a problem has, in effect, become a socialist economy. And just as a socialist economy can’t produce efficiently without market prices, neither can this hypothetical firm. (For more on this point, see pp. 659-660 of Murray Rothbard’s Man, Economy, and State with Power and Market.)

And so while the standard story of why a socialist economy can’t rationally calculate profits and losses is based on the cardinal notion of money—without money prices, you literally cannot subtract costs from revenues—I approach it from a different angle. Namely, action becomes “harder” because the absence of a market prevents the firm (or socialist government) from observing its own ordinal rankings for its inputs. And so the firm faces a “hard choice” in optimizing its own production schedule.

Firms and governments also demonstrate that they’re engaging in hard choices by establishing bureaucracies. Firms hire “transfer pricing specialists”, governments set up councils that “determine” prices, and so on. This is analogous to an indecisive person hiring a consultant to help them pick between Vanilla Coke and Cherry Coke.

Conclusion

In summary, a “hard choice” is one where a person cannot distinguish where two heterogeneous goods rank on their own value scale. This is demonstrated through hesitation and/or deliberation. The harder it is to establish a market price for goods, the more hesitation and deliberation will be required.

I believe this approach, if correct, can be insightful for both entrepreneurs and researchers. A new question it raises is: “how do individuals, firms, and governments respond under different circumstances when faced with a hard choice?”

In a future post, I hope to extend this framework of ambiguous ordinal rankings to probabilities as well. In the meantime, I look forward to any feedback on this post.


Minimum Wages: Short rejoinder to Geloso

A few days ago I posted here at NOL a short comment on some reactions I’ve seen with regard to Seattle’s minimum wage study. Vincent Geloso offers an insightful criticism of my argument. Even if his point is quite specific (or so it seems to me), it offers an opportunity for some clarification.

But first, what was my argument? My comment was aimed at a specific point raised by advocates of increasing minimum wages: namely, that even if Seattle’s study shows an increase in unemployment, a study with a larger sample may say otherwise. My point is that this criticism, as I’ve seen it raised, misses the economic insight of minimum wage analysis, namely that jobs will be lost at less efficient employers and among less efficient employees first. So far so good. The problem Geloso points out is with my example. I cast McDonald’s, the fast-food chain, as the efficient employer (think economies of scale), and the neighborhood family-run food place (the neighborhood diner) as the less efficient employer.

Geloso correctly argues that different employers react in different ways. It is expected, for instance, that a larger employer such as a fast-food chain would have more options for making a marginal adjustment when the minimum wage increases. Of course, I agree, but the point I’m raising is about where jobs will be lost first (not the specific mechanism at each employer). Geloso flips my example and argues that a small diner has more to lose, in relative terms, by letting go one of its two employees than a fast food joint that lets one employee go among maybe ten thousand. By letting one employee go, the small employer loses a larger share of its output. Therefore a small employer would be more inclined to keep all of his labor force and cut costs on another front (fewer hours worked on average doesn’t cut it; that’s like shared unemployment, and it would also cut output).

A large employer like a fast food chain, however, can let one out of ten thousand employees go because the loss in output is not that significant. I have two issues with this example. The first is that a fast food chain faces the increase in minimum wage ten thousand times, not twice. Just to offset the rise in labor costs, the fast food chain would have to cut its labor force by roughly 15% (1,500 employees). But I think the problem with this example does not end there. If it were the case that small diners don’t cut employment but fast food chains do, then we should see more unemployment at larger employers than at small neighborhood diners.
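
To see where a figure like 15% comes from, here is a back-of-the-envelope sketch. The wage numbers are my illustration (roughly the size of Seattle’s $11-to-$13 step), not figures from the study: a proportional wage increase of p requires a workforce cut of p/(1+p) just to hold total payroll constant.

```python
# Workforce cut needed to keep total payroll constant after a wage increase:
# new_wage * N * (1 - cut) = old_wage * N  =>  cut = 1 - old/new = p/(1+p).
old_wage, new_wage = 11.00, 13.00   # illustrative, ~Seattle's step
p = new_wage / old_wage - 1         # ~18.2% wage increase
cut = p / (1 + p)                   # ~15.4% of the workforce

employees = 10_000
print(f"wage increase {p:.1%} -> cut {cut:.1%} (~{employees * cut:,.0f} of {employees:,})")
# wage increase 18.2% -> cut 15.4% (~1,538 of 10,000)
```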

A second point concerns Geloso’s argument that the study focuses “like a laser” on one of multiple channels in the group most likely to respond through that channel (unemployment). My point is precisely that a study of unemployment effects should focus on the less efficient employers (and employees) first, and not just look at the unaffected employers because that’s where we happen to have better statistics. There are two options. The first is that what matters is the channel through which employers manage the increase in cost; but that is a focus neither on unemployment nor on the criticism I was replying to. The second option is that the study should focus on the employers “most likely” to reduce employment, which is exactly my point, regardless of how many “channels” are included in the sample.

Minimum Wages: Where to Look for Evidence

A recent study on the effect of minimum wages in the city of Seattle has produced some conflicting reactions. As most economists expected, the significant increase in the minimum wage resulted in job losses and bankruptcies. Others, however, doubt the validity of the results, given that the sample may be incomplete.

In this post I want to focus on just one empirical problem. An incomplete sample is not in itself a problem; the issue is whether or not the observations missing from the sample are relevant. This problem has been pointed out before as the Russian Roulette Effect: asking the survivors of a minimum wage increase whether the increase has put them out of business. Of course, the answer is no. In the case of Seattle, a concern might be that fast food chains such as McDonald’s are not properly included in the study.

The first reaction is: so what? Why is that a problem? If the issue is to show that an increase of wages above their equilibrium level produces unemployment, all that has to be shown is that this actually happens, not where it does not happen. This concern about the Seattle study misses a key point of the economic analysis of minimum wages. The prediction is that jobs will be lost first among less efficient workers and less efficient employers, not equally across all workers and employers. More efficient employers may be able to absorb a larger share of the wage increase, cut other compensation, delay lay-offs, etc. This follows from the fact that demand is downward sloping and that a minimum wage above the equilibrium level “cuts” the demand curve in two. Some employers are below the minimum wage line (the less efficient ones) and others are above it (the more efficient ones). Let’s call the former “Uncle’s diner” and the latter “McDonald’s.” This is how it looks in a demand and supply graph:

[Figure: minimum wage in a demand and supply graph]

Surely, there is some overlap. But the point this graph makes is that looking at the effects of the minimum wage above the red line is looking in the wrong place. A study looking for the effect on employment needs to look at what happens below the red line. Less information is available for this sample, of course, than for fast food chains such as McDonald’s; this is a reason why some studies focus on what can be seen even though the effect happens in what cannot be seen (and this is a value added of the Seattle study).
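
A toy simulation makes the sampling problem concrete (all numbers are invented): give each employer a maximum wage it can absorb, impose a minimum wage, and measure job losses twice—once economy-wide, and once in a sample restricted to the efficient “McDonald’s” segment above the line.

```python
import random
random.seed(0)

# Each employer can absorb at most `w_max` per worker (a stand-in for
# efficiency); a binding minimum wage destroys jobs where w_max is below it.
employers = [{"w_max": random.uniform(8, 20), "jobs": 10} for _ in range(1000)]
min_wage = 13.0

jobs_lost_all = sum(e["jobs"] for e in employers if e["w_max"] < min_wage)

# A sample restricted to efficient employers (above the line) finds no
# losses by construction: the affected employers were never sampled.
efficient_only = [e for e in employers if e["w_max"] >= 16.0]
jobs_lost_sampled = sum(e["jobs"] for e in efficient_only if e["w_max"] < min_wage)

print(f"jobs lost economy-wide: {jobs_lost_all}")          # ~5/12 of all jobs
print(f"jobs lost in efficient-only sample: {jobs_lost_sampled}")  # 0
```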

This is why it is important to ask: “what do minimum wage advocates expect to find by increasing the sample size?” To question whether minimum wages increase unemployment, the critics also need to focus on the “Uncle’s diner” part of the demand curve. If the objective is to inquire about something else, then that has no bearing on the fact that minimum wage increases do produce unemployment in the minimum wage market, and first in its less efficient portion (where data is harder to gather).

PS: I have a previous post on minimum wages that can be found here.

More simple economic truths I wish more people understood

There are only four ways to spend money:

  1. You spend your money on yourself.
  2. You spend your money on others.
  3. You spend other people’s money on yourself.
  4. You spend other people’s money on others.

Just think about it for one second and you will agree: these are the only possible ways to spend money.

The best way to spend money is to spend your money on yourself. The worst is to spend other people’s money on others.

When you spend your money on yourself, you know how much money you have and what your needs are.

When you spend your money on others, you know how much money you have, but you don’t know other people’s needs as well as they do.

When you spend other people’s money on yourself, you know your needs but you don’t know how much money you can actually spend. In a situation like this, some people will shy away from spending much money (when they actually could) and will end up not totally satisfied. Others will have no such problem and will spend like crazy, not noticing that they are spending too much. It doesn’t matter which kind of person you are: the fact is that this is not a good way to spend money.

When you spend other people’s money on others, you have the worst-case scenario: you don’t know other people’s needs as well as they do, and you also don’t have a clear grasp of how much money you can actually spend.

In a truly liberal capitalist society, the majority of money is spent by individuals on their own needs. The more a society drifts away from this ideal, the more people spend money that is not actually theirs on other people, and the more money is misused.

The government basically spends money that is not its own on other people. That’s why government spending is usually bad. Even the most well-meaning government official, one whose views are most in line with your personal beliefs, will probably not spend your money as well as you could yourself.

Rent-Seeking Rebels of 1776

Since yesterday was Independence Day, I thought I should share a recent piece of research I made available. A few months ago, I completed a working paper, now accepted as a book chapter, on public choice theory insights for American economic history (which I have talked about before). That paper argued that the American Revolutionary War that led to independence partly resulted from strings of rent-seeking actions (disclaimer: the title of this blog post was chosen to attract attention).

The first element of that string is that the Americans were given a relatively high level of autonomy over their own affairs. However, that autonomy did not come with full financial responsibility. In fact, the American colonists were still net beneficiaries of imperial finance. As the long period of peace that lasted from 1713 to 1740 ended, the British started to spend increasingly large sums on the defense of the colonies. This meant that the British were, by subsidizing defense, effectively inciting the colonists to take aggressive measures that might benefit them (i.e. raid instead of trade). Indeed, the benefits of any land seizure by conflict would largely fall into the colonists’ lap while the British ended up with the bill.

The second element is the French colony of Acadia (in modern-day Nova Scotia and New Brunswick). I say “French”, but it wasn’t really under French rule. Until 1713, it was nominally under French rule, but the colony of a few thousand was in effect a “stateless” society, since the reach of the French state was non-existent (most of the colonial administration that took place in French North America was in the colony of Quebec). In any case, the French government cared very little for that colony. After 1713, it became a British colony, but again the rule was nominal, and the British tolerated a conditional oath of loyalty (which was basically an oath of neutrality, speaking to the limited ability of the crown to enforce its desires in the colony). However, it was probably one of the most prosperous colonies of the French crown and one where – and this is admitted by historians – the colonists were on the friendliest of terms with the Native Indians. Complex trading networks emerged which allowed the Acadians to acquire land rights from the native tribes in exchange for agricultural goods harvested thanks to sophisticated irrigation systems. These lands were incredibly rich, and they caught the attention of American colonists who wanted to expel the French colonists who, to top it off, were friendly with the natives. This led to a drive to actually deport them. When deportation occurred in 1755 (half the French population was deported), the lands were largely seized by American settlers and British settlers in Nova Scotia. They got all the benefits. However, the crown paid for the military expenses (they were considerable), and it was done against the wishes of the imperial government as an initiative of the local governments of Massachusetts and Nova Scotia. This was clearly a rent-seeking action.

The third link is that in England, the governing coalitions included government creditors who had strong incentives to control government spending, especially given the constraints imposed by debt-financing the intermittent war with the French. These creditors saw the combination of local autonomy and the lack of financial responsibility for that autonomy as a call to centralize management of the empire and avoid such problems in the future. This drive towards centralization was a key factor, according to historians like J.P. Greene, in the initiation of the revolution. It was also the result of rent-seeking on the part of actors in England seeking to protect their own interests.

As such, the history of the American revolution must rely in part on a public choice contribution, in the form of rent-seeking, which paints the revolution in a different (and less glorious) light.

The Behavioural Economics of the “Liberty & Responsibility” couple.

The marketing of liberty comes wrapped in the formula “Liberty + Responsibility.” It is some sort of “you have the right to do what you please, BUT you have to be responsible for your choices.” That is correct: costs and profits are what make our decisions rational, and the lack of the former brings about the tragedy of the commons. In a world where everyone is accountable for his choices, the ideal of liberty as the absence of arbitrary coercion will be delivered by the resulting structure of rational individual decisions limiting our will.

The couple of Liberty and Responsibility is right BUT unattractive. First of all, the formula is not actually “Liberty + Responsibility,” but “Liberty as Absence of Coercion – What Responsibility Takes Away.” The latter is still right: responsibility transforms negative liberty as “absence of coercion” into “absence of arbitrary coercion.” The problem remains one of the marketing of ideas.

David Hume is a strong candidate for the title of “First Behavioural Economist,” since he stated that it is more unpleasant for a man to hold the unfulfilled promise of a good than never to have had either the good or the promise of it. The latter might be experienced as a desire, while the former is experienced as a dispossession. The couple “Liberty – Responsibility” dishes out the same kind of deception.

It is like someone who tells you: “do what you want, enjoy 150% of liberty”; and then suddenly warns you: “but wait! You know there’s no such thing as a free lunch; if you are 150% free, someone will be 50% your slave. Give that illegitimate 50% of freedom back!” And he will be, again, right: being responsible makes everybody 100% free. Right, albeit disappointing.

Perhaps we should restate the formula the other way around: “Being 100% responsible for your choices gives you the right to claim 100% of your freedom.” Only a few will be interested in being more than 100% responsible for anything. But if someone is expected to deal alone with his own needs, at least he will be entitled to claim the right to his full autonomy.

The formula “Responsibility + Liberty” is associated with the evolutionary notion of liberties: rights to be conquered, one by one. Being responsible and then free means that liberty is not unearned income to be neutrally taxed. It is not a “state of nature” to be given up in exchange for civilization, but a project to grow, a goal, a raison d’être.

Putting responsibility first and liberty second yields a curious outcome: you are consciously free to choose the amount of freedom you are really willing to enjoy. Markets and hierarchies are then not antagonistic terms, but freely consented structures of cooperation. Moreover, what we trade are not goods, not even rights over goods, but parcels of our sphere of autonomy.

Simple economics I wish more people understood

Economics comes from the Greek “οίκος,” meaning “household,” and “νέμομαι,” meaning “to manage.” Therefore, in its most basic sense, economy literally means “management of the household.” It applies to the way one manages the resources one has at home.

Everyone has access to limited resources. It doesn’t matter if you are rich, poor, or middle class. Even the richest person on Earth has limited resources. Our day has only 24 hours. We only have one body, and this body starts to decay very early in our lives. Even with modern medicine, we don’t get to live much more than 100 years.

The key to economics is how well we manage our limited resources. We need to make the best of the little we are given.

For most of human history, we were very poor. We had access to very limited resources, and we were not particularly good at managing them. We have become much better at managing resources in the last few centuries. Today we can do much more with much less.

Value is subjective. A thing has value when you think it has value. You may value something that I don’t.

We use money to exchange value. Money in and of itself may have no value at all. It doesn’t matter: the key to money is its ability to transmit information. I value this, and I don’t value that.

Of course, many things can’t be valued in money, at least for most people. But that doesn’t change the fact that money is a very intelligent way to attribute value to things.

The economy cannot be managed centrally by a government agency. We all have access to limited resources, and only we, individually, can judge which resources are most necessary for us at a given moment. Our needs can change suddenly, without notice. You can save money for years to buy a house, only to discover you will have to spend it on medical treatment. It’s sad, even tragic, but it is true. If the economy is managed centrally, you have to transmit the information that your plans have changed to the central authority. But if a great number of people change plans every day, this central authority will inevitably be overloaded. The best judge of how to manage your resources is you.

We can become really rich as a society if we make each person responsible for how they manage their own resources. If each of us manages our resources to the best of our knowledge and abilities, we will have the best resource management possible. We will make the best of the limited resources we have.

Economics has a lot to do with ecology. They share the Greek “οίκος” which, again, means “household.” This planet is our house. The best way to take care of our house is to distribute individual responsibility over individual management of individual pieces of this Earth. No one can possess the whole Earth, but we can take care of the tiny pieces we are given responsibility over.

James Buchanan on racism


Ever since Nancy MacLean’s new book came out, there have been waves of discussion of the intellectual legacy of James Buchanan – the economist who pioneered public choice theory and won the Nobel in economics in 1986. Most prominent in the book are the innuendos about Buchanan’s racism. Basically, public choice had a “racist” agenda. Even Brad DeLong indulged in this criticism of Buchanan by pointing out that he talked about race by never talking about race, a move which reminds him of Lee Atwater.

The thing is, it is true that Buchanan never talked about race, as DeLong himself noted. Yet that is not a sign (in any way imaginable) of racism. The fact is that Buchanan actually inspired waves of research regarding the origins of racial discrimination and was intellectually in line with scholars who contributed to this topic.

Protecting Majorities and Minorities from Predation

To see my point in defense of Buchanan here, let me point out that I am French-Canadian. In the history of Canada, strike that, in the history of the province of Quebec, where the French-Canadians were the majority group, there was widespread discrimination against the French-Canadians. For all intents and purposes, French-Canadian society ran parallel to English-Canadian society, and certain occupations were de facto barred to the French. It was not segregation, to be sure, but it was largely the result of the fact that the Catholic Church had, by virtue of the 1867 Constitution, a monopoly over education. The Church lobbied very hard to protect itself from religious competition, and it incited logrolling between politicians in order to win Quebec in the first elections of the Canadian federation. Logrolling and rent-seeking! What could be more public choice? Nonetheless, these tools are used to explain the decades-long regression of French-Canadians and the de facto discrimination against them (disclaimer: I actually researched and wrote a book on this).

Not only that, but when the French-Canadians started to catch up, which in turn fueled a rise in nationalism, the few public choice economists in Quebec (notably the prominent Jean-Luc Migué and the public choice fellow-traveler Albert Breton) were amongst the first to denounce the rise of nationalism and of state-supported reverse linguistic discrimination as nothing more than a public narrative aimed at justifying rent-seeking attempts by the nationalists (see here and here for Breton and here and here for Migué). One of these economists, Migué, was actually one of my key formative influences and someone I consider a friend (disclaimer: he wrote a blurb in support of the French edition of my book).

Think about this for a second: the economists of the public choice tradition in Quebec defended both the majority and the minority against politically-motivated abuses. Let me repeat this: public choice tools have been used to explain and criticize attempts by certain groups to rent-seek at the expense of both the majority and the minority.

How can you square that with the simplistic approach of MacLean?

Buchanan Inspired Great Research on Discrimination and Racism

If Buchanan didn’t write about race, he did set up the tools to explain and analyze it. As I pointed out above, I consider myself in this tradition, as most of my research is geared towards explaining institutions that cause certain groups of individuals to fall behind or pull ahead. A large share of my conception of institutions, and of how state action can lead to predatory actions against both minorities and majorities, comes from Buchanan himself! Never mind that; check out who he inspired who has published in top journals.

For example, take the case of the beautifully written articles of Jennifer Roback, who presents racism as rent-seeking. She sets out the theory in an article in Economic Inquiry, after using a case study of segregated streetcars in the Journal of Economic History. A little later, she consolidated her points in a neat article in the Harvard Journal of Law and Public Policy. She built an intellectual apparatus using public choice tools to explain the establishment of discrimination against blacks and why it persisted for so long.

Consider also one of my personal idols, Robert Higgs, a public-choice fellow traveler who wrote Competition and Coercion, which considers how blacks converged (very slowly) with whites in a hostile institutional environment. Higgs’ treatment of institutions is well in line with public choice tools and elements advanced by Buchanan and Tullock.

The best case, though, is The Origins and Demise of South African Apartheid by Anton David Lowenberg and William H. Kaempfer. This book explicitly uses a public choice approach to explain the rise and fall of Apartheid in South Africa.

Contemporaries that Buchanan admired were vehemently anti-racist

Few economists, except maybe economic historians, know of William Harold Hutt. This is unfortunate, since Hutt produced one of the deepest and most thoughtful economic criticisms of Apartheid in South Africa, The Economics of the Colour Bar. This book stands tall: while it is not the last word, it is generally the first word on anything related to Apartheid – a segregation policy against the majority that lasted nearly as long as segregation in the American South. This writing, while it earned Hutt respect amongst economists, made him more or less persona non grata in South Africa.

Oh, and did I mention that Hutt was a public choice economist? In 1971, Hutt published Politically Impossible, which has been an underground classic in the public choice tradition. Unfortunately, Hutt did not have the clarity of written expression that Buchanan had, and that book has been hard to penetrate. Nonetheless, it is well within the broad public choice tradition. He also wrote an article in the South African Journal of Economics which expanded on a point made by Buchanan and Tullock in The Calculus of Consent.

Oh, wait, I forgot to mention the best part: Buchanan and Hutt were mutual admirers of one another. Buchanan cited Hutt’s work very often (see here and here) and spoke of Hutt with admiration (see notably this article by Buchanan and this review of Hutt’s career in which Buchanan is discussed briefly).

If MacLean wants to try guilt by (nonexistent) association, I should be excused for providing redemption by (existent) association. Not noting these facts, which are easily available, shows a poor grasp of the historiography and of the core intellectual history.

Simply Put

Buchanan inspired a research agenda regarding how states can be used for predatory purposes against minorities and majorities, and that agenda has produced strong interpretations of racism and discrimination. He also associated with vehement and admirable anti-racists like William H. Hutt and inspired students who took similar positions. I am sure that if I were to assemble a list of all of Buchanan’s PhD students, I would find quite a few who delved into the deep topic of racism using public choice tools. I know this without having spent three years researching Buchanan’s life. Nancy MacLean, who did, has no excuse for these oversights.

The Old Deluder Satan Act: Literacy, Religion, and Prosperity

So, my brother (Keith Kallmes, a graduate of the University of Minnesota in economics and history) and I have decided to start podcasting some of our ideas. The topics we hope to discuss range from ancient coinage to modern medical ethics, against a general background of economic history. I have posted here our first episode, on the Old Deluder Satan Act. This early piece of American legislation, passed by the Massachusetts Bay colonists, displays some of the key values that we posit as causes of New England’s principal role in the Industrial Revolution. The episode:

We hope you enjoy this 20-minute discussion of the history of literacy, religion, and prosperity, and we are also happy to get feedback, episode suggestions, and further discussion in the comments below. Lastly, we have included links to some of the sources cited in the podcast.


Sources:

The Legacy of Literacy: Continuity and Contradictions in Western Culture, by Harvey Graff

Roman literacy evidence based on inscriptions discussed by Dennis Kehoe and Benjamin Kelly

Mark Koyama’s argument

European literacy rates

The Agricultural Revolution and the Industrial Revolution: England, 1500-1912, by Gregory Clark

Abstract of Becker and Woessman’s “Was Weber Wrong?”

New England literacy rates

(Also worth a quick look: the history of English Protestantism, the Puritans, the Green Revolution, and Weber’s influence, as well as an alternative argument for the cause of increased literacy)

Is the U-curve of US income inequality that pronounced?

For some time now, I have been skeptical of the narrative that has emerged regarding income inequality in the West in general and in the US in particular. That narrative, which I label the UCN, for U-Curve Narrative, asserts that inequality fell from a high level in the 1910s down to a trough in the 1970s and then climbed back up to levels comparable to those of the 1910s.

To be sure, I do believe that inequality fell and rose over the 20th century. Very few people will disagree with this contention. Like many others, I question how big the increase since the 1970s (the low point of the U-curve) has been. However, unlike many others, I also question how big the fall actually was. Basically, I think there is a sound case that inequality rose modestly since the 1970s, for reasons that are a mixed bag of good and bad (see here and here), but there is also a strong case that inequality did not fall as much as believed up to the 1970s.

My reasons for this position relate to my passion for cliometrics. The quantitative illustration of the past is a crucial task, but data is only as good as the questions it seeks to answer. If I wonder whether feudal institutions (like seigneurial tenure in Canada) hindered economic development and I only look at farm incomes, I might capture a good part of the story; but since farm income is not total income, I am missing a part of it. Had I asked whether feudal institutions hindered farm productivity, the data would have been more relevant.

The same goes for income inequality, I argue in this new working paper (with Phil Magness, John Moore and Phil Schlosser), which is basically a list of criticisms of the Piketty-Saez income inequality series.

For the United States, income inequality measures pre-1960s generally rely on tax-reporting data. From the get-go, one has to recognize that this sort of system (since it is taxes) does not promote “honest” reporting. What is less well known is that tax compliance enforcement was very lax pre-1943 and highly sensitive to the wide variations in tax rates and personal exemptions during the period. Basically, the chances that you will honestly report your income at a top marginal rate of 79% are lower than had that rate been 25%. Since rates did vary, from the high 70s at the end of the Great War to the mid-20s in the 1920s and back up during the Depression, this implies a lot of volatility in the quality of reporting. As such, the evolution measured by tax data will capture tax-rate-induced variations in reported income (especially in the pre-withholding era, when there existed numerous large loopholes and tax-sheltered income vehicles). The shift from high to low taxes in the 1910s and 1920s would have implied a larger-than-actual change in inequality, while the shift from low to high taxes in the 1930s would have implied the reverse. Correcting for the artificial changes caused by tax rate changes would, by definition, flatten the evolution of inequality – which is what we find in our paper.

However, we go further than that. Using the state of Wisconsin, which had more stringent compliance rules for its state income tax while also having lower and much more stable tax rates, we find different levels and trends of income inequality than with the IRS data (a point which Phil Magness and I expanded on here). This alone should fuel skepticism.

Nonetheless, this is not the sum of our criticisms. We also find that the denominator frequently used to arrive at the share of income going to top earners is too low, and that the justification used for that denominator is the result of a mathematical error (see pages 10-12 in our paper).
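
The mechanics of the denominator problem are simple arithmetic. Here is a toy illustration with invented numbers (not figures from the paper):

```python
# An understated denominator mechanically inflates the measured top share.
top_incomes = 15.0          # income accruing to top earners (toy number)
true_total = 100.0          # actual total income
understated_total = 80.0    # denominator understated by 20%

print(f"true top share:     {top_incomes / true_total:.1%}")         # 15.0%
print(f"measured top share: {top_incomes / understated_total:.1%}")  # 18.8%
# A denominator 20% too low inflates the measured share by a quarter.
```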

Finally, we point out that there is a large accounting problem. Before 1943, the IRS published the Statistics of Income based on net income. After 1943, it shifted to a definition based on adjusted gross income. As such, the two series are not comparable and need to be adjusted to be linked. Piketty and Saez, when they calculated their own adjustment, made seemingly reasonable assumptions (mostly that the rich took the lion’s share of deductions). However, when we searched for and found evidence of how deductions were actually distributed, it did not match their assumptions. The evidence suggests that lower income brackets took large deductions, and this diminishes the adjustment needed to harmonize the two series.

Taken together, our corrections yield systematically lower and flatter estimates of inequality, which do not contradict the idea that inequality fell during the first half of the 20th century (see image below). However, our corrections suggest that the UCN is incorrect and that the curve looks more like a small bowl (I call it the paella-bowl curve of inequality; my co-authors prefer the J-curve idea).

[Figure: corrected inequality estimates compared with the Piketty-Saez series]

A Short Note on “Net Neutrality” Regulation

Rick Weber has a good note lashing out against net neutrality regulation. The crux of his argument is that enforced net neutrality imposes serious costs on consumers in the form of slower content delivery. But even if we ignore his argument, what if regulation isn’t even necessary to preserve the benefits of net neutrality (even though there never really was net neutrality as proponents imagine it to begin with, and the issue has less to do with fast lanes than with the fact that content providers need to go through a few ISPs)?

In fact, there is evidence that the “fast lane” model that net neutrality advocates imagine would happen in the absence of regulatory intervention is not actually profitable for ISPs to pursue, and has failed in the past. As Timothy Lee wrote for the Cato Institute back in 2008:

The fundamental difficulty with the “fast lane” strategy is that a network owner pursuing such a strategy would be effectively foregoing the enormous value of the unfiltered content and applications that come “for free” with unfiltered Internet access. The unfiltered internet already offers a breathtaking variety of innovative content and applications, and there is every reason to expect things to get even better as the available bandwidth continues to increase. Those ISPs that continue to provide their users with faster, unfiltered access to the Internet will be able to offer all of this content to their customers, enhancing the value of their pipe at no additional cost to themselves.

In contrast, ISPs that choose not to upgrade their customers’ Internet access but instead devote more bandwidth to a proprietary “walled garden” of affiliated content and applications will have to actively recruit each application or content provider that participates in the “fast lane” program. In fact, this is precisely the strategy that AOL undertook in the 1990s. AOL was initially a proprietary online service, charged by the hour, that allowed its users to access AOL-affiliated online content. Over time, AOL gradually made it easier for customers to access content on the Internet so that, by the end of the 1990s, it was viewed as an Internet Service Provider that happened to offer some proprietary applications and content as well. The fundamental problem requiring AOL to change was that content available on the Internet grew so rapidly that AOL (and other proprietary services like Compuserve) couldn’t keep up. AOL finally threw in the towel in 2006, announcing that the proprietary services that had once formed the core of its online offerings would become just another ad-supported website. A “walled garden/slow lane” strategy has already proven unprofitable in the marketplace. Regulations prohibiting such a business model would be surplusage.

It looks like Title II-style regulation may be a solution in search of a problem. Add to that the potential for ISPs and large companies to lobby regulators to erect barriers to entry against new competitors, as happened with telecommunications companies under Title II and railroad companies under the Interstate Commerce Commission, plus the drawbacks of pure net neutrality that Rick pointed out, and it looks like a really bad policy indeed.

Vincent Geloso Interviewed for his Work on the War on Drugs

Regular readers of NOL know that fellow notewriter Vincent Geloso has done a lot of great work on the war on drugs. Dr. Geloso was recently on Students For Liberty’s podcast to discuss a paper he recently co-authored compiling data on the war on drugs’ effect on security costs, which he previewed a few months ago on NOL. He had a wide-ranging discussion of his findings, the secondary economic costs of the war on drugs, the psychology of policing under the drug war, and comparisons between the drug war and alcohol prohibition. Check out the discussion.

P.S. If you’re not already listening to SFL On Air, you should, and not just because I’m in charge of marketing for it.