An argument against Net Neutrality

First off, Comcast sucks. Seriously, screw those guys.

But let’s assume a can opener and see if that doesn’t help us find a deeper root problem. The can opener is competition in the ISP market. Let’s consider how the issue of Net Neutrality (NN) would play out in a world where your choice of ISP looked more like your choice of grocery store. Maybe a local district is set up to manage a basic grid and ISPs bid for usage of the infrastructure (i.e. cities take a page out of the FCC’s playbook on spectrum rights). Maybe some technological advance makes it easy to set up decentralized wireless infrastructure. Either way, let’s imagine that world.

Let me also assume a bit of regulation. The goal is to create some simple rules that make the market work a bit better. Two regulations that I’d like to see are 1) a requirement that ISPs have a public list of any websites they restrict access to*, and 2) a limitation on how complicated end user agreements can be. I’m not sure these things would be possible in my anarchist utopia, but in a second best world of governments I’m pretty comfortable with them.

Let’s also create a default contract for content providers with ISPs. “Unless otherwise agreed to, content providers’ (e.g. YouTube, my crazy uncle Larry, the cafe around the corner, etc.) relationship with ISPs is assumed to take the following form:…” An important clause would be “access/speed/etc. to your content will meet ______ specifications and cannot be negatively altered at the request of any third party.”

A similar default contract could be written for ISPs and end users. “Universal access under ____________ conditions will be provided and cannot be negatively altered at the request of any third party.”

Explicitly and publicly setting neutral defaults means we can get NN by default, but allow people the freedom to exchange their way out of it.

Do we need, or even want, mandated NN in that world? There are some clear potential gains to a non-neutral Internet. Bandwidth is a scarce resource, and some websites use an awful lot of it. YouTube and Netflix are great, but they’re like a fleet of delivery trucks creating traffic on the Information Super Highway. Letting them pay ISPs for preferred access is like creating a toll lane that can help finance increased capacity.

Replacing NN with genuine competition means that consumers who value Netflix can pay for faster streaming of it while (essentially) agreeing to use less of the net’s bandwidth for other stuff. We should encourage faster content, even if it means that some content gets that extra speed before the rest.

Competing ISPs would cater to the preferences and values of various niches. Some would seem benign: educational ISPs that provide streamlined access to content from the Smithsonian while mirroring Wikipedia content on their volunteer servers. Bandwidth for sites outside the network might come at some price per gigabyte, or it might be unlimited.

Other ISPs might be tailored for information junkies, with absolutely every website made available at whatever speed you’re willing to pay for. Family-friendly ISPs would refuse to allow porn on their part of the network (unsuccessfully, I suspect), but couldn’t stop other ISPs from carrying it. Obnoxious hate-group ISPs would probably exist too.

There would be plenty of bad to go along with the good, just like there is in a neutral network.

I’m okay with allowing ISPs to restrict access to some content as long as they’re honest about it. The Internet might not provide a universal forum for all voices, but that’s already the case. If you can’t pay for server space and bandwidth, then your voice can only be heard on other people’s parts of the Internet. Some of those people will let you say whatever you want (like the YouTube comments section), but others are free to ban you.

Similarly, big companies will be in a better position to provide their content, but that’s already the case too. Currently they can spend more on advertising, or spend more on servers that are physically closer to their audience. A non-neutral net opens up one more margin of competition: paying for preferred treatment. This means less need to inefficiently invest physical resources for the same preferred treatment. (Hey, a non-neutral net is Green!)

There might be reason to still be somewhat worried about the free speech implications of a non-neutral net. As consumers, we might prefer networks that suppress dissident voices. And those dissident voices might (in the aggregate) be providing a public good that we’d be getting less of. (I think that’s a bit of a stretch, but I think plenty of smart people would take the point seriously.) If that’s the case, then let’s have the Postal Service branch out to provide modestly priced, moderate speed Internet access to whoever wants it. Not great if you want to do anything ambitious like play Counter Strike or create a major news network, but plenty fine for reading the news and checking controversial websites.

tl;dr: I can imagine a world without Net Neutrality that provides better Internet service and better economizes on the resources necessary to keep the Information Super Highway moving. But it’s not the world we currently live in. What’s missing is genuine market competition. To get there would require gutting much of the existing regulatory framework and replacing it with a much lighter touch.

What I’m talking about seems like a bit of a pipe dream from where we’re sitting. But if we could take the political moment of the Net Neutrality movement and redirect it, we could plausibly have a free and competitive Internet within a generation.


*Or maybe some description of how they filter websites: something like a non-proprietary description of the parental filters used by ISPs that (attempt to) refuse access to adult content.

A Tax is Not a Price

According to The Economist, the latest US federal budget includes incentives for “congestion pricing” of roads.

Ostensibly, this is about reducing congestion. But some municipalities like the idea of charging for roads because it represents a new revenue stream. This creates an incentive to charge a price above cost. When a firm does this, we call it a “monopoly price.”

But when a government monopoly forces you to pay a fee to use a good or service, do not call it a price. It is a fee that a government collects by fiat. In other words, it is a tax.

A price is a voluntary exchange of money for a good or service. The emphasis on voluntary is important, because it is this aspect of the price that enables economic calculation for what people really want.  Even a free market “monopolist” (however unlikely or conceptually vague it may be) engages in voluntary exchange.

On the other hand, a bureaucrat “playing market” by imposing fees on government-controlled goods and services will not have the same results as a market process. For starters, unlike a person making decisions on their own behalf, a government bureaucrat has to guess at costs. Under a voluntary system, a cost is the highest valued good or service you voluntarily give up in order to attain a goal. But the bureaucrat is dealing with other people’s money.

To “objectively” determine costs, in order to set “fair” prices, is a chimera. In the words of Ludwig von Mises, “[a] government can no more determine prices than a goose can lay hen’s eggs.”

On Borjas, Data and More Data

I see my craft as an economic historian as a dual mission. The first is to answer historical questions by using economic theory (and, in the process, to enliven economic theory through the use of history). The second relates to my obsessive-compulsive nature, which can be observed in how much attention and care I give to getting the data right. My co-authors have often observed me “freaking out” over a possible improvement in data quality or being plagued by doubts over whether or not I had gone “one assumption too far” (a pun on A Bridge Too Far). Sometimes, I wish more economists would follow my historian-like freakouts over data quality. Why?

Because of this!

In that paper, Michael Clemens (whom I secretly admire – not so secretly now that I have written it on a blog) criticizes the recent paper by George Borjas showing a negative effect of immigration on the wages of workers without a high school degree. Using the famous Mariel boatlift of 1980, Clemens basically shows that, at the same time as the boatlift, there were pressures on the US Census Bureau to add more black workers without high school degrees to its surveys. This previously underrepresented group surged in importance within the survey data. Since that group had lower wages than the average of the wider group of workers without high school degrees, there was a composition effect at play that caused measured wages to fall (in appearance only). That composition effect is a bias: it created an artificial drop in wages, it drove the results produced by Borjas, and it understated the conclusion reached by David Card in the original paper to which Borjas was replying.
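
To see the mechanics of that composition effect, here is a minimal numerical sketch (the wages and survey shares are made up, not the actual CPS figures): the measured average wage falls even though no individual's wage changes, purely because the lower-wage group's share of the sample grows.

```python
# Toy illustration of a composition effect (all numbers are invented).
# Two groups of workers without a high school degree: group A earns more, group B less.
wage_a, wage_b = 12.0, 8.0   # hourly wages, unchanged across both surveys

# Survey 1: group B is underrepresented. Survey 2: group B's sample share surges.
share_b_before, share_b_after = 0.10, 0.30

avg_before = (1 - share_b_before) * wage_a + share_b_before * wage_b   # 11.6
avg_after = (1 - share_b_after) * wage_a + share_b_after * wage_b      # 10.8

print(avg_before, avg_after)  # the measured average drops with no change in anyone's wage
```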

This is a cautionary tale about the limits of econometrics. After all, a regression is only as good as the data it uses and how well it is suited to the question it seeks to answer. Sometimes, simple Ordinary Least Squares regressions are excellent tools. When the question is broad and/or the data is excellent, OLS can be all you need for a viable answer. However, the narrower the question (e.g. is there an effect of immigration specifically on unskilled, low-education workers?), the better the method has to be. The problem is that better methods often require better data as well. To obtain the latter, one must know the details of a data source. This is why I am nuts over data accuracy. In these cases, even small things matter – like a shift in the representation of black workers in survey data. Otherwise, you end up with your results being reversed by very minor changes (see this paper in the Journal of Economic Methodology for examples).

This is why I freak out over data. Maybe I can make two suggestions about sharing my freak-outs.

The first is to prefer a skewed ratio of data quality to advanced methods (i.e. simple methods with crazy-good data). This reduces the chances of being criticized for relying on weak assumptions. The second is to take a leaf out of the historians’ book. While historians are often averse to advanced data techniques (I remember a case where I had to explain panel data regressions to historians, which ended terribly for me), they are very respectful of data sources. I have seen historians nurture datasets for years before being willing to present them. When published, those datasets generally stand up to scrutiny because of the extensive wealth of detail compiled.

That’s it folks.

 

Can we trust US interwar inequality figures?

This is a question that Phil Magness and I have been asking for some time, and we have now assembled our thoughts and measures in the first of a series of papers. In this paper, we take issue with the quality of the measurements extracted from tax records during the interwar years (1918 to 1941).

More precisely, we point out that tax rates at the federal level fluctuated wildly and were at relatively high levels. Since most of our inequality measures are drawn from the federal tax data contained in the Statistics of Income, this is problematic. Indeed, high tax rates might deter honest reporting while rapidly changing rates will affect reporting behavior (causing artificial variations in the measure of market income). As such, both the level and the trend of inequality might be off.  That is our concern in very simple words.
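
As a rough illustration of the worry (with purely invented incomes and reporting rates, not our actual estimates), a sketch like the following shows how underreporting that varies with the tax rate can bias both the level of a measured top income share and, when reporting behavior changes with the rate, its trend:

```python
# Invented example: how tax-sensitive underreporting can distort a top income share.
true_top_income = 40.0      # income of the top group (arbitrary units)
true_rest_income = 60.0     # income of everyone else
true_top_share = true_top_income / (true_top_income + true_rest_income)   # 0.40

def measured_top_share(reporting_rate_top, reporting_rate_rest=1.0):
    """Top share computed from reported (rather than true) incomes."""
    reported_top = true_top_income * reporting_rate_top
    reported_rest = true_rest_income * reporting_rate_rest
    return reported_top / (reported_top + reported_rest)

# Suppose top filers report 70% of income when rates are high and 90% when rates fall.
print(true_top_share)                  # 0.40
print(measured_top_share(0.70))        # ~0.318 (the level is biased down)
print(measured_top_share(0.90))        # ~0.375 (an apparent "rise" driven only by reporting)
```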

To assess whether or not we are worrying over nothing, we went looking for different sources against which to test the robustness of the inequality estimates based on the federal tax data. We found what we were looking for in Wisconsin, whose state income tax rates were much lower (never above 7%) and less variable than those at the federal level. As such, we found the perfect dataset for checking whether there are measurement problems in the data itself (through a varying selection bias).

From the Wisconsin data, we find that there are good reasons to be skeptical of the existing inequality measures based on federal tax data. The comparison of the IRS data for Wisconsin with the data from the state income tax shows a different pattern of evolution and a different level (especially when deductions are accounted for). First, the level is always lower in the Wisconsin Tax Commission (WTC) data. Second, the trend differs for the 1930s.

[Table 1: Inequality estimates for Wisconsin, IRS data versus Wisconsin Tax Commission data]

I am not sure what this means for the true level of inequality during the period. However, it suggests that we ought to be careful about the existing estimates when two data sources of a similar nature (tax data) with arguably minor conceptual differences (low and stable tax rates in one case) tell dramatically different stories. Maybe it’s time to try to further improve the pre-1945 series on inequality.

Empire effects: the case of shipping

I have been trying, for some time now, to circle an issue that we can consider to be a cousin of the emerging “state capacity” literature (see Mark Koyama’s amazing summary here). This cousin is the literature on “empire effects” (here and here for examples).

The core of the “empire effect” claim is that empires provide global order which we can consider as a public good. A colorful image would be the British Navy roaming the seas in the 19th century which meant increased protection for trade. This is why it is a parent of the state capacity argument in the sense that the latter concept refers (broadly) to the ability of a state to administer the realm within its boundaries. The empire effect is merely the extension of these boundaries.

I still have reservations about the nuances/limitations of state capacity as an argument to explain economic growth. After all, the true question is not how states consolidate, but how they create constraints on rulers to not abuse the consolidated powers (which in turn generates room for growth). But, it is easy to heavily question its parent: the empire effect.

This is what I am trying to do in a recent paper on the effects of empire on shipping productivity between 1760 and 1860.

Shipping is one of the industries most likely to be affected by large empires – positively or negatively. Indeed, the argument for empire effects is that they protect trade. As such, the British navy in the 19th century protected trade and probably helped the shipping industry become more productive. But achieving empire comes at a cost. For example, the British navy needed to grow very large, and it had to employ inputs from the private sector, thus crowding it out. In other words, if a security effect from empire emerged as a benefit, there must also have been a cost. The cost we wish to highlight is this crowding out.

In the paper (written with Jari Eloranta of Appalachian State University and Vadim Kufenko of the University of Hohenheim), I argue, using data on the productivity of the Canadian shipping industry (which was protected by the British Navy), that the security effect of a large navy was smaller than the crowding out caused by high levels of expenditure on the navy.

While it is still a working paper that we are trying to expand and improve, our point is that what allowed the productivity of the Canadian shipping industry (which was protected by Britain) to soar was that the British Navy grew smaller in absolute terms. While the growth of the relative strength of the British Navy did bolster productivity in some of our tests, the fact that the navy was much smaller was the “thing in the mix that did the trick”. In other words, the empire effect is just the effect of a ramping-down in military spending being presented as something other than what it truly is (at least partly).

That’s our core point. We are still trying to improve it and (as such) comments are welcome.

Some ideas to guide your thoughts on health care

This post is meant to help my non-economist friends think more clearly about how we pay for health care. I’ll talk about markets, but the truth is that the American system is built of deeply bastardized markets. If our car markets worked like our health markets, most of us would walk to work. I’m trying to focus on the essential logic of the situation which is going to sound Utopian because Congress isn’t going to give us any sort of logical policy any time soon. But we aren’t going to get a logical solution until we as voters understand the logic of health care finance.

I’ve got a few big points to make:

  1. Trying to make health insurance also work like charity is bound to end poorly for everyone.
  2. A single-payer system has a lot of nice features for individuals, but a lot of systemic problems.
  3. It’s fundamentally impossible to insure pre-existing conditions. Insurance is about sharing risk, not unavoidable expenses.

(This post is longer than I’d like, so thanks for your patience!)

Markets and Charity

I’ve said it before, and I’ll say it again: we don’t have to ruin markets to do charity.

The essence of markets is that they aggregate knowledge about the relative costs and benefits of different goods based on the preferences of the real people involved in producing and consuming those goods.

The demand side of markets provides information by giving you (as a consumer) a choice between more of something you like and more money to spend on other stuff. The supply side gives you (as a supplier–probably of your own labor) the choice between providing more of what people are willing to pay for or having less money to buy the stuff you want. Markets crowdsource cost-benefit analysis.

Prices also give suppliers an incentive to produce things that consumers want while trying to save resources (i.e. cut costs). In other words, a price is a signal wrapped up in an incentive.

So what about fairness? The bad news is that markets are a system of “from each according to their ability, to each according to how much other people are willing to pay for the product of their ability.” (Not very catchy!) It’s mostly fair for most of us, but doesn’t do much good for people who are just unlucky (e.g. kids born with genetic defects). Here’s the good news: we can use charity alongside markets.

We can debate how much role government should play in charity some other time. For now, let’s whole-ass one thing instead of half-assing two things. We have to appreciate that interfering with markets interferes with the ability of those markets to function as sources of reliable information. It doesn’t matter how good our intentions are, we face a trade off here… unless we do something to establish a functioning charity system parallel to the health care finance system.

Single Payer

Anecdotes about the merits of a single-payer health care system are powerful because they shed light on the biggest benefit to such a system: individual convenience.

Part of the appeal has to do with the general screwiness of the American system. It’s a cathedral built of band-aids. But even in an idealized market system, a single-payer system has the advantage of not making me go through the work of evaluating which plan best suits my needs.

A single-payer system is, from an individual perspective, about as ideal as having your parents pay for it. But we don’t really  want our parents buying our stuff for us.

Single-payer systems sacrifice the informational value of markets (probably even more so than America’s current system of quasi-price controls). Innovation would be harder as long as new treatments had to be approved by risk-averse bureaucrats (and again, we already face a version of this with Medicare billing codes and insurance companies).

Essentially, a single payer system creates a common pool problem: each of us gets the individual benefit of being able to be lazy. But then we’re left trusting bureaucrats, special interest groups, and think tanks to keep an eye on things. It could be an improvement over the current American system, but that’s like saying amputation is better than gangrene.

Insurance

Premium = expected cost + overhead

Consider two alternatives. In scenario A you start with $150, flip a coin, and if it comes up tails you lose $100. In scenario B you simply get $90. The expected value of A is $100 (half the time you keep $150, half the time you are left with $50), but most of us would still prefer the sure thing.

Here’s how insurance works: you start with $150, give $60 to the insurance company, then flip the coin. If it comes up tails, you lose $100, but the insurance company covers the loss, so you end up with $90 either way. You’ve just bought yourself the sure thing. (The premium fits the formula above: a $50 expected claim plus $10 of overhead.) And by taking on thousands of these bets, the insurance company is able to make enough money to pay its employees.

But here’s the thing: the premium they charge is fundamentally tied to that expected value. Change the odds, or the costs (i.e. the claims they have to pay for) and you’ll change the premium.
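
Here is a minimal sketch of that arithmetic, using the made-up coin-flip numbers above (the $10 of overhead is my own assumption about what fills the gap between the $50 expected claim and the $60 premium):

```python
# Minimal sketch of the coin-flip insurance example above (made-up numbers).
p_loss = 0.5          # probability the coin comes up tails
loss = 100            # size of the loss
wealth = 150          # starting wealth
overhead = 10         # insurer's overhead per policy (assumed)

expected_claim = p_loss * loss       # 50
premium = expected_claim + overhead  # 60: premium = expected cost + overhead

# Without insurance: a gamble between keeping 150 and being left with 50 (expected value 100).
uninsured_outcomes = [wealth, wealth - loss]

# With full coverage: you pay the premium and keep 90 no matter how the coin lands.
insured_outcome = wealth - premium

print(expected_claim, premium, uninsured_outcomes, insured_outcome)
```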

(BTW, Tim Harford did a nice ~8 minute podcast episode on insurance that’s worth checking out.)

Pre-existing conditions are the equivalent of changing our thought experiment to a 100% chance of flipping tails. No amount of risk sharing will get you to the $90 outcome you want. You can’t insure your car after you’ve been in an accident, and you can’t insure a person against a loss they’ve already realized. If you’re Bill Gates, that’s no big deal, but for many people this might mean depending on charity. That’s a bummer, but wishful thinking can’t undo it.

If we insist that insurance companies cover pre-existing conditions*, the result can only be higher premiums. This is nice for people with those pre-existing conditions, but not so great for (currently) healthy poor people. Again, charity needs to be part of the debate, but it needs to run parallel to insurance markets.

Covering more contingencies also affects premiums. The more things a policy covers, the higher the expected cost, and therefore the higher the premium. We each have to decide what things are worth insuring and what risks we’re willing to face ourselves. Politics might not be the best way to navigate those choices.

High deductibles and catastrophic care

Actuaries think about the cost of insuring as a marginal cost. In other words, they know that the odds that you spend $100 in a year are much higher than the odds that you spend $5,000. So the cost of insuring the first dollar of coverage is much higher than the cost of insuring the 5,000th dollar. This is why high-deductible plans are so much cheaper: they only pay out in the unlikely situation where something catastrophically bad happens to you. This is exactly why most of us want insurance. We aren’t afraid of the cost of band-aids and aspirin; we’re afraid of the cost of cancer treatment.
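
A small simulation makes the point (the spending distribution here is entirely invented: most people have a cheap year, a few have a catastrophic one). The insurer's expected payout, and hence the premium it has to charge, falls quickly as the deductible rises, because the early dollars of coverage are the ones most likely to be paid out:

```python
# Toy illustration (invented spending distribution): why high-deductible plans cost less.
import random

random.seed(0)
# Most people have a cheap year; a few have a catastrophic one.
spending = [random.choice([100, 300, 800]) if random.random() < 0.95
            else random.choice([20_000, 60_000])
            for _ in range(100_000)]

def expected_payout(deductible):
    """Average amount the insurer pays per person for spending above the deductible."""
    return sum(max(0, s - deductible) for s in spending) / len(spending)

for d in (0, 1_000, 5_000):
    print(f"deductible ${d:>5,}: expected insurer cost ~ ${expected_payout(d):,.0f}")
```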

For those of us firmly in the middle class, what we really need is a high-deductible plan plus some money in the bank to cover routine care and smaller emergencies. (Personally, my version of this is a credit card.) Such a plan has the added benefit of encouraging us to be more cost conscious.

A big problem with our current system is that it’s set up like an all-you-can-eat buffet. You pay to get in (your premiums), but once you’re in the hospital, any expenses are the insurance company’s problem (read: everyone else on your health plan). The logic here is the same as with pollution. When I drive my car I get the benefits of a quick and comfortable commute, and everyone suffers a little bit more pollution. But I have no incentive to think about how that pollution affects you, so I pollute more than would be ideal. Multiply that by millions of people and we can end up with smog.

tl;dr

If I were trying to put together a politically palatable alternative to our current system, I’d have an individual mandate with insurance vouchers for the poor (it’s not very libertarian, and it’s far from my Utopian ideal, but I think it would be a huge improvement over what we’ve got now). I would also expand the role of market competition by encouraging high deductibles plus flexible health savings accounts.

Reality is complicated, but I’m trying to get at the fundamental logic here. We don’t have a properly functioning market system. To get there we need competition, transparency, and a populace with the mental tools and mathematical literacy necessary to understand what their insurance can and can’t do. That’s a tall order, but it doesn’t mean we shouldn’t keep trying to move in that direction.

To have a fruitful debate we need to understand what we want from our healthcare system: help for the poor (charity), convenience, and efficiency from an individual and social perspective. By trying to lump all these things together we muddy the waters and make it harder to understand one another.


*I don’t know what the deal is with the idea that the AHCA will treat rape as a pre-existing condition. Some webpages give a bunch of random tweets as evidence of this, and others call bullshit. Let’s just leave it at this: in a competitive market this would be considered terrible marketing and savvy companies wouldn’t do it. The lesson then is to keep calling companies on bad marketing, and avoid protecting politically powerful companies from market competition.

On the paradox of poverty and good health in Cuba

One of the most interesting (in my opinion) paradoxes in modern policy debates is how Cuba, a very poor country, has been able to generate health outcomes close to the levels observed in rich countries. To be fair, academics have long known that there is only an imperfect relation between material living standards and biological living standards (full disclosure: I am inclined to agree, but with important caveats better discussed in a future post or article; here is an example). The problem is that Cuba is really an outlier. I mean, according to the WHO statistics, it’s pretty close to the United States in spite of being far poorer.

In the wake of Castro’s death, I thought it necessary to assess why Cuba is an outlier that creates this apparent paradox. As such, I decided to set some other projects aside in order to understand Cuban economic history, and I have recently finalized the working paper (which I am about to submit) on this paradox (paper here at SSRN).

The working paper, written with physician Gilbert Berdine (a pneumologist from Texas Tech University), makes four key arguments to explain why Cuba is an outlier (that we ought not try to replicate).

The level of health outcomes is overestimated, but the improvements are real

Incentives matter, even in the construction of statistics, and this is why we should be skeptical. Cuban doctors work under centrally designed targets for infant mortality that they must achieve, with penalties if the targets are missed. Physicians respond rationally and use complex stratagems to reduce their reported numbers. This includes re-categorizing early neonatal deaths as late fetal deaths, which deflates the infant mortality rate, and pressuring (sometimes coercing) mothers with risky pregnancies to abort in order to avoid missing their targets. This overstates the level of health outcomes in Cuba: accounting for the reclassification of deaths and a hypothetically low proportion of pressured/coerced abortions reduces Cuban life expectancy by close to two years (see figure below). Nonetheless, the improvements in Cuba since 1959 are real and impressive – this cannot be denied.

[Figure: Cuban life expectancy adjusted for reclassified deaths and pressured abortions]
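
To see how the re-categorization alone deflates the reported statistic, here is a toy calculation (the counts are invented; only the mechanism is the point). Because an early neonatal death reclassified as a late fetal death is removed from both the infant deaths and the live births, the reported infant mortality rate falls even though nothing about actual health has changed:

```python
# Toy arithmetic (invented counts): how re-categorizing early neonatal deaths
# as late fetal deaths deflates the reported infant mortality rate (IMR).
live_births = 100_000
infant_deaths = 500          # true infant deaths (assumed)
imr_true = infant_deaths / live_births * 1000          # 5.0 per 1,000 live births

reclassified = 150           # early neonatal deaths recorded as late fetal deaths (assumed)
# A reclassified case drops out of both the infant deaths and the live births.
imr_reported = (infant_deaths - reclassified) / (live_births - reclassified) * 1000

print(round(imr_true, 2), round(imr_reported, 2))      # 5.0 vs. ~3.51
```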

 

Health Outcomes Result from Coercive Policy 

Many experts believe that we can achieve the levels of health outcomes generated by Cuba while resisting the violations of human rights that are associated with the ruling regime. The problem is that the two cannot be separated. It is through the use of coercive policy that the regime is able to allocate more than 10% of its tiny GDP to health care and close to 1% of its population to the task of being physicians. It should also be mentioned that physicians in Cuba are mandated to violate patient privacy and report information to the regime. Consequently, Cuban physicians (who are also members of the military) are the first line of internal defense of the regime. The use of extreme coercive measures improves health outcomes, but it comes at the price of economic growth. As documented by Werner Troesken, there are always institutional trade-offs in terms of health care: either you adopt policies that promote growth but may hinder the adoption of certain public health measures, or you adopt those measures at the price of growth. The difference between the two choices is that economic growth bears fruit in the more distant future (i.e. the palliative health effects of economic growth take more time to materialize).

Health Outcomes are Accidents of Non-Health Related Policies

As part of the institutional trade-offs that make Cubans poorer, there may be some unintended positive health effects. The rationing of some items limits the population’s ability to consume goods deleterious to their health. The restrictions on car ownership and imports (which have made Cuba one of the Latin American countries with the lowest rates of car ownership) also reduce mortality from road accidents, which, in countries like Brazil, knock 0.8 years off life expectancy at birth for men and 0.2 years for women. The policies that generate these outcomes are macroeconomic policies (which impose strict controls on the economy) unrelated to the Cuban health care system. As such, the poverty caused by Cuban institutions may also be helping Cubans live longer.

Human Development is not a Basic Needs Measure

The last point in the paper is that human development requires agency. Since life expectancy at birth is one of the components of the Human Development Index (HDI), Cuba fares very well on that front. The problem is that the philosophy behind the HDI is that individuals must have the ability to exercise agency. It is not a measure of poverty nor a measure of basic needs; it is a measure meant to capture how well individuals can exercise free will: higher incomes buy you some capabilities, health gives you the ability to use them, and education empowers you.

You cannot judge a country with “unfree” institutions by such a measure alone. You need to compare it with other countries, especially countries where there are fewer legal barriers to human agency. Within Latin America it is hard to find such countries, but what happens when we compare Cuba with the four leading countries in terms of economic freedom? Not only do they often beat Cuba, but they started from further back, and as such they have seen much larger improvements than Cuba did.

This is not to say that these countries are to be imitated, but they represent marginal improvements relative to Cuba, and because they have freer institutions than Cuba, they have been able to generate more “human development” than Cuba did.

[Figure: Human development improvements in Cuba compared with the leading Latin American countries in terms of economic freedom]

Our Conclusion

Our interpretation of Cuban health care provision and health outcomes can be illustrated by an analogy with an orchard. The fruit of positive health outcomes from the “coercive institutional tree” that Cuba has planted can only be picked once, and the tree depletes the soil significantly in terms of human agency and personal freedom. The “human development tree” nurtured in other countries yields more fruit, and it promises to keep yielding fruit in the future. Any praise of Cuba’s health policy should be examined within this broader institutional perspective.

On British Public Debt, the American Revolution and the Acadian Expulsion of 1755

I have a new working paper out on the role of the Acadian expulsion of 1755 in fostering the American Revolution. Most Americans will not know about the expulsion of a large share of the French-speaking population (known as the Acadians) of the Maritime provinces of Canada during the French and Indian Wars.

Basically, I argue that the policy of deportation was pushed by New England and Nova Scotia settlers who wanted the well-irrigated farms of the Acadians (irrigated thanks to a dyking system that was incredibly sophisticated given the context of a capital-scarce frontier economy). Arguing that the French population under nominal British rule had only sworn an oath of neutrality, and therefore represented a threat to British security, the settlers pushed hard for the expulsion. However, the deportation was not approved by London and was largely the result of colonial decisions rather than Imperial ones. The problem was that the financial burden of the operation (equal to between 32% and 38% of expenditures on North America – and that’s a conservative estimate) was borne by England, not the colonies.

This fits well, I argue, into a public choice framework. Rent-seeking settlers pushed for the adoption of a policy whose costs were spread over a large population (that of Britain) but whose benefits they alone reaped.

The problem is that this, as I have argued elsewhere, was a key moment in British Imperial history, as it contributed to the idea that London had to end the era of “salutary neglect” in favor of a more active management of its colonies. The attempt to centralize management of the British Empire, in order to better prioritize resources at a time of rising public debt and high expenditure levels in the wars against the French, was a key factor in the initiation of the American Revolution.

Moreover, the response from Britain was itself a rent-seeking solution. As David Stasavage has documented, government creditors in England became well-embedded inside the British governmental structure in order to minimize default risks and better control expenses. These creditors were a crucial part of the coalition structure that led to the long Whig Supremacy over British politics (more than half a century). In that coalition, they lobbied for policies that advantaged them as creditors. The response to the Acadian expulsion debacle (for which London paid even though it did not approve it and considered the Acadian theatre of operation to be minor and inconsequential) should thus be seen also as a rent-seeking process.

As such, it means that there is a series of factors, well embedded inside broader public choice theory, that can contribute to an explanation of the initiation of the American Revolution. It is not by any means a complete explanation, but it offers a strong partial contribution that considers the incentives behind the ideas.

Again, the paper can be consulted here or here.

On doing economic history

I admit to being a happy man. While I am in general a smiling sort of fellow, I was delightfully giggling with joy upon hearing that another economic historian (and a fellow  Canadian from the LSE to boot), Dave Donaldson, won the John Bates Clark medal. I dare say that it was about time. Nonetheless I think it is time to talk to economists about how to do economic history (and why more should do it). Basically, I argue that the necessities of the trade require a longer period of maturation and a considerable amount of hard work. Yet, once the economic historian arrives at maturity, he produces long-lasting research which (in the words of Douglass North) uses history to bring theory to life.

Economic History is the Application of all Fields of Economics

Economics is a deductive science through which axiomatic statements about human behavior are derived. For example, stating that the demand curve is downward-sloping is an axiomatic statement. No economist ever needed to measure quantities and prices to say that if the price increases, all else being equal, the quantity will drop. As such, economic theory needs to be internally consistent (i.e. not argue that higher prices mean both smaller and greater quantities of goods consumed all else being equal).

However, the application of these axiomatic statements depends largely on the question asked. For example, I am currently doing work on the 19th century Canadian institution of seigneurial tenure. In that work, I question the role that seigneurial tenure played in hindering economic development. In the existing literature, the general argument is that the seigneurs (i.e. the landlords) hindered development by taxing (as per their legal rights) a large share of net agricultural output. This prevented the accumulation of savings which – in times of imperfect capital markets – were needed to finance investments in capital-intensive agriculture. That literature invokes one corpus of axiomatic statements, those related to capital theory. For my part, I argue that the system – because of a series of monopoly rights – was actually a monopsony system through which the landlords restrained their demand for labor on the non-farm labor market and depressed wages. My argument invokes the corpus of axioms related to industrial organization and monopsony theory. Both explanations are internally consistent (there are no self-contradictions). Yet one must be more relevant to the question of whether or not the institution hindered growth, and one must square better with the observed facts.

And that is economic history properly done. It tries to answer which theory is relevant to the question asked. The purpose of economic history is thus to find which theories matter the most.

Take the case, again, of asymmetric information. The seminal work of Akerlof on the market for lemons made for a consistent theory, but subsequent waves of research (notably my favorite here by Eric Bond) have shown that the stylized predictions of this theory rarely materialize. Why? Because the theory of signaling suggests that individuals will find ways to invest in a “signal” to solve the problem. These are two competing theories (signaling versus asymmetric information), and one seems to win over the other. An economic historian tries to sort out what mattered to a particular event.

Now, take these last few paragraphs and replace the words “economic historian” with “economist”. I believe that no economist would disagree with the definition of the tasks of the economist that I offered. So why would an economic historian be different? Everything that has happened is history, and every question about it must be answered by sifting for the theories that are relevant to the event studied (under the constraint that the theory be internally consistent). Every economist is an economic historian.

As such, the economic historian/economist must use the advanced tools of econometrics: synthetic controls, instrumental variables, proper identification strategies, vector auto-regressions, cointegration, variance analysis and everything else you can think of. He needs them in order to answer the question at hand. The only difference for the economic historian is that he looks further back in the past.

The problem with this systematic approach is the effort required of practitioners. There is a need to understand – intuitively – a wide body of literature on price theory, statistical theories and tools, accounting (for understanding national accounts) and political economy. This takes many years of training, and I can take my own case as an example. I force myself to read one scientific article outside my main fields of interest every week in order to create a mental repository of theoretical insights I can exploit. Since I entered university in 2006, I have been forcing myself to read theoretical books on the margin of my comfort zone. For example, University Economics by Allen and Alchian was one of my favorite discoveries, as it introduced me to the UCLA approach to price theory. It changed my way of understanding firms and the decisions they make. Then I read some works on Keynesian theory (I will confess that I have never been able to finish the General Theory), which made me more respectful of some core insights of that body of literature. In the process of reading those, I created lists of theoretical key points the way one would accumulate kitchen equipment.

This takes a lot of time, patience and modesty about one’s accumulated stock of knowledge. But these theories never meant anything to me without an application to deeper questions. After all, debating the theory of price stickiness without actually asking whether it mattered is akin to debating with theologians about the gender of angels (I vote that they are angels, and since they are fictitious, I don’t give a flying hoot’nanny). This is because I really buy into the claim made by Douglass North that theory is brought to life by history (and that history is explained by theory).

On the Practice of Economic History

So, how do we practice economic history? The first thing is to find questions that matter.  The second is to invest time in collecting inputs for production.

While accumulating theoretical insights, I also made lists of historical questions that were still debated. Basically, I have been making lists of research questions since I was an undergraduate student (not kidding here), and I keep everything on the list until I am satisfied with my answer and/or the subject has been convincingly resolved.

One of my criteria for selecting a question is that it must relate to an issue that is relevant to understanding why certain societies are where they are now. For example, I have been delving into the issue of the agricultural crisis in Canada during the early decades of the 19th century. Why? Because most historians attribute (wrongly, in my opinion) a key role to this crisis in the creation of the Canadian confederation, the migration of French-Canadians to the United States and the politics of Canada up to today. Another debate I have been involved in relates to the Quiet Revolution in Québec (see my book here), which is argued to be a watershed moment in the history of the province. According to many, it marked a breaking point when Quebec caught up dramatically with the rest of Canada (I disagreed and proposed that it actually slowed down a rapid convergence in the decade and a half that preceded it). I picked the question because the moment is central to all political narratives presently existing in Quebec, and every politician utters the words “Quiet Revolution” when given the chance.

In both cases, they mattered to understanding what Canada was and what it has become. I used theory to sort out what mattered and what did not matter. As such, I used theory to explain history and in the process I brought theory to life in a way that was relevant to readers (I hope).  The key point is to use theory and history together to bring both to life! That is the craft of the economic historian.

The other difficulty (on top of selecting questions and understanding the theories that may be relevant) for the economic historian is the time-consuming nature of data collection. Economic historians are basically monks (and in my case, I have both the shape and the haircut of Friar Tuck) who patiently collect and assemble new data for research. This is a high fixed cost of entering the trade. In my case, I spent two years in a religious congregation (literally with religious officials) collecting prices, wages, piece rates and farm data to create a wide empirical portrait of the Canadian economy. This was a long and arduous process.

However, thanks to the lists of questions I had assembled by reading theory and history, I saw the many lines of research I could generate by assembling data. Armed with some knowledge of what I could do, the data I collected suggested still other questions I could ask. Once I had finished my data collection (18 months), I had assembled a roadmap of twenty-something papers answering a wide array of questions in Canadian economic history: was there an agricultural crisis; were French-Canadians the inefficient farmers they were portrayed to be; why did the British tolerate Catholic and French institutions when they conquered French Canada; did seigneurial tenure explain the poverty of French Canada; did the conquest of Canada matter to future growth; what was the role of free banking in stimulating growth in Canada; etc.

It is necessary for the economic historian to collect a ton of data and assemble a large base of theoretical knowledge to guide the data towards relevant questions. For those reasons, the economic historian takes a longer time to mature. It simply takes more time. Yet, once the maturation is over (I feel that mine is far from being over to be honest), you get scholars like Joel Mokyr, Deirdre McCloskey, Robert Fogel, Douglass North, Barry Weingast, Sheilagh Ogilvie and Ronald Coase (yes, I consider Coase to be an economic historian but that is for another post) who are able to produce on a wide-ranging set of topics with great depth and understanding.

Conclusion

The craft of the economic historian is one that requires a long period of apprenticeship (there is an inside joke here, sorry about that). It requires heavy investment in theoretical understanding beyond the main field of interest that must be complemented with a diligent accumulation of potential research questions to guide the efforts at data collection. Yet, in the end, it generates research that is likely to resonate with the wider public and impact our understanding of theory. History brings theory to life indeed!

Auftragstaktik: Decentralization in military command

Many 20th century theorists who advocated central planning and control (from Gaetano Mosca to Carl Landauer, and hearkening back to Plato’s Republic) drew a direct analogy between economic control and military command, envisioning a perfectly functioning state in which the citizens mimic the hard work and obedience of soldiers. This analogy did not remain theoretical: the regimes of Mussolini, Hitler, and Lenin all attempted to model economies along military principles. [Note: this is related to William James’ persuasion tactic of “The Moral Equivalent of War,” which many leaders have since used to garner public support for government intervention in economic crises from the Great Depression to the energy crisis to the 2012 State of the Union; one analogy matches the organizing methods of war to central planning and the other matches the moral commitment of war to intervention, but I digress.] The underlying argument for “central economic planning along military principles” was that the actions of citizens would be more efficient and harmonious under the direction of a scientific, educated hierarchy with highly centralized decision-making than if they were allowed to do whatever they wanted. Wouldn’t an army, these theorists argued, completely fall apart and be unable to function coherently if it did not have rigid hierarchies, discipline, and central decision-making? Do we want our economy to be the peacetime equivalent of an undisciplined throng (I’m looking at you, Zulus at Rorke’s Drift) while our enemies gain organizational superiority (as the Brits had at Rorke’s Drift)? While economists would probably point out the many problems with the analogy (the different goals of the two systems, the principled benefits of individual liberty, etc.), I would like to put these valid concerns aside for a moment and take the question at face value. Do military principles support the idea that individual decision-making is inferior to central control? Historical evidence from Alexander the Great to the US Marine Corps suggests a major counter to this assertion, in the form of Auftragstaktik.

Auftragstaktik

Auftragstaktik was developed as a military doctrine by the Prussians following their losses to Napoleon, when they realized they needed a systematic way to overcome brilliant commanders. The idea that developed, the brainchild of Helmuth von Moltke, was that the traditional use of strict military hierarchy and central strategic control might not be as effective as giving well-trained officers operating on the front only the general, mission-based strategic goals that truly necessitated central involvement; those officers would then have the flexibility and independence to make tactical decisions without consulting central commanders (or paperwork). Auftragstaktik largely lay dormant during World War I, but literally burst onto the scene as the method of command that allowed (along with the integration of infantry with tanks and other military technology) the swift success of the German blitzkrieg in World War II. This showed a stark difference in outcome between German and Allied command strategies, with the French expecting a defensive war and the Brits adhering faithfully and destructively to the centralized model. The Americans, when they saw that most bold tactical maneuvers happened without or even against orders, and that commanders other than Patton generally met with slow progress, adopted the Auftragstaktik model. [Notably, this also allowed the Germans greater adaptiveness and ability when their generals died–should I make a bad analogy to Schumpeter’s creative destruction?] These methods may not even seem foreign to modern soldiers or veterans, as Auftragstaktik is still actively promoted by the US Marine Corps.

All of this is well known to modern military historians and leaders: John Nelson makes an excellent case for its ongoing utility, and the excellent suggestion has also been made that its principles of decentralization, adaptability, independence, and lack of paperwork would probably be useful in reforming non-military bureaucracy. It has already been used and advocated in business, and its allowance for creativity, innovation, and reactiveness to ongoing complications gives new companies an advantage over ossified and bureaucratic ones (I am reminded of the last chapter of Parkinson’s Law, which roughly states that once an organization has purpose-built rather than adapted buildings it has become useless). However, I want to throw in my two cents by examining pre-Prussian applications of Auftragstaktik, in part to show that the advantages of decentralization are not limited to certain contexts, and in part because they give valuable insight into the impact of social structures on military ability and vice versa.

Historical Examples

Alexander the Great: Alexander was not just given exemplary training by his father, he also inherited an impressive military machine. The Macedonians had been honed by the conquest of neighboring Illyria, Thrace, and Paeonia, and the addition of Thessalian cavalry and Greek allies in the Sacred Wars. However, as a UNC ancient historian found, the most notable innovations of the Macedonians were their new siege technologies (which allowed a swifter war–one could say, a blitzkrieg–compared to earlier invasions of Persia) and their officer corps. This officer corps, made up of the king’s “companions,” was well trained in combined-arms hoplite and cavalry maneuvers, and during multiple portions of his campaign (especially in Anatolia and Bactria) operated as leaders of independent units that could cover a great deal more territory than one army. In set battles, the Macedonians showed a high degree of maneuverability, with oblique advances, effective use of reserves, and well-timed cavalry strikes into gaps in enemy formations, all of which depended on the delegation of tactical decision-making. This contrasted with the Persians, who followed standards into battle without organized ranks and files, and the Greek hoplites, whose phalanx depended mostly on cohesion and group action and therefore lacked flexibility. [Also, fun fact, the Macedonians had the only army in recorded history in which bodies of troops were identified systematically by the name of their leader. This promoted camaraderie and likely indicates that, long-term, the soldiers became used to the tactical independence and decision-making of that individual. Imagine dozens of Rogers’ Rangers.]

The Roman legion: As with any great empire, the Macedonians spread through their military innovations, but then ossified in technique over the next 150 years. When the Romans first faced a major Hellenistic general, Pyrrhus, they had already developed the principles of the system that would defeat the Macedonian army: the legion. In the early Roman legion, two centuries were combined into a maniple, and maniples were grouped into cohorts, allowing for detachment and independent command of groups of differing sizes. Crucially, centurions maintained discipline and the flexible but coordinated Roman formations, and military tribunes were given tactical control of groups both during and between battles. The flexibility of the Roman maniples was shown at the Battle of Cynoscephalae, in which the Macedonian phalanx–which had frontal superiority through its use of the sarissa and its cohesion, but little maneuverability–became disorganized on rough ground and was cut to pieces on one flank by the more mobile and individually capable Roman legionaries. This (as many battles in the Macedonian and Syrian Wars also proved) showed the value of flexibility and individual action in a disciplined force, but where was the Auftragstaktik? At Cynoscephalae, after defeating one flank, the Romans on that flank dispersed to loot the Macedonian camp. In antiquity, this generally resulted in those troops becoming ineffective as a fighting force, and many a battle was lost because of pre-emptive looting. However, in this case, an unnamed tribune–to whom the duty of tactical decisions had been delegated–reorganized these looters and brought them to attack the rear of the other Macedonian flank, which had been winning. This resulted in a crushing victory and contributed to the Roman conquest of Greece. Decentralized control was also a hallmark of Julius Caesar himself, who frequently sent several cohorts on independent campaigns in Gaul under subordinates such as Titus Labienus, allowing him to conquer the much more numerous Gauls through local superiority, lack of Gallic unity, and organization. Also, at the climactic Battle of Alesia, Caesar used small, mobile reserve units with a great deal of tactical independence to hold over 20 km of wooden walls against a huge besieging force.

The Vikings: I do not mean to generalize about Vikings (who could be of many nations–the term just means “raider”) since they did not have a united culture, but in their very diversity of method and origin they demonstrate the effectiveness of individualism and decentralization. Despite being organized mostly around ship-crews led by jarls, with central leadership only when won by force or chosen by necessity, Scandinavian longboatmen and warriors exerted their power from Svalbard to Constantinople to Sicily to Iceland and North America from the 8th to the 12th centuries. The social organization of Scandinavia may have been the most free in recorded history (in terms of individual will to do whatever one wants–including, unfortunately, slaughter, but also some surprisingly progressive rights for women over their own decisions), and this was on display in the famous invasion of the Great Heathen Army. With as few as 3,500 farmer-raiders and 100 longboats to start, the legendary sons of Ragnar Lothbrok and the Danish invaders, with jarls as the major decision-makers on both strategic and tactical matters for their crews, won a series of impressive battles over 20 years (described in fascinating, if historical-fiction, detail in the wonderful book series and now TV series The Last Kingdom), almost never matching the number of combatants of their opponents, and took over half of England. The terror and military might associated with the Vikings in the memories of Western historians is a product of the completely decentralized, nearly anarchic methods of Scandinavian raiders.

The Mongols: You should be sensing a trend here: cultures that fostered lifelong training and discipline (and expertise in siege engineering, which seems to have correlated with the tactics I describe, as the Macedonians, Romans, and Mongols were each the most advanced siege engineers of their respective eras) tended to have more trust in well-trained subordinates. This brought them great military success and also makes them excellent examples of proto-Auftragstaktik. The Mongols not only had similar mission-oriented commands and tactical independence, but they also had two other aspects of their military that made them highly effective over an enormous territory: their favored style of horse-archer skirmishing gave natural flexibility and their clan organization allowed for many independently-operating forces stretching from Poland to Egypt to Manchuria. The Mongols, like the Romans, demonstrate how a force can have training/discipline without sacrificing the advantages based on tactical independence, and the two should never be mixed up!

The Americans in the French and Indian War and the Revolutionary War: Though this is certainly a more limited example, several units performed far better than others among the Continentals. The aforementioned Rogers’ Rangers operated as a semi-autonomous attachment to regular forces during the French and Indian War, and were known for their mobility, individual experience and ability, and tactical independence in long-range, mission-oriented reconnaissance and ambushes. This use of savvy, experienced woodsmen in a semi-autonomous role was so effective that the ranger corps was expanded, and similar tactical independence, decentralized command, and maneuverability were championed by the Green Mountain Boys, the heroes of Ticonderoga. Morgan’s Rifles used similar experience and semi-autonomous flexibility to help win the crucial battles of Saratoga and Cowpens, which, respectively, allowed the nascent Continental resistance to survive and thrive in the North outside of coastal cities and to capture much of the South. The forces of Francis Marion also used proto-guerrilla tactics with decentralized command and outperformed the regulars of Horatio Gates. Given the string of unsuccessful set-piece battles fought by General Washington and his more conventional subordinates, the Continentals depended on irregulars and unconventional warfare to survive and gain victories outside of major ports. These victories (especially Saratoga and Cowpens) cut off the British from the interior and forced them into stationary posts in a few cities, notably Yorktown, where Washington and the French could besiege them into submission. This may be comparable to the Spanish and Portuguese in the Peninsular War, but I know less about their organization, so I will leave the connection between Auftragstaktik and early guerrilla warfare to a better-informed commenter.

These examples hopefully bolster the empirical support for the idea that military success has often been based, at least in part, on radically decentralizing tactical control and trusting individual, front-line commanders to make mission-oriented decisions more effectively than a bureaucracy could. There are certainly many more, and feel free to suggest examples in the comments, but these are my favorites and probably the most influential. This evidence should inspire a healthy skepticism toward arguments for central control that rest on the supposed efficiency or effectiveness of military central planning. Given the development of new military technologies and methods of campaign (especially guerrilla and “lone wolf” attacks, which involve a great deal of decentralized decision-making) and the increasing tendency since 2008 to revert toward ideas of central economic planning, we are likely to get a lot of new evidence about both sides of this fascinating analogy.

How dairy farmers’ unions in Canada are distorting the facts about supply management

Under fire recently, as President Trump has criticized supply management in Canada and retaliated against it, the provincial associations representing dairy farmers have gone on the offensive. To promote the virtues of this system, which is meant to reduce production in order to prop up prices through trade tariffs, production quotas, and price controls (how can we call those virtues?), these unions have produced numerous infographics to make their case. These infographics, some of which are part of their “lobby day kit,” claim that dairy prices are lower in Canada than elsewhere, that milk is still a cheap drink relative to other types of drinks, and that those prices, supposedly, increase more slowly than elsewhere. All of these graphics are dishonest and must be dismantled.

The most egregious of these infographics – present in the “lobby day kit” – shows the price of milk in Australia (1.55 CAD), Canada (1.45 CAD), and New Zealand (1.65 CAD). They are seemingly using 2014 prices. First of all, they use data that conflicts massively with the reports of Statistics Canada, which suggest that milk prices hover between $2.33 and $2.48 per liter. Their data is provided by AC Nielsen, but no justification is offered as to why it is better than Statistics Canada’s. The truth is that it is not better. Participants in Nielsen surveys come from a self-selected pool of store owners who wish to participate and are then selected by Nielsen to be part of the data collection; only then do they record prices. It should also be mentioned that not all regions of Canada are covered in the data. Although the Nielsen data does have some uses (especially with regard to market studies), it hardly measures up to Statistics Canada when it comes to evaluating price levels. This is because the government agency collects information from all regions and surveys a broader sweep of retailers in order to construct the consumer price index.

But an even larger problem is that, in their comparison of prices, they do not mention that New Zealand taxes milk. In New Zealand, all food items are subject to sales tax, which is not the case in Canada and Australia. Hence, when they compare retail prices, they are comparing prices that exclude taxes with prices that include them. One would like to see whether they acknowledge this fact somewhere in their methodological notes, but there are none!

Using prices available at Numbeo.com and Expatistan.com and the exchange rates made available by the Bank of Canada, we can correct for this problem. Simply changing the price source leads to a massively different result for Australia, whose milk prices turn out to be lower than Canada’s. Secondly, once we adjust for the sales tax in New Zealand, we find that prices there are also lower than in Canada – in fact, lower than in one of Canada’s cheapest markets, Montreal (let alone Toronto or Vancouver). So the infographic they show in order to lobby governments is a fabrication.

Table 1: The real price of milk

Using Numbeo.com (regular milk)

                 Unadjusted   Adjusted for taxes
  Australia        $1.59           $1.59
  New Zealand      $2.26           $1.97
  Canada           $1.99           $1.99

Using Expatistan.com (whole milk)

                 Unadjusted   Adjusted for taxes
  Sydney           $1.82           $1.47
  Wellington       $2.42           $2.10
  Montreal         $2.87           $2.87

Source: Numbeo.com and Expatistan.com (consulted May 16th, 2014) and the Bank of Canada’s currency converter. Note: using the Statistics Canada price would make Canada’s situation look even worse by comparison.
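For readers who want to check the arithmetic, here is a minimal sketch of the tax adjustment behind the New Zealand rows of Table 1, assuming New Zealand’s 15% GST is included in the posted retail price (the code and the rounding are mine, offered only as an illustration):

```python
# A minimal sketch of the adjustment behind Table 1's New Zealand rows,
# assuming New Zealand's 15% GST is built into the posted retail price.
# (The Canadian rows are left unchanged, since Canada does not tax milk.)

NZ_GST = 0.15

posted_prices_cad = {  # tax-inclusive retail prices, as in the table
    "New Zealand (Numbeo, regular milk)": 2.26,
    "Wellington (Expatistan, whole milk)": 2.42,
}

for market, price in posted_prices_cad.items():
    pre_tax = price / (1 + NZ_GST)
    print(f"{market}: posted ${price:.2f} -> ex-tax ${pre_tax:.2f}")
# New Zealand: $2.26 -> $1.97; Wellington: $2.42 -> $2.10, matching Table 1.
```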

This is part of a pattern of deceit, since they also massage the data in numerous other graphs presented to Canadians in an effort to convince them of the virtues of supply management. One other example is an infographic that presents nominal milk prices in Australia before and after the abolition of supply management. Because prices appear more volatile after 2000 and seem to increase more steeply, they try to make us believe that liberalization was a failure. This is not the case. Any sensible policy analyst would deflate nominal prices by the general price index to control for inflation. When one does just that, using data from the Australian Bureau of Statistics, one sees that real prices stabilized in the first ten years of deregulation after increasing roughly 15% in the decade prior. And since 2010, real prices have been falling constantly.
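To show what deflating actually does – with invented numbers for illustration, not the Australian data – here is the one-step adjustment their infographic skips:

```python
# A minimal sketch of deflating nominal prices by a price index.
# All figures below are hypothetical, chosen only to illustrate the point.

nominal_price = {2000: 1.30, 2005: 1.50, 2010: 1.70}  # hypothetical $/litre
cpi = {2000: 100.0, 2005: 115.0, 2010: 130.0}         # hypothetical index, 2000 = 100

for year in nominal_price:
    real_price = nominal_price[year] / (cpi[year] / cpi[2000])
    print(f"{year}: nominal {nominal_price[year]:.2f}, real (2000 dollars) {real_price:.2f}")
# The nominal price rises about 30%, yet the real price is essentially flat:
# a rising nominal series says nothing by itself about real prices.
```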

Other examples abound. In one instance, the Quebec union of dairy farmers circulated an infographic meant to show that nominal prices for dairy products increased faster in the United States than in Canada. Again, they omit inflation. Since 1990 (their own starting date), American dairy prices have risen more slowly than overall inflation – indicating a decline in real prices. In Canada, the opposite occurred: dairy prices outpaced inflation, indicating an increase in the real price.

The debate around supply management is complicated. The policy course to adopt in order to improve agricultural productivity and lower prices for Canadians is hard to pinpoint. But whatever position one may hold, no one is well-served by statistical manipulations offered by the unions representing dairy farmers.

Minimum wages: The economist as a psychologist

Both Ludwig von Mises and F.A. Hayek are known for arguing that there is no such thing as a good economist who is only an economist. For these two thinkers, a good economist-as-scientist also needs to know history, philosophy of science, ethics, and physics. Mises and Hayek had in mind what an economist-as-scientist should be familiar with: some minimum knowledge beyond his own discipline.

I would add that the economist as public educator – that is, the economist talking as an economist to non-economists – also needs some awareness of psychology. I may not be using the term “psychology” in the most precise way, but I mean the awareness to understand what the interlocutor feels and needs, and then to figure out how to communicate economic insights in a way that will not be automatically (emotionally or psychologically) rejected; how to make someone accept an economic outcome they do not want to be true; how to break the bad news with empathy. This is a challenge I try to get my students to understand, as one day they, too, will be economists outside the classroom in the real world.

A few days ago I found myself unexpectedly in the “psychologist” position. Only seconds after meeting two people for the first time, they dropped the question: “So, what do you think about increasing the minimum wage, should we do it?” I knew nothing about these two individuals, and the only thing they knew about me was that I’m an economics professor. The answer to such a question is an Econ 101 problem: if you increase the minimum wage (above the equilibrium price of labor), some lucky workers will get a wage increase at the expense of others losing their jobs.

The first question I asked myself was: “Do these two nice ladies actually want the analytical/scientific answer, or do they want the ‘professor’ to confirm their bias?” This could be a delicate discussion, since they may well have a loved one working in the minimum-wage labor market.

The first thing to get out of the way is that my answer as an economist is not ideologically driven and does not respond to a secret political agenda. How can that be made clear? One way is to show the economics profession’s consensus on the subject from an impersonal position. I explained to them that any economics textbook, by any author, from any country in the world, used in any university, would say the same thing: “If you put the price of labor above its equilibrium (a minimum wage), it will produce a disequilibrium (unemployment). You cannot fix the price outside equilibrium and at the same time remain in equilibrium.” Yes, as Ben Powell reminds us, even Paul Krugman agrees on this. By pointing to a worldwide consensus there is no room for ideological or political agendas. It is important to mention that economics is not always about politics. The economic analysis of minimum wages has nothing to do with being a Democrat or a Republican; the political positions of the two parties may differ, but those are not economic analyses, those are political strategies.
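For readers who like to see the textbook claim in numbers, here is a minimal sketch with a hypothetical linear labor market (the demand and supply curves and the wage figures are invented for illustration, not estimates of any real market):

```python
# Econ 101 sketch: a price floor set above the equilibrium wage creates a
# labor surplus (unemployment). All parameters are hypothetical.

def labor_demand(w):   # hours of labor firms want to hire at wage w
    return 1000 - 40 * w

def labor_supply(w):   # hours of labor workers offer at wage w
    return 200 + 40 * w

# Equilibrium: demand = supply  ->  1000 - 40w = 200 + 40w  ->  w* = 10
w_eq = (1000 - 200) / 80
employment_eq = labor_demand(w_eq)                         # 600 hours, no surplus

w_min = 12                                                 # a binding floor above w*
employed = min(labor_demand(w_min), labor_supply(w_min))   # 520 hours (demand-limited)
unemployed = labor_supply(w_min) - labor_demand(w_min)     # 680 - 520 = 160 hours

print(f"Equilibrium wage {w_eq:.0f}, employment {employment_eq:.0f}")
print(f"With a minimum wage of {w_min}: employment {employed:.0f}, "
      f"surplus labor (unemployment) {unemployed:.0f}")
```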

The next step was to deal with the question of why, if such a consensus exists, there are studies showing no harmful effects from increases in the minimum wage. This is no mystery either. A well-known reason why an increase in the minimum wage does not increase unemployment is that, in effect, there is no real increase. The politician may say he is increasing the minimum wage, but he does not say that the new minimum sits only just above the equilibrium level, so he is not doing much. Another reason is that studies sometimes look at the effect of a minimum wage increase in a small location where low-skilled workers can get another job in the next town without needing to move, and therefore they do not show up as unemployed. This is another case of an ineffective increase in the minimum wage. Or perhaps the minimum wage increases but a benefit goes down: the total compensation of the employee does not change, only its composition does.

But can the economist back up his claim? Is there clearer evidence that increasing the minimum wage does harm than there are complicated cases where no harmful effects appear? Again, I went geographically large. First, I compared the U.S. with Europe, where minimum wages are generally higher; in Europe you find higher unemployment rates, higher youth unemployment, and higher long-term unemployment. Second, I brought up the U.S. Fair Labor Standards Act of 1938, which fixed the minimum wage at 25 cents per hour. This law included Puerto Rico, where many workers were earning between 3 and 4 cents per hour. Bankruptcies and unemployment skyrocketed. It was in fact unions who asked Congress to make an exception for Puerto Rico, which took two years to consider. For two years, people in Puerto Rico were forced to work in the black market or go without a minimum income. Want more cases? See here. “See,” the psychologist says, “minimum wages are very dangerous; you can seriously harm yourself. Is that a bet you really want to make?”

After dealing with these three problems – (1) objecting to a minimum wage increase is not (necessarily) an ideological or political position, (2) studies that deny the effect are doubtful for reasons that are easy to understand, and (3) if you look at a broad enough sample, the economist’s prediction is right there – two issues remain to be explained.

First, when an economist objects to an increase in the minimum wage, it does not mean we do not want wages to go up. We are just saying that a minimum wage increase is not the right way to do it. I wish it were that easy, but the laws of supply and demand inform us otherwise. I suspect most economists would advocate for minimum wages if their negative effects were not real. Second, I explain Milton Friedman’s lesson that a policy should be judged by its results, not by its intentions. The economist objects because of the unintended effects of fixing the price of labor outside equilibrium, not because we wouldn’t like to see real wages increase.

On Evonomics, Spelling and Basic Economic Concepts

I am a big fan of exploring economic ideas in greater depth rather than remaining on the surface of the knowledge I accumulated through my studies. As such, I am always happy to see people trying to promote “alternatives” within the field of economics (e.g. neuroeconomics, behavioral economics, economic history, evolutionary economics, feminist economics, etc.). I do not always agree, but it is enjoyable to think about some of the core tenets of the field through the work of places like the Institute for New Economic Thinking. However, things like Evonomics do not qualify.

And this is in spite of the fact that the core motivation of the webzine is correct: there are problems with the way we do economics today (on average). However, discomfort with the existing state of affairs is no excuse for shoddy work and for holding up strawmen to be burned at the stake, followed by a vindictive celebratory dance. The most common habit of those who write for Evonomics is to build such a strawman with regard to rationality. They present a caricature in which humans calculate everything with precision, and argue that if, post facto, all turns out well, then it was a rational process. No one, I mean no one, believes that. The most succinct summary of rationality as economists understand it is presented by Vernon Smith in his Rationality in Economics: Constructivist and Ecological Forms.

Such practices have led me to discount much of what is said on Evonomics, and it is close to the threshold where the time cost of sorting the wheat from the chaff outweighs the intellectual benefits.

This recent article on “Dierdre” McCloskey may have pushed it over that threshold. I say “Dierdre” because the author of the article could not even be bothered to spell correctly the name of the person he is criticizing. Indeed, it is “Deirdre” McCloskey, not “Dierdre”. While, etymologically, Dierdre is a variant of Deirdre, from the Celtic legend that shares similarities with Tristan and Isolde, the latter form is far more frequent. More importantly, Dierdre is a name more familiar to players of Guild Wars.

A minor irritant which, unfortunately, compounds my poor view of the webzine. But then, the author of the article in question goes into full strawman mode. He singles out a passage from McCloskey regarding the effects of redistributing income from the top to the bottom. In that passage, McCloskey merely points out that the effects of equalizing incomes would be minimal.  The author’s reply? Focus on wealth and accuse McCloskey of shoddy mathematics.

Now, this is just a poor understanding of basic economic concepts, and it matters to the author’s whole point. Income is a flow variable and wealth is a stock variable. The two are thus dramatically different. True, the flow helps build up the stock, but the people with the top incomes (flow) are not necessarily those with the top wealth (stock). For example, most students have a negative net worth (negative stock) when they graduate. However, thanks to their human capital (Bryan Caplan would say signal here), they have higher earnings. Thus, they are close to the top of the income distribution and close to the very bottom of the wealth distribution. My grandfather was the reverse. Before he passed away, he was probably near the top of the wealth distribution, but since he spent most of his time doing no paid work whatsoever, he was at the bottom of the income distribution.
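A toy illustration of the distinction (the people and the figures are hypothetical, chosen only to show that the two rankings need not coincide):

```python
# Stock vs. flow: income is what comes in during the current period (a flow);
# wealth is what has been accumulated (a stock). Hypothetical figures only.

people = {
    # name: (annual income [flow], net worth [stock])
    "recent graduate": (70_000, -30_000),   # high income, negative wealth (student debt)
    "retired grandpa": (15_000, 900_000),   # little paid work, large accumulated wealth
}

by_income = sorted(people, key=lambda p: people[p][0], reverse=True)
by_wealth = sorted(people, key=lambda p: people[p][1], reverse=True)
print("Top of the income distribution:", by_income[0])   # recent graduate
print("Top of the wealth distribution:", by_wealth[0])   # retired grandpa
```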

Never mind that the author of the Evonomics article misses McCloskey’s basic point (which is that we should care more about the actual welfare of people than about the egalitarian distribution); this basic failure to understand the difference between a stock and a flow leads him astray.

To be fair, I can see why some people disagree with McCloskey. However, if you can’t pass the basic ideological Turing test, you should not write a rebuttal.

Geopolitics and Asia’s Little Divergence: State Building in China and Japan After 1850

Crossposted at Medium

Why did Japan successfully modernize in the 19th century while China failed to do so? Both China and Japan came under increasing threat from the Western powers after 1850. In response, Japan successfully undertook a program of state building and modernization; in China, however, attempts to modernize proved unsuccessful and the power of the central state was fatally weakened. The failure to build a modern state led to China’s so-called lost century while Japan’s success enabled it to become the first non-western country to industrialize. In a paper with Chiaki Moriguchi (Hitotsubashi University) and Tuan-Hwee Sng (NUS), we explore this question using a combination of historical evidence and formal modeling.

On the surface this East Asian “little divergence” is extremely puzzling. Qing China, as late as the end of the eighteenth century, was a powerful centralized empire. An impersonal bureaucracy, selected by exams and routinely rotated, governed the empire. In contrast, the institutions of Tokugawa Japan are usually described as feudal. The shogun directly ruled only 15% of the country. The remainder was divided into 260 domains ruled by lords known as daimyo, who collected their own taxes, possessed their own armies, and issued their own currencies. To the outside observer, China would have seemed much more likely than Japan to be able to establish the institutions of a centralized state.

Figure 1: Qing China and Tokugawa Japan

For much of the early modern period (1500–1700), China and Japan possessed military capabilities that made them more than a match for any western power. This changed dramatically after the Industrial Revolution, and their vulnerability was exposed by the First Opium War (1839–1842) and the Black Ships Incident of 1853, respectively. During the First Opium War, a small number of British ships overpowered the entire Chinese navy, while Commodore Perry’s show of force in landing in Japan in 1853 convinced the Japanese of western naval superiority. Within a few years, political elites in both countries recognized the need to modernize, if only to develop the military capacity required to fend off this new danger.

* * *

Figure 2: Commodore Perry in Japanese eyes

In China, after the suppression of the Taiping Rebellion, there were attempts at modernizing — notably the Self-Strengthening movement associated with Li Hongzhang and others. Recent scholarship has reevaluated this movement positively. At the purely military-technological level it was in fact quite successful. The Jiangnan Arsenal and the Fuzhou Shipyard saw the successful importation of western military technology into China and the Chinese were soon producing modern ships and weaponry. However, these developments were associated with a process of political decentralization as local governors took on more and more autonomy. The importation of military technology was not associated with more far-reaching societal or political reforms. There was no serious attempt to modernize the Qing state.

In contrast, Japan, following the Meiji Restoration, embarked on a wholesale societal transformation. The daimyo lost all power. Feudalism was abolished. Compulsory education was introduced, as was a nationwide railway system. A new fiscal system was imposed in the teeth of opposition from farmers. The samurai were disarmed and transformed from a military caste into bureaucrats and businessmen.

Qing China and a newly modernized Meiji Japan collided in the First Sino-Japanese War (1894–1895). Before the war, western observers believed China would win, in part because of its superior equipment. But China lacked a single national army: it was the Beiyang army and the Beiyang fleet that fought the entire Japanese military. The fact that Japan had undergone a wholesale transformation of society enabled it to marshal the resources needed to win a rapid victory.

 

Figure 3: The Jingyuan, one of the ships of the Beiyang fleet

* * *

Why did the Japanese succeed in modernizing while Qing China failed to do so? Historians have proposed numerous explanations. In our paper, however, rather than focusing on cultural differences between Japan and China, we focus on how different geopolitical incentives shaped their decisions to invest in state capacity and state centralization.

Before the mid-19th century, China faced threats only from Inner Asia, from where nomadic invaders had historically and routinely threatened the sedentary population of the Chinese plain. Because of this threat, China historically tended to be a centralized empire with its capital and the bulk of its professional army stationed close to the northern frontier (see Ko, Koyama, and Sng (2018)). In contrast, Japan faced no major geopolitical threats prior to 1850. This meant that it could retain a loose and decentralized political system.

After 1850 both countries faced major threats from several directions. China was threatened on its landward borders by Russian expansionism and from the coast by Britain and France (and later Germany and the United States). Japan was threatened from all directions by western encroachment.

We build a simple model which allows for multidirectional geopolitical threats. We represent each state as a line of variable length. States have to invest in state capacity to defend against external geopolitical threats. Each state can use centralized fiscal institutions or decentralized fiscal institutions.

If there is a strong threat from one direction, as China faced prior to 1850, the dominant strategy is political centralization. In the absence of major geopolitical threats, decentralization may be preferable, as was the case in Tokugawa Japan.

The emergence of a multidirectional threat, however, changes things. A large country facing a multidirectional threat may have to decentralize in order to meet the different challenges it now faces. This is what happened in China after 1850. In contrast, for a small state with limited resources, an increase in the threat level makes centralization and resource pooling more attractive. For a small territory like Japan, the emergence of non-trivial foreign threats renders political decentralization untenable.
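To make the logic concrete, here is a purely illustrative toy sketch – not the model in the paper, just invented parameters and functional forms – of why a one-front threat favors centralization for any state, while a two-front threat pushes only the large state toward decentralization:

```python
import math

# Toy sketch (not the paper's actual model): a state is a line of length L with
# fiscal resources proportional to L; threats arrive at one end or at both ends.
# The decay and overhead parameters below are hypothetical, chosen only to make
# the comparative statics visible.

def centralized(L, fronts, decay=0.5):
    """Pooled resources are split across fronts but lose effectiveness with the
    distance from the capital (placed at the frontier if there is one front,
    at the center of the line if there are two)."""
    dist = 0.0 if fronts == 1 else L / 2
    return (L / fronts) * math.exp(-decay * dist)

def decentralized(L, fronts, overhead=0.7):
    """Each half of the line defends itself with its own local resources (no
    distance loss) but forgoes pooling, modeled as a fixed overhead factor.
    The number of fronts does not change the defense available locally."""
    return (L / 2) * overhead

for L, label in [(1.0, "small, compact state (Japan-like)"),
                 (3.0, "large state (China-like)")]:
    for fronts in (1, 2):
        c, d = centralized(L, fronts), decentralized(L, fronts)
        choice = "centralize" if c > d else "decentralize"
        print(f"{label}, {fronts}-front threat: "
              f"centralized={c:.2f}, decentralized={d:.2f} -> {choice}")

# Output: both states centralize against a single-front threat; under a
# two-front threat the small state still centralizes, while the large state
# is better off decentralizing -- the pattern described above.
```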

We then consider the incentives to modernize. Modernization is costly. It entails social dislocation and creates losers as well as winners, and the losers will attempt to block any changes that hurt their interests. We show that for geographically compact polities, it is always a dominant strategy to modernize in the face of a multidirectional threat, as the state is able to manage local opposition to reform. This helps to explain why all members of the Japanese political elite came around to favoring rapid modernization by the late 1860s.

Consistent with our model, modernization was more difficult and controversial in China. The Qing government, and particularly the Empress Dowager, famously opposed the building of railroads. The best-known example of this was the Wusong railroad in Shanghai. Built using foreign investment, it was dismantled in 1877 after locals complained about it. The Qing state remained reactive, prepared to kowtow to local powerholders and vested interests rather than confront them. Despite local initiatives, no effort was made at wholesale reform until after China’s defeat at the hands of Japan in 1895.

Figure 4: The Wusong Railroad in 1876

* * *

By 1895, however, it was too late. The Qing state’s attempts to reform and modernize led to its collapse. Needless to say, East Asia’s little divergence would have lasting consequences.

Japan’s modernization program astonished foreign observers. Victory over Russia in the war of 1904–1905 propelled Japan to Great Power status, but it also set Japan on the path to disaster in the Second World War. Nevertheless, the institutional legacy of Japan’s successful late 19th-century modernization played a crucial role in Japan’s post-1945 economic miracle.

Following the collapse of the Qing dynasty, China fragmented further, entering the so-called warlord era (1916–1926). Though the Nationalist regime reunified the country and began a program of modernization, the Japanese invasion and the Second Sino-Japanese War (1937–1945) devastated the country. The end result was that China was reunified by the Communist Party and experienced further conflict and trauma until it began to embrace market reforms after 1979.

Ricardo and Ringo for a free-trade Brexit


My colleague, Shruti Rajagopalan, points out that today is the 200th anniversary of the publication of David Ricardo’s On the Principles of Political Economy and Taxation. It was here that the notion of comparative advantage began confounding protectionists and nativists. Shruti offers this famous example of it in practice:

Apparently, when asked if Ringo Starr is the best drummer in the world, John Lennon quipped, “Ringo isn’t the best drummer in the world. He isn’t even the best drummer in the Beatles.” And while Lennon may have fancied himself a better singer, guitarist, songwriter, and drummer, than Ringo, the Beatles are still better off with Ringo at the drums.

The essence of comparative advantage is that you don’t need to possess a great talent to benefit from trade within a group, whether we are talking about individual people or nations. So long as there exists some variation in relative talents, people will be able to benefit from specialization and trade.
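To see the arithmetic behind the quip, here is a minimal sketch with invented productivity numbers: Lennon is assumed better at everything, yet the band still gains when Ringo does the drumming.

```python
# A minimal numeric sketch of comparative advantage (all numbers hypothetical).
# "Lennon" is better at both tasks in absolute terms, yet total output rises
# when each member specializes according to opportunity cost.

productivity = {                      # output per hour at each task
    "Lennon": {"songs": 4, "drum_tracks": 2},
    "Ringo":  {"songs": 1, "drum_tracks": 1},
}
hours = 8                             # hypothetical working day

# Opportunity cost of one drum track, in songs forgone:
#   Lennon: 4/2 = 2 songs; Ringo: 1/1 = 1 song.
# Ringo is the lower-cost drummer even though he drums more slowly.

# Scenario A: no specialization -- each splits his day evenly between tasks
a_songs = sum(p["songs"] * hours / 2 for p in productivity.values())        # 20
a_drums = sum(p["drum_tracks"] * hours / 2 for p in productivity.values())  # 12

# Scenario B: Ringo drums all day; Lennon drums just 2 hours (to keep total
# drum output at 12) and spends the other 6 hours writing songs
b_drums = productivity["Ringo"]["drum_tracks"] * hours \
          + productivity["Lennon"]["drum_tracks"] * 2                       # 12
b_songs = productivity["Lennon"]["songs"] * 6                               # 24

print(f"No specialization: {a_songs:.0f} songs, {a_drums:.0f} drum tracks")
print(f"Specialization:    {b_songs:.0f} songs, {b_drums:.0f} drum tracks")
# Same amount of drumming, four extra songs: the gain from trading on
# comparative advantage, with no one getting more talented.
```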

This message is as relevant as ever. The British Parliament has just voted to hold fresh elections, supposedly to strengthen the Prime Minister’s hand when negotiating new terms of trade as Britain leaves the European Union. Politicians act as if trade is dangerous, always a threat to the national interest unless carefully constrained. They negotiate complex deals and regulations on market access, essentially holding their own consumers hostage, preventing them from buying foreign goods unless other countries agree to open their own markets. They fear that their domestic producers will be out-competed by superior, or cut-price, businesses from abroad.

What comparative advantage shows is that even if that happened to be true for every single industry, domestic businesses could still specialize so as to be competitive on the world market, and improve domestic living standards at the same time. Britain could open its ports and wallets to foreign goods and services with no tariffs, even without any reciprocal deal from the EU, and yet still benefit from trade.

Why? Because it doesn’t matter if you have to be the drummer, just so long as you are in the band.