Electricity in Quebec before Nationalization (1919 to 1939)

A few weeks ago, I mentioned that I am generally skeptical of “accepted wisdom” on many topics. “Accepted wisdom” is a stylized fact constructed by a party with intense preferences, a party that is gradually able to strip away nuances over time in order to solidify its preferred narrative. The example I gave a few weeks ago concerned antitrust laws. There are many more. One of them concerns a research agenda that I laid claim to in a recent article in the Atlantic Economic Journal (co-authored with my dear friend Germain Belzile): the nationalization of electricity in Quebec.

My home province of Quebec is basically one giant network of rivers well-suited to the production of hydro-electricity – a potential that was noticed in the late 19th century and led to a rapid expansion of the network. Historians (and some economists) have depicted the early electrical industry in Quebec as a “trust” (a cartel) that gouged consumers, a problem that could only be solved, as the neighboring province of Ontario had supposedly shown, by nationalization (which occurred in Quebec in two waves – one in 1944 and one in 1962).

In the article I published with Belzile, I argue that this narrative is largely incorrect. First, before nationalization, prices in Quebec were falling and were low by North American standards (see figures below). Second, production was expanding rapidly. This happened in spite of the fact that taxes imposed on the electrical industry grew rapidly over time, from less than 10% of total expenditures to close to 30%. Moreover, we point out that looking at residential prices is bound to yield bad comparisons (if we can even call those made above “bad”) if there is price discrimination. The industry did price discriminate, offering industrial customers (large power) incredibly low prices relative to Ontario or anywhere else in Canada (in spite of the taxes it was operating under and the fact that Ontario subsidized its own industry).

We also point out that there was a dynamics-of-interventionism problem. The neighboring province of Ontario (more populous and richer than Quebec) nationalized its industry and set prices well below the market level, which is an implicit subsidy. However, at the subsidized rate, Ontario could not supply its own demand and had to buy at the market price in Quebec. Its excess quantity of energy demanded was transferred onto the freer Quebec market, thus pushing prices up on that market.
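To make the mechanism concrete, here is a minimal sketch with invented linear demand and supply curves (purely illustrative numbers, not historical estimates):

```python
# A minimal sketch of the spillover mechanism (all numbers invented):
# Ontario caps its price below equilibrium, and the resulting excess
# demand is served on the unregulated Quebec market, raising prices there.

def demand(p, a=100.0, b=2.0):
    return a - b * p   # quantity demanded falls with price

def supply(p, c=1.0):
    return c * p       # quantity supplied rises with price

p_star = 100.0 / (2.0 + 1.0)   # unregulated equilibrium: demand(p) = supply(p)

p_cap = 20.0                               # Ontario's subsidized rate
spillover = demand(p_cap) - supply(p_cap)  # Ontario's excess demand

# Quebec must now clear its own demand plus the spillover:
# a - b*p + spillover = c*p  =>  p = (a + spillover) / (b + c)
p_quebec = (100.0 + spillover) / (2.0 + 1.0)

print(f"Equilibrium price without spillover: {p_star:.1f}")
print(f"Ontario excess demand at the cap:    {spillover:.1f}")
print(f"Quebec price with spillover:         {p_quebec:.1f}")
```

The capped province's shortage shows up as extra demand on the freer market, which is exactly the price pressure we describe.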

We also argue that there was wide heterogeneity of rates in Quebec that related to the structure of municipal regulation (the level at which electricity was regulated pre-1935). The price differences depended on the political games involving rent-seeking firms and politicians (best exemplified by the case of Quebec City). Cities with high prices were places where the electrical market was heavily politicized and where franchises (i.e. the contracts fixing rate schedules over long periods of time to allow firms to recoup capital investments) were short and subject to holdups.

This latter point is meant to let us (me and Germain) stake a claim on future research documenting the nationalization and regulation process at the municipal level in order to see what the effects on prices and outputs were. In a certain way, I am trying to establish a research agenda extending the skepticism of “accepted wisdom” that has emerged in the economic history of antitrust in the United States to the case of electricity trusts in Quebec. This first article is, I believe, a promising start for such an agenda.

 

[Figure 2: Electricity prices]

[Figure 4: Electricity prices]

 

Rosenbloom on the Colonial American Economy

Joshua Rosenbloom is an economic historian worth following if you are interested in American economic history during the colonial era. He has recently published what appears to be an overview article on the topic (probably for a book or an invited symposium) which perfectly summarizes the current state of the research. I believe that it should be widely read by interested parties. Here are key excerpts on some of the topics he discusses. I provide some comments to enrich his contribution, but these should be understood as complements to, rather than substitutes for, this excellent overview of the American economy during the colonial era.

On Economic Growth 

Mancall and Weiss (…) concluded that likely rates of per capita GDP growth could not have been higher than 0.1 percent per year and were likely closer to zero. In subsequent work, Mancall, Rosenbloom and Weiss (2004) and Rosenbloom and Weiss (2014) have constructed similar estimates for the colonies and states of the Lower South and the Mid-Atlantic regions, respectively. Applying the method of controlled conjectures at a regional level allowed them to incorporate additional, region-specific, evidence about agricultural  productivity and exports, and reinforced the finding that there was little if any growth in GDP per capita during the eighteenth century. Lindert and Williamson (2016b) have also attempted to backcast their estimates of colonial incomes. Their estimates rely in part on the regional estimates of Mancall, Rosenbloom and Weiss, but the independent evidence they present is consistent with the view that economic growth was quite slow during the eighteenth century.

This is still a contentious point (see notably this article by McCusker), but I believe that they are correct. In my own work, using both wages and incomes, I have found similar results for Canada and Leticia Arroyo Abad and Jan Luiten Van Zanden have found something roughly similar for the Latin American economies (Mexico and Peru).

It is also consistent with even simplistic renditions of the neoclassical growth model. The New World was an economy with an abundant land input whose outputs (agricultural produce) were mostly meant for local consumption. If one wanted to increase his income, all he had to do was use more inputs, available at really low cost. In this situation there is very little incentive to invest in increasing total factor productivity, and incomes would only appear to increase at the disaggregated level (following the same region over time) because we are capturing the accumulation of inputs over time (e.g. the long-settled farmer has a high income because he has had the time to build his farm, while the recently settled farmer brings the average down because he is just starting that process).

On Monetary History and Monetary Puzzles

In lieu of specie, the colonists relied heavily on barter for local exchange. In the Chesapeake transactions were often denominated in weights of tobacco. However, tobacco was not used as a medium of exchange. Rather merchants might advance credit to planters for the purchase of imported items, to be repaid at harvest with the specified quantity of tobacco. Elsewhere book credit accounts helped to facilitate transactions and reduce the need for currency. The colonists regularly complained about the shortage of specie, but as Perkins (1988, p. 165) observed, the long run history of prices does not suggest any tendency of prices to fall, as would be expected if the money supply was too small. (…) With only a few exceptions the colonies’ issuance of these notes did not give rise to inflationary pressures. There is by now a large literature that has analyzed the relationship between note issuance and prices, and finds little evidence of any correlation between the series (Weiss 1970, 1974; Wicker 1985; Smith 1985; Grubb 2016). As Grubb (2016) has argued, this suggests that while the circulation of bills of credit may have facilitated exchange by substituting for book credit or other forms of barter, they did not assume the role of currency.

In this, Rosenbloom summarizes a puzzle which has been the subject of debates since the 1970s (starting with West in 1978 in this Economic Inquiry article). In many instances (like South Carolina and Pennsylvania), large issues of paper money had no measurable effect on prices. This is a puzzle given the quantity theory of the price level. The proposed solution to the puzzle is that since the paper money printed by the colonies tended to be backed by future assets, the notes were securities that could circulate as a medium of exchange. If properly backed and redeemed, people would form expectations that these injections were temporary and there would be no effect on the price level, all else being equal. Inflation would only occur if redemption promises were not kept or were believed to be humbug. This proposition has been heavily contested given the limited information we hold on the stock of other media of exchange and on trade balances. I have my own take on this debate, in which I weigh in using a similar Canadian monetary experiment (see here), but this is a serious debate. Basically, it is a historical battleground between the proponents of the fiscal theory of the price level (see notably the classic Sargent and Wallace article) and the proponents of the quantity theory of the price level. Anyone interested in the wider macroeconomic debate should really focus on these colonial experiments because they really are the perfect testing grounds (which Rosenbloom summarizes efficiently).
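In stylized form, the two camps can be contrasted as follows (my gloss, not the notation of any of the cited papers):

```latex
% Quantity theory: with velocity V and output y stable,
% the price level P moves with the quantity of money M
MV = Py \quad \Rightarrow \quad \Delta M \propto \Delta P

% Backing theory: a bill of credit is priced like a security, by the
% discounted value of the assets (future taxes) pledged for its
% redemption, so its quantity per se need not move P
P_{bill} = \frac{E[\text{redemption value}]}{(1+r)^{t}}
```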

On Mercantilism, the Navigation Acts and American Living Standards

The requirement that major colonial exports pass through England on their way to continental markets and that manufactures be imported from England was the equivalent of imposing a tax on this trade. The resulting price wedge reduced the volume of trade and shifted some of the producer and consumer surplus to the providers of shipping and merchant services. A number of cliometric studies have attempted to estimate the magnitude of these effects to determine whether they played a role in encouraging the movement for independence (Harper 1939; Thomas 1968; Ransom 1968; McClelland 1969). The major difference in these studies arises from different approaches to formulating a counterfactual estimate of how large trade would have been in the absence of the Navigation Acts. In general, the estimates suggest that the cost to the colonists was relatively modest, in the range of 1-3 percent of annual income. Moreover, this figure needs to be set against the benefits of membership in the empire, which included the protection the British Navy afforded colonial merchants and military protection from hostile natives and other European powers.

The Navigation Acts were often cited as a burden that the colonists despised, but many economic historians have gone over their impact and it appears to have been minimal. This does not mean that they were insignificant to political events (rent-seeking coalitions tend to include small parties with intense preferences). However, it does imply that the action lies elsewhere if someone wants to explain the root causes of the revolution, or that one must consider distributional effects (see notably this article here).

These are the sections that I found the most interesting (as they relate to some of my research agendas), but the entire article provides an effective summary for anyone interested in initiating research on American economic history during the colonial era. I really recommend reading it even if all you seek is an overview for general culture.

On demography and living standards in the colonial era

This is a topic that has been bugging me. Very often, historians will (accurately) point out that mortality statistics in the United States, Canada (Quebec) and Latin America during the colonial era were better than in the comparable Old World countries (comparing French with French, British with British, Spanish with Spanish). They will then argue that this is evidence that living standards were higher. This is where I wish to make an important nuance.

Settlement colonies (so, here, there is a bigger focus on North America, but the point applies to a smaller extent to Latin America, which I am more tempted to label as extractive – see here) are generally frontier economies. They are small economies because of their small populations, which means that labor and capital are scarce relative to land. All outputs that come from the relatively abundant factor will thus tend to be cheaper if there is little international trade in the goods that the colonies are best at producing. The colonial period pretty much fits that bill. The American and Canadian colonies were basically agricultural colonies, but very few of those agricultural outputs actually crossed the Atlantic. As such, agricultural produce was cheap. This is akin to saying that nutrition was cheap.

This, by definition, will give settlement colonies an advantage in terms of biological living standards. As they are not international price takers, wheat is cheaper than in the Old World. This is why James Lemon spoke of the New World as the “best poor man’s country” (I love that expression): it was easy to earn subsistence. However, it is very hard to go beyond that. For example, in my dissertation (articles still under consideration at Cliometrica and the Canadian Journal of Economics) I found that when wages were deflated by a subsistence basket containing very few services and manufactured goods and relying heavily on untransformed foods, Canada was richer than the richest city of France. Once you shifted to a basket with marginally more transformed and manufactured goods, the advantage was wiped away.

Yet, everything indicates that mortality rates were greater in Paris and France than in Quebec City and Quebec as a whole (but not by a lot) (see images below). Similar gaps seem to exist for the United States relative to Britain, but the data is not as rich as for Quebec. However, the data that exists for New England suggests that death rates were lower than in England, while the “bare bones” real incomes measured by Lindert and Williamson show that New England may have been poorer than Great Britain (not by much though).

[Figure: Crude death rates]

[Figure: Infant mortality rates (IMR)]

I am not saying that demographic and biological data is worthless. Quite the contrary (even if I wanted to say that, I could not, since I have a paper on the heights of French-Canadians from 1780 to 1830)! The point is that data matters in context. The world is full of small non-linearities between variables. While “good” demographic outcomes generally track “good” economic outcomes, there are contexts where the relation may be weaker (curvilinear relations between variables). I think that this is a good example of that point.

On why complexity from simple rules is counterintuitive

“… normally we start from whatever behavior we want to get, then try to design a system that will produce it. Yet to do this reliably, we have to restrict ourselves to systems whose behavior we can readily understand and predict–for unless we can foresee how a system will behave, we cannot be sure that the system will do what we want.

“But unlike engineering, nature operates under no such constraint. So there is nothing to stop systems like those at the end of the previous section from showing up. And in fact one of the important conclusions of this book is that such systems are actually very common in nature.

“But because the only situations in which we are routinely aware both of the underlying rules and overall behavior are ones in which we are building things or doing engineering, we never normally get any intuition about systems like the ones at the end of the previous section.”

Stephen Wolfram

The deeper you dig into math and computer science, the more Hayekian things look. The impossibility of economic calculation under socialism has important counterparts in Gödel and Turing/Church.

On Antitrust, the Sherman Act and Accepted Wisdom

I am generally skeptical of “accepted wisdom” in many policy debates. People involved in policy-making are generally politicians who carefully craft justifications (i.e. cover stories) in which self-interest and the common good cannot be disentangled easily. These justifications can easily become “accepted wisdom” even if incorrect. I am not saying that “accepted wisdom” is without value or that it is always wrong, but more often than not it is accepted at face value without question.

My favorite example is “antitrust”. In the United States, the Sherman Act (the antitrust bill) was first introduced in 1889 (and passed in 1890). The justification often given is that it was meant to promote competition as proposed by economists. However, as is often pointed out, the bill was passed well before the topic of competition in economics had been unified into a theoretical body. It was also rooted in protectionist motives. Moreover, the bill was passed after the industries most affected saw prices fall faster than the overall price level and output increase faster than overall output (see here, here, here, here and here). Combined, these elements should give pause to anyone willing to cite the “accepted wisdom”.

More recently, economist Patrick Newman provided further reason for caution in an article in Public Choice. Interweaving political history and the biographical details about senator John Sherman (he of the Sherman Act), Newman tells a fascinating story about the self-interested reasons behind the introduction of the act.

In 1888, John Sherman failed to obtain the Republican presidential nomination – a failure which he blamed on the governor of Michigan, Russell Alger. Out of malice and a desire for vengeance, Sherman defended his proposal by citing Alger as the ringmaster of one of the “trusts”. Alger, himself a presidential hopeful for the 1892 cycle, was politically crippled by the attack (even though the charge appears to have been untrue). Obviously, this was not the sole reason for the Act (Newman highlights the nature of the Republican coalition which would have demanded such an act). However, once Alger was fatally wounded politically, Sherman appears to have lost interest in the Act and left others to push it through.

As such, the passage of the bill was partly motivated by political self-interest (thus illustrating the key point of behavioral symmetry that underlies public choice theory). Entangled in the “accepted wisdom” is a wicked tale of revenge between politicians. At such a sight, it is hard not to be cautious with regards to “accepted wisdom”.

A quote

The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it.

Mark Weiser

Hayek would have liked this quote about computers. On his behalf, I’m going to co-opt it as a description of the miracle of markets.

WP: Does Bitcoin Have the Right Monetary Rule?

The growing literature on Bitcoin offers little more than passing mentions of Bitcoin’s monetary rule. That is the topic of this short paper.

The growing literature on Bitcoin can be divided in two groups. One performs an economic analysis of Bitcoin focusing on its monetary characteristics. The other takes a financial look at the price of Bitcoin. Interestingly, neither of these groups has given much more than passing comments to the problem of whether or not Bitcoin has the right monetary rule. This paper argues that Bitcoin in particular, and cryptocurrencies in general, do not have a good monetary rule, and that this shortcoming seriously limits their prospect of becoming a well-established currency.
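For context, Bitcoin’s monetary rule is a hard-coded, perfectly inelastic supply path: the block subsidy started at 50 bitcoins and halves every 210,000 blocks (roughly every four years), regardless of the demand to hold bitcoins. A quick sketch of the implied long-run stock:

```python
# Bitcoin's supply rule: the block subsidy starts at 50 BTC and halves every
# 210,000 blocks, so the stock converges geometrically to ~21 million BTC.
# (This sketch ignores the protocol's rounding of rewards to whole satoshis.)

def asymptotic_supply(eras=33):
    blocks_per_era, subsidy, total = 210_000, 50.0, 0.0
    for _ in range(eras):
        total += blocks_per_era * subsidy
        subsidy /= 2.0
    return total

print(f"Long-run supply: ~{asymptotic_supply():,.0f} BTC")  # ~21,000,000
```

The supply path never responds to changes in money demand, which is precisely the “monetary rule” question the paper raises.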

Download from SSRN.

If causality matters, should the media talk about mass shootings?

My answer to that question is “maybe not” or, at the very least, “not as much”. Given the sheer amount of fear that can be generated by a shooting, it is understandable to be shocked, and the need to talk about it is equally understandable. However, I think this is an opportune (if unfortunate) moment to consider the incentives at play.

I assume that criminals, even crazy ones, are relatively rational. They weigh the pros and cons of potential targets, they assess the most efficient tool or weapon to accomplish the objectives they set, etc. That entails that, in their warped view of the world, there are benefits to committing atrocities. In some instances, the benefit is pure revenge, as was the case in one of the infamous shootings in my hometown of Montreal (i.e. a university professor decided to avenge himself for the slights caused by other professors). In other instances, the benefit is defined in part by the attention that the violent perpetrator attracts to himself. This was the case in another infamous shooting in Montreal, where the killer showed up at an engineering school and killed 14 students and staff. He committed suicide and left a statement that read like a garbled political manifesto. In other words, killers can be trying to “maximize” media hits.

This “rational approach” to mass shootings opens the door to a question about causality: is the incidence of mass shootings determining the number of media hits, or is the number of media hits determining the incidence of mass shootings? In a recent article in the Journal of Public Economics, the latter causal link has been explored with regards to terrorism. Using the New York Times‘ coverage of some 60,000 terrorist attacks in 201 countries over 43 years, Michael Jetter exploited the exogenous shocks caused by natural disasters, which crowd out the reporting of terrorist attacks, to study how reduced attention to terrorism affected subsequent attacks. That way, he could arrive at some idea (by the negative) of causality. He found that one New York Times article increased attacks by 1.4 in the following three weeks. Now, this applies to terrorism, but why would it not apply to mass shooters? After all, they are very similar in their objectives and methods – at the very least with regards to the shooters who seek attention.
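As I understand it, the identification amounts to an instrumental-variables design: disasters shift coverage for reasons unrelated to terrorists’ choices. A simulated sketch of that logic (all numbers invented; the variables are stand-ins, not Jetter’s actual data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Simulated world: coverage causally raises later attacks (true beta = 1.4),
# but an unobserved confounder drives both, biasing naive OLS upward.
confounder = rng.normal(size=n)
disaster = rng.binomial(1, 0.1, size=n)            # exogenous instrument
coverage = 2.0 - 1.5 * disaster + confounder + rng.normal(size=n)
attacks = 1.4 * coverage + 2.0 * confounder + rng.normal(size=n)

def slope(x, y):
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

# OLS is contaminated by the confounder; the IV (Wald) estimator is not,
# since disasters move attacks only through their effect on coverage.
print("OLS estimate:", round(slope(coverage, attacks), 2))   # ~2.3 (biased)
print("IV estimate: ",
      round(slope(disaster, attacks) / slope(disaster, coverage), 2))  # ~1.4
```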

If the causality runs in the direction suggested by Jetter, then the full-day coverage offered by CNN or NBC or FOX is making things worse by increasing the likelihood of an additional shooting. For some years now, I have been suggesting this possibility to journalist friends of mine and arguing that maybe the best way to talk about terrorism or mass shooters is to move them from the front page of a newspaper to a one-inch box on page 20, or to move the mention from the interview to the crawler at the bottom of the screen. In each discussion, my claim about causality is brushed aside, either with incredulity at the logic and its empirical support or with something like “yes, but we can’t really not talk about it”. And so the thinking ends there. However, I am quite willing to state that it’s time for media outlets to reflect deeply upon their role and how best to accomplish the role they want to play. And that requires thinking about causality and accepting that “splashy” stories may be better left ignored.

Do risk preferences account for 1.4 percentage points of the gender pay gap?

A few days ago, this study of gender pay differences for Uber drivers came out. The key finding, that women earned 7% less than men, was stunning because Uber uses a gender-blind algorithm. The figure below was the most interesting one from the study as it summarized the differences in pay quite well.

[Figure: Gender differences in Uber driver pay]

To explain this, the authors highlight a few explanations borne out by the data: men drive faster, allowing them to serve more clients; men have spent more time working for Uber and have more experience that may be unobserved; and choices of where and when to drive matter. It is this latter point that I find fascinating because it speaks to an issue that I keep underlining regarding pay gaps when I teach.

For reasons that may be sociological or biological (I am agnostic on that), men tend to occupy jobs that have high rates of occupational mortality (see notably this British study on the topic) in the form of accidents (think construction, firemen) or diseases (think miners and trashmen). They also tend to take jobs in more remote areas in order to gain access to a distance premium (which is a form of risk in the sense that it affects family life etc.). The premiums for taking risky jobs are well documented (see notably the work of Kip Viscusi, who measured the wage premium accruing to workers employed in bars where smoking was permitted). If these premiums are non-negligible and the jobs that carry them tend to be taken by men (who are willing to incur the risk of being injured or falling sick), then risk preferences matter to the gender wage gap.

However, these are hard to measure properly in order to assess the share of the wage gap truly explained by discrimination. Here, with the case of Uber, we can get an idea of the magnitude of the differences. Male Uber drivers prefer riskier hours (more risk of having an inebriated and potentially aggressive client), riskier places (high traffic with more risk of accidents) and riskier behavior (driving faster to get more clients per hour). The return to taking these risks is greater earnings. According to the study, 20% of the gap stems from this series of choices, or roughly 1.4 percentage points.
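The arithmetic behind the number in this post’s title is simply:

```latex
% 20% of the 7% earnings gap is attributable to these risk-related choices
0.20 \times 7\% = 1.4 \text{ percentage points}
```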

I think that this is sufficiently large to warrant further consideration in future debates. More often than not, the emphasis is on education, experience, marital status, and industry codes (NAICS codes) to explain wage differences. The use of industry codes has never convinced me. There is wide variance within industries regarding work accidents and diseases. The NAICS classifies industries by wide sectors and then by sub-sectors of activity (see for example the six-digit codes for agriculture, forestry, fishing and hunting here). This does not allow one to account for the risks associated with a job. There are a few studies that try to account for this problem, but they are … well … few in number. And rarely are they considered in public discussions.

Here, the Uber case shows the necessity of bringing this subtopic back in order to properly explain the wage gap.

Prices in Canada since 1688

A few days ago, I received good news: the Canadian Journal of Economics has accepted my paper constructing a consumer price index for Canada between 1688 and 1850 from homogeneous sources (the account books of religious congregations). I have to format the article to the guidelines of the journal and attach all my data, and then it will be good to go (I am planning on doing this over the weekend). In the meanwhile, I thought I would share the finalized price index so that others can see it.

First, we have the price index that focuses on the period from 1688 to 1850. Most indexes that exist for pre-1850 Canada (or Quebec, since I assume that Quebec is representative of pre-1850 Canadian price trends) are short-term, include mostly agricultural goods and have no expenditure weights with which to create a basket. My index is the first to use the same type of sources continuously over such a long period, and it is also the first to use a large array of non-agricultural goods. It also has a weighting scheme to create a basket.

[Figure: Consumer price index for Canada, 1688-1850]

The issue of adding non-agricultural goods was especially important because there were marked differences in the evolution of different types of goods. Agricultural goods (see next image) saw their nominal prices continually increase between the 17th and 19th centuries. However, most other prices – imported goods, domestically produced manufactured goods etc. – either fell or remained stable. These are very pronounced changes in relative prices. It shows that relying on an agricultural-goods price index will overstate the amount of “deflating” needed to arrive at real wages or incomes. The image below shows the nominal price evolution of the groupings of goods described above.

[Figure: Nominal prices by grouping of goods, 1688-1850]
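To see why basket composition matters when deflating, consider a toy fixed-basket (Laspeyres) calculation with invented numbers, not my actual data: an agriculture-only index rises far more than a full basket in which imported and manufactured goods, whose prices were flat or falling, carry some weight.

```python
# Toy Laspeyres indexes (invented numbers). Prices are indexed to 1.0 in the
# base year; agricultural prices rise while manufactured/imported prices fall.
base  = {"wheat": 1.0, "lard": 1.0, "cloth": 1.0, "iron goods": 1.0}
later = {"wheat": 2.0, "lard": 1.8, "cloth": 0.8, "iron goods": 0.9}

def laspeyres(weights):
    # Index = sum of base-year expenditure weights times price relatives
    return sum(w * later[g] / base[g] for g, w in weights.items())

agri_only   = {"wheat": 0.6, "lard": 0.4}
full_basket = {"wheat": 0.4, "lard": 0.2, "cloth": 0.25, "iron goods": 0.15}

print(f"Agricultural-goods index: {laspeyres(agri_only):.2f}")    # 1.92
print(f"Full-basket index:        {laspeyres(full_basket):.2f}")  # ~1.50
```

Deflating a nominal wage by the first index would overstate inflation and thus understate real income growth.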

And finally, the pièce de résistance! I link my own index to existing post-1850 indexes so as to generate the evolution of prices in Canada since 1688. The figure below shows the evolution of the price index over … 328 years (I ended the series at 2015, but extra years forward can be added). In the years to come, I will probably try to extend this backwards as much as possible, at least to 1665 (the first census in Canada), and will probably approach Statistics Canada to see if they would like to incorporate this contribution into their wide database on the macroeconomic history of Canada.

[Figure: Price index for Canada, 1688-2015]

2018 Hayek Essay Contest

The 2018 General Meeting of the Mont Pelerin Society will take place from September 30 – October 6, 2018 at ExpoMeloneras and Lopesan Hotels in Meloneras, Gran Canaria, Canary Islands. As with past general meetings, the Mont Pelerin Society is currently soliciting submissions for Friedrich A. Hayek Fellowships. The fellowships will be awarded through the Hayek Essay Contest.

The Hayek Essay Contest is open to all individuals 36 years old or younger. Entrants should write a 5,000 word (maximum) essay that addresses the quotation(s) and question(s) detailed on the contest announcement (available at the above link). The deadline for submissions is May 31, 2018. The winners will be announced on July 31, 2018. Essays must be submitted in English only. Electronic submissions should be sent in PDF format to this email address (mps.youngscholars@ttu.edu). Authors of winning essays must present their papers at the General Meeting to receive their award. The essays will be judged by an international panel of three members of the Society.

Please feel free to share this announcement with any individuals who may have an interest in submitting an essay for consideration of a fellowship award. All questions may be directed to the MPS Young Scholars Program Committee by email at mps.youngscholars@ttu.edu or phone at +1.806.742.7138.

MPS Young Scholars Program Committee

The best economic history papers of 2017

As we are now solidly into 2018, I thought that it would be a good idea to highlight the best articles in economic history that I read in 2017. Obviously, the “best” is subjective and reflects my preferences. Nevertheless, it is a worthy exercise in order to expose some important pieces of research to a wider audience. I limited myself to five articles (I will do my top three books in a few weeks). However, if there is interest in the present post, I will publish a follow-up with another five articles.

O’Grady, Trevor, and Claudio Tagliapietra. “Biological welfare and the commons: A natural experiment in the Alps, 1765–1845.” Economics & Human Biology 27 (2017): 137-153.

This one is by far my favorite article of 2017. I stumbled upon it quite by accident. Had this article been published six or eight months earlier, I would never have been able to fully appreciate its contribution. Basically, the authors use the shocks induced by the wars of the late 18th and early 19th centuries to study a shift from “self-governance” to “centralized governance” of common pool resources. When they speak of “commons” problems, they really mean “commons”, as the communities they study were largely pastoral communities with land held in common. Using a difference-in-differences design in which the treatment is a region becoming “centrally governed” (i.e. when organic local institutions were swept aside), they test the impact of these top-down institutional changes on biological welfare (as proxied by infant mortality rates). They find that these replacements worsened outcomes.
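In canonical form, the design compares communities before and after they lose self-governance, relative to communities that keep it (my gloss, not their exact specification):

```latex
% alpha_i: community fixed effect; gamma_t: year fixed effect
% a positive estimate of beta means that sweeping aside the organic
% local institutions raised infant mortality
IMR_{it} = \alpha_i + \gamma_t + \beta \, CentralGov_{it} + \varepsilon_{it}
```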

Now, this paper is fascinating for two reasons. First, the authors offer a clear exposition of their methodology and approach. They give just the perfect amount of institutional detail to assuage doubts. Second, this is a strong illustration of the points made by Elinor Ostrom and Vernon Smith. These two economists emphasize different aspects of the same thing. Smith highlights that “rationality” is “ecological” in the sense that it is an iterative process of information discovery that improves outcomes. This includes the generation of “rules of the game” which are meant to sustain exchanges. These rules need not be formal edifices. They can be norms, customs, mores and habits (generally supported by the discipline of continuous dealings and/or investments in social-distance mechanisms). For her part, Ostrom emphasized that the tragedy of the commons can be resolved through multiple mechanisms (what she calls polycentric governance) in ways that do not necessarily require a centralized approach (or even market-based approaches).

In the logic of these two authors, attempts at “imposing” a more “rational” order (from the perspective of the planner of this order) may backfire. This is why Smith often emphasizes the organic nature of things like property rights. It also shows that behind seemingly “dumb” peasants there is often the weight of long periods of experimentation to adapt rules and norms to the constraints faced by the community. In this article, we can see both things – the backfiring and, by logical implication, the strengths of the organic institutions that were swept away.

Fielding, David, and Shef Rogers. “Monopoly power in the eighteenth-century British book trade.” European Review of Economic History 21, no. 4 (2017): 393-413.

In this article, the authors use a legal change caused by the end of the legal privileges of the Stationers’ Company (which amounted to an easing of copyright laws). The market for books may appear “uninteresting” to mainstream economics. However, this would be a dramatic error. The “abundance” of books is really a recent development. Bear in mind that the most erudite monk of the late Middle Ages had fewer than fifty books from which to draw knowledge (this fact is a vague recollection of mine from Kenneth Clark’s art history documentary, aired by the BBC in the late 1960s). Thus, the emergence of a wide market for books – which is dated within the period studied by the authors of this article – should not be ignored. It should be considered one of the most important developments in Western history. This is best put by the authors when they say that “the reform of copyright law has been regarded as one of the driving forces behind the rise in book production during the Enlightenment, and therefore a key factor in the dissemination of the innovations that underpinned Britain’s Industrial Revolution”.

However, while they agree that the rising popularity of books in the 18th century is an important historical event, they contest the idea that liberalization had any effect. They find that the opening up of the market to competition had little effect on prices and book production. They also find that mark-ups fell, but that this could not be attributed to liberalization. At first, I found these results surprising.

However, when I took the time to think about it, I realized that there was no reason to be surprised. First, many changes have been heralded as crucial moments in history. More often than not, the importance of these changes has been overstated. A good example of an overstated change is the abolition of the Corn Laws in England in the 1840s. The reduction in tariffs, it is argued, ushered Britain into an age of free trade and falling food prices.

In reality, as John Nye discusses, protectionist barriers did not fall as fast as many argued, and there were reductions prior to the 1846 reform, as Deirdre McCloskey pointed out. It also seems that the Corn Laws did not have substantial effects on consumption or the economy as a whole (see here and here). While their abolition probably helped increase living standards, the significance of the moment seems overstated. The same thing is probably at play with the book market.

The changes discussed by Fielding and Rogers did not address the underlying roots of the market power enjoyed by industry players. In other words, it could be that the reform was too modest to have an effect. This is suggested by the work of Petra Moser. The reform studied by Fielding and Rogers appears to have been short-lived, as evidenced by changes to copyright laws in the early 19th century (see here and here). Moser’s results point to effects much larger (and positive for consumers) than those of Fielding and Rogers. Given the importance of the book market to stories of innovation in the industrial revolution, I really hope that this sparks a debate between Moser and Fielding and Rogers.

Johnson, Noel D., and Mark Koyama. “States and economic growth: Capacity and constraints.” Explorations in Economic History 64 (2017): 1-20.

I am biased, as I am fond of most of the work of these two authors. Nevertheless, I think that their contribution to the state capacity debate is a much needed one. I am very skeptical of the theoretical value of the concept of state capacity. The question always lurking in my mind is: “capacity to do what?”

A ruler who can develop and use a bureaucracy to provide the services of a “productive state” (as James Buchanan would put it) is also capable of acting like a predator. I actually emphasize this point in my work (revise and resubmit at Health Policy & Planning) on Cuban healthcare: the Cuban government has the capacity to allocate large quantities of resources to healthcare, in amounts well above what is observed for other countries in the same income range. Why? Largely because the regime uses health care for a) international reputation and b) actually supervising the local population. As such, members of the regime are able to sustain their role even if the high level of “capacity” comes at the expense of living standards in dimensions other than health (e.g. low incomes). Capacity is not the issue; it is capacity interacting with constraints that is interesting.

And that is exactly what Koyama and Johnson say (not in the same words). They summarize a wide body of literature in a cogent manner that clarifies the concept of state capacity and its limitations. In doing so, they ended up proposing that the “deep roots” question that should interest economic historians is how “constraints” came to be efficient at generating “strong but limited” states.

In that regard, the one thing that surprised me about their article was the absence of Elinor Ostrom’s work. When I read about “polycentric governance” (Ostrom’s core concept), I imagine the overlap of different institutional structures that reinforce each other (note: these structures need not be formal ones). They are governance providers. If these “governance providers” have residual claimants (i.e. people with skin in the game), they have incentives to provide governance in ways that increase the returns to the realms they govern. Attempts to supersede these institutions (e.g. the erection of a modern nation state) require dealing with these providers. They are the main demanders of the constraints which are necessary to protect their assets (what my friend Alex Salter calls “rights to the realm“). As Europe pre-1500 was a mosaic of such governance providers, there would have been great forces pushing for constraints (i.e. bargaining over constraints).

I think that this is where the literature on state capacity should orient itself. It is in that direction that it is most likely to bear fruit. In fact, some steps have already been taken in that direction. For example, my colleagues Andrew Young and Alex Salter have applied this “polycentric” narrative to explain the emergence of “strong but limited” states by focusing on late medieval institutions (see here and here). Their approach seems promising. Yet it is the work of Koyama and Johnson that has actually created the room for such contributions, by efficiently summarizing a complex (and sometimes contradictory) literature.

Bodenhorn, Howard, Timothy W. Guinnane, and Thomas A. Mroz. “Sample-selection biases and the industrialization puzzle.” The Journal of Economic History 77, no. 1 (2017): 171-207.

Elsewhere, I have peripherally engaged discussants of the “antebellum puzzle” (see my article here in Economics & Human Biology on the heights of French-Canadians born between 1780 and 1830). The antebellum puzzle refers to the possibility that the biological standard of living (e.g. heights, nutrition, mortality risks) fell while the material standard of living (e.g. wages, incomes, access to more services and to a wider array of goods) increased during the decades leading up to the American Civil War.

I am inclined to accept the idea of short-term paradoxes in living standards. The early 19th century witnessed a reversal in rural-urban concentration in the United States. The country had been “deurbanizing” since the colonial era (i.e. cities represented an increasingly smaller share of the population). As such, the reversal implied a shock to cities whose institutions were geared to deal with slowly increasing populations.

The influx of people into cities created problems of public health, while the higher population density favored the propagation of infectious diseases at a time when our understanding of germ theory was nil. One good example of the problems posed by this rapid change has been provided by Gergely Baics in his work on the public markets of New York and their regulation (see his book here – a must read). In that situation, I am not surprised that there was a deterioration in the biological standard of living. What I see is that people chose to trade off longer, poorer lives for shorter, wealthier ones. A pretty legitimate (albeit depressing) choice if you ask me.

However, Bodenhorn et al. (2017) will have none of it. In a convincing article that has shaken my priors, they argue that there is a selection bias in the heights data – the main measurement used in the antebellum puzzle debate. Most of the data on heights comes either from prisoners or from enrolled volunteer soldiers (note: conscripts would not generate the problem they describe). The argument they make is that as incomes grow, the opportunity cost of committing a crime or of joining the army grows. This creates a selection bias whereby the sample is increasingly composed of those with the lowest opportunity costs. In other words, we are speaking of the poorest in society, who also tended to be shorter. Simultaneously, as incomes grew, fewer tall (i.e. rich) individuals committed crimes or joined the army. This logic is simple and elegant. In fact, this is the kind of data problem that every economist should care about when they design their tests.
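A quick simulation conveys the mechanism (all parameters invented): hold the population’s height distribution fixed, let incomes grow, and make enlistment or imprisonment less likely for higher earners. The sampled mean height falls even though the population has not changed.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

# Fixed population: height is positively correlated with earnings potential.
ability = rng.normal(size=n)
height_cm = 170 + 4 * ability + 5 * rng.normal(size=n)

for growth in (0.0, 0.5, 1.0):          # economy-wide income growth (invented units)
    income = ability + growth
    p_enlist = 1 / (1 + np.exp(income))  # enlistment less likely as income rises
    sampled = rng.random(n) < p_enlist
    print(f"growth={growth:.1f}  population mean={height_cm.mean():.1f} cm  "
          f"sample mean={height_cm[sampled].mean():.1f} cm")
```

The population mean never moves; only the composition of the observed sample does, which is exactly the bias Bodenhorn et al. describe.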

Once they control for this problem (through a meta-analysis), the puzzle disappears. I am not convinced by the latter part of the claim. Nevertheless, it is very likely that the puzzle is much smaller than initially gleaned. In yet-to-be-published work, Ariell Zimran (see here and here) argues that the antebellum puzzle is robust to the problem of selection bias but that it is indeed diminished. This concedes a large share of the argument to Bodenhorn et al. While there is much left to resolve, this article should be read as it constitutes one of the most serious contributions to the field of economic history published in 2017.

Ridolfi, Leonardo. “The French economy in the longue durée: a study on real wages, working days and economic performance from Louis IX to the Revolution (1250–1789).” European Review of Economic History 21, no. 4 (2017): 437-438.

I have discussed Leonardo’s work elsewhere on this blog before. However, I must do it again. The article mentioned here is the dissertation summary that resulted from Leonardo being a finalist for the best dissertation award granted by the EHES (full dissertation here). As such, it is not exactly the “best article” published in 2017. Nevertheless, it makes the list because of the possibilities that Leonardo’s work has unlocked.

When we discuss the origins of the British Industrial Revolution, the implicit question lurking not far away is “Why Did It Not Happen in France?” The problem with that question is that the data available for France (see notably my forthcoming work in the Journal of Interdisciplinary History) is in no way comparable with what exists for Britain (which does not mean that the British data is of great quality, as Judy Stephenson and Jane Humphries would point out). Most estimates of the French economy pre-1790 were either conjectural or required a wide array of theoretical considerations to arrive at a deductive portrait of the situation (see notably the excellent work of Phil Hoffman). As such, comparisons meant to tease out improvements to our understanding of the industrial revolution are hard to accomplish.

For me, the absence of rich data for France was particularly infuriating. One of my main arguments is that the key to explaining divergence within the Americas (from the colonial period onwards) resides not in the British or Spanish Empires, but in the variation that the French Empire and its colonies provide. After all, the French colony of Quebec had a lot in common geographically with New England, but the institutional differences were nearly as wide as those between New England and the Spanish colonies in Latin America. As such, as I spent years assembling data for Canada to document living standards, in order to eventually lay down the grounds to test the role of institutions, I was infuriated that I could do so little to compare with France. Little did I know that while I was doing my own work, Leonardo was plugging this massive hole in our knowledge.

Leonardo shows that while living standards in France increased from 1550 onward, their level was far below the ones found in other European countries. He also shows that real wages stagnated in France, which means that the only reason behind increased incomes was a longer work year. This work has also unlocked numerous other possibilities. For example, it will be possible to extend to France the work of Nicolini and of Crafts and Mills regarding the existence of Malthusian pressures. This is probably one of the greatest contributions of the decade to the field of economic history, because Leonardo simply did the dirty work of assembling data to plug what I think is the biggest hole in the field.

On the “tea saucer” of income inequality since 1917

I often disagree with the many details that underlie the arguments of Thomas Piketty and Emmanuel Saez. That being said, I am also a great fan of their work and of them in general. In fact, I think that both have made contributions to economics that I would be envious to equal. To be fair, their U-curve of inequality is pretty much a well-confirmed fact by now: everyone agrees that the period from 1890 to 1929 was a high point of inequality, that inequality fell until the 1970s, and that it picked up again thereafter.

Nevertheless, while I am convinced of the curvilinear aspect of the evolution of income inequality in the United States as depicted by Piketty and Saez, I am not convinced by the amplitudes. In their 2003 article, the U-curve of inequality really looks like a “U” (see image below). Since that article, many scholars have investigated the extent of the increase in inequality post-1980 or so. Many have attenuated the increase, but they still find an increase (see here, here, here, here, here, here, here, here and here). The problem is that everyone has been considering the increase – i.e. the right side of the U-curve. Little attention has been devoted to the left side of the U-curve, even though that is where data problems should be considered most carefully for the generation of a stylized fact. This is the contribution I have been coordinating and working on for the last few months alongside John Moore, Phil Magness and Phil Schlosser.

[Figure: The U-curve of top income shares from Piketty and Saez (2003)]

To arrive at their proposed series of inequality, Piketty and Saez used the IRS Statistics of Income (SOI) to derive top income fractiles. However, the IRS SOI have many problems. The first is that between 1917 and 1943, there are many years in which fewer than 10% of the potential tax population filed a tax return. This prohibits the use of a top 10% income share in many years unless an adjustment is made. The second is that prior to 1943 the IRS reported net income, while after 1943 it reported adjusted gross income. As such, to link the post-1943 series with the pre-1943 series, an additional adjustment is needed. Piketty and Saez made some seemingly reasonable assumptions, but these have never been put to the test regarding sensitivity and robustness. This is leaving aside issues of data quality (I am not convinced the IRS data is very good, as most of it was self-reported pre-1943, a period with wildly varying tax rates). The question here is “how good” are the assumptions?

What we did is verify each assumption to test its validity. The first one we tackle is the adjustment for the low number of returns. To make their adjustments, Piketty and Saez used the fact that single households and married households filed in different quantities relative to their total populations. Their idea was that, in a year with a large number of returns, the ratio of single to married filers could be used to adjust the series. The year they used is 1942. This is problematic, as 1942 is a war year with self-reporting, when large numbers of young American males were abroad fighting. Using 1941, the last US peacetime year, instead shows dramatically different ratios. Using these ratios knocks a few points off the top 10% income share. Why did they use 1942? Their argument was that there was simply not enough data to make the correction in 1941. They point to a special tabulation in the 1941 IRS-SOI of 112,472 1040A forms from six states, which was not deemed sufficient to make the corrections. However, later in the same document, there is a larger and sufficient sample of 516,000 returns from all 64 IRS collection districts (roughly 5% of all forms). By comparison, the 1942 sample Piketty and Saez used for their correction had only 455,000 returns. Given the war year and the sample size, we believe that 1941 is a better year from which to make the adjustment.

Second, we also questioned the smoothing method used to link the net-income-based series with the adjusted-gross-income-based series (i.e. the pre-1943 and post-1943 series). The reason for this is that the implied adjustment for deductions made by Piketty and Saez is actually larger than all the deductions claimed that were eligible under the definition of Adjusted Gross Income – a sign of overshooting on their part. Using the limited data available for deductions by income groups and making some (very conservative) assumptions to move further back in time, we found that adjusting for “actual deductions” yields a lower level of inequality. This contrasts with the fixed multipliers which Piketty and Saez used pre-1943.

Third, we question their justification for not using the Kuznets income denominator. They argued that Kuznets’ series yielded an implausible figure because, in 1948, its use yielded a greater income for non-filers than for filers. However, this is not true of all years. In fact, it is only true after 1943. Before 1943, the income of non-filers is proportionally equal to the one they themselves use post-1944 to impute the income of non-filers. This is largely the result of a change in accounting definitions: incomes were reported as net income before 1943 and as gross income after that point. This is important because the stylized fact of a pronounced U-curve is heavily sensitive to the assumption made regarding the denominator.
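To see how much the denominator assumption matters, here is a stylized example (invented numbers): the same top-fractile income total yields very different top 10% shares depending on how much income is imputed to non-filers.

```python
# Stylized sensitivity of a top income share to the income denominator
# (all figures invented for illustration).
top_decile_income = 40.0   # income accruing to the top 10% of tax units
filer_income = 90.0        # total income reported by filers

for nonfiler_income in (10.0, 20.0, 30.0):   # alternative imputations
    share = top_decile_income / (filer_income + nonfiler_income)
    print(f"imputed non-filer income = {nonfiler_income:>4.0f} "
          f"-> top 10% share = {share:.1%}")
```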

These three adjustments are pretty important in terms of overall results (see image below). The pale blue line is that of Piketty and Saez as depicted in their 2003 paper in the Quarterly Journal of Economics. The other blue line just below it shows the effect of deductions only (the adjustment for missing returns affects only the top 10% income share). All the other lines that mirror these two just below (with the exception of the darkest blue line, which is the original Kuznets inequality estimate) compound our corrections with three potential corrections for the denominator. The U-curve still exists, but it is not as pronounced. When you combine the adjustments made by Mechling et al. (2017) and Auten and Splinter (2017) for the post-1960 period (green and red lines) with ours, you can still see the curvilinear shape, but it looks more like a “tea saucer” than a pronounced U-curve.

In a way, I see this as a simultaneous complement to the work of Richard Sutch and to the work of Piketty and Saez: the U-curve still exists, but the timing and pattern are slightly more representative of history. This was a long paper to write (and it is a dry read given the amount of methodological discussion), but it was worth it in order to improve upon the state of our knowledge.

[Figure: Top income shares under alternative adjustments, 1917 onward]

On Ronald Coase as an Economic Historian?

Can we consider Ronald Coase an economic historian? Most economists or social scientists who read this blog probably appreciate Coase largely for his “Nature of the Firm” and “Problem of Social Cost”. Personally, while I appreciate these works for their theoretical insights (well, isn’t that an understatement!), I appreciate Coase much more for articles that very few know about.

Generally, beyond these two articles, most economists do not know what Coase wrote about. Some might know about Coase’s conjecture regarding durability and monopoly (a single firm producing a durable good cannot be a monopoly because it competes with its future self) or about his work on the Federal Communications Commission (which is an application of his two main papers).

Fewer people know about his piece on the lighthouse in economics. While it is not an unknown piece, it is mostly known within the subfield of public economics, as it concerns the scope for the private provision of public goods. Generally, I have found that those who know about the piece know the “takeaway”: that lighthouses (which, because of their low marginal costs and non-excludability, have been deemed public goods ever since J.S. Mill) could be produced privately. While this was indeed Coase’s point, this summary (like the one Stigler made of the Coase Theorem) misses the peripheral insights that matter. Coase did the economic historian’s job of documenting the institutional details behind the provision of lighthouses, which sparked debates in journals such as the Journal of Legal Studies, Cambridge Journal of Economics, European Review of Economic History, Public Choice and Public Finance Review (they continue to this day, and I am trying to contribute to them with this piece that Rosolino Candela and I have recently submitted). It remains unclear whether the lighthouse can even be considered a public good, or whether it was merely an instance of government failure rather than market failure (or the reverse). Regardless of the outcome, if you read the lighthouse paper by Coase, you will read an application of theory to history that brings a “boring” topic (i.e. maritime safety pre-1900) to life through theory. The lighthouse paper is an application of industrial organization through the Coasean lenses of transaction costs and joint provision. And it is a fine application, if I may say so!

But that is not his only such piece! Has anyone ever read his article in the Journal of Law & Economics on Fisher Body and vertical integration? Or his piece on the British Post Office and private messengers in the same journal? In those articles, Coase brings theory to life by asking simple questions of history in ways that force us to question common conceptions like “vertical integration was the result of holdup problems” or “postal services need to be publicly provided”. In both of these articles and the lighthouse article, Coase applies simple theoretical tools to cut through a maze of details in order to answer questions of great relevance to economic theory and even policy (i.e. the post office example). And this is why, earlier in 2017, I mentioned that Coase should be considered in the league of the top economic historians.

On the popularity of economic history

I recently engaged in a discussion (a twittercussion) with Leah Boustan of Princeton over the “popularity” of economic history within economics (depicted below). As one can see from the purple section, it is as popular as those hard candies that grandparents give out on Halloween (to be fair, I like those candies, just as I like economic history). More importantly, the share seems to be smaller than at its 1980s peak. It also seems that the Nobel prize going to Fogel and North had literally no effect on the subfield’s popularity. Yet I keep hearing that “economic history is back”. After all, the John Bates Clark medal went to Donaldson of Stanford this year, which should confirm that economic history is a big deal. How can this be reconciled with the figure depicted below?

[Figure: The “popularity” of economic history within economics]

As I explained in my twittercussion with Leah, I think that what is popular is the use of historical data. Economists have realized that if some time is spent in archives collecting historical data, great datasets can be assembled. However, they do not necessarily consider themselves “economic historians”, and as such they do not use the JEL code associated with history. This is an improvement over a field in which Arthur Burns (the former Fed Chair) supposedly said during the 1970s that we needed to look at history to better shape monetary policy (and by history, he meant the 1950s). However, while there are advantages, there is an important danger which is left aside.

The creation of a good dataset has several advantages. The main one is that it increases time coverage. By increasing time coverage, you can “tackle” the big questions and go for the “big answers” through the generation of stylized facts. Another advantage (and this is the one that summarizes my whole approach) is that historical episodes can provide neat testing grounds that give us a window onto important economic issues. My favorite example of this is the work of Petra Moser at NYU-Stern. Without going into too much detail (because her work was my big discovery of 2017), she used a few historical examples, which she painstakingly detailed, in order to analyze the effect of copyright laws. Her results have important ramifications for debates regarding “science as a public good” and “science as a contribution good” (see the debates between Paul David and Terence Kealey in Research Policy on this point).

But these two advantages must be weighed against an important disadvantage, which Robert Margo has warned against in a recent piece in Cliometrica. When one studies economic history, one must keep in mind that two things must be accomplished simultaneously: explaining history through theory and bringing theory to life through history (this is not my phrase, but rather that of Douglass North). To do so, one must study a painstaking amount of detail to ascertain the quality of the sources used and their reliability. In considering so many details, one can easily get lost or even fall prey to one’s own priors (i.e. I expect to see one thing and, upon seeing it, I ask no questions). To avoid this trap, there must be a “northern star” to act as a guide. That star, as I explained in an earlier piece, is a strong and general understanding of theory (or a strong intuition for economics). To create that star while giving attention to details is an incredibly hard task, which is why I have argued in the past that “great” economic historians (Douglass North, Deirdre McCloskey, Robert Fogel, Nathan Rosenberg, Joel Mokyr, Ronald Coase (because of the lighthouse piece), Stephen Broadberry, Gregory Clark etc.) take a longer time to mature. In other words, good economic historians are projects with a long “time to build” problem (sorry, bad economics joke). The downside is that when this is not the case, there are risks of ending up with invalid results that are costly and hard to contest.

Just think about the debate between Daron Acemoglu and David Albouy on the colonial origins of development. It took Albouy more than five years to get the results that threw doubt on Acemoglu’s famous paper with Johnson and Robinson. Albouy clearly expended valuable resources to get at the “details” behind the variables. There was miscoding of Niger and Nigeria, and there were misunderstandings about what types of mortality were used. This was hard work, and it was probably only deemed a valuable undertaking because the Acemoglu-Johnson-Robinson paper was such a big deal (i.e. the net gains were pretty big if the effort paid off). Yet, to this day, many people are entirely unaware of the Albouy rebuttal. This can be seen very well in the image below, showing the annual number of cites of the Acemoglu-Johnson-Robinson paper. There seems to be no effect from the massive rebuttal (disclaimer: Albouy convinced me that he was right).

[Figure: Annual citations of the Acemoglu-Johnson-Robinson paper]

And it really does come down to small details like those underlined by Albouy. Let me give you another example, taken from my own work. Within Canada, the French minority is significantly poorer than the rest of Canada. From my cliometric work, we now know that they were poorer than the rest of Canada and North America as far back as the colonial era. This is a stylized fact underlying a crucial question today (i.e. why are French-Canadians relatively poor?). That stylized fact requires an explanation. Obviously, institutions are a great place to look. One of the most interesting institutions is seigneurial tenure, which was basically a “lite” version of feudalism in North America, present only in the French-settled colonies. Some historians and economic historians have argued that the institution had no effect on variables like farm efficiency. However, some historians noticed that, in censuses, the French reported different units than the English settlers within the colony of Quebec. To correct for this metrological problem, historians made county-level corrections. With those corrections, the aforementioned institution has no statistically significant effect on yields or output per farm. However, as I note in this piece that got a revise and resubmit from Social Science Quarterly (revised version not yet online), county-level corrections (i.e. the same correction for the whole county) mask the fact that the French were more willing to move to predominantly English areas than the English were to move to predominantly French areas. In short, there was a skewed distribution. Once you correct the data on an ethnic-composition basis rather than at the county level, you end up with a statistically significant negative effect on both output per farm and yields per acre. In short, we were “measuring away” the effect of institutions. All from a very small detail about distributions. Yet that small detail has supported a stylized fact that the institution did not matter.
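A small simulation illustrates how the level at which the unit correction is applied can “measure away” a real effect (a stylized sketch with invented parameters, not my actual correction procedure):

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented setup: French farms carry a true -10% institutional effect on
# output but report in units that inflate raw figures by 10%; the French
# are unevenly distributed across counties.
TRUE_EFFECT, FRENCH_UNIT = 0.90, 1.10

is_french, county_corr, ethnic_corr = [], [], []
for share in rng.uniform(0.1, 0.9, 40):              # 40 counties, varying mix
    french = rng.random(200) < share
    true_out = 100 * np.where(french, TRUE_EFFECT, 1.0) * rng.lognormal(0, 0.1, 200)
    reported = true_out * np.where(french, FRENCH_UNIT, 1.0)
    factor = FRENCH_UNIT * share + (1.0 - share)     # county-average unit factor
    is_french.append(french)
    county_corr.append(reported / factor)            # one correction per county
    ethnic_corr.append(reported / np.where(french, FRENCH_UNIT, 1.0))

f = np.concatenate(is_french)
for name, x in [("county-level", np.concatenate(county_corr)),
                ("ethnicity-level", np.concatenate(ethnic_corr))]:
    gap = x[f].mean() / x[~f].mean() - 1
    print(f"{name} correction: French-English output gap = {gap:+.1%}")
```

Under these invented parameters, the ethnicity-level correction recovers a gap near the true -10%, while the county-level correction shrinks it toward zero, which is the “measuring away” described above.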

This is the risk that Margo speaks about, illustrated by two examples. Economists who use history merely as a tool may end up making dramatic mistakes that lead to incorrect conclusions. I take this “juicy” quote from Margo (which Pseudoerasmus highlighted for me):

[EH] could become subsumed entirely into other fields… the demand for specialists in economic history might dry up, to the point where obscure but critical knowledge becomes difficult to access or is even lost. In this case, it becomes harder to ‘get the history right’

Indeed, unfortunately.