WP: Does Bitcoin Have the Right Monetary Rule?

The growing literature on Bitcoin offers little more than passing mentions of Bitcoin’s monetary rule. That rule is the topic of this short paper.

The growing literature on Bitcoin can be divided into two groups. One performs an economic analysis of Bitcoin focusing on its monetary characteristics. The other takes a financial look at the price of Bitcoin. Interestingly, neither group has given more than passing attention to the question of whether Bitcoin has the right monetary rule. This paper argues that Bitcoin in particular, and cryptocurrencies in general, do not have a good monetary rule, and that this shortcoming seriously limits their prospects of becoming well-established currencies.

Download from SSRN.


If causality matters, should the media talk about mass shootings?

My answer to that question is “maybe not” or, at the very least, “not as much”. Given the sheer amount of fear that a shooting can generate, it is understandable to be shocked, and the need to talk about it is equally understandable. However, I think this unfortunate moment might also be an opportunity to consider the incentives at play.

I assume that criminals, even crazy ones, are relatively rational. They weigh the pros and cons of potential targets, they assess the most efficient tool or weapon to accomplish the objectives they set, and so on. That entails that, in their warped view of the world, there are benefits to committing atrocities. In some instances, the benefit is pure revenge, as was the case in one of the infamous shootings in my hometown of Montreal (a university professor decided to avenge himself for slights caused by other professors). In other instances, the benefit is defined in part by the attention that the violent perpetrator attracts for himself. This was the case in another infamous shooting in Montreal, where the killer showed up at an engineering school and killed 14 students and staff members. He committed suicide and left behind a statement that read like a garbled political manifesto. In other words, killers can be trying to “maximize” media hits.

This “rational approach” to mass shootings opens the door to a question about causality: does the incidence of mass shootings determine the number of media hits, or does the number of media hits determine the incidence of mass shootings? A recent article in the Journal of Public Economics explores the latter causal link with regard to terrorism. Using the New York Times’ coverage of some 60,000 terrorist attacks in 201 countries over 43 years, Michael Jetter exploited the exogenous shock caused by natural disasters, which crowds out the reporting of terrorist attacks, to study how this reduced coverage in turn affected subsequent terrorism. That way, he could arrive at some idea (by the negative) of causality. He found that one New York Times article increased attacks by 1.4 in the following three weeks. Now, this applies to terrorism, but why would it not apply to mass shooters? After all, they are very similar in their objectives and methods – at the very least with regard to the shooters who seek attention.

If the causality runs in the direction suggested by Jetter, then the full-day coverage offered by CNN, NBC or FOX is making things worse by increasing the likelihood of an additional shooting. For some years now, I have been suggesting this possibility to journalist friends of mine and arguing that maybe the best way to talk about terrorism or mass shootings is to move them from the front page of a newspaper to a one-inch box on page 20, or from the interview to the crawler at the bottom of the screen. In each discussion, my claim about causality is brushed aside, either with incredulity at the logic and its empirical support or with something like “yes, but we can’t really not talk about it”. And so the thinking ends there. However, I am quite willing to state that it is time for media outlets to reflect deeply upon the role they want to play and how best to accomplish it. And that requires thinking about causality and accepting that “splashy” stories may be better left ignored.

Do risk preferences account for 1.4 percentage points of the gender pay gap?

A few days ago, this study of gender pay differences for Uber drivers came out. The key finding, that women earned 7% less than men, was stunning because Uber uses a gender-blind algorithm. The figure below was the most interesting one from the study as it summarized the differences in pay quite well.

[Figure: differences in pay between male and female Uber drivers]

To explain this, the authors highlight a few explanations borne out by the data: men drive faster, which allows them to complete more trips; men have spent more time working for Uber and may have unobserved experience; and choices of where and when to drive matter. It is this last point that I find fascinating because it speaks to an issue that I keep underlining when I teach about pay gaps.

For reasons that may be sociological or biological (I am agnostic on that), men tend to occupy jobs with high rates of occupational mortality (see notably this British study on the topic) in the form of accidents (think construction workers, firemen) or diseases (think miners and trashmen). They also tend to take jobs in more remote areas in order to gain access to a distance premium (which is a form of risk in the sense that it affects family life, etc.). The premiums to taking risky jobs are well documented (see notably the work of Kip Viscusi, who measured the wage premium accruing to workers employed in bars where smoking was permitted). If these premiums are non-negligible and risky jobs tend to be taken disproportionately by men (who are more willing to incur the risk of injury or illness), then risk preferences matter to the gender wage gap.

However, these preferences are hard to measure properly in order to assess the share of the wage gap truly explained by discrimination. Here, with the case of Uber, we can get an idea of the magnitude of the differences. Male Uber drivers prefer riskier hours (more risk of an inebriated and potentially aggressive client), riskier places (high traffic with more risk of accidents) and riskier behavior (driving faster to get more clients per hour). The return to taking these risks is greater earnings. According to the study, about 20% of the gap – roughly 1.4 percentage points – stems from this series of choices.

I think that this is large enough to warrant further consideration in the debate. More often than not, the emphasis is on education, experience, marital status, and industry codes (NAICS codes) to explain wage differences. The use of industry codes has never convinced me. There is wide variance within industries regarding work accidents and diseases. The NAICS classifies industries by broad sectors and then by sub-sectors of activity (see, for example, the six-digit codes for agriculture, forestry, fishing and hunting here). This does not allow one to account for the risks associated with a particular job. There are a few studies that try to account for this problem, but they are … well … few in number. And rarely are they considered in public discussions.

Here, the Uber case shows the need to bring this subtopic back into the discussion in order to properly explain the wage gap.

Prices in Canada since 1688

A few days ago, I received good news: the Canadian Journal of Economics has accepted my paper constructing a consumer price index for Canada between 1688 and 1850 from homogeneous sources (the account books of religious congregations). I have to format the article to the journal’s guidelines and attach all my data, and it will be good to go (I am planning on doing this over the weekend). In the meantime, I thought I would share the finalized price index so that others can see it.

First, we have the price index that focuses on the period from 1688 to 1850. Most indexes that exist for pre-1850 Canada (or Quebec, since I assume that Quebec is representative of pre-1850 Canadian price trends) are short-term, include mostly agricultural goods, and have no expenditure weights with which to create a basket. My index is the first to use the same type of sources continuously over such a long period, and it is also the first to use a large array of non-agricultural goods. It also has a weighting scheme to create a basket.
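
For readers curious about the mechanics, here is a minimal sketch of how a fixed-basket index with expenditure weights can be computed. The goods, weights and prices below are invented for illustration only; the actual basket, weights and sources are the ones described in the paper.

```python
# Minimal sketch of a fixed-basket price index with expenditure weights.
# Goods, weights, and prices are invented for illustration; the actual
# basket and sources are described in the paper itself.

base_year = 1688

# Hypothetical expenditure weights (shares of the household budget).
weights = {"wheat": 0.40, "cloth": 0.25, "candles": 0.15, "rum": 0.20}

# Hypothetical nominal prices by year (same units within each good).
prices = {
    1688: {"wheat": 2.0, "cloth": 5.0, "candles": 1.0, "rum": 3.0},
    1750: {"wheat": 2.6, "cloth": 4.5, "candles": 1.0, "rum": 2.4},
    1850: {"wheat": 4.1, "cloth": 3.8, "candles": 0.9, "rum": 2.2},
}

def price_index(year, base=base_year):
    """Weighted average of price relatives, base year = 100."""
    return 100 * sum(
        w * prices[year][g] / prices[base][g] for g, w in weights.items()
    )

for year in sorted(prices):
    print(year, round(price_index(year), 1))
```

An index of this kind is also what gets used to deflate nominal wages or incomes into real terms, which is why the composition of the basket matters so much.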

[Figure: Canadian consumer price index, 1688–1850]

The issue of adding non-agricultural goods was especially important because the prices of different types of goods evolved quite differently. Agricultural goods (see the next image) saw their nominal prices increase continually between the 17th and 19th centuries. However, most other prices – imported goods, domestically produced manufactured goods, etc. – either fell or remained stable. These are very pronounced changes in relative prices. They show that relying on an agricultural-goods price index will overstate the amount of “deflating” needed to arrive at real wages or incomes. The image below shows the nominal price evolution of the groupings of goods described above.

[Figure: nominal prices by grouping of goods, 1688–1850]

And finally, the pièce de résistance! I link my own index to existing post-1850 indexes so as to trace the evolution of prices in Canada since 1688. The figure below shows the evolution of the price index over … 328 years (I ended the series at 2015, but extra years can be added going forward). In the years to come, I will probably try to extend this backward as far as possible, at least to 1665 (the first census in Canada), and I will probably approach Statistics Canada to see if they would like to incorporate this contribution into their broad database on the macroeconomic history of Canada.
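
For the curious, here is a rough sketch of how two index series can be spliced at an overlap year. The numbers are placeholders and the paper’s actual linking procedure may differ; the point is only to show the rescaling involved.

```python
# Rough sketch of splicing two price index series at an overlap year.
# The values are placeholders; the paper's actual linking method may differ.

own_index = {1688: 100.0, 1750: 137.0, 1849: 162.0, 1850: 165.0}   # pre-1850 series
post_1850 = {1850: 100.0, 1900: 118.0, 1950: 410.0, 2015: 3150.0}  # later series

link_year = 1850
scale = own_index[link_year] / post_1850[link_year]

# Rescale the later series so that both agree in the link year,
# producing one continuous series on the 1688 = 100 base.
linked = dict(own_index)
linked.update({y: v * scale for y, v in post_1850.items()})

for year in sorted(linked):
    print(year, round(linked[year], 1))
```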

[Figure: Canadian price index, 1688–2015]

2018 Hayek Essay Contest

The 2018 General Meeting of the Mont Pelerin Society will take place from September 30 – October 6, 2018 at ExpoMeloneras and Lopesan Hotels in Meloneras, Gran Canaria, Canary Islands. As with past general meetings, the Mont Pelerin Society is currently soliciting submissions for Friedrich A. Hayek Fellowships. The fellowships will be awarded through the Hayek Essay Contest.

The Hayek Essay Contest is open to all individuals 36 years old or younger. Entrants should write a 5,000 word (maximum) essay that addresses the quotation(s) and question(s) detailed on the contest announcement (available at the above link). The deadline for submissions is May 31, 2018. The winners will be announced on July 31, 2018. Essays must be submitted in English only. Electronic submissions should be sent in PDF format to this email address (mps.youngscholars@ttu.edu). Authors of winning essays must present their papers at the General Meeting to receive their award. The essays will be judged by an international panel of three members of the Society.

Please feel free to share this announcement with any individuals who may have an interest in submitting an essay for consideration of a fellowship award. All questions may be directed to the MPS Young Scholars Program Committee by email at mps.youngscholars@ttu.edu or phone at +1.806.742.7138.

MPS Young Scholars Program Committee

The best economic history papers of 2017

As we are now solidly into 2018, I thought it would be a good idea to highlight the best articles in economic history that I read in 2017. Obviously, “best” here reflects my own preferences. Nevertheless, it is a worthy exercise insofar as it exposes some important pieces of research to a wider audience. I limited myself to five articles (I will do my top three books in a few weeks). However, if there is interest in the present post, I will publish a follow-up with another five articles.

O’Grady, Trevor, and Claudio Tagliapietra. “Biological welfare and the commons: A natural experiment in the Alps, 1765–1845.” Economics & Human Biology 27 (2017): 137-153.

This one is by far my favorite article of 2017. I stumbled upon it quite by accident. Had it been published six or eight months earlier, I would never have been able to fully appreciate its contribution. Basically, the authors use the shocks induced by the wars of the late 18th and early 19th centuries to study a shift from “self-governance” to “centralized governance” of common pool resources. When they speak of “commons” problems, they really mean commons: the communities they study were largely pastoral communities with land held in common. Using a difference-in-differences design in which the treatment is a region becoming “centrally governed” (i.e. when organic local institutions were swept aside), they test the impact of these top-down institutional changes on biological welfare (as proxied by infant mortality rates). They find that these replacements worsened outcomes.
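
For readers unfamiliar with the setup, here is a minimal sketch of the kind of two-way fixed-effects difference-in-differences regression described above. The data file and variable names are hypothetical, not the authors’ actual data or specification.

```python
# Minimal sketch of a two-way fixed-effects difference-in-differences
# regression of the kind described above. The data frame and variable
# names are hypothetical, not the authors' actual data or specification.
import pandas as pd
import statsmodels.formula.api as smf

# df: one row per community-year, with
#   infant_mortality  - infant deaths per 1,000 births (the welfare proxy)
#   centralized       - 1 once the community is switched to central governance
#   community, year   - identifiers absorbed as fixed effects
df = pd.read_csv("communities_panel.csv")  # hypothetical file

model = smf.ols(
    "infant_mortality ~ centralized + C(community) + C(year)", data=df
).fit(cov_type="cluster", cov_kwds={"groups": df["community"]})

# A positive coefficient would mean higher infant mortality (worse outcomes)
# after the switch to centralized governance.
print(model.params["centralized"])
```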

Now, this paper is fascinating for two reasons. First, the authors offer a clear exposition of their methodology and approach. They give just the right amount of institutional detail to assuage doubts. Second, this is a strong illustration of the points made by Elinor Ostrom and Vernon Smith. These two economists emphasize different aspects of the same thing. Smith highlights that “rationality” is “ecological” in the sense that it is an iterative process of information discovery that improves outcomes. This includes the generation of “rules of the game” meant to sustain exchange. These rules need not be formal edifices. They can be norms, customs, mores and habits (generally supported by the discipline of continuous dealings and/or investments in social-distance mechanisms). For her part, Ostrom emphasized that the tragedy of the commons can be resolved through multiple mechanisms (what she calls polycentric governance) in ways that do not necessarily require a centralized approach (or even market-based approaches).

In the logic of these two authors, attempts at “imposing” a more “rational” order (from the perspective of the planner of that order) may backfire. This is why Smith often emphasizes the organic nature of things like property rights. It also shows that behind seemingly “dumb” peasants there is often the weight of long periods of experimentation to adapt rules and norms to the constraints faced by the community. In this article, we can see both things – the backfiring and, by logical implication, the strengths of the organic institutions that were swept away.

Fielding, David, and Shef Rogers. “Monopoly power in the eighteenth-century British book trade.” European Review of Economic History 21, no. 4 (2017): 393-413.

In this article, the authors exploit the legal change caused by the end of the legal privileges of the Stationers’ Company (which amounted to an easing of copyright laws). The market for books may appear uninteresting to mainstream economists, but that would be a dramatic error. The “abundance” of books is really a recent development. Bear in mind that the most erudite monk of the late Middle Ages had fewer than fifty books from which to draw knowledge (this fact is a vague recollection of mine from Kenneth Clark’s art history documentary from the late 1960s, aired by the BBC). Thus, the emergence of a wide market for books – which is dated within the period studied by the authors of this article – should not be ignored. It should be considered one of the most important developments in Western history. This is best put by the authors when they say that “the reform of copyright law has been regarded as one of the driving forces behind the rise in book production during the Enlightenment, and therefore a key factor in the dissemination of the innovations that underpinned Britain’s Industrial Revolution”.

However, while they agree that the rising popularity of books in the 18th century is an important historical event, they contest the idea that liberalization had any effect. They find that the opening up of the market to competition had little effect on prices and book production. They also find that mark-ups fell, but that this could not be attributed to liberalization. At first, I found these results surprising.

However, when I took the time to think about it, I realized that there was no reason to be surprised. Many changes have been heralded as crucial moments in history, and more often than not their importance has been overstated. A good example is the abolition of the Corn Laws in England in the 1840s. The reduction in tariffs, it is argued, ushered Britain into an age of free trade and falling food prices.

In reality, as John Nye discusses, protectionist barriers did not fall as fast as many argued, and, as Deirdre McCloskey pointed out, there had been reductions prior to the 1846 reform. It also seems that the Corn Laws did not have substantial effects on consumption or the economy as a whole (see here and here). While their abolition probably helped increase living standards, the significance of the moment seems overstated. The same thing is probably at play with the book market.

The changes discussed by Fielding and Rogers did not address the underlying roots of the market power enjoyed by industry players. In other words, it could be that the reform was too modest to have an effect. This is suggested by the work of Petra Moser. The reform studied by Fielding and Rogers appears to have been short-lived, as evidenced by changes to copyright laws in the early 19th century (see here and here). Moser’s results point to effects much larger (and positive for consumers) than those found by Fielding and Rogers. Given the importance of the book market to stories of innovation in the Industrial Revolution, I really hope that this sparks a debate between Moser and Fielding and Rogers.

Johnson, Noel D., and Mark Koyama. “States and economic growth: Capacity and constraints.” Explorations in Economic History 64 (2017): 1-20.

I am biased, as I am fond of most of the work of these two authors. Nevertheless, I think that their contribution to the state capacity debate is a much-needed one. I am very skeptical of the theoretical value of the concept of state capacity. The question always lurking in my mind is “capacity to do what?”.

A ruler who can develop and use a bureaucracy to provide the services of a “productive state” (as James Buchanan would put it) is also capable of acting like a predator. I actually emphasize this point in my work (revise and resubmit at Health Policy & Planning) on Cuban healthcare: the Cuban government has the capacity to allocate resources to healthcare in amounts well above what is observed for other countries in the same income range. Why? Largely because it uses health care for a) international reputation and b) monitoring the local population. As such, members of the regime are able to sustain their role even if the high level of “capacity” comes at the expense of living standards in dimensions other than health (e.g. low incomes). Capacity is not the interesting issue; it is capacity interacting with constraints that is interesting.

And that is exactly what Koyama and Johnson say (though not in the same words). They summarize a wide body of literature in a cogent manner that clarifies the concept of state capacity and its limitations. In doing so, they end up proposing that the “deep roots” question that should interest economic historians is how “constraints” came to be efficient at generating “strong but limited” states.

In that regard, the one thing that surprised me about their article was the absence of Elinor Ostrom’s work. When I read about “polycentric governance” (Ostrom’s core concept), I imagine the overlap of different institutional structures that reinforce each other (note: these structures need not be formal ones). They are governance providers. If these governance providers have residual claimants (i.e. people with skin in the game), they have incentives to provide governance in ways that increase the returns to the realms they govern. Attempts to supersede these institutions (e.g. the erection of a modern nation state) require dealing with these providers. They are the main demanders of the constraints necessary to protect their assets (what my friend Alex Salter calls “rights to the realm”). As Europe pre-1500 was a mosaic of such governance providers, there would have been great forces pushing for constraints (i.e. bargaining over constraints).

I think that this is where the literature on state capacity should orient itself; it is in that direction that it is most likely to bear fruit. In fact, some steps have already been taken in that direction. For example, my colleagues Andrew Young and Alex Salter have applied this “polycentric” narrative to explain the emergence of “strong but limited” states by focusing on late medieval institutions (see here and here). Their approach seems promising. Yet the work of Koyama and Johnson has actually created the room for such contributions by efficiently summarizing a complex (and sometimes contradictory) literature.

Bodenhorn, Howard, Timothy W. Guinnane, and Thomas A. Mroz. “Sample-selection biases and the industrialization puzzle.” The Journal of Economic History 77, no. 1 (2017): 171-207.

Elsewhere, I have peripherally engaged discussants of the “antebellum puzzle” (see my article here in Economics & Human Biology on the heights of French-Canadians born between 1780 and 1830). The antebellum puzzle refers to the possibility that the biological standard of living (e.g. heights, nutrition, mortality risks) deteriorated while the material standard of living (e.g. wages, incomes, access to services and to a wider array of goods) improved during the decades leading up to the American Civil War.

I am inclined to accept the idea of short-term paradoxes in living standards. The early 19th century witnessed a reversal in rural-urban concentration in the United States. The country had been “deurbanizing” since the colonial era (i.e. cities represented an increasingly smaller share of the population). As such, the reversal implied a shock in cities whose institutions were geared to deal with slowly increasing populations.

The influx of people into cities created public health problems, while higher population density favored the propagation of infectious diseases at a time when our understanding of germ theory was nil. One good example of the problems posed by this rapid change is provided by Gergely Baics in his work on the public markets of New York and their regulation (see his book here – a must-read). In that situation, I am not surprised that there was a deterioration in the biological standard of living. What I see is people choosing to trade off longer, poorer lives for shorter, wealthier ones. A pretty legitimate (albeit depressing) choice if you ask me.

However, Bodenhorn et al. (2017) will have none of it. In a convincing article that has shaken my priors, they argue that there is a selection bias in the heights data – the main measurement used in the antebellum puzzle debate. Most of the data on heights comes either from prisoners or from volunteer soldiers (note: conscripts would not generate the problem they describe). The argument they make is that as incomes grow, the opportunity cost of committing a crime or joining the army grows. This creates a selection bias whereby the sample is increasingly composed of those with the lowest opportunity costs – in other words, the poorest in society, who also tended to be shorter. Simultaneously, as incomes grew, fewer tall (i.e. richer) individuals committed crimes or joined the army. This logic is simple and elegant. In fact, this is the kind of data problem that every economist should care about when designing their tests.
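
To see how powerful this selection channel can be, here is a toy simulation (my own invented numbers, not the authors’) in which the true distribution of heights never changes, yet the measured mean height of a prison- or army-style sample falls as richer, taller men select out.

```python
# Toy simulation of the selection story: as incomes rise, fewer high-income
# (taller, on average) men end up in prison or army samples, so the sampled
# mean height falls even though the true population mean is unchanged.
# All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

def sampled_mean_height(sampling_rate_rich):
    income = rng.lognormal(mean=0.0, sigma=0.5, size=n)
    # Height rises mildly with income; the population mean stays ~170 cm.
    height = 170 + 4 * (income - income.mean()) + rng.normal(0, 6, size=n)
    # Poorer men are always fairly likely to be sampled; richer men less so.
    rich = income > np.median(income)
    p = np.where(rich, sampling_rate_rich, 0.10)
    sampled = rng.random(n) < p
    return height.mean(), height[sampled].mean()

for rate in (0.10, 0.05, 0.02):   # rising incomes -> richer men opt out
    pop, obs = sampled_mean_height(rate)
    print(f"rich sampling rate {rate:.2f}: population {pop:.1f} cm, sample {obs:.1f} cm")
```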

Once they control for this problem (through a meta-analysis), the puzzle disappears. I am not convinced that it disappears entirely. Nevertheless, it is very likely that the puzzle is much smaller than initially believed. In yet-to-be-published work, Ariell Zimran (see here and here) argues that the antebellum puzzle is robust to the problem of selection bias but that it is indeed diminished. This concedes a large share of the argument to Bodenhorn et al. While there is much left to resolve, this article should be read as it constitutes one of the most serious contributions to economic history published in 2017.

Ridolfi, Leonardo. “The French economy in the longue durée: a study on real wages, working days and economic performance from Louis IX to the Revolution (1250–1789).” European Review of Economic History 21, no. 4 (2017): 437-438.

I have discussed Leonardo’s work elsewhere on this blog before. However, I must do it again. The article mentioned here is the dissertation summary that resulted from Leonardo being a finalist for the best dissertation award granted by the EHES (full dissertation here). As such, it is not exactly the “best article” published in 2017. Nevertheless, it makes the list because of the possibilities that Leonardo’s work has unlocked.

When we discuss the origins of the British Industrial Revolution, the implicit question lurking not far away is “Why Did It Not Happen in France?”. The problem with that question is that the data available for France (see notably my forthcoming work in the Journal of Interdisciplinary History) is in no way comparable with what exists for Britain (which does not mean that the British data is of great quality, as Judy Stephenson and Jane Humphries would point out). Most estimates of the French economy pre-1790 were either conjectural or required a wide array of theoretical considerations to arrive at a deductive portrait of the situation (see notably the excellent work of Phil Hoffman). As such, comparisons that could sharpen our understanding of the Industrial Revolution are hard to make.

For me, the absence of rich data for France was particularly infuriating. One of my main arguments is that the key to explaining divergence within the Americas (from the colonial period onward) resides not in the British or Spanish Empires but in the variation provided by the French Empire and its colonies. After all, the French colony of Quebec had a lot in common geographically with New England, but the institutional differences were nearly as wide as those between New England and the Spanish colonies in Latin America. As I spent years assembling data for Canada to document living standards, in order to eventually lay the groundwork for testing the role of institutions, I was infuriated that I could do so little to compare with France. Little did I know that, while I was doing my own work, Leonardo was plugging this massive hole in our knowledge.

Leonardo shows that while living standards in France increased from 1550 onward, their level was far below those found in other European countries. He also shows that real wages stagnated in France, which means that the only reason behind increased incomes was a longer work year. This work has unlocked numerous other possibilities. For example, it will be possible to extend to France the work of Nicolini and of Crafts and Mills regarding the existence of Malthusian pressures. This is probably one of the greatest contributions of the decade to economic history because it simply does the dirty work of assembling data to plug what I think is the biggest hole in the field.

On the “tea saucer” of income inequality since 1917

I often disagree with many of the details that underlie the arguments of Thomas Piketty and Emmanuel Saez. That being said, I am also a great fan of their work and of them in general. In fact, I think that both have made contributions to economics that I can only envy. To be fair, their U-curve of inequality is pretty much a well-confirmed fact by now: everyone agrees that the period from 1890 to 1929 was a high point of inequality, that inequality then declined until the 1970s, and that it picked up again afterwards.

Nevertheless, while I am convinced of the curvilinear evolution of income inequality in the United States as depicted by Piketty and Saez, I am not convinced by the amplitudes. In their 2003 article, the U-curve of inequality really looks like a “U” (see image below). Since that article, many scholars have investigated the extent of the increase in inequality since around 1980. Many have attenuated the increase, but they still find an increase (see here, here, here, here, here, here, here, here and here). The problem is that everyone has been considering the increase – i.e. the right side of the U-curve. Little attention has been devoted to the left side of the U-curve, even though that is where data problems should be considered most carefully for the generation of a stylized fact. This is the contribution I have been coordinating and working on for the last few months alongside John Moore, Phil Magness and Phil Schlosser.

[Figure: the Piketty and Saez (2003) U-curve of top income shares]

To arrive at their proposed inequality series, Piketty and Saez used the IRS Statistics of Income (SOI) to derive top income fractiles. However, the IRS SOI have many problems. The first is that between 1917 and 1943, there are many years in which fewer than 10% of the potential tax population filed a tax return. This prohibits the use of a top 10% income share in many years unless an adjustment is made. The second is that prior to 1943 the IRS reported net income, whereas after 1943 it reported adjusted gross income. As such, linking the post-1943 series with the pre-1943 series requires an additional adjustment. Piketty and Saez made some seemingly reasonable assumptions, but these have never been put to the test regarding sensitivity and robustness. This is leaving aside issues of data quality (I am not convinced the IRS data is very good, as most of it was self-reported pre-1943, a period with wildly varying tax rates). The question here is: how good are the assumptions?
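
To see why these assumptions matter, here is a stylized illustration (all figures invented, not the IRS or Piketty-Saez numbers) of how a top 10% income share estimate moves with the net-to-gross adjustment and with the choice of total-income denominator.

```python
# Stylized sketch of how a top-10% income share depends on the assumptions
# discussed above. All dollar figures are invented for illustration only;
# they are not the IRS SOI values or the Piketty-Saez adjustments.

top10_net_income = 18.0        # billions, net income reported by top filers
deduction_multiplier = 1.10    # assumed net -> adjusted-gross-income adjustment
total_income_A = 70.0          # one possible economy-wide income denominator
total_income_B = 80.0          # a broader (Kuznets-style) denominator

top10_gross = top10_net_income * deduction_multiplier

for label, denom in [("denominator A", total_income_A),
                     ("denominator B", total_income_B)]:
    share = 100 * top10_gross / denom
    print(f"{label}: top 10% share = {share:.1f}%")

# The same numerator yields noticeably different shares (~28% vs ~25% here),
# which is why the net-to-gross adjustment and the denominator choice matter
# so much for the shape of the long-run series.
```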

What we did is verify each assumption to check its validity. The first one we tackle is the adjustment for the low number of returns. To make their adjustment, Piketty and Saez used the fact that single and married households filed in different proportions relative to their total populations. Their idea was that, in a year with a large number of returns, the ratio of single to married filers could be used to adjust the series. The year they used is 1942. This is problematic, as 1942 is a war year with self-reporting, when large numbers of young American men were abroad fighting. Using 1941, the last US peacetime year, instead shows dramatically different ratios. Using these ratios knocks a few points off the top 10% income share. Why did they use 1942? Their argument was that there was simply not enough data to make the correction in 1941. They point to a special tabulation in the 1941 IRS SOI of 112,472 Form 1040A returns from six states, which was not deemed sufficient to make the corrections. However, later in the same document, there is a larger and sufficient sample of 516,000 returns from all 64 IRS collection districts (roughly 5% of all forms). By comparison, the 1942 sample Piketty and Saez used for their correction had only 455,000 returns. Given the war year and the sample size, we believe that 1941 is a better year for making the adjustment.

Second, we also questioned the smoothing method used to link the net-income-based series with the adjusted-gross-income-based series (i.e. the pre-1943 and post-1943 series). The reason for this is that the implied adjustment for deductions made by Piketty and Saez is actually larger than all the deductions claimed that were eligible under the definition of adjusted gross income – a sign that they overshot. Using the limited data available on deductions by income group, and making some very conservative assumptions to move further back in time, we found that adjusting for “actual deductions” yields a lower level of inequality. This contrasts with the fixed multipliers that Piketty and Saez used pre-1943.

Third, we question their justification for not using the Kuznets income denominator. They argued that Kuznets’ series yielded an implausible figure because, in 1948, its use implied a greater income for non-filers than for filers. However, this is not true of all years. In fact, it is only true after 1943. Before 1943, the income of non-filers is proportionally equal to the one they use post-1944 to impute the income of non-filers. This is largely the result of a change in accounting definitions: incomes were reported as net income before 1943 and as gross income after that point. This is important because the stylized fact of a pronounced U-curve is heavily sensitive to the assumption made regarding the denominator.

These three adjustments are pretty important in terms of overall results (see image below). The pale blue line is that of Piketty and Saez as depicted in their 2003 paper in the Quarterly Journal of Economics. The other blue line just below it shows the effect of the deductions adjustment only (the adjustment for missing returns affects only the top 10% income share). All the other lines that mirror these two just below (with the exception of the darkest blue line, which shows the original Kuznets inequality estimates) compound our corrections with three potential corrections to the denominator. The U-curve still exists, but it is not as pronounced. When you take the adjustments made by Mechling et al. (2017) and Auten and Splinter (2017) for the post-1960 period (green and red lines) and link them with ours, you can still see the curvilinear shape, but it looks more like a “tea saucer” than a pronounced U-curve.

In a way, I see this as a simultaneous complement to the work of Richard Sutch and to the work of Piketty and Saez: the U-curve still exists, but the timing and pattern are slightly more representative of history. This was a long paper to write (and it is a dry read given the amount of methodological discussion), but it was worth it in order to improve the state of our knowledge.

[Figure: adjusted top income share series, 1917 onward]