- The new age of great power politics John Bew, New Statesman
- Before We Cure Others of Their False Beliefs, We Must First Cure Our Own Christopher Preble, Cato Unbound
- Libertarians (FDP) ruin coalition talks in Germany Christian Hacke, Deutsche Welle
- The Rich You Will Always Have With You Brandon Turner, Law & Liberty
I often disagree with the many details that underlie the arguments of Thomas Piketty and Emmanuel Saez. That being said, I am also a great fan of their work and of them in general. In fact, I think both have made contributions to economics that I can only hope to equal. To be fair, their U-curve of inequality is pretty much a well-confirmed fact by now: everyone agrees that the period from 1890 to 1929 was a high point of inequality, that inequality leveled off until the 1970s, and that it then picked up again.
Nevertheless, while I am convinced of the curvilinear evolution of income inequality in the United States as depicted by Piketty and Saez, I am not convinced by the amplitudes. In their 2003 article, the U-curve of inequality really looks like a “U” (see image below). Since that article, many scholars have investigated the extent of the increase in inequality post-1980 (circa). Many have attenuated the increase, but they still find an increase (see here, here, here, here, here, here, here, here, here). The problem is that everyone has been considering the increase – i.e. the right side of the U-curve. Little attention has been devoted to the left side of the U-curve, even though that is where data problems should be weighed most carefully before generating a stylized fact. That is the contribution I have been coordinating and working on for the last few months alongside John Moore, Phil Magness and Phil Schlosser.
To arrive at their proposed inequality series, Piketty and Saez used the IRS Statistics of Income (SOI) to derive top income fractiles. However, the IRS SOI have many problems. The first is that, between 1917 and 1943, there are many years in which fewer than 10% of the potential tax population filed a return. This prohibits the use of a top 10% income share in those years unless an adjustment is made. The second is that the IRS reported net income prior to 1943 and adjusted gross income thereafter. As such, linking the post-1943 series with the pre-1943 series requires an additional adjustment. Piketty and Saez made some seemingly reasonable assumptions, but those assumptions have never been tested for sensitivity and robustness. This leaves aside issues of data quality (I am not convinced the IRS data are very good, as most income was self-reported pre-1943 – a period of wildly varying tax rates). The question here is: “how good” are the assumptions?
What we did is verify each assumption to assess its validity. The first one we tackle is the adjustment for the low number of returns. To make their adjustment, Piketty and Saez exploited the fact that single and married households filed in different proportions relative to their total populations. Their idea was to take a year with a large number of returns and use its single-to-married ratio to adjust the series. The year they used is 1942. This is problematic, as 1942 is a war year with self-reporting during which large numbers of young American males were abroad fighting. Using 1941, the last US peace year, instead yields dramatically different ratios. Using those ratios knocks a few points off the top 10% income share. Why did they use 1942? Their argument was that there was simply not enough data to make the correction in 1941. They point to a special tabulation in the 1941 IRS-SOI of 112,472 1040A forms from six states, which was deemed insufficient to make the corrections. However, later in the same document, there is a larger and sufficient sample of 516,000 returns from all 64 IRS collection districts (roughly 5% of all forms). By comparison, the 1942 sample Piketty and Saez used for their correction had only 455,000 returns. Given the war year and the sample size, we believe that 1941 is a better year for making the adjustment.
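The mechanics of a ratio-based scaling of this kind can be sketched in a few lines. This is a stylized illustration only, not the actual Piketty-Saez procedure, and every number below is hypothetical:

```python
# Stylized sketch of a filing-ratio adjustment for a low-coverage year.
# All figures are hypothetical illustrations, not actual IRS-SOI data.

def adjusted_tax_units(single_filers, married_filers,
                       ref_single_share, ref_married_share):
    """Estimate the total number of tax units in a low-coverage year.

    ref_single_share / ref_married_share are the (hypothetical) shares
    of single and married tax units that filed in the reference year.
    Dividing observed filers by those shares scales up to the full
    population of tax units, which serves as the fractile denominator.
    """
    return (single_filers / ref_single_share
            + married_filers / ref_married_share)

# Hypothetical low-coverage year: 1.2M single, 2.8M married returns.
single, married = 1_200_000, 2_800_000

# The choice of reference year changes the implied filing shares,
# and therefore the denominator behind the top-10% cutoff.
units_ref_a = adjusted_tax_units(single, married, 0.20, 0.35)  # 14M
units_ref_b = adjusted_tax_units(single, married, 0.15, 0.40)  # 15M

print(f"Reference ratios A: {units_ref_a:,.0f} tax units")
print(f"Reference ratios B: {units_ref_b:,.0f} tax units")
```

The point of the sketch is only that a modest change in the reference-year filing shares moves the estimated population of tax units, and with it the top-decile threshold and share.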
Second, we also questioned the smoothing method used to link the net-income-based series with the adjusted-gross-income-based series (i.e. the pre-1943 and post-1943 series). The reason is that the implied adjustment for deductions made by Piketty and Saez is actually larger than all the deductions claimed that were eligible under the definition of adjusted gross income – a sign that they overshot. Using the limited data available on deductions by income group, and making some (very conservative) assumptions to move further back in time, we found that adjusting for “actual deductions” yields a lower level of inequality. This contrasts with the fixed multipliers that Piketty and Saez applied pre-1943.
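The arithmetic at stake in the two linking approaches can be written down directly. This is a stylized sketch with hypothetical dollar figures, not the actual multipliers or deduction data from either paper:

```python
# Stylized contrast between a fixed-multiplier adjustment and an
# "actual deductions" adjustment when converting pre-1943 net income
# to an AGI-comparable figure. All numbers are hypothetical.

def agi_fixed_multiplier(net_income, multiplier):
    """Fixed-multiplier approach: scale net income by a constant."""
    return net_income * multiplier

def agi_actual_deductions(net_income, claimed_deductions):
    """Deductions-based approach: add back only the deductions the
    group actually claimed (AGI = net income + eligible deductions)."""
    return net_income + claimed_deductions

net = 50.0  # hypothetical top-group net income, in $ billions

# A fixed multiplier of 1.20 imputes $10B of deductions...
print(agi_fixed_multiplier(net, 1.20))   # 60.0

# ...but if the group actually claimed only $4B of eligible
# deductions, the multiplier overstates its AGI, and hence its share.
print(agi_actual_deductions(net, 4.0))   # 54.0
```

Whenever the multiplier's implied deductions exceed the deductions actually claimed, the fixed-multiplier series sits above the deductions-based one, which is the direction of the correction described above.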
Third, we question their justification for not using the Kuznets income denominator. They argued that Kuznets’ series yielded an implausible figure because, in 1948, its use implied a greater income for non-filers than for filers. However, this is not true of all years. In fact, it is only true after 1943. Before 1943, the income of non-filers is proportionally equal to the one they use post-1944 to impute the income of non-filers. This is largely the result of the change in accounting definitions: incomes were reported as net income before 1943 and as gross income after that point. This matters because the stylized fact of a pronounced U-curve is heavily sensitive to the assumption made about the denominator.
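The plausibility check underlying the denominator dispute is simple residual arithmetic: whatever income the denominator contains beyond what filers reported must be imputed to non-filers. A stylized sketch, with all figures hypothetical ($ billions of income, millions of tax units):

```python
# Stylized sketch of the denominator plausibility check.
# All figures are hypothetical.

def implied_nonfiler_income(total_income, filer_income,
                            total_units, filer_units):
    """Average income imputed to each non-filing tax unit, given a
    chosen total-income denominator."""
    return (total_income - filer_income) / (total_units - filer_units)

filer_income, filer_units = 40.0, 4.0   # filers average $10k each
total_units = 20.0                      # 20M tax units overall

# A large denominator implies non-filers average MORE than filers --
# the implausible 1948-style result that motivated rejecting Kuznets:
print(implied_nonfiler_income(210.0, filer_income,
                              total_units, filer_units))  # 10.625 > 10

# A smaller denominator implies a non-filer average well below the
# filer average, which is the plausible configuration:
print(implied_nonfiler_income(90.0, filer_income,
                              total_units, filer_units))  # 3.125 < 10
```

The sketch shows why the check is so sensitive to the accounting definition: changing what counts as income in the numerator (net vs. gross) moves the implied non-filer average even when nothing real has changed.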
These three adjustments are pretty important for the overall results (see image below). The pale blue line is that of Piketty and Saez as depicted in their 2003 paper in the Quarterly Journal of Economics. The other blue line just below it shows the effect of deductions alone (the adjustment for missing returns affects only the top 10% income share). All the other lines mirroring these two just below (with the exception of the darkest blue line, which is the original Kuznets inequality estimate) compound our corrections with three potential corrections to the denominator. The U-curve still exists, but it is not as pronounced. When you take the adjustments made by Mechling et al. (2017) and Auten and Splinter (2017) for the post-1960 period (green and red lines) and link them with ours, you can still see the curvilinear shape, but it looks more like a “tea saucer” than a pronounced U-curve.
In a way, I see this as a simultaneous complement to the work of Richard Sutch and to the work of Piketty and Saez: the U-curve still exists, but the timing and pattern are slightly more representative of history. This was a long paper to write (and it is a dry read, given the amount of methodological discussion), but it was worth it in order to improve upon the state of our knowledge.
A few days ago, in reply to this December NBER study, David Henderson at EconLog questioned the idea that labor market monopsonies matter to explain sluggish wage growth and rising wage inequality. Like David, I am skeptical of this argument. However, I am skeptical for different reasons.
First, let’s point out that the reasoning behind this story is well established (see notably the work of Alan Manning). Firms with market power over a more or less homogeneous labor force – one that must bear a disproportionate share of search costs – have every incentive to depress wages. This can reduce growth, notably by discouraging human capital formation (see these two papers here and here as examples). As such, I am not as skeptical of “monopsony” as an argument.
However, I am skeptical of “monopsony” as an argument. What I mean is that I am skeptical of invoking monopsony without any qualification regarding institutions. The key condition for an effective monopsony is the existence of barriers (natural and/or legal) to mobility. As soon as it is relatively easy to leave a small city for another, even a city with a single employer will have little ability to exert its “market power” (note: I really hate that term). If you think about it through these lenses, then all that matters is the ability to move. All you need to care about are the barriers (legal and/or natural) to mobility (i.e. the chance to defect).
And here’s the thing: I don’t think natural barriers are a big deal. For example, Price Fishback found that the “company towns” of the 19th century were hardly monopsonies (see here, here, here and here). If natural barriers were not a big deal then, they are certainly not a big deal today. As such, I think the action is largely legal. My favorite example is the set of laws adopted following the emancipation of slaves in the United States, which limited mobility by limiting the chances that Northerners could hire agents to act as headhunters in the South. That is a legal barrier (see here and here). I am also making that argument about the institution of seigneurial tenure in Canada in a working paper that I am reorganizing (see here).
What about today? The best example is housing restrictions. Housing construction and zoning regulations make the supply of housing quite inelastic. The areas where these regulations are most severe are also, incidentally, high-productivity areas. This has two effects on mobility. The first is that low-productivity workers in low-productivity areas cannot easily afford to move to the high-productivity areas. You are thus reducing their options for defection and increasing the likelihood that they will not even look. You are also shrinking the pool of places to which they can apply, which means that, to find a more remunerative job, they must search longer and harder (i.e. you are increasing their search costs). The second effect is that you are also tying workers to the areas they are in. True, they gain because productivity becomes capitalized in the potential rent from selling any property they own. But they are, in essence, tied to the place. As such, they can be more easily mistreated by employers.
These are only examples. I am sure I could extend the list until it reached the size of the fiscal code (well, maybe not quite). The point is that “monopsony” (to the extent that it exists) is merely a symptom of other policies that either increase search costs for workers or reduce their options for defection. And I do not care much for analyzing symptoms.
Recently, the blog ThinkMarkets published a post by Gunther Schnabl about how Friedrich Hayek’s works help us understand the link between quantitative easing and political unrest. The post summarized with praiseworthy precision three different stages of Friedrich Hayek’s economic and political ideas and, among the many topics it addressed, mentioned the increasing income and wealth inequality that a policy of low interest rates might bring about.
It is well known that Friedrich Hayek owes as much to the Swedish School as to the Austrian School for his ideas about money and capital. In fact, he borrows the distinction between the natural and market rates of interest from Knut Wicksell. Hayek’s early writings state that disequilibrium and crisis are caused by a market interest rate below the natural interest rate. No central bank is needed to arrive at such a situation: credit creation by the banking system, or a sudden change in the expectations of the public, could set the market rate well below the natural rate and thus lead to what Hayek and Nicholas Kaldor called “the Concertina Effect.”
At this point we must make a disclaimer: Friedrich Hayek’s theory of money and capital was so controversial, and subject to so many reservations from his early supporters – such as the said Kaldor, Ronald Coase, or Lionel Robbins – that we can hardly carry on without first reaching a theoretical settlement over the contributions of his works. Until then, readings of Hayek’s economics will have mostly heuristic and inspirational value. They will be a starting point from which to spring new insights, but hardly a set of conclusive statements. Hayekian economics is a whole realm to be conquered, and precisely for that reason most of the quest remains undone.
For example, if we assume – as the said post does – that ultra-loose monetary policy enlarges inequality and engenders political instability, then we are bound to look for a monetary policy that delivers, or at least does not prevent, an optimal level of inequality. As explained in the linked lecture, the definition of such a concept might differ depending on whether one takes an economic, a political, or a moral perspective.
Here is where I think the works of F.A. Hayek still have much to give to our inquiries: the question is not where to place an optimal level of inequality, but how to discover the conditions under which a certain level of inequality appears to us as legitimate, or at least tolerable. This is not a matter of quantities, but of qualities. Our mission is to discover the mechanism by which the notions of fairness, justice, or even order are formed in our beliefs.
Perhaps that is the deeper meaning of the order or equilibrium that is reached when, to use the terminology of Wicksell and of Hayek’s early writings, the natural and market interest rates coincide: a state of affairs in which most of the agents’ expectations could prove correct. The solution does not depend upon a particular public policy, but on providing an abstract institutional structure in which each individual decision can profit the most from the spontaneous order of human interaction.
For some time now, I have been skeptical of the narrative that has emerged regarding income inequality in the West in general and in the US in particular. That narrative, which I label UCN for U-Curve Narrative, simply asserts that inequality fell from a high level in the 1910s down to a trough in the 1970s and then back up to levels comparable to those in the 1910s.
To be sure, I do believe that inequality fell and rose over the 20th century. Very few people will disagree with this contention. Like many others I question how “big” is the increase since the 1970s (the low point of the U-Curve). However, unlike many others, I also question how big the fall actually was. Basically, I do think that there is a sound case for saying that inequality rose modestly since the 1970s for reasons that are a mixed bag of good and bad (see here and here), but I also think that the case that inequality did not fall as much as believed up to the 1970s is a strong one.
The reasons for this position of mine relate to my passion for cliometrics. The quantitative illustration of the past is a crucial task. However, data are only as good as the questions they seek to answer. If I wonder whether feudal institutions (like seigneurial tenure in Canada) hindered economic development and I look only at farm incomes, then I might capture a good part of the story; but since farm income is not total income, I am missing a part of it. Had I asked whether feudal institutions hindered farm productivity, the data would have been more relevant.
The same goes for income inequality, as I argue in this new working paper (with Phil Magness, John Moore and Phil Schlosser), which is basically a list of criticisms of the Piketty-Saez income inequality series.
For the United States, income inequality measures pre-1960s generally rely on tax-reporting data. From the get-go, one has to recognize that this sort of system (since it involves taxes) does not promote “honest” reporting. What is less well known is that tax compliance enforcement was very lax pre-1943 and highly sensitive to the wide variations in tax rates and personal exemptions during the period. Basically, the chances that you will honestly report your income at a top marginal rate of 79% are lower than had that rate been 25%. Since rates did vary from the high 70s at the end of the Great War, to the mid-20s in the 1920s, and back up during the Depression, this implies a lot of volatility in the quality of reporting. As such, the evolution measured from tax data will capture tax-rate-induced variations in reported income (especially in the pre-withholding era, when there were numerous large loopholes and tax-sheltered income vehicles). The shift from high to low taxes in the 1910s and 1920s would have implied a larger-than-actual change in inequality, while the shift from low to high taxes in the 1930s would have implied the reverse. Correcting for the artificial changes caused by tax rate changes would, by definition, flatten the evolution of inequality – which is what we find in our paper.
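The mechanical effect of rate swings on reported income can be sketched with the standard elasticity-of-taxable-income functional form, reported = true × (1 − t)^e. The elasticity value and incomes below are hypothetical illustrations, not estimates from our paper:

```python
# Stylized sketch of tax-rate-induced reporting variation using the
# standard elasticity-of-taxable-income form: reported = true*(1-t)^e.
# The elasticity (0.4) and the income level are hypothetical.

def reported_income(true_income, marginal_rate, elasticity=0.4):
    """Income that shows up in tax data when reporting responds to
    the net-of-tax rate, even though true income is unchanged."""
    return true_income * (1 - marginal_rate) ** elasticity

true_top = 100.0  # true top-group income, held fixed across years

# Same true income, very different reported income as the top
# marginal rate swings from the high-70s to the mid-20s and back up:
for rate in (0.77, 0.25, 0.63):
    print(f"top rate {rate:.0%}: reported {reported_income(true_top, rate):.1f}")
```

Under this (hypothetical) elasticity, a swing from a 77% to a 25% top rate raises reported top income by more than half with no change in true inequality, which is the sense in which the measured fall and rise are amplified by the tax schedule itself.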
However, we go further than that. Using the state of Wisconsin – which had more stringent compliance rules for its state income tax while also having lower and much more stable tax rates – we find different levels and trends of income inequality than with the IRS data (a point that Phil Magness and I expanded on here). This alone should fuel skepticism.
Nonetheless, this is not the sum of our criticisms. We also find that the denominator frequently used to arrive at the share of income going to top earners is too low, and that the justification for that denominator rests on a mathematical error (see pages 10-12 in our paper).
Finally, we point out a large accounting problem. Before 1943, the IRS Statistics of Income were based on net income; after 1943, they shifted to a definition of adjusted gross income. As such, the two series are not directly comparable and need to be adjusted to be linked. Piketty and Saez, in designing their own adjustment, made seemingly reasonable assumptions (mostly that the rich took the lion’s share of deductions). However, when we searched for and found evidence on how deductions were actually distributed, it did not match their assumptions. The evidence suggests that lower income brackets claimed large deductions, which diminishes the adjustment needed to harmonize the two series.
Taken together, our corrections yield systematically lower and flatter estimates of inequality that do not contradict the idea that inequality fell during the first half of the 20th century (see image below). However, they suggest that the UCN is incorrect and that the curve looks more like a small bowl (I call it the paella-bowl curve of inequality, but my co-authors prefer the J-curve idea).
This is the question that Phil Magness and I have been asking for some time, and we have now assembled our thoughts and measures in the first of a series of papers. In this paper, we take issue with the quality of the measurements extracted from tax records during the interwar years (1918 to 1941).
More precisely, we point out that tax rates at the federal level fluctuated wildly and were at relatively high levels. Since most of our inequality measures are drawn from the federal tax data contained in the Statistics of Income, this is problematic. Indeed, high tax rates might deter honest reporting while rapidly changing rates will affect reporting behavior (causing artificial variations in the measure of market income). As such, both the level and the trend of inequality might be off. That is our concern in very simple words.
To assess whether we are worrying over nothing, we looked for different sources against which to test the robustness of inequality estimates based on the federal tax data. We found what we were looking for in Wisconsin, whose tax rates were much lower (never above 7%) and less variable than those at the federal level. As such, we had the perfect dataset to check for measurement problems in the data itself (through a varying selection bias).
From the Wisconsin data, we find good reasons to be skeptical of the existing inequality measures based on federal tax data. Comparing the IRS data for Wisconsin with the data from the state income tax shows a different pattern of evolution and a different level (especially when deductions are accounted for). First, the level is always lower with the WTC (Wisconsin Tax Commission) data. Second, the trend differs for the 1930s.
I am not sure what this means for the true level of inequality in the period. However, it suggests that we ought to be careful about the estimates on offer when two data sources of a similar nature (tax data) with arguably minor conceptual differences (low and stable tax rates) tell dramatically different stories. Maybe it’s time to try to further improve the pre-1945 inequality series.
Taking public choice logic seriously means considering the political distortions/impediments to proposed policy. Taking inequality seriously is the flip side of that. Perceptions of (and attitudes towards) inequality matter and libertarians (and conservatives) would do well to acknowledge it.
I suspect that the problem is that 1) (like any ideology) we’ve got a blind spot, and inequality is in that spot. 2) Our liberal friends can see into that blind spot. 3) They’ve got a blind spot of their own that leads them to make silly policy prescriptions (e.g. ignoring the public choice roots of inequality and instead calling for policies that would reduce growth). And as a result, 4) we’re turned off by discussions of inequality before considering them.
Okay massive disagreement here:
A: Inequality is not something “measurable” in the sense of utility. I chose to be an economist. My income is X% below that of my wife, who went to school for fewer years than I did; her income grows faster than mine and she will live longer than me (in probabilistic terms, given M/F life-expectancy differences). By that definition, our couple is an unequal one and growing more unequal. Yet I would not trade my job for hers even if hers were twice as remunerative (she is an attorney). I chose a path of lesser income because it made me happy. Income maximization was, in that case, not synonymous with utility maximization. By definition, rich societies will have more cases like that, since gains in marginal utility may not be associated with marginal gains in monetary income. See the issue of the backward-bending labor supply curve.
B: The literature linking growth to inequality is VERY weak. Look at the empirical papers: the results often depend on the choice of variables and the time window. They NEVER account for what I mentioned in point A. More importantly, there is NO THEORETICAL LINK with neoclassical theory here (with the notable exception of Herb Gintis and Sam Bowles – and I am working on a paper tackling their logic) that is axiomatically consistent. An empirical observation without a logically sound theory (the most repeated is the general Keynesian argument about consumption, but that is very weak, and the rebuttal in the theoretical papers is powerful) is basically rubbish.
C: The Great Gatsby Curve is also rubbish, since most of the past observations rest on the weird assumption that father-son mobility is a proper estimate to compare with modern estimates. You can consult the very convincing rebuttals by Scott Winship. Moreover, the Great Gatsby Curve is again a case of empirical observation without theory. I don’t need any of this story to see that mobility is down (modestly) at the same time that labor market restrictions are up.
There is more discussion, too.