Interwar US inequality data are deeply flawed

For some years now, Phil Magness and I have been working on improving the existing income inequality estimates for the United States prior to World War II. One of the most important points we make concerns why we, as economists, ought to take data assumptions seriously. One of the most tenacious stylized facts (one we do not exactly dispute) is that income inequality in the United States followed a U-curve trajectory over the 20th century: inequality was high in the early 1920s, descended gradually until the 1960s, and then started to pick up again. That stylized fact comes from the data work of Thomas Piketty and Emmanuel Saez (first image below). However, from the work of Auten and Splinter and of Mechling et al., we know that the post-1960 increase as measured by Piketty and Saez is somewhat overstated (see second image below). While those criticisms suggest a milder post-1960 increase, Phil Magness and I believe that the real action is on the left side of the U-curve (pre-1960).

[First figure: the Piketty-Saez U-curve of US income inequality]

[Second figure: the milder post-1960 increase found by Auten-Splinter and Mechling et al.]

Why? Here is our case made simple: the IRS data used to measure inequality up to at least 1943 are deeply flawed. In another paper, recently submitted, I argued that some of the assumptions made by Piketty and Saez were flawed; that argument did not question the validity of the underlying data itself. This time, we used the IRS’s state-level tabulations of federal returns to compute state-level inequality and compared it against data from state income taxes (e.g. the IRS data for Wisconsin versus Wisconsin’s own personal income tax data). What we found is that the IRS data overstate the level of inequality by an appreciable margin.

Why is that? There are two reasons. The first is that the federal tax system saw wide fluctuations in tax rates between 1917 and 1943, which meant wide fluctuations in tax compliance. Previous scholars such as Gene Smiley pointed out that when tax rates fell, compliance rose, so that measured inequality rose. But measured inequality is not true inequality, because “off-the-books” income (which went unmeasured) divorced the two. This is bound to generate false fluctuations in measurement as long as tax compliance was essentially voluntary (as it was until withholding began in 1943). State income taxes do not face that problem, as their rates tended to be more stable throughout the period. The same is true of personal exemptions.

The second reason concerns the manner in which the federal data are presented. The IRS reported the number of taxpayers in wide categories of net taxable income (rather than gross income). For example, the categories run from $0 to $1,000 per filer, then increase in slices of $1,000 up to $10,000, then in slices of $5,000, and so on. This makes it hard to pinpoint where each fractile of top earners begins. This is not true of all state income tax systems: Delaware, for example, sliced the data into categories of $100 and $500, so the fractile thresholds can be pinpointed much more easily. More importantly, most state income tax systems reported the breakdown for both net taxable income and gross income. This is crucial because Piketty and Saez need to adjust the pre-1943 IRS data – which are in net income – so that they tie properly with the post-1943 IRS data – which are in adjusted gross income. Absent this correction, they would get an artificial increase in inequality in 1943. The problem is that the data for this adjustment are scant and their proposed solution has never been subjected to validation.
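To see why bracket width matters, consider the standard workaround in this literature: assume the upper tail of incomes is Pareto-distributed and interpolate the fractile thresholds from the bracket counts. Here is a minimal sketch of that interpolation (the bracket counts and population total below are invented for illustration, not actual SOI figures):

```python
import math

# Hypothetical tabulation: (lower bound of bracket, number of returns above it).
# Wide federal-style slices force interpolation over large ranges; narrow state
# slices (e.g. Delaware's $100/$500 categories) pin the thresholds down better.
brackets = [(5_000, 120_000), (10_000, 40_000), (25_000, 8_000), (50_000, 2_000)]
total_tax_units = 1_500_000  # assumed size of the potential tax population

def pareto_threshold(p, brackets, total):
    """Income level above which a fraction p of all tax units lies,
    assuming a Pareto upper tail: N(y) = C * y**(-a)."""
    target = p * total
    # find two consecutive bracket bounds whose counts straddle the target
    for (lo, n_lo), (hi, n_hi) in zip(brackets, brackets[1:]):
        if n_hi <= target <= n_lo:
            a = math.log(n_lo / n_hi) / math.log(hi / lo)  # local Pareto coefficient
            return lo * (n_lo / target) ** (1 / a)
    raise ValueError("target fractile falls outside the tabulated brackets")

print(f"Estimated top 1% threshold: ${pareto_threshold(0.01, brackets, total_tax_units):,.0f}")
```

The wider the slices, the more the estimate leans on the Pareto assumption; with $100 slices, the threshold can be read almost directly off the table.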

What do our data say? We compared them to the work of Mark Frank et al., who used the same methodology and sources as Piketty and Saez but at the state level. The image below pretty much sums it up! If a point lies above the red line, the IRS data overestimate inequality; if below, they underestimate it. Overall, the bias tends towards overestimation. In fact, when we investigated all of the points separately, we found that those below the red line result merely from the way Delaware’s (DE) data were adjusted to convert net income into gross income. When we compared only net income-based measures of inequality, none fall below the red line except Delaware from 1929 to 1931 (and by much smaller margins than shown in the figure below).

[Figure: IRS-based versus state tax-based inequality estimates, by state and year]

In our paper, we highlight how the state-level data are conceptually superior to the federal-level data. The problem we face is that we cannot convert those measures into adjustments to the national level of inequality. All our data do is suggest which way the bias cuts. While we find this unfortunate, we point out that the bias would unavoidably alter the left side of the curve in the first graph of this blog post: the initial level of inequality would be lower than currently estimated. Thus, combining this with the criticisms made for the post-1960 era, we may be in the presence of a U-curve that looks more like a shallow tea saucer than the pronounced U generally highlighted. The U-curve form itself is not invalidated (inequality still looks like a quadratic function of time), but the shape of the curve’s tails is dramatically changed.

On the “tea saucer” of income inequality since 1917

I often disagree with many of the details that underlie the arguments of Thomas Piketty and Emmanuel Saez. That being said, I am also a great fan of their work and of them in general. In fact, both have made contributions to economics that I can only envy. To be fair, their U-curve of inequality is pretty much a well-confirmed fact by now: everyone agrees that the period from 1890 to 1929 was a high point of inequality, that inequality leveled off until the 1970s, and that it then picked up again.

Nevertheless, while I am convinced of the curvilinear evolution of income inequality in the United States as depicted by Piketty and Saez, I am not convinced by the amplitudes. In their 2003 article, the U-curve of inequality really looks like a “U” (see image below). Since that article, many scholars have investigated the extent of the increase in inequality post-1980 (circa). Many have attenuated the increase, but they still find an increase (see here, here, here, here, here, here, here, here, here). The problem is that everyone has been considering the increase – i.e. the right side of the U-curve. Little attention has been devoted to the left side, even though that is where data problems most deserve careful consideration before a stylized fact is established. That is the contribution I have been coordinating and working on for the last few months alongside John Moore, Phil Magness and Phil Schlosser.

[Figure: top income shares as depicted in Piketty and Saez (2003)]

To arrive at their proposed series of inequality, Piketty and Saez used the IRS Statistics of Income (SOI) to derive top income fractiles. However, the IRS SOI have many problems. The first is that between 1917 and 1943 there are many years in which less than 10% of the potential tax population filed a return. This prohibits the use of a top 10% income share in many years unless an adjustment is made. The second is that prior to 1943 the IRS reported net income, while after 1943 it reported adjusted gross income. As such, linking the post-1943 series with the pre-1943 series requires an additional adjustment. Piketty and Saez made some seemingly reasonable assumptions, but these have never been put to the test for sensitivity and robustness. This leaves aside issues of data quality (I am not convinced the IRS data are very good, as most income was self-reported pre-1943, a period with wildly varying tax rates). The question here is: how good are the assumptions?

What we did is verify each assumption to see its validity. The first one we tackle is the adjustment for the low number of returns. To make their adjustments, Piketty and Saez used the fact that single households and married households filed in different proportions relative to their total populations. Their idea was to take a year with a large number of returns and use its ratio of single to married filers to adjust the series. The year they used is 1942. This is problematic, as 1942 is a war year with self-reporting, during which large numbers of young American men were abroad fighting. Using 1941, the last US peacetime year, instead shows dramatically different ratios, and using those ratios knocks a few points off the top 10% income share. Why did they use 1942? Their argument was that there was simply not enough data to make the correction in 1941. They point to a special tabulation in the 1941 IRS-SOI of 112,472 1040A forms from six states, which was not deemed sufficient to make the corrections. However, later in the same document, there is a larger and sufficient sample of 516,000 returns from all 64 IRS collection districts (roughly 5% of all forms). By comparison, the 1942 sample Piketty and Saez used for their correction had only 455,000 returns. Given the war year and the sample sizes, we believe that 1941 is a better year for the adjustment.
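To give a sense of how the reference year feeds into the series, here is a stylized stand-in for the correction (it simplifies the actual Piketty-Saez procedure, and every count and filing propensity below is invented):

```python
# Scale observed returns in a thin-coverage year up to estimated total tax units,
# using the single/married filing propensities of a well-covered reference year.

def estimated_tax_units(single_returns, married_returns, propensities):
    """Inflate observed returns into total tax units, given the share of
    each group assumed to file (taken from the reference year)."""
    p_single, p_married = propensities
    return single_returns / p_single + married_returns / p_married

obs = dict(single_returns=400_000, married_returns=1_100_000)

# Hypothetical propensities: 1942 (wartime, many young single men overseas)
# versus 1941 (last peacetime year, with the larger 516,000-return SOI sample).
units_1942_ref = estimated_tax_units(**obs, propensities=(0.20, 0.55))
units_1941_ref = estimated_tax_units(**obs, propensities=(0.30, 0.50))

print(f"Implied tax units with 1942 ratios: {units_1942_ref:,.0f}")  # 4,000,000
print(f"Implied tax units with 1941 ratios: {units_1941_ref:,.0f}")  # 3,533,333
```

Because the implied total of tax units sits in the denominator of every top-share calculation, swapping the reference year moves the whole series, which is why the choice between 1941 and 1942 is not an innocuous detail.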

Second, we also questioned the smoothing method used to link the net income-based series with the adjusted gross income-based series (i.e. the pre-1943 and post-1943 series). The implied adjustment for deductions made by Piketty and Saez is actually larger than all the deductions that could be claimed under the definition of adjusted gross income – a sign of overshoot on their part. Using the limited data available on deductions by income group, and making some (very conservative) assumptions to move further back in time, we found that adjusting for “actual deductions” yields a lower level of inequality than the fixed multipliers which Piketty and Saez applied pre-1943.
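A toy comparison of the two approaches makes the direction of the bias clear (all figures below are hypothetical, including the multiplier, which only stands in for the fixed factors Piketty and Saez calibrated):

```python
# Two ways to convert a top bracket's net income into adjusted gross income.

net_income_top = 100.0        # net income reported by a top group (index value)

# Fixed-multiplier approach: gross = net * k, with k calibrated once
# and applied to every pre-1943 year.
k = 1.25
gross_fixed = net_income_top * k

# "Actual deductions" approach: add back only the deductions this group
# actually claimed that fall under the AGI definition.
actual_deductions_top = 12.0  # hypothetical: smaller than the multiplier implies
gross_actual = net_income_top + actual_deductions_top

print(f"fixed multiplier: {gross_fixed:.1f}  vs  actual deductions: {gross_actual:.1f}")
# If the multiplier overshoots true deductions, the top group's gross income
# (and hence measured inequality in the net-income years) is overstated.
```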

Third, we question their justification for not using the Kuznets income denominator. They argued that Kuznets’ series yielded an implausible figure because, in 1948, its use implied a greater income for non-filers than for filers. However, this is not true of all years. In fact, it is only true after 1943. Before 1943, the implied income of non-filers is proportionally equal to the one they themselves use post-1944 to impute the income of non-filers. This is largely the result of the change in accounting definitions: incomes were reported as net income before 1943 and as gross income afterwards. This matters because the stylized fact of a pronounced U-curve is heavily sensitive to the assumption made regarding the denominator.
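The mechanics of that sensitivity are easy to state. Writing a top share as a ratio makes it explicit:

$$\text{top share} \;=\; \frac{Y_{\text{top}}}{Y_{\text{filers}} + Y_{\text{non-filers}}}$$

Holding the numerator fixed, any assumption that raises the imputed income of non-filers inflates the denominator and mechanically lowers every top share; since non-filers dominate the pre-1943 population, the left side of the U-curve hinges directly on this choice.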

These three adjustments are pretty important in terms of overall results (see image below). The pale blue line is that of Piketty and Saez as depicted in their 2003 paper in the Quarterly Journal of Economics. The other blue line just below it shows the effect of the deductions adjustment only (the adjustment for missing returns affects only the top 10% income share). All the other lines that mirror these two below (with the exception of the darkest blue line, which is the original Kuznets inequality estimate) compound our corrections with three potential corrections of the denominator. The U-curve still exists, but it is not as pronounced. When you take the adjustments made by Mechling et al. (2017) and Auten and Splinter (2017) for the post-1960 period (green and red lines) and link them with ours, you can still see the curvilinear shape, but it looks more like a “tea saucer” than a pronounced U-curve.

In a way, I see this as a simultaneous complement to the work of Richard Sutch and to that of Piketty and Saez: the U-curve still exists, but the timing and pattern are slightly more representative of history. This was a long paper to write (and it is a dry read given the amount of methodological discussion), but it was worth it in order to improve the state of our knowledge.

[Figure: corrected top income share series versus Piketty-Saez, Kuznets, Mechling et al. and Auten-Splinter]

On Nancy MacLean’s Thesis

Nancy MacLean’s Democracy in Chains continues to yield surprises. Just a few days ago, Phil Magness showed that a “typo” plays a significant role in MacLean’s thesis.

Despite all this detailed scrutiny of her work, it is not clear that MacLean understands the type of error being pointed out about her book. There are two types of errors regarding a thesis: (1) the thesis is correctly defined, but the proof is flawed; or (2) the thesis is incorrectly defined, in which case there is no need to test it at all. What MacLean and her supporters don’t seem to realize is that Democracy in Chains is built on the second error, not the first. Instead of ignoring her critics, MacLean should step up to the academic game and engage accordingly. Her behavior is very telling: if her research is so solid, what’s the problem?

Consider the following example. Say you find a book built on the thesis that Milton Friedman was a French communist who lived in the 18th century. You don’t need to read the book to know that the author is wrong: it would be wrong both factually (Friedman did not live in the 18th century and was not French) and theoretically (Friedman was not a communist). That is how wrong MacLean’s thesis on Buchanan is to anyone with even minimal exposure to his work and to Public Choice.

There are a few reasons why someone might still read Democracy in Chains: because the book preaches to one’s choir; to try to understand how such a misguided thesis can actually be entertained by an author with so little knowledge of, and expertise on, Buchanan and Public Choice; and so on. But one reason MacLean thinks her critics are unwilling to consider her thesis is that she is unaware her error is the second one mentioned above. Her thesis is just wrong from the get-go.

Against Guilt by Historical Association: A Note on MacLean’s “Democracy in Chains”

It’s this summer’s hottest pastime for libertarian-leaning academics: finding examples of bad scholarship in Nancy MacLean’s new book Democracy in Chains. For those out of the loop, MacLean, a history professor at Duke University, argues in her book that the Nobel prize-winning public choice economist James Buchanan was part of some Koch-funded vast right-libertarian conspiracy to destroy democracy, inspired by southern racist agrarians and Confederates like John Calhoun. This glowing review from NPR should give you a taste of her argument, which often has the air of a bizarre conspiracy theory. Unfortunately, to make these arguments she has had to cut some huge corners in her federally-funded research. Here’s a round-up of her dishonesty:

  • David Bernstein points out how MacLean’s own sources contradict her claims that libertarian Frank Chodorov disagreed with the ruling in Brown v. Board.
  • Russ Roberts reveals how badly MacLean took Tyler Cowen out of context, misquoting him to attribute to him a view he was arguing against.
  • David Henderson finds that she did the same thing to Buchanan.
  • Steve Horwitz points out how wildly out-of-context MacLean took a quote from Buchanan on public education.
  • Phil Magness reveals how wildly MacLean needed to reach to tie Buchanan to the southern agrarians via his use of the word “Leviathan.”
  • Phil Magness, again, reveals MacLean needed to do the same thing to tie Buchanan to Calhoun.
  • David Bernstein finds several factual errors in MacLean’s telling of the history of George Mason University.

I’m sure there is more to come. But, poor scholarship and complete dishonesty in source citation aside, an important question needs to be asked about all this: even if MacLean didn’t need to reach so far to paint Buchanan in such a negative light, why should we care?

I admittedly haven’t read her book yet (so I could be wrong), but from the way even positive reviewers describe it and the way she talks about it herself in interviews (see around 15:30 of that episode), one can infer that she is in no way interested in a nuanced analytical critique of Buchanan’s public choice models or of his arguments in favor of constitutional restrictions on democratic majorities. Her argument, if you can call it that, seems to be something like this:

  1. Democracy and majority rule are inherently good.
  2. James Buchanan wants stricter restrictions on democratic majority rule, and so did some Southern racists.
  3. Therefore, James Buchanan is a racist, evil corporate shill.

Even if she could establish premise 2 without reaching so far, why should we care? Every ideology has elements that can be tied to some seedy part of the past; that does not make the arguments justifying those ideologies wrong. For example, the pro-choice and women’s health movement has its roots in attempts to market birth control to race-based eugenicists (though these links, like MacLean’s attempts, aren’t as insidious as some on the modern right make them out to be), yet that does not mean modern women’s health advocates are racial eugenicists. Early advocates of the minimum wage argued for wage floors for racist and sexist reasons, yet nobody really thinks (or, at least, should think) modern progressives have dubious racist motives for wanting to raise the minimum wage. The American Economic Association was founded by racist eugenicists of the American Institutionalist school, yet nobody thinks modern economists are racist or that anyone influenced by the institutionalists today is a eugenicist. The Democratic Party used to be the party of the KKK, yet nobody (except the most obnoxious of Republican partisans) thinks that is at all relevant to the DNC’s modern platform. Heidegger was heavily linked to Nazism and anti-Semitism, yet it is impossible to write off and ignore his philosophical contributions and remain intellectually honest.

Similarly, even if Buchanan did read Calhoun and it got him thinking about constitutional reform, that does not at all mean he agreed with Calhoun on slavery or that modern libertarian-leaning public choice theorists are neo-confederates, and it has even less to do with the merits of Buchanan’s analytical critiques of how real-world democracies function. In fact, as Vincent Geloso has pointed out here at NOL, Buchanan has given modern scholars the analytical tools to critique racism.

Intellectual history is messy and complicated, and can often lead to links we might—with the benefit of historical hindsight—view as situated in an unsavory context. However, as long as those historical lineages have little to no bearing on people’s motivations for making similar arguments or being intellectual inheritors of similar ideological traditions today (which isn’t always the case), there is no relevance to modern discourse other than perhaps idle historical curiosity. These types of attempts to cast guilt upon one’s intellectual opponents through historical association are, at best, another intellectually lazy version of the genetic fallacy (which MacLean also loves to commit when she starts conspiratorially complaining about Koch Brothers funding).

Just tell me if this sounds like a good argument to you:

  1. Historical figure X makes a similar argument Y to what you’re making.
  2. X was a racist and was influenced by some racists.
  3. Therefore, Y is wrong.

If it doesn’t, you’re right: 3 doesn’t follow from 1 and 2 (and in MacLean’s case, 1 is a stretch).

Please, if you want to criticize someone’s arguments, actually criticize their arguments; don’t rely on a tabloid version of intellectual history to dismiss them, especially when that intellectual history is a bunch of dishonest misquotations and hand-waving associations.

Is the U-curve of US income inequality that pronounced?

For some time now, I have been skeptical of the narrative that has emerged regarding income inequality in the West in general and in the US in particular. That narrative, which I label the UCN (for U-Curve Narrative), asserts that inequality fell from a high level in the 1910s down to a trough in the 1970s and then rose back to levels comparable to those of the 1910s.

To be sure, I do believe that inequality fell and rose over the 20th century. Very few people will disagree with that contention. Like many others, I question how “big” the increase since the 1970s (the low point of the U-curve) really is. However, unlike many others, I also question how big the fall actually was. Basically, I think there is a sound case that inequality rose modestly since the 1970s for a mixed bag of good and bad reasons (see here and here), but I also think the case that inequality did not fall as much as believed up to the 1970s is a strong one.

My reasons for this position relate to my passion for cliometrics. The quantitative illustration of the past is a crucial task. However, data are only as good as the questions they seek to answer. If I wonder whether feudal institutions (like seigneurial tenure in Canada) hindered economic development and I look only at farm incomes, I might capture a good part of the story; but since farm income is not total income, I am missing part of it. Had I asked whether feudal institutions hindered farm productivity, the data would have been more relevant.

The same goes for income inequality, as I argue in this new working paper (with Phil Magness, John Moore and Phil Schlosser), which is basically a list of criticisms of the Piketty-Saez income inequality series.

For the United States, income inequality measures pre-1960s generally rely on tax-reporting data. From the get-go, one has to recognize that such a system (since it involves taxes) does not promote “honest” reporting. What is less well known is that tax compliance enforcement was very lax pre-1943 and highly sensitive to the wide variations in tax rates and personal exemptions during the period. Basically, the chances that you will honestly report your income at a top marginal rate of 79% are lower than if that rate were 25%. Since rates did vary from the high 70s at the end of the Great War to the mid-20s in the 1920s and back up during the Depression, that implies a lot of volatility in the quality of reporting. As such, the evolution measured from tax data will capture tax-rate-induced variations in reported income (especially in the pre-withholding era, when there existed numerous large loopholes and tax-sheltered income vehicles). The shift from high to low taxes in the 1910s and 1920s would have implied a larger-than-actual change in inequality, while the shift from low to high taxes in the 1930s would have implied the reverse. Correcting for the artificial changes caused by tax rate changes would, by definition, flatten the evolution of inequality – which is what we find in our paper.
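One stylized way to formalize the worry, in the spirit of the taxable-income elasticity literature (the elasticity here is an assumed illustrative parameter, not an estimate):

$$Y^{\text{reported}}_{t} = Y^{\text{true}}_{t}\,(1-\tau_t)^{\varepsilon}, \qquad \varepsilon > 0$$

With the top marginal rate $\tau_t$ swinging between the mid-0.70s and the mid-0.20s over 1917-1943, reported top incomes (and thus measured top shares) fluctuate even when true incomes do not, and the swings are larger the bigger the reporting elasticity $\varepsilon$.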

However, we go farther than that. Using the state of Wisconsin, which had more stringent compliance rules for its state income tax as well as lower and much more stable tax rates, we find different levels and trends of income inequality than with the IRS data (a point which Phil Magness and I expanded on here). This alone should fuel skepticism.

Nonetheless, this is not the sum of our criticisms. We also find that the denominator frequently used to arrive at the share of income going to top earners is too low, and that the justification for that denominator rests on a mathematical error (see pages 10-12 of our paper).

Finally, we point out a large accounting problem. Before 1943, the IRS published the Statistics of Income on a net income basis; after 1943, it shifted to adjusted gross income. As such, the two series are not comparable and must be adjusted to be linked. Piketty and Saez, in devising their adjustment, made seemingly reasonable assumptions (mostly that the rich took the lion’s share of deductions). However, when we searched for and found evidence of how deductions were actually distributed, it did not match their assumptions: lower income brackets claimed large deductions, which diminishes the adjustment needed to harmonize the two series.

Taken together, our corrections yield systematically lower and flatter estimates of inequality, which do not contradict the idea that inequality fell during the first half of the 20th century (see image below). However, they suggest that the UCN is incorrect and that the curve may be more of a small bowl (I call it the paella-bowl curve of inequality, but my co-authors prefer the J-curve idea).

[Figure: corrected inequality estimates versus the Piketty-Saez series]

Can we trust US interwar inequality figures?

This is the question that Phil Magness and I have been asking for some time, and we have now assembled our thoughts and measures in the first of a series of papers. In this paper, we take issue with the quality of the measurements that can be extracted from tax records during the interwar years (1918 to 1941).

More precisely, we point out that tax rates at the federal level fluctuated wildly and stood at relatively high levels. Since most of our inequality measures are drawn from the federal tax data contained in the Statistics of Income, this is problematic: high tax rates may deter honest reporting, while rapidly changing rates will affect reporting behavior (causing artificial variations in measured market income). As such, both the level and the trend of inequality might be off. That is our concern in very simple words.

To assess whether we are worrying over nothing, we looked for different data sources against which to test the robustness of the inequality estimates based on federal tax data. We found what we were looking for in Wisconsin, whose tax rates were much lower (never above 7%) and less variable than those at the federal level. As such, we had the ideal dataset for detecting measurement problems in the federal data (through a varying selection bias).

From the Wisconsin data, we find good reasons to be skeptical of the existing inequality measures based on federal tax data. Comparing the IRS data for Wisconsin with the data from the state income tax shows a different pattern of evolution and a different level (especially once deductions are accounted for). First, the level is always lower in the Wisconsin Tax Commission (WTC) data. Second, the trend differs for the 1930s.

[Table 1: top income shares in Wisconsin, IRS data versus Wisconsin Tax Commission data]

I am not sure what this means for the true level of inequality during the period. However, it suggests that we ought to be careful about the estimates advanced if two data sources of a similar nature (tax data) with arguably minor conceptual differences (low and stable tax rates) tell dramatically different stories. Maybe it’s time to try to further improve the pre-1945 inequality series.

On the (big) conditions for a BIG

This week, EconTalk featured a conversation between Russ Roberts and Michael Munger (he of the famous Munger proviso, which I live by) about the Basic Income Guarantee (BIG). There is little in the discussion I ended up disagreeing with (though I would probably have said some things differently). However, I was disappointed that it ignored a point (which I made here in the past) that economists often overlook when discussing a BIG: labor demand.

In all discussions of the BIG, the debate always revolves around labor supply, on the assumption that a BIG will induce some leftward shift of the supply curve. While this is true, it is in my opinion irrelevant, because there is a more important effect: the rightward shift of the labor demand curve.

To make this argument, I must underline the conditions under which a BIG would produce this shift: a) the existing social welfare net must be inefficient relative to the alternative of simply giving money to people (shifting to a BIG must be Pareto-improving); b) the shift means that – for a fixed level of utility we wish to insure – the government needs to spend less; and c) the lower level of expenditures allows for a reduction in taxation. Under these three conditions, the labor demand curve could shift rightward. As I said when I initially made this point back in January 2016:

Yet, the case is relatively straightforward: current transfers are inefficient, basic income is more efficient at obtaining each unit of poverty reduction, basic income requires lower taxes, basic income means lower marginal tax rates, lower marginal tax rates mean more demand for investment and labor and thus more long-term growth and a counter-balance to any supply-side effect.

As I pointed out back then, the Canadian experiment with a minimum income (in Manitoba) led to substantial improvements in health outcomes, which meant lower expenditures on healthcare. As a result, b) is satisfied and (by definition) so is a). If, during a shift to a BIG, condition c) is also met, the entire discussion regarding supply effects becomes a mere empirical issue.
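To illustrate why, here is a minimal comparative-statics sketch in a purely invented linear labor market (none of the parameters come from data; the tax cut stands in for condition c) and the leftward supply shift for the usual labor-supply worry):

```python
# Linear labor market: workers receive w, firms pay w * (1 + t), where t is the
# tax rate financing the transfer system. All parameters are made up.

def equilibrium_employment(t, supply_shift=0.0):
    # Supply:  L_s = -10 + 2*w + supply_shift   (a BIG may shift this left, < 0)
    # Demand:  L_d =  90 - 3*w*(1 + t)          (firms respond to gross labor cost)
    # Equating the two and solving for the wage:
    w = (100 - supply_shift) / (2 + 3 * (1 + t))
    return -10 + 2 * w + supply_shift

before = equilibrium_employment(t=0.30)                      # costly welfare state
after = equilibrium_employment(t=0.18, supply_shift=-2.0)    # cheaper BIG

print(f"Employment before: {before:.1f}, after: {after:.1f}")  # 23.9 -> 24.8
# The tax cut shifts effective labor demand out; whether it outweighs the
# supply-side pullback is exactly the empirical question raised above.
```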

I mean, equilibrium effects are best analyzed when we consider both demand and supply…

P.S. I am not necessarily a fan of a BIG in practice. Theoretically, the case is sound. However, I can easily foresee policy drift whereby politicians expand the BIG beyond a sound level for electoral reasons (or tweak its details to add features that go against the spirit of the proposal). The debate on this issue between Kevin Vallier (arguing that this public choice reasoning is not relevant) and Phil Magness (arguing the reverse) is, in my opinion, pretty favorable to Magness. UPDATE: Jason Clemens over at the Fraser Institute pointed me to a study they produced on implementing a BIG in Canada. The practical challenges the study points to build on the Magness argument as applied to the Canadian context.