A Short Note on “Net Neutrality” Regulation

Rick Weber has a good note lashing out against net neutrality regulation. The crux of his argument is that enforced net neutrality imposes serious costs on consumers, who end up getting content more slowly. But even if we ignore his argument, what if regulation isn’t even necessary to preserve the benefits of net neutrality? (Never mind that there really never was net neutrality as proponents imagine it to begin with, and that the issue has less to do with fast lanes than with the fact that content providers need to go through a few ISPs.)

In fact, there is evidence that the “fast lane” model that net neutrality advocates imagine would happen in the absence of regulatory intervention is not actually profitable for ISPs to pursue, and has failed in the past. As Timothy Lee wrote for the Cato Institute back in 2008:

The fundamental difficulty with the “fast lane” strategy is that a network owner pursuing such a strategy would be effectively foregoing the enormous value of the unfiltered content and applications that come “for free” with unfiltered Internet access. The unfiltered Internet already offers a breathtaking variety of innovative content and applications, and there is every reason to expect things to get even better as the available bandwidth continues to increase. Those ISPs that continue to provide their users with faster, unfiltered access to the Internet will be able to offer all of this content to their customers, enhancing the value of their pipe at no additional cost to themselves.

In contrast, ISPs that chose not to upgrade their customers’ Internet access but instead devote more bandwidth to a proprietary “walled garden” of affiliated content and applications will have to actively recruit each application or content provider that participates in the “fast lane” program. In fact, this is precisely the strategy that AOL undertook in the 1990s. AOL was initially a proprietary online service, charged by the hour, that allowed its users to access AOL-affiliated online content. Over time, AOL gradually made it easier for customers to access content on the Internet so that, by the end of the 1990s, it was viewed as an Internet Service Provider that happened to offer some proprietary applications and content as well. The fundamental problem requiring AOL to change was that content available on the Internet grew so rapidly that AOL (and other proprietary services like CompuServe) couldn’t keep up. AOL finally threw in the towel in 2006, announcing that the proprietary services that had once formed the core of its online offerings would become just another ad-supported website. A “walled garden/slow lane” strategy has already proven unprofitable in the marketplace. Regulations prohibiting such a business model would be surplusage.

It looks like it might be the case that Title II-style regulation is a solution in search of a problem. Add to it the potential for ISPs and large companies to lobby regulators to erect other barriers to entry to stop new competitors, like what happened with telecommunications companies under Title II and railroad companies under the Interstate Commerce Commission, and the drawbacks of pure net neutrality Rick pointed out, and it looks like a really bad policy indeed.

Vincent Geloso Interviewed for his Work on the War on Drugs

Regular readers of NOL know that fellow notewriter Vincent Geloso has done a lot of great work on the war on drugs. Dr. Geloso was recently on Students For Liberty’s podcast to discuss a paper he recently co-authored compiling data on the increased security costs caused by the war on drugs, which he previewed a few months ago on NOL. The wide-ranging discussion covers his findings, the secondary economic costs of the war on drugs, the psychology of policing under it, and a comparison of the drug war to Prohibition. Check out the discussion.

P.S. If you’re not already listening to SFL On Air, you should, and not just because I’m in charge of marketing for it.

What is the optimal investment in quantitative skills?

As I plan my summer, I am debating how to allocate my time in skill investment. The general advice I have gotten is to increase my quantitative skills and pick up as much about coding as possible. However, I am skeptical that I really should invest too much in quantitative skills. For starters, there are diminishing returns.

More importantly, though, artificial intelligence and computing power are improving every day. When my older professors were trained, they had to use IBM punch cards to run simple regressions. Today my phone has several times their computing power, to say nothing of my PC. I would not be surprised if performing quantitative analysis were taken over entirely by AI within a decade or two. Even if it isn’t, it will surely become easier and require minimal knowledge of what is happening under the hood. In which case I should invest more heavily in skills that cannot be done by AI.

I am thinking, for example, of research design or substantive knowledge of research areas. AI can beat humans in chess, but I can’t think of any AI that has written a half-decent history text.

Mind you, I cannot abandon learning a base level of quantitative knowledge. AI may take over in the next decade, but I will be on the job market and seeking tenure before then (hopefully!).

A short note on God

I’ve been re-reading Neil Gaiman’s American Gods, thanks in large part to the new TV series on Starz based on the novel. Gaiman’s works always disappoint me in the end. Not because they’re bad (I can never put them down), but because I prefer two types of endings: feel-good cheesy ones and depressing I-hope-you-learned-your-lesson ones. Gaiman’s endings always make me think, and I don’t necessarily like that in my fiction.

Behavioral economists will tell me that I’m not actually disappointed in Gaiman’s work because I always come back for more, but I insist they’re wrong.

At any rate, American Gods got me thinking about, well, God. The God I grew up with was the Mormon God (I’m a reluctant atheist now). The Mormon God is a loving god. He is a man, with a wife, who views human beings as his children. Jesus Christ is his oldest son, and Lucifer is the second-oldest. Prior to human life on earth, a war erupted in Heaven between two factions, one led by Jesus and the other by Lucifer. (I highlight the word “war” because this is how Mormons describe what is essentially a philosophical argument. No blood was shed. It is a culture war. Mormons view themselves as God’s warriors. Because they view their God as a loving one, they smile and are nice to everybody, but they do so because they are at war.)

Jesus argued that everybody should have free choice in what they do on earth. All of his brothers and sisters (i.e. God’s children) should be free to make mistakes and sin. Jesus offered himself up as a sacrificial lamb for everybody. He would die on earth so that his brothers and sisters would get a chance to repent for their mistakes and sins.

Lucifer argued that everybody should have an outline of what to do in order to get back to Heaven. His brothers and sisters would already have their lives planned out for them when they were born, and there would be no room to make mistakes. Thus nobody would have to worry about making mistakes, and nobody would fail to make it back to Heaven.

At the end of their great debate, the people of Heaven, God’s children, the future inhabitants of earth, held a vote and decided to go with Jesus’ plan. Lucifer was butthurt, and left Heaven to found his own society, based on his plan, in Hell. According to the founders of the Mormon Church, about 1/3 of Heaven went with Lucifer. They didn’t have the courage to be tested through free agency. They wanted every aspect of their lives to be planned for them.

This portrait gives you a view, I hope, of a distinctly American God, born as he was in the early 19th century: democratic, freedom-loving, and generous. There is a lot to chew on here, I know. There are lots of questions, too, such as “why did we have to leave Heaven in the first place?” The answer I received to most of my questions was “faith.”

The Mormon God, though, was also a mass murderer. He killed lots of people (or had people killed) to make his point, more than once. How can a loving God commit (or support) such atrocities? Nothing adds up. It didn’t add up when I was 10, or 16, or 25.

I think the bad math explains polytheistic logic pretty well. Instead of an omnipotent god who loves you immensely and also slaughters human life in anger or jealousy, there is a god responsible for love, and one for war, one for greed, etc. You can simply worship as you please. This polytheistic framework leads directly to questions about self-discipline, though: If you have many gods for many motives, wouldn’t this make it easier to murder people without feeling guilty about it? To swindle people? Just ignore the gods of love or forgiveness or justice and pray to the gods of anger or expedience.

Reality doesn’t conform to this rough logic, though. India’s Hindu population is no less violent than, say, Muslim Albania or Christian Serbia (or secular Los Angeles). India’s merchant class is no less devout than the West’s or Islam’s. Religion can shape a person’s life, indeed a whole culture, but it has less of an effect on good and bad than we like to think.

An argument against Net Neutrality

First off, Comcast sucks. Seriously, screw those guys.

But let’s assume a can opener and see if that doesn’t help us find a deeper root problem. The can opener is competition in the ISP network. Let’s consider how the issue of Net Neutrality (NN) would play out in a world where your choice of ISP looked more like your choice of grocery store. Maybe a local district is set up to manage a basic grid and ISPs bid for usage of infrastructure (i.e. cities take a page out of the FCC’s playbook on spectrum rights). Maybe some technological advance makes it easy to set up decentralized wireless infrastructure. But let’s imagine that world.

Let me also assume a bit of regulation. The goal is to create some simple rules that make the market work a bit better. Two regulations that I’d like to see are 1) a requirement that ISPs have a public list of any websites they restrict access to*, and 2) a limitation on how complicated end user agreements can be. I’m not sure these things would be possible in my anarchist utopia, but in a second best world of governments I’m pretty comfortable with them.

Let’s also create a default contract for content providers with ISPs. “Unless otherwise agreed to, the relationship between content providers (e.g. YouTube, my crazy uncle Larry, the cafe around the corner, etc.) and ISPs is assumed to take the following form:…” An important clause would be “access/speed/etc. to your content will meet ______ specifications and cannot be negatively altered at the request of any third party.”

A similar default contract could be written for ISPs and end users. “Universal access under ____________ conditions will be provided and cannot be negatively altered at the request of any third party.”

Explicitly and publicly setting neutral defaults means we can get NN by default, but allow people the freedom to exchange their way out of it.

Do we need, or even want, mandated NN in that world? There are some clear potential gains to a non-neutral Internet. Bandwidth is a scarce resource, and some websites use an awful lot of it. YouTube and Netflix are great, but they’re like a fleet of delivery trucks creating traffic on the Information Super Highway. Letting them pay ISPs for preferred access is like creating a toll lane that can help finance increased capacity.

Replacing NN with genuine competition means that consumers who value Netflix can pay for faster streaming on that while (essentially) agreeing to use less of the net’s bandwidth for other stuff. We should encourage faster content, even if it means that some content gets that extra speed before the rest.

Competing ISPs would cater to the preferences and values of various niches. Some would seem benign: educational ISPs that provide streamlined access to content from the Smithsonian while mirroring Wikipedia content on their volunteer servers. Bandwidth for sites outside the network might come at some price per gigabyte, or it might be unlimited.

Other ISPs might be tailored for information junkies, with absolutely every website made available at whatever speed you’re willing to pay for. Family-friendly ISPs would refuse to allow porn on their part of the network (unsuccessfully, I suspect), but they couldn’t stop other ISPs from carrying anything. Obnoxious hate-group ISPs would probably exist too.

There would be plenty of bad to go along with the good, just like there is in a neutral network.

I’m okay with allowing ISPs to restrict access to some content as long as they’re honest about it. The Internet might not provide a universal forum for all voices, but that’s already the case. If you can’t pay for server space and bandwidth, then your voice can only be heard on other people’s parts of the Internet. Some of those people will let you say whatever you want (like the YouTube comments section), but others are free to ban you.

Similarly, big companies will be in a better position to provide their content, but that’s already the case too. Currently they can spend more on advertising, or spend more on servers that are physically closer to their audience. A non-neutral net opens up one more margin of competition: paying for preferred treatment. This means less need to inefficiently invest physical resources for the same preferred treatment. (Hey, a non-neutral net is Green!)

There might be reason to still be somewhat worried about the free speech implications of a non-neutral net. As consumers, we might prefer networks that suppress dissident voices. And those dissident voices might (in the aggregate) be providing a public good that we’d be getting less of. (I think that’s a bit of a stretch, but I think plenty of smart people would take the point seriously.) If that’s the case, then let’s have the Postal Service branch out to provide modestly priced, moderate speed Internet access to whoever wants it. Not great if you want to do anything ambitious like play Counter Strike or create a major news network, but plenty fine for reading the news and checking controversial websites.

tl;dr: I can imagine a world without Net Neutrality that provides better Internet service and better economizes on the resources necessary to keep the Information Super Highway moving. But it’s not the world we currently live in. What’s missing is genuine market competition. To get there would require gutting much of the existing regulatory framework and replacing it with a much lighter touch.

What I’m talking about seems like a bit of a pipe dream from where we’re sitting. But if we could take the political moment of the Net Neutrality movement and redirect it, we could plausibly have a free and competitive Internet within a generation.


*Or maybe some description about how they filter out websites… something like a non-proprietary description of their parental filters for ISPs that (attempt to) refuse adult content access.

Know your data, show your data: A rant

I am finishing up my first year of doctoral-level political science studies. During that time I have read a lot of articles – approximately 550. 11 courses. 5 articles a week on average. 10 weeks. 11×5×10=550. Two things have bothered me immensely when reading these pieces: (1) it’s unclear that authors know their data well, regardless of whether it is original or secondary data, and (2) the reader is rarely shown much about the data.

I take the stance that when you use a dataset you should know it inside and out. I do not just mean that you should have an idea of whether it is normally distributed or has outliers. I expect you to know who collected it. I expect you to know its limitations.

For example, I have read public opinion studies that sampled minority populations. Given that said populations are minorities, the researchers had to oversample in areas where those groups are overrepresented. The problem is that people who live near co-ethnics differ from those who live elsewhere. This restricts the external validity of results derived from the data, but I rarely see an acknowledgement of it.

Sometimes data is flawed but it’s the best we have. That’s fine. I’m not against using flawed data. I’m willing to buy most arguments if the underlying theory is well grounded. To be honest, I view statistical work as fluff most of the time. If I don’t really care about the statistics, why do I care whether the authors know their data well? I care because it serves as a way for authors to signal that they thought about their work. It’s similar to why artists sometimes place a “bowl of only green M&Ms” requirement in their performance contracts. Artists don’t know if their contracts were read, but if their candy bowl is filled with red Twizzlers they know something is wrong. I can’t monitor whether the authors took care with their manuscripts, but NOT seeing the bowl of green-only M&Ms gives me a heads-up that something is off.

Of those 500+ articles I have read, only a handful had a devoted descriptive statistics section. The logic seems to be that editors are encouraging that such material be placed in appendices to make articles more readable. I don’t buy that argument for descriptive statistics. Moving robustness checks or replications to the appendices is fine, but descriptive stats give me a chance to actually look at the data and feel less concerned that the results are driven by outliers. In my second-best world, all dependent variables and major independent variables would be graphed. If the data were collected in differing geographies, I would want the data mapped. In my first-best world, replication files with the full dataset and do-files would be mandatory for all papers.
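As a sketch of the minimum I have in mind, here is a toy descriptive pass (hypothetical data, standard library only) that reports the basic moments and flags suspicious observations with a crude z-score rule before any regression is run:

```python
import statistics

def describe(values, z_cutoff=2.5):
    """Basic descriptive statistics plus a crude z-score outlier flag."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    outliers = [v for v in values if abs(v - mean) / sd > z_cutoff]
    return {
        "n": len(values),
        "mean": mean,
        "sd": sd,
        "min": min(values),
        "max": max(values),
        "outliers": outliers,
    }

# Hypothetical dependent variable: mostly well-behaved, one extreme observation.
y = [2.1, 2.4, 1.9, 2.2, 2.0, 2.3, 1.8, 2.5, 2.2, 19.0]
summary = describe(y)
print(summary)  # the 19.0 observation is flagged as an outlier
```

A caveat: with small samples a single extreme point inflates the standard deviation and can mask itself, so a robust rule (based on the interquartile range, say) would be better in practice. The point is only that the check is cheap and belongs in the paper, not the appendix.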

I don’t think I am asking too much here. Hell, I am not even fond of empirical work. My favorite academic is Peter Leeson (GMU Econ & Law) and he rarely (ever?) does empirical work. As long as empirical work is being done in the social sciences, though, I expect a certain standard. Otherwise all we’re doing is engaging in math masturbation.

tl;dr: I don’t trust most empirical work out there. I’ll rant about excessive literature reviews next time.

James Cooley Fletcher

At the beginning of the 19th century there was almost no vestige of Protestantism in Brazil. From the 16th century the country was colonized almost exclusively by the Portuguese, who resisted the advance of Protestantism during the same period. Huguenots and Dutch Reformers tried to colonize parts of Brazil in the 16th and 17th centuries, but with little or no lasting effect. Only after the arrival of the Portuguese royal family in 1808 did this picture begin to change.

First came the English Anglicans. England rendered a great help to Portugal in the context of the Napoleonic Wars, and thus the subjects of the English crown gained religious freedom on Brazilian soil. This freedom soon extended to German Lutheran immigrants who settled mainly in the south of the country from the 1820s. However, it was only with the American missionary work, from the 1840s and 1850s, that Protestantism really began to settle in Brazil.

James Cooley Fletcher was one of the people who contributed most to the establishment of Protestantism in Brazil. Quoted frequently by historians, he is nevertheless little understood by most of them and little known to the general public. Born April 15, 1823 in Indianapolis, Indiana, he studied at seminaries in Princeton, Paris, and Geneva between 1847 and 1850 and first came to Brazil in 1852. In 1857 he published the first edition of Brazil and the Brazilians, a book which for many decades would be the main reference on Brazil in the English language.

Fletcher first came to Brazil as a chaplain of the American Seamen’s Friend Society and a missionary of the American and Foreign Christian Union. However, shortly after his arrival in the country, he made it his mission to bring Protestantism to the Brazilians. His approach, however, would be indirect: instead of preaching to Brazilians himself, Fletcher chose to prepare the ground for other missionaries. To this end he became friends with several members of the Brazilian elite, including Emperor Dom Pedro II. Through these friendships, he managed to influence legislation favorable to the acceptance of Protestantism in Brazil.

Although Fletcher anticipated and aided missionaries who would work directly on the conversion of Brazilians to Protestantism, his relationship with these same missionaries was not always peaceful. Some of the missionaries who succeeded Fletcher were suspicious of him because of his contacts with Brazilian politicians. It is true that Fletcher had an agenda not always identical to that of other missionaries: while others wished to focus only on the conversion of Brazilians, he understood that Protestantism and liberalism were closely linked, and that the implementation of the first in Brazil would lead to the progress propelled by the second. For this very reason, Fletcher had no problem engaging in activities that at first glance would seem unrelated to purely evangelistic work. He promoted, for example, the immigration of Americans to Brazil, the establishment of shipping lines linking the two countries, the abolition of slavery in Brazil, and commercial freedom.

James Cooley Fletcher is generally little remembered by Brazilian Protestants, although he contributed decisively to the end of the Roman Catholic monopoly in the country. He is also little remembered by historians, but this should not be so. Fletcher was one of the people who contributed most to the strengthening of religious freedom in Brazil, and also to a combination of religious, political, and economic beliefs. It was precisely because of his religious beliefs that he believed in the political and economic power of liberalism to transform any country, including Brazil.

A Tax is Not a Price

According to The Economist, the latest US federal budget includes incentives for “congestion pricing” of roads.

Ostensibly, this is about reducing congestion. But some municipalities like the idea of charging for roads because it represents a new revenue stream. This creates an incentive to charge a price above cost. When a firm does this, we call it a “monopoly price.”

But when a government monopoly forces you to pay a fee to use a good or service, do not call it a price. It is a fee that a government collects by fiat. In other words, it is a tax.

A price is a voluntary exchange of money for a good or service. The emphasis on voluntary is important, because it is this aspect of the price that enables economic calculation of what people really want. Even a free market “monopolist” (however unlikely or conceptually vague it may be) engages in voluntary exchange.

On the other hand, a bureaucrat “playing market” by imposing fees on government-controlled goods and services will not have the same results as a market process. For starters, unlike a person making decisions on their own behalf, a government bureaucrat has to guess at costs. Under a voluntary system, a cost is the highest valued good or service you voluntarily give up in order to attain a goal. But the bureaucrat is dealing with other people’s money.

To “objectively” determine costs, in order to set “fair” prices, is a chimera. In the words of Ludwig von Mises, “[a] government can no more determine prices than a goose can lay hen’s eggs.”

On Borjas, Data and More Data

I see my craft as an economic historian as a dual mission. The first part is to answer historical questions by using economic theory (and, in the process, to enliven economic theory through the use of history). The second relates to my obsessive-compulsive nature, which can be observed in how much attention and care I give to getting the data right. My co-authors have often observed me “freaking out” over a possible improvement in data quality or being plagued by doubts over whether or not I had gone “one assumption too far” (a pun on A Bridge Too Far). Sometimes, I wish more economists would follow my historian-like freakouts over data quality. Why?

Because of this!

In that paper, Michael Clemens (whom I secretly admire – not so secretly now that I have written it on a blog) criticizes the recent paper by George Borjas showing a negative effect of immigration on wages for workers without a high school degree. Using the famous Mariel boatlift of 1980, Clemens basically shows that, at the same time as the boatlift, changes at the US Census Bureau added more black workers without high school degrees to the survey. This previously underrepresented group surged in importance within the survey data. Since that group had lower wages than the average of the wider group of workers without high school degrees, a composition effect was at play that made wages appear to fall. A composition effect, however, is a bias: it caused an artificial drop in measured wages, and this drove the results produced by Borjas (and biased downward the conclusion reached by David Card in his original paper, to which Borjas was replying).
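The composition effect here is pure arithmetic and easy to simulate. A minimal sketch (my own toy numbers, not the actual survey or Mariel data) shows how the measured average wage can fall even though no individual’s wage changes, simply because a lower-wage subgroup’s share of the sample grows:

```python
# Toy illustration of a composition effect; all numbers are hypothetical.
# Two subgroups of workers without high school degrees: each subgroup's
# mean wage is held constant, but the survey's mix of subgroups shifts.

def sample_mean_wage(wage_low, share_low, wage_high):
    """Sample-wide mean wage given subgroup means and the low-wage group's share."""
    return share_low * wage_low + (1 - share_low) * wage_high

WAGE_LOW = 8.0    # hypothetical mean wage of the underrepresented subgroup
WAGE_HIGH = 10.0  # hypothetical mean wage of everyone else

# Before the survey redesign the low-wage subgroup is 10% of the sample;
# afterwards it is 30%. No individual wage moves at all.
before = sample_mean_wage(WAGE_LOW, 0.10, WAGE_HIGH)
after = sample_mean_wage(WAGE_LOW, 0.30, WAGE_HIGH)

print(before, after)  # the measured average falls from about 9.8 to about 9.4
```

The apparent decline is entirely an artifact of who is in the sample, which is why knowing when and how observations entered the data matters as much as the choice of estimator.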

This is a cautionary tale about the limits of econometrics. After all, a regression is only as good as the data it uses and its fit to the question it seeks to answer. Sometimes, simple ordinary least squares (OLS) is an excellent tool. When the question is broad and/or the data is excellent, OLS can be both necessary and sufficient for a viable answer. However, the narrower the question (e.g., is there an effect of immigration only on unskilled, low-education workers?), the better the method has to be. The problem is that better methods often require better data as well. To obtain the latter, one must know the details of a data source. This is why I am nuts about data accuracy. Even small things – like a shift in the representation of blacks in survey data – matter in these cases. Otherwise, you end up with your results being reversed by very minor changes (see this paper in the Journal of Economic Methodology for examples).

This is why I freak out over data. Maybe I can make two suggestions about sharing my freak-outs.

The first is to prefer a skewed ratio of data quality to advanced methods (i.e., simple methods with crazy-good data). This reduces the chances of being criticized for relying on weak assumptions. The second is to take a leaf out of the historians’ book. While historians are often averse to advanced data techniques (I remember a case where I had to explain panel data regressions to historians, which ended terribly for me), they are very respectful of data sources. I have seen historians nurture datasets for years before being willing to present them. When published, those datasets generally stand up to scrutiny because of the extensive wealth of detail compiled.

That’s it folks.


From the Comments: Naval Power and Trade

This is an extremely interesting point: measuring the worth of fighting pirates and guerre de course seems difficult, but the effort appears completely worthwhile. Strangely, just before reading this post, I finished the book To Rule the Waves by Arthur Herman, which asserts that the rise of large-scale trade went hand in hand with the growth of British naval strength, and points very specifically to the 18th and 19th centuries. On page 402, he asserts that it was only naval protection that enabled British trade to grow considerably during the Napoleonic Wars (over 11,000 British merchantmen were captured by the French from 1793 to 1815, and far more would have been but for the British blockades and convoy protection). How much can one measure the cost-to-yield of maintaining peaceful trade against such depredation?

Herman also argues that naval research and technology drove the development of far better seagoing technologies, without which large-scale merchant ventures would have had far lower yields (perhaps the most famous example is the Longitude Prize), and that the demand for iron and ship production was a major driver of the early Industrial Revolution. While I think both of these arguments are vulnerable to crowding-out objections, it seems to me that there were nuanced interconnections between technology, trade, and naval power, each providing positive feedback to the others. The very large investment made by the British East India Company in its merchant marine in this very period offers a parallel in which private interests made similar investments in protecting sea trade routes, suggesting a probable positive return on investment.

I am glad to see that you have recognized that naval production was almost always based on the relative strengths of navies. The huge decommissioning trend of the mid-19th century in Britain was exceeded by that of its enemies and rivals (the Dutch had been weak since the late 1600s, the French were exhausted completely, and the Spanish and Portuguese were on a long decline worsened by French occupation). However, there is one major aspect to consider in examining naval strength longitudinally: the complete revolution in ship technology. Steam, iron plating, and amazing advances in artillery picked up hugely after 1815, and the British navy in the Crimean War would have been unrecognizable to Nelson. I am not sure how this would affect your analysis, because navies became simultaneously more expensive and more effective, and GDP was exploding fast enough to support such high-tech advances without bankrupting the Brits. I am sure this is not an original problem, but I am interested in seeing how economic historians can control for such changes.

Good luck on this paper, it seems like an extremely useful examination with a lot of interesting complications and a fundamentally important commentary on the balance between maintaining law and allowing market determination of resource distribution.

This is from my fellow Notewriter Kevin on another fellow Notewriter’s (Vincent’s) recent post about shipping and imperial navies.

A short note on Brazil’s present political predicament

This Wednesday O Globo, one of the most widely read newspapers in Brazil, leaked information obtained by the Federal Police implicating president Michel Temer and Senator Aécio Neves in a corruption scandal. Temer was recorded supporting a bribe to former congressman Eduardo Cunha, now under arrest, so that Cunha would not give further information to the police. Aécio, president of the PSDB (one of the main political parties in Brazil), was recorded asking for a bribe from a businessman from JBS, a company in the food industry. The recordings were authorized by the judiciary and are part of Operation Lava Jato.

In the last few years Operation Lava Jato, commanded by Judge Sérgio Moro and inspired by the Italian Operation Clean Hands, brought to justice some of the most important politicians in Brazil, including former president Luiz Inácio Lula da Silva. However, supporters of president Lula, president Dilma, and their political party (PT) complained that Moro and his team were politically biased, going after politicians from the left, especially PT, and never from the right – especially PSDB. PSDB is not actually a right-wing party, if we count as right wing only conservatives and libertarians. PSDB, as its name implies, is a social democratic party, i.e., a left-wing one. However, since the late 1980s and especially the mid-1990s, PSDB has been the main political adversary of PT, creating a complicated scenario that PT usually exploits to its own benefit. In any case, it is clear now (although hardcore Lula supporters will not see this) that Operation Lava Jato is simply going after corrupt politicians, regardless of their political parties or ideologies.

With president Michel Temer directly implicated in trying to stop Operation Lava Jato, his government, which already lacked general public support, hangs by a thread. Maybe Temer will resign. Another possibility is that the Congress will start an impeachment process, as happened with Dilma Rousseff just a year ago. Either way, the Congress will have to call a new presidential election, albeit an indirect one: the Congress itself will elect a new president, and virtually anyone with political rights in Brazil can be a candidate. This new president would govern only until next year, completing the term started by Dilma Rousseff in 2014. There is also another possibility on the horizon: the presidential ticket that brought both Dilma Rousseff and Michel Temer to Brasília is under investigation, and it is possible that next June the electoral courts will remove Temer from office.

Politicians from the left, especially REDE and PSOL, want a new presidential election with a popular vote. In case Temer simply resigns or is impeached, this would require an amendment to the already tremendously amended Brazilian constitution. This new election might benefit Marina Silva, the likely candidate for REDE and a contender in the 2010 and 2014 presidential elections. Without a solid candidate of its own, it is possible that PSOL will support Marina, or at least try to share a ticket with her. A new presidential election with a popular vote could also benefit Lula, who is still free but under investigation by Moro and his team. Few people doubt that Lula will be in jail very soon, unless he escapes to the presidential palace, where he would enjoy the special forum reserved for sitting presidents.

Temer has already said publicly that he will not resign. Although corrupt, as is now clear, Temer was supporting somewhat pro-market reforms in Brazil. In his current political predicament it is unlikely that he will be able to carry out any reform. The best outcome for Brazil is for Temer to resign as soon as possible and for the Congress to elect, just as quickly, a new president: someone with few political connections but able to run the government smoothly until next year. Unfortunately, any free market reform would have to wait, but this would also give libertarian, classical liberal and conservative groups time to build support for free market ideas among voters before the election. A new presidential election with a popular vote would harm everyone: it would be the burial of democratic institutions in Brazil. Brazil needs to show the world that it has institutions that are respected, and that people can hold on to in times of trouble, when the politicians behave as politicians do.

Can we trust US interwar inequality figures?

This is a question that Phil Magness and I have been asking for some time, and we have now assembled our thoughts and measures in the first of a series of papers. In this paper, we take issue with the quality of the measurements extracted from tax records during the interwar years (1918 to 1941).

More precisely, we point out that tax rates at the federal level fluctuated wildly and stood at relatively high levels. Since most of our inequality measures are drawn from the federal tax data contained in the Statistics of Income, this is problematic. Indeed, high tax rates might deter honest reporting, while rapidly changing rates will affect reporting behavior (causing artificial variations in the measure of market income). As such, both the level and the trend of inequality might be off. That, in very simple words, is our concern.
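The mechanism at work here can be illustrated with a toy simulation. The sketch below is purely stylized and not drawn from the paper: the income distribution, the 50,000-dollar reporting threshold, and the evasion response to the tax rate are all illustrative assumptions. It shows how the same true distribution can yield different measured top-income shares when reporting behavior varies with the tax rate.

```python
import random

def top_share(incomes, pct=0.01):
    """Share of total income held by the top `pct` of earners."""
    s = sorted(incomes, reverse=True)
    k = max(1, int(len(s) * pct))
    return sum(s[:k]) / sum(s)

def reported(incomes, tax_rate, evasion=0.5):
    """Stylized reporting rule (an assumption, not an estimate):
    high earners under-report more as the tax rate rises."""
    return [y * (1 - evasion * tax_rate) if y > 50_000 else y
            for y in incomes]

random.seed(42)
# Hypothetical income distribution with a heavy right tail.
true_incomes = [random.lognormvariate(10, 1) for _ in range(100_000)]

true_top = top_share(true_incomes)
measured_low = top_share(reported(true_incomes, tax_rate=0.07))   # Wisconsin-like rate
measured_high = top_share(reported(true_incomes, tax_rate=0.70))  # federal-like peak rate

print(f"true top-1% share:        {true_top:.3f}")
print(f"measured at  7% tax rate: {measured_low:.3f}")
print(f"measured at 70% tax rate: {measured_high:.3f}")
```

Under these assumptions, the tax data measured under the high, variable federal rates understate the top share more than data measured under a low, stable rate, and any change in rates shows up as a spurious change in measured inequality even when the true distribution is constant.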

To assess whether or not we are worrying over nothing, we searched for different sources against which to test the robustness of the inequality estimates based on the federal tax data. We found what we were looking for in Wisconsin, whose state income tax rates were much lower (never above 7%) and less variable than those at the federal level. As such, we found the perfect dataset for checking whether there are measurement problems in the data itself (through a varying selection bias).

From the Wisconsin data, we find that there are good reasons to be skeptical of the existing inequality measures based on federal tax data. The comparison of the IRS data for Wisconsin with the data from the state income tax shows a different pattern of evolution and a different level (especially when deductions are accounted for). First, the level is always lower in the WTC (Wisconsin Tax Commission) data. Second, the trend differs for the 1930s.

[Table 1: IRS vs. Wisconsin Tax Commission inequality estimates]

I am not sure what this means for the true level of inequality in the period. However, it suggests that we ought to be careful with the existing estimates if two data sources of a similar nature (tax data) with arguably minor conceptual differences (low and stable tax rates) tell dramatically different stories. Maybe it’s time to try to further improve the pre-1945 series on inequality.

A short note on the Trump-Russia scandal

This whole thing is much ado about nothing.

Intelligence sharing with regard to the global war on ~~Islamic peoples~~ terrorism has been an ongoing affair for numerous states since the collapse of socialism in 1989. Russia, the US, Europe, Israel, states in the Near East, and China have all shared intelligence in this regard.

Here’s what’s happening in the US: the American Left needs a foreign boogeyman to harp on the Right. The Right uses Muslims, immigrants, and China to harp on the Left, but the Left counters with charges of racism and xenophobia. The Left still needs a foreign boogeyman (voters love foreign scapegoats) and Russia’s political class is white, conservative, and Christian. Traditionally (at least in my time) the Left’s foreign boogeyman has been Israel and its political class (white and conservative but not Christian), but populism in Russia has produced a product that the American left just couldn’t resist.

This is a boring scandal.

(Reminder: I’m not a Trump supporter.)

Empire effects : the case of shipping

I have been trying, for some time now, to circle an issue that we can consider to be a cousin of the emerging “state capacity” literature (see Mark Koyama’s amazing summary here). This cousin is the literature on “empire effects” (here and here for examples).

The core of the “empire effect” claim is that empires provide global order which we can consider as a public good. A colorful image would be the British Navy roaming the seas in the 19th century which meant increased protection for trade. This is why it is a parent of the state capacity argument in the sense that the latter concept refers (broadly) to the ability of a state to administer the realm within its boundaries. The empire effect is merely the extension of these boundaries.

I still have reservations about the nuances/limitations of state capacity as an argument for explaining economic growth. After all, the true question is not how states consolidate, but how they create constraints that keep rulers from abusing the consolidated powers (which in turn creates room for growth). But it is easy to cast heavy doubt on its parent: the empire effect.

This is what I am trying to do in a recent paper on the effects of empire on shipping productivity between 1760 and 1860.

Shipping is one of the industries most likely to be affected by large empires, positively or negatively. Indeed, the argument for empire effects is that they protect trade. As such, the British navy in the 19th century protected trade and probably helped the shipping industry become more productive. But achieving empire comes at a cost. For example, the British navy needed to grow very large, and it had to employ inputs from the private sector, thus crowding it out. In a way, if a security effect from empire emerged as a benefit, there must also have been a cost. The cost we wish to highlight is the crowding-out one.

In the paper (written with Jari Eloranta of Appalachian State University and Vadim Kufenko of the University of Hohenheim), I use the productivity of the Canadian shipping industry, which was protected by the British Navy, to argue that the security effect from a large navy was smaller than the crowding-out from high levels of expenditure on the navy.

While it is still a working paper that we are trying to expand and improve, our point is that what allowed the productivity of the Canadian shipping industry (which was protected by Britain) to soar was that the British Navy grew smaller in absolute terms. While the growth of the relative strength of the British Navy did bolster productivity in some of our tests, the fact that the navy was much smaller was the “thing in the mix that did the trick”. In other words, the empire effect is just the effect of a ramping-down in military spending being presented as something other than what it truly is (at least partly).

That’s our core point. We are still trying to improve it and (as such) comments are welcome.

BC’s weekend reads

  1. The demise of ISIS is greatly exaggerated. Good analysis, but Whiteside is still asking the wrong question
  2. 10% of DR Congo’s landmass is dedicated to national parks and other protected environmental areas. Guess how well they’re protected. Privatization might not work here, though. Why not go through traditional “tribal” property rights first, and then, eventually, mix up the customary land rights with private property rights?
  3. Has Stephen Walt been reading NOL? This great essay suggests he has…
  4. Russian politics. Authoritarian regimes have factions, too