Social Care: Who should pay, other than those who benefit?

Guest post by Dr Wesley Key

When Boris Johnson became Prime Minister in July 2019, he promised from outside 10 Downing Street that “we will fix the crisis in social care once and for all.” With his premiership dominated first by Brexit and then by Covid-19, little has since been heard about how this may be achieved, but reports in July 2021 suggest that a rise in the basic rate of income tax or in National Insurance Contributions (NICs) is being considered in order to increase Social Care funding for England. Such a move would break a Conservative manifesto pledge and would also be highly contentious in terms of intergenerational fairness.

Other potential options put forward to help fund Social Care (and other public services) in England have included: an extra tax on people aged 40-plus (similar to the system in Japan); levying NICs on private pension income; reducing tax relief on pension contributions to a maximum of 20% for higher rate taxpayers; raising the Upper Earnings Limit (UEL) for NICs; and making workers over state retirement pension age liable to pay NICs at the same rate (or a reduced rate) as ‘working age’ employees. Work by the IFS in 2018 implied that NICs paid by workers over state pension age could raise £1-1.5 billion annually, before accounting for the behavioural changes that such a policy would inevitably lead to. Regardless of the amount of revenue raised, such a system would be morally justifiable on two grounds: older workers would lose their current privileged position within the personal taxation system; and older people who are sufficiently healthy to carry on in paid employment would help to fund the social care needs of others in their birth cohort who are in poorer health (perhaps due to working in more physically demanding jobs earlier in life).

Whilst this reform would return the national insurance system closer to its Beveridgean roots (Beveridge did not intend NICs to be solely a way of accumulating state pension entitlement), it would be insufficient to properly fund a Social Care system that included a lifetime cap on care costs along the lines proposed in the Dilnot Report (Commission on Funding of Care and Support, 2011). It is therefore also recommended that the UEL for National Insurance contributions is significantly increased, along with the NICs rate for earnings above the UEL, as it is iniquitous that, in 2021-22, earnings over £967 a week are liable to just 2% NICs, compared to 12% NICs paid on earnings of £184-967 a week.
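The 2021-22 bands just cited can be made concrete with a small sketch (the thresholds and rates are the figures quoted above; the bands are simplified, annualisation and employer NICs are ignored, and the function name is my own):

```python
# Illustrative weekly employee NICs under the 2021-22 bands cited above:
# 0% below £184, 12% on £184-£967, and just 2% on earnings above £967.
def weekly_nics(earnings: float) -> float:
    lower, upper = 184.0, 967.0
    main = max(0.0, min(earnings, upper) - lower) * 0.12       # 12% band
    additional = max(0.0, earnings - upper) * 0.02             # 2% band
    return main + additional

# A worker on £500/week pays 12% of £316:
print(round(weekly_nics(500), 2))   # 37.92
# A worker on £2,000/week pays 12% of £783 plus only 2% of £1,033:
print(round(weekly_nics(2000), 2))  # 114.62
```

The second example shows the iniquity complained of above: quadrupling earnings roughly triples, rather than quadruples, the contribution, because most of the higher earner's pay attracts only the 2% rate.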

Reforming National Insurance in these ways could help to better fund England’s creaking Social Care system, potentially enabling a lifetime care costs cap to be introduced, without raising taxes on employees aged under 66 who earn less than £50,000 per year. Such an approach would be compatible with Boris Johnson’s 2019 pledge to ‘fix’ social care and with retaining his expanded support base among lower paid working age people across the Midlands and Northern England. It would also be justified in terms of intergenerational fairness in a period when government spending on pensioners has risen by more than spending on younger age groups.

John Rawls at 100

Neoliberal Social Justice available April 2021

John Rawls, the most influential political philosopher of the 20th century, was born 100 years ago today. He died one year before I first read A Theory of Justice as part of my undergraduate degree in philosophy at University College London. This year, Edward Elgar publishes Neoliberal Social Justice: Rawls Unveiled, my book which updates Rawls’ approach to assessing social institutions in light of contemporary economic thought.

Mike Otsuka (now at the LSE) introduced us first to the work of Robert Nozick and then to Rawls, the reverse of what I imagine is normally the case in an introductory political philosophy course. Most people ultimately found Rawls’ approach the more attractive, whereas I was drawn to Nozick’s insistence on starting strictly from the ethical claims of individuals. I wondered why something calling itself ‘the state’ should have rights to coerce beyond any other actor in civil society.

Years of working in public policy and studying political economy made me recognise a distinctive value for impersonal institutions with abstract rules. Indeed, I now think the concept of equal individual liberty is premised on the existence of such institutions. Although the rule of law could theoretically emerge absent a state, states are the only institutions that have been able to generate it so far. Political philosophy cannot be broken down into applied ethics in the way Nozick proposed.

Some classical liberals and market anarchists are increasingly impatient with the Rawlsian paradigm. Michael Huemer, for example, argues that Rawls misunderstands basic issues with probability when proposing that social institutions focus on maximising the condition of the least advantaged. Huemer argues that Rawls ultimately offers no reason to pick justice as fairness over utilitarianism, the very theory it was directed against.

I think these criticisms are valid against the blunt assessments of real-world inequalities that some Rawlsians are apt to make. But I do not think Rawls himself, nor his theory when read in context, made these elementary errors. Rawls’ principles of justice apply to the basic structure of social institutions rather than the resulting pattern of social resources as such. Moreover, the primary goods that Rawls takes to be relevant for assessing social institutions are essentially public goods. It makes sense to guarantee, for example, basic civil liberties to all on an equal basis even if it turns out to be costly. I can think of two reasons for this:

  1. In a society not facing acute scarcity, you would not want to risk placing yourself in a social position where your civil liberties could be denied even if it was relatively unlikely.
  2. Living in a society where basic liberties are denied to others is going to cause problems for everyone, whether through regime instability or fraught social and economic relationships that are based not on genuine mutual advantage but on coercion from discretionary powers.

To be fair to utilitarians, J.S. Mill went in this direction, although one had to squint to see how it fit into a utilitarian calculus. But if Rawls was ultimately defending a more principled approach to social relationships using the tools of expediency, I see that as a valuable project.

So, I think that the Rawlsian approach is still a fruitful way to evaluate the distinctive problem of political order. His theory offers the resources to resist not just utopian libertarian rights theorists, but also socialists and egalitarians who similarly fail to account for the distinctive role of political institutions for resolving problems of collective action. Where I think Rawls erred when endorsing what amounts to a socialist institutional framework is on his interpretation of social theory. Rawls argued that people behave pretty selfishly in market interactions but could readily pursue the public good when engaged in everyday politics. I argue otherwise. Here is a snippet from Neoliberal Social Justice (pp. 96-97) where I make the case for including a more consistently realistic account of human motivation within his framework:

Problems of justice are not purely about assurance amongst reasonable people or identifying anti-social persons. Instead, we must consider the anti-social person within ourselves: the appetitive, biased, narrow-minded, prejudicial self that drives a great deal of our every-day thoughts and interactions (Cowen, 2018). If we are to make our realistic selves work with each other to produce a just outcome, then we should affirm institutions that allow these beings, not just the wholesome beings of our comfortable self-perception, to cooperate. We have to be alive to the fact that we are dealing with agents who are apt to affirm a scheme as fair and just at one point (and even sincerely mean it), then forgetfully, carelessly, negligently or deliberately break the terms of that scheme at another point if they have an opportunity and reason enough to do so. Addressing ourselves as citizens in this morally imperfect state, as opposed to benighted people outside a charmed circle of reasonableness, is helpful. It means we can now include such considerations within public reason. The constraints of rules emerging from a constitutional stage may chafe at other stages of civil interaction. Nevertheless, they may be fully publicly justified.

In memory of Gerald Gaus (1952-2020)

I was saddened to hear that Gerald Gaus, the world-renowned liberal philosopher, died yesterday. Gaus was a critical developer of a public reason approach to classical liberalism, and powerful exponent of the interdisciplinary research agenda of Philosophy, Politics and Economics. While we met in person only occasionally, he was a significant influence on my approach to understanding the liberal tradition.

His perspective was deeply pluralist. One observation that really struck me from The Order of Public Reason (and that I still grapple with today) was that a society could function more effectively (in fact, might only function at all) when citizens have a range of moral attitudes towards things like rule-following, and especially eagerness to punish rule-breakers. For society to progress, you may need both conservative-inclined individuals to enforce moral norms and liberal-minded people to challenge them when circumstances prompt reform.

He applied this idea of strength through moral diversity to the political system too. On Gaus’s account, one of the strengths of liberal democracy is its ability to shift from conservative to liberal, and left to right, through competitive elections. Social progress cannot follow a straight and obvious path but requires, at different moments, experimentation, innovation, reversal and consolidation. Democracy helps select the dominant mode from a diversity of perspectives.

This depth of pluralism is counter-intuitive within the discipline of normative political theory, which increasingly treats only a narrow set of ideological commitments as acceptable and rejects even fairly minor variations in social morality as possessing little or no value. Indeed, the last time I saw Gaus was early this year when he gave an evening talk at the Britain and Ireland Association for Political Thought conference. He presented a model for seeking political compromises among very different moral ideals. His commitment to treating the whole political spectrum as worthy of engagement drew a few heckles. The prospect of engaging with Trump supporters, for example, evidently nauseated some of the audience. Gaus was the very model of the liberal interlocutor, ignoring the hostility and responding with grace, civility and ideas for moving forward productively.

His approach to scholarship and discussion embodied his commitment to liberal toleration and the fusion of ethical horizons. That’s how he will be remembered.

New ‘summer eating allowance’ hard to stomach for low earning taxpayers

[This is a guest post by Dr Wesley Key, Senior Lecturer in Social Policy at the University of Lincoln.]

The announcement on 8th July 2020 by Chancellor Rishi Sunak that the government will refund 50% of the cost of meals out during Mondays-Wednesdays in August 2020, at an estimated cost to the taxpayer of £500m, will, for many reasons, be hard to stomach for low paid working age taxpayers who cannot afford to eat out themselves. For such people, paying the rent, heating their homes and feeding their children will often leave little or nothing left over for dining out.

This new ‘summer eating allowance’ is likely to disproportionately benefit affluent older people with high levels of disposable income, whose custom typically helps to sustain many eating outlets during the mid-day/afternoon periods of the working week. The very same affluent older people who have qualified, with no means test, for free prescriptions aged 60-plus, for a free TV licence if a household member is aged 75-plus (up to August 2020), for a Winter Fuel Payment if a household member has reached state pension age, and for free local bus travel if they have reached women’s state pension age, regardless of their gender. The very same older people who benefited from the seven-fold rise in UK private pension income during 1977-2016.

Low paid and benefit dependent parents may also wonder why Chancellor Sunak is splashing out such a large sum of taxpayers’ cash, given that it took the efforts of Manchester United footballer Marcus Rashford to change government policy on 16th June 2020 to ensure that children eligible for Free School Meals continued to receive the relevant food vouchers during the elongated summer vacation period. This ‘COVID Summer Food Fund’ was eventually set up at an estimated cost of £120m, less than a quarter of the cost of Sunak’s ‘Eat out to help out’ scheme, a.k.a. the ‘summer eating allowance’.

In the longer term, when the reality of tax rises and/or spending cuts to pay for the COVID-19 bailout begins to bite, the government needs to focus on intergenerational fairness and ensure that well off pensioners pay their share of the nation’s debt. It is time that a government made non-poor over-60s purchase their medication via a Prescription Prepayment Certificate (PPC), which in 2020-21 costs younger adults £29.65 for 3 months or £105.90 for 12 months, sums well within the reach of people in receipt of private and state pension payments. It is also time to make employees aged 65-plus pay a tax at the same rate as the employee National Insurance Contributions paid by younger workers, so that older workers fully contribute to the funding of the public services that they use more extensively than their younger colleagues. Such moves to cut the benefits received by, and increase the tax taken from, healthy, active people in their 60s and 70s would help to fund the social care services largely used by people aged 80-plus, who are no longer able to undertake paid work and should face lower user charges for the social care that they require to ensure a degree of dignity and independence in old age.

BHL is dead, long live BHL?

The Bleeding Heart Libertarians blog has ended due, I think, as Henry Farrell intuits, to creative differences between the founders. Jason Brennan, who was recently making by far the most contributions to the blog, has joined a new blog set up by Jessica Flanigan, 200-proof liberals. [Corrected to reflect who set up what]

I liked the orientation of BHL but I never liked the label. My way into libertarianism was noticing that the state insisted on locking people up for taking or selling drugs and putting gay men in the dock to justify their private sexual interests. I did not think you should trust such a violent entity with something as important as poverty alleviation. There was nothing heartless about my state skepticism. The label ‘BHL’ on some readings suggests there was.

When I realised things were a little more complicated and counter-intuitive when it came to political authority, my ideology shifted to classical liberalism. I now believe that welfare provision can (and should) be disentangled from the more coercive aspects of the state. This is a case of my theorising getting a head, rather than a heart. Libertarians need not lack heart. If everyone naturally respected each other’s rights and were generous with those less fortunate than themselves, you would have as much of an ideal society as any liberal egalitarian could offer. In reality, what purist libertarians have to offer is often not going to work as well as various statist alternatives.

One of the divisions within BHL was whether it was worth engaging sympathetically with John Rawls’ theory of justice. Both Brennan and Jess Flanigan have written pointed criticisms of Rawls’ framework. They argue that Rawlsian distinctions between basic liberties (to be constitutionally enshrined) and other liberties that are inessential for liberal political life fail. Flanigan argues that all liberties could be essential depending on the specific life plans that people may have, so the distinction between basic and non-basic fails. Brennan argues that Rawls’ own ‘moral powers’ tests for what makes a liberty basic are so rigorous that highly non-liberal regimes could pass them, at least in principle.

I disagree. Engagement with Rawls’ framework among classical liberals still has intellectual pay-offs in terms of discovering what a free and fair society looks like. A Rawlsian case for liberal democracy and capitalism follows from some logical extrapolations of Rawls’ principles alongside some updated empirical evidence. The case can be made according to Rawls’ notion of public reason.

It has proven a little difficult so far to get contemporary Rawlsians to take this reconciliation between right and left liberalisms seriously. When Tomasi wrote in Free Market Fairness about libertarians and liberals being stuck in two opposing camps, he was not exaggerating! But I do not think that reflects a flaw in Rawls’ framework so much as a neglect of the economic theory with which it can be developed. Most contemporary Rawlsians are more engaged in the philosophy of Rawls than in the political economy that motivates some of his claims about regime types. But Rawls was pretty interdisciplinary and the addition of refined economic theory is compatible with his logic and framework.

Again: Never reason from a fatality change

The future isn’t written yet

Last week Richard Epstein predicted around 500 fatalities in the United States (I originally misread his estimate to be 50,000 for the US, not the whole world). His estimate was tragically falsified within days and he has now revised it to 5,000. I still think that’s optimistic, but I am hopeful for fewer than 50,000 deaths in the United States given the social distancing measures currently in place.

Today, several US peers have become excited about a Daily Wire article on comments by a British epidemiologist, Neil Ferguson. He has lowered his UK projections from 500,000 to 20,000 Coronavirus fatalities. The article omits the context of the change. The original New Scientist article (from which the Daily Wire is derivative with little original reporting) explains that the new fatality rate is partly due to a shift in our understanding of existing infections, but also a result of the social distancing measures introduced.

The simple point is:

Policy interventions will change infection rates, alter future stresses on the health system, and (when they work) lower future projections of fatalities. When projections are lower, it is not necessarily because the Coronavirus is intrinsically less deadly than believed but because appropriate responses have made it less deadly.

Life

No matter how old, frail or vulnerable it may be, a life isn’t something to take or risk at another’s discretion. Nor does a victim’s frailty undermine culpability when they die as a result of negligence. The common law ‘eggshell skull’ rule reflects this moral principle.

During the Coronavirus pandemic, some erstwhile defenders of the famous Non-Aggression Principle (NAP) appear to have forgotten that natural rights are conceived to protect life as well as liberty and property. They seem to think that the liberties we ordinarily enjoy have priority over the right to life of others. The environment has changed and, for the time being, many activities that we previously knew to be safe for others are not. They are not part of our set of liberties until a reformed set of rules, norms and habits establishes a sufficiently hygienic public environment. To say that bans on public gatherings violate natural rights a priori is as untenable as G.A. Cohen’s claim that a prohibition on walking onto a train without a valid ticket is a violation of one’s freedom.

The clue for anarcho-capitalist state-sceptics that this is a genuine shift in social priorities is that even organized criminal gangs are willing to enforce social distancing. You do not have to believe that the state itself is legitimate to see that the need for social distancing is sufficiently morally compelling that it can be enforced absent free agreement, just as one does not need free agreement to exercise a right to self-defense.

Not every restriction is going to be justified, although erring on the restrictive side makes sense while uncertainty about the spread of infection persists. Ultimately, restrictions have to balance genuine costs with plausible benefits. But rejecting restrictions on a priori grounds does not cohere with libertarian principles. Right now, our absolute liberties extend to the right to be alone. Everything else must be negotiated under uncertainty. Someone else’s life, even two-weeks or so in the future, is a valid side-constraint on liberty. People can rightfully be made to stay at home if they are fortunate enough to have one. When people have to travel out of necessity, they can be temporarily exempted, compensated or offered an alternative reasonable means of satisfying their immediate needs.

Pandemic responses are beyond Evidence-based Medicine

John Ioannidis, a professor of medicine at Stanford University, fears that the draconian measures to enforce social distancing across Europe and the United States could end up causing more harm than the pandemic itself. He believes that governments are acting on exaggerated claims and incomplete data and that a priority must be getting a more representative sample of populations currently suffering coronavirus infections. I agree additional data would be enormously valuable but, following Saloni Dattani, I think we have more warrant for strong measures than Ioannidis implies.

Like Ioannidis’ Stanford colleague Richard Epstein, I agree that estimates of a relatively small overall fatality rate are plausible projections for most of the developed world and especially the United States. Unlike Epstein, I think those estimates are conditional on the radical social distancing (and self-isolation) measures that are currently being pushed rather than something that can be assumed. I am not in a position to challenge Ioannidis’ understanding of epidemiology. Others have used his piece as an opportunity to test and defend the assumptions of the worst-case scenarios.

Nevertheless, I can highlight the epistemic assumptions underlying Ioannidis’ pessimism about social distancing interventions. Ioannidis is a famous proponent (and occasional critic) of Evidence-based Medicine (EBM). Although open to refinement, at its core EBM argues that strict experimental methods (especially randomized controlled trials) and systematic reviews of published experimental studies with sound protocols are required to provide firm evidence for the success of a medical intervention.

The EBM movement was born out of a deep concern of its founder, Archie Cochrane, that clinicians wasted scarce resources on treatments that were often actively harmful for patients. Cochrane was particularly concerned that doctors could be dazzled or manipulated into using a treatment based on some theorized mechanism that had not been subject to rigorous testing. Only randomized controlled trials supposedly prove that an intervention works because only they minimize the possibility of a biased result (where characteristics of a patient or treatment path other than the intervention itself have influenced the result).

So when Ioannidis looks for evidence that social distancing interventions work, he reaches for a Cochrane Review that emphasizes experimental studies over other research designs. As is often the case for a Cochrane review, many of the results point to uncertainty or relatively small effects from the existing literature. But is this because social distancing doesn’t work, or because RCTs are bad at measuring their effectiveness under pandemic circumstances (the circumstances where they might actually count)? The classic rejoinder to EBM proponents is that we know that parachutes can save lives but we can never subject them to RCT. Effective pandemic interventions could suffer similar problems.

Nancy Cartwright and I have argued that there are flaws in the methodology underlying EBM. A positive result for treatment against control in a randomized controlled trial shows you that an intervention worked in one place, at one time for one set of patients but not why and whether to expect it to work again in a different context. EBM proponents try to solve this problem by synthesizing the results of RCTs from many different contexts, often to derive some average effect size that makes a treatment expected to work overall or typically. The problem is that, without background knowledge of what determined the effect of an intervention, there is little warrant to be confident that this average effect will apply in new circumstances. Without understanding the mechanism of action, or what we call a theory of change, such inferences rely purely on induction.

The opposite problem is also present. An intervention that works for some specific people or in some specific circumstances might look unpromising when it is tested in a variety of cases where it does not work. It might not work ‘on average’. But that does not mean it is ineffective when the mechanism is fit to solve a particular problem such as a pandemic situation. Insistence on a narrow notion of evidence will mean missing these interventions in favor of ones that work marginally in a broad range of cases where the answer is not as important or relevant.
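The point about misleading averages can be made concrete with a deliberately invented example: an intervention whose mechanism only bites in a pandemic context can look unpromising ‘on average’ across the contexts in which it happens to have been trialled. All numbers below are hypothetical:

```python
# Hypothetical effect sizes (invented for illustration): the same
# intervention measured in three different contexts.
effects = {
    "routine care, low transmission": 0.0,
    "routine care, moderate transmission": 0.1,
    "pandemic, high transmission": 5.0,  # the mechanism fits the problem here
}

# A synthesis reporting one pooled number flattens the heterogeneity:
average = sum(effects.values()) / len(effects)
print(f"average effect across contexts: {average:.2f}")
print(f"effect where the mechanism applies: {effects['pandemic, high transmission']:.2f}")
```

A review that reports only the pooled average understates the intervention precisely in the circumstance where the answer matters most.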

Thus even high-quality experimental evidence needs to be combined with strong background scientific and social scientific knowledge established using a variety of research approaches. Sometimes an RCT is useful to clinch the case for a particular intervention. But sometimes other sources of information, especially when time is of the essence, can make the case more strongly than a putative RCT can.

In the case of pandemics, there are several reasons to hold back from making RCTs (and study designs that try to imitate them) decisive or required for testing social policy:

  1. There is no clear boundary between treatment and control groups since, by definition, an infectious disease can spread between and influence groups unless they are artificially segregated (rendering the experiment less useful for making broader inferences).
  2. The outcome of interest is not for an individual patient but the communal spread of a disease that is fatal to some. The worst-case outcome is not one death, but potentially very many deaths caused by the chain of infection. A marginal intervention at the individual level might be dramatically effective in terms of community outcomes.
  3. At least some people will behave differently, and be more willing to alter their conduct, during a widely publicized pandemic compared to hygienic interventions during ordinary times. Although this principle might be testable in different circumstances, the actual intervention won’t be known until it is tried in the reality of a pandemic.

This means that rather than narrowly focusing on evidence from EBM and behavioral psychology (or ‘nudge’), policymakers responding to pandemics must look to insights from political economy and social psychology, especially on how to shift norms towards greater hygiene and social distancing. In the absence of brighter ideas, traditional public health methods of clear guidance and occasionally enforced sanctions are having some effect.

What evidence do we have at the moment? Right now, there is an increasing body of defeasible knowledge of the mechanisms with which the Coronavirus spreads. Our knowledge of existing viruses with comparable characteristics indicates that effectively implemented social distancing is expected to slow its spread and that things like face masks might slow the spread when physical distancing isn’t possible.

We also have some country and city-level policy studies. We saw an exponential growth of cases in China before extreme measures brought the virus under control. We saw immediate quarantine and contact tracing of cases in Singapore and South Korea that was effective without further draconian measures but required excellent public health infrastructure.

We have now also seen what looks like exponential growth in Italy, followed by a lockdown that appears to have slowed the growth of cases though not yet deaths. Some commentators do not believe that Italy is a relevant case for forecasting other countries. Was exponential growth a normal feature of the virus, or something specific to Italy and its aging population that might not be repeated in other parts of Europe? This seems like an odd claim at this stage given China’s similar experience. The nature of case studies is that we do not know with certainty what all the factors are while they are in progress. We are about to learn more as some countries have chosen a more relaxed policy.

Is there an ‘evidence-based’ approach to fighting the Coronavirus? As it is so new: no. This means policymakers must rely on epistemic practices that are more defeasible than the scientific evidence that we are used to hearing. But that does not mean a default to light-touch intervention is prudent during a pandemic response. Instead, the approaches that use models with reasonable assumptions based on evidence from unfolding case-studies are the best we can do. Right now, I think, given my moral commitments, this suggests policymakers should err on the side of caution, physical distancing, and isolation while medical treatments are tested.

[slightly edited to distinguish my personal position from my epistemic standpoint]

Never reason from a fatality rate

 

Richard Epstein has produced several posts and a video interview arguing that the mainstream media is overreacting to the Coronavirus pandemic. Richard understands the potential seriousness of this situation and the proper role of government. He recognises the value of the Roman maxim Salus populi suprema lex esto – let the health of the people be the highest law. In public health emergencies, many moral and legal claims resulting from individual rights and contracts are vitiated, and some civil liberties suspended.

Nevertheless, along with Cass Sunstein, Richard claims that this particular emergency is likely to be overblown. His justification is based on infection and fatality data emerging from South Korea and Singapore, where the outbreak appears (currently) to be under control with only a relatively small proportion of the population infected. This was achieved without the country-wide lockdowns now being rolled out across Europe. Extrapolating from this experience, Richard suggests that the Coronavirus is not too contagious outside particular clusters of vulnerable individuals in situations like cruise ships and nursing homes.

This line of argument is vulnerable to the same criticism that one should never reason from a price change. The classic case of reasoning from a price change is reading oil prices as a measure of economic health. When oil prices drop, it could herald an economic boom or, paradoxically, a recession. If the price dropped because supply increased, say when OPEC fails to enforce a price floor, then that lower price should stimulate the rest of the economy as transport and travel become cheaper. But if the price drops because economic activity is already falling, and oil suppliers are struggling to sell at high prices, then the economy is heading towards a recession. The same measure can mean the opposite depending on the underlying mechanism.

The same logic applies to epidemics. The transmission rate is a combination of the (potentially changing) qualities of the virus and the social environment in which it spreads. The social environment is determined, among other things, by social distancing and contact tracing. Substantial changes in lifestyle can have initially marginal but, compounded day after day, very large impacts on the infection rate. Combined with the medium-term fixed capacity of existing health systems, those rates translate into the difference between 50,000 and 500,000 deaths. You cannot take relatively low fatality rates from a few specific cases and project them elsewhere without understanding what caused those rates in the first place.
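The compounding point can be made concrete with a toy calculation. The growth rates, starting case count and time horizon below are illustrative assumptions for the sake of arithmetic, not epidemiological estimates:

```python
# Toy illustration: modest differences in the daily growth rate of
# infections compound into enormous differences over two months.
def total_infected(daily_growth: float, days: int, initial: int = 100) -> int:
    """Cumulative infections under constant exponential growth."""
    return round(initial * daily_growth ** days)

# Hypothetical 25% daily growth vs 15% daily growth, from 100 cases:
no_distancing = total_infected(1.25, 60)    # roughly 65 million
with_distancing = total_infected(1.15, 60)  # roughly 440 thousand

# A ten-percentage-point change in the daily rate shifts the outcome
# by a factor of over a hundred after 60 days.
print(no_distancing, with_distancing)
```

The exact numbers are arbitrary; the point is only that a marginal daily change, sustained, dominates the outcome, which is why case counts cannot be projected without knowing what is happening to the social environment.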

Right now, we don’t know for sure if the infection is controllable in the long run. However, we do know that South Korea and Singapore have controlled the spread so far, and that both had systems in place to test, track and quarantine carriers of the virus. We also know that Italy, without such a system, has been overrun with serious cases and a tragic surge in deaths. And we know that China, having initially suppressed knowledge of the virus and delayed interventions to contain it, got the outbreak under control only through aggressive lockdowns.

So the case studies, for the moment, suggest social distancing and contact tracing can reduce cases if applied very early on. But more draconian measures are the only response if testing isn’t immediately available and contact tracing fails. Now is sadly not the time for half-measures or complacency.

I believe that Richard’s estimated fatality rates (fewer than 50,000 fatalities in the US) are ultimately plausible, but optimistic at this stage. Perversely, they are only plausible at all insofar as people project a much higher future fatality rate now. People must act with counter-intuitively strong measures before there is clear and obvious evidence that they are needed. As when steering a large ship, temporally distant dangers must prompt radical action now. We will be lucky if, in a few months’ time, we feel like we did too much. Richard believes people are more worried than is warranted right now. I think that is exactly how worried people need to be to adopt the kind of adaptive behaviours that Richard relies on to explain how the spread of infection will stabilise.

Joker: an evidence-based criminology review (spoilers)


Last Friday, Joker hit cinemas to much acclaim and some anxiety. Hot takes claim it glamorizes violence while the Slate pitch is that it’s boring. Having seen it over the weekend, I don’t see anything more transgressive about it than the various Batman films to which Joker is a sort of prequel, but it is more entertaining.

 

The film uses both narrative and moral ambiguity, driven by the theme of mental illness. We are not quite sure what is real and what is not, or what (if anything) is responsible for the catastrophic events that turn wannabe stand-up comedian Arthur Fleck into the Clown Prince of Crime. This ambiguity is in keeping with contemporary criminology. Many researchers now suspect that crimes typically result from multiple, incremental causes (little things going wrong) that together add up to sometimes catastrophic outcomes.


So with spoilers already skulking in the alleyways over the fold, let’s review some of Joker’s overlapping narrative alongside some theories of crime (some of which I draw from my forthcoming book chapter on evidence-based policing).

 


Atomistic? Moi?

I have written a brief paper entitled ‘Hayek: Postatomic Liberal’ intended for a collection on anti-rationalist thinkers. For the time being, the draft is available from SSRN and academia.edu. Here are a couple of snippets:

Hayek offers a way of fighting the monster of Rationalism while avoiding becoming an inscrutable monster oneself. The crucial move, and in this he follows Hume, is to recognize the non-rational origins of most social institutions, but to treat this neither as grounds for dismissing those institutions as unsound, nor as an excuse to retreat from reason altogether. Indeed, reason itself has non-rational, emergent origins but is nevertheless a marvelous feature of humanity. Anti-rationalist themes that appear throughout Hayek’s work include: an emphasis on learning by processes of discovery, trial and error, feedback and adaptation rather than knowing by abstract theorizing; and the notion that the internal processes by which we come to a particular belief or decision are more complex than either a scientific experimenter or we ourselves in introspection can know. We are always, on some level, a mystery even to ourselves…

Departing from Cartesian assumptions of atomistic individualism, this account can seem solipsistic. When we are in the mode of thinking of ourselves essentially as separate minds that relate to others through interactions in a material world, then it feels important that we share that world and are capable of clear communication about it and ourselves in order to share a genuine connection with others. Otherwise, we are each in our separate worlds of illusion. From a Hayekian skeptical standpoint, the mind’s eye can seem to be a narrow slit through which shadows of an external world make shallow, distorted impressions on a remote psyche. Fortunately, this is not the implication once we dispose of the supposedly foundational subject/object distinction. We can recognize subjecthood as an abstract category, a product of a philosophy laden with abstruse theological baggage… During most of our everyday experience, when we are not primed to be so self-conscious and self-centered, the phenomenal experience of ourselves and the environment is more continuous, flowing and irreducibly social in the sense that the categories that we use for interacting with the world are constituted and remade through interactions with many other minds.

Is Dominic Cummings a hypocrite, or does the EU’s Common Agricultural Policy just suck?


On Saturday, The Observer revealed that Prime Minister Boris Johnson’s recently appointed chief adviser received around £235,000 of EU farm subsidies over the course of two decades in relation to a farm that his family owns. Dominic Cummings is often portrayed as the mastermind behind the successful referendum campaign to leave the European Union, so he is currently enemy no. 1 among Remain supporters.

I am unconvinced this latest line of attack plays in Remainers’ favor (I was a marginal Remain voter in the referendum and still hold out some hope for an eventual EEA/EFTA arrangement). Instead, this story serves as a reminder of probably the worst feature of the current EU: the Common Agricultural Policy.

The CAP spends more than a third of the total EU budget (for a population of half a billion people) on agricultural policies that support around 22 million people, most of them neither poor nor disadvantaged, as Cummings himself illustrates. Food is chiefly a private good, and the interests of consumers, producers and the environment (at least in the long term, as the example of New Zealand suggests) are best served through an unsubsidized market. But the CAP, built on faulty dirigiste economic doctrines, has artificially raised food prices throughout the European Union, led to massive over-production of some food commodities, and denied farmers in the developing world access to European markets (the US, of course, has its own equivalent system of agricultural protectionism).

These economic distortions make an appearance in my new paper with Charles Delmotte, ‘Cost and Choice in the Commons: Ostrom and the Case of British Flood Management’. In the final section, we discuss the role that farming subsidies have historically played in encouraging inappropriately aggressive floodplain drainage strategies and uneconomic use of marginal farming land that might well be better left to nature:

British farmers currently receive substantial subsidies through the European Union’s Common Agricultural Policy. This means that both land-use decisions and farm incomes are de-coupled from underlying farm productivity. Without the ordinarily presumed interest in maintaining intrinsic profitability, farmers may fail to contribute effectively to flood prevention or other environmental goals that impact their output unless specifically incentivized by subsidy rules. If the farms were operating unsubsidized, the costs of flooding would figure more plainly in economic calculations when deciding where it is efficient to farm in a floodplain and what contributions to make to common flood defense. Indeed, European governments are currently in the perverse position of subsidizing relatively unproductive agriculture with one policy, while attempting to curb the resulting harm to the natural environment with another. These various schemes of regulation and subsidy plausibly combine to attenuate the capacity of the market process to furnish both private individuals and local communities with the appropriate knowledge and incentives to engage in common flood prevention without state support.

Our overall argument is that it is not just the direct costs of subsidies we should worry about, but the dynamics of intervention. In this case, they have led not only farmers but homeowners and entire towns to become reliant on public flood defenses with significant costs to the natural environment. There is limited scope for the government to withdraw provision (at least in a politically palatable way).

Turning back to The Observer’s gotcha story, it isn’t clear to me that Cummings is a hypocrite. I think the best theoretical work on hypocrisy in one’s personal politics is Adam Swift’s How Not to Be a Hypocrite: School Choice for the Morally Perplexed Parent. In it, Swift argues that the scope to complain about supposedly hypocritical behavior, especially taking advantage of policies that you personally disagree with, can be narrower than intuitively imagined, mainly because of the nature of collective action problems. Swift’s conclusion is that, in some circumstances, leftwing critics of private schools are entitled to send their own children to private schools so long as others continue to do so and the burden of doing otherwise is too great. Presumably, this also means that strident libertarians are not hypocritical to use public roads so long as a reasonable private alternative is unavailable.

In an environment where every farmer receives an EU subsidy, it might be asking too much of EU-skeptic farmers to deny it to themselves. Instead, it seems legitimate and plausible to take the subsidy while campaigning sincerely to abolish it.

Do we want criminals to ‘feel terror at the thought of committing crimes’?

Last week, Priti Patel, the new British Home Secretary, provoked a media stir when she announced that she thought the criminal justice system should aim to strike fear into the hearts of criminals. Critics combined her new interview with her previous support for the death penalty, abolished in mainland Britain since 1965, to suggest that Patel represents a draconian and reactionary turn in British law enforcement.

Then, a couple of days ago, a YouGov survey showed that 72 per cent of the British public agreed with her. Media commentators can forget quite how high support for law and order is among ordinary citizens. The death penalty itself still attracts support from almost half the population.

Are the public right? The meat of the Government’s new policy is an increase in the number of police officers; this at a time of increasing violent crime and concerns about rising knife crime in London. On that front, the evidence points in Patel’s favour. More police often reduce crime and do so through a variety of mechanisms, including situational deterrence (for example, patrolling in high-crime areas) as well as increasing detection rates. There is general agreement that increasing the certainty of apprehension contributes to deterrence.

What about punishment severity? There the evidence is decidedly more mixed. There is remarkably little evidence, for example, that the death penalty deters crimes like murder more than an appropriate prison sentence. Using a new data set of sentencing practice in all police force areas in England and Wales, some great colleagues at the Centre for Crime, Justice and Policing at the University of Birmingham and I produced a study published last month: ‘Alternatives to Custody’. We compared the way one year’s sentencing influenced the subsequent year’s recorded crime.

What we found was that for property crime, our largest category, and robbery, community sentences generally reduced crime more than prison. In fact, one of our models suggested increased use of prison caused subsequent crime to go up. On the other hand, prison seemed to work (and was the only thing that worked) to reduce violent crime and sexual offences. (We summarised our results for the LSE British Politics and Policy blog.)

The lesson that we draw is that deterrence is not an overwhelming explanation of the impact of sentencing. Harsher sentencing probably deters some offenders. But at the same time, carrying out punishments can have criminogenic effects. Experience of prison often makes convicts less employable and can effectively socialise them into an enduring criminal identity. Of course, many offenders in the real world are not particularly well informed about the criminal justice system. They may also have less self-control than a typical member of the public. So information about an increased penalty for a crime may never effectively filter into the deliberation and reflection of some offenders until they are sentenced, at which point the high financial and social costs of prison kick in.

Getting caught by the police, perhaps on a few occasions, is a more immediate sign to an offender that their behaviour is unlikely to pay off in the long term. What does this mean for Patel? It suggests that fear of the consequences can play a role, but what we really need are graduated sanctions that avoid prison where possible. This gives offenders plenty of opportunities to exit a criminal career path. Relying on terror, by contrast, can produce a large prison population of stigmatized and harmed individuals who may well re-offend when they are released.

One weird old tax could slash wealth inequality (NIMBYs, don’t click!)


What dominates the millennial economic experience? Impossibly high house prices in areas where jobs are available. I agree with the Yes In My Back Yard (YIMBY) movement that locally popular, long-term harmful restrictions on new buildings are the key cause of this crisis. So I enjoyed learning some nuances of the issue from a new Governance Podcast with Samuel DeCanio interviewing John Myers of London YIMBY and YIMBY Alliance.

Myers highlights the close link between housing shortages and income and wealth inequality. He describes the way that constraints on building in places like London and the South East of England have an immediate effect of driving rents and house prices up beyond what people relying on ordinary wages can afford. In addition, this has various knock-on effects in the labour market. Scarcity of housing in London drives up wages in areas of high worker demand in order to tempt people to travel in despite long commutes, while causing an excess of workers to bid wages down in deprived areas.

One of the aims of planning restrictions in the UK is to ‘rebalance’ the economy in favour of cities outside London, but the perverse result is to make the economic paths of different regions and generations diverge far more than they otherwise would. Myers cites a compelling study by Matt Rognlie arguing that most of the increase in wealth famously identified by Thomas Piketty is likely due to planning restrictions rather than a more abstract law of capitalism.

Rognlie also inspires my friendly critique of Thomas Piketty, and of some philosophers agitating in his wake, just published online in Critical Review of International Social and Political Philosophy: ‘The mirage of mark-to-market: distributive justice and alternatives to capital taxation’.

My co-author Charles Delmotte and I argue that, for both practical and conceptual reasons, radical attempts to uproot capitalism by having governments take an annual bite out of everyone’s capital holdings are apt to fail, among other reasons because the rich tend to be much better than everyone else at contesting tax assessments. Importantly, such an approach does not effectively target the underlying causes of wealth inequality, nor the lived inequalities of capability that housing restrictions generate. The more common metric of realized income is a fairer and more feasible measure of tax liability.

Instead, we propose that authorities should focus on taxing income according to generally applicable rules. Borrowing an idea from Philip Booth, we suggest that authorities start including imputed rent in their calculations of income tax liabilities. We explain as follows:

A better understanding of the realization approach can also facilitate the broadening of the tax base. One frequently overlooked form of realization is the imputed rent that homeowners derive from living in their own house. While no exchange takes place here, the homeowner realizes a stream of benefits that renters would have to pay for. Such rent differs from mark-to-market conceptions by conceptualizing only the service that a durable good yields to an individual who is both the owner of the asset and its consumer or user in a given year. It is backward-looking: it measures the value that someone derives from the choice to use a property for themselves rather than rent or lease it over a specific time-horizon. It applies only to the final consumer of the asset who happens also to be the owner.

Although calculating imputed rent is not without some difficulties, it has the advantage of not pretending to estimate the whole value of the asset indefinitely into the future. While properties are not identical and fungible in the way that bonds and shares are, there are often enough real comparable contracts to rent or lease similar property in a given area to credibly estimate what it would have cost the homeowner to rent the property on the open market. The key advantage of treating imputed rent as part of annual income is that, unlike other property taxes, it can be more easily included in income tax liabilities. This means that the usual progressivity of income taxes can be applied to the realized benefit that people generally draw from their single largest capital asset. For example, owners of a single-family home who are on an otherwise low income will pay a small sum at a small marginal rate (or in some cases may be exempted entirely under ordinary tax allowances). By contrast, high earners living in large or luxury properties that they also own will pay a proportionately higher sum at a higher marginal rate on their imputed rent, as it is added to their labor income. Compared to other taxes on real estate, imputed rent is more systematically progressive, and it has significant support among economists, especially in the United Kingdom (where imputed rent used to be part of the income tax framework).

This approach to tax reform is particularly apt because a range of international evidence suggests that the majority of contemporary observed increases in wealth inequality in developed economies, at least between the upper middle class and the new precariat, can be explained by changes in real estate asset values. Under this proposal, homeowners will feel the cost of rent rises in a way that to some extent parallels actual renters.
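The mechanism described in the quoted passage can be sketched in a few lines. The tax bands, rates and figures below are invented purely for illustration (they are not actual UK policy, nor a proposal from the paper); the point is only that imputed rent folds into an ordinary progressive income tax:

```python
# Hypothetical progressive schedule: (upper threshold, marginal rate).
# Income falling between two thresholds is taxed at that band's rate.
BANDS = [(12_500, 0.0), (50_000, 0.20), (float("inf"), 0.40)]

def income_tax(income: float) -> float:
    """Tax due under the hypothetical banded schedule above."""
    tax, lower = 0.0, 0.0
    for upper, rate in BANDS:
        if income > lower:
            tax += (min(income, upper) - lower) * rate
        lower = upper
    return tax

def tax_with_imputed_rent(wages: float, imputed_rent: float) -> float:
    """Imputed rent is simply added to income before applying the bands."""
    return income_tax(wages + imputed_rent)

# A low earner in a modest home pays a little extra at the basic rate;
# a high earner in a large home pays more, at the top marginal rate.
low_extra = tax_with_imputed_rent(15_000, 6_000) - income_tax(15_000)
high_extra = tax_with_imputed_rent(80_000, 24_000) - income_tax(80_000)
print(low_extra, high_extra)
```

Because the rent is appended on top of labour income, it is automatically taxed at each household’s highest marginal rate, which is what makes the scheme more systematically progressive than a flat property levy.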

For social democrats, what I hope will be immediately attractive about this proposal is that it takes direct aim at a major source of the new wealth inequality, in a way that is more feasible than chasing mirages of capital around the world’s financial system. For me, however, the broader hope lies in the dynamic effects. It will align homeowners’ natural desire to reduce their tax liability with YIMBY policies that lower local rents (as that is what part of their income tax will be assessed against). If a tax on imputed rent were combined with more effective fiscal federalism, then homeowners could become keener to welcome newcomers into their communities, because they would share in financing public services.