Be Our Guest: “Providing healthcare isn’t practicing medicine”

Jack Curtis has a new Guest Post up. An excerpt:

It was expected that doctors would have some charity patients from those less well off. You also expected that he would do everything possible for your care because that reputation was the reason you wouldn’t call someone else next time. That was reinforced by the priceless value set on human life by the prevailing Judeo-Christian ethos. No, this is not fiction; such was medical practice in Los Angeles in my youth. A simplification certainly, but it conveys the essential: Human ills and injuries were serviced by medical doctors whose state licensing and professional organizations approximated medieval guilds.

Please, read the rest.

On a different note, Jack’s excellent thoughts will be the last installment of NOL’s experimental “Be Our Guest” feature. I just couldn’t find the time to get a decent turnaround. If you still want to have your say but have nowhere to say it, jump into the ‘comments’ threads.

Broken incentives in medical research

Last week, I sat down with Scott Johnson of the Device Alliance to discuss how medical research is communicated only through archaic and disorganized methods, and how the root of this is that publishing has become an “economy” of Impact Factor, citations, and tenure-seeking rather than an exercise in scientific communication.

We also discussed a vision of the future of medical publishing, where the basic method of communicating knowledge was no longer uploading a PDF but contributing structured data to a living, growing database.
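
To make that vision concrete, here is a minimal sketch, in Python, of what one structured “study record” might look like in place of an unstructured PDF. Every field name here is a hypothetical illustration, not an actual schema from Nested Knowledge or any other platform:

```python
# A hypothetical "study record" schema -- illustrative only, not an
# actual schema from any existing platform.
from dataclasses import dataclass, field

@dataclass
class OutcomeResult:
    outcome: str   # e.g., "28-day mortality"
    group: str     # "treatment" or "control"
    n: int         # patients in this group
    events: int    # patients experiencing the outcome

@dataclass
class StudyRecord:
    title: str
    year: int
    design: str                # e.g., "RCT", "prospective cohort"
    population_tags: list      # e.g., ["COVID-19", "pregnancy"]
    intervention: str
    comparator: str
    outcomes: list = field(default_factory=list)

# One study contributed as structured data rather than a PDF:
record = StudyRecord(
    title="Example trial of drug X",
    year=2020,
    design="RCT",
    population_tags=["COVID-19", "hospitalized adults"],
    intervention="drug X",
    comparator="standard of care",
    outcomes=[
        OutcomeResult("28-day mortality", "treatment", n=120, events=9),
        OutcomeResult("28-day mortality", "control", n=118, events=15),
    ],
)
```

Once every study shares a shape like this, comparing results across trials becomes a database query instead of a manual comb through the literature.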

You can listen here: https://www.devicealliance.org/medtech_radio_podcast/

As background, I recommend the recent work by Patrick Collison and Tyler Cowen on broken incentives in medical research funding (as opposed to publishing), as I think their research on funding shows that a great slowdown in medical innovation has resulted from systematic errors in organizing knowledge gathering. Mark Zuckerberg actually interviewed them about it here: https://conversationswithtyler.com/episodes/mark-zuckerberg-interviews-patrick-collison-and-tyler-cowen/.

Launching our COVID-19 visualization

I know everyone is buried in COVID-19 news, updates, and theories. To me, that makes it difficult to cut through the opinions and see the evidence that should actually guide physicians, policymakers, and the public.

For me, the most important thing is being able to find the answer to my research question easily, and knowing that this answer is reasonably complete and evidence-driven. That means getting organized access to the scientific literature. Many sources (including a National Library of Medicine database) present thousands of articles, but the organization is the piece that is missing.

That is why I launched StudyViz, a new product that enables physicians to build an updatable visualization of all studies related to a topic of interest. My physician collaborators then built just such a visual for COVID-19 research: a sunburst diagram that users can click through to identify their research question of interest.

[Figure: StudyViz sunburst diagram]

For instance, if you are interested in the impact of COVID-19 on pregnant patients, just go to “Subpopulations” and find “Pregnancy” (or neonates, if that is your concern). We nested the tags so that you can “drill down” on your question, and so that related concepts are close to each other. Then, to view a study itself, just click on it to see its abstract with the key information (patients, interventions, and outcomes) highlighted:

[Figure: abstract with key information highlighted]

This is based on a complex concept hierarchy, built by our collaborators, that is constantly evolving as the literature does:

[Figure: concept hierarchy]
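
For the technically curious, a sunburst like the one above is straightforward to generate from a nested tag hierarchy. Below is a toy sketch using the plotly charting library; the tags and study counts are invented for illustration, and this is not StudyViz’s actual code or data:

```python
# Toy sunburst of a nested tag hierarchy (invented tags and counts).
# Requires: pip install plotly
import plotly.express as px

labels = ["COVID-19", "Subpopulations", "Pregnancy", "Neonates",
          "Interventions", "Antivirals", "Ventilation"]
parents = ["", "COVID-19", "Subpopulations", "Subpopulations",
           "COVID-19", "Interventions", "Interventions"]
counts = [0, 0, 12, 7, 0, 25, 18]  # studies tagged at each leaf node

fig = px.sunburst(names=labels, parents=parents, values=counts)
fig.show()  # clicking a wedge "drills down" into that branch
```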

Even beyond that, we opened up our software to let any researchers who are interested build similar visuals on any disease state, as COVID-19 is not the only disease for which organizing and accessing the scientific literature is important!

We are seeking medical co-investigators. Any physician interested in working with us can simply email contact@nested-knowledge.com or contact us on our website!

Pandemic responses are beyond Evidence-based Medicine

[Image: critical appraisal of randomized clinical trials]

John Ioannidis, a professor of medicine at Stanford University, fears that the draconian measures to enforce social distancing across Europe and the United States could end up causing more harm than the pandemic itself. He believes that governments are acting on exaggerated claims and incomplete data, and that a priority must be getting a more representative sample of populations currently suffering coronavirus infections. I agree additional data would be enormously valuable but, following Saloni Dattani, I think we have more warrant for strong measures than Ioannidis implies.

Like Ioannidis’ Stanford colleague Richard Epstein, I agree that estimates of a relatively small overall fatality rate are plausible projections for most of the developed world and especially the United States. Unlike Epstein, I think those estimates are conditional on the radical social distancing (and self-isolation) measures that are currently being pushed rather than something that can be assumed. I am not in a position to challenge Ioannidis’ understanding of epidemiology. Others have used his piece as an opportunity to test and defend the assumptions of the worst-case scenarios.

Nevertheless, I can highlight the epistemic assumptions underlying Ioannidis’ pessimism about social distancing interventions. Ioannidis is a famous proponent (and occasional critic) of Evidence-based Medicine (EBM). Although open to refinement, at its core EBM argues that strict experimental methods (especially randomized controlled trials) and systematic reviews of published experimental studies with sound protocols are required to provide firm evidence for the success of a medical intervention.

The EBM movement was born out of a deep concern of its founder, Archie Cochrane, that clinicians wasted scarce resources on treatments that were often actively harmful for patients. Cochrane was particularly concerned that doctors could be dazzled or manipulated into using a treatment based on some theorized mechanism that had not been subject to rigorous testing. Only randomized controlled trials supposedly prove that an intervention works because only they minimize the possibility of a biased result (where characteristics of a patient or treatment path other than the intervention itself have influenced the result).


So when Ioannidis looks for evidence that social distancing interventions work, he reaches for a Cochrane Review that emphasizes experimental studies over other research designs. As is often the case for a Cochrane review, many of the results point to uncertainty or relatively small effects in the existing literature. But is this because social distancing doesn’t work, or because RCTs are bad at measuring their effectiveness under pandemic circumstances (the circumstances where they might actually count)? The classic rejoinder to EBM proponents is that we know that parachutes can save lives but we can never subject them to an RCT. Effective pandemic interventions could suffer similar problems.

Nancy Cartwright and I have argued that there are flaws in the methodology underlying EBM. A positive result for treatment against control in a randomized controlled trial shows you that an intervention worked in one place, at one time, for one set of patients, but not why, nor whether to expect it to work again in a different context. EBM proponents try to solve this problem by synthesizing the results of RCTs from many different contexts, often deriving some average effect size that makes a treatment expected to work overall or typically. The problem is that, without background knowledge of what determined the effect of an intervention, there is little warrant for confidence that this average effect will apply in new circumstances. Without understanding the mechanism of action, or what we call a theory of change, such inferences rely purely on induction.

The opposite problem is also present. An intervention that works for some specific people or in some specific circumstances might look unpromising when it is tested in a variety of cases where it does not work. It might not work ‘on average’. But that does not mean it is ineffective when the mechanism is fit to solve a particular problem such as a pandemic situation. Insistence on a narrow notion of evidence will mean missing these interventions in favor of ones that work marginally in a broad range of cases where the answer is not as important or relevant.
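
A toy calculation, with invented numbers, makes the point concrete: pooled across contexts, an intervention can look marginal even though it is highly effective in the one context (here, a pandemic) where the mechanism fits.

```python
# Invented numbers: the same intervention measured in three contexts,
# weighted by how often each context appears in the pooled evidence base.
contexts = {
    "pandemic, dense city":   {"effect": +8.0, "weight": 0.1},
    "ordinary flu season":    {"effect": +0.5, "weight": 0.6},
    "low-transmission rural": {"effect": -0.5, "weight": 0.3},
}

pooled = sum(c["effect"] * c["weight"] for c in contexts.values())
print(f"Pooled 'average effect': {pooled:+.2f}")  # +0.95 -> looks marginal
print(f"Effect where the mechanism fits: "
      f"{contexts['pandemic, dense city']['effect']:+.1f}")
```

The pooled average answers “does it work typically?”, while the policy question is “will it work here, now?”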

Thus even high-quality experimental evidence needs to be combined with strong background scientific and social-scientific knowledge established using a variety of research approaches. Sometimes an RCT is useful to clinch the case for a particular intervention. But sometimes other sources of information (especially when time is of the essence) can make the case more strongly than a putative RCT.

In the case of pandemics, there are several reasons to hold back from making RCTs (and study designs that try to imitate them) decisive or required for testing social policy:

  1. There is no clear boundary between treatment and control groups since, by definition, an infectious disease can spread between and influence groups unless they are artificially segregated (rendering the experiment less useful for making broader inferences).
  2. The outcome of interest is not for an individual patient but the communal spread of a disease that is fatal to some. The worst-case outcome is not one death, but potentially very many deaths caused by the chain of infection. A marginal intervention at the individual level might be dramatically effective in terms of community outcomes (see the sketch after this list).
  3. At least some people will behave differently, and be more willing to alter their conduct, during a widely publicized pandemic compared to hygienic interventions during ordinary times. Although this principle might be testable in different circumstances, the actual intervention won’t be known until it is tried in the reality of pandemic.
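
On point 2, a minimal SIR simulation (the standard textbook epidemic model, not drawn from Ioannidis or any cited study, and with invented parameters) illustrates how a modest reduction in per-contact transmission compounds into a dramatically different community outcome:

```python
# Minimal SIR model with daily Euler steps; all parameters are invented.
# gamma = 0.1 corresponds to a ~10-day infectious period; the long
# horizon lets even slow epidemics run (nearly) to completion.
def total_ever_infected(beta, gamma=0.1, days=1000, n=1_000_000, i0=100):
    s, i, r = n - i0, i0, 0
    for _ in range(days):
        new_inf = beta * s * i / n
        new_rec = gamma * i
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return i + r

# beta = 0.25 gives a basic reproduction number R0 = beta/gamma = 2.5.
for cut in (0.0, 0.3, 0.5):
    total = total_ever_infected(beta=0.25 * (1 - cut))
    print(f"{cut:.0%} contact reduction -> {total:,.0f} of 1,000,000 ever infected")
```

Under these invented parameters, roughly 890,000 eventual infections at baseline fall to around 370,000 with a 50% contact reduction. No individual-level trial of distancing would reveal an effect of that scale, because the benefit runs through the chain of infection.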

This means that rather than narrowly focusing on evidence from EBM and behavioral psychologists (the ‘nudge’ approach), policymakers responding to pandemics must look to insights from political economy and social psychology, especially on how to shift norms towards greater hygiene and social distancing. In the absence of brighter ideas, the traditional public health methods of clear guidance and occasionally enforced sanctions are having some effect.


What evidence do we have at the moment? Right now, there is an increasing body of defeasible knowledge of the mechanisms by which the Coronavirus spreads. Our knowledge of existing viruses with comparable characteristics indicates that effectively implemented social distancing is expected to slow its spread, and that things like face masks might slow the spread when physical distancing isn’t possible.

We also have some country- and city-level policy studies. We saw exponential growth of cases in China before extreme measures brought the virus under control. In Singapore and South Korea, immediate quarantine and contact tracing of cases was effective without further draconian measures, but required excellent public health infrastructure.

We have now also seen what looks like exponential growth in Italy, followed by a lockdown that appears to have slowed the growth of cases, though not yet deaths. Some commentators do not believe that Italy is a relevant case for forecasting other countries. Was exponential growth a normal feature of the virus, or something specific to Italy and its aging population that might not be repeated in other parts of Europe? This seems like an odd claim at this stage, given China’s similar experience. The nature of case studies is that we do not know with certainty what all the factors are while they are in progress. We are about to learn more, as some countries have chosen a more relaxed policy.

Is there an ‘evidence-based’ approach to fighting the Coronavirus? As it is so new: no. This means policymakers must rely on epistemic practices that are more defeasible than the scientific evidence we are used to hearing about. But that does not mean a default to light-touch intervention is prudent during a pandemic response. Instead, approaches that use models with reasonable assumptions, based on evidence from unfolding case studies, are the best we can do. Right now, I think, given my moral commitments, this suggests policymakers should err on the side of caution, physical distancing, and isolation while medical treatments are tested.

[slightly edited to distinguish my personal position from my epistemic standpoint]

Nightcap

  1. Let’s have more audience-free debates Fred Kaplan, Slate
  2. Why the Natives usually sided with the British Jeffrey Ostler, Atlantic
  3. The case for shortening medical education Jain & Orr, Niskanen
  4. What’s buried in the coronavirus relief package? Billy Binion, Reason

Broken Incentives in Medical Innovation

I recently listened to Mark Zuckerberg interviewing Tyler Cowen and Patrick Collison concerning their thesis that the process of using scientific research to advance major development goals (e.g. extending the average human lifespan) has stagnated. It is a fascinating discussion that fundamentally questions the practice of scientific research as it is currently conducted.

Their conversation also made me consider more deeply the incentives in my industry, medical R&D, that have shaped the practices that Cowen and Collison find so problematic. While there are many reasons for the difficulties in maintaining a breakneck pace of technological progress (“all the easy ideas are already done,” “the American education system fails badly on STEM,” etc.), I think that there are structural causes that are major contributors to the great slowdown in medical progress. See my full discussion here!

The open secrets of what medicine actually helps

One of the things that most surprised me when I joined the medical field was how variable the average patient benefit is across different therapies. Obviously, Alzheimer’s treatments are less helpful than syphilis treatments, but even within treatment categories, there are huge ranges in actual efficacy among treatments with similar cost, materials, and public conception.

What worries me is that differentiating these therapies, and therefore deciding which therapies ultimately to use and pay for, is not prioritized in medical practice, either by the public or within the medical establishment.

I wrote about this on my company’s blog as a comment on the most surprising dichotomy I learned about: stenting (no benefit shown for most patients!) versus clot retrieval during strokes (amazing benefits, including double the odds of a good neurological outcome). Amazingly, the former is a far more common procedure, and the latter is underprovided in rural areas and in most countries outside of the US, EU, Japan, and Korea. Read more here: https://about.nested-knowledge.com/2020/01/27/not-all-minimally-invasive-procedures-are-created-equal/.
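
One caveat when reading that statistic: “double the odds” is not “double the probability.” A quick conversion, using a hypothetical 30% baseline rate of good outcome (the post does not state one), shows what an odds ratio of 2 actually means:

```python
# Converting an odds ratio to a probability; the 30% baseline is a
# hypothetical assumption for illustration only.
def apply_odds_ratio(p_baseline, odds_ratio):
    odds = p_baseline / (1 - p_baseline)
    new_odds = odds * odds_ratio
    return new_odds / (1 + new_odds)

p0 = 0.30  # assumed baseline rate of good outcome without clot retrieval
p1 = apply_odds_ratio(p0, odds_ratio=2.0)
print(f"Good neurological outcome: {p0:.0%} -> {p1:.0%}")  # 30% -> 46%
```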

Nightcap

  1. Can Francis change the Church? Nancy Dallavalle, Commonweal
  2. A Catholic debate over liberalism Park MacDougald, City Journal
  3. Why Hari Seldon was part of the problem Nick Nielsen, Grand Strategy Annex
  4. How not to die (soon) Robin Hanson, Overcoming Bias

There is no Bloomberg for medicine

When I began working in medical research, I was shocked to find that no one in the medical industry had actually collected and compared all of the clinical outcomes data that have been published. With Big Data in Healthcare such a major initiative, it was incomprehensible to me that the highest-value data, the data directly used to clear therapies, recommend them to the medical community, and assess their efficacy, were being managed in the following way:

  1. Physician completes a study, then spends up to a year writing it up and submitting it.
  2. Journal sits on the study for months, then publishes it (in some cases), without ensuring that the data it reports match those of similar studies.
  3. Oh, by the way, the journal does not make the data available in a structured format!
  4. Then, if you want to see how that one study compares to related studies, you have to either find a recent, comprehensive, on-point meta-analysis (which in my experience rarely exists), or comb the literature and extract the data by hand.
  5. That’s it.

This strikes me as mismanagement of data that are relevant to life-changing healthcare decisions. Effectively, no one in the medical field has anything like what the financial industry has had for decades: the Bloomberg terminal, which presents comprehensive, continually updated information by pulling data from centralized repositories. If we can do it for stocks, we can do it for medical studies, and in fact that is what I am trying to do. I recently wrote an article on the topic for the Minneapolis-St Paul Business Journal, calling for the medical community to support a centralized, constantly updated, data-centric platform to enable not only physicians but also insurers, policymakers, and even patients to examine the actual scientific consensus, and the data that support it, in a single interface.
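
As a sketch of the kind of question such a platform could answer in one query, consider pooling reported event rates for a therapy across every published study. The data and field names below are invented placeholders:

```python
# Invented example rows from a hypothetical centralized outcomes database.
studies = [
    {"therapy": "therapy A", "n": 450, "events": 41},
    {"therapy": "therapy A", "n": 320, "events": 30},
    {"therapy": "therapy B", "n": 210, "events": 9},
    {"therapy": "therapy B", "n": 180, "events": 7},
]

def pooled_event_rate(rows, therapy):
    selected = [r for r in rows if r["therapy"] == therapy]
    return sum(r["events"] for r in selected) / sum(r["n"] for r in selected)

for therapy in ("therapy A", "therapy B"):
    print(f"{therapy}: pooled event rate {pooled_event_rate(studies, therapy):.1%}")
```

A real platform would need meta-analytic weighting rather than this naive pooling, but the access pattern is the point: structured records go in, and an always-current synthesis comes out.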

Read the full article at https://www.bizjournals.com/twincities/news/2019/12/27/there-is-no-bloomberg-for-medicine.html!

Changing the way doctors see data

Over the past four years, my brother and I have grown a business that helps doctors publish data-driven articles, from just the two of us to a team of over 30 experienced researchers. Along the way, we noticed that data management in medical publication is decades behind other fields; in fact, the vital clinical outcomes from major trials are generally published as standalone PDFs with no structured data, and are analyzed in comparison to existing studies only in nonsystematic, nonupdatable publications. Effectively, medicine has no central method for sharing or comparing patient outcomes across therapies, and I think that it is our responsibility as researchers to present these data to the medical community.

Based on our internal estimates, there are over 3 million published clinical outcomes studies (with over 200 million individual datapoints) that need to be abstracted, structured, and compared through a central database. We recognized that this is a monumental task, and we have therefore focused on automating and scaling research processes that have been, until now, entirely manual. Only after a year of intensive work have we found a path toward creating a central database for all published patient outcomes, and we are excited to debut our technology publicly!

Keith recently presented our venture at a Mayo Clinic-hosted event, Walleye Tank (a Shark Tank-style competition of medical ventures), and I think that it is an excellent fast-paced introduction to a complex issue. Thanks also to the Mayo Clinic researchers for their interesting questions! You can see his two-minute presentation and the Q&A here. We would love to get more questions from the economic/data science/medical communities, and will continue putting our ideas out there for feedback!

My Startup Experience

Over the past four years, I have had a huge transition in my life: from history student to law student to serial medical entrepreneur. Essentially, my academic work taught me the value we can create if we find an unmet need in the world, form an idea that fills that need, and then use technology, personal networks, and hard work to bring something new into being. While startups obviously tackle any new problem under the sun, to me they are the mechanism for bringing about positive change, and, along the way, gaining the resources to scale that change across the globe.

I am still very far from reaching that goal, but my family and cofounders have several visions of how to improve not only how patients are treated but also how we build the knowledge base that physicians, patients, and researchers can use to inform care and innovation. My brother/cofounder and I were recently on an entrepreneurship-focused podcast, and we got the chance to discuss our experience, our vision, and our companies. I hope this can be a springboard for more discussions about how companies are a unique agent of advancing human flourishing, and about the history and philosophy of entrepreneurship, technology, and knowledge.

You can listen here: http://rochesterrising.org/podcast/episode-151-talking-medical-startups-with-keith-and-kevin-kallmes. Heartfelt thanks to Amanda Leightner and Rochester Rising for a great conversation!

Thank you!

Kevin Kallmes

Nightcap

  1. The gory, secret lives of NHL dentists David Fleming, ESPN
  2. Imam publicly caned for breaking adultery law he helped draft BBC
  3. The Chinese Communist Party on the worldwide protests Global Times
  4. Are countries like people? Niall Ferguson, Times Literary Supplement

Nightcap

  1. To love is no easy task (America is just fine) Rachel Vorona Cote, New Republic
  2. Chronic vomiting (medical marijuana) Christopher Andrews, OUPblog
  3. The Neanderthal renaissance Rebecca Wragg Sykes, Aeon
  4. A mild defense of Andrew Johnson (the American president) RealClearHistory

Nightcap

  1. Hanukkah’s Celebration of Assimilation Michael Koplow, Ottomans & Zionists
  2. How apartheid poisoned the world Peter Hain, Spectator
  3. A new understanding of human fragility and wholeness Stefanos Geroulanos, Aeon
  4. GM vs. Tariff Man Shikha Dalmia, the Week

Nightcap

  1. Lagos: Hope and Warning Armin Rosen, City Journal
  2. The Agonizing Death of James Garfield Rick Brownell, Historiat
  3. Authority Interfluidity
  4. Ukraine wants a national church that is not beholden to Moscow Bruce Clark, Erasmus