By THOMAS WILSON PhD, DrPH and VINCE KURAITIS JD, MBA
A recent study in the New England Journal of Medicine reported the results of a “hotspotting” program created by the Camden Coalition of Healthcare Providers (Camden Coalition). Hotspotting targets interventions at all or a subset of healthcare superutilizers – the 5% of patients who account for roughly 50% of annual healthcare spending.
The results of the study were disappointing. While utilization (hospital readmissions) declined in the hotspotting group, the decline was almost identical in the control group. At least three headlines implied that the study had proven hotspotting care management approaches don’t work:
“’Hot spotting’ doesn’t work. So what does?” Politico Pulse
“Reduce Health Costs By Nurturing The Sickest? A Much-Touted Idea Disappoints.” NPR
“‘Hotspotting’ Apparently Doesn’t Reduce Superutilizers’ Readmissions” NEJM Journal Watch
NOT SO FAST!
As we’ll explain, we believe that much of what’s going on here can be explained by one or both of what we call “RTM Traps” (regression to the mean traps).
In this essay, we will:
- Define RTM (regression to the mean)
- Explain the RTM Traps and how many observers have fallen into them
- Suggest how to avoid the RTM Traps
We believe our POV is relevant to clinical, technical, and executive staff in the many organizations focusing on the superutilizer population – hospitals, physicians, ACOs, health plans, community groups, etc.
Defining Regression to the Mean
Regression to the mean is well known to statisticians, epidemiologists, actuaries and other ‘quants’ — it is the natural tendency for extreme values from a population (like the sickest patients) to be less extreme on remeasurement, i.e., to move toward the middle (mean). RTM happens due to two factors. We will use dice as an example to show the issues — don’t worry, we will return to people shortly.
You start with 100 six-sided dice. The expected mean over time for any one die thrown repeatedly would be 3.5. Here’s the math: (1 + 2 + 3 + 4 + 5 + 6) / 6 = 3.5.
- Selection bias. You throw all 100 dice onto a table and 20 of them come up as sixes. You select these 20 dice believing they tend to throw sixes more reliably.
- Variation (random). You now throw these 20 dice onto the table again, and the result is that 5 of them come up as sixes.
These remaining 5 dice are analogous to the superutilizer category – in this case, the top 5% of the original 100 dice. If you were to throw these 5 dice again, you should NOT expect 5 sixes — you’d expect a dramatic decline in the total shown on the dice, and the expected mean for any one die would still be 3.5. This is regression to the mean at work.
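If it helps to see this concretely, here is a minimal simulation sketch of the dice example. It is our own illustration (written in Python with NumPy, neither of which appears in the study): we select the dice that come up six and re-throw them, and the re-thrown dice drift back toward the expected mean of 3.5 even though nothing about the dice has changed.

```python
import numpy as np

rng = np.random.default_rng(42)  # fixed seed so the sketch is reproducible

# Throw 100 six-sided dice: this is the full "population".
first_throw = rng.integers(1, 7, size=100)

# Selection bias: keep only the dice that came up six.
selected_count = int((first_throw == 6).sum())
print(f"Dice showing six on throw 1: {selected_count}")

# Random variation: re-throw only the selected dice.
second_throw = rng.integers(1, 7, size=selected_count)
print(f"Of those, showing six on throw 2: {int((second_throw == 6).sum())}")
print(f"Mean of the re-thrown dice: {second_throw.mean():.2f}")  # drifts back toward 3.5
```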
People are not dice, and their variation is not purely random. Yet when you select sick people, on average they tend to be less sick over time. Most of this is very likely due to RTM, though it is possible (even likely) that other reasons exist too: the medical care system intervenes on sick people all the time, and those interventions can obviously be beneficial.
RTM is a particularly significant issue when working with superutilizer groups of patients. In the Camden Coalition study, which focused on 0.5% of eligible superutilizers, the authors reported: “The 180-day readmission rate was 62.3% in the intervention group and 61.7% in the control group. The intervention had no significant effect on this primary outcome.”
At first, it might seem counterintuitive that utilization declined so dramatically in both groups. However, in our extensive experience with chronic disease management programs, these results are typical when working with superutilizer groups. The Camden Coalition authors also acknowledge that studies targeting high-cost patients are prone to RTM.
The RTM Traps
In health care, when cost or utilization drops in a group of sick people, observers can fall into two separate RTM traps:
RTM Trap #1: Ignoring or discounting the regression to the mean phenomenon and attributing all of the changes seen to the intervention(s) implemented. This often happens in pre-post studies, or in studies where the patient serves as his or her own control.
RTM Trap #2: Ignoring or discounting other potential causal factors and attributing the drop seen among the intervention group only to regression to the mean.
Let’s look into each of these more deeply and consider approaches to avoid falling into an RTM Trap.
RTM Trap #1: Ignoring RTM.
Even the best researchers who publish in peer-reviewed journals can fall into the trap of ignoring RTM. A peer-reviewed article in 2017 concluded that “Metformin can lead to weight loss gradually in newly-diagnosed type 2 diabetes patients.” Moreover, “The patients with higher BMI and bigger waist circumference at baseline showed a more pronounced weight loss.” A letter to the editor pointed to the RTM trap the 2017 authors fell into: “The conclusions…are not substantiated due to the lack of a control group and failure to consider other factors that may have confounded these results. Unfortunately, we believe these results to be due to the regression to the mean (RTM) phenomenon, which weakens the causal inference proposed in this study.”
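To see how a pre-post design without a control group can manufacture this kind of finding, here is a small simulation sketch of our own (not the study’s data or methods; all numbers below are made up). Patients’ true weights never change, yet the group selected for being heaviest at baseline appears to “lose weight” on remeasurement:

```python
import numpy as np

rng = np.random.default_rng(7)  # illustrative simulation; all values are made up

# 10,000 patients whose "true" weight never changes, each measured twice
# with independent day-to-day fluctuation of a few kilograms.
true_weight = rng.normal(80, 12, size=10_000)
baseline = true_weight + rng.normal(0, 4, size=10_000)
followup = true_weight + rng.normal(0, 4, size=10_000)

# Select the heaviest 10% at baseline, as a pre-post study effectively does.
heavy = baseline >= np.percentile(baseline, 90)

# With zero treatment effect, the selected group still appears to lose weight,
# while the whole population shows no change. The "loss" is pure RTM.
print(f"Apparent change in selected group: {(followup[heavy] - baseline[heavy]).mean():+.2f} kg")
print(f"Apparent change in all patients:   {(followup - baseline).mean():+.2f} kg")
```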
In another study, on asthma disease management, the authors acknowledged RTM (thus attempting to avoid RTM Trap #1). Yet two letters to the editor commented on the need to deploy better methods to take it into account.
Let’s go back to the Camden Coalition study. The early results of this program had been very encouraging, showing dramatic declines in cost and utilization. As explained in the definition of RTM Trap #1, it’s human nature to attribute all the changes seen to the intervention(s) implemented.
This is the time for program managers and researchers to stop, take a deep breath, and ask: “Are the program results truly due to our interventions? Is it even possible that regression to the mean could account for the results? Would there be significant value in conducting a randomized trial?”
Avoiding RTM Trap #1: Compare Your Results to Your Own Equivalent Reference Group.
As was done in the Camden Coalition study, one way to avoid RTM Trap #1 is to conduct a scientific study – a randomized controlled trial. The challenge is that this route is expensive and time consuming – it can take years to design the experiment, carry it out, and analyze the results.
There is a simpler path, and it’s particularly useful when working with groups of superutilizer patients.
Revisit your own data from a previous year and, for Baseline Year 1, select a reference group equivalent to the patients in your program. You can also deploy matching techniques to make sure the two populations are comparable.
Then compare the results for Year 1 to Year 2. The odds are very good that you will see a dramatic decline in utilization and cost from Year 1 to Year 2 in your own data – even without a special intervention. Regression to the mean was probably responsible for the results you see.
You can then decide whether the additional effort of a scientific experiment is justified to “beat” the expected decline. But even before doing the fancy study, it is always a good idea to select a similar reference group from the past and see what happened over time, as sketched below.
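As a rough sketch of what that check might look like, assume a simple per-member, per-year cost summary (the file name, column names, and 95th-percentile cutoff below are our own illustrative assumptions, not anything from the Camden study). Select the top spenders in the baseline year, then look at what the same members cost the following year with no intervention at all:

```python
import pandas as pd

# Hypothetical claims summary: one row per member per year.
# File and column names (member_id, year, annual_cost) are illustrative assumptions.
claims = pd.read_csv("annual_member_costs.csv")

baseline = claims[claims["year"] == 2018]
followup = claims[claims["year"] == 2019]

# "Superutilizers": members at or above the 95th percentile of baseline-year cost.
cutoff = baseline["annual_cost"].quantile(0.95)
cohort_ids = baseline.loc[baseline["annual_cost"] >= cutoff, "member_id"]

year1_cost = baseline.loc[baseline["member_id"].isin(cohort_ids), "annual_cost"]
year2_cost = followup.loc[followup["member_id"].isin(cohort_ids), "annual_cost"]

# With no intervention, Year 2 mean cost for this cohort is typically far below
# Year 1. That gap is the RTM baseline any program result should be judged against.
print(f"Baseline-year mean cost for cohort:  ${year1_cost.mean():,.0f}")
print(f"Following-year mean cost for cohort: ${year2_cost.mean():,.0f}")
```

Any program whose “savings” merely match this no-intervention decline has not demonstrated an effect beyond regression to the mean.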
RTM Trap #2: Ignoring or discounting other potential causal factors.
As we quoted at the beginning of this article, many headlines in the popular press suggested a conclusion along the lines of “Hot spotting doesn’t work.”
We believe a more accurate conclusion would be: it is possible (even likely) that the results of the Camden Coalition hotspotting study were due to regression to the mean, but it is also possible that the results were at least partially due to other factors acknowledged by the authors.
Avoiding RTM Trap #2: Consider Alternative Explanations.
The Camden Coalition authors write at the end of the article: “Camden was evolving during the trial period, multiple other care-management programs were starting and the Coalition was leading a city-wide effort to connect patients with primary care within 7 days after hospital discharge.”
Isn’t it possible – even likely – that these city-wide care management efforts improved care for both the intervention and control group patients? The authors – to our surprise – do not comment on the possibility that their reference group was influenced by the same type of intervention as the target group. Thus, care management might have worked; it was simply as impactful in the control group as it was in the targeted group.
The lesson: even when doing a community randomized trial, try to see what else is going on in the community that could significantly affect your reference group. This is always a problem in community trials, as the world does not stand still so you can run a perfect experiment.
Conclusion
Be wary of falling into an RTM Trap when working with superutilizers. RTM can be especially pronounced in this group of patients.
One should not conclude that care coordination does not work for superutilizers. We applaud continuing efforts to understand the challenges and uniqueness of these sickest of patients.
Thomas Wilson PhD, DrPH is a recovering academic and the founder and pragmatic epidemiologist at Trajectory® Healthcare, LLC.
Vince Kuraitis, JD, MBA (@VinceKuraitis) is an independent healthcare strategy consultant with over 30 years’ experience across 150+ healthcare organizations. He blogs at e-CareManagement.com, where this article first appeared.