
One More Dark Cloud in the Stormy Skies of Medicare DM

Mathematica Research has just released a report: The Evaluation of the Medicare Coordinated Care Demonstration: Findings for the First Two Years.  It’s not pretty.

Section F of the Executive Summary is entitled “Synthesizing the Findings: What Works, and What Doesn’t”.  That section begins:

Given that few of the programs have shown convincing evidence to date of reducing beneficiaries’ need for hospitalizations and saving money or of improving the quality of care received, there is relatively little assessment that can be done yet of  what works.[xxxii]

The Medicare Coordinated Care Demonstration (MCCD) is one of a series of Medicare demonstration projects that have been going on for a decade.  Many of these projects relate to chronic care management.  However, don’t confuse the MCCD with the more widely known Medicare Health Support project, which is also experiencing some early bumps.

The MCCD projects began operations in 2002. The 15 awardees include “five commercial disease management vendors, three hospitals, three academic medical centers, an integrated delivery system, a hospice, a long-term care facility, and a retirement community… The 15 programs differed widely in both how they implemented their care coordination interventions with patients and their involvement with patients’ physicians and other providers.”

Other key findings from the 230-page report:

Treating only statistically significant treatment-control differences as evidence of program effects, the results show:
• Few effects on beneficiaries’ overall satisfaction with care
• An increase in the percentage of beneficiaries reporting they received health education
• No clear effects on patients’ adherence or self-care
• Favorable effects for only two programs each on: the quality of preventive care, the number of preventable hospitalizations, and patients’ well-being
• A small but statistically significant reduction (about 2 percentage points) across all programs combined in the proportion of patients hospitalized during the year after enrollment
• Reduced number of hospitalizations for only 1 of the 15 programs over the first 25 months of program operations
• No reduction in expenditures for Medicare Part A and B services for any program [xxvii]

The Mathematica researchers — who by the way do very good work — weakly offer a few strained references to program successes:

  • Treatment groups were generally very satisfied with the care coordination they received
  • Patients had high praise for the care coordinators’ knowledge
  • Most of the programs received high ratings from their patients’ physicians on most dimensions, although there were clear differences across the dimensions and across programs.

The report is a 2-year interim evaluation of a 4-year study. The executive summary is curiously peppered with cautionary statements such as:

while the evaluation cannot reject the hypothesis that savings in Medicare Part A and B expenditures are zero, for some programs it also cannot reject the hypothesis that savings are large enough to cover the average fee paid to the programs for care coordination [xxxii]

Allow me to translate. This is “researcher” speak for “Yes, it’s possible in the long run that these programs could still break even, just as it’s possible that the person sitting next to you could spontaneously combust in the next ten seconds.”

What are some of the implications of this report? 

  • By itself, the report is not conclusive.  It is an interim report on one demonstration project.
  • However, it is one more cloud in the stormy skies of Medicare DM.  Other recent news about Medicare demonstration/pilot projects has also not been encouraging.
  • This news opens the door another crack for provider driven care coordination projects, such as the Medicare Medical Home Demonstration.
  • Medicare’s “problem” — an aging population with an increasing burden of chronic conditions — is still there.  While it would be great to have some clear guidance about what works, the problem is not going away.

Your thoughts?

This work is licensed under a Creative Commons Attribution-Share Alike 3.0 Unported License. Feel free to republish this post with attribution.

4 Comments

  1. Ariel Linden, DrPH, MS on April 6, 2007 at 1:55 pm

    These results are not terribly surprising. Given that this is a randomized controlled trial, we can’t really argue about the research design.

    My main points are these:

    (1) The hospitalization rates were not better in the treatment groups than in the control groups – and thus costs…

    (2) There was no indication that treatment group participants had any behavioral change. If we expect to see reduced hospitalizations, we have to see the intervention change behaviors.

    (3) As for the dose/response, it could be that longer time will move the non-significant levels to significant; however, without demonstrating a meaningful intervention, I doubt we’ll see that…



  2. Thomas Wilson, PhD, DrPH on April 9, 2007 at 6:45 pm

    So Mathematica cannot reject the null hypothesis at this point. But in the future it is possible they may be able to do so. Or in the words of Blogger Vince: “It’s possible that the person sitting next to you could spontaneously combust in the next ten seconds.” But does anyone really believe in spontaneous human combustion? For the record, I refuse to answer that question on the grounds of self-incineration.

    But Mathematica could have / should have done the DM world a great service by showing how the average Medicare expenditures and admission rates changed from pre-DM intervention to post-DM intervention in both the treatment and control groups. They could have / should have helped us answer the question: “Are the results of the commonly used pre-post design in DM consistent with the results of the RCT?”

    I did a “quick and dirty” comparison of Table VIII.7 data (pre-enrollment characteristics) to Table VI.2 data (cumulative through month 25). It appears that a majority of the sites, both treatment and control, show a decline in costs and/or admission rates from pre to post (not all show the drop, so this does not appear to be an immutable law or regression to the mean). But the latter table was regression adjusted and the former was not, so it was not apples-to-apples. How can we encourage Mathematica to do a formal and rigorous review of this comparison? And when? … I’m burnin’ up.



  3. Randy Williams on April 10, 2007 at 5:39 am

    While I agree with the concern that of the 15 different models and designs of care coordination, only one showed statistically significant improvements in hospitalization rates vs. control, I also have firsthand knowledge of the one program that DID reduce hospitalizations. By way of disclosure, that program, Mercy of Iowa, is a client of Pharos, and utilized a daily remote monitoring technology we vend called Tel-Assurance. While the Mercy program is relatively labor intensive and involves a number of other care coordination elements, we are currently undertaking an evaluation of the treatment effect seen, to determine what role our intervention played. This is not the only CMS demonstration looking at our model and technology intervention. Early results from 2 of the 10 Physician Group Practice demonstrations are showing similar positive results on averting hospitalizations. While still preliminary, perhaps there is an important difference between models and technologies that deserves further understanding.



  4. Warren Todd on April 12, 2007 at 12:57 pm

    Many thanks to Vince for summarizing and “translating” the 230-page report [ugh]… and making sure that we do not mix up all the demonstration projects.
    While the US market has been dominated by the original call-center-centric model of DM, we do have some information on other models (not a lot, and it is somewhat mixed), including early reports on the recent Medicare Health Support pilots. I must admit I am beginning to be concerned that we have no clear-cut solution in hand as a tsunami of elderly threatens to overwhelm our healthcare systems. A colleague and I recently wrote a brief commentary on the interesting prospect of more physician-centric models of DM and projected that we will likely hear more about them as the physician community starts to get their arms around going “back to the future” in terms of their role in care coordination. See [http://healthleadersmedia.com/viewfeature/88615.html] for the article. I also agree with Randy Williams’s comment about the promise of other more technology-driven DM programs. As posted earlier, during this phase of DM we need to work harder at exploring different models of DM. Again, it is a bit discouraging that a host of Medicare demonstration projects are not turning up the silver bullet.

    Commenting on Dr. Wilson’s posting, my only concern back in the early days of the pilots was whether Mathematica knew enough about disease management to be evaluating the pilots. This is out of my league, but I am glad that more knowledgeable people like Tom have their eyes on the details of the outcomes analysis, and I hope that Mathematica will tap into these industry resources.