Disease Management and the Medicare Health Support (MHS) Project: “Houston, we have a problem.”

Thomas Wilson, PhD, DrPH and Vince Kuraitis, JD, MBA

The conventional wisdom in the disease management (DM) community has been that the Medicare Health Support (MHS) project would provide the evidence to resolve two issues:

First, MHS would resolve once and for all the question “Does DM have ROI (return on investment)?” It was thought that the randomized controlled trial process employed in MHS would provide scientific evidence to prove that DM has a positive ROI, clearing the way for unqualified acceptance and widespread adoption.

Second, MHS would be the lever to open the floodgates to the broader Medicare population market – that success in MHS would translate to an industry-wide opportunity to provide DM services to a large portion of Medicare’s 44 million enrollees. The legislation authorizing MHS mandated expansion if the programs prove successful:

“if the Secretary finds that the results of the independent evaluation … have been met by a program (or components of such program), the Secretary shall enter into agreements … to expand the implementation of the program (or components) to additional geographic areas … which may include the implementation of the program on a national basis”.

As MHS stands today, it’s unlikely to resolve either of these issues.

In this essay we will:

  • Briefly review the history of MHS
  • Summarize what’s being tested in MHS – examine the hypotheses that underpin this experiment
  • Review the 6-month findings of the first official report released in June 2007 from Research Triangle Institute (RTI)
  • Put the MHS project into context
  • Suggest options for Medicare’s future efforts

1) A Brief Recap of the History of MHS

In a nutshell, MHS is an adaptation of the commercial disease management contracting model prevalent circa 2002 (“Commercial DM 2002”). Sandra Foote’s article in Health Affairs proposed an experiment for applying “Commercial DM 2002” to the Medicare population. She later became the project director of MHS in 2004.

We remember the DM community as having had a schizophrenic perspective toward Medicare’s rollout of MHS.

On the one hand, many people disliked the “Commercial DM 2002” contracting model when it was prevalent.

  • The power in this model was concentrated in the hands of the buyers, primarily large health plans.
  • The process required pledges of guaranteed savings by vendors. If vendors didn’t produce specified results, they were required to pay back “fees” that had been advanced by purchasers, sometimes as much as 100% of fees. Chief Financial Officers of DM vendors hated the guaranteed savings model because they couldn’t recognize revenues on their income statements. Early-stage DM companies could meet the requirements of guaranteed savings contracts only by securing third-party reinsurance to provide the financial guarantee, a process that added significant cost to providing DM services and squeezed margins further.
  • The guaranteed savings contracts also required a reconciliation process at the end of the contract – a process to measure whether savings and other metrics had actually been achieved by the vendors. Reconciliations often resulted in disputes, holdbacks of funds, and unfriendly spitting contests.
  • Finally, the industry standard contracts used in “Commercial DM 2002” have been recognized as methodologically flawed and are being replaced with more rigorous models.

On the other hand, when the Medicare Modernization Act of 2003 included language requiring the MHS demonstration project, people quickly forgot their generally ill feelings toward “Commercial DM 2002”. We saw headlines like “DM Industry Jumps for Joy Over Medicare’s Leap of Faith”, “DMAA Applauds Bipartisan Medicare Agreement. Chronically Ill Beneficiaries to Benefit from Enhanced Care”, and “Miracle Cure: Medicare is Bringing ‘Disease Management’ to Your Door.”

2) What’s Being Tested in MHS?

The key elements of the model are:

  • Ability to achieve short-term clinical and economic ROI (3 years or less)
  • Guaranteed savings from vendors. As compared to a control population selected by a randomized process, vendors are required to guarantee Medicare 5% total medical cost savings over the entire intervention population. Vendors who do not achieve savings are required to pay back up to 100% of fees (a simple illustration of this reconciliation logic appears after this list).
  • A focus on high-cost/high-risk patients — Medicare beneficiaries with some combination of congestive heart failure (CHF), diabetes, and/or COPD.
  • Programs implemented by DM companies and health plans. While the original RFP issued by Medicare invited a broad range of health care organizations to apply, the eight organizations ultimately awarded contracts by Medicare were either DM companies or health plans; health care provider organizations were not selected.
  • A randomized controlled trial (RCT) study design was used – presumably to avoid the methodological flaws of the “Commercial DM 2002” model and to address the question “Does DM have ROI?”
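
To make the financial mechanics concrete, here is a minimal sketch of how a guaranteed-savings reconciliation of this kind might be computed. The 5% target mirrors the MHS requirement described above; the simple pro-rata payback rule and the dollar figures are our own illustrative assumptions, not the actual CMS contract terms.

```python
# Illustrative reconciliation for a guaranteed-savings contract of this kind.
# The 5% savings target mirrors the MHS requirement; the pro-rata payback rule
# and all dollar figures are hypothetical assumptions, not actual contract terms.

def reconcile(control_pbpm: float, intervention_pbpm: float,
              fees_paid: float, savings_target: float = 0.05) -> float:
    """Return the fees the vendor must pay back, assuming a simple
    pro-rata payback when achieved savings fall short of the target."""
    achieved = (control_pbpm - intervention_pbpm) / control_pbpm
    if achieved >= savings_target:
        return 0.0  # target met: the vendor keeps all fees
    shortfall = (savings_target - achieved) / savings_target
    return min(fees_paid, fees_paid * shortfall)  # payback capped at 100% of fees

# Example: intervention PBPM comes in only 2% below the comparison group,
# so under this illustrative rule the vendor returns 60% of its fees ($54 of $90).
print(reconcile(control_pbpm=1000.0, intervention_pbpm=980.0, fees_paid=90.0))
```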

Our take here is that MHS actually is testing a DM model that is 1) very narrow — an adaptation of “Commercial DM 2002” to the Medicare population, and 2) not representative of a much wider range of potential options that Medicare could use to provide chronic care management benefits to beneficiaries.

3) 6-Month Findings From the RTI Study on MHS

The Research Triangle Institute (RTI) study is 47 pages long and contains extensive analysis. Its executive summary highlights three key findings.

a) Finding 1: An unexpected pattern in Per Beneficiary Per Month (PBPM) differences between intervention and comparison groups emerges between the time of randomization and the start of the MHS pilots.

These two groups were selected through a randomization process by CMS (as required by law) and were reported by RTI to be equivalent at that time (i.e., no sites showed statistically significant differences). However, by launch date the two groups were reported by RTI to be non-equivalent: most sites showed baseline costs to be higher in the intervention group than in the reference group (though only one site showed statistically significant differences). For an in-depth examination of this issue see: Wilson T. A Critique of Baseline Issues in the Initial Medicare Health Support Report, July 18, 2007.

This loss of equivalence at the time of enrollment – if borne out over time in all the sites – is a very disturbing finding, especially as it appears to be biased against the MHSOs. It will undermine the entire project unless corrected.

The best scientific solution to this problem is for RTI to do standard statistical adjustments — using transparent methodologies that are vetted by outside experts — to adjust as much as possible for this non-equivalence.
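
To illustrate, here is a minimal sketch of one such standard quasi-experimental adjustment, a difference-in-differences contrast on PBPM costs, run on simulated data. It is only one of several techniques RTI could choose; the numbers below are made up for illustration and do not reflect actual MHS data or RTI’s methodology.

```python
# Minimal difference-in-differences sketch on Per Beneficiary Per Month (PBPM) costs.
# All data are simulated for illustration; they are not MHS, CMS, or RTI figures.
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Simulated baseline (pre-launch) PBPM costs, with the intervention group drawn
# slightly more expensive to mimic the non-equivalence RTI reported at launch.
baseline_intervention = rng.normal(1100, 300, n)
baseline_comparison = rng.normal(1050, 300, n)

# Simulated pilot-period PBPM costs, with no true program effect in this toy example.
pilot_intervention = baseline_intervention + rng.normal(50, 200, n)
pilot_comparison = baseline_comparison + rng.normal(50, 200, n)

# A naive contrast of pilot-period means ignores the baseline gap and makes the
# intervention group look more expensive than the comparison group.
naive_diff = pilot_intervention.mean() - pilot_comparison.mean()

# Difference-in-differences nets out the pre-existing gap between the groups.
did = ((pilot_intervention.mean() - baseline_intervention.mean())
       - (pilot_comparison.mean() - baseline_comparison.mean()))

print(f"Naive pilot-period difference (PBPM $): {naive_diff:.0f}")
print(f"Difference-in-differences estimate (PBPM $): {did:.0f}")
```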

b) Finding 2: MHSOs (MHS Organizations) did not engage the most costly beneficiaries.

Let’s start with some good news. This RTI finding hides one of the monumental successes of the program – the high percentage of people agreeing to participate in the program. RTI reports that an average of 79% of people agreed to participate (ranging from 70% to 92% across the various programs). [RTI Report, Table 6-4, p. 42]. However, even though 21% of people in the intervention group are not participating, the MHSOs are still accountable for the costs incurred by these individuals (the “Intent to Treat” model).

…and this 21% is sicker and more expensive than the participants [RTI Report, Table 3-1, p. 18].

…not impacting these costly beneficiaries will likely hinder the ability of MHSOs to control their intervention group’s overall PBPM growth and meet the financial terms of the pilot program [RTI Report, p. 41].

Of course, by definition, the MHSOs cannot have a strong impact on the non-participants; thus, the promised savings targets for the entire intervention population will have to come entirely from the participants.

c) Finding 3: fees paid to date far exceed any savings produced.

In RTI’s words:

“…fees paid to date far exceed any savings produced. The negotiated MHSO monthly fees are a much higher percentage of the comparison groups’ Per Beneficiary Per Month (PBPM) costs than the percentage savings on payments through the first 6-month pilot period. Fees negotiated by the MHSOs with CMS have not been covered by reductions in Medicare expenditures, let alone an additional 5% savings in Medicare payments.” [RTI report, p. 5]

The RTI report covers only six months of operations, and the authors acknowledge that this finding regarding “savings produced” in the first 6 months is very preliminary. We totally agree.

Yet, how likely is it that the MHSOs can produce required savings over the entire 3 year study period? Let’s do the math — click here to see our assumptions and calculations:

  • Guaranteed savings requirement: 5% of costs
  • Savings that MHSOs must recoup from intervention group participants to cover costs of more expensive non-participants, who are 21% of the intervention population (Finding 2, described above): 2.1% of participant costs
  • Recouping the monthly fees Medicare is paying MHSOs: 9.5% of participant costs
  • Recovering losses from first six months of operations: 3.3% of participant costs

To sum it up, we calculate that MHSOs will need to produce an average savings of 19.9% of costs of the participants in the intervention group to achieve financial “success” for the project. We conclude that this is a very steep hill to climb, but welcome your opinions to the contrary.
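
For readers who want to check the arithmetic, a minimal sketch follows. It simply adds the four components listed above; the percentages are the figures from our assumptions and calculations, and treating them as additive is itself a simplification.

```python
# Back-of-the-envelope sum of the savings components listed above.
# The four percentages come from our stated assumptions; simply adding
# them is an approximation that ignores any compounding between components.
components = {
    "guaranteed savings requirement": 5.0,
    "covering more expensive non-participants": 2.1,
    "recouping monthly MHSO fees": 9.5,
    "recovering first-six-month losses": 3.3,
}

required = sum(components.values())
print(f"Required savings from participants: {required:.1f}% of participant costs")
# -> Required savings from participants: 19.9% of participant costs
```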

4) Putting MHS Into Context

In isolation the 6-month findings from RTI are not conclusive. However, they fit an emerging pattern: high-risk, chronic, co-morbid patients are a challenging population.

Our aim here is to point out how challenging it’s turning out to be to apply disease/care management approaches with elderly patients, not to suggest that we should stop trying or that the challenge can’t be addressed.

The bottom line: we believe MHS is in big trouble.

5) Implications/Discussion

We’ve mentioned before that we see MHS as a narrow model and that there are many other options for Medicare to consider. What are some of the other options toward an optimal chronic care management program?

From: randomization will achieve equivalence, case closed.
To: randomization may not work, equivalence between the intervention and reference populations may not be achieved throughout the study–adjustments must be made.

CMS must adopt adjustment techniques successfully used in observational and quasi-experimental study designs to reclaim equivalence. If this is not done, the MHS project is dead, in our opinion. When these techniques are implemented, they must be scientifically valid, transparent, and verifiable.

From: a guaranteed 5% savings business model
To: considering many alternative payment mechanisms: capitation, shared savings, pay-for-performance, and/or fee-for-service DM.

From: focusing on short-term ROI
To: focusing on medium-long term ROI, quality improvement & compression of morbidity

From: DM carve outs to private companies & health plans
To: exploring options to re-integrate care providers into care management processes, e.g., the Chronic Care Model, the Medical Home model or a managed care model (e.g. utilization review, case management, pre-certification, etc.).

From: focusing on high-risk, chronic, co-morbid patients
To: including programs to address mainstream Medicare patients with prevention and population health approaches

From: rigid implementation of inflexible program structures
To: Rapid Learning Models. Open up the data, while protecting personal identification, to all “qualified” people to learn what can be learned in a timely fashion. The process should be transparent, open, and available to the taxpayers who have funded these demonstrations.

From: is MHS working as originally designed?
To: what’s the optimal chronic care management program, financing structure, and evaluation model for Medicare?

6) Conclusion

When the 1970 Apollo 13 flight first signaled to Houston that there was a problem, many ideas were considered to solve it. But only after lengthy deliberation, learning the lessons in the data, and the encouragement of creative out-of-the-box thinking was the ultimate solution (a successful one, as it turned out) determined. The US space program adapted to changing circumstances and has continued, with its ups and downs, to the present day.

We believe the same thinking should apply here. How can we determine the best method to improve the health and care of the population with chronic diseases? This will require open dialog and debate, evidence-based decision making, and creative out-of-the-box thinking.

© 2007 Thomas Wilson & Vince Kuraitis. All rights reserved.

This work is licensed under a Creative Commons Attribution-Share Alike 3.0 Unported License. Feel free to republish this post with attribution.

5 Comments

  1. Joel Brill on September 8, 2007 at 8:51 am

    The commentary calls into question the goals and designs of the health support projects.

    Can one achieve measurable improvement in a population that has taken six-plus decades to get where they are? Does Disease Management in Medicare populations change (and improve) outcomes, or are we simply delaying the inevitable based on what the patient did in the preceding decades?

    Are the incentives in these programs aligned properly? I’m not convinced the Advanced Medical Home project will do any better, as physicians are not trained / do not have the patience to act as care managers.

    Perhaps others can comment on the value of these federally-funded chronic care projects. As a taxpayer, I’d like to know my money is being spent wisely.



  2. Jaan Sidorov on September 10, 2007 at 7:13 am

    Congrats Tom & Vince on an insightful and timely review of MHS. The control-intervention split after randomization but before the intervention began makes me ask if the act of randomization itself changed participant behavior. I also note the cost analysis did not “trim” outliers, which could also obscure the measure of true central tendency.

    I also wonder if the study cohorts were relatively medically underserved. I’m not familiar with the average Medicare PBPM, but if so, MHS may have prompted increased utilization compared to baseline. This contrasts with typical commercially insured populations, which are prone to relative “over” utilization. This has implications for the generalizability of MHS to all sectors of the DM industry.

    I could not agree more with the even broader dimension of generalizability raised in “5) Implications/Discussion” above: MHS may be already testing an outmoded version of DM. As you note, the industry is destined to change, and many of the adjustments you identify are already happening. Many of these changes will be on display at the upcoming DMAA DMLF meeting this month in Las Vegas.



  3. Gordon Norman on September 10, 2007 at 11:01 am

    Vince: I think that you and Tom hit the proverbial nail on the head when you contrasted “Is MHS working as originally designed?” with “What’s the optimal chronic care management program, financing structure, and evaluation model for Medicare?” This paraphrases the frequent tension between “measurement for judgment” and “measurement for improvement” that often (invariably?) accompanies efforts to quantify quality within healthcare. MHS may be regarded as a judgment question by the CMS staff who devised and implemented the MHS program, but for the greater public good, the learning and quality improvement potential are the more important end points for these pilots. I also believe that is much more in keeping with the Congressional intent in passing section 721 of the MMA which authorized these chronic care improvement programs.

    Secondly, even the very preliminary results from the initial RTI report highlight that the original premise of testing “Does DM have ROI?” was overly simplistic, if not fatally flawed from the outset. Just as the questions “Does surgery have ROI?” or “Does medication have ROI?” seem ridiculous on their face, the important questions we should be addressing now are more complex – e.g., “What outcomes (clinical, financial, mortality, quality of life, functional) are impacted, for which patient subpopulations, over what time frame, by which care management interventions, delivered in what fashion, with what roles for each component in the healthcare value chain, etc.?” Care and disease management are complex processes, with many data gathering and intervention components and integration across multiple sites and among multiple parties. Attempting to evaluate effectiveness at such a high level of abstraction may be misleading, at best, or worse, lead us to throw out the baby with the bathwater.

    It is interesting to note that in deciding on approving new medical services for Medicare coverage, CMS is not permitted any consideration of cost-effectiveness; also, the FDA approves new drugs on the basis of safety and clinical effectiveness, not on cost-effectiveness. Yet in the case of care management interventions, it would appear that the primary outcomes of greatest interest to Medicare are the cost-savings. Even MedPAC has noted this constitutes something of a double standard applied to conventional medical care vs. care coordination services. MHS pilots are gathering rich clinical and participant satisfaction data that are critical for assessing the value provided by these programs.

    Yes, as Vince/Tom state, “high risk, chronic, comorbid patients are a challenging population”; if they were not, the current system of care would be doing an adequate job with this task and we would not have such a desperate need to find new solutions and alternative approaches, such as those represented by these MHS pilots. There is already widespread belief in the value of DM services for senior populations in the Medicare Advantage realm, where these programs are prevalent and in many cases, long-standing, based on measured outcomes outside of experimental settings. It should be the job of CMS, its contractors, and others to ferret out the valuable insights from the rich dataset provided by MHS for learning purposes, regardless of what other judgment uses it will provide for contractual reconciliation of these pilot projects. After all, isn’t that the usual purpose for which pilot projects are undertaken? To discover unanticipated issues, learn of early successes, adapt and refine and then iteratively retest new approaches based on the pilot learnings?

    MHS outcomes analysis is still in its infancy, there are many questions yet to be answered, and premature summary judgments based on overly simplistic premises may preclude us from thoroughly extracting the rich learnings that should inform the healthcare and care management communities. The bathwater may be murky at this point, to be sure, but let’s not throw it out before we even know whether one or more babies are lurking beneath the surface. The potential to use learnings from Phase I of MHS to refine the approaches used for a broader Phase II implementation is not just the icing on the cake – it is the cake!

    Thanks to Vince and Tom for pointing out these important points. While their Apollo 13 analogy may be apropos in some regards, the MHS situation could also evolve to one more suggestive of Apollo 11, where taking the “one small step” of devoting as much attention to the learning potential of MHS as for judgment purposes could result in “one giant leap for mankind” as represented by FFS Medicare beneficiaries and the global healthcare community who urgently need to find better ways of providing care coordination and disease management. “Houston” may be able to lead the way, but not with the mindset that got them this far along the MHS journey.



  4. Asako on September 10, 2007 at 6:56 pm

    Vince, thank you for another great article.

    Gordon has a great point on the double standard applied to conventional medical care vs. care coordination services.

    The issue I have with the current assessment of the MHS pilot is that not enough attention has been paid to what exactly each patient is getting from this pilot and what outcomes (clinical, quality of life) it did or did not deliver to the patient. As Gordon mentioned, the secret must be hidden in the rich clinical and patient satisfaction data, and I hope we examine it thoroughly before we jump onto the next untested paradigm.



  5. Dave Moskowitz MD FACP on March 17, 2008 at 2:30 pm

    I have to admit being suspicious about MHS’s intentions at the outset because of who was involved, and how they behaved.

    Who was involved
    Here in St. Louis, only one hospital was involved–St. John’s Mercy. St. John’s was busy building the first heart-only hospital in the state, at the same time that it was awarded one of the 7 MHS positions. Why would a hospital work hard to keep patients out of its new subspecialty profit center?

    How they behaved
    Sandy Foote, the Director of MHS, was present with Sean Tunis, the former Medical Director of CMS, when I told them how to save 90% of their dialysis and transplantation budget in October, 2004 (1). At the time, this represented some $22 billion. The amount is several billion dollars more now. Neither one had any interest whatsoever in eliminating the dialysis program.

    None of the participants in MHS had any interest in reversing kidney failure, when I contacted each of them individually.

    My conclusions:

    1. MHS was designed to fail. If DM can be shown not to work, then we’ll all just have to be satisfied with our expensive hospital-based system, secure in the knowledge that there’s nothing better we can do.

    2. Bureaucrats cannot be trusted to lower healthcare costs and improve outcomes. Neither can First Generation DM companies.

    3. Eventually, thanks to the Internet, word will get out that dialysis could have been prevented back in 2002. Those involved in keeping the secret might have a little explaining to do.