Guest Post: The CMS Announcement Of Medicare Health Support Program Cancellation — What It Means For Buyers

by Al Lewis, JD

Add the Centers for Medicare and Medicaid Services (CMS) to the growing list of people and organizations that cannot find financial savings through disease management. Weeks after "lowering the bar" on Medicare Health Support (MHS) program savings requirements from 5% to 0%, CMS cancelled the program altogether, judging it unlikely that even the much-reduced savings threshold would be achieved in the remaining months of the three-year measurement period.

Yet even as CMS' conclusion mirrors that of the Congressional Budget Office and the RAND Corporation, other organizations and consultants are finding enough value to justify contract extensions and expansions. Paradoxically, one such announcement, by Independence Blue Cross, was made within hours of the CMS announcement.

So how is it that well-informed people can look at the same data and come up with dramatically different conclusions and courses of action? One recalls the quote from the immortal philosophers Dire Straits: "If two men say they're Jesus, one of them must be wrong."

It seems to turn on who does the analysis: biostatisticians or DMPC, looking at utilization data in an academically rigorous way, or actuaries and benefits consultants, looking at overall financial trends on a pre-post basis. The latter group's measurements find far more savings than the former group's. That is partly because benefits consultants and actuaries rarely, if ever, "plausibility-test" their results, relying wholly on pre-post comparisons to generate their findings of substantial financial opportunity or results. A brief plausibility test, using actual planwide changes in utilization rates for the events being managed, often reveals major mistakes in the actuarial approach.
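A plausibility test of this kind can be sketched in a few lines. The figures below are purely illustrative assumptions, not real plan data; the idea is simply to check whether a vendor's claimed dollar savings could possibly be explained by the planwide drop in the event rates being managed.

```python
# Hypothetical plausibility test for disease management savings claims.
# All numbers are invented for illustration; none come from the post.

def plausibility_test(baseline_events, current_events,
                      cost_per_event, claimed_savings):
    """Compare claimed dollar savings against the savings implied by the
    actual planwide drop in managed-event counts (e.g. heart attacks)."""
    events_avoided = baseline_events - current_events
    implied_savings = events_avoided * cost_per_event
    return implied_savings, claimed_savings <= implied_savings

# Illustrative: planwide heart attacks fall from 400 to 380 in a year,
# at an assumed $20,000 average cost per event.
implied, plausible = plausibility_test(400, 380, 20_000,
                                       claimed_savings=2_000_000)
print(implied)    # 400000 -- the most the event-rate change could explain
print(plausible)  # False: a $2M pre-post claim fails the plausibility test
```

If the claimed savings exceed what the observed event-rate change could generate, the pre-post "number" is implausible on its face.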

The actuarial approach is prone to mistakes, because changing a few key assumptions makes a huge difference. One Mercer presentation showed that changing just the outlier filter could make results vary twentyfold. That didn't seem to bother them; they considered the twentyfold variation the "range of possible savings."
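To see how sensitive a stand-alone pre-post estimate is to the outlier filter, consider this sketch with invented claims data. A single catastrophic baseline-year claimant, truncated or not, swings the "savings" estimate by well over the twentyfold variation described above.

```python
# Sketch of outlier-filter sensitivity in a stand-alone pre-post analysis.
# The claims figures are invented; the point is the swing, not the numbers.

def prepost_savings(pre_claims, post_claims, cap):
    """Truncate each member's annual claims at `cap`, then report the drop
    in mean cost as 'savings' per member (no utilization check at all)."""
    trunc = lambda claims: [min(c, cap) for c in claims]
    pre, post = trunc(pre_claims), trunc(post_claims)
    return sum(pre) / len(pre) - sum(post) / len(post)

# One catastrophic claimant in the baseline year dominates the result.
pre = [1000, 1200, 900, 1100, 250_000]
post = [1050, 1150, 950, 1080, 4_000]

tight = prepost_savings(pre, post, cap=5_000)     # strict outlier filter
loose = prepost_savings(pre, post, cap=100_000)   # permissive filter
print(tight, loose)   # 194.0 19194.0 -- roughly a hundredfold apart
```

Same data, same method, two defensible-sounding filters: the "savings" differ by roughly a hundredfold, which is exactly why a filter-sensitive estimate needs an independent utilization check.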

So it is indeed possible that, as some critics have said, the CMS study was mis-designed or mis-interpreted, as the Mercer one was. I would look at some plausibility indicators to see whether event rates in fact showed separation, even if overall costs and components of cost didn't. Until an answer is plausibility-tested, it is not an answer. If plausibility testing confirms the CMS finding, the debate is probably close to over. If not, the vendors have a chance.

CMS and RAND notwithstanding, my view remains that if a payor contracts with a qualified vendor, excludes asthma from the savings calculations, works with the vendor to minimize that all-important "time to contact" (see the special report on this topic at http://www.managedhealthcareexecutive.com/ ), and gets a good price, the program will "move the needle" enough to show a modest net improvement in avoided events, as measured by changes in population-wide utilization in key event rates such as heart attacks.

By contrast, the stand-alone pre-post actuarial approach, which doesn't include a check of actual utilization, is discredited in my most recent White Paper, The Ultimate Guide to Outcomes Measurement, as not appropriate for measuring disease management. Since I was the guy who, according to that very same Managed Healthcare Executive, invented pre-post population-based financial measurement, it's not as though I am attacking someone else here. I was wrong, and many other people still are. DMPC members, however, have long since recognized this error and adjusted their contracting processes accordingly.

The only remaining argument in favor of the actuarial approach is that payors "need to show the CFOs and their customers a number." That "number," if calculated fallaciously, means nothing. An analogy is the Vietnam War, when the army would release the North Vietnamese "body count," compare it to ours, and conclude that we were winning the war. That too was "a number" that somehow related to the task at hand but ultimately had nothing to do with the outcome.

My view has always been that the DM field would be much better off measuring conservatively and showing modest but unimpeachable returns, as most DMPC members do, rather than caving in to demands to show large savings numbers that don't mesh with any of the academic literature.