By: Chris E. Stout, PhD
In this time of rapidly evolving payer models, the concomitant alphabet soup of acronyms seems to be multiplying at a dizzying pace: the Patient Protection and Affordable Care Act (PPACA, or more colloquially, the ACA), hospital-centric Accountable Care Organizations (ACOs), state insurance exchanges, Hospital Value-Based Purchasing (HVBP) programs, bundled-payment programs, and Comprehensive Care for Joint Replacement (CCJR) programs.
The ultimate goal, as always in healthcare, is optimal outcomes predicated on evidence-based treatment approaches, delivered promptly and economically (to maximize value and minimize untoward complications), and evaluated within an empirical framework. Oddly, perhaps the greatest challenge lies in that last element. On the delivery side, medical practices are aligning with hospitals, which are coordinating with post-operative venues of care (skilled nursing facilities, home healthcare agencies, and outpatient practices). This goes a long way toward achieving the value aspect of good clinical outcomes. Silos are being brought down, case management is becoming integral to the entire episode of care, and providers are sharing information and coordinating hand-offs. None of this is simple or easy to achieve, but there are signs that it is working.
So what’s the problem? In one word: benchmarking.
While the proverbial "big data" of healthcare seems ubiquitous in papers and articles these days, it is surprisingly absent from one of the most prevalent and fastest-growing areas of cost concern for Medicare: hip and knee joint replacements. Look at the benchmarks that drive payments and bundling for joint replacements, and what does one find? No data. How about third-party payers? We cannot really know; if they do have data, it is not available to the public or to policy makers. What about outcomes companies that contract with providers to track outcomes? They do not parse their findings by specific diagnoses or procedures, so everything from a bruised knee to a knee replacement appears in the same data set, indistinguishable. What about the Official Disability Guidelines? Their work is good, but it is generally focused on injured workers (thus more trauma-focused) and based on the research literature. Well then, what does the literature say? The generalizability of peer-reviewed studies to the realities of clinical practice is severely limited, because such papers impose so many exclusionary criteria in subject selection in order to control for extraneous, contaminating variables and comorbidities. Their utility as a proxy for benchmarking is therefore poor at best.
But there may be an answer…
A growing number of practices, professional associations, and universities are compiling their own data, conducting analyses, and sharing HIPAA-compliant records with others. While there may be limitations (e.g., coverage confined to certain regions, physicians, or hospitals), it is critical to know what is reasonable to expect in outcomes versus time and resources as joint replacement patients make their way through their episode of care, and to identify risks and untoward events early on. Indeed, one university-based joint registry served as an epidemiological indicator of a problem with a joint device maker's product.
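To make the benchmarking idea concrete, here is a minimal sketch, in Python, of how a practice might compare an individual patient's recovery against pooled registry norms and flag early deviations. All of the data, field names, checkpoints, and thresholds below are hypothetical and chosen purely for illustration; this is not a description of any particular registry's methodology.

```python
# Hypothetical example: benchmarking a joint-replacement patient's recovery
# against pooled registry norms. All scores, checkpoints, and thresholds are
# invented for illustration; a real registry would be far larger and would
# de-identify records in a HIPAA-compliant way.

from statistics import quantiles

# Pooled registry data: functional scores (0-100) at post-op weeks 2, 6, and 12
registry = {
    2:  [35, 40, 42, 38, 45, 37, 41, 44, 39, 43],
    6:  [55, 60, 62, 58, 65, 57, 61, 64, 59, 63],
    12: [75, 80, 82, 78, 85, 77, 81, 84, 79, 83],
}

def benchmark_band(scores, lower_pct=10, upper_pct=90):
    """Return the (lower, upper) percentile band for a list of registry scores."""
    cuts = quantiles(scores, n=100)          # 99 cut points for percentiles 1..99
    return cuts[lower_pct - 1], cuts[upper_pct - 1]

def flag_deviations(patient_scores):
    """Flag any checkpoint where the patient falls below the registry's lower band."""
    flags = []
    for week, score in patient_scores.items():
        low, _high = benchmark_band(registry[week])
        if score < low:
            flags.append((week, score, low))
    return flags

# A hypothetical patient tracking below expectations after week 2
patient = {2: 36, 6: 48, 12: 70}
for week, score, low in flag_deviations(patient):
    print(f"Week {week}: score {score} is below the 10th-percentile benchmark ({low:.1f})")
```

The point of the sketch is simply that once pooled, real-world data exist, "what is reasonable to expect" becomes a computable band rather than a guess, and patients drifting outside that band can be identified early in the episode of care.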
Just as Google scooped the Centers for Disease Control and Prevention on flu prediction a few years ago, it may be universities and practice groups that become the go-to entities for understanding the real-world experiences of heterogeneous populations, in other words, for benchmarks. For example, the University of Michigan plans to invest $100 million in a big data program. The University of Massachusetts Medical School developed the Function and Outcomes Research for Comparative Effectiveness in Total Joint Replacement and Quality Improvement (FORCE-TJR), a data system that guides total joint replacement practice. And Nicklaus Children's Hospital implemented a $67 million electronic health record system, replacing a paper process that had been used for nearly half a century, and is looking to incorporate data on genetic and ethnic variation in treatment outcomes.
However, even proponents of big data offer words of caution. Carol McCall, a health fellow at the Society of Actuaries, says, "The key issue…is how we take all of the insights Big Data has to offer and translate them into something actionable…if they don't change what we do, then they haven't really impacted anything." Indeed, and there is also the iatrogenic risk of making things worse by acting on misread patterns or on presumptions of causality that turn out to be incorrect.
If you'd like to learn more or connect, please do so at http://DrChrisStout.com, where you can also find my tweets. Various tools are available via http://ALifeInFull.org.