Peter Drucker was one of the best-known and most influential authorities on management theory and practice during his lifetime, and his writings are still widely read and referenced today. He once said, "If you can't measure it, you can't manage it." While he was referring to business practices, his concept of using measurement to drive improvement is just as relevant in health care. We can't improve something unless we are first able to measure it. Think back to your last visit to your primary care physician. He or she monitors and improves your health by checking things like your weight, blood pressure, and cholesterol (and likely several other measures as well). At the population level, we measure things like life expectancy, infant mortality, and even disease-specific outcomes as a way to both monitor and improve the overall health of our local population.
Just as importantly, the statistician and engineer W. Edwards Deming said, "In God we trust, all others must bring data." In order to trust and understand what it is that we are trying to improve, we have to be able to show the data in a meaningful way. More importantly, the data have to be valid (we are measuring what we actually want to measure), accurate (the measurements we obtain are close to the true value of whatever we are trying to measure), and reliable (we obtain the same results if we repeat the measurement more than once). For example, as I have suggested on a number of occasions, commonly cited population health metrics such as life expectancy and infant mortality may be valid and reliable measures of the overall health of a population, but they may not accurately measure the overall quality of a health care system (see, for example, the discussion here).
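For readers who like to see these concepts in concrete terms, here is a minimal sketch in Python using entirely made-up blood pressure readings (none of these numbers come from any real patient or study). It shows how accuracy and reliability can be checked numerically, while validity remains a question about whether we chose to measure the right thing in the first place.

```python
import statistics

# Purely illustrative: simulated systolic blood pressure readings (mmHg) for one
# patient whose "true" value we pretend to know. The numbers are hypothetical.
true_systolic = 130
cuff_readings = [128, 131, 129, 132, 130]

# Accuracy: how close are the readings, on average, to the true value?
bias = statistics.mean(cuff_readings) - true_systolic
print(f"Average bias vs. true value: {bias:+.1f} mmHg")  # near 0 = accurate

# Reliability: do repeated measurements agree with one another?
spread = statistics.stdev(cuff_readings)
print(f"Spread across repeat readings: {spread:.1f} mmHg")  # small = reliable

# Validity is a design question, not an arithmetic one: a perfectly accurate and
# reliable bathroom scale is still not a valid measure of blood pressure.
```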
What about hospitals? Do we have any good ways to measure (and then compare) the quality of the care delivered by individual hospitals? Our society likes to rank things from best to worst - just think about all of the rankings we talk about in our everyday lives. We rank sports teams, guitar players, colleges, and the best cities in which to live or retire. We also rank hospitals, purportedly by the quality of care that they deliver. There are a number of different organizations that rank hospitals, including the Centers for Medicare and Medicaid Services (CMS), Leapfrog, Healthgrades, Consumer Reports, and perhaps most famously, U.S. News and World Report.
The validity, accuracy, and reliability of these different hospital ranking and rating systems have been called into question by a number of public health experts (see, for example, the discussions here and here). A few years ago, a group of physician scientists and experts in outcomes measurement assigned a grade (think A, B, C, D, or F) to the various hospital ranking and rating systems that are commonly used (these findings were published in the online journal NEJM Catalyst in an article titled "Rating the Raters: An Evaluation of Publicly Reported Hospital Quality Rating Systems"). Importantly, no rating system received an A (the top grade) or an F (a failing grade). The highest grade received was a B, earned by U.S. News and World Report (USNWR). The authors of the study concluded, "Each rating system had unique weaknesses that led to potential misclassification of hospital performance, ranging from inclusion of flawed measures, use of proprietary data that are not validated, and methodological decisions. More broadly, there were several issues that limited all rating systems we examined: limited data and measures, lack of robust data audits, composite measure development, measuring diverse hospital types together, and lack of formal peer review of their methods."
What's perhaps even more concerning is the fact that these different hospital rating systems don't agree with one another (see "Disagreement Between Hospital Rating Systems: Measuring the Correlation of Multiple Benchmarks and Developing a Quality Composite Rank"). In other words, these rating systems aren't reliably measuring quality. Conflicting information is rarely, if ever, helpful.
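To make the "agreement" point concrete, here is a small, hypothetical sketch (assuming Python with the scipy library is available; the hospital ranks below are invented for illustration and are not the data from the cited study) of how one might check whether two rating systems rank the same hospitals similarly, using a Spearman rank correlation - the general kind of comparison the cited article performs with real benchmarks.

```python
from scipy.stats import spearmanr

# Hypothetical ranks assigned to the same five hospitals by two rating systems.
system_a = [1, 2, 3, 4, 5]
system_b = [4, 1, 5, 2, 3]

rho, p_value = spearmanr(system_a, system_b)
print(f"Spearman rank correlation: {rho:.2f} (p = {p_value:.2f})")

# A correlation near 1 would mean the two systems largely agree on who is "best";
# a value near 0 means the rankings convey little consistent information about quality.
```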
Lastly, a recent article published in JAMA (see "National Hospital Quality Rankings: Improving the Value of Information in Hospital Rating Systems") asked the very relevant question of whether the U.S. News and World Report rankings were in fact measuring the health of the local population rather than the actual quality of the individual hospitals being ranked. Using an argument similar to the one I made above (that population health metrics such as life expectancy and infant mortality measure so much more than the quality of the health care delivery system), the authors of this article stated, "Socioeconomic factors have a major effect on patient health, and people of lower socioeconomic status experience comparatively worse health outcomes...Patterns of socioeconomic deprivation, race, and ethnicity vary markedly by region, and individuals in some regions are more likely than those in other regions to experience serious chronic illnesses."
The authors of this latest study constructed a heat map (provided in the eSupplement) comparing regional differences in life expectancy (which reflect the social determinants of health) with the regional distribution of the USNWR's top-ranked hospitals. A striking pattern emerged: only the regions of the United States with higher life expectancy had hospitals on the USNWR Honor Roll, while the regions with the lowest life expectancy - and often the greatest health care disparities - had no Honor Roll hospitals at all.
Let's bring back the infamous "chicken and egg" question. Is there a cause-and-effect relationship here? One could argue (and I won't - please keep reading) that the USNWR Honor Roll hospitals are directly improving the health of their local populations and that, as a result, life expectancy is better in the regions of the United States where the local population has access to these hospitals. However, if we make that argument, we have to reconcile it with the fact that the U.S. has the most expensive health care delivery system in the world, yet performs worst among comparable countries on population health metrics such as life expectancy and infant mortality. I can't reconcile that fact, and there are other, smarter individuals who can't either. I have continued to argue, as have many others, that these population health metrics have more to do with the social determinants of health and with U.S. investment in programs that address those determinants, and less to do with the quality of the hospitals in the United States. As such, I can't and won't make the argument that the USNWR Honor Roll hospitals are the most important factor driving life expectancy for the populations that they serve.
So back to my original question then - do hospital rankings matter? I go back to the fact that the USNWR ranking system received the highest grade (a "B") from a group of independent, objective, and unbiased experts. It is by no means perfect. However, I see the USNWR ranking system as a good place to start. Providers, patients, payors, and public health each have health outcomes that matter most to them. Ideally, the outcomes that matter most to these different stakeholders would overlap - but that is just not always the case. Similarly, while some USNWR measures overlap with the measures that matter most to our patients and to public health, that's not universal either. For that reason, hospitals should focus on improving the metrics that matter most to our patients and to the public. As the saying goes, "A rising tide lifts all boats." I would suggest that an overall focus on "outcomes that matter" will have the secondary effect of improving some of the outcomes that are most relevant to the USNWR rankings.