By Caitlin Gillooley
The number of public quality scorecards for hospitals has grown rapidly in recent years as consumers take more interest in getting the most value for their health care dollar. These attempts to simplify the complex hospital environment into layman's terms often condense hospital performance on a select set of quality measures into a letter grade, star rating, or ranking, which is much easier for the average stakeholder to understand than risk-adjusted infection rates.
Hospitals have long supported transparency around quality information, and the emergence of these public report cards reflects consumers' keen interest in better understanding the quality of care in hospitals. At a time when hospitals, consumers, and policymakers alike are focused on improving the value of care, quality data are crucial to ensuring that health care choices are not based on lower prices alone.
However, the producers of these scorecards and reports bear great responsibility in making judgment calls on behalf of consumers. The average patient cannot tell whether a given scoring methodology truly reflects the quality of care provided at a specific hospital, or whether the calculations behind the letter grade are statistically flawed and reveal little about the provider at all. In addition, the easy-to-understand nature of the scores carries the inherent risk of oversimplifying the complexity of delivering quality care: each patient, doctor, and hospital is unique and operates in an environment that is only partially comparable to any other. Finally, the proliferation of scorecards means that hospitals often receive discordant ratings across different reports, even when those reports draw on some of the same measures.