Today HealthGrades.com, which rates individual hospitals on an assortment of relatively routine procedures such as heart bypass surgery and C-sections, issued its fifth annual report on how well hospitals treat women when they give birth and when they have heart disease. As in previous years, it shows a gap between top-performing hospitals and those at the bottom in complications following vaginal and C-section births and in deaths following cardiovascular procedures such as bypass surgery and stent insertion.
I thought I'd take a look, partly out of curiosity—I hadn't spent much time with the previous reports—and partly because of the proliferation of Web-based report cards on hospitals. If hospitals were patients, they'd be spending much of their time being poked and questioned and examined. The Agency for Healthcare Research and Quality, a federal body, currently tabulates 68 online report cards (including the U.S. News America's Best Hospitals rankings) that use different combinations of clinical data, patient surveys, and other information (on patient safety, for example) to put hospitals—nationally, regionally, by state—through the mill.
I try to look at any new hospital performance assessment or study through the lens of a real-world user. Should I care about the findings? Are they clearly presented? Are they overly detailed—or oversimplified? Are they complete and in context? And can I trust them?
The HealthGrades report certainly is worth caring about. No woman should deliver at a hospital that puts her at serious risk of complications and her baby at risk of death. No woman should be treated for heart disease at a hospital where she has a significant chance of dying following a stroke, or after a stent is inserted or a valve is replaced.
In the "women's health" category, as HealthGrades calls it, each hospital receives an overall rating, a "maternity care" rating, and a "women's cardiac and stroke mortality" rating. The top 15 percent of hospitals get five stars, the next 70 percent three stars, and the bottom 15 percent one star. It's easy to see at a glance how a hospital performed, but the system is simplistic. The range of performance within the top 15 percent of hospitals, and within the bottom 15 percent, can hardly be trivial, to say nothing of differences within the middle 70 percent. Yet as far as the displayed information goes, all five-star hospitals look identical, and so do all three-star and all one-star hospitals.
Maternity care results are incomplete—data was available from only the 17 states willing to make such information public. HealthGrades spokesman Scott Shapiro says patients in those states accounted for about 60 percent of admissions. (Three omitted states with a large number of patients: Illinois, Michigan, and Ohio.) In computing complication rates for deliveries, moreover, no adjustments were made for age or medical condition, so valid comparisons between hospitals are impossible. The 14.5 percent complication rate for elective C-sections at Temple University Hospital in Philadelphia, for example, cannot be assumed to be twice as bad as the 7.2 percent rate at Thomas Jefferson University Hospitals in the same city—Temple may have had older or riskier patients.
The heart care results, which reveal death rates for six procedures and conditions, are more solid. The patients represented the entire universe of female Medicare enrollees, so no states were sidelined, and risk adjustments were made. But the results are frustratingly displayed merely as "best," "as expected," and "poor." For some procedures and conditions, the difference between "best" and "poor" is large, so it would be important to look for a hospital in the best group. Take death rate following a stroke: According to a full description of the HealthGrades methodology, the average death rate for the five-star "best" hospitals is 8.4 percent, compared with 13.8 percent for the one-star "poor" hospitals. But for stenting, the difference between 0.84 percent (best) and 0.97 percent (poor) is barely meaningful.
I've had a problem with the HealthGrades rating system from the beginning in 1999, when it gave hospitals the full range of one to five stars, and even more of a problem since it moved to a three-tier setup. It's certainly consumer-friendly, but my feeling is that it's far too blunt an instrument. Is this a fair judgment?