U.S. News's 'Best Hospitals' Clashes With Other Ratings. Is That Bad?

Different ways of looking at hospitals yield different results.

U.S. News, publisher of the annual "America's Best Hospitals" rankings, isn't the only hospital-rating game in town. Corporate-backed groups such as Leapfrog and the federal government's Medicare arm, through its Hospital Compare page on the Web, are other examples of public reporting of hospital data and ratings, each with its own approach. A new study in Health Affairs, a public-policy journal, concludes that because the ratings measure different qualities and disagree with one another, consumers are confused rather than enlightened. As Health Affairs puts it, sometimes more is less.

I see the point, but I think motivated consumers—as I would call anyone looking for information about particular hospitals—can sort things out and are smarter than the authors seem to believe. And the pot of gold the authors are seeking at rainbow's end—broad-based information that is useful, accurate, and consistent across different reporting platforms—is wishful, almost delusional thinking. Developing a consensus among clinicians, data-quality analysts, and occupants of hospital executive suites about how to define, collect, measure, and report meaningful data is far more difficult than herding cats, or whatever comparison you prefer.

Hospital Compare is a good example. After years of wrangling, hospitals finally agreed on a set of "process measures" that the Centers for Medicare and Medicaid Services could make public. The dozen or so measures would show how consistently heart-attack patients received an aspirin after arriving in the ER, how often surgical antibiotics were administered at the appropriate times, whether heart patients who smoked were counseled to stop smoking, and other such checklist-type compliance. The majority of the measures relied on evidence accumulated over many years. They made sense.

But how well do they predict whether patients will live or die, or suffer complications? Several studies have shown that a center's performance on Hospital Compare process measures has little to do with outcomes, such as the mortality rate of heart patients who have bypass surgery. A 2007 study in the Journal of the American Medical Association found that compliance with one of the measures, prescribing drugs called ACE inhibitors or ARBs to heart-failure patients when they were sent home, bore little relation to the death rate during the following two to three months.

The Health Affairs study examined the results of five online providers of comparative hospital information—U.S. News, Leapfrog, HealthGrades, Hospital Compare, and a state-sponsored service, Massachusetts Healthcare Quality and Cost. The authors (Michael Rothberg of the Tufts University School of Medicine and others) checked each ratings provider for its verdict on how well nine medium-to-large hospitals within 30 miles of Boston did with patients who had certain procedures, such as heart bypass surgery, or had medical conditions such as community-acquired pneumonia. Each hospital's care of heart-attack patients also was evaluated.

The results of the analysis were predictable. Only two of the ratings providers broke out overall death rates, and only three furnished heart-attack death rates. Virtually no ratings were consistent across all five.

But wasn't that mostly because the intent of each ratings provider is different? And is a range of missions a bad thing? The stated purpose of the Best Hospitals rankings, for example, is to help direct patients to centers that excel in the most difficult cases. We are not trying to identify hospitals that would be good choices for routine care or a specific kind of everyday surgery. The assumption, moreover, is that such patients, given their needs, probably are far more willing than most people to travel some distance to have those needs met. The study's authors, by explicit contrast, state that the 30-mile radius was based on consumers "who would be willing to travel up to one hour to receive high-quality care." If an elderly parent needed major surgery, I suspect that most children would be willing to go more than an hour away if they thought their parent would get top-flight care. HealthGrades, which focuses on specific conditions and procedures typical of large numbers of patients, comes closer to the authors' model.

Individual patients have individual needs. If I had a history of heart attack, you bet I'd be curious about how well my local hospital handles emergency cases. Hospital Compare, here I come. If the issue were a hip replacement in a low-risk patient, I might look at HealthGrades or Leapfrog.

It is true that not many people make decisions about hospitals based on ratings or rankings. Only about 20 percent of the public even saw such information in the past year, according to a survey released last month by the Kaiser Family Foundation, and of those who did, only about one third factored it into their health decisions. The numbers were higher for those with more education, but not dramatically. Just 6 percent of those surveyed were aware of the Hospital Compare site.

Confusion is not the issue. If public-health authorities and the healthcare community are committed to data transparency, the greater challenge is to address the 80 percent of the population that doesn't know there are data out there to be had. I smell a whiff of condescension in the Health Affairs study (consumers, poor lambs—so easily led astray).

I invite Dr. Rothberg to respond.