Doctors Get Answers to Their Best Hospitals Questions

Misunderstandings among eye doctors prompt us to try to clear away the confusion for all

The Cleveland Clinic asked U.S. News to contribute an article to its ConsultQD/ophthalmology page to address various misunderstandings concerning the Best Hospitals rankings that surfaced in a national discussion with ophthalmologists convened by the Cleveland Clinic's Cole Eye Institute. We were happy to have the opportunity to do so, and the article was posted today. Because the issues raised transcend a single specialty, the article, with minor edits, is reproduced here for the benefit of others with similar questions and concerns.

A 130-page methodology report is freely available for download and provides considerable detail on these and other questions.

How are the ophthalmology rankings generated?

In ophthalmology and three other specialties (psychiatry, rehabilitation and rheumatology), hard outcomes and other relevant performance data are an elusive combination of irrelevant, unreliable and unavailable. Rankings in these four specialties are therefore based entirely on results from our three most recent annual surveys of a random sample of boarded specialists (200 per year). These physicians are asked which hospitals in their specialty they believe would provide the best care for the most challenging patients if cost and location were not considerations. Centers named by 5 percent or more of the responding physicians are nationally ranked as Best Hospitals. The average response rate in ophthalmology over the last three years has been 39.5 percent, which is quite high for a physician survey.
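For readers who want the arithmetic spelled out, here is a minimal sketch in Python of the nomination rule just described. It is an illustration, not our production code; the hospital names and responses are invented, and the real calculation pools three years of surveys.

```python
from collections import Counter

# Invented example responses: each respondent names up to five hospitals,
# already crosswalked to canonical names.
responses = [
    ["Hospital A", "Hospital B"],
    ["Hospital A", "Hospital C", "Hospital D"],
    ["Hospital B"],
    ["Hospital A", "Hospital E"],
    ["Hospital A", "Hospital B", "Hospital C"],
]

THRESHOLD = 0.05  # named by 5 percent or more of responding physicians

def nationally_ranked(responses):
    n_respondents = len(responses)
    nominations = Counter()
    for named in responses:
        nominations.update(set(named))  # count each hospital once per respondent
    shares = {h: c / n_respondents for h, c in nominations.items()}
    qualifying = [(h, s) for h, s in shares.items() if s >= THRESHOLD]
    return sorted(qualifying, key=lambda pair: pair[1], reverse=True)

for hospital, share in nationally_ranked(responses):
    print(f"{hospital}: named by {share:.0%} of respondents")
```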

Is the sample truly random? Is there geographical weighting?

It is a geographically weighted probability sample. The source is the AMA Masterfile. Of the 200 physicians surveyed per year in each specialty, 50 are selected from each of the four census regions.
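A schematic of that sampling design, again in Python, might look like the sketch below. The record fields and the helper function are assumptions made for illustration; the made-up physician records merely stand in for the AMA Masterfile.

```python
import random

REGIONS = ["Northeast", "Midwest", "South", "West"]  # the four census regions
PER_REGION = 50  # 50 physicians per region, 200 per specialty per year

def draw_sample(masterfile, specialty, seed=None):
    """Draw a geographically stratified random sample of boarded specialists.

    `masterfile` is a list of dicts with hypothetical keys
    'name', 'specialty' and 'census_region'.
    """
    rng = random.Random(seed)
    sample = []
    for region in REGIONS:
        pool = [doc for doc in masterfile
                if doc["specialty"] == specialty
                and doc["census_region"] == region]
        sample.extend(rng.sample(pool, PER_REGION))
    return sample

# Fabricated masterfile of 1,000 ophthalmologists, 250 per region.
masterfile = [
    {"name": f"Dr. {i}", "specialty": "ophthalmology",
     "census_region": REGIONS[i % 4]}
    for i in range(1000)
]
print(len(draw_sample(masterfile, "ophthalmology", seed=1)))  # 200
```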

Who actually receives the survey?

The survey goes to individual physicians. Hospitals do not know whether, or which of, their staff or privileged physicians are surveyed unless the doctors tell them. Nor is the survey a list of centers to check off; it simply has spaces for entering up to five hospitals. (If a name is not in the U.S. News database, our contractor, RTI International, crosswalks it to the appropriate entry.)
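Because respondents write hospital names free-form, the crosswalk step amounts to matching each entry against the names in the database. A toy version of that kind of matching, using simple string similarity, is sketched below; the database entries are examples only, and this is not RTI International's actual procedure.

```python
import difflib

# Hypothetical canonical entries from a hospital database.
DATABASE = [
    "Cleveland Clinic",
    "Hospital A Eye Institute",
    "Hospital B Medical Center",
]

def crosswalk(entry):
    """Map a free-text survey entry to the closest database name, if any."""
    matches = difflib.get_close_matches(entry, DATABASE, n=1, cutoff=0.6)
    return matches[0] if matches else None

print(crosswalk("Cleveland clinic"))  # -> "Cleveland Clinic"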

Do hospitals pay U.S. News to be placed higher on the list?

Emphatically, no. There is nothing that a hospital can arrange with U.S. News to get a bump of any kind. There has never been and will never be any way for a hospital to pay to play or improve its standing.

In ophthalmology – in all of the specialties, for that matter – aren't the rankings just a popularity contest?

I've addressed this question previously, most recently last year in the Wall Street Journal. My published reply there, echoed in a concurrent Second Opinion blog post, included this excerpt: "We believe that responsible specialists plug into extensive networks of other specialists in seeking the best care for the most challenging patients wherever it might be located. The late Bernadine Healy, a cardiologist and director of the National Institutes of Health before coming to U.S. News as health editor, used to call the physician survey a form of peer review."

In the 12 specialties that use hard data in addition to reputation, it is apparent at a glance that many of the ranked hospitals have a very small reputational score or none at all. Reputation counts, but except for a small number of medical centers – which include the Cleveland Clinic – it is not a key factor in the rankings.

However, we have always wanted to decrease the role of reputation in favor of metrics that directly reflect quality of care. The next round of Best Hospitals rankings, which will appear in July, will reflect a reduced weight assigned to reputation in the 12 data-oriented specialties: reputational weight in the final score will drop from the current 32.5 percent to 27.5 percent, while the weight assigned to patient safety metrics will rise from the current 5 percent to 10 percent.
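In concrete terms, the shift works out as in the sketch below (Python, with an invented hospital's component scores). Only the reputation and patient-safety weights are the figures cited above; the remaining 62.5 percent of the score is lumped together here purely for illustration and does not represent the actual breakdown of the other measures.

```python
# Component weights as fractions of the final score in the 12 data-oriented specialties.
OLD_WEIGHTS = {"reputation": 0.325, "patient_safety": 0.05, "other_measures": 0.625}
NEW_WEIGHTS = {"reputation": 0.275, "patient_safety": 0.10, "other_measures": 0.625}

def final_score(component_scores, weights):
    """Weighted composite of normalized component scores on a 0-100 scale."""
    return sum(weights[name] * component_scores[name] for name in weights)

# Invented hospital: strong on safety, modest reputation.
example = {"reputation": 20.0, "patient_safety": 90.0, "other_measures": 70.0}
print(final_score(example, OLD_WEIGHTS))  # 54.75 under the old weights
print(final_score(example, NEW_WEIGHTS))  # 58.25 under the new weights
```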

We announced the coming change in another Second Opinion post. The four reputationally driven specialties will be unaffected.

A second change involving reputation is the expansion of this year's physician survey to include members of the Doximity physician network. Announced on our web page two weeks ago, this change has generated considerable interest and some concern, which we plan to address shortly in a follow-up post. Watch the Second Opinion page for the update.

We would much prefer to evaluate hospital performance in every specialty with the help of hard data. If meaningful, robust data are available that would allow us to compare hospitals in ophthalmology as we now do in 12 specialties, we want to know that – and if we can gain access to such data, I promise that we will use them.