How Patients Rate Doctors


Over the years, a fair number of consumers who have consulted our annual America's Best Hospitals rankings have gotten in touch with me (I direct the project) to ask why we don't rate doctors, too. If there were a fair and objective way to do that, we would. But public data that might get at the competence of individual physicians doesn't exist. A handful of states, among them New York and Pennsylvania, post statistics on the Web showing how well surgeons in a particular state perform in a limited number of procedures, such as heart bypass surgery. But that is pretty much it. Even in those states, you won't find assessments of primary-care providers and other doctors, because these caregivers don't do procedures that neatly lend themselves to tabulation and analysis.

The "best doctors" books and listings in various metropolitan publications rely on physician surveys–opinions garnered from doctors about other doctors. That's better than flying blind when you need a doctor. It is helpful to have names of physicians who are deemed by their peers to be good enough to treat a family member, which is the kind of question these surveys generally ask. But there's no getting away from the fact that it is reputation and not hard data that drives these guides.

A few websites have sprung up that solicit and display what patients think of their doctors. The largest seems to be the free RateMDs, which now has more than 90,000 U.S. and Canadian physicians in its virtual vaults. Patients rate doctors from 1 to 5 on punctuality, helpfulness, knowledge, and overall quality; the ratings are posted, along with any comments patients are encouraged to submit. An average score is compiled, and a smiley face, a no-expression face, or a sad face is awarded.
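In rough terms, the mechanics look something like the sketch below. This is a minimal illustration, not RateMDs' actual code: the four category names come from the site itself, but the averaging details and the cutoffs separating a smiley from a neutral or sad face are my own assumptions.

```python
# Hypothetical sketch of how a site like RateMDs might turn patient
# ratings into a face icon. Category names are from the article;
# the cutoff values below are illustrative assumptions, not the site's own.

CATEGORIES = ("punctuality", "helpfulness", "knowledge", "overall quality")

def average_score(reviews):
    """Mean of all 1-5 category ratings across a list of reviews.

    Each review is a dict mapping category name -> rating (1-5).
    """
    scores = [review[cat] for review in reviews for cat in CATEGORIES]
    return sum(scores) / len(scores)

def face(avg, happy_cutoff=3.5, neutral_cutoff=2.5):
    """Map an average score to a face icon (cutoffs are assumed)."""
    if avg >= happy_cutoff:
        return ":)"   # smiley face
    if avg >= neutral_cutoff:
        return ":|"   # no-expression face
    return ":("       # sad face

reviews = [
    {"punctuality": 5, "helpfulness": 4, "knowledge": 5, "overall quality": 5},
    {"punctuality": 2, "helpfulness": 3, "knowledge": 4, "overall quality": 3},
]
avg = average_score(reviews)
print(f"{avg:.2f} -> {face(avg)}")  # 3.88 -> :)
```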

The idea has obvious appeal, but take care. Most of the ratings rest on a very small number of patient reviews; no physician in the entire country has more than 53, the vast majority of physicians have review counts in the single digits, and many have fewer than three. In Washington, D.C., for example, only three local cardiologists are rated at all: two with a single review apiece and one with two.

How can a sad face, or a happy one, be trustworthy if it relies on such flimsy input? Even with more reviews behind a rating, one bad review can obviously drag down the average score. And if you drill down to the details of a physician's ratings, you'll find that some patients slap up the lowest possible score in every category with only the skimpiest of explanations. It's more like venting than rating.
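To put numbers on that fragility, consider a purely illustrative calculation: the fewer the reviews behind an average, the more a single 1-star rating moves it.

```python
# Illustrative only: how much one 1-star review moves an average,
# depending on how many 5-star reviews sit behind it.

def with_one_bad_review(num_fives):
    """Average of num_fives five-star ratings plus a single 1-star rating."""
    return (5 * num_fives + 1) / (num_fives + 1)

for n in (2, 9, 49):
    print(f"{n} fives + one 1-star -> average {with_one_bad_review(n):.2f}")

# 2 fives + one 1-star -> average 3.67
# 9 fives + one 1-star -> average 4.60
# 49 fives + one 1-star -> average 4.92
```

With only two reviews in hand, one angry patient can knock a perfect score down to a middling 3.67; with fifty, the same review barely registers.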

How would you feel if you were a doctor who got one of these sad faces? You could post a response on the detail page, but it wouldn't appear on the summary page that displays only your overall ratings.

And this is the most highly developed of the rate-your-doctor sites. I'm afraid these services are still very much in their infancy and not yet ready to be taken seriously.