Could a doctor's professional network—the pattern of referrals to and from other doctors—say something important about the quality of care he gives his patients? Health IT expert Fred Trotter thinks so, and he's been amassing data to see whether he's right. Does a primary care doctor choose wisely when he is referring patients for complex care, or does he send some of them to a colleague with a record of medical or ethical lapses? Does a specialist handle mostly routine referrals from local colleagues, or is she a superspecialist to whom doctors refer their most challenging patients?
Trotter, author of Hacking Healthcare: A Guide to Standards, Workflows and Meaningful Use (O'Reilly Media), believes that referrals at least suggest the level of care the doctor provides, if not a precise measure of quality. He readily acknowledges that even if he's right, referral patterns fall short of hard data that would allow every doctor's performance to be evaluated. "I've been looking for an objective way to rank doctors for a long time," he says.
Patient-shielding privacy provisions in the Health Insurance Portability and Accountability Act (HIPAA), however, make it impossible to get the information needed for patient-level comparisons of physician practice patterns and performance as reflected by clinical outcomes. Lacking such data, physician-rating systems—including U.S. News Top Doctors, developed in collaboration with Castle Connolly Medical—rely primarily on patient or peer reviews.
But HIPAA also had an unintended consequence that gave Trotter an opening. The act created a 10-digit number for every physician, the National Provider Identifier (NPI), that must be used to prescribe drugs or to be paid by insurers. The NPI database is publicly available, and Trotter realized he could tell which doctors practice in groups by their shared phone numbers and addresses.
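The grouping idea Trotter describes can be sketched in a few lines. This is a minimal illustration, not his actual pipeline: the record fields and NPI values below are invented, and the real NPPES registry file uses its own column layout.

```python
from collections import defaultdict

# Hypothetical sample of NPI registry rows: (npi, practice_phone, practice_address).
# Values are made up for illustration; real NPPES records have many more fields.
npi_records = [
    ("1234567890", "555-0100", "100 Main St"),
    ("1234567891", "555-0100", "100 Main St"),
    ("1234567892", "555-0199", "200 Oak Ave"),
]

def infer_groups(records):
    """Cluster NPIs that share a phone number and address --
    a rough proxy for doctors practicing in the same group."""
    clusters = defaultdict(list)
    for npi, phone, address in records:
        clusters[(phone, address)].append(npi)
    # Keep only clusters with more than one doctor.
    return [npis for npis in clusters.values() if len(npis) > 1]

print(infer_groups(npi_records))
# The first two doctors share a phone and address, so they form one group.
```

Shared contact details are only a heuristic, of course; doctors in the same building who don't practice together would be falsely clustered.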
He began to wonder whether coupling NPI data with Medicare information would provide meaningful insights about doctors' practice patterns and the quality of their care. About three years ago, he submitted a Freedom of Information Act request for administrative Medicare data on small groups of patients and their doctors' interactions with them. By focusing on clusters of patients rather than on individuals, Trotter thought he could track the doctors' activities without violating HIPAA privacy regulations, and he was right.
"What I was interested in was the strength of the relationship between doctors," Trotter says. The data clearly displayed patterns of referrals between doctors and between doctors and hospitals. "Let's say you're a cardiologist," he says. "The database will show which hospitals you're referring to."
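One simple way to measure that relationship strength is to treat each doctor as a node in a graph and count shared patients as edge weights. The sketch below assumes a toy list of referral events with invented identifiers; it is an illustration of the idea, not Trotter's actual method.

```python
from collections import Counter

# Hypothetical referral events extracted from claims data:
# (referring_doctor, receiving_doctor). Identifiers are invented.
referrals = [
    ("pcp1", "cardio1"), ("pcp1", "cardio1"), ("pcp1", "cardio1"),
    ("pcp1", "cardio2"),
    ("pcp2", "cardio1"),
]

# Edge weight = number of referral events between a pair of doctors,
# a crude measure of how strong their working relationship is.
edge_weights = Counter(referrals)

strongest = edge_weights.most_common(1)[0]
print(strongest)  # the pcp1 -> cardio1 edge, with weight 3
```

The same counting works for doctor-to-hospital edges, which is how a cardiologist's preferred hospitals would show up in the data.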
What will it take, Trotter wants to know, to transform this foundational database into a doctor-ranking system that "patients find useful and that doctors find fair?" For one, it will have to be coupled with other data sets. It would be relatively easy, for example, to spot doctors whose patients are readmitted to the hospital at an unusually high rate, or those who refer patients to hospitals with disproportionately high rates of central-line infections. Both of these measures are displayed in the government's Hospital Compare website, and the data are publicly available for downloading and analysis.
By linking referral data with information banked by state medical examining boards, you could also rate doctors on their qualifications and on the qualifications of the specialists they refer patients to. Doctors might be marked down if they repeatedly refer challenging patients to colleagues who aren't board-certified or—in the worst case—who have been disciplined for medical or ethical infractions.
Another possibility is what Trotter calls the "third-referral algorithm." He theorizes that most referrals for patients who need to see a specialist flow from their primary-care doctors to a single specialist, say a cardiologist, with whom the primary-care physician has built longstanding trust. But the referring doctor likely also has a backup choice for a small percentage of patients who don't match up for some reason with the specialist who gets most of the referrals. Perhaps these patients need a different desk-side manner, more attention and handholding.
And then there's the godlike specialist, on an even more exalted plane, to whom the primary-care doctor—and likely many other primary-care doctors, too—refers the most challenging patients. If Trotter's theory is correct, this analysis yields information about three different levels of medical care without the need for additional data.
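The three-tier pattern described above can be sketched as a ranking exercise: for each primary-care doctor, the top-volume specialist is the primary choice, the runner-up is the backup, and low-volume targets are "third" referrals. A specialist who shows up as a third choice for many different primary-care doctors is a candidate superspecialist. The counts and names below are invented for illustration; this is one plausible reading of the algorithm, not Trotter's published version.

```python
from collections import Counter

# Hypothetical referral volumes: primary-care doctor -> specialist counts.
referral_counts = {
    "pcp1": Counter({"cardioA": 40, "cardioB": 6, "superC": 2}),
    "pcp2": Counter({"cardioD": 35, "cardioB": 5, "superC": 1}),
    "pcp3": Counter({"cardioE": 30, "cardioF": 4, "superC": 2}),
}

def classify(counts):
    """For one primary-care doctor, label the top specialist 'primary',
    the runner-up 'backup', and remaining low-volume targets 'third'."""
    labels = {}
    for rank, (specialist, _) in enumerate(counts.most_common()):
        labels[specialist] = ("primary", "backup")[rank] if rank < 2 else "third"
    return labels

# A specialist tagged 'third' by many different primary-care doctors
# is a candidate superspecialist under this reading of the theory.
third_votes = Counter()
for counts in referral_counts.values():
    for specialist, label in classify(counts).items():
        if label == "third":
            third_votes[specialist] += 1

print(third_votes.most_common(1))  # superC, named a third choice by all three PCPs
```

In real claims data the cutoffs would need tuning, since many doctors refer to more than three specialists and ties in volume are common.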
Trotter doesn't have a big enough team of data scientists to test his theory on a project of this scale and complexity on his own. He and a partner have developed a small for-profit startup with a social mission, Not Only Development, to obtain relevant health data and transform it into a form consumers can use to guide their healthcare decisions. They have raised $23,000 to get it off the ground. They are encouraging others to dive into the data, charging $120 to researchers and data scientists who are interested in working with the database. (U.S. News is an early customer.) A $1,200 licensing fee is charged to build the data into a proprietary product.
U.S. News has not taken that step, but the prospect is appealing. We'd like nothing better than to be able to narrow the information gap for consumers who, as things stand, have little objective information to rely on when they're choosing doctors. It's hard to overstate the importance of picking a good doctor, especially one who will likely serve as a confidant and guide to higher levels of health care. Today, all too often picking a new doctor means entrusting your health and well-being to a stranger. What would you want to know about your doctor before you put yourself in his or her hands?