When deployed effectively, big data analytics can deliver staggeringly complex and meaningful insights, helping providers improve everything from population health management to adverse event rates to financial returns.
But most of the data used to perform these innovative calculations does not spontaneously generate itself. Clinicians must learn how to use their electronic health records to collect the right information and report it to half a dozen different quality programs and measurement groups.
Physicians have been particularly hard-hit by the increase in quality reporting requirements, and they have not been hesitant to voice their complaints about how the convoluted processes can sap their time, energy, and ability to provide top-shelf patient care.
But big data doesn’t have to do more harm than good, argues L. Gordon Moore, MD, Senior Medical Director of Population and Payment Solutions at 3M Healthcare. With the right strategies, the right attitude, and a few tweaks to the system, providers could learn to love their jobs again.
Moore sat down with HealthITAnalytics.com at the 2016 HIMSS Conference and Exhibition to discuss why physicians feel as if they’re losing their autonomy and how big data analytics can help to reinvigorate the patient-provider relationship that is central to clinician satisfaction.
Digging down to the root of the problem
It all starts with data normalization.
“The basic question that we are all trying to answer is how we can use all the resources at hand to achieve brilliant outcomes for the people that we serve,” he said. “It starts simply with knowing what is wrong with the person, and how we can understand it. If we’re going to use machines to help us with that, we need to turn these things into numbers. And that is, fundamentally, coding.”
No matter what form of coding is used – ICD-10, SNOMED, LOINC, or HL7 – turning data into standardized bits and bytes that can then be compared, contrasted, and compiled to develop actionable insights is the foundation of any analytics work.
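The normalization step Moore describes can be sketched in a few lines. The following is an illustrative toy, not any vendor's actual pipeline: production systems rely on terminology services and natural language processing rather than a hand-built lookup table, and the two ICD-10 codes shown are just common examples.

```python
# Illustrative sketch only: map free-text problem descriptions to
# standardized ICD-10 codes so records can be compared and aggregated.
ICD10_LOOKUP = {
    "type 2 diabetes": "E11.9",            # Type 2 diabetes mellitus without complications
    "congestive heart failure": "I50.9",   # Heart failure, unspecified
}

def normalize(problem_text):
    """Return a standardized code for a free-text problem, or None if unknown."""
    return ICD10_LOOKUP.get(problem_text.strip().lower())

print(normalize("Type 2 Diabetes"))   # E11.9
print(normalize("sprained ankle"))    # None - falls through to manual review
```

Once every problem, lab, and claim is expressed in a shared code system, records from different sources describe the same patient in the same vocabulary, which is what makes the aggregation Moore describes next possible.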
Once a patient’s experiences are codified, data scientists can start to build individualized portraits of a person’s total burden of illness, Moore explained, and use that information to draw conclusions about how to plan future treatments.
“We can take all the different claims and diagnoses and aggregate them. So now we know that this person has diabetes, and we can also rank them on a scale of severity. They also have congestive heart failure, and we can rank that on the scale, too. Then we can take these things together and say that they’ve been to the emergency department three times in the past year, which puts them in an incredibly high-risk category.”
“That enables any number of things, and opens the door to saying that a specific population is risk-adjusted.”
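Moore's walkthrough – severity-ranked conditions plus emergency department utilization rolling up into a risk category – can be sketched as a toy scoring function. The weights and thresholds below are invented purely for illustration and do not reflect any real risk-adjustment methodology.

```python
# Illustrative sketch: aggregate severity-ranked conditions and ED
# utilization into a crude risk score. Weights and cutoffs are invented
# for illustration, not drawn from any actual risk-adjustment model.

def risk_score(condition_severities, ed_visits_past_year):
    """condition_severities: dict of condition -> severity rank (1 = mild ... 4 = severe)."""
    return sum(condition_severities.values()) + 2 * ed_visits_past_year

def risk_category(score):
    if score >= 10:
        return "high"
    if score >= 5:
        return "moderate"
    return "low"

# Moore's example: diabetes and CHF, each ranked on a severity scale,
# plus three emergency department visits in the past year.
patient = {"diabetes": 3, "congestive heart failure": 3}
score = risk_score(patient, ed_visits_past_year=3)  # 3 + 3 + 2*3 = 12
print(risk_category(score))                         # high
```

The design point is the one Moore makes: each input on its own (a diagnosis, an ED visit) is just a coded fact, but combined they place the patient in a risk stratum that can drive outreach and care planning.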
Once a patient’s risk level is fully defined, analysts can then start to investigate how the patient’s care may impact his or her outcomes. “We can look at something like their hospitalization rate, and then we can go to the hospital or a health plan or a provider and tell them that this rate is above expected,” Moore said. “It may not be a marker of bad care – it may be something about the people that we serve and what they’re struggling with. And we can look at that data, too, to figure that out.”
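The "above expected" comparison Moore describes is commonly expressed as an observed-to-expected (O/E) ratio. In this hedged sketch the expected rate is simply a supplied number; in practice it would come from a risk-adjustment model built on the population data above, and the flagging threshold here is arbitrary.

```python
# Illustrative sketch: flag a hospitalization rate as "above expected"
# using an observed-to-expected (O/E) ratio. The expected count would be
# produced by a risk-adjustment model; here it is just an input.

def oe_ratio(observed_admissions, expected_admissions):
    """Ratio > 1.0 means more admissions occurred than the model expected."""
    return observed_admissions / expected_admissions

def flag_above_expected(observed, expected, threshold=1.25):
    """Flag only when observed exceeds expected by a margin (threshold is arbitrary)."""
    return oe_ratio(observed, expected) > threshold

print(flag_above_expected(observed=150, expected=100))  # True  (O/E = 1.50)
print(flag_above_expected(observed=105, expected=100))  # False (O/E = 1.05)
```

Because the expected count is risk-adjusted, a flagged ratio prompts investigation rather than blame – exactly Moore's point that a high rate "may not be a marker of bad care" but of a sicker population.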
“The point is that none of that would be possible without big data and being able to code that and run it through methodologies that apply logic and flags and highlight all these interesting insights that we couldn’t learn before.”
Healthcare organizations can then present the data to their clinicians in an effort to close gaps in care, improve delivery, and raise patient satisfaction levels.