It’s kind of a no-brainer that Dr. Keith Dreyer would be among those leading the advance of artificial intelligence into healthcare. Dreyer is a rare breed: a radiologist who teaches at Harvard Medical School, he also holds a degree in mathematics and a doctorate in computer science. So it’s fitting that Dreyer serves as chief data science officer at Partners HealthCare, a healthcare network that includes Brigham and Women’s Hospital and Massachusetts General Hospital, two of America’s most prestigious medical institutions.
Earlier this year, Partners and GE Healthcare signed a 10-year agreement to “integrate artificial intelligence into every aspect of the patient journey.” Why? A hospital generates some 50 petabytes of data per year on average, enough to fill 20 million four-drawer filing cabinets with standard pages of text. But 97 percent of the information never gets used.
Not for long, if digitization continues to gain steam. Dreyer says “thousands of algorithms” are in the works that use data to help medical professionals do their jobs better and more efficiently.
We caught up with Dreyer in Chicago last week at the annual meeting of the Radiological Society of North America (RSNA), the world’s largest gathering of radiologists and other medical professionals. Here’s an edited version of our conversation.
GE Reports: Everybody seems to be talking about machine learning and artificial intelligence at RSNA this year. What is going on?
Keith Dreyer: You have to look at the evolution of the field. When I was getting my PhD in computer science 20 years ago, some of my projects touched AI. But the algorithms weren’t sophisticated enough and there wasn’t enough data. So I went back into radiology and medicine and focused on building out the digital infrastructure.
Now, because of all of the data streaming across the internet and the amount of money invested in the space, AI is coming back. We can actually see the stuff work. For example, neural network architectures and deep learning have vastly improved as we started looking for things online.
The computers themselves also got so much faster. Back then, if you tried to train an algorithm, it took four months just to run one computation. Now you can do the same thing in 10 minutes and iterate quickly.
GER: Why do we need AI in healthcare?
KD: You have to have it. There’s too much data for humans to analyze.
GER: Can you give us an example?
KD: Modern medical scanners encode an image pixel in 56 bits [giving it 72 quadrillion possible “shades”]. But the machine brings down the amount of information to 16 bits [per pixel, or 65,536 shades] before the radiologists see it. So, in theory, 40 bits of information is being lost. And that’s just one pixel. We have about 10 billion images stored in our archives at our healthcare system alone.
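The arithmetic behind those figures is just powers of two. A quick sketch (the bit depths are Dreyer’s numbers; the variable names are ours):

```python
# How many distinct "shades" a pixel can represent at a given bit depth.
scanner_bits = 56   # bits per pixel as captured by the scanner, per Dreyer
display_bits = 16   # bits per pixel shown to the radiologist

scanner_shades = 2 ** scanner_bits   # 72,057,594,037,927,936 (~72 quadrillion)
display_shades = 2 ** display_bits   # 65,536
bits_lost = scanner_bits - display_bits

print(scanner_shades, display_shades, bits_lost)
# → 72057594037927936 65536 40
```

Multiply that 40-bit-per-pixel reduction by the millions of pixels in a study, and by the 10 billion images in the archive, and the scale of the untapped data becomes clear.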
GER: These numbers are overwhelming. Where do you start?
KD: I’m on the board of the American College of Radiology and we’ve created what’s called the Data Science Institute. We realize that there are huge opportunities with AI and machine learning, but we need to create a structure around it.
First we have to define the clinical challenges we have, and then apply data science to create the prediction models. As you can imagine, there are thousands of algorithms already being built right now. We need mechanisms to integrate them into the workflow, patient care, all of those kinds of things.
GER: Let’s start from the beginning. How do you build an algorithm for doctors?
KD: Stroke detection is a good example. It’s critical to detect and correctly identify stroke as soon as possible, because there’s only a limited window of time when you can actually treat [the patient] and preserve the neurons. The [specific] type of stroke affects treatment. Let’s say we do 200,000 MRI exams of the brain per year and 20,000 are stroke. We can annotate those 20,000, measure the brain lesions caused by the stroke and so on. Next we use the entire 200,000-exam set to train the algorithm and use it to identify the type of stroke. When it’s finished, we come back with a test set to see how accurate it was and repeat the process.
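The annotate-train-test loop Dreyer describes can be sketched at toy scale. Everything below is hypothetical: a single random number stands in for an MRI exam, a simple threshold stands in for a deep network, and the numbers are scaled down from the 200,000-exam figure so the sketch runs quickly. A real pipeline would train a neural network on annotated image volumes, but the shape of the loop — hold out a test set, fit on the rest, measure accuracy, repeat — is the same:

```python
import random

random.seed(0)

def make_exam(is_stroke):
    # Fake one-number "feature" per exam; stroke exams skew higher.
    return (random.gauss(1.0 if is_stroke else 0.0, 0.5), is_stroke)

# 20,000 exams, ~10% stroke -- same ratio as Dreyer's 200,000/20,000 example.
exams = [make_exam(i < 2_000) for i in range(20_000)]
random.shuffle(exams)
train, test = exams[:16_000], exams[16_000:]  # hold out a test set

# "Training": pick the threshold that best separates the classes on the train set.
best_thr = max((t / 10 for t in range(-10, 21)),
               key=lambda thr: sum((x > thr) == y for x, y in train))

# Evaluation: come back with the held-out test set to see how accurate it was.
accuracy = sum((x > best_thr) == y for x, y in test) / len(test)
print(f"threshold={best_thr:.1f}  test accuracy={accuracy:.2%}")
```

The iteration Dreyer mentions at the end — evaluate, then repeat the process — corresponds to rerunning this loop with more annotations or a better model whenever test accuracy falls short.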
GER: Is your image sample large enough to train the algorithm?
KD: As I said before, we have about 10 billion images stored inside of our archives. So if you’re looking for specific rare diseases, maybe it’s not enough. But the findings inside our data are pretty consistent for most of the work that we do.
Ideally, you should be using your exams that were done on scanners that you’re familiar with and then annotated by the right people. If you get any noise in the annotations, the algorithm is just going to train with less accuracy. Those are the fundamental challenges.
GER: Who are the people working on AI and machine learning at Partners HealthCare?
KD: It would be very hard for any single physician or any single data scientist today to solve these problems. It takes a team approach. We’ve created a Center for Clinical Data Science, which has about 35 people. They are a combination of data and computer scientists, physicians, workflow people and others. We currently have 41 projects in the pipeline, and it would be pretty hard to imagine how we could do it in any other fashion.
GER: What diagnoses?
KD: Some are in pulmonology, others in pathology and neurology, and some are in imaging.
GER: When are you going to start using them?
KD: First of all, you need an FDA approval. But even if we had approvals and actually deployed 41 algorithms inside one of our own hospitals, we could never support them. We have 70,000 employees, 6,000 physicians and over a dozen hospitals. Before we start to get too crazy with the creation of algorithms, we have to work with companies like GE. They have the scale and knowledge to help us.
GER: How does GE Healthcare help you?
KD: The relationship with GE Healthcare is an integral part of what we are doing. I could come up with the greatest algorithm in the world, but GE knows how to deploy it across the globe and what needs to be done to integrate it into the existing infrastructure. Do the results go to a referring clinician first, or to the emergency room doctor, for example? If that workflow isn’t put in place first, it just won’t work.
GER: What do you mean by workflow?
KD: We haven’t even scratched the surface of getting AI integrated in a way to make it work the best. For example, when we look at a CT image of the head, we might have 1,000 images to review. And today, you just have to step through all 1,000 images. Well, if an AI can run before us and say, those are the four of the 1,000 images that you need to review, do I need to even scroll through 1,000 images? Couldn’t it just show me the four I need, and then give me some information? But am I still the human making the decisions?
GER: Is AI going to replace doctors one day?
KD: I firmly believe that a radiologist plus an AI will beat a radiologist working alone, and will also beat an AI working alone. We have to figure out how to make them work together.
GER: What does the future of AI in medicine look like?
KD: One interesting area is population health, being able to identify a group or a population at risk. You could pull data off a wearable device, like a Fitbit, and say, OK, I’m seeing a variability in heart rate, I’m seeing a change in motion, and that gives me risk considerations for a patient who needs further diagnostic workup.
If you can detect patients before they become symptomatic, you’ve got a much better prognosis. I think that’s where a lot of this is going to end up.