AI could take medical imaging to the next level 

Artificial intelligence can help doctors spot disease, but it's not taking over medicine 

AI computer models can scan medical images like this chest X-ray and glean details about a person’s cardiovascular and metabolic health.

yumiyum/iStock/Getty Images Plus

When radiologist Pouneh Razavi reads a patient’s mammogram, she hunts for blips in the X-ray image that could indicate breast cancer. Then, a second reader looks at the image, and the two compare results. 

But that second reader is no human doctor — it’s artificial intelligence. Since March, Razavi and her colleagues at Johns Hopkins School of Medicine have been using AI software as a second set of eyes.

It’s early days, so her team is still learning from the software, and it can learn from them, too. Images from Razavi’s practice could help train the AI on blips that it missed, so it improves over time. The jury’s still out on its performance — Razavi’s colleagues are still collecting data — but “we’re excited about it,” she says. Her patients are, too. “So far, everyone has just found it fascinating.” That includes Science News Editor in Chief Nancy Shute, who received her first AI-analyzed mammogram from Razavi in May. 

Mammograms aren’t the only type of medical imaging getting AI assistance. Doctors are using the technology to scan X-rays of people’s chests, ultrasound videos of infants’ hearts and more. AI technology in medicine is growing at a rapid clip, and “imaging is leading the way,” said Stanford University radiologist Curtis Langlotz at The New Wave of AI in Healthcare symposium in New York City in May. 

Since 1995, the U.S. Food and Drug Administration has approved nearly 900 AI-related medical devices — about 75 percent focus on radiology. But not all clinics around the country are plugged into the technology, Mert Sabuncu, a technologist at Cornell University, said at the meeting, which was hosted by the New York Academy of Sciences and the Icahn School of Medicine at Mount Sinai. In fact, he said, “I would say we’re just starting to deploy them.”

Science News spoke with Sabuncu and other experts about why AI in medical imaging is taking off and what they see as the promise — and potential pitfalls — of the technology. Here’s what we found out.

Massive amounts of data make medical imaging ripe for AI

It’s not unusual for Langlotz to arrive at the hospital early on a Saturday morning to find 150 patient images waiting for him to review. He’ll pore over chest X-rays, looking for abnormal nodules, pneumonia, fractured ribs or fluid in the lungs. 

It’s a job that takes training, focus and attention to detail — but even experts can make mistakes. A radiologist’s daily error rate may be around 3 to 5 percent, researchers have found. Such errors tend to stem from being overworked, scientists reported in 2023 in the European Journal of Radiology. “We need help,” Langlotz said.

The problem isn’t likely to disappear any time soon. More people are getting scans, which means doctors are acquiring more images than they used to, and the number of pixels in each image is also growing. For radiologists searching for a suspicious spot on an X-ray, the needle in a haystack analogy doesn’t quite cut it, Sabuncu said. It’s more like “looking for a needle in a haystack under immense time pressure when the patient’s health is on the line, when you have legal liability, and you really don’t want to miss anything.” 

What can be immensely challenging for humans may be a ripe opportunity for AI. With massive quantities of high-quality digital images, scientists can train AI computer models to seek out specific features in a person’s scan, like a smudge in a breast image or signs of lung disease on an X-ray. 

Such models may help improve radiologists’ accuracy and efficiency, and even flag images that look most alarming so doctors can triage cases based on which ones may need immediate attention. When Langlotz sees his cases piling up in the morning, for example, “I’d love to know which of those is more likely to have a problem,” he said.

AI may help doctors spot signs of disease 

When a person gets a chest X-ray, a radiologist may be checking for lung cancer or signs of an infection. But within that image, details about other aspects of a person’s health are hiding in plain sight. AI models have the potential to squeeze more information out of each scan. There’s a richness to image data that isn’t being mined by radiologists, Sabuncu said. “We’re leaving a lot of information on the table.” 

Take cardiovascular disease. Doctors usually calculate a person’s risk by factoring in information like cholesterol levels and blood pressure. But those calculations can’t be done if data are missing. That’s where AI could step in — by gleaning cardiovascular information from routine X-rays. 

“We’ll be extracting information from images that we don’t normally extract as radiologists.”

Curtis Langlotz

Using a set of previously collected images and follow-up data, an AI model scanned chest X-rays from nearly 8,900 people ages 50 to 75 and identified who might later experience a heart attack or stroke, researchers reported in the April Annals of Internal Medicine. It’s an example of “opportunistic imaging,” where AI trawls images for medical clues beyond the original purpose of the X-ray. The roughly 4,200 people flagged by the AI were 1.5 times as likely to have a serious cardiovascular problem over the next 10 years as other people evaluated. 

A similar study in 2023 used a different X-ray scanning AI to successfully detect type 2 diabetes. In that case, researchers trained the AI on hundreds of thousands of chest X-rays and told it which ones came from patients with the disease. Those people tended to have fat distributed in similar ways, including around the heart and liver, which the AI could pick out. It’s not something doctors typically look for when doing a routine X-ray. But that’s the theme of AI in imaging, Langlotz said. “We’ll be extracting information from images that we don’t normally extract as radiologists.”

Such an AI tool could help people who lack access to regular health care, the study’s authors point out. During an emergency room visit, for example, these people might one day receive a routine chest X-ray that could flag issues other than the one that brought them to the hospital.

Sabuncu sees yet another use for AI, as something that can help us peer into our future by taking a closer look at our past. Most AI image analyses today don’t consider a patient’s previous scans; they suggest a diagnosis based on a single image. Sabuncu is interested in AI tools that examine how people’s medical images change over time. “There’s this whole clinical history that needs to be accounted for,” he said.

His team recently built a model, for example, that scours a person’s mammograms over the years. This historical data helped the model pinpoint suspicious lesions. The AI was more than 80 percent effective at distinguishing between people who would develop cancer within the next five years and those who wouldn’t, the team reported in a paper submitted April 29 to arXiv.org and to be presented in October at the Medical Image Computing and Computer Assisted Intervention conference in Marrakesh, Morocco. The researchers checked the AI’s performance using patient outcome data that the AI hadn’t seen. 

AI can also help scientists comb through even more complex visual data, like ultrasound videos of newborn babies’ hearts, says Julia Vogt, a computer scientist at ETH Zurich. Trained pediatricians can look at those videos and spot infants who might have heart problems, but it’s difficult and time consuming, she says. Doctors at a clinic in Germany asked her team if it could help. 

Vogt’s team created an AI model that analyzes babies’ ultrasounds along with other data. In a test using videos from 192 newborns who had previously been evaluated by pediatric cardiologists, the model could accurately point out which infants had a heart issue called pulmonary hypertension about 80 percent of the time, her team reported February 6 in the International Journal of Computer Vision. By spotting the problem early, Vogt says, “we can actually make a huge impact on newborns.” Such issues can often be treated with supplemental oxygen or medications.

AI could help patients, but it’s not going to replace doctors any time soon

In health care, putting new technology out into the real world is no easy feat. Scientists may publish reports of AI models that can identify malignant cells in brain cancer samples or parasitic worm eggs in children’s poop, but that doesn’t mean the technology is ready to go, Sabuncu said. “There’s so much more work to be done.”

AI models need to be validated and tested in settings that go beyond their training. They need people with the clinical and technical know-how to use them. They need to go through the regulatory process for approval. And ideally, they’d be deployed in the real world and evaluated on their performance, Sabuncu said. 

Two black and white ultrasound images of infant hearts, shown side by side. The infant on the left has a healthy heart, the infant on the right has a heart problem.
Ultrasound images from newborns helped train an AI model to recognize a heart problem. The image at left shows a healthy left ventricle (rounded black shape in center). In an image from a different patient (right), the left ventricle appears squashed into a D shape, which suggests severe pulmonary hypertension. H. Ragnarsdottir et al./International Journal of Computer Vision 2024

Like most AI systems reported in the scientific literature today, the model that Vogt’s team developed isn’t ready for widespread rollout. The researchers first need to train it on more data. At this point, there are too many variables that might confuse it, like how different hospitals capture video and what types of ultrasound machines are used. She estimates that more training plus trials to confirm the model works could take three to five years. 

But that’s what’s needed to bring the model into more hospitals, Vogt says. “You need something to validate if it actually is useful.” 

Sabuncu thinks the public’s expectations around technology timelines are where AI hype can creep in. “There is no doubt that AI has a lot of potential,” he said. He envisions the technology could someday have a huge impact in medicine. But, he said, “I don’t think we’ll see that impact very rapidly.” 

Using AI tools thoughtfully and carefully could ultimately make health care more efficient, Vogt says. “AI is not magic,” she says, but it does have the potential to solve some problems. By easing workloads, for example, AI could give doctors more time with their patients. “That would be a huge achievement,” she says. 

Neither Vogt nor the other experts Science News spoke with envision AI replacing doctors any time soon, though it’s a concern they’ve all heard. Some patients think they’ll get a scan and then interact only with an AI, no human involved in the process, Razavi says. “They think it’s like the self-checkout counter at Whole Foods.” But that’s not accurate, she says. All images at Johns Hopkins imaging centers are read by a breast imaging radiologist. 

Her team is hoping AI can help doctors spot patients’ cancer earlier, reduce false positives and triage doctors’ work. How widespread those potential benefits are will depend on just how good AI can get, how clinics use the technology and who has access to it. Some breast imaging clinics, for example, charge patients an extra $40 or more to have their scans analyzed by AI, Razavi says. Johns Hopkins does it for free. 

“If there is a technology that can help find a small cancer, we don’t want to deny that to someone,” Razavi says. 
