The assumption that radiologists would be the first to be replaced by AI has not proven true. Why? While tech experts may understand AI, they often lack insight into the complexities of medicine. In an interview with Dr. Daniel Pinto dos Santos, Senior Physician in Radiology at the University Hospital of Cologne and Chair of the eHealth and Informatics Subcommittee at the European Society of Radiology, we explore the aspects of a radiologist's work that AI cannot yet handle. Geoffrey Hinton, the 2024 Nobel laureate in Physics for his work on machine learning, predicted in 2016 that AI would eventually surpass radiologists, suggesting we wouldn't need to train them anymore.
Why do you think radiologists still have their jobs?
It's amusing. Hinton later tweeted that he was wrong about the timeframe: it's not five years but ten. Anyway, AI is excellent at analyzing images, and it's plausible that AI could eventually perform some of the tasks of radiologists. However, we still have jobs because radiology is more complex than people outside medicine tend to think.
Let me explain why. Obtaining the right data to train AI is challenging. Cases aren't evenly distributed, and there may be only a handful of examples of rare but critical conditions. For humans, it's relatively easy to learn these from textbooks and recognize them in images. AI needs numerous examples to differentiate between rare but important findings and common, less critical ones. Everything outside "the standard" becomes problematic for these models.
It might be possible that one day, an AI model could learn from textbooks and recognize patterns in images, too. But current training methods still require vast amounts of data, and healthcare data isn’t always readily accessible and adequately labeled. So, while it’s certainly plausible that AI will one day replace radiologists, that day is still far off, in my opinion.
Even if it's still far away, say 20 years, how will your work as a radiologist change as algorithms and AI evolve?
It has changed somewhat already, but the overall work has remained largely the same. There are many AI products on the market that can perform specific tasks, such as detecting intracranial hemorrhages and fractures or estimating bone age; bone age estimation in particular can already be done very well by AI. However, radiology involves more than just analyzing images. We also review physicians' requests, determine the best scan methods for each case, contextualize findings with the patient's complaints, and so on. In 20 years, we may find ourselves overseeing models and algorithms rather than doing everything ourselves, similar to how laboratory medicine has evolved, for example. However, for that to happen, models still need to evolve; sometimes, the difference between a healthy condition and a critical one comes down to a few pixels in an image and some context in the patient's history.
When we met before the interview, you mentioned that various AI systems can produce different outcomes. What’s the issue here?
The issue boils down to technical details. Take lung nodule detection, for example. A model might perform well on certain datasets, but in the institution where I work, variables like reconstruction algorithms, slice thickness, or CT tube current could be different, introducing subtle noise the AI wasn't designed for. Radiological images inherently contain such noise because the measurements aren't perfect. This can sometimes influence the AI. As humans, we don't notice such details, but for AI, which relies on detailed pixel data, this noise can lead to significant differences. The effect is especially pronounced for small nodules, which, on the other hand, are often not that clinically relevant. The interplay of image characteristics, AI capabilities, and clinical reality creates real complexity.
Some imagine a radiologist’s job as simply analyzing an image and determining if there’s cancer or another condition. What does your work actually look like?
That's a great question. Much of what we do is driven by understanding the full picture: reading referring physicians' notes, checking lab results, and considering the patient's broader medical history. While AI might eventually incorporate all these factors, the complexity is enormous. All of this can make the difference when assessing, for example, a pancreatic lesion; similar image characteristics can imply very different clinical pathways in a young, healthy person compared to an elderly person with abnormal lab results. AI, lacking this kind of clinical awareness, might miss such nuances. It's not as binary as some may think. We don't just look at an image and make a definitive diagnosis. We consider the patient's condition, often talk to them, and determine the most likely diagnosis. Then, decisions for follow-up or treatment are made, and we see if we were correct. We don't always have definitive answers immediately.
So, in your work, collaboration with general practitioners and the medical team is essential, right?
Exactly. Collaboration is critical in medicine. As radiologists, we need meaningful clinical information from referring physicians because our reports are only as helpful as the information we receive. Without it, we might miss the point entirely. Ideally, we can take a good clinical question and provide an actionable answer based on the patient's situation. While today’s AI can indicate thousands of findings in a specific case, only one may truly matter. Modeling such contextual workflows for AI is extremely difficult, which may explain why we still have our jobs.
Do you look at the future of radiology with hope or concern that AI will disrupt your work?
I’m hopeful. While I might be skeptical about some current AI products, which don’t always align with our work, the history of technological advances in radiology has always led to patient benefits. AI, in the future, will be just another tool at our disposal. If AI can detect subtle changes I might miss, I’m more than happy to use it. It will allow us to provide more actionable information as radiologists. And I'm perfectly happy being replaced for tedious tasks like bone age estimation from X-rays of the hands—AI can do those better, faster, and more accurately.
What is AI to you—just a tool or more like an assistant? I ask because the approach also determines the level of trust.
That depends. I see AI as both a tool and an assistant. As you said, trust is crucial, along with managing information overload and avoiding automation bias. For every AI system we use, we must ask: Does it truly help? Does it address a real issue? And how do we interact with it to ensure we can trust it? The level of trust then depends on the scenario. For some tasks, like bone age estimation or heart segmentation, it's easy to verify the AI's output. But for others, such as predicting treatment responses, we have to rely on scientific evidence because the output isn't easily verifiable. Trust and human-AI interaction are nuanced. If the collaboration doesn't work well, AI doesn't help, and we are already seeing cases where an AI was withdrawn from clinical use because it simply wasn't helpful in practice, despite initial high expectations.
There’s a growing trend in the market for full-body MRI screenings. Recently, it gained significant attention when Daniel Ek, the CEO of Spotify, opened the first clinic of its kind in London. Does conducting such screenings for healthy individuals as a preventive measure make sense?
I'm not a fan of these screenings. If you're healthy, the likelihood of finding something serious is very low, but the chances of finding something insignificant are extremely high. This can lead to unnecessary fear and a cascade of diagnostic tests, some of which can become quite invasive. On the other hand, some diseases can develop while remaining hidden from MRI screening. Lastly, in a recent case, a radiologist was sued for having missed a relevant finding in this health screening setting, which could further incentivize overdiagnosis as a defense against lawsuits. From my perspective, these whole-body MRI screenings sell a promise that can't be kept in real life. In the best case, they confirm everything is okay or, very rarely, find something relevant; in the worst case, they make you "very sick until proven otherwise" when in fact you were healthy all along.
I’d rather support evidence-based screening programs, like mammography, colonoscopy, or prostate exams, where there’s clear evidence that screening is beneficial. But whole-body MRI without specific indications? I’d advise against it. A healthy lifestyle is a far better preventive measure than an MRI scan that might find harmless anomalies.
What does the future of radiology look like to you?
If I could make a wish, next-generation radiology would involve making the data we already generate more usable. Our reports are currently unstructured text, which limits their utility. We need to digitize this information in a way that makes it usable and accessible. The discussions around the European Health Data Space (EHDS) are a step in the right direction.
We must also integrate more AI in meaningful—and mostly non-diagnostic—ways. For now, radiologists are pretty good at making a diagnosis, but AI can assist in things like quantifying organs, analyzing body composition, or estimating organ function. There are many opportunities to advance radiology and enhance the value we provide.