Generative AI – like Google's Med-PaLM 2 – recently passed a medical licensing exam, answering 85% of the questions correctly. What does that mean for the future of healthcare?
I don't think it means very much at the moment. Diagnostic skills, for example, are influenced by many factors, where experience and so-called illness scripts still play a significant role. AI will not replace the doctor, even in the long term – at least not in many areas. But AI will support the doctor's work more and more, and it could also become a helpful partner for the patient. In clinical settings, LLMs can improve documentation and decision support. AI is not meant to replace doctors; it should support them in processes that machines can solve more accurately, while the tasks that doctors are better at remain theirs to perform. Algorithms can make suggestions, for example, about the extent to which a specific therapy should be adjusted. However, the decision on how these systems influence the doctor's actions ultimately lies with the doctor.
Radiologists, on the other hand, say they don't see themselves where they are today in ten years – I don't think they will be doing the same jobs in the future.
And what does this AI revolution mean for medical education? What and how should medical students be taught today?
No one can predict that – progress in breakthrough technologies is too rapid to make any forecasts. What does seem to be emerging, however, is the importance of prompting. If you are able to write good prompts, you can use ChatGPT & Co. efficiently at work.
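As a toy illustration of that point (mine, not from the interview): a prompt that states role, context, task, and output format explicitly tends to be more useful than an open-ended question. The sketch below assumes the OpenAI Python client; the model name, prompt wording, and API key setup are all illustrative.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A structured prompt: role, context, task, and output format are
# stated explicitly instead of asking a single open-ended question.
prompt = (
    "You are assisting a medical educator.\n"
    "Context: second-year students, topic: type 2 diabetes.\n"
    "Task: write three short multiple-choice questions.\n"
    "Format: question, options A-D, then the correct letter."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```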
Students should also be aware of how generative AI works. If you understand how ChatGPT works, you can assess its advantages and disadvantages; you have the skills to determine where this tool can be used well and where it cannot.
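To make "how ChatGPT works" concrete, here is a deliberately tiny sketch of the core mechanism – sampling the next token from a probability distribution over continuations, repeated until the text ends. All tokens and probabilities below are made up for illustration.

```python
# Toy illustration of the loop behind models like ChatGPT: repeatedly
# sample the next token, conditioned on the text so far.
import random

# Hypothetical next-token probabilities for a tiny vocabulary
next_token_probs = {
    "patient": {"presents": 0.6, "reports": 0.3, "denies": 0.1},
    "presents": {"with": 0.9, "to": 0.1},
    "with": {"fever": 0.5, "dyspnea": 0.3, "fatigue": 0.2},
}

def sample_next(token: str) -> str:
    """Pick the next token, weighted by its (made-up) probability."""
    options = next_token_probs[token]
    return random.choices(list(options), weights=options.values())[0]

text = ["patient"]
while text[-1] in next_token_probs:
    text.append(sample_next(text[-1]))
print(" ".join(text))  # e.g. "patient presents with fever"
```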
Another critical topic is the ethics of using new technologies. Students need to learn how to deal with AI technologies and when it is justified to use AI. Universities should already be teaching how machine learning and deep learning work, so that students can fully embrace these tools in the future.
Unfortunately, that is not the case at the moment. The reason is simple: a lack of experts means that students and teachers have to acquire the relevant skills on their own, outside the universities.
For lecturers, LLMs are very helpful for automated grading, developing compelling patient cases, lesson planning, and creating learning videos. ChatGPT can prepare entire scripts for learning videos, and personalized learning environments can be built around it. Students can use it as a teaching assistant.
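As a sketch of the grading use case (my illustration, not a tool mentioned in the interview): an LLM drafts a provisional score against a rubric, and the lecturer reviews it before anything is recorded. The `draft_grade` helper, model name, and rubric are hypothetical, and the block assumes the OpenAI Python client.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def draft_grade(question: str, rubric: str, answer: str) -> str:
    """Ask the model for a provisional score and a short justification.

    The result is a draft only; the lecturer reviews it before any
    grade is recorded.
    """
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{
            "role": "user",
            "content": (
                f"Question: {question}\n"
                f"Rubric: {rubric}\n"
                f"Student answer: {answer}\n"
                "Suggest a score from 0 to 10 and justify it briefly."
            ),
        }],
    )
    return response.choices[0].message.content

print(draft_grade(
    "Name two first-line drug classes for hypertension.",
    "5 points per correct drug class, max 10.",
    "ACE inhibitors and thiazide diuretics.",
))
```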
What is still being discussed is the question of whether medical students should also learn programming. Currently, I would say that this is not necessary, especially since the content in medical school is already extensive enough. But that may change in the future.
Given that the education system is very static while new technologies scale quickly, won't universities continue to train doctors utterly unprepared for future healthcare?
Maybe not unprepared. However, they might lack some skills and abilities.
Even without AI, we already have a problem with the digital skills gap. If a patient is familiar with digital tools and wants to apply them to manage a condition, the first point of contact is still the doctor. But if the doctor is unfamiliar with the topic, the patient will probably not use the technology either.
There are now many useful tools, for example ones developed to support patients with cardiovascular disease, diabetes, spine problems, and mental illness. Self-diagnostic tools can also make an essential contribution to improving the health of society in general. In Germany, a doctor can even prescribe certain health apps, but many people don't know about it.
The education system is far too slow at introducing these technologies into the curricula. An official syllabus on how to use ChatGPT 3 will probably arrive when ChatGPT 8 is out… I think it's positive that innovation puts pressure on the rigid education system and forces it to adapt; in the best case, this is the breeding ground for disruptive innovations. For example, one medical school did not ban ChatGPT but simply banned the types of exams that no longer make sense now that generative AI exists.
Assuming that ChatGPT is already quite good at diagnosing patients and this is just the prelude to AI's capabilities, shouldn't we consider new professions like AI medical experts?
Yes, I like the idea of thinking about new professions – that should be part of anticipating the implications of digital transformation. But before quickly inventing new domains, it would probably make more sense to build on existing skills and develop them further to meet the new challenges. Since AI is developing so rapidly, certificate programs and micro-credentials are probably the way to go: they allow a much more flexible response to the new challenges of the era of AI and emerging technologies.
Have you noticed the impact of ChatGPT on students yet?
Currently, students are still somewhat reluctant to use ChatGPT. Or, to put it another way: they are unwilling to talk about it. This actually has something to do with the fact that at the beginning of the discussion about ChatGPT & Co., we focused all our attention on the suspicion that students would only use these tools to cheat on exams. I am not in favor of such a view of AI. Whenever I talk about generative AI, I ask the audience if they use it: 40% of students and 50% of lecturers answer "yes."
What worries you, and what hopes do you associate with the development of generative AI in healthcare?
I am concerned about the many currently unaddressed issues surrounding AI: copyright, data protection (when patient data are used, how can an LLM access them without violating data protection rules?), legal questions (who is responsible for wrong information and for decisions based on an LLM's output?), and basic ethical and moral questions.
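On the data-protection point, one common mitigation – sketched here purely as an illustration, not as something proposed in the interview – is to mask obvious identifiers before any text leaves the clinic. The patterns below are naive; real de-identification requires far more than a few regular expressions.

```python
# Simplistic sketch: mask obvious identifiers before a note is sent to
# an external model. Real de-identification is much harder than this.
import re

PATTERNS = {
    r"\b\d{2}\.\d{2}\.\d{4}\b": "[DATE]",      # e.g. 01.02.1980
    r"\b[A-Z][a-z]+ [A-Z][a-z]+\b": "[NAME]",  # naive two-word names
    r"\b\d{5}\b": "[ZIP]",                     # five-digit postal codes
}

def pseudonymize(note: str) -> str:
    """Replace matched identifiers with placeholder tags."""
    for pattern, tag in PATTERNS.items():
        note = re.sub(pattern, tag, note)
    return note

note = "Max Mustermann, born 01.02.1980, lives in 80331 Munich."
print(pseudonymize(note))
# -> "[NAME], born [DATE], lives in [ZIP] Munich."
```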
There is also a real danger that educational inequalities will be further exacerbated. We already have students who use large language models to improve their scores and, on the other hand, students who are not as good at this and fall further behind. That's why I'm against allowing its uncontrolled use in education.
Another fundamental question that concerns me is: are we driving digitization or being driven by digitization? We're just reacting to the plethora of new AI tools and figuring out how to use them without questioning their relevance and usefulness.
Nevertheless, I stay calm when it comes to technological progress. The mostly unfounded fear of new technologies is nothing new. I like to repeat an anecdote: in the early days of railroad construction, there was a theory that human organs would disintegrate if people moved faster than a horse could run…
Books and television partly feed these irrational fears; AI is almost always portrayed there as a threat. AI expert Jürgen Schmidhuber has debunked this idea, saying that we don't have to worry about AI taking over because humans are simply not important enough to be worth taking over.
Primarily, though, I see AI development as positive. It will help us address current challenges in the healthcare system, such as the shortage of skilled professionals. AI can take over many repetitive workloads, freeing time for more demanding and complex activities. In the best-case scenario, AI can unlock more time for communication between patients and healthcare professionals and within care teams. Unfortunately, most medical errors currently occur due to a lack of communication.
Have medical universities changed how they teach – for example, since the COVID-19 pandemic?
Universities' goal should be to develop learning environments based on the evidence-based findings of learning research, not on didactic concepts from the Middle Ages. In surveys, students repeatedly call for the abolition of traditional lectures, but these calls go unheard. Unfortunately, the hope that the pandemic would advance digital teaching has not come true.
At many universities, everything runs as it did before the COVID-19 crisis. The so-called New Normal is barely noticeable, even though many good digital teaching concepts were developed and tested. Students and faculty also improved their digital skills during the pandemic, but those new skills are fizzling out unused.