ChatGPT is still far from perfect, but there is already much discussion about its potential applications in medicine. Isn't it too early?
No! It's not too early. While it is true that AI language models like ChatGPT are still being developed and are not perfect, there is already promising research on their potential applications in medicine.
AI language models have shown potential for use in medical diagnosis, drug discovery, and patient care. For example, AI language models have been used to analyze medical imaging data to detect abnormalities or diseases, identify potential drug targets, and help develop personalized treatment plans for patients. Overall, while it may be early in the development of AI language models, the potential applications in medicine are already being explored and show promise for improving patient care and outcomes.
Do you think the hype around AI is justified?
We all look for inflection points in systems. For many—including medicine—we thought that COVID was that point. But my perspective is that the real inflection point is the introduction of LLMs and, in particular, GPT-3. The hype around AI has been fueled by the significant advancements and breakthroughs made in recent years, as well as the potential for AI to transform a wide range of industries and sectors. And the unique relevance and utility of these tools to the mainstream consumer has driven the hype even further.
Let's face it—for many, GPT models are magical. And as Arthur C. Clarke famously said, "Any sufficiently advanced technology is indistinguishable from magic." That's where we are with GPT. And the interesting aspect here is that "you ain't seen nothing yet!"
While AI has shown promise in areas such as image recognition, natural language processing, and autonomous systems, there are still areas where AI struggles, such as in understanding context, making subjective judgments, and dealing with incomplete or noisy data. Additionally, there are ethical considerations around the use of AI, including issues related to privacy, bias, and fairness.
That being said, there are many exciting and promising applications of AI that have the potential to benefit society in significant ways, from improving healthcare and education to advancing scientific research.
Advances in learning that will address many of these issues are evolving just as quickly as the struggles themselves—we often fail to see this.
You once wrote that people are more interested in the mistakes of artificial intelligence than in its potential. Are we afraid of AI? Rightly so?
First off, and sadly, the majority of the public, including many professionals, sources its information on technology and AI from Hollywood. The dystopian perspective lives top of mind for many, and it's a powerful and resonant concept. So, we often let these "mistakes" drive ill-conceived notions that are fun for dinner conversations and generic conference panels. As we know from the press: if it bleeds, it leads. That's the basis for much of the clickbait and fear-mongering associated with AI.
Now, there's a flip side to this, and it has been championed by Elon Musk. AI certainly may pose an existential threat, particularly as it evolves toward Artificial General Intelligence (AGI). So, there are practical and important guardrails that must be incorporated into the development of these systems. I've actually crafted a document to support this, titled
The Human Declaration Of Autonomy And Independence From Artificial Intelligence.
IBM Watson Oncology showed that doctors are unwilling to trust AI in decision-making. Does the same fate await the medical counterparts of ChatGPT?
Remember the Newton from Apple? Simply put, it came too soon, without the necessary processing power, memory, and connectivity to support market needs. I think IBM Watson was a similar failure. Watson was backed by top-tier marketing, including its famous Jeopardy! showdown with Ken Jennings, and powerful communications from major advertising agencies. So, the marketing exceeded the medicine, and that was the tragic flaw.
The reluctance of doctors to fully trust AI in decision-making, as demonstrated by the experience with IBM Watson Oncology, highlights the need for greater collaboration between AI and medical professionals. The success of AI in healthcare will depend on its ability to integrate with and augment the expertise of doctors, making clinicians integral to the model's learning.
Will GPT become another IBM Watson? Almost certainly not. And I can't wait for GPT to be a guest on Jeopardy! to take on Ken Jennings as well as clinical experts!
Don't you think people tend to romanticize and idealize AI?
I've said many times that innovation lives in the domain of wonder and fear. From fire to flight, these base human emotions—perhaps even limbic—are powerful and often romanticized.
At its core, innovation is about pushing boundaries and seeking new ways of doing things. It is this inherent drive for progress that propels us forward as a society, but it also carries with it great risk.
"Rule number one: Trust but verify."
We must learn to balance our sense of wonder at what innovation can achieve with our fear of the unknown and the unforeseen consequences that innovation can bring. And this balancing act is tricky. It's inherently disruptive and drives a psychopathology of reluctance and resistance. And this might be one of the greatest obstacles to digital transformation—in the home, the boardroom, or the hospital. AI is perhaps the greatest expression of this—and the bigger the wonder, the bigger the fear. It's both beautiful and tragic, and that is the essence of romance.
Some argue it's just another innovation, a better way to process massive data sets but not a breakthrough.
First, LLMs and GPT are far from "just another technology." Yes, they provide a sort of mechanical advantage, like the bicycle. But there's a fundamental difference. One powerful aspect of the GPT revolution extends beyond mechanical innovation; it provides a "cognitive advantage" that touches upon something long held sacrosanct and exclusively the domain of humanity. GPT has emerged as a cognitive catalyst, enabling us to think more expansively by presenting options that we might not have considered otherwise. By leveraging GPT's capabilities, we can tap into a vast pool of knowledge and perspectives—beyond our individual knowledge base—broadening our horizons and pushing us to explore new choices and solutions.
ChatGPT heralds the development of digital agents to advise us on health issues, among others. Would you trust such a system as much as a human being?
Yes. Ronald Reagan famously said, regarding the US relationship with the former Soviet Union, "Trust but verify." I think this might apply to the current state of AI and medicine. But this isn't a fixed reality. It's shifting very quickly: trust is growing, and the need for oversight or verification is lessening. The change is exponential, but I wonder if there's a threshold, or more of an asymptotic barrier. My guess is that trust will be well established in less than five years.
Please give 2-3 scenarios for the impact of AI on healthcare in the next five years.
Firstly, professional bandwidths will shift as AI augmentation lets various professionals do more with greater competence. We'll see that in many areas, from doctors to nurses to technicians. Further, this will change the staffing mix needed to provide optimized care.
Secondly, earlier and earlier diagnosis will emerge as AI plays a role in data interpretation and extrapolation.
Thirdly, a lot of data is simply unused and thrown away. We will see the emergence of "data ecology," where, for example, a CT of the chest will be used for more than just a single clinical target, such as a nodule or pneumonia. The acquired data will be automatically leveraged for many other conditions, from cardiac calcium scoring to vertebral analysis for osteoporosis.