2023 in review: AI is everywhere at once, but still not in healthcare

Wednesday, December 20, 2023
AI
News

ChatGPT is more empathetic than a doctor

2023 brought some surprising AI-related research. One study in particular sparked a wave of discussion: "Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public Social Media Forum," published in April 2023 in JAMA Internal Medicine. It found that ChatGPT's responses to patients' questions were rated empathetic nearly ten times more often than human doctors' responses, and rated high-quality 3.6 times more often.

In the fall of 2023, another widely covered story fueled the fire: over three years, 17 doctors had been unable to diagnose a 4-year-old boy with the rare tethered cord syndrome. The condition was finally identified by ChatGPT after the boy's desperate mother entered his symptoms and test results into the large language model, which launched in late 2022.

Both cases suggest that generative AI can help fill gaps in patient care and serve as a second pair of eyes for the doctor. Healthcare suffers from a shortage of professionals, and AI support could be long-awaited help in handling simple clinical cases that do not require a doctor's direct involvement. ChatGPT has access to far more knowledge than any single doctor, who is limited by the cognitive capacity of the human mind. AI and the doctor can form a harmonious tandem in which each party contributes different but complementary competencies.

ChatGPT can diagnose patients, suggest treatments, make preventive recommendations, and tell patients how to eat healthily or avoid disease. But AI can also hallucinate: it can make up facts. Thus the doctor – like an experienced pilot in an aircraft guided by modern navigation systems – must always remain at the controls.

Even more accurate AI models for medicine can be expected. For example, Med-PaLM – a large language model (LLM) developed by Google – passed the US medical licensing exam, answering about 83% of questions correctly; some studies report scores as high as 90%.

“The development of AI is as fundamental as the creation of the personal computer. It will change the way people work, learn, and communicate – and transform healthcare,” writes Bill Gates in the foreword to the book “The AI Revolution in Medicine: GPT-4 and Beyond.” The book's authors tested GPT-4's capabilities in healthcare and concluded that AI could be a perfect co-pilot for doctors. The technology is there.

Is it the end of the EHR as we know it?

Doctors hate entering data into electronic health records (EHRs) by clicking, typing, and opening tabs. Working with the EHR as we know it today is time-consuming and frustrating. There is a chance this will soon change.

In 2023, the first health IT vendors introduced AI-based solutions to electronic documentation. For example, next-generation EHRs based on speech-to-text systems "listen" to the doctor's conversation with the patient, recognizing data that are then automatically entered into the appropriate fields of the record. The doctor only approves the AI-prepared summary at the end. Similarly, a prescription or referral can be issued by voice command, and the system can be asked to write a summary of the visit for the patient in understandable language. Microsoft has partnered with Epic, a leading provider of electronic health record software in the US, to bring LLMs into doctors' offices; Google announced a partnership with the Mayo Clinic; and Amazon Web Services launched HealthScribe, a generative AI service for clinical documentation.
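The workflow these vendors describe can be reduced to three steps: transcribe the conversation, extract a structured draft note, and hold it until the clinician signs off. The sketch below illustrates that shape only; real products pair automatic speech recognition with an LLM, while here a toy keyword matcher stands in for the language model, and all function and field names are illustrative, not any vendor's API.

```python
# Toy sketch of an ambient clinical documentation pipeline:
# transcript turns -> draft note -> explicit clinician approval.
# A keyword set stands in for the LLM-based extraction step.

SYMPTOM_KEYWORDS = {"headache", "fever", "cough", "fatigue"}

def extract_draft_note(transcript: list[str]) -> dict:
    """Turn raw conversation turns into a draft note for review."""
    symptoms = set()
    for turn in transcript:
        words = {w.strip(".,!?").lower() for w in turn.split()}
        symptoms |= words & SYMPTOM_KEYWORDS
    return {
        "symptoms": sorted(symptoms),
        "status": "draft",  # nothing enters the record yet
    }

def approve(note: dict, clinician: str) -> dict:
    """The clinician stays at the controls: explicit sign-off."""
    return {**note, "status": "final", "approved_by": clinician}

visit = [
    "Patient: I've had a fever and a cough for three days.",
    "Doctor: Any fatigue?",
    "Patient: Yes, a lot of fatigue.",
]
draft = extract_draft_note(visit)
record = approve(draft, clinician="Dr. Smith")
```

The design point the sketch preserves is the one the article stresses: the AI only ever produces a draft, and a human approval step sits between the model's output and the medical record.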

The promise is that doctors will no longer have to click through several tabs to check a patient's medical history. They will only need to ask the AI to summarize the most critical data. AI can even analyze non-standardized notes that today are often forgotten and rarely reread, reviewing gigabytes of data in seconds and preparing a summary that would take a doctor hours.

AI is also slowly filling the gap in patient care. For example, a chronic patient sees a doctor for about 15 minutes every few months—15 minutes out of the 43,200 minutes in a 30-day month. The remaining 43,185 minutes are the time patients spend alone with the disease: their concerns, doubts, questions about the results of new tests, and dilemmas about whether treatment is on track.

During this time, AI-powered chatbots can act as mentors and advisors available 24 hours a day. An example is Dave, described as the first AI mentor for cancer patients.

(Gen)AI in healthcare: unofficially used, officially denied

Many doctors are already experimenting with ChatGPT, but in most cases unofficially. According to the report “Dr. AI,” published in Medical Economics, more than 1 in 10 healthcare professionals use AI, and almost half of respondents intend to adopt it in the future. Those who tried AI often changed their minds: 95% came away with a more positive view. And one in four Americans say they are more likely to talk to an AI chatbot than to attend therapy.

“While it’s right to remain skeptical about these innovations, technology has upended medicine for all of human history, so paying attention to the role of such emerging technology is crucial for physicians to stay on top of their field,” summarize the authors of the Medical Economics report.

The bad news: it will take years for tools like ChatGPT or large medical language models to become certified. By October 2023, the U.S. Food and Drug Administration (FDA) had cleared 692 devices using AI/ML algorithms – and none based on generative AI. Healthcare must rethink how it validates medical devices and ensures patient safety while taking advantage of new technologies.

At the end of 2023, it’s becoming clear that generative AI will be a powerful technology impacting our lives. Thus, it must be regulated. President Biden issued an executive order on Safe, Secure, and Trustworthy Artificial Intelligence, and in Europe, agreement was reached on the EU AI Act. Critics say that instead of driving the development of ethical AI in Europe, the Act will cause more confusion – it’s unclear how companies should implement the new rules.

Focusing merely on regulation will not help. What's needed is a roadmap for AI in medicine based on the trade-off between the vast capabilities of tools like ChatGPT and the risks they carry. Every medical device, every medical procedure, every drug has risks. The same applies to AI in medicine: we need to recognize the dangers, weigh them against the potential benefits, and then create guidelines for minimizing the potentially harmful impacts.

We cannot afford not to use artificial intelligence: that would stall innovation, leave doctors buried under growing administrative burdens, and cost patients the health benefits at stake.