How a Swiss hospital uses LLMs to make the most of its data

Monday, January 13, 2025

"The start was challenging, but now doctors are happy working with Large Language Models (LLMs) in our hospital," says Bram Stieltjes, MD, PhD, Head of Research and Analytic Services at the Department of Radiology, University Hospital Basel (Universitätsspital Basel). In an interview, we discuss how to integrate LLMs and other technologies in hospitals to enhance workflows and reduce administrative burdens.

Hospitals – including University Hospital Basel – rely on numerous IT applications from different vendors. How do doctors navigate such a complex digital ecosystem?

It’s increasingly becoming a challenge. Doctors often need to juggle multiple apps, manually copy-pasting information between systems. We’ve tried to streamline this by enabling all applications to launch from a single framework, but it’s merely a workaround. This fragmentation hampers the overall view of a patient, creating inefficiencies that impact care delivery.

How are you working towards better integration?

Currently, like many hospitals, we use messaging standards like HL7v2 to integrate data—but it’s not true integration. These messages don’t provide a comprehensive or historical view. To address this, we recently completed a public tender for an open data platform. Starting January 2025, we will push vendors to adhere to open standards on this platform. This shift is a step toward a unified digital environment.
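
To make the limitation concrete, here is a minimal Python sketch with a made-up example message (the field values are invented, and no HL7 library is assumed): each HL7v2 message is a pipe-delimited snapshot of a single event, which is why message feeds alone don't add up to a comprehensive or historical view of the patient.

```python
# Minimal sketch: parsing a made-up HL7v2 ADT message with plain string splitting.
# Each message is a pipe-delimited snapshot of one event (an admission, a lab result),
# not a longitudinal patient record.

raw_message = (
    "MSH|^~\\&|ADT_APP|USB|RECEIVER|USB|202501130830||ADT^A01|MSG00001|P|2.5\r"
    "PID|1||123456^^^USB^MR||Muster^Anna||19700101|F\r"
    "PV1|1|I|MED^101^1|||||||||||||||V2025001\r"
)

def parse_hl7v2(message: str) -> dict[str, list[list[str]]]:
    """Split an HL7v2 message into segments and fields (repetitions/components ignored)."""
    segments: dict[str, list[list[str]]] = {}
    for line in filter(None, message.split("\r")):
        fields = line.split("|")
        segments.setdefault(fields[0], []).append(fields)
    return segments

if __name__ == "__main__":
    parsed = parse_hl7v2(raw_message)
    pid = parsed["PID"][0]
    # Field positions are fixed by the standard: PID-3 is the patient identifier,
    # PID-5 the name -- but only for this one event, with no history attached.
    print("Patient ID:", pid[3].split("^")[0])
    print("Name:", pid[5].replace("^", " "))
```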

At the conference “AI in Health” you mentioned that doctors spend up to 50% of their time on electronic health records (EHRs). Is the situation improving or worsening?

Without significant changes, it’s only going to get worse. The growing number of disconnected applications makes it harder to get a complete view of the patient. That’s why we’ve adopted a new strategy for clinical applications, aiming for standardized data definitions to create seamless clinical workflows. Beyond hospitals, our health directorate is exploring regional and national standards to tackle this issue on a larger scale.

What are the main pain points with EHRs?

In Switzerland, the issue is foundational—we don’t even have a unified EHR system. Each hospital and system operates in isolation, making it impossible to achieve a condensed, cohesive view of a patient’s medical history, let alone integrate data from external sources like general practitioners or other hospitals.

You’ve started using large language models to summarize data and draft clinical letters. How did this begin?

It started about a decade ago in radiology, with the introduction of machine learning. We built a team of engineers to develop imaging-based algorithms. As demand grew, we realized that fragmented clinical landscapes hinder progress. This inspired our open standard initiative. Later, we experimented with natural language processing for clinical letters, but results were limited. The advent of LLMs changed that. Grounded in structured data from our data warehouse, LLMs now produce fewer errors such as hallucinations and scale efficiently.
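
To illustrate that grounding step, here is a minimal sketch; `fetch_patient_records` and `call_llm` are hypothetical placeholders (the interview does not describe the actual data model or model-serving stack in Basel), and the clinical values are fictional example data. The idea is that the prompt is built from structured warehouse fields rather than free text, which narrows the room for hallucinated findings.

```python
# Minimal sketch of grounding an LLM summary in structured warehouse data.
# fetch_patient_records and call_llm are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class Record:
    date: str
    source: str      # e.g. "Radiology report", "Lab result"
    finding: str     # already-structured finding, not free text

def fetch_patient_records(patient_id: str) -> list[Record]:
    # Placeholder: in practice this would query the hospital data warehouse.
    # The values below are fictional example data.
    return [
        Record("2024-11-02", "Radiology report", "3 cm lesion in liver segment VI"),
        Record("2024-12-10", "Lab result", "CEA 12.4 ug/L (elevated)"),
    ]

def build_grounded_prompt(records: list[Record]) -> str:
    # Every statement in the prompt is traceable to a structured record.
    lines = [f"- {r.date} | {r.source}: {r.finding}" for r in records]
    return (
        "Summarize the following structured findings for a tumor board. "
        "Use only the facts listed below and cite each by date.\n" + "\n".join(lines)
    )

def call_llm(prompt: str) -> str:
    # Placeholder for whichever model endpoint is deployed (cloud or on-premises).
    raise NotImplementedError

if __name__ == "__main__":
    prompt = build_grounded_prompt(fetch_patient_records("hypothetical-id"))
    print(prompt)
```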

What results have you seen so far from implementing LLMs?

A significant application is preparing multidisciplinary board discussions, such as for oncology or spinal surgery. LLMs can summarize a pile of reports and cross-check them against multiple guidelines—American, European, national, or hospital-specific. This capability streamlines decision-making. Another practical use is drafting letters for insurance approvals. Tasks that previously took 20 minutes per patient can now be completed in one or two minutes with LLM support.

What advice would you give hospitals starting to implement LLMs? What challenges should they be ready for?

A key challenge is hardware. Many hospitals lack the infrastructure to run LLMs. If you rely on cloud solutions, you’ll need a robust cloud strategy. For on-premises setups, you need personnel skilled in managing such environments. Another critical factor is engaging clinicians. Doctors are often overworked, so you must allocate resources to involve them meaningfully in the implementation process. Their feedback is vital, but it’s also crucial not to disappoint them with early, underwhelming iterations.

How do you identify the right clinicians to work with on these innovations?

It’s important to find clinicians genuinely interested in implementation. Many are curious about AI but driven by secondary motivations, such as publishing papers, and may lack the persistence for iterative development and the collaboration required to ensure usability. We prioritize those committed to improving workflows and patient care. Once these innovations prove effective, even initially skeptical colleagues tend to adopt them.

Beyond LLMs, what other AI applications are you exploring at University Hospital Basel?

In radiology, some AI solutions are already part of routine practice. We’ve also implemented AI in administrative areas like billing. Legal contract review is another emerging area. Once our infrastructure stabilizes, I foresee AI applications spreading across all hospital functions, from clinical tasks to operations.

Speech-to-text technologies integrated into electronic health records are gaining traction in the U.S. Do you see this as a game-changer?

Absolutely. Voice-based systems can make structured documentation the norm, transforming clinical letters and other documents into byproducts of structured data rather than the reverse. We’ve already developed similar systems in radiology, where text and structured data remain synchronized. Widespread adoption could eliminate the need for doctors to type, allowing them to focus more on patient interactions. In settings like operating rooms, voice systems are even more critical, as typing isn’t practical.
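
A minimal sketch of that "documents as byproducts of structured data" idea: the structured report is the source of truth and the narrative text is rendered from it, so the two cannot drift apart. The field names and report template below are illustrative only, not the hospital's actual schema.

```python
# Minimal sketch: the structured report is the source of truth; the letter text
# is rendered from it, so text and structured data stay synchronized.
# Field names and template are illustrative assumptions.

from dataclasses import dataclass, asdict
import json

@dataclass
class RadiologyFinding:
    organ: str
    observation: str
    size_mm: int | None = None

@dataclass
class StructuredReport:
    modality: str
    findings: list[RadiologyFinding]
    impression: str

def render_letter(report: StructuredReport) -> str:
    """Render human-readable text from the structured report (one-way, always in sync)."""
    lines = [f"{report.modality} report", "", "Findings:"]
    for f in report.findings:
        size = f" ({f.size_mm} mm)" if f.size_mm is not None else ""
        lines.append(f"- {f.organ}: {f.observation}{size}")
    lines += ["", f"Impression: {report.impression}"]
    return "\n".join(lines)

if __name__ == "__main__":
    report = StructuredReport(
        modality="CT abdomen",
        findings=[RadiologyFinding("Liver", "hypodense lesion, segment VI", 30)],
        impression="Lesion suspicious for metastasis; correlation recommended.",
    )
    print(render_letter(report))                 # narrative text for the letter
    print(json.dumps(asdict(report), indent=2))  # the same content as structured data
```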

Some hospitals are reluctant to implement AI due to strict data protection regulations. How have you addressed this?

One approach is using open-source LLMs on-premises, which mitigates security risks but requires substantial hardware investment. Synthetic data generation is another promising solution. It allows us to train models without relying on sensitive patient data, aligning with regulatory requirements. For inference, hosting models within hospitals can strike a balance between privacy and functionality.
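
For the on-premises option specifically, a minimal sketch using the open-source Hugging Face transformers library is shown below. The model name is only an example of an open-weight model, not necessarily what is run in Basel; the point is that prompts containing patient data never leave the hospital network.

```python
# Minimal sketch of on-premises inference with an open-source model via
# Hugging Face transformers (pip install transformers torch accelerate).
# The model name is illustrative; prompts with patient data stay on local hardware.

from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",  # example open-weight model, runs locally
    device_map="auto",                           # place on local GPU(s) if available
)

prompt = "Summarize for an insurance approval letter: <structured findings go here>"
result = generator(prompt, max_new_tokens=200, do_sample=False)
print(result[0]["generated_text"])
```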

If there were no financial constraints, what technologies would you prioritize over the next 3–5 years?

My vision is a central patient data model continuously updated by various sensors, from MRI scanners to doctor-patient conversations. This model would integrate all decisions and updates, enabling intuitive, real-time insights without the need for repeated discussions. Such a comprehensive system would redefine how we sense and decide in medicine, streamlining every aspect of care delivery.

Any final thoughts on the future of LLMs in hospitals?

We’re on the cusp of significant change. In January, we’ll finalize funding discussions to move from research to clinical implementation. The attention LLMs have garnered is driving enthusiasm for integrating these tools into daily practice. It’s an exciting time, and I’m optimistic about the transformative potential of this technology in healthcare.
