A guide to implementing AI in healthcare amid the EU AI Act

Wednesday, March 27, 2024

The good news is that the EU AI Act imposes most obligations on developers of artificial intelligence solutions, not end users. Developers must follow the rules defined for four AI risk categories: unacceptable, high, limited, and minimal risk. In healthcare, high-risk solutions will include, for example, AI applications in robot-assisted surgery or clinical decision support systems. High-risk AI systems are strictly regulated and must undergo rigorous assessment and risk-mitigation processes.

In practice, this means that AI-driven solutions for clinical applications will require mandatory CE marking – they will become medical device-grade solutions. Developers will be obliged to deliver standardized instructions on how to use AI innovations safely. As a result, healthcare facilities should find it easier to verify the safety of AI solutions, which until now have often been hard to evaluate and deploy due to a lack of guidelines.

Regulatory Landscape and User Obligations

Of course, many rules relevant to AI are already included in other legislation. Manufacturers of AI systems that qualify as medical devices or in vitro diagnostic medical devices, and that already comply with the MDR or IVDR, meet many of the requirements the EU AI Act introduces; they are therefore in a better position than manufacturers of AI that has not been regulated to date. Importantly, the EU AI Act addresses issues that haven't been covered so far, such as the explainability of AI and human oversight.

The new legislation also imposes some obligations on end users who are healthcare professionals. They will be required to use AI-enabled devices in accordance with the documentation delivered by the manufacturer, and to report observed risks, serious incidents, and errors. Analogous reporting procedures already exist for medical devices and adverse drug reactions.

And what about ChatGPT, which some physicians already use unofficially? Although generative AI models will not be categorized as high-risk, they must adhere to transparency requirements and EU copyright law. Nor are they classified as "unacceptable" (forbidden) solutions. In theory, healthcare professionals are allowed to use them, but they remain responsible for all the consequences of their decisions.

The EU AI Act is expected to be published in May or June 2024. From then on, AI developers will have two years (or three years in the case of high-risk systems) to adapt to the new requirements. And while the EU AI Act will help healthcare organizations choose safe AI solutions, it provides no guidance on assessing the value of such innovations; healthcare providers must develop their own standards and procedures for that.

Steps to Follow When Adopting AI

The decision to implement AI solutions begins with analyzing the business case. AI is a tool like any other – its purpose is to ensure that people's work, skills, and time are used more effectively, efficiently, and safely. Therefore, any AI system needs to be evaluated in terms of the value it delivers.

Another crucial aspect to consider is integration with existing systems. Implementing an AI tool should involve integrating it with the facility's current IT ecosystem, including the Electronic Medical Record (EMR) system, the Hospital Information System (HIS), and other tools and platforms in use. The responsibility for integration should be clearly defined in the vendor contract.
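
To make this concrete, here is a minimal sketch of what one such integration point might look like, assuming the EMR exposes an HL7 FHIR REST API. The base URL, access token, and patient identifier are hypothetical placeholders, not a specific vendor's interface:

```python
import requests

# Hypothetical FHIR endpoint of the facility's EMR; a real deployment
# would use the vendor-provided base URL and an OAuth2 access token.
FHIR_BASE = "https://emr.example-hospital.org/fhir"
ACCESS_TOKEN = "REPLACE_WITH_REAL_TOKEN"

def fetch_patient_observations(patient_id: str) -> list[dict]:
    """Retrieve laboratory observations for one patient, e.g. as input
    to an AI decision-support tool."""
    resp = requests.get(
        f"{FHIR_BASE}/Observation",
        params={"patient": patient_id, "category": "laboratory"},
        headers={
            "Authorization": f"Bearer {ACCESS_TOKEN}",
            "Accept": "application/fhir+json",
        },
        timeout=30,
    )
    resp.raise_for_status()
    bundle = resp.json()
    # FHIR search results come back as a Bundle; unwrap the resources.
    return [entry["resource"] for entry in bundle.get("entry", [])]
```

Even in a sketch like this, the contract question is visible: who maintains the endpoint, the authentication flow, and the mapping between EMR fields and the AI tool's inputs.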

The next question is which deployment model to choose: on-premises or cloud-based. The two differ significantly, and each comes with its own consequences. In the on-premises model, the AI runs on the facility's own infrastructure, whereas in the cloud model it runs on external servers operated by a third-party provider. For healthcare providers based in Europe, cloud infrastructure should generally be located in the European Economic Area (EEA) to satisfy data protection requirements. The future of healthcare undeniably lies in the cloud: some cloud providers are already introducing generative AI solutions that enable, for example, easier retrieval of information from electronic medical records.

As discussed, regulatory issues must be thoroughly analyzed amid the EU AI Act. If the AI falls under the category of a medical device (AIMD) or an in vitro diagnostic medical device (AIIVD), it must meet the relevant regulatory requirements, particularly the Medical Device Regulation (MDR) for AIMD or the In Vitro Diagnostic Regulation (IVDR) for AIIVD. It is the responsibility of the technology manufacturer to meet these obligations, but the healthcare organization must double-check that it is using a tool that complies with these requirements.

Vendor Compliance and User Obligations

Moreover, the healthcare facility must ensure that cooperation with the vendor allows it to meet the obligations that fall directly on the facility, above all those under data protection laws such as the General Data Protection Regulation (GDPR) and the Patients' Rights Act. Since the healthcare facility is the custodian of patients' personal data and maintains their medical records, it must fulfill its obligations under these laws, which often requires cooperation from the technology provider.

As mentioned before, the EU AI Act will also impose certain obligations on users of AI in healthcare settings, such as ensuring adequate human oversight of AI operations and informing patients about how the AI works, which requires close cooperation with the technology provider. Additionally, the use of AI involves many specific questions. For instance, it is essential to determine whether patient data will be used to further train the AI algorithm and to establish the rules governing this process (What data is needed for further training? How will anonymization be performed?).
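
Where patient data is reused for training, one common building block is pseudonymization before records leave the facility. The sketch below is illustrative only, assuming records arrive as Python dictionaries; the field names and salted-hash scheme are assumptions, and a real pipeline would follow the facility's data protection impact assessment:

```python
import hashlib

# Direct identifiers to strip before records are used for training;
# this list is illustrative and would be defined by the facility's DPIA.
DIRECT_IDENTIFIERS = {"name", "address", "phone", "email"}

def pseudonymize(record: dict, salt: str) -> dict:
    """Drop direct identifiers and replace the patient ID with a salted
    hash, so training records can no longer be trivially re-linked."""
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    # Assumes each record carries a "patient_id" field.
    cleaned["patient_id"] = hashlib.sha256(
        (salt + str(record["patient_id"])).encode()
    ).hexdigest()
    return cleaned
```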

It is also vital to define the terms of technical support, debugging, and system updates. In the case of AI, however, updates that introduce a newer version of the AI model entail additional responsibilities for both the technology provider and the end user. For example, a new version of an AI model may behave differently on specific inputs. Such changes are evaluated during the validation of the new model version, but if they have clinical implications, they must be adequately communicated to users. In such cases, the technology provider should not only deliver updated information on system performance but also offer appropriate user training where necessary.
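
One practical way to surface such behavior changes is a regression check that compares the outgoing and incoming model versions on a held-out validation set. This sketch assumes both versions expose a numeric `predict` method and that validation cases carry IDs; all names are hypothetical:

```python
def compare_model_versions(old_model, new_model, validation_set, tolerance=0.0):
    """Flag validation cases where the new model version diverges from
    the old one, so clinically relevant changes can be reviewed and
    communicated to users before rollout."""
    diverging = []
    for case in validation_set:
        old_out = old_model.predict(case["input"])
        new_out = new_model.predict(case["input"])
        # Assumes numeric outputs (e.g., a risk score); categorical
        # outputs would need an equality check instead.
        if abs(new_out - old_out) > tolerance:
            diverging.append(
                {"case_id": case["id"], "old": old_out, "new": new_out}
            )
    return diverging
```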

The EU AI Act will bring compliant AI solutions to medical facilities. However, it also imposes obligations on those facilities, such as establishing procedures that guarantee healthcare professionals can intervene in the operation of high-risk AI systems, preventing autonomous decisions that could harm patients (the human-in-the-loop approach).
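
As an illustration, a human-in-the-loop gate can be as simple as refusing to act on a recommendation until a clinician explicitly confirms it. The following sketch assumes the AI system returns a recommendation object; the types and function names are illustrative, not a prescribed implementation:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Recommendation:
    action: str        # e.g., a suggested treatment step
    confidence: float  # model confidence score

def apply_with_oversight(
    rec: Recommendation,
    clinician_confirms: Callable[[Recommendation], bool],
) -> str:
    """Gate every high-risk AI recommendation behind explicit clinician
    review; the system never acts on the patient autonomously."""
    if clinician_confirms(rec):
        return rec.action
    return "escalated for manual decision"
```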