European doctors call for tighter rules for AI in healthcare

Monday, November 18, 2024

The Standing Committee of European Doctors (CPME) has released a policy on how artificial intelligence (AI) should – and should not – be applied in clinical practice. The document “Deployment of artificial intelligence in healthcare” approaches AI in healthcare with skepticism, advocating for stricter controls and robust ethical safeguards before the technology is fully embraced.

Good AI, bad AI

After reviewing the 11-page policy, it is clear that the CPME is cautious about AI, if not unenthusiastic. While the policy acknowledges AI’s potential benefits in healthcare, it emphasizes numerous concerns and conditions that must be addressed before the technology is fully integrated. Many of the proposals are entirely legitimate, but some, if implemented, would lead to overregulation.

The main emphasis is on risks rather than opportunities. The policy highlights challenges such as deskilling, automation bias, data privacy risks, and high costs. The CPME reiterates that AI should not replace doctors and that deploying AI must not mean disinvestment in other areas of the healthcare system, signaling a protective stance toward human expertise and existing healthcare priorities.

In some statements, the CPME seems to overreact to AI’s rising role. “Doctors should be free to decide whether to use an AI system, without repercussions, bearing in mind the best interests of the patient, and to retain the right to disagree with an AI system,” said CPME Vice President Prof. Dr. Ray Walley.

However, the policy also offers practical recommendations for adopting AI in healthcare and integrating it seamlessly into clinical practice. Some of the concerns it raises are well founded: “The majority of AI products available on the market are not certified by a third party, rendering them untrustworthy for healthcare applications,” or “AI systems must comply with medical ethics, data protection, and privacy rules, but there is currently no consensus on how to enforce these.”

Doctors are willing to use AI, but…

The CPME policy highlights several barriers to the widespread use of AI in healthcare: the fragmented and dynamic nature of the sector, a proliferation of uncertified AI tools, a lack of confidence stemming from unverified data sources, high infrastructure and maintenance costs, and the limited time healthcare professionals have to explore emerging technologies.

Dr. Christiaan Keijzer, CPME President, said, “AI products should be seamlessly integrated into the healthcare information system. We must avoid situations where they function as standalone tools requiring healthcare providers to manually input the same information across different systems. This is inefficient and causes frustration and administrative burnout.”

AI’s opacity (the “black box” problem) also poses ethical dilemmas, as healthcare providers struggle to understand and trust AI’s outputs. Doctors are cautious about sharing sensitive patient data for AI training, fearing data breaches or violations of the GDPR and, more recently, the EU AI Act. These challenges have left many medical professionals hesitant to fully embrace AI in their clinical practice.

Some recommendations are valuable and deserve policymakers’ attention, while others are generic and add little new to the AI debate. For example, the CPME calls for AI systems that prioritize the real-world needs of healthcare providers and patients over technological novelty, as if those two objectives could not go hand in hand.

AI tools should be embedded into existing clinical pathways and hospital systems rather than operating as standalone solutions. "AI must be designed with healthcare's unique complexities in mind, not imposed as a one-size-fits-all solution. Its purpose should be to enhance clinical decision-making, not to distract from it," according to Dr. Keijzer.

“Healthcare professionals cannot become the ‘scapegoat’ of AI systems malfunction”

The CPME calls for rigorous certification standards to address AI's ethical and practical challenges. Certification should encompass cybersecurity, data privacy, and bias mitigation. Transparent training data and model performance details should be shared to build confidence among healthcare providers.

The policy calls for ongoing evaluation: “Once deployed, the AI benefit should be continuously observed and measured. A large-scale, long-term scientific study on the impact of AI in healthcare should be pursued to consider, for example, doctor deskilling, medical education and training, diagnosis and treatment decisions, and the impact of AI-generated or influenced data on the training of next-generation AI models in medicine.”

The policy underscores the need for compliance with medical ethics and data protection regulations, including the EU's Artificial Intelligence Act (AIA). Anonymization technologies are essential to protect patient privacy when using AI for training and decision-making. Doctors must also retain autonomy in choosing whether to rely on AI recommendations, ensuring they can act in their patients' best interests.

"Trust is the cornerstone of effective AI adoption in healthcare," said Prof. Dr. Ray Walley, CPME Vice President. "Doctors need clear guidelines on their responsibilities when using AI, alongside assurances that they won’t be scapegoated for system errors."

Digital literacy is key to AI adoption

A significant part of the policy focuses on equipping healthcare professionals with the knowledge and skills to effectively use AI. The CPME recommends incorporating AI literacy into medical education and providing regular professional development opportunities. Training programs should address common misconceptions about AI, ensuring clinicians view it as a tool to complement, not replace, their expertise.

The policy also warns against "automation bias," where doctors might overly rely on AI-generated recommendations without critical evaluation. Measures must be implemented to maintain clinicians' decision-making skills and foster a culture of continuous learning and critical thinking.

A European approach is needed

The CPME also calls for publicly funded research initiatives focused on the long-term impact of AI in healthcare, including its effects on medical training and practice. It advocates clear liability regimes that hold developers and deployers accountable, relieving individual doctors of the burden of AI-related errors. Insurance solutions for high-risk AI systems could further alleviate concerns and foster adoption.

As healthcare systems across Europe grapple with rising demands and resource constraints, the CPME’s policy offers a comprehensive framework to harness AI’s potential while safeguarding ethical principles and patient trust. “European doctors stress the importance of publicly coordinated efforts to establish knowledge environments of sufficient scale and clinical expertise within national settings. This coordination is crucial to support sustained AI research collaboration at both the EU and national levels,” according to Dr. Keijzer.

The CPME’s full policy, “Deployment of artificial intelligence in healthcare,” is available to download.