FDA Action Plan for AI in medical software

The US Food and Drug Administration (FDA) has published an action plan for artificial intelligence/machine learning-based Software as a Medical Device (SaMD). What do these regulations concerning AI/ML in software mean?

The Action Plan is a direct response to stakeholder feedback on the April 2019 discussion paper, "Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning-Based Software as a Medical Device," and outlines five actions the FDA intends to take.

Information systems are used to collect and share information. They organize data, analyse it, support clinical decision-making and provide therapeutic or preventive guidance. When enhanced with artificial intelligence algorithms, these systems form a new generation of medical devices, capable of making inferences from the data, suggesting treatments and calculating health indicators. They are no longer the human-controlled digital medical records we have been using for years but sophisticated medical devices that shape how care is provided. They therefore need to be regulated to guarantee patient safety.

For conventional devices, such as blood pressure monitors, ultrasound or X-ray machines, and for drugs, the matter is simple. Manufacturers submit documentation proving effectiveness, precision and safety, as demonstrated by clinical studies and tests, and the regulator reviews it. If the product is approved for the market and no new facts (defects, side effects) come to light during use, the product can remain certified. For AI-based software used in healthcare, it is no longer that simple. Such systems are constantly being updated, which means that their functionality changes; updates, at least, can still be monitored. Algorithms, however, make decisions based on the data available to them. The data differs in every case, so the diagnosis, prognosis and classification of the patient's condition will also vary from case to case. And without knowing why an AI system made the decision it did (the 'black box' problem), oversight becomes almost impossible.

FDA takes up the challenge

Published by the FDA in January, the ‘Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan’ is an attempt to adopt a holistic approach to AI-based IT systems, one involving product lifecycle-based regulatory oversight. The plan outlines the five steps the FDA intends to take:

  • Further developing the proposed regulatory framework, including through issuance of draft guidance on a predetermined change control plan (for software that learns over time);
  • Supporting the development of harmonized Good Machine Learning Practice (GMLP);
  • Fostering a patient-centred approach, including device transparency to users;
  • Supporting regulatory science efforts to develop methods for evaluating and improving machine learning algorithms, including identifying and eliminating bias; and
  • Advancing real-world performance monitoring pilots.

The 7-page FDA document contains a set of plans that will undergo further consultation. The number of AI algorithms incorporated into medical software is snowballing, from mobile apps for patients to sophisticated hospital systems that plan surgical procedures or suggest evidence-based therapies to physicians.

Legislative monster

The FDA refers to “Good Machine Learning Practice,” which focuses on data management, extracting features from data sets, interpreting results, etc. This approach is akin to the already existing good practices in software engineering or quality systems. Its goal is to guide the industry towards the expected standards for developing AI solutions and make it easier to oversee them. So far, the standards have not been consolidated into a single document or given legal force. This is another challenge, given that the term ‘AI regulation’ involves technical as well as ethical and security issues, which are covered in many different legal acts.
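One concrete example of the kind of data-management check such good practices call for is guarding against data leakage between training and test sets. The sketch below is purely illustrative (the function name and patient IDs are hypothetical, not from the FDA document): it verifies that no patient appears on both sides of the split, which would inflate reported performance.

```python
# Hypothetical sketch of one Good Machine Learning Practice check:
# verifying that no patient appears in both the training and test
# sets, so reported accuracy is not inflated by data leakage.

def check_patient_level_split(train_ids, test_ids):
    """Return the set of patient IDs that leak across the split."""
    return set(train_ids) & set(test_ids)

train = ["p001", "p002", "p003"]
test = ["p004", "p002"]          # "p002" leaks into the test set

leaked = check_patient_level_split(train, test)
print(sorted(leaked))  # → ['p002']
```

An empty result means the split is clean at the patient level; a non-empty one flags records that must be reassigned before evaluation.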

Moreover, AI solutions require completely new evaluation criteria: transparency, precision and probability (when making a diagnosis), equal treatment and non-discrimination. Transparency alone, i.e., knowing why the algorithm made one decision rather than another or how the inference was made, is often impossible to achieve with current technology. Such difficulties are inherent to machine learning. The data used to train artificial intelligence algorithms needs to be described in detail, and it usually includes historical information. Treatment practices change over time, a shift that algorithms built a few years ago cannot recognize. It is also possible to unintentionally introduce discriminatory evaluation criteria based on traits such as gender or ethnicity.
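One way the non-discrimination criterion above can be made measurable is a demographic parity check: comparing how often a model recommends a treatment across patient groups. The sketch below is a minimal illustration with hypothetical names and data, not a method prescribed by the FDA.

```python
# Illustrative check for one discrimination criterion: demographic
# parity, i.e. whether a model makes positive recommendations at
# similar rates across groups. Names and data are hypothetical.

def positive_rate(decisions):
    """Share of positive (1) decisions in a list of 0/1 outcomes."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group):
    """Largest difference in positive-decision rates between groups."""
    rates = [positive_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

decisions = {
    "group_a": [1, 1, 0, 1],  # 75% recommended for treatment
    "group_b": [0, 1, 0, 0],  # 25% recommended for treatment
}
print(demographic_parity_gap(decisions))  # → 0.5
```

A gap near zero suggests the groups are treated alike on this metric; a large gap is a signal to investigate the training data and features, though it does not by itself prove discrimination.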

This FDA document, submitted for further consultation, is very general. It often mentions a 'patient-centred approach,' 'ensuring that users understand the benefits, risks and limitations of AI-enabled systems,' and 'trusting the technology.' The concrete proposals include labelling AI solutions, the idea being that users should know such systems are capable of making autonomous decisions. From the user's perspective, however, such knowledge is of little use: since the algorithm was approved for the market, it must already have been reviewed and deemed effective and safe.

One mechanism that is likely to work well is ongoing monitoring of deployed systems by their manufacturers or users (medical facilities). As with data security procedures, the users would be obliged to systematically review the algorithms' functionality. Security issues regarding the use of algorithms would be the responsibility of an AI systems administrator (AI Officer), a role akin to that of a personal data controller, who addresses privacy issues.
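The kind of systematic review described above can be sketched as a drift check: comparing the distribution of a model's outputs in production against the distribution seen at validation time, using the population stability index (PSI), a common drift statistic. This is an assumption-laden illustration, not part of the FDA plan; the bin shares and the 0.2 threshold are made up for the example.

```python
# Minimal sketch of real-world performance monitoring: comparing the
# distribution of model predictions after deployment against the
# distribution observed during validation, via the population
# stability index (PSI). Data and threshold are illustrative.
import math

def population_stability_index(expected, actual, eps=1e-6):
    """PSI between two binned distributions over the same bins."""
    psi = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, eps), max(a, eps)  # avoid log(0)
        psi += (a - e) * math.log(a / e)
    return psi

validation_dist = [0.5, 0.3, 0.2]  # share of predictions per risk bin
production_dist = [0.2, 0.3, 0.5]  # shares observed after deployment

psi = population_stability_index(validation_dist, production_dist)
print(round(psi, 3))  # → 0.55
if psi > 0.2:  # a commonly cited rule of thumb for significant drift
    print("flag for manufacturer review")
```

In this sketch a shift of predictions toward the high-risk bin pushes the PSI well past the rule-of-thumb threshold, which is exactly the kind of event a monitoring obligation would require the manufacturer to investigate.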

Validating healthcare IT systems that incorporate artificial intelligence/machine learning is still an unresolved issue and raises many questions. How can we verify that an algorithm does not apply discriminatory rules? Such systems will eventually be regulated, because their role in treatment and prevention is growing. The goal of regulation is to ensure the systems are safe and precise enough for physicians and patients to use them with confidence.

Download the Action Plan

