FDA Action Plan for AI in medical software

Monday, February 15, 2021

Information systems collect and share data: they organize it, analyse it, support clinical decision-making and provide therapeutic or preventive guidance. When enhanced with artificial intelligence algorithms, these systems form a new generation of medical devices capable of making inferences from data, suggesting treatments and calculating health indicators. They are no longer the human-controlled digital medical records we have used for years but sophisticated medical devices that shape how care is provided. They therefore need to be regulated in order to guarantee patient safety.

In the case of conventional products, such as blood pressure monitors, ultrasound or X-ray machines and drugs, the matter is simple. Manufacturers submit documentation proving effectiveness, precision and safety, as demonstrated by clinical studies and tests, and the regulator reviews it. If the product is approved for the market and no new issues (defects, side-effects) come to light once it is in use, it remains certified. For AI-based software used in healthcare, it is no longer that simple. Such systems are constantly being updated, which means that their functionality changes; updates, at least, can still be monitored. Algorithms, however, make decisions based on the data available to them. The data differs in every case, and so the diagnosis, prognosis and classification of the patient's condition will also vary from case to case. And without knowing why an AI system made the decision it did (the 'black box' problem), oversight becomes almost impossible.

FDA takes up the challenge

Published by the FDA in January, the 'Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan' is an attempt to adopt a holistic approach to AI-based IT systems, one involving product lifecycle-based regulatory oversight. The plan outlines the five steps the FDA intends to take:

  • Further developing the proposed regulatory framework, including through issuance of draft guidance on a predetermined change control plan (for software's learning over time);
  • Supporting the harmonized development of good machine learning practices;
  • Fostering a patient-centred approach, including device transparency to users;
  • Developing regulatory science methods to evaluate and improve machine learning algorithms, including the identification and elimination of bias; and
  • Advancing real-world performance monitoring pilots.

The 7-page FDA document contains a set of plans that will undergo further consultation. The number of AI algorithms incorporated into medical software is snowballing, from mobile apps for patients to sophisticated hospital systems that plan surgical procedures or suggest evidence-based therapies to physicians.

Legislative monster

The FDA refers to "Good Machine Learning Practice," which focuses on data management, extracting features from data sets, interpreting results, etc. This approach is akin to the already existing good practices in software engineering or quality systems. Its goal is to guide the industry towards the expected standards for developing AI solutions and make it easier to oversee them. So far, the standards have not been consolidated into a single document or given legal force. This is another challenge, given that the term 'AI regulation' involves technical as well as ethical and security issues, which are covered in many different legal acts.
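Good Machine Learning Practice has not yet been codified, so any example can only sketch the discipline it implies. The snippet below is purely illustrative and assumes a hypothetical tabular data set and a scikit-learn classifier: the train/test split and random seed are fixed and recorded, and the resulting model is described in a small 'model card' that a reviewer could later inspect.

```python
# Illustrative sketch only: GMLP is not a codified standard, so this simply
# shows the kind of discipline it implies -- documented data handling, a fixed
# held-out test split, and recorded metadata available for later oversight.
import json
from datetime import datetime, timezone

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=42)  # fixed seed for reproducibility

# Hypothetical tabular patient data: 1000 cases, 5 documented features.
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# Held-out test set that is never touched during development.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Record what was trained, on what data, and how it performed.
model_card = {
    "model": "LogisticRegression",
    "trained_at": datetime.now(timezone.utc).isoformat(),
    "n_train": len(X_train),
    "n_test": len(X_test),
    "test_auc": round(float(roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])), 3),
    "intended_use": "illustration only -- not a medical device",
}
print(json.dumps(model_card, indent=2))
```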

Moreover, AI solutions require completely new evaluation criteria: transparency, precision and probability (when making a diagnosis), equal treatment and non-discrimination. Transparency alone, i.e. knowing why the algorithm made one decision and not another and how the inference was made, is often impossible to achieve with current technology; such difficulties are inherent to machine learning. The data used to train artificial intelligence algorithms needs to be described in detail. It is also usually historical: treatment practices change over time, a shift that algorithms built a few years ago cannot recognize. And it is possible to unintentionally introduce discriminatory evaluation criteria based on traits such as gender or ethnicity.
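One check that can already be expressed concretely is a subgroup comparison. The sketch below is an illustration rather than a regulatory test; it assumes hypothetical arrays of predictions, true labels and a recorded attribute such as sex or ethnicity, and simply compares the model's sensitivity across groups so that a large gap can be flagged for investigation.

```python
# Minimal sketch of a subgroup check. A large gap between groups is a signal
# to investigate, not proof of bias; all data here is hypothetical.
import numpy as np

def sensitivity_by_group(y_true, y_pred, group):
    """True-positive rate (sensitivity) computed separately per subgroup."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    rates = {}
    for g in np.unique(group):
        mask = (group == g) & (y_true == 1)  # positive cases in this group
        rates[str(g)] = float(y_pred[mask].mean()) if mask.any() else float("nan")
    return rates

# Hypothetical example: the model misses far more positive cases in group "B".
y_true = [1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1]
group  = ["A"] * 6 + ["B"] * 6

print(sensitivity_by_group(y_true, y_pred, group))
# {'A': 0.75, 'B': 0.25}  -> the disparity would need to be explained or fixed
```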

The FDA document submitted for further consultation is very general. It often mentions a 'patient-centred approach', 'ensuring that users understand the benefits, risks and limitations of AI-enabled systems' and 'trusting the technology'. Among the concrete proposals is labelling AI solutions, the idea being that users should know such systems are capable of making autonomous decisions. From the user's perspective, however, such knowledge is of little use: if the algorithm has been approved for the market, it has already been reviewed and deemed effective and safe.

One mechanism that is likely to work well is monitoring how the systems are used, carried out by their manufacturers or by the users themselves (medical facilities). As with data security procedures, users would be obliged to review the algorithms' functionality systematically. Security issues arising from the use of algorithms would be the responsibility of an AI systems administrator (an 'AI Officer'), a role akin to that of the personal data controller who addresses privacy issues.
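What such real-world performance monitoring could look like in practice is easier to pin down. The sketch below is a simplified illustration under assumed thresholds and window sizes: every prediction is logged together with the eventual outcome, and a rolling accuracy figure is compared against the baseline documented at approval, producing an alert for the hypothetical 'AI Officer' to review.

```python
# A minimal sketch of real-world performance monitoring, assuming a facility
# logs each prediction together with the outcome once it becomes known.
# Thresholds and window size are illustrative, not regulatory requirements.
from collections import deque

class PerformanceMonitor:
    """Tracks rolling accuracy and flags a drop below the documented baseline."""

    def __init__(self, baseline_accuracy: float, window: int = 500):
        self.baseline = baseline_accuracy
        self.results = deque(maxlen=window)   # 1 = correct, 0 = incorrect

    def record(self, prediction: int, outcome: int) -> None:
        self.results.append(int(prediction == outcome))

    def check(self) -> str:
        if len(self.results) < self.results.maxlen:
            return "collecting data"
        accuracy = sum(self.results) / len(self.results)
        if accuracy < 0.95 * self.baseline:   # alert on a >5% relative drop
            return f"ALERT: rolling accuracy {accuracy:.2f} below baseline {self.baseline:.2f}"
        return f"OK: rolling accuracy {accuracy:.2f}"

# Usage: the hypothetical 'AI Officer' reviews these reports as part of routine oversight.
monitor = PerformanceMonitor(baseline_accuracy=0.90, window=500)
# for prediction, outcome in stream_of_cases:   # hypothetical data source
#     monitor.record(prediction, outcome)
#     print(monitor.check())
```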

Validating healthcare IT systems that incorporate artificial intelligence/machine learning is still an unresolved issue and raises many questions. How is it possible to verify that an algorithm does not apply discriminatory rules? Such systems will eventually be regulated, because their role in treatment and prevention is increasing. The goal of regulation is to ensure the systems are safe and precise enough for physicians and patients to use them with confidence.

Download the Action Plan