The six principles for AI in healthcare developed by the WHO

26 July 2021
The foreword already highlights how crucial the responsible development of AI is. "Our future is a race between the growing power of technology and the wisdom with which we use it," writes Soumya Swaminathan, Chief Scientist at the WHO, quoting Stephen Hawking. We speak of artificial intelligence (AI) when algorithms can learn from data and thereby perform automated tasks without human intervention. To fully exploit the benefits of AI, however, we need to face the challenges posed by the adoption of this new technology. Whether AI brings benefits to patients, healthcare workers, and healthcare systems depends on the implementation of regulations supporting the development of ethical and transparent algorithms.

AI may help medical facilities improve the quality of patient care, increase the precision of diagnoses, and optimize treatment plans and care standards. It may become an element of an effective pandemic surveillance system. It may support decision-making on health policies or the allocation of resources. By increasing the accessibility of services through automation, AI may support healthcare systems aiming at universal health coverage. Finally, it can narrow gaps in access to health services.

To realize this potential, healthcare workers and healthcare systems need detailed information on the contexts in which such systems can function safely and effectively, and healthcare professionals should have access to training in digital skills. AI also enables patients to take control of their own health and understand their changing health needs. To achieve that, patients' data must be kept safe and processed in compliance with best practices, in a transparent and trusted manner.

Six guidelines for ethical AI systems

The six core principles identified by the WHO Expert Group are the following:
  1. Protect autonomy;
  2. Promote human well-being, human safety, and the public interest;
  3. Ensure transparency, explainability, and intelligibility;
  4. Foster responsibility and accountability;
  5. Ensure inclusiveness and equity;
  6. Promote AI that is responsive and sustainable.
The first principle is to protect autonomy: the use of AI and other computing systems must not undermine human autonomy, meaning that humans should retain control over medical decisions. For example, service providers should have the information required to use AI systems safely and effectively, while patients should be informed about their role in the care process. This principle also covers the protection of privacy and confidentiality, as well as informed consent based on applicable data protection laws.

Secondly, AI solutions need to promote human well-being and safety and protect the public interest. AI designers should be guided by regulatory requirements for safety, accuracy, and effectiveness, established for clearly defined applications of algorithms, and quality control measures for AI need to be developed. This principle also entails preventing harm: AI must not cause mental or physical harm that could be avoided by using alternative tools.
AI has enormous potential for strengthening the delivery of healthcare
It is also required to ensure the transparency of AI solutions and algorithms. AI technologies should be understandable not only to their creators but also to healthcare workers, patients, users, and regulatory authorities. Transparency requires developers to document the planning and development of AI solutions so that their functionality, potential benefits, and risks are easy to verify. Patients and doctors should be included in this process as well.

Moreover, it is necessary to establish principles of responsibility and accountability in case artificial intelligence causes harm. Many questions arise when algorithms are used: Who is responsible for a wrong diagnosis or treatment? How will that person be held liable?

It is equally essential to ensure inclusiveness and equity. According to the WHO, AI should be designed to encourage the broadest possible application. Algorithms should be trained on high-quality data so that their decisions do not discriminate against people on the basis of sex, income, race, ethnicity, sexual orientation, or other characteristics protected by human rights codes. AI technologies must not encode biases to the disadvantage of identifiable groups, especially those already marginalized, and AI-based tools and systems should be monitored and assessed to identify disproportionate advantages for specific groups of people. No technology, including AI, should perpetuate or worsen existing prejudice and discrimination.

Finally, when it comes to responsiveness, designers, developers, and users should continuously and thoroughly assess AI applications in real time. They should develop tools to report whether AI reacts adequately and correctly, in accordance with requirements. Responsiveness also means that AI technology should be consistent with sustainable development goals in healthcare systems.
AI systems should be designed to minimize their negative impact on the environment and climate and to increase energy efficiency. Sustainable development also requires governments and technology companies to consider the potential job losses caused by automated systems. The long-term effects of AI on society need to be included in strategic plans at the national and regional levels, with initiatives aimed at minimizing negative consequences.

Advice for creators and users

The WHO report contains practical advice on implementing the guidelines, addressed to designers, programmers, providers, and Ministries of Health and of Information Technology. It is also intended for other government agencies and departments that will regulate AI, for people who use AI technologies for health, and for entities that design and finance AI technologies. The guidance document was published after a two-year development process led by two departments in the WHO Science Division: Digital Health and Innovation, and Research for Health.