The foreword already highlights how crucial the responsible development of AI is. “Our future is a race between the growing power of technology and the wisdom with which we use it,” writes Soumya Swaminathan, Chief Scientist at WHO, quoting Stephen Hawking.
Artificial intelligence (AI) refers to algorithms that can learn from data and thereby carry out automated tasks without human intervention. However, to fully exploit the benefits of AI, we need to face the challenges posed by the adoption of this new technology. Whether AI brings benefits to patients, healthcare workers, and healthcare systems depends on the implementation of regulations that support the development of ethical and transparent algorithms.
AI may help medical facilities improve the quality of patient care, increase the precision of diagnoses, and optimize treatment plans and care standards. It may become an element of an effective pandemic surveillance system. It may support decision-making on health policies or the allocation of resources. By using automation to increase the accessibility of services, AI may support healthcare systems aiming at universal health coverage. Finally, it can narrow gaps in access to health services.
To realize this potential, healthcare workers and healthcare systems need detailed information on the contexts in which such systems can function safely and effectively. Healthcare professionals should have access to training to acquire digital skills. AI enables patients to take control of their own health and understand their changing health needs. To achieve that goal, patients’ data must be kept safe and processed in compliance with best practices, in a transparent and trusted manner.
Six guidelines for ethical AI systems
The six core principles identified by the WHO Expert Group are the following:
- Protect autonomy;
- Promote human well-being, human safety, and the public interest;
- Ensure transparency, explainability, and intelligibility;
- Foster responsibility and accountability;
- Ensure inclusiveness and equity;
- Promote AI that is responsive and sustainable.
AI has enormous potential for strengthening the delivery of healthcare, but the transparency of AI solutions and algorithms must also be ensured. AI technologies should be understandable not only to their creators but also to healthcare workers, patients, users, and regulatory authorities. Transparency requires developers to document the planning and development of AI solutions so that their functionality, potential benefits, and risks can easily be verified. Patients and doctors should also be included in this process.

Moreover, it is necessary to establish principles of responsibility in case artificial intelligence causes harm. Many questions arise when algorithms are used: Who is responsible for a wrong diagnosis or treatment? How will that person be held liable?

It is equally essential to ensure inclusiveness and equity. According to the WHO, AI should be designed to encourage the broadest possible application. Algorithms should be trained on high-quality data so that their decisions do not discriminate against people based on sex, income, race, ethnicity, sexual orientation, or other characteristics protected by human rights codes. AI technologies must not encode biases to the disadvantage of identifiable groups, especially those already marginalized. AI-based tools and systems should be monitored and assessed to identify disproportionate advantages for specific groups of people. No technology, including AI, should perpetuate or worsen existing prejudice and discrimination.

When it comes to responsiveness, designers, developers, and users should continuously and thoroughly assess AI applications in real time. They should develop tools to report whether an AI system responds adequately and correctly, in accordance with requirements. Responsiveness also means that AI technology should be consistent with sustainable development goals in healthcare systems.
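The kind of monitoring described above can be sketched very simply as a group-wise audit of model performance. The following is a minimal illustration only, not any method prescribed by the WHO; the function names, the toy data, and the 0.1 gap threshold are all assumptions chosen for the example:

```python
# Minimal sketch of a group-wise fairness audit: compare a model's
# accuracy across demographic groups and flag large gaps.
# All names, data, and the 0.1 threshold are illustrative assumptions.

from collections import defaultdict

def group_accuracy(records):
    """records: iterable of (group, prediction, actual) tuples.
    Returns {group: accuracy for that group}."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, pred, actual in records:
        total[group] += 1
        if pred == actual:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

def flag_disparity(acc_by_group, max_gap=0.1):
    """Return (flagged, gap): flagged is True when the accuracy gap
    between the best- and worst-served groups exceeds max_gap."""
    gap = max(acc_by_group.values()) - min(acc_by_group.values())
    return gap > max_gap, gap

# Toy data: group A is served correctly 3/4 of the time, group B 2/4.
records = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 1, 0),
    ("B", 1, 0), ("B", 0, 1), ("B", 1, 1), ("B", 0, 0),
]
acc = group_accuracy(records)
flagged, gap = flag_disparity(acc)
```

In practice such an audit would run continuously on live predictions and feed a reporting tool, which is the "responsiveness" requirement the text describes; a single aggregate accuracy figure would hide exactly the disparities this per-group breakdown exposes.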
AI systems should be designed to minimize their negative impact on the environment and climate and to increase energy efficiency. Sustainable development also requires governments and technology companies to consider the potential job losses caused by automated systems. The long-term effects of AI on society need to be built into strategic plans at the national and regional levels, with initiatives aimed at minimizing negative consequences.