With healthcare under pressure almost everywhere - in Western countries partly due to the widening gap between human capacity and the demand for care - AI applications are increasingly being looked to for relief. In practice, however, adoption is proving difficult: research suggests that only two per cent of all AI innovations are actually used. Reasons include a poor fit with healthcare practice, ethical dilemmas and discriminatory bias in AI algorithms. TU Delft will advise the WHO to help address such issues.
According to Dr Alain Labrique (Director, Department of Digital Health and Innovation at the WHO), AI has the transformative power to reshape healthcare and to empower people to monitor their own health. ‘The technical and academic partnership with TU Delft's Digital Ethics Centre is crucial to ensure that the benefits of AI reach everyone worldwide through ethical governance, equitable access and collaborative action,’ says Labrique, a speaker at the first ICT&health World Conference in May 2024.
Official accreditation
When AI is used in healthcare, it is vital to uphold both ethical principles and healthcare standards and values. International guidelines have been drawn up for this, but they have yet to be translated into practice. That is where the TU Delft Digital Ethics Centre will now provide support. On 6 March, the Delft research centre received the accreditation that officially makes it a WHO collaboration partner in the field of Ethics and Governance of AI in Healthcare.
AI can only improve healthcare if it rests on a sound ethical foundation, argues Michel van Genderen, internist-intensivist and associate professor at Erasmus MC. To put that foundation into practice, the TU Delft Digital Ethics Centre is working with Erasmus MC and software company SAS in the AI ethics lab (REAiHL), set up at Van Genderen's initiative. The aim is to develop a general framework for applying AI safely and ethically hospital-wide.
Thanks to the collaboration between the WHO, TU Delft, Erasmus MC and software company SAS, Van Genderen says, AI can be applied responsibly and transparently in clinical practice. ‘An example is an ongoing project within Erasmus MC, where AI helps determine when a patient can be safely discharged after oncological surgery. If we meet all the preconditions, this not only makes discharge safer: patients can go home four days earlier on average, and readmissions are halved.’
Frameworks for responsible use of AI
Stefan Buijsman, Associate Professor of Responsible AI, says the TU Delft Digital Ethics Centre is the result of almost 20 years of research into digital ethics and responsible innovation. Together with the WHO, the centre has already established frameworks for the responsible use of AI, including generative AI, in healthcare.
‘Now they are approaching us to make this concrete. How will it work in practice? At the WHO, the thinking was: if there are answers to be found anywhere, it is in Delft. TU Delft, and in particular Professor Jeroen van den Hoven, has been working with the WHO on this topic for several years. Now we are getting official recognition for that through this accreditation.’
AI pilots conceived in Delft can be tested in practice within the AI ethics lab (REAiHL), Buijsman explains. He believes it is important to see whether what is devised also works in day-to-day hospital practice. ‘We can work out the ethical frameworks and come up with matching technological solutions; at Erasmus MC, they can validate them and identify needs from practice.’