First EU AI Act guidelines: When is health AI prohibited?

Tuesday, February 11, 2025

On February 2, 2025, the first part of the EU AI Act, banning unacceptable artificial intelligence practices, went into effect. The European Commission has published draft guidelines on interpreting Article 5, which describes the prohibited AI applications. Many of them concern healthcare-related use cases.

What do the EU AI Act guidelines say about AI in healthcare?

The “Commission Guidelines on Prohibited Artificial Intelligence Practices established by Regulation (EU) 2024/1689 (AI Act)” provide an overview of practices classified as unacceptable under Article 5 of the EU AI Act. The 140-page document gives examples of AI forbidden in the European Union, such as social scoring systems, facial recognition systems for public surveillance (with some exceptions), or applications that could manipulate individuals and pose a threat to their health. Although the document describes forbidden use cases across all industry sectors, many of them relate to healthcare.

The primary rule is that the prohibition under Article 5 of the AI Act covers AI systems that pose unacceptable risks to fundamental rights and Union values. AI systems that pose high risks to health, safety, and fundamental rights are classified as high-risk.

Let’s categorize prohibited systems in healthcare according to the guidelines:

AI applications that could manipulate and mislead patients. This refers to AI systems that use subliminal or manipulative techniques to influence consumer and patient behavior in ways that could lead to harm. For example: AI solutions that suggest medical decisions to patients based on manipulative, non-explainable algorithms. However, if health is the primary goal, systems that influence users to achieve behavior change are allowed, as we describe later.

Exploitation of sensitive patient groups. AI that intentionally exploits the vulnerabilities of groups defined by age, disability, or economic situation is also prohibited. An example would be AI suggesting excessive or unnecessary treatment to patients who are disadvantaged or mentally disabled.

Example: “AI systems used to target older people with deceptive personalized offers or scams, exploiting their reduced cognitive capacity aiming to influence them to make decisions they would not have taken otherwise that are likely to cause them significant financial harm. A robot aimed to assist older persons may exploit their vulnerable situation and force them to do certain activities against their free choice, which can significantly worsen their mental health and cause them serious psychological harms.”

Scoring in healthcare. The EU AI Act introduces a ban on systems that evaluate citizens based on their behavior. The European Commission had in mind the so-called “social scoring” systems piloted in China, which award bonus points for compliance with state rules and deduct points for violations. Such systems do not align with the EU's democratic values. With regard to healthcare, for example, AI systems that evaluate patients based on their behavior, lifestyle, or other social characteristics and, on that basis, limit access to health services or calculate health premiums are prohibited. Such an approach could lead to discrimination in a solidarity-based health system.

Example: “A public agency uses an AI system to profile families for early detection of children at risk based on criteria such as parental mental health and unemployment, but also information on parents’ social behavior derived from multiple contexts. Based on the resulting score, families are singled out for inspection, and children considered ‘at risk’ are taken from their families, including in cases of minor transgressions by the parents, such as occasionally missing doctors’ appointments or receiving traffic fines.”

Predictive AI systems that discriminate. The EU AI Act also prohibits using AI to predict the risk of committing a crime based on personality traits or previous crimes. These provisions have little relevance to healthcare. However, one example would be algorithms that classify patients as potentially dangerous, which could discriminate against some of them.

Illegal collection and processing of biometric data. The EU AI Act prohibits collecting biometric data from facial recognition systems or surveillance cameras to build databases that identify citizens without their consent. With regard to healthcare, one can imagine an AI system for public health monitoring, integrated with electronic medical records, that spots people with infectious diseases in public places. Such a system could compromise patient privacy; however, it could be implemented during a pandemic, since this use case may be subject to national regulations.

Emotion recognition systems. The EU AI Act also bans AI algorithms that identify emotions based on, for example, voice, movements, or facial expressions. Such systems could lead to unwarranted surveillance, threatening individuals' privacy. In healthcare, however, the regulation provides an exception: AI solutions are allowed if they have a clearly defined medical purpose. For example, diagnosing a patient based on voice or movements. According to the guidelines, “emotion recognition in specific use contexts, such as for safety and medical care (e.g., health treatment and diagnosis), has benefits.”

Interestingly, emotion recognition is allowed not only in the healthcare context. One example involves an online music streaming platform: “An online music platform uses an emotion recognition system to infer users’ emotions and automatically recommends songs in line with their moods while avoiding excessive exposure to depressive songs. Since users are just listening to music and are not otherwise harmed or led to depression and anxiety, the system is not reasonably likely to cause significant harm.”

Will health chatbots be allowed? Yes, but with a few exceptions

The guidelines allow broad use of AI in healthcare, from diagnosis to therapy. For example, chatbots or robots that help the elderly or people with chronic diseases in their daily lives, improve early-stage diagnosis of diseases, or offer psychological support and exercise are allowed since they have a clearly defined goal. By far most chatbots for patients, symptom checkers, and AI-based health and fitness apps will fall under this category. Of course, another question is whether they are high or low risk – the corresponding guidelines are not out yet.

Medtech companies and startups do not have to be afraid of using health data collected from various sources to train AI algorithms that better screen patients.

Example: “The collection and processing of data that is relevant and necessary for the intended legitimate purpose of the AI systems (e.g., health and schizophrenic data collected from various sources to diagnose patients) is out of scope of Article 5(1)(c) AI Act, in particular, because it process relevant and necessary data and typically does not entail unjustified detrimental or unfavorable treatment of certain natural persons.”

A clearly defined healthcare-related goal is key. For example, measuring stress levels to improve patients' well-being is allowed, while monitoring stress biomarkers in the workplace to track employees and detect job burnout or depression is prohibited. Such solutions are unacceptable because they can serve a hidden purpose and be used against workers. The guidelines are clear here: “The general monitoring of stress levels at the workplace is not permitted under health or safety aspects. For example, an AI system intended to detect burnout or depression at the workplace or in education institutions would not be covered by the exception and would remain prohibited”.

However, this case can be subject to data protection law and national law on employment and working conditions, including health and safety at work, which may foresee additional restrictions and safeguards on the use of such systems.

Also, chatbots that exploit a user's weaknesses or vulnerabilities to control their behavior are unacceptable. One example is a chatbot that suggests an exhausting exercise program to an overweight person.

Example: “AI-powered well-being chatbot is intended by the provider to support and steer users in maintaining a healthy lifestyle and provide tailored advice for psychological and physical exercises. However, if the chatbot exploits individual vulnerabilities to adopt unhealthy habits or to engage in dangerous activities (e.g. engage in excessive sports without rest or drinking water) where it can reasonably be expected that certain users will follow that advice, which they would otherwise not have done, and suffer significant harm (e.g. a heart attack, or other serious health problem), that AI system would fall under the prohibition in Article 5(1)(a) AI Act, even if the provider might not have intended this behavior and harmful consequences for the persons.”

In such a case, the guidelines' authors talk about a significant violation of individual autonomy and distorting the behavior of the users, which can be potentially harmful. AI chatbots that promote self-harm, encourage suicide, or harm others also fall under this category.

Not all the guidelines are straightforward to interpret. AI systems that cause mental distress to the user are prohibited. An example is an AI system that results in physical harm, such as insomnia, mental stress, physical symptoms of stress, deterioration of physical health, or a weakened immune system. Establishing a direct cause-and-effect relationship between an AI system and such side effects will be problematic.

Example: “An AI system that causes physical harm may also lead to psychological trauma, stress, and anxiety and vice versa. For example, addictive design of AI systems used in products and other AI-enabled applications may lead to psychological harm by fostering addictive behaviors, anxiety, and depression. The psychological distress may subsequently result in physical harm, such as insomnia and other stress-related health issues and physical problems. AI-driven harassment may lead to both psychological distress and physical manifestations of stress, such as insomnia, deteriorated physical health, or a weakened immune system.”

A therapeutic chatbot that supports patients with mental health issues and helps people with cognitive disabilities cope with daily life is permitted. However, such a chatbot must not exploit the users' weaknesses, for example by encouraging them to buy expensive medical products.

Example: “A therapeutic chatbot aimed to provide mental health support and coping strategies to persons with mental disabilities can exploit their limited intellectual capacities to influence them to buy expensive medical products or nudge them to behave in ways that are harmful to them or other persons.” This use case also covers an AI system that uses emotion recognition to support mentally disabled individuals in their daily lives while manipulating them into harmful decisions.

AI systems that exploit children's mental vulnerabilities are prohibited on a similar basis.

Example: “A game uses AI to analyze children’s behavior and preferences on the basis of which it creates personalized and unpredictable rewards through addictive reinforcement schedules and dopamine-like loops to encourage excessive play and compulsive usage. The game is designed to be highly addictive, exploiting the vulnerabilities inherent to children, including their limited ability to understand long-term consequences, susceptibility to pressure, lack of self-control, and inclination toward instant gratification. The consequences of this AI-enabled exploitation can be severe and long-lasting for children, including potentially addictive behavior, physical health problems due to lack of exercise and sleep, deteriorated eyesight, problems with concentration and reduced cognitive capacities, poor academic performance, and social difficulties. It can significantly impact a child’s development and well-being, with potential longer-term consequences that may also extend into adulthood.”

The principle of health as a primary goal

Particularly interesting are some exceptions suggesting that AI can use subliminal techniques when health and well-being are the overriding goals. One example could be a chatbot that uses social engineering methods to coach users to live healthier lives or to break bad habits such as smoking.

Example: “A therapeutic chatbot uses subliminal techniques to steer users towards a healthier lifestyle and to quit bad habits, such as smoking. Even if the users who follow the chatbot’s advice and subliminal therapy experience some physical discomfort and psychological stress due to the effort made to quit smoking, the AI-enabled chatbot cannot be considered likely to cause significant harm. Such temporary discomfort is unavoidable and outweighed by the long-term benefits for users’ health. There are no hidden attempts to influence decision-making beyond promoting healthy habits”.

The European Commission has approved the draft guidelines, but they have not yet been formally adopted. National supervisory authorities and the European Data Protection Supervisor will be responsible for implementing the new regulations. Violations will be punishable by fines of up to 35 million euros or 7% of a company's annual turnover.

The guidelines give AI developers some clarity, but they do not cover all use cases in healthcare. Companies with a CE mark for their medical devices or digital solutions do not have to worry. However, those working on sophisticated algorithms that analyze multiple healthcare-related biomarkers and recommend behavior changes will have to double-check their risk group under the EU AI Act. The word “health” appears 40 times in the 140-page document, highlighting that health-related AI risks are one of the focuses of the EU AI Act.

Click here to download the guidelines (PDF, English): Commission Guidelines on prohibited artificial intelligence practices established by Regulation (EU) 2024/1689 (AI Act).