The humanoid Sophia is a PR stunt, not a robot that will accompany you

Wednesday, June 19, 2024
Robotics
News

Recent developments in AI have allowed robots to gain new skills. But can they care for patients like humans do? At the re:publica 2024, we talked to Dr. Katharina Weitz, a psychologist and computer scientist researching Explainable and Human-Centered AI at the Fraunhofer Institute for Telecommunications in Berlin.

When people talk to ChatGPT, they often do so as they would with a human, saying "please." Does this mean that we should start treating machines like humans?

We found in studies that humans trust the explanations of an AI system more when the system communicates with us in a natural, human-like way, for example, using text, voice, or a virtual representation. In addition, the more human-like an AI system appears, the more we assume it has human-like intelligence.

Researchers like Kate Darling and Ben Shneiderman highlight that we should design AI systems not like humans but like tools—since AI systems are not humans. We should be very clear about that.

In 1966, Joseph Weizenbaum created the first-ever chatbot, Eliza. Patients interacting with Eliza reported feeling as if they were talking to an empathetic therapist. In 2024, the technology is far more advanced. What technologies are used to create synthetic feelings and emotions?

When discussing synthetic feelings and emotions, I first want to clarify that AI systems cannot feel emotions.

However, we are getting better and better at teaching AI systems to perceive and imitate emotional responses—especially the classification of emotional facial expressions. Here, we use Deep Neural Networks trained on thousands of images of people with different facial expressions to learn what a "happy" or "sad" face looks like.

We can then use these trained AI systems in real time to detect a person's emotional expression. This works pretty well, but only when the person expresses their actual feeling. Imagine a job candidate who is asked a question that makes them feel uncomfortable: they would still look friendly and smile, trying to hide their true feelings. An AI system trained on facial image classification would nevertheless classify the person as "happy." Humans, in contrast, would consider the context and assume that the person is unhappy, even if they smile.
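To make the technique concrete, here is a minimal sketch of how such a facial-expression classifier is typically built, in this case by fine-tuning a pretrained convolutional network in PyTorch. The dataset path, the example image, and the choice of ResNet-18 are illustrative assumptions, not details of any specific system mentioned in the interview.

```python
# Minimal sketch (illustrative assumptions, not a specific production
# system): fine-tune a pretrained CNN to classify facial expressions.
# Assumes face crops stored as data/expressions/<label>/*.jpg with
# labels such as "happy" and "sad", and PyTorch/torchvision installed.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Preprocess each face image to the input size the backbone expects.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Thousands of labeled expression images, one folder per label.
dataset = datasets.ImageFolder("data/expressions", transform=preprocess)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# Start from an ImageNet-pretrained backbone and swap the final layer
# to predict expression labels instead of the 1000 ImageNet classes.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):  # a few passes over the data, for brevity
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()

# "Real-time" use: classify a single face crop. The model sees only
# pixels, so a polite smile is labeled "happy" regardless of context.
from PIL import Image

face = Image.open("candidate.jpg").convert("RGB")  # hypothetical input
model.eval()
with torch.no_grad():
    probs = model(preprocess(face).unsqueeze(0)).softmax(dim=1)
print(dataset.classes[probs.argmax().item()])
```

The last lines illustrate the limitation described above: the network maps pixels to labels and has no notion of the surrounding situation, which is why context-aware judgment remains a human strength.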

Can you please provide some examples of smart robots?

Some robots are modeled on animal-like behavior and reactions, like Paro, which looks like a white toy seal. It reacts to its name being spoken and to being stroked, moves its head and flippers, and makes sounds.

Navel is a very facially expressive robot that can show a variety of emotional states. Compared to Paro, Navel is designed to be more human-like; it has, for example, a face with eyes, eyebrows, and a mouth. Robots like Sophia are even more humanoid: Sophia's developers tried to make her as human-like as possible.

What does "human-centered machines" mean for you?

As a researcher, I find definitions helpful for briefly stating a term's goal or focus. For Human-Centered AI, I like the one from Mark Riedl: "Human-centered artificial intelligence is a perspective on AI and ML that algorithms must be designed with awareness that they are part of a larger system of humans."

For me, it is essential to design AI with humans in mind—the ones who are using the AI systems and those who are affected by the decisions of such systems.

Healthcare is facing a shortage of health workers. Can you imagine robots being a solution to this problem?

No, at least not at the current state of development of our robots. Research is making a lot of progress, and ChatGPT, for example, helps make robots more capable and more broadly usable in conversation. However, they still have much to learn before they provide real benefit in practice.

Caring for people is a complex task that a robot currently cannot do, and it is open to debate whether it should. Perhaps robots can be used for smaller tasks, such as courier services, to reduce the workload on clinic personnel. Even that can be very challenging for a robot. I recently heard of a robot designed to bring people beverages that failed when a door was closed, because it was not trained to open doors. Even simple human tasks like "bringing a bottle of water" are still challenging for robots in a changing environment.

Can robotics mitigate the epidemic of loneliness? Recent reports suggest that one-third of the population feels lonely.

Some robots are already used for this purpose, such as the robot Parlo from Japan. It knows more than 300 games and is designed to "care" for and support the welfare of its users. Robots are similar to pets in this respect: they can be good listeners and motivate users to be more active. But they cannot replace human relationships.

So, where is the boundary between a healthy approach to machines and a problematic relationship with robots?

Robots can be great companions. Research shows that we form relationships with robots, even ones that do not look human- or animal-like.

There is, for example, a study investigating the relationship between robot vacuum cleaners and their owners. Attributing feelings and intentions to "living-like" objects is a natural habit, as Fritz Heider and Marianne Simmel demonstrated in a famous study back in 1944. They showed participants moving geometric shapes, and participants interpreted them as "subjects with intentions." Try it yourself: it is hard not to read intentions into these shapes.

Back to the question: I think the boundary between a healthy and a problematic relationship with robots is fluid. Feeling a connection to a robot is normal, but relying on it too much can be dangerous. Indicators for recognizing when things are becoming problematic and when a person needs help can be borrowed from those used for other kinds of disorders. For example: Is the person's level of suffering so high that they cannot resolve it themselves? Does the person limit themselves so much that they can no longer cope with everyday life? Are other people affected and suffering?

Some may have the impression that these boundaries have already been crossed: in October 2017, the robot Sophia received Saudi Arabian citizenship. Is this the future, a PR stunt, or something creepy?

Let's look at what has changed since then. Have robots become members of our society? Has any other robot been granted citizenship? What has happened in the development of robots? Not much.

Such actions are successful PR stunts: they are great to report on and to discuss possible implications. However, I don't want to pay too much attention to these moments because they tend to distract from the actual problems we have with robots right now.

I mean technical challenges on the one hand, but also social ones: How can we technically improve robots so that they can take on tasks? How are we handling the energy and resources needed to develop AI-based robots? What must a robot be able to do so that people can benefit from it? How can we guarantee data protection, especially when vulnerable groups use them?

Tesla, NVIDIA, Huawei: almost every big tech company is now building robots. How will healthcare change when robots are used in daily practice? How will society change?

It will change just as other areas where we now use AI systems have changed. ChatGPT, launched in November 2022, is an excellent example. Initially, there was a lot of fascination with the system's capabilities, and fear that many jobs would become obsolete. Two years later, we have adapted to ChatGPT: there are workshops, talks, books, and websites on how to use it most efficiently, with tips, tricks, and ideas for usage. The system is now part of our technical world and is here to stay.

The question is: what are its limitations, and how can I use it most beneficially? The same applies to robots. Once we have robots that can be used as easily as we use ChatGPT today, the question becomes: what are the robot's limitations, and which tasks suit it, so that I benefit the most from it?

Society will change in that specific tasks are delegated to robots while we gain time for other things. This leads to the question: what do we want to use our time for? Other tasks? More time for people? More time for ourselves?

What scares you most when you follow the development of robots right now?

It is not the development of the robots themselves that scares me; it is the perspective and goals of the developers. As with every technology, you can benefit or suffer from it. We, as individuals, society, and politics, must ensure that social values and regulations are represented in these robots. One of them is keeping humans in the loop: it is the human, not the robot, who should have the final word.

Not everything that AI enables should actually be done!

In the Oscar-nominated movie "Robot Dreams," a dog forms a deep friendship with a robot, which dramatically changes its life. Will robots improve our lives by mirroring our dreams, fears, and hopes?

I think robots mirror our strengths and limitations. When I started working in this field, I realized how many things humans are capable of without even paying attention to them (remember the example of carrying a bottle of water from one room to another) and how hard it is to program a robot to do similar things.

When we look at AI biases, for example, the technology shows us how many inequalities exist in our society. The AI systems we develop hold up a mirror to us very clearly.