Empathy in Artificial Intelligence

Emotions are no longer exclusive to humans. Robots and programs can emulate emotions, allowing a robot to simulate empathy, and artificial intelligence is a driving force behind research on this topic. Film and theater history has displayed the whole spectrum of emotions in robots and machines: sometimes they turn evil and show hatred and anger, sometimes they are caring and develop a true and resilient love for something or someone. In reality, the spectrum ranges from humanoid robots showing facial expressions to therapeutic robotic seals reacting to their owners’ actions, and even emotion detection in written text.

How do software companies decide whether to use an empathic or a neutral version? What are the pros and cons of each? In this post, we shed light on the decision-making process.

[Image: sad robot]

Describing artificial empathy as being as soothing and beneficial as human empathy would feel wrong. Nevertheless, empathy is the most commonly emulated emotion in technology and a prominent research field. Amazon’s Alexa and Apple’s Siri are the most famous virtual assistants, and overlapping technologies include empathic AI, digital companions, and virtual agents. One thing all of these have in common is their humanlike interaction with the user.

But let’s step back for a moment and reflect on the idea of empathy in artificial intelligence. It should be noted that there are innumerable types of empathy in IT products and services. One way to differentiate these types is by their degree of humanization.

Imagine using a browser without an internet connection. Every browser has its own way of telling the user that there is no connection. Some simply state something along the lines of “You are not connected to the internet”, while others have a sad robot pop up and say “I am so sorry, but I can’t do this right now. Are you sure your internet is working properly?”. Both versions convey the exact same content: the first neutrally reports an error, the second empathetically apologizes and initiates the troubleshooting process. What’s the difference? And why and how do services choose between a neutral and an empathic version?
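To make this concrete, here is a minimal TypeScript sketch of the idea (all names are hypothetical and not taken from any real browser codebase): both variants carry the same information, and a single tone setting decides how it is framed.

```typescript
// Hypothetical sketch: one error, two framings, selected by a tone setting.
type Tone = "neutral" | "empathic";

interface ErrorMessage {
  neutral: string;
  empathic: string;
}

const offlineError: ErrorMessage = {
  neutral: "You are not connected to the internet.",
  empathic:
    "I am so sorry, but I can't do this right now. " +
    "Are you sure your internet is working properly?",
};

// The informational content is identical; only the framing changes.
function renderError(message: ErrorMessage, tone: Tone): string {
  return tone === "empathic" ? message.empathic : message.neutral;
}

console.log(renderError(offlineError, "empathic"));
```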

[Image: emotional artificial intelligence]

To explain the difference between empathic and neutral AIs, we compare the pros and cons of both versions and outline which user groups tend to prefer each.

Pros and cons of empathic and neutral AIs

First and foremost, empathic AIs are a great tool for bringing some joy to otherwise tedious tasks. Tutorials and tool guides often use virtual assistants or companions as helpful guides. This is particularly popular because it has two main benefits. On the one hand, it promotes trust and a pleasant user experience: the user is not left alone. On the other hand, it reinforces corporate identity. Given two service tutorials covering the same information, a customer is more likely to remember the one with a friendly animal or robot popping up and explaining functions than the one with plain text bubbles.

Users are highly unlikely to read long explanatory documents (once known as user manuals), even though doing so might save them a lot of time in the long run. This irrational behaviour is known as the “paradox of the active user”, a concept introduced by John M. Carroll and Mary Beth Rosson in the 1980s: most users simply skip lengthy pure-text explanations, even when reading them would spare them time and research later on. One way to address this is to accompany users on their intuitive stroll through the functions with a digital companion. This companion pops up the first time an element is used and explains nothing more than the exact function of that element. This way, the user learns the functionality in a more playful and interactive way, as sketched below.
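One possible shape of such first-use guidance, sketched in TypeScript (element IDs and helper functions are invented for illustration, not taken from a real product):

```typescript
// Hypothetical sketch of a first-use digital companion: each UI element is
// explained only the first time it is used, then the companion stays quiet.
const explanations: Record<string, string> = {
  "export-button": "This button exports your data as a CSV file.",
  "filter-panel": "Use these filters to narrow down the result list.",
};

const alreadyExplained = new Set<string>();

function onElementUsed(elementId: string): void {
  if (alreadyExplained.has(elementId)) return; // explain each element once
  alreadyExplained.add(elementId);
  const text = explanations[elementId];
  if (text !== undefined) showCompanionBubble(text);
}

// Placeholder for the actual UI, e.g. an animated robot with a speech bubble.
function showCompanionBubble(text: string): void {
  console.log(`Companion: ${text}`);
}
```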

Some business sectors reject playful, empathic approaches to explaining key functions and generally prefer neutral versions with short, on-point explanations. Indeed, it is hard to imagine a playful monkey explaining the key functions of online banking, for instance.

In various businesses, however, the decision between empathic and neutral is not that easy. One thing to consider is the learning curve of users: studies show that, especially for user interfaces, learnability correlates with experience. Experienced users and professionals may therefore be annoyed by virtual assistants, since most things are self-explanatory and intuitive to them, while less experienced users are thankful for exactly this guidance. In a UX study we conducted with the Kompetenzzentrum Usability, we faced this problem and the obvious question: what’s the best way to deal with it?

The answer is pretty simple: since neutral vs. empathic is not a binary decision, hybrid versions are a common solution. In game design, users get to choose their experience from options like “I am familiar with the game”, “I am familiar with this genre”, or “I am completely new to the game”. Some software products use similar ideas to get a sense of how much help the user needs, as in the sketch below.
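One way such a hybrid could look in TypeScript (the levels and settings are illustrative assumptions, not a documented scheme): the user self-reports their experience once, and that choice controls how much guidance the assistant provides.

```typescript
// Hypothetical sketch of a hybrid approach: a self-reported experience level
// is mapped to guidance settings that tune the assistant's verbosity.
type ExperienceLevel = "new" | "familiar-with-genre" | "familiar-with-game";

interface GuidanceSettings {
  showCompanion: boolean;   // show the animated assistant at all
  explainBasics: boolean;   // walk through elementary functions
  explainAdvanced: boolean; // point out advanced functions
}

function guidanceFor(level: ExperienceLevel): GuidanceSettings {
  switch (level) {
    case "new":
      return { showCompanion: true, explainBasics: true, explainAdvanced: true };
    case "familiar-with-genre":
      return { showCompanion: true, explainBasics: false, explainAdvanced: true };
    case "familiar-with-game":
      return { showCompanion: false, explainBasics: false, explainAdvanced: false };
  }
}
```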

[Image: different ways of showing emotions in AI systems]

In our study, participants who work in IT were able to navigate the interface in no time and instinctively tried to skip or close the virtual assistant; afterwards, they stated that they would have preferred a neutral version of the tool. By contrast, participants who do not work in IT took their time, carefully read the instructions given by the virtual assistant, and afterwards stated that they liked the empathic version. A hybrid version, in which the user decides how much guidance is necessary, is the solution.

The three key takeaways from this post:

  • Research on empathy in AI is booming
  • Experts prefer neutral versions; beginners prefer empathic versions
  • Hybrid versions are a popular solution
