To search one's soul. Who needs emotion recognition technologies and why

Emotion recognition tools, according to their creators, should help in conflict resolution and crime prevention. But in practice, the accuracy of these systems raises questions among scientists, and their possible spread provokes protests from human rights defenders.

The right to be angry, tired and irritable

The heroine of "Be Right Back", an episode of the popular TV series Black Mirror, tried to create a digital copy of her deceased husband and placed his consciousness in an artificial body. However, she did not manage to build a lasting, meaningful relationship with the digital substitute, partly because the digital twin was absolutely obedient and showed little or no emotion. It was these traits that extinguished his partner's desire to build a life together. As a result, the avatar was banished to the attic.

Emotions are one of the phenomena that make us human. In recent years, psychologists and scientists have spoken and written a great deal about emotional intelligence and how to develop it. They also insist that there are no bad emotions: a person can and should feel sadness, fatigue and irritability, which are just as much emotions as calmness, happiness or joy.

The developers of the emotion recognition tools that have become popular in China and are starting to be actively used in Russia never say outright that people have no right to bad emotions. But the purpose of their products is to identify human emotions so that this information can be used, naturally, for someone else's ends. These tools, like the technology as a whole, raise many contentious issues, among them the accuracy of their results, the ethics of their use, the absence of legal regulation and the confidentiality of the data they collect.

For instance, the developers of Taigusys, an emotion recognition system popular in China, do not hide that their product can track and classify the facial expressions of employees at companies that use it, down to an insincere smile. The company claims that the system helps avoid workplace conflicts and improve efficiency: for example, it can identify when an employee needs a break. Taigusys' clients include Huawei, China Mobile, China Unicom and PetroChina. Its products are also used in Chinese prisons, where, according to the company's senior managers, they help keep prisoners "more obedient."

In some cases, the use of emotion recognition systems looks strange, even intimidating. For example, Canon's Chinese subsidiary uses a system that lets only smiling employees into the office. The company calls it a "workspace management" tool.

Who develops emotion recognition tools and why

Automated emotion analysis systems are also actively used in recruiting. HireVue, for example, uses its own technology in this area: its system analyzes video CVs and recorded interviews, taking facial expressions, tone of voice and other data into account to assess a candidate. Another player in this market is Pymetrics, whose product claims to assess "cognitive and emotional characteristics in just 25 minutes."

Amazon, Microsoft and IBM have developed their own emotion recognition systems, and in 2016 Apple bought Emotient, a startup that develops software for recognizing emotions in photos.

One of the largest developers of emotion analysis systems is Affectiva, a Boston-based company founded by researchers from the Massachusetts Institute of Technology. It has built one of the world's largest emotion databases, with over 10 million facial expressions of people from 87 countries, and has used this data in a range of applications that can, for example, detect distracted or risk-prone drivers on the road or gauge consumers' emotional reactions to advertising.

Where else are emotion recognition systems used?

People's emotions are read and analyzed in a variety of situations, some of them far from obvious. In January this year, for example, Spotify obtained a patent for technology that generates music recommendations based on a user's voice and mood. The patent describes analyzing a person's voice to determine their emotional state and other context (for example, whether they are in a city or a forest, alone or with company) and generating music recommendations from this data. The patent sparked protests among musicians, and in early May a letter was sent to Spotify's CEO and founder demanding that the company give up a number of voice-recognition patents. In the letter, the musicians, together with representatives of the non-profit organization Access Now, call such technologies unethical and cite the emotional manipulation of listeners among the risks of their use. The seriousness of the issue is underlined by the fact that the first letter was followed by a second one. In response, Spotify promised not to build these technologies into its products.
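The patent describes inputs and outputs rather than an implementation. Purely as an illustration of the idea, here is a minimal, hypothetical Python sketch in which voice-derived signals (emotion, environment, whether the listener is alone) feed a simple rule-based recommendation step; every name and rule below is invented and has nothing to do with Spotify's actual system.

```python
# Hypothetical illustration only: maps voice-derived context to a playlist
# category. None of this reflects Spotify's patented or deployed technology.
from dataclasses import dataclass

@dataclass
class VoiceContext:
    emotion: str      # e.g. "sad", "excited", inferred from the user's speech
    environment: str  # e.g. "city", "forest", inferred from background noise
    alone: bool       # whether other voices are detected

def recommend_category(ctx: VoiceContext) -> str:
    """Pick a playlist category from the inferred listening context."""
    if ctx.emotion == "sad" and ctx.alone:
        return "calm acoustic"
    if ctx.emotion == "excited" and not ctx.alone:
        return "party hits"
    if ctx.environment == "forest":
        return "ambient nature"
    return "daily mix"

print(recommend_category(VoiceContext("sad", "city", True)))  # calm acoustic
```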

Emotion recognition technology is also actively used by Russian banks, for instance in calls handled by contact center employees, to assess the level of customer satisfaction. The system analyzes the emotional tone of the dialogue along with additional parameters: the number of pauses, intonation, and changes in voice volume during the conversation. In addition, as the developers envision it, understanding the emotional state of the person on the other end of the line should help bank employees build the dialogue; contact center staff, for example, can show empathy.
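The article does not describe the banks' actual systems, but the parameters mentioned (pauses, changes in volume) are simple prosodic features. As a rough, hypothetical sketch of how such features can be computed from a raw recording, here is a short NumPy snippet; the frame size and silence threshold are arbitrary assumptions, and real products combine far more signals (intonation contours, speech content, dialogue structure).

```python
import numpy as np

def prosodic_features(signal: np.ndarray, sample_rate: int,
                      frame_ms: int = 50, silence_db: float = -35.0) -> dict:
    """Count pauses and measure volume variation in a mono audio signal."""
    frame_len = int(sample_rate * frame_ms / 1000)
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)

    # Per-frame loudness in decibels relative to full scale.
    rms = np.sqrt(np.mean(frames ** 2, axis=1))
    loudness_db = 20 * np.log10(rms + 1e-12)

    # A "pause" is a run of consecutive frames quieter than the threshold.
    silent = loudness_db < silence_db
    pauses = int(np.sum(silent[1:] & ~silent[:-1]))

    # Volume variation: spread of loudness across the voiced frames.
    voiced = loudness_db[~silent]
    variation = float(np.std(voiced)) if voiced.size else 0.0
    return {"pauses": pauses, "volume_variation_db": variation}

# Example on one second of synthetic audio: quiet tone, silence, louder tone.
sr = 16000
t = np.linspace(0, 1 / 3, sr // 3, endpoint=False)
tone = np.sin(2 * np.pi * 220 * t)
audio = np.concatenate([0.3 * tone, np.zeros(sr // 3), 0.8 * tone])
print(prosodic_features(audio, sr))  # e.g. {'pauses': 1, 'volume_variation_db': ...}
```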

Russia's Alfa-Bank has begun testing a system that analyzes the emotions of visitors to its branches in order to assess the quality of service. But Russian banks use a better understanding of emotions for more than improving their own services. A debt collection agency working with one of the largest Russian banks has started using voice-based recognition of debtors' emotions, and the bank does not rule out applying the technology in other areas in the future. The technology analyzes clients' intonation and uses this data to build an effective dialogue; the system can predict quite accurately how the conversation with the client will unfold.

But sometimes the ways these tools are used are downright intimidating. It recently became known, for example, that the authorities of the Indian city of Lucknow plan to use such a system to assess the psychological state of women who report violence, in order to determine whether the alleged victims are telling the truth or lying.

The ethics of the technology

Emotion recognition technologies pose many challenges, and one of the main ones is the ethics of their application. Assuming such systems correctly read a person's mood and emotions, how ethical is it to use this data at all? Doesn't analyzing it and making decisions based on it amount to a de facto denial of a person's right to emotions such as irritation or sadness?

There are also many questions about how such tools are created and tested. In May this year, for instance, it became known that the Chinese authorities are testing an emotion recognition system on Uyghurs in so-called correctional education centers, and several companies in the country offer products aimed precisely at this use, even though they claim their products are independent of ethnicity or religious affiliation.

British human rights activists from the group Article 19, having studied the use of emotion recognition technologies in China, urge that such solutions be abandoned as "incompatible with accepted human rights standards around the world."

Researchers from Cambridge have created a project that lets anyone see how an emotion recognition system works by connecting their webcam in a browser. The creators of the service call for close attention to how such tools are used and urge that all the risks to human rights and freedoms be studied.

The project's authors also draw attention to the false positives these systems produce and note that interest in such services has grown sharply during the pandemic (they are being tested, for example, in schools and universities to see whether students cheat or get distracted during classes). Accordingly, the threat to the confidentiality of this data has grown significantly: it is not always clear who will have access to the recognition data, or who will use it and how.

Emotions are not an exact science

Another problem with emotion recognition technologies is their accuracy and credibility. More than a decade ago, scientists from several countries published the results of a thorough study demonstrating how inaccurate the perception of emotions is. The authors showed that the initial impression of a person, their mood and the memory of them is distorted by the situation, the environment and other factors; in other words, different people's first impressions of another person's emotions will differ. Another study, which in 2021 tested the accuracy of using artificial intelligence to understand emotions, came to similar conclusions: algorithms do not recognize human emotions correctly, and there is no firm evidence that facial expressions, or the way a person conducts a dialogue, reflect their emotional state.

Kate Crawford, a Research Professor at USC Annenberg and Senior Principal Researcher at Microsoft Research, highlights other issues with the validity and accuracy of such developments:

More troubling still is that researchers in the field of the study of emotions have not reached consensus about what an emotion actually is: what emotion is, how it forms within us and is expressed, what its physiological or neurobiological functions might be, how it manifests itself under different stimuli. Despite this, automatic emotion reading exists nonetheless. Scientists understand, however, that admitting that emotions are not easy to classify and cannot be reliably identified from facial expressions could undermine a growing business. Long-standing disputes among scientists point to the main weakness of this technology: one-size-fits-all detection is the wrong approach. Emotions are complicated; they evolve and change depending on our culture and history, on all the diverse contexts that exist outside of artificial intelligence.

Kate Crawford

Research Professor at USC Annenberg and Senior Principal Researcher at Microsoft Research

It is hard to say whether the developers of emotion recognition tools, who promise businesses effective hires and higher productivity, will heed the views of academics and human rights activists. It is quite possible that such tools will soon become subject to regulation and even restrictions, especially if they continue to rely on artificial intelligence.

