Emotion Recognition—a Sheep or a Wolf in Sheep’s Clothing? (PART II.)

In our previous post, we briefly discussed the possibilities of analyzing emotions in written text. Compared to Emotion Recognition (ER), that is a much simpler case, and yet we have already seen how complex it can be.

ER attempts to analyze a person’s emotional state from video or audio recordings. For this purpose, ER systems largely rely on tools already established in face recognition. Here, however, the aim is not (exclusively) to identify specific persons, but to determine their mood. To this end, such systems observe facial expressions and, in more complex setups, also posture, metacommunication, and voice inflection.
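To make the basic idea more concrete, the sketch below shows what the simplest, image-only form of ER looks like in practice. It assumes the open-source deepface Python package; the exact return format varies between library versions, and the file name is purely hypothetical.

```python
# Minimal image-based emotion recognition sketch, assuming the open-source
# "deepface" package (pip install deepface). The file name is hypothetical,
# and the return format varies between library versions.
from deepface import DeepFace

# With actions=["emotion"], analyze() detects faces in the image and scores
# each one against a fixed set of categories (happy, sad, angry, ...).
results = DeepFace.analyze(img_path="interview_frame.jpg", actions=["emotion"])

# Recent versions return one result dictionary per detected face.
for face in results:
    print(face["dominant_emotion"])  # e.g., "neutral"
    print(face["emotion"])           # confidence score per category
```

Note that such a classifier only ever maps pixels to a fixed label set; everything beyond that, including whether the label says anything about the person’s actual inner state, is interpretation.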

The method was perhaps first really brought to the fore in the context of the Chinese Social Credit System. A few years ago, it was reported that the Chinese state was using ER technology to profile, for example, people under police interrogation, typically members of the Uyghur minority. The Chinese government has long been criticized for its discriminatory treatment of this group. The lives of members of the minority are reportedly pervaded by an aggressive data collection policy that affects almost every aspect of their lives, from DNA samples and iris scans to the private data on their mobile phones. The information thus extracted will presumably be linked into a Big Data repository that will later help to build and consolidate even tighter control over the population. In the long term, the Social Credit System is also expected to make use of this data.

Perhaps it is a similarly Orwellian vision of the future that has led to the EU’s recent resistance to the use of ER systems.

Advocates of restricting such systems point, in the context of China’s booming ER market, to their negative impact on individual freedoms, the most significant of which appears to be the restriction of freedom of expression. They argue that these technologies are increasingly becoming part of everyday life in China, whether authorities use them to identify “suspicious” persons or schools use them to monitor pupils’ attention levels during lessons. Their main conclusion is that such practices should be banned before they have time to become embedded in the social system.

In contrast, opponents of restrictions claim that the push to ban ER technologies comes mainly from a group of “anti-tech” organizations (e.g., Access Now, European Digital Rights). One of the main arguments against ER is that European governments should not be able to use these solutions to violate fundamental freedoms; opponents of restrictions counter that civil liberties and human rights enjoy much stronger and more extensive protection in the EU than in China. Some even go so far as to interpret the resistance of such groups as resistance to law enforcement itself. This side also tends to highlight the contradiction between two important arguments often made against ER technology. One is that the technology is highly invasive of everyday life and individual rights. The other is that ER solutions simply do not work, or at least are not reliable (and are therefore simply pseudoscience). Opponents of restrictions in the EU say that a technology cannot be both invasive and ineffective, and they use this (perhaps only apparent) contradiction to discredit the arguments of those who support restriction.

As with most complex issues, it is not easy to decide which side is right. If we stay with the Chinese example, a prime example (from a European perspective) of the misuse of such systems, we can see that even a solution that does not work perfectly can have harmful consequences. The prejudices formed in the minds of those who interpret the output of an ER system are harmful in themselves, whether the system works well or not. Not to mention that there is no reliable data on the actual performance of the systems used in China.

As we have already seen with the example of the more straightforward Sentiment Analysis, such developments carry a huge number of pitfalls and sources of error. For ER systems, the inherently multimodal nature of the data only complicates this further. The immaturity of such applications is well illustrated by the fact that only a few years ago, papers were still being published on the difficulty of even producing appropriate training data.
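To illustrate one (deliberately simplified) way multimodality adds error sources, the toy sketch below shows a naive late-fusion step, in which per-modality emotion scores are combined with fixed weights. All names and weights are hypothetical; real systems learn the fusion from data, which is precisely where new failure modes enter.

```python
# A toy late-fusion step for a hypothetical multimodal ER system.
# All names and weights are illustrative; real systems learn the fusion.
from typing import Dict

EMOTIONS = ["neutral", "happy", "angry", "sad"]

def fuse(face: Dict[str, float], voice: Dict[str, float],
         w_face: float = 0.6, w_voice: float = 0.4) -> str:
    # Each modality yields its own (possibly conflicting) score distribution;
    # a weighted sum papers over the disagreement instead of resolving it.
    combined = {e: w_face * face.get(e, 0.0) + w_voice * voice.get(e, 0.0)
                for e in EMOTIONS}
    return max(combined, key=combined.get)

# A frowning face but a calm voice: the hard-coded weights, not the
# evidence, end up deciding the label.
print(fuse({"angry": 0.7, "neutral": 0.3}, {"neutral": 0.9, "angry": 0.1}))
# -> "neutral"
```

Even in this trivial form, the fused label can contradict what either single modality would have reported, and each added modality multiplies the ways the pipeline can fail quietly.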

Bear in mind that the EU is not currently aiming for a total ban on ER as a technology, only for a ban on some of its uses in a few critical areas. This is also in line with the fact that, only a few years ago, Horizon 2020 grants were still being awarded to ER-related projects.

If we want to assess how invasive a real-time ER system could become, it is not enough to start from the systems that currently exist. Consider that less than a year and a half ago, the peak of Generative Artificial Intelligence (GAI) was represented by systems that could only summarize (more or less accurately) the content of texts. Since then, these systems have not only become capable of searching and organizing information from the Internet in real time but have also moved towards multimodality. The long list of developments published by OpenAI in the last year alone illustrates this well.

We cannot be sure when other areas of artificial intelligence will embark on a similarly explosive path. And if a previously permissive regulation suddenly leaves the unintended consequences of such a technology uncontrollable, the situation could easily slide into a quasi-dystopian scenario.

Of course, hindering innovation would be a serious disadvantage for the EU in terms of competitiveness in AI development. The current draft of the AI Act is often criticized for being too defensive and techno-pessimistic. It is true, after all, that any technology is only as dangerous as its user makes it, and this is no different in the case of AI. In any case, the draft shows that the EU does not want to give any of its Member States even the possibility of building a surveillance state like China’s. Whether this is really an overreaction or the result of lobbying, and to what extent it will limit competitiveness, can only be judged with certainty once the consequences are known.


István ÜVEGES is a researcher in Computational Linguistics at MONTANA Knowledge Management Ltd. and at the Centre for Social Sciences, Political and Legal Text Mining and Artificial Intelligence Laboratory (poltextLAB). His main interests include practical applications of Automation, Artificial Intelligence (Machine Learning), Legal Language (legalese) studies, and the Plain Language Movement.
