Draft EU laws regulating facial recognition and artificial intelligence will affect the entire market, and US and Chinese companies in particular.
Unlike most laws, which are local in nature, attempts to legislate technology tend to be global in reach. Borders in the technological world are largely illusory, so one country's efforts to regulate technology can affect the entire market, if not directly then indirectly.
This is exactly what happened when the GDPR (General Data Protection Regulation), a set of rules aimed at protecting the personal data of Internet users, took effect in the EU in 2018.
Now we are witnessing a new attempt to regulate the IT market: the European Commission has proposed significant limits on the use of artificial intelligence. The ideas put forward in the draft law published by the Commission aim to reconcile progress with the protection of human rights.
Whether the authors of the draft law can strike a reasonable balance, imposing restrictions without hindering the development of artificial intelligence, a technology used directly or indirectly in many fields, remains an open question.
So does the question of whether the proposed ideas will become reality. What is clear is that technology is becoming ever more entangled with geopolitics, and confrontation between countries in this sphere plays an important role in the global balance of power and in what is arguably a new "cold war", this time a technological one.
The new rules announced by the European Commission represent the most serious attempt yet to regulate artificial intelligence. They could become the basis for global requirements and regulations for this promising but controversial technology.
Ordered intelligence
The European Commission's proposals would ban the use of artificial intelligence (AI) in areas where it can threaten society. Only technologies that meet all requirements for safety, accuracy, reliability and openness will be allowed to enter the market and be applied.
The document notes that trust should become the basic element of AI development. On this foundation, the authors argue, ethical technologies can be built: ones that do not threaten the security and fundamental rights of EU citizens.
The authors of the document identify four levels of risk for artificial intelligence technologies: minimal, limited, high and unacceptable. Technologies with a minimal risk level will be permitted. Commenting on this approach, European Commission Vice President Margrethe Vestager explained that the higher the risk an AI system poses, the stricter the restrictions that will apply to it. In general, it is the area in which artificial intelligence is applied that will determine the level of restrictions imposed.
The proposals would prohibit the use of artificial intelligence in certain areas where it can threaten society; these are the technologies with an unacceptable risk level. Among them the document names technologies for assessing the reliability of forensic evidence, algorithms for hiring employees and verifying the authenticity of documents, and the use of AI in robotic surgery.
The use of artificial intelligence in legal proceedings and law enforcement is considered particularly dangerous. AI tools designed to manipulate people and their decisions would be banned outright.
High-risk technologies include automated exam grading and loan scoring; in these cases, the authors of the draft law see discrimination as the main danger. Such technologies would require regulatory oversight. Chatbots are classed as limited-risk technologies: users must be notified that they are communicating with an algorithm rather than a living person.
The European Commission also identified artificial intelligence technologies with minimal risk to society, such as video game algorithms and spam filters, and proposed leaving them unregulated.
Mass recognition: allowed in some cases
A separate ban covers facial recognition in public places, with exceptions for certain cases such as searching for missing persons, identifying criminals or preventing terrorist attacks.
Implementing regulations for these technologies is difficult for several reasons. First, facial recognition is already used very actively both in everyday life and in the operations of many companies and institutions. Owners of smartphones with Face ID use facial recognition dozens, if not hundreds, of times a day. At many airports in the European Union, citizens of EU countries can pass through passport control automatically thanks to recognition technology.
At the same time, in totalitarian regimes recognition technologies have become an instrument of repression and police abuse. That is why several US cities, including San Francisco, Boston and Portland, have banned their use.
Yevgeniy Poremchuk, Chairman of the NGO Electronic Republic and an expert on e-governance, explained in a comment that facial recognition (FR) is like the peaceful atom: in good hands, it can bring great benefit.
For example, it provides additional authentication in mobile applications, automatically indexes image and video files for media and entertainment companies, and enables human rights groups to identify and rescue victims of human trafficking. For the police and intelligence services, it is another way to identify criminals.
A common example of FR's usefulness is the company Marinus Analytics, which uses the artificial intelligence in Amazon Rekognition to develop tools such as Traffic Jam. These enable human rights agencies to identify victims of human trafficking and locate them. I think that in European countries the regulation of recognition technologies will be tightened, first of all in business. In such circumstances, services like Amazon Go are unlikely to come to Europe. The intelligence services, as usual, will not be affected by possible bans, especially against the backdrop of endless security incidents.

Yevgeniy Poremchuk
Founder of the IT company BWN Group
Assessing the danger of recognition technologies in Ukraine, Yevgeniy Poremchuk notes that Kyiv is on the verge of digital authoritarianism on the Chinese model.
According to him, this is no longer just a matter of confidentiality. A passport, for instance, is treated as an entry in a state register that belongs to the state and is managed by an official, not as a protected document existing in a single copy in the citizen's hands.
There is talk of online elections that, in our reality, would leave us without a choice. There is talk of installing police cameras that would allow officers to identify offenders. This worries me the most. Let me remind you that in China, recognition technologies in the service of the state have turned the country into a "digital GULag" (just recall the "digital concentration camp" of the Uighurs). I am sure that in countries with low trust in the authorities, corrupt courts and an ineffective law enforcement system, citizens will not benefit from the introduction of FR tools. On the contrary, the authorities will gain a new instrument of pressure on business, activists and the opposition.

Yevgeniy Poremchuk
Founder of the IT company BWN Group
European Union and global technological confrontation
Today the European Union can hardly be called a region that plays a significant role in technological development: it lacks influential global tech companies (with the possible exception of Spotify, and even that is a niche service). Europe is therefore trying to influence the IT market through regulation.
Its first attempt was the introduction of the GDPR, and the artificial intelligence laws continue in the same direction. It is also worth recalling the invalidation of the Privacy Shield mechanism in July 2020, which had allowed users' personal data to be transferred from Europe to the United States and stored on American servers.
Following on from that decision, European regulators may require that equipment storing Europeans' data be located in the EU, leaving companies that refuse no choice but to stop operating in EU countries.
Similarly, the EU may require companies using artificial intelligence tools to meet the requirements set out in the new document. Although it will take a long time before these bans are implemented, since for now this is only a draft law, the penalties for violating them are severe: fines of up to 6% of an offending company's global revenue.
Implementation problems
More than a hundred countries are trying to regulate artificial intelligence, Ukraine among them. However, such attempts run into the problem that notions like "reliability", "sustainability" and "transparency" are difficult to achieve, or to test, in practice.
Poremchuk is confident that, given a current market of $22 billion for face and voice recognition technologies alone, and a projected further $30 billion of capacity, the European Commission will not ban these technologies completely; at most it will somewhat narrow their scope of use.
According to some experts, the new regulations will significantly complicate the work of European startups in the field of artificial intelligence, making them uncompetitive in comparison with their Chinese and American counterparts.
Another problem is that some of the regulations are vague, according to Daniel Leufer, a European policy analyst at Access Now who works on digital rights. Even so, he sees them as an important step forward in identifying the potentially hazardous uses of these technologies.
Some uses of artificial intelligence have become such a routine part of everyday online life that restricting or banning them would affect many familiar services. Online advertising services, for example, analyze user preferences and behavior to decide which ads to show. On the one hand, the restrictions would prohibit showing ads for games or alcohol to people with addictions; on the other, they would change the very principles of digital ad targeting. It is therefore difficult to predict how these services would operate.
"All online advertising is a manipulation of human behavior. The purpose of bans is to determine what is acceptable and what is unacceptable," Avi Gesser, a partner at the US law firm Debevoise explains, emphasizing that it is in this area that prohibitions and restrictions will significantly affect BigTech's work.
If the EU succeeds in enacting genuinely workable regulations banning certain AI applications, then stories like Cambridge Analytica's use of data to manipulate voters during the Brexit campaign would become impossible. The question is how long it will take to implement these regulations, and whether the tech giants will find loopholes in the restrictions.