- Prof Stéphanie Laulhé Shaelou, Professor of European Law and Reform; Head, School of Law, University of Central Lancashire, Cyprus campus (‘UCLan Cyprus’); EU-POP Jean Monnet Module Leader and Academic Lead (https://eupopulism.eu/); legal expert on the Horizon 2020 Sherpa project on Smart Information Systems and Human Rights (https://www.project-sherpa.eu/); and alumna Fellow, Law Department, European University Institute, Florence (Summer 2021)
- Constantinos Alexandrou, Researcher, School of Law, UCLan Cyprus
This blog post reflects the authors’ preliminary views on the Regulation Laying Down Harmonised Rules on Artificial Intelligence, published by the European Commission on 21 April 2021. The increasing use of AI in every aspect of daily life makes regulation a necessity. As one of the first initiatives globally of this scale to regulate AI, it is of paramount importance that it be as comprehensive and transparent as possible. The text, however, leaves certain grey areas, which may lead certain important factors to appear overlooked to the public eye.
A first point of concern is that it may not be entirely clear how the Regulation interacts with other regulatory initiatives at the EU level, notably the Digital Services Act. Even though it is expressly stated that the Regulation is consistent with the Digital Services Act, it is not explained how AI as presented in the Regulation relates to the algorithms used by intermediary services under the Digital Services Act. This brings forward another point to draw attention to, namely the concept of ‘high risk’ AI itself. The term ‘high risk’, as defined in the Regulation and its Annexes, leaves an (intentional?) gap as to what constitutes ‘average risk’ or ‘low risk’ AI.
On the one hand, the Digital Services Act deals with services that host or transmit information online, which encompasses primarily social media platforms. On the other hand, the ‘Regulation for a European Approach for Artificial Intelligence’ apparently deals with ‘high risk’ AI used in ‘high risk’ activities. According to the risk-based approach of the Regulation, AI systems are classified as creating ‘unacceptable risk’, ‘high risk’, and ‘minimal risk’. The only section of the Regulation that applies to AI systems other than ‘high risk’ AI is Title IV on the ‘Transparency Obligations for Certain AI Systems’, which essentially requires AI systems to inform natural persons that they are interacting with an AI system. Other important prerequisites, including requirements, obligations of providers and users, and conformity assessments, apply only to ‘high risk’ AI.
Thus, AI classified as ‘low or minimal risk’ may not be sufficiently addressed, and its effective regulation may appear missing in the expert and public eye. Such AI arguably includes smart assistants, such as Google Assistant, Apple’s Siri and Amazon’s Alexa, which run on millions of devices daily. By leaving the regulation of non-high-risk AI primarily to self-regulation, it is argued, a huge market of AI is left essentially unclassified. A question that arises from the risk-based approach of the Regulation is the gap between ‘high risk’ AI and ‘low risk’ AI, as no ‘average risk’ category is created or expressly referred to. The current Regulation therefore appears to have loopholes, which hardware and software AI manufacturers could use to circumvent certain requirements and maximise practices such as data collection. To the extent that the risk-based approach is the chosen method of regulation, we therefore call on the European Commission to take further steps towards the enlargement of the risk-based approach for AI, and to introduce equivalent or proportionate standards for ‘average’ and ‘low or minimal risk’ AI, whether in the current Regulation or by virtue of a new instrument of secondary legislation specifically for such AI. This would enable the Commission to produce a complete EU AI strategy.
AI and its impact on competition
Another point to which the Commission could turn more attention is the competition rules in the digital sphere. It is well known and documented that certain technology giants have influential power in all aspects of technology. A case in point is the penalty the Commission imposed on Google for promoting its own services in search results while reducing the rankings of competing services through its algorithm. Considering the increased use of AI in services, it is evident that AI and algorithms have a far-reaching effect nowadays. As such, they may enable large tech companies to gain unfair competitive advantages over rivals, as well as modify the intellectual property and other business rights landscape in the EU and beyond. This can be done through AI-generated creations and search engines using algorithms of their own to prioritise certain results. It is emphasised again that the scope of the proposed Regulation should be expanded. The Commission should take steps to discourage and/or further frame such practices and thereby create a pluralistic digital environment for tech companies and consumers alike, while balancing citizens’ other interests in the digital world.
Introduction of new rights
In the European Digital Public Legal Order, the Commission should also consider the introduction of new rights in the Regulation, to ensure that in effect ‘AI is safe, lawful and in line with EU fundamental rights’. The new rights proposed derive from the rapid development and global adoption of AI systems, which have been integrated into digital life, and beyond. It is proposed to formulate digital versions of modern rights deriving from more conventional rights, such as the rights to respect for private and family life and to the protection of personal data, enshrined in the EU Charter of Fundamental Rights, and analogous to the right to be forgotten as implemented in the GDPR.
The right not to be manipulated and the right to be neutrally informed online appear very important, as algorithms today handle global flows of information and misinformation. Such rights would constitute an additional and necessary safeguard to ensure that the AI in question adheres to the principles of freedom of information. The right to meaningful human contact goes beyond the transparency obligation in the Regulation, or human oversight. Where autonomous AI makes critical decisions, such as in medical contexts where meaningful human contact plays a crucial role, patients must have the right to have that decision explained to them by a human. The Charter of Fundamental Rights should play a more central role in the Regulation, serving as the basis for new rights for the digital age.
Although the Regulation is an important step towards regulating AI, it lacks a wider perspective. The connection between the AI described in the Regulation and the algorithms addressed in the Digital Services Act should be clarified. What is more, the risk-based approach followed in the Regulation leaves a lot to be desired. Not only does ‘low risk’ AI face a much lower threshold of requirements, but it also lacks a clear definition. There is, moreover, a noticeable gap between ‘high risk’ AI and ‘low risk’ AI, as there is no mention of ‘average risk’ AI. This loophole is one that must be addressed before the Regulation enters into force, as it can create legal uncertainty. Certain aspects of AI, such as its impact on competition, have been overlooked; considering the control that big tech companies could exercise through their AI, the Commission should reflect on the wider effects of AI. Finally, the Commission should consider introducing new rights in the Regulation, similar to the right to be forgotten in the GDPR. The rights proposed are the right not to be manipulated, the right to be neutrally informed online and the right to meaningful human contact.
To see our feedback directly to the Commission on this proposed Regulation, visit: https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/12527-Artificial-intelligence-ethical-and-legal-requirements/F2665299_en