Conformity assessment or impact assessment: What do we need for AI?

B. C. Stahl

In its recently published proposal for a Regulation laying down harmonised rules on artificial intelligence (AI), the European Commission (2021) proposes a set of activities, rules and processes that aim to ensure that AI meets the needs, requirements and expectations of European citizens. The Regulation is risk-based and aims to provide a light-touch approach for low-risk AI systems but recognises that some AI systems pose higher risks for safety, security and human rights. These high-risk systems are subject to specific requirements, including so-called conformity assessments that are required for companies to bring their AI systems to the European market.

A new AI system may have to undergo ex ante conformity assessments required under existing regulatory frameworks, such as those related to safety, in addition to the assessment specifically required for high-risk AI systems. These high-risk AI conformity assessments can usually be carried out by the provider of the system, but in some specific cases, such as AI systems involving remote biometric identification, the conformity assessment will have to be undertaken by a “notified body” designated by the national competent authorities. In all cases, the conformity assessment will need to be supported by clear, publicly declared data, and the European Commission will host a database of all conformity assessments. If the system in question undergoes significant change, the conformity assessment must be updated.

The requirements for a high-risk AI system that requires a conformity assessment are laid out in Title III Chapter 2 of the proposed Regulation. The providers of such systems must establish, implement, document and maintain a risk management system. Where the AI systems involve the training of models with data, which is the case for most current machine learning systems, they must institute a data governance regime that ensures that the data does not contain errors or (unconscious) biases. The high-risk AI systems need to be accompanied by detailed technical documentation and record-keeping. The systems’ operation must be sufficiently transparent for users to interpret their output. All high-risk AI systems must be designed in a way that allows natural persons to oversee them. They must be accurate, robust and secure. 

Overall, the approach that the EC has suggested for high-risk AI appears to be a reasonable attempt to respond to current worries about the ethics of AI while safeguarding the economic and other potential benefits of AI. It does, of course, raise many questions that may only be answered if and when the Regulation takes effect. The ‘devil will be in the detail’. For example, how exactly will a data governance system need to be implemented in order to be deemed acceptable? How can it be proven that a dataset does not contain errors? What level of record-keeping will be sufficient? What does it mean for a natural person to oversee a highly complex system?

One question worth reflecting on in more detail, however, is how the proposed conformity assessment relates to the well-established approach of impact assessment. In the SHERPA project we have proposed the creation of an AI impact assessment as a response to public concerns about AI. This was motivated by the observation that there are numerous types of impact assessment in areas ranging from the environment to human rights. Such impact assessments are also typically undertaken ex ante, i.e. before an activity is initiated, with a view to ensuring that negative consequences can be avoided. Impact assessments can thus be seen as ways of managing risks, which is consistent with the EC’s conformity assessment of AI.

One striking difference between the EC’s conformity assessment and an impact assessment is the focus on substantive consequences. Impact assessments engage with possible consequences, be these environmental degradation or threats to human rights. The EC’s conformity assessment does not engage with the negative consequences that it aims to avoid, beyond the general motivation for the Regulation overall to avoid risks to the health, safety and fundamental rights of persons. The conformity assessment focuses on procedural aspects, such as the existence of data management or reporting processes, without going into much detail on what these should achieve. The work of the EC’s High-Level Expert Group on AI, including its ethics guidelines (AI HLEG, 2019) and the assessment list it produced (AI HLEG, 2020), is referenced in the proposed Regulation but does not play a major role in its structure. The Assessment List for Trustworthy AI (AI HLEG, 2020) is of particular interest here, because it represents an impact assessment for AI that aims to identify possible substantive concerns.

The key question arising from this brief comparison between conformity assessment and impact assessment is whether the rather formalistic approach suggested by the proposed Regulation will be sufficient to address the substantive concerns that have been raised about AI. The reasoning of the EC may be that the substantive aspects are already covered by other bodies of law, from data protection law to product liability and even criminal law. If this is the case, then the proposed conformity assessment may fulfil an important role, but one could argue that it will need to be supplemented with more content-oriented impact assessments to ensure that the concerns that AI raises are fully addressed.


AI HLEG. (2019). Ethics Guidelines for Trustworthy AI. European Commission – Directorate-General for Communication.

AI HLEG. (2020). Assessment List for Trustworthy AI (ALTAI). European Commission.

European Commission. (2021). Proposal for a Regulation on a European approach for Artificial Intelligence (COM(2021) 206 final). European Commission.