A European Agency for AI: Terms of Reference


The current regulatory landscape for Artificial Intelligence (AI) in the European Union is fragmented. Concerns have been raised regarding cooperation, coordination and the consistent application of EU law. Though efforts are ongoing to reduce this fragmentation and develop a unified vision for AI, a stronger, central and independent body is needed to act as a proactive champion and to strengthen harmonisation, cooperation and consistency, especially at the EU level. To this end, the SHERPA project has proposed Terms of Reference for a European Agency for AI.

Context

The EU needs an independent European Union Agency to foster international cooperation on AI issues, provide much-needed clarity at the EU level, create a collaborative environment for AI policy and regulation, and promote good regulation at the Member State level. Such a body should:

  • support other EU-level institutions, particularly the European Parliament, the European Commission (along with already established committees) and national competent authorities in Member States, to ensure the ethical and human rights-compliant development, deployment and use of AI;
  • protect the rule of law; and
  • help foster a common regulatory vision for AI in the EU, and high levels of ethics, protection and safety.

Roles and Functions

The Agency should:

  • Make recommendations to the European Parliament, the Council and the Commission for legislative amendments and adjustments to strengthen the implementation and enforcement of legislation related to AI;
  • Identify potential red lines or restrictions for AI development, deployment and use that violate human rights and/or have significant negative societal impacts;
  • Develop and promulgate general guidance on legal concepts and regulatory issues of AI;
  • Set benchmarks for enforcement;
  • Support and advise EU-level institutions, bodies and agencies and national competent authorities in Member States to fulfil their ethical and human rights obligations and to protect the rule of law where AI is researched, commissioned, developed, deployed and used;
  • Maintain an AI risk alert system;
  • Assist in coordinating the mandates and actions of the national competent authorities;
  • Develop harmonised and objective criteria for risk assessment and/or conformity assessment, and monitor and/or coordinate the evaluation of such schemes;
  • Cooperate, liaise and exchange information, and promote public dialogue, best practices and training activities;
  • Ensure complementarity and synergy between its activities and other Community programmes and initiatives;
  • Promote the adoption of regulatory sandboxes; and
  • Promote the Union’s AI approach through international cooperation.

Proposed Structure and Governance

Below is an indicative structure (presented in line with features of existing decentralised Agencies) that should be adapted to meet the specific needs of AI regulation, taking into account the specific role of the national competent authorities.

[Figure: indicative structure and governance of the proposed Agency]

Operational Principles

The Agency should operate based on the following principles:

  • Principle of respect for human rights/human-centric approach;
  • Principle of independence and impartiality;
  • Principle of fairness;
  • Principle of transparency;
  • Principle of proactivity;
  • Principles of good governance, integrity and good administrative behaviour;
  • Principle of collegiality, inclusiveness and diversity;
  • Principle of cooperation;
  • Principle of efficiency and modernisation.

The Agency’s rules of procedure could be modelled upon the rules of procedure set out for existing Union agencies.

Reporting, Auditing, Evaluation and Review

Reporting by the Agency should be regular and transparent. The annual report should contain an independent section concerning the Agency’s diverse regulatory activities during that year. The auditing of accounts should be undertaken by an independent external auditor. There should be appropriate follow-up to findings and recommendations stemming from the internal or external audit reports and evaluations, as well as from investigations of the European Anti-Fraud Office (OLAF).

Every five years, the European Parliament, with the assistance of an independent external expert, should carry out an evaluation to assess the Agency’s performance in relation to its objectives, mandate and tasks. The evaluation should address the possible need to modify the Agency’s mandate and the financial implications of any such modification. The report should also evaluate the Regulation underpinning the Agency and may include proposals to amend it, taking into account developments in AI and the state of technological progress. Where the Parliament considers that the continued existence of the Agency is no longer justified with regard to its assigned objectives, mandate and tasks, it may propose that the Regulation be amended accordingly or repealed, after appropriate consultation of stakeholders and the Agency. The findings of the evaluation should be made public.

Key Considerations in Creating and/or Implementing the Agency

  • Making the Agency operational as soon as possible, even if on a provisional or pilot basis.
  • Having a strong underpinning legislative framework (establishing the Agency and its mandate, and setting clear boundaries and scope).
  • Ability to complement and support (not duplicate) the work of existing regulatory bodies.
  • Genuine independence and impartiality (e.g., guaranteed funding).
  • Ability to adapt to reflect technological developments and changing societal needs and expectations.
  • A structure that incorporates the right competencies and expertise, including multi-stakeholder representation from diverse backgrounds.

Implementation: A Call to Act Now!

It is critical that the EU institutions act as quickly as possible to explore options and include this proposal in the new AI legislation. It is important to make the Agency operational as soon as possible, even if on a provisional or pilot basis. This is to take into account the current increasing investment in AI research at the EU level, the increasing deployment and use of new AI-based systems (e.g., to address COVID-19) and the concurrent need for guidance on regulatory issues, appropriate restrictions and benchmarks for enforcement.

Further Reading

SHERPA, “Feasibility of a new regulator and proposal for a European Agency for AI”, Deliverable No 3.6, 30 October 2020. https://doi.org/10.21253/DMU.13168295.v2

Acknowledgement: This policy brief has been prepared by Trilateral Research for the SHERPA project, which has received funding from the European Union’s Horizon 2020 research and innovation programme, under grant agreement No 786641.

Version: 12 November 2020