Regulatory Framework

What is the problem?

The protection of human rights and safety is a societal priority. Soft law initiatives do not address legal issues such as liability and redress for harms arising from AI, even though they may usefully complement legislation in some contexts. Additionally, EU Member States are developing and implementing their own regulatory frameworks, contributing to further fragmentation and confusion.

Who should act?

The European Commission, European Parliament and the European Council.

The Recommendation

The European Commission, European Parliament and the European Council should set a strong legal standard at the EU level that establishes a baseline and encourages high standards of protection of fundamental rights and societal values.

The EU should develop a mandatory regulatory framework to ensure that AI systems are safe and do not violate fundamental rights and ethical principles. This framework should include ex-ante (before deployment) and ex-post (after deployment) enforcement mechanisms, and should have a high degree of detail and specificity so that developers and users understand their legal obligations, and EU and national authorities can monitor compliance.

The regulatory framework should include provisions for:

  • Establishment of a European Agency for AI;
  • Centralised safeguards and mechanisms to identify and monitor risks and abuses (‘risk alarms’) of particular AI applications in specific sectors and use cases, particularly with respect to vulnerable populations (e.g., AI impact assessment);
  • A voluntary labelling scheme and/or certification requirements;
  • Mandatory compliance requirements for all or certain types of AI with high societal impacts and their enforcement;
  • Complaint and redress-by-design mechanisms;
  • Responsible development, implementation and use (e.g., through privacy by design, data protection by design and by default, and ethics by design);
  • Prohibitions or limits (red lines) on certain uses and applications (e.g., ban/moratorium on the use of lethal autonomous weapons systems (LAWS), mass urban biometric surveillance);
  • Measures to address power imbalances.

Key Considerations

  • Balance between precise language for legal clarity and the need for adaptability to accommodate technological developments and changing societal expectations.
  • Strict enough to achieve desired outcomes, but without making the cost of compliance too high for small and medium-sized enterprises (which could risk further entrenching power asymmetries with Big Tech).
  • Need to act quickly, but decisions must be informed by broad stakeholder consultation.
  • The overarching aim of the regulatory framework is to safeguard fundamental rights.