Perspectives on the Regulation of AI and Big Data
Our literature review (2017-2019) revealed that, despite disagreements, important regulatory, policy and market actors recommend harmonized rules. Moreover, proposals for regulation almost always address ethical concerns and human rights. However, there is great variation in the specificity of regulatory proposals. Many push for a heavier rather than a lighter touch, but there are clear disagreements. Regulatory proposals often combine risk-based approaches with principle-based regulation. There is an understanding in industry that regulations are needed, but there is disagreement and ambiguity about self-regulation, co-regulation, or full regulation. The most common worries are that a heavy touch will restrict innovation, while a light touch will leave individuals and society exposed to severe risks to fundamental values or human rights. The challenge for any regulation is how to promote beneficial or responsible AI development and use, how to minimize the creation of bad AI or the misuse of AI technology, and how to increase its security (reliability and resilience).
3 Main Regulatory Trends
- A commonly recognised need for AI regulation, soft or hard, and, ideally, at a supra-national level
- Proposals for the creation of a regulatory agency/body mainly with soft law powers
- Calls to review the existing legal framework and either revise it to address the challenges and risks of AI or provide for specific legal acts or other instruments (such as frameworks or codes of conduct and tools) to specifically govern AI
Regulatory Options that could be applied at EU-Level
Using a pre-defined set of criteria, the SHERPA project analysed a variety of regulatory options proposed by policymakers, regulators, the research community, civil society, and projects active in the area (e.g., SIENNA and SHERPA), based on reviews and analyses of the legal issues and human rights challenges of AI and big data.
Benefits and Added Value
The added value of the proposals lies in their potential, as governance mechanisms, to address unresolved gaps such as the lack of legal, regulatory and technical standards for AI. Each of the proposals has its own benefits (see full report). Some specific benefits include, e.g., promoting cooperation on AI/big data legal issues and providing clarity at the EU level (EU Taskforce); ensuring consistency and clarity (adoption of common Union definitions); enhancing trust (certification); shortening the feedback loop between new regulatory schemes and new technologies (regulatory sandboxes); and facilitating explanations of the lawfulness, fairness and legitimacy of certain decisions (algorithmic impact assessments).
Limitations, Risks and Challenges for the Adoption and Implementation of the Proposed Options
Our research identified several limitations, risks and challenges for the adoption and implementation of the proposed options which should be considered (for detailed analysis see the Regulatory Options report).
Findings and Recommendations: Potential Courses of Action for Policy-Makers
The regulatory traffic-light system below indicates moving-forward actions (some already under consideration and some novel) for policy-makers and legislators, based on SHERPA findings. STOP suggests a need to cease a particular activity, or to initiate actions and considerations that promote its cessation. WATCH indicates areas where we need to tread carefully, carry out further research and watch developments. GO suggests actions that could be taken immediately to boost beneficial and responsible AI.
Key Considerations for Regulating AI and Big Data
Strike a balance between enabling beneficial AI and risk mitigation
– Consider seriously the possibility of regulatory failure and the amplification of risks due to the reckless, casual or unconsidered adoption of laws to regulate AI, or even the adoption of bad AI laws.
Smart mixing for good results
– Find a smart mix of instruments (i.e., technical standards, law and ethics) in consultation with stakeholders to facilitate responsible innovation.
Super-security for high-risk/high-impact AI
– Support super-secure AI where there is a high likelihood and high severity of risk and impact on the rights and freedoms of individuals, especially vulnerable populations such as children, minorities and the elderly.
• SHERPA, Regulatory options for AI and big data, December 2019. https://doi.org/10.21253/DMU.11618211
• SHERPA workbook: https://www.project-sherpa.eu/workbook/
This policy brief has been prepared by Trilateral Research for the SHERPA project, which has received funding from the European Union’s Horizon 2020 research and innovation programme, under grant agreement No 786641.