Moving forward on regulating AI and big data in Europe
Stakeholder Perspectives on the Regulation of AI and Big Data
Although there are disagreements, important regulatory, policy and market actors recommend harmonised rules. Moreover, proposals for regulation almost always address ethical concerns and human rights. However, there is great variation in the specificity of regulatory proposals. Many push for a heavier rather than a lighter touch, but there are clear disagreements. Regulatory proposals often combine risk-based approaches with principle-based regulation. Industry understands that regulations are needed, but there is disagreement and ambiguity about self-regulation, co-regulation, or full regulation. The most common worries are that a heavy touch will restrict innovation, while a light touch will leave individuals and society exposed to severe risks to fundamental values or human rights. The challenge for any regulation is how to promote beneficial and responsible AI development and use, how to minimise the creation of bad AI or the misuse of AI technology, and how to increase its security (reliability and resilience).
Three Main Regulatory Trends
- A commonly recognised need for AI regulation, soft or hard, ideally at a supra-national level
- Proposals for the creation of a regulatory agency/body mainly with soft law powers
- Calls to review the existing legal framework and either revise it to address the challenges and risks of AI or provide for specific legal acts or other instruments (such as frameworks or codes of conduct and tools) to specifically govern AI
Regulatory Options that could be Applied at EU Level
Using a pre-defined set of criteria, the SHERPA project analysed a variety of regulatory options proposed by policymakers, regulators, the research community, civil society, and projects active in the area (e.g., SIENNA and SHERPA), based on reviews and analyses of the legal issues and human rights challenges of AI and big data.
Limitations, Risks And Challenges For The Adoption And Implementation Of The Proposed Options
There are several limitations, risks and challenges for the adoption and implementation of the proposed options.
Findings And Recommendations: Potential Courses Of Action For Policy-Makers
The regulatory traffic-light system below indicates actions for policy-makers and legislators based on SHERPA findings. STOP indicates activities that should cease, or actions and considerations to promote their cessation. WATCH indicates where we need to step carefully, carry out further research, and watch developments. GO indicates actions that could be taken immediately to boost beneficial and responsible AI.
STOP
- Highly restrictive regulatory prescriptions that excessively and disproportionately limit innovations
- Fostering AI safe havens (in countries with dubious ethics and human rights credentials) fuelled further by knee-jerk political/regulatory responses
- A ‘regulate first, ask questions later’ culture in medium- to low-risk cases, especially where technical solutions or setting standards might be better placed to address concerns
- Setting up agencies or bodies without monitoring and enforcement teeth
- Funding/procuring AI/big data research/innovations that are not responsive to social, ethical and human rights concerns
WATCH
- Ensure that regulatory measures are proportional, practical and effective (regulating what they aim to achieve; aims- and outcomes-based)
- Monitor developments in specific fields to determine where and what type of regulation is most needed
- Wait for broader acceptance of the idea of machine consciousness, or for autonomous systems to be deployed more widely, before an EU-level special list of robot rights is introduced
- Re-evaluate (as technology advances) how currently implemented regulatory measures are addressing risks and impacts and the gaps in self-regulation
- Explore/develop a position on privatisation of regulation and regulatory capture in AI and big data
- Adapt regulatory/policy and strategy to move fluidly with developments in new technologies
- Consider whether anti-trust regulations could be used to counter disempowering abuses of dominant positions, e.g., through fines or by mandating that some activities be blocked as illegal
GO
- Support secure, ethical, human rights-respectful, responsible AI via implementation/promotion of privacy by design, data protection by design and default, ethics by design, human rights impact assessments, and/or algorithmic impact assessments during R&D and procurement
- Implement a ban/moratorium on the use of lethal autonomous weapons systems (LAWS)
- Explore the establishment of a general fund for smart robots and common Union registration of robots
- Set up schemes for voluntary/mandatory certification of algorithmic decision-making systems
- Establish centralised safeguards and mechanisms for monitoring emerging risks and abuses (‘risk alarms’), especially with respect to vulnerable populations – children, minority communities, and the elderly
- Carry out an EU-wide Member State survey on whether further regulation of AI is needed, and for what purpose and in which field/industry
- Increase general public awareness of the risks and impacts of AI via mass media campaigns, including campaigns to counteract misinformation (e.g., risk exaggeration)
Key Considerations for Regulating AI and Big Data
Strike a balance between enabling beneficial AI and risk mitigation
Consider seriously the possibility of regulatory failure and the amplification of risks due to reckless, casual, or unconsidered adoption of laws to regulate AI, or even the adoption of bad AI laws.
Smart mixing for good results
Find a smart mix of instruments (i.e., technical measures, standards, law, and ethics) in consultation with stakeholders to facilitate responsible innovation.
Super-security for high-risk/high-impact AI
Support super-secure AI where it has a high likelihood and high severity of risk and impact on the rights and freedoms of individuals, especially vulnerable populations – children, minorities and the elderly.
- SHERPA, Regulatory options for AI and big data, December 2019. https://doi.org/10.21253/DMU.11618211
- SHERPA workbook: https://www.project-sherpa.eu/workbook/
Acknowledgement: This policy brief has been prepared by Trilateral Research for the SHERPA project, which has received funding from the European Union’s Horizon 2020 research and innovation programme, under grant agreement No 786641.