A scoping paper
Prepared by Trilateral Research Ltd on behalf of the SHERPA project
Introduction and background
There is a substantial body of existing literature (policy, academic and grey) on the regulation of SIS. These publications sometimes focus broadly on the regulation of AI and at other times on more specific aspects (e.g., safeguards for automated decision-making, algorithmic accountability, cyber skirmishes). Big data, too, is covered both generally and specifically. Currently, the SHERPA project seeks to:
- examine regulatory options relevant to SIS and highlight, for example, the advantages, risks and challenges, obstacles to implementation of such options, examples of success and failure, the roles of relevant actors and the impacts on SIS stakeholders.
- explore whether and how present and proposed regulatory interventions relating to SIS adhere to European aspirations for better regulation.
- analyse whether embedding ethics and human rights in SIS is best served by an interventionist and coercive approach or a flexible and facilitative one.
Purpose of this scoping paper
This scoping paper seeks your assistance to answer the following questions:
- Has our preliminary research identified all the regulatory options relevant for further analysis in the project?
- Are there any additional unidentified options we should consider, especially in the EU context?
- Are any of the identified options already dead in the water (i.e., unlikely to succeed in regulating SIS) and therefore not worth further analysis in SHERPA? (If so, why?)
- What additional criteria should be considered in the review and analysis of each of the identified options?
How to participate
To provide feedback on the above questions, the identified options, the criteria for assessment, or any further suggestions, please email Rowena Rodrigues (email@example.com) by 20 September 2019, with "SHERPA scoping paper feedback" in the subject line.
WE LOOK FORWARD TO YOUR VIEWS!
Identified regulatory options
Our preliminary research has identified the following proposed regulatory options for further study in SHERPA. These options have been proposed by a variety of stakeholders: elected officials, policymakers, regulators, the research community, civil society, projects active in the area (e.g., SIENNA) and the SHERPA project itself, based on its analysis of the legal issues and/or human rights challenges of AI and big data.
Fig: Identified regulatory options
Question: Are there any additional options to consider? Which of these options do not merit further analysis?
Criteria to analyse options
The SHERPA partners will analyse each of the above identified options using the following criteria, adapted from regulatory impact assessment (RIA) templates, to some extent the Better Regulation objectives and Toolbox questions, and other research analysing regulatory options. The criteria were pre-tested in July 2019 against two selected options and refined on the basis of that testing and internal discussions. We seek your help in finalising these.
Fig: Criteria for use in analysis of identified regulatory options
Question: Are the above criteria well-suited to analysing the regulatory options and their impacts? What additional criteria should be considered in our review and analysis (and why)?
RIAs are instruments to improve the quality of regulatory decision-making.
E.g., Clarke, Roger, “Regulatory Alternatives for AI”, Review Draft of 9 February 2019. http://www.rogerclarke.com/EC/RAI.html