On 3 July 2018, the first SHERPA scenario planning workshop took place at Innovate UK in Brussels, with 22 representatives from 8 European countries and 17 different organisations (from academia, industry, civil society, standards bodies, and ethics committees), as well as policy officer Albena Kuyumdzhieva and other representatives of DG Research.
The workshop was chaired by David Wright (Director of Trilateral Research and author of several articles on scenario planning), while Matti Aksela (F-Secure Vice President in charge of AI) set the scene by outlining the current state of the art in the AI field. Amongst others, the workshop was attended by Felicien Vallet of the Commission Nationale de l’Informatique et des Libertés (CNIL); Julian Kinderlerer of the European Group on Ethics (EGE); Chiara Giovannini of the European Association for the Co-ordination of Consumer Representation in Standardisation (ANEC); Maja Brkan, Assistant Professor at the Faculty of Law, Maastricht University; and Roberto Cascella of the European Cyber Security Organisation (ECSO).
The SHERPA project is developing five scenarios exploring emerging AI systems that are likely to be implemented and socially relevant in the near future, around 2025. The scenarios aim to highlight the critical ethical, human rights, and cybersecurity issues arising from the use of such technologies in the future. They focus on the following domains:
- AI that mimics people
- AI in warfare
- AI in transport
- AI in law enforcement
- AI in education
The workshop focused on the first scenario, AI that mimics people, and is the first activity in a series of planned scenario iterations with a wider network of stakeholders. Each scenario will explore the underlying forces behind the factors affecting ethical, legal, social and economic aspects of everyday life in 2025. It will describe how such forces can be organised, evaluated and prioritised to identify indicators that forecast how the domain may evolve. SHERPA aims to create plausible scenarios that reflect the emergence of advanced technologies yet remain sufficiently grounded in reality. The purpose is to tease out the ethical and human rights issues and, in turn, help policymakers and other stakeholders deal with the issues raised in the scenarios by offering actionable information. Hence, SHERPA has devised an approach for developing “descriptive scenarios” that can be combined with backcasting to highlight the steps that policymakers and other stakeholders need to take now to reach a desired future and mitigate any undesired impacts.
Extensive stakeholder consultation will enable the consortium to develop stable scenarios that reflect societal visions for the contribution of AI in these domains and help unravel the key drivers that will determine their development. If you would like to be involved in one of our future scenario planning workshops and be part of the consultation process, please contact Tally Hatzakis, Senior Research Analyst at Trilateral Research Ltd, at tally.hatzakis@trilateralresearch.com.