What is the problem?
Despite much academic research, standardization activities and policy-oriented work (notably including the HLEG (2020) Assessment List), there is no universally accepted baseline of what an AI impact assessment should entail.
There is broad agreement that a risk-based approach to AI is appropriate. For such an approach to work, there must be guidance on how to define, measure, interpret and address relevant risks.
What is needed?
Provide clear guidance on:
- Processes for conducting and evaluating AI impact assessments
- Measures and metrics for AI impacts
- Determination of risk level for a technology or application
- Issues to be included in AI impact assessments
This is a composite recommendation, requiring further work on different subject areas and processes. The recommendation is therefore broken down into sub-recommendations:
Define processes and metrics for AI impact assessment
Problem: In many cases it is not clear whether a new technology has net positive or negative consequences for society. One key problem is that there is no agreed way to measure and compare different consequences.
Recommendation: Fund research to develop appropriate measures for broader ethical, social and other impacts, and to design relevant impact assessments for AI technologies.
Who and when: European Commission and national funding organisations, when preparing AI-related calls.
Include determination of risk level in impact assessment
Problem: There is no agreement on how risk in AI is assessed or measured.
Recommendation: The guidance for AI impact assessment should include provisions on how to assess and measure risks.
Fund research to develop a rigorous risk classification for AI.
Who: European Commission.
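To make the notion of a "risk level" concrete, one conventional approach is a likelihood-by-severity risk matrix. The following Python sketch is purely illustrative: the categories, weights and thresholds are hypothetical placeholders, not anything prescribed by this recommendation or by existing AI guidance.

```python
# Illustrative likelihood x severity risk matrix.
# All categories and thresholds below are hypothetical placeholders.

LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3, "almost_certain": 4}
SEVERITY = {"negligible": 1, "moderate": 2, "severe": 3, "critical": 4}

def risk_level(likelihood: str, severity: str) -> str:
    """Map a (likelihood, severity) pair to a coarse risk level."""
    score = LIKELIHOOD[likelihood] * SEVERITY[severity]
    if score >= 9:
        return "high"
    if score >= 4:
        return "medium"
    return "low"

# Example: an application judged likely to cause severe harm.
print(risk_level("likely", "severe"))  # -> high
```

A rigorous classification for AI would of course need research-backed categories and context-sensitive thresholds; the point of the sketch is only that "risk level" can be operationalised as a defined function of measurable inputs.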
Provide guidance on assessment of specific issues
Problem: There are numerous ethical and human rights issues arising from AI. An impact assessment needs to be informed by current research on these issues.
Recommendation: Establish a publicly accessible and current knowledge base for ethical issues of AI to inform AI impact assessment.
Who: researchers on AI and related topics, including AI centres of excellence in research (contributors); EU Agency for AI (coordination).
AI impact assessment can build on and incorporate numerous existing impact assessments, including:
- Data protection impact assessment
- Algorithmic impact assessment
- Human rights impact assessment
- Socio-economic impact assessment
- Environmental impact assessment
- Ethics impact assessment
- Responsible innovation assessment
Initial AI impact assessments already exist, for example the IEEE 7010-2020 standard (Recommended Practice for Assessing the Impact of Autonomous and Intelligent Systems on Human Well-Being) and the Dutch ECP AI Impact Assessment.
AI impact assessment should not be a one-off a priori activity, but an ongoing process that is regularly reviewed and updated.
AI impact assessment could be implemented as part of due diligence processes.
SHERPA has provided accounts of ethical and human rights issues of AI in organisations as part of its case study research. Likely future issues are discussed in the SHERPA scenarios.