The SHERPA project has published the first draft of its guidelines for the ethical development and use of smart information systems (SIS) in our online workbook. These build on recent documents, such as the report by the European Commission's High-Level Expert Group on AI, but go further. In particular, the guidelines are operationalised, meaning they can be more easily applied in context. Rather than simply stating, “AI should be unbiased,” for example, they offer a number of concrete guidelines to help mitigate bias in SIS. One of these reads as follows:
Engagement with users to identify harmful bias. In the deployment and implementation phase, assess and ensure that:
- there is a process that allows others to flag issues related to harmful bias, discrimination, or poor performance of the system, with clear steps and channels for communicating how and to whom such issues can be raised during the deployment of systems;
- there is transparency to end-users and stakeholders about how the algorithms may affect individuals, to allow for effective stakeholder feedback and engagement;
- where possible, methods for redress and feedback from end-users are implemented at all stages of the system’s life-cycle (e.g., in collaboration with the developing company).
One set of guidelines focuses on the development of SIS and is structured around the CRISP-DM (Cross-Industry Standard Process for Data Mining) model, while the other focuses on the use of SIS (that is, organisations using SIS for business ends) and is structured around the COBIT (a good-practice framework for IT governance and management) and ITIL (IT service management) models. Each of these models is widely used in business.
Anyone is free to read these guidelines, and we welcome comments and suggested alternatives on this webpage. The guidelines will be further developed by the partnering SIENNA project.