How can we ensure that AI is ethical and supports human rights?

To ensure that the benefits of artificial intelligence (AI) are harnessed and that the related ethical issues and human rights concerns are addressed, we propose recommendations in three areas. These aim to ensure that AI ecosystems are conducive to human flourishing:

  • Concepts: Delineate AI ecosystems
  • Knowledge: Establish and maintain a knowledge base
  • Governance: Institute appropriate governance mechanisms

This website explains these recommendations, provides examples and suggests next steps.

What is the Background of these Recommendations?

Artificial intelligence (AI)

AI is a set of powerful technologies that increasingly affect most countries, organisations and individuals. Like all technologies, AI can be used for different purposes. It holds the potential to hugely benefit individuals and society. For example, AI can help to better understand and cure diseases, to revolutionise transport, to optimise business processes or to reduce carbon emissions. At the same time, AI raises many ethical and social concerns, ranging from worries about biases and the resulting discrimination, to the redistribution of socio-economic and political power and the impact on democracy.

AI and Human Flourishing

The recommendations promoted here aim to ensure that AI supports human flourishing, and that the consequences of AI development and use allow individual human beings to live their lives freely and achieve their potential. This means that AI will support and strengthen human rights, and that the ethical issues raised by AI can be addressed. Human flourishing requires the protection of the individual person, but also calls for social structures that support individuals in achieving their potential.

Shaping AI ecosystems

AI consists of many different types of technology, which are developed, deployed and used by an array of different stakeholders. AI requires many components, including technical, social and organisational ones. It is therefore helpful to think about AI in terms of an ecosystem of interlinked stakeholders, technologies and processes.

These recommendations are based on the view of AI as a set of interlinking ecosystems. They propose ways in which AI ecosystems can be shaped to ensure they are beneficial to people, uphold human rights and, more generally, promote human flourishing.

What is Artificial Intelligence?

There are many definitions of Artificial Intelligence.

The Organisation for Economic Co-operation and Development (OECD 2019, p. 7) states that an:

“AI system is a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy.”

A similar policy-oriented definition by the European Commission (2020) suggests that:

“AI is a collection of technologies that combine data, algorithms and computing power.”

One of the most cited academic definitions is from Li and Du (2007, p. 1), which notes that AI combines:

“… a variety of intelligent behaviors and various kinds of mental labor, known as mental activities, … [to] include perception, memory, emotion, judgement, reasoning, proving, identification, understanding, communication, designing, thinking and learning.”

What do people mean when they say AI?

To better understand the discussion of ethics and AI it may be helpful to ask what people are referring to when they talk about AI. In discussions of ethics and AI there are at least three different meanings of the term:

  1. Machine learning is currently the most prominent “narrow” form of AI – a technology that replicates one specific aspect of natural intelligence, such as speech recognition or pattern classification. Machines can be very good at these individual tasks, but cannot easily apply them to other areas.
  2. AI is often used to mean larger socio-technical systems that incorporate and build on AI technologies, such as machine learning, but go beyond the immediate AI technology. Examples include autonomous vehicles (self-driving cars), credit rating systems, or commercial recommender systems, i.e. systems used in electronic commerce (e.g. eBay, Amazon) that recommend items to customers on the basis of prior purchases.
  3. General AI (or artificial general intelligence) stands for machines that can perform human-level cognitive functions. No such systems currently exist, but they figure strongly in the general literature and public imagination. They also serve as inspiration for AI research and development.

Why talk about Human Flourishing?

Most human beings want to live fulfilled lives that allow them to reach their potential, successfully meet challenges and, as far as possible, determine their destiny. Briefly, they want to flourish. These ideas have a long history in many ethical theories, principles and values and remain current in the 21st century. They are expressed in human rights frameworks, such as the Universal Declaration of Human Rights or the European Charter of Fundamental Rights. Flourishing is realised in the individual but typically requires supportive social environments.

What has flourishing got to do with AI?

Artificial Intelligence (AI), like all technologies, can be used for different purposes. From an ethical and human rights point of view there are three main purposes of AI use: efficiency and optimisation, social control, and human flourishing.

These three purposes are not contradictory or mutually exclusive, but point to different driving forces for development and utilisation of these technologies. Framing AI ethics in terms of human flourishing is consistent with numerous national and international ethics guidelines and principles, including those published by the EU’s High Level Expert Group (2019).

What Ethical Issues does AI raise?

SHERPA identified a significant number of ethical issues through 10 case studies, 5 scenarios, reviews of bodies of literature, an online survey with more than 300 respondents, 45 stakeholder interviews, and an expert Delphi study.

We use the term ‘ethical issues’ to denote the issues that our respondents perceived to be problematic. These explicitly include issues that are already covered by human rights and other legislation (e.g. privacy, discrimination), but also issues that are less clearly specified (e.g. transparency, loss of human contact) or closely related to technical aspects (e.g. security).


Machine Learning

Some ethical issues are directly related to AI in the narrow sense, most prominently to machine learning, which is currently often implemented through neural networks. This type of AI is characterised by opacity, unpredictability and, typically, the need for large data sets for training and validation.

Ethical issues linked to this type of AI include

  • bias
  • discrimination
  • security breaches
  • data protection issues

Socio-Technical Systems

This understanding of AI points to ethical issues arising from living in a digital world. These socio-technical systems appear to act autonomously, structuring the way humans can act, and have significant social impact.

They lead to ethical issues such as:

  • unequal access to power and resources
  • unfair distribution of the costs and benefits of technology
  • impact on warfare, and the killing of humans by machines

Artificial General Intelligence

Currently no AI exists that can be described as artificial general intelligence, i.e. a system with human-level cognitive capabilities. However, such systems figure prominently in the literature and in people’s imagination. They would potentially raise ethical issues such as:

  • hostility towards humanity by superintelligent machines
  • changing perceptions of humans based on close interaction with machines (e.g. neural implants)

The ethical issues linked to the three categories of AI listed above are different in nature and scope. Some may be subject to simple and straightforward resolutions, others will require political interventions, while some may be impossible to resolve and require continuous reflection.

Any recommendation that aims to address the ethics of AI needs to be aware of the breadth of issues and ensure that the recommendations are clearly delineated and the relevant concepts well defined.

Why talk about AI ecosystems?

AI consists of a multitude of technologies that are applied to many different tasks by a variety of stakeholders. Combined with the complexity of the ethical issues of AI, this means that there is no simple way to address the ethics of AI as a whole. We therefore need ways of thinking about AI that allow for a broader perspective.

One way of thinking about AI is to use the metaphor of ecosystems. This metaphor has been widely adopted; for example, the European Commission (2020) talks about an ecosystem of excellence in AI and an ecosystem of trust.

But what can be learned from the ecosystem metaphor that can be applied to addressing ethical issues of AI?

AI Ethics Stakeholders

There are different types and groups of stakeholders involved in ethical issues of AI:

  • Policy stakeholders including national, regional and international policymakers and those involved in implementing policies, such as research funders

  • Organisations including developers, deployers and users of AI, as well as organisations with special roles, such as standardisation bodies and educational institutions

  • Individuals such as technical experts and developers but also users and, importantly, people who don’t use AI but may still be affected by it

Characteristics of innovation ecosystems

Innovation ecosystems mirror natural ecosystems in that their boundaries are often difficult to define. Members of ecosystems co-evolve; they compete but they also collaborate and learn from each other. There are mutual interdependencies between members, but the relationships are typically dynamic.

Principles to Promote Flourishing

To intervene in AI ecosystems in ways that promote human flourishing, the SHERPA recommendations follow three principles:

  • the ecosystem in question needs to be clearly delineated, e.g. in terms of geography or technology, but also conceptually, i.e. with regard to a shared understanding of human flourishing
  • a successful AI ecosystem needs to develop and maintain a knowledge base covering technical as well as conceptual and procedural knowledge
  • governance interventions need to be adaptive, flexible and geared towards learning

