An ecosystem perspective on the ethics of AI and emerging digital technologies

How can the economic and social benefits of artificial intelligence (AI) be strengthened while its ethical and human rights risks are addressed? This question drives the current policy debate, exercises researchers and companies, and interests citizens and the media. It is the question to which the upcoming book by Bernd Carsten Stahl provides a novel answer. Drawing on the work of the EU project SHERPA, Stahl’s book proposes that the theoretical lens of innovation ecosystems allows us to make sense of empirical observations of the role of AI in organisations and society. This perspective furthermore allows practical and policy conclusions to be drawn that can guide action to ensure that AI contributes to human flourishing.


At the one-hour book launch, co-organised by Springer and DMU, Prof. Bernd Stahl will face critical questions from a high-profile panel featuring Prof. Katrin Amunts, Prof. Stéphanie Laulhé Shaelou and Prof. Mark Coeckelbergh, moderated by Prof. Doris Schroeder. The panel discussion will include a question and answer session open to members of the audience.


The book is published using an open access model. This means that members of the audience who find the argument interesting and would like to engage with it in more detail will be able to download it for free.

Speakers

Prof. Bernd Stahl

Prof. Bernd Stahl is Director of the Centre for Computing and Social Responsibility and the Leader of the SHERPA Project.

Prof. Katrin Amunts

Prof. Katrin Amunts, a neuroscientist, is Director of the Cécile and Oskar Vogt Institute of Brain Research at Heinrich Heine University Düsseldorf, Director of the Institute of Neuroscience and Medicine at Research Center Jülich, and Scientific Research Director of the European flagship Human Brain Project.

Prof. Stéphanie Laulhé Shaelou

Prof. Stéphanie Laulhé Shaelou is Professor of European Law and Reform and Head of the School of Law at UCLan Cyprus. Her areas of specialist expertise include human rights and artificial intelligence. She received the European Parliament’s European Citizen’s Prize 2020 as founder of the ‘Social Mediation in Practice’ project.

Prof. Mark Coeckelbergh

Prof. Mark Coeckelbergh is Professor of Philosophy of Media and Technology at the University of Vienna and Vice-Dean of the Faculty of Philosophy and Education. He is the author of 14 books including “AI Ethics” (MIT Press) and “Narrative and Technology Ethics” (Palgrave).

Juliana Pitanguy

Juliana Pitanguy is Publishing Editor at the Geography and Sustainability Research department at Springer, part of Springer Nature.

Prof. Doris Schroeder

Prof. Doris Schroeder is Director of the Centre for Professional Ethics, UCLan, and Professor of Moral Philosophy. Doris will moderate the discussion.

Additional Questions

During the book launch a number of audience questions were answered, but the available time was too short to respond to all of them. Responses to the remaining questions are therefore given below:


The recent EU AI legislation does not address the application of AI to military purposes (e.g. autonomous weapons). This is certainly not an oversight. How much can (should?) ethics drive or limit European research in this area, given that other world powers may not have the same “ethical questioning” as the EU?

Response:

The proposed Regulation does indeed explicitly exclude military uses of AI from its remit. I guess this is related to the growing political desire within the EU to strengthen its common security and defence policy. The use of AI is certainly of interest for military purposes. This includes the highly contentious use of lethal autonomous weapon systems, but also many other applications, for example reconnaissance, battlefield surveillance and logistics. Lethal autonomous weapon systems are a high-profile topic and there are serious international efforts to regulate and even outlaw them, often following the logic of existing international treaties covering other weapon systems, such as landmines, chemical weapons or nuclear weapons.

I think there are good ethical arguments against autonomous weapons. In the SHERPA project, the artist in residence, Tijmen Schep, has developed an autonomous smart (toy) gun, to demonstrate the potential pitfalls of such weapons (https://www.sherpapieces.eu/overview/sherpa-smart-guns).

We also need to be clear, however, that this is not just an ethical decision but in essence a political one. Ethics has a role in providing arguments, offering evaluations and clarifying positions, but it has to concede that the eventual decisions are made on the political level. The EU makes strong claims about being a community of values and one would hope that these values inform all its political decisions, including those concerning military technology. Research can query and test these claims and highlight inconsistencies but cannot and should not dictate political decisions.


Why does the Human Flourishing theory function better than others (such as absolutism or utilitarianism) to tackle the ethical issues of AI? Consider an area of research such as lethal autonomous weapons, for example, which would hardly help humans to flourish but rather achieve destruction.

Response:

I have chosen the ethical concept of flourishing because I think it exemplifies the desire to allow individuals and groups to live a life according to their own design, which I think is represented in liberal democratic states. I also think flourishing is compatible with other ethical theories, so I don’t reject utilitarian or deontological ethics, but found flourishing more suitable for the argument in the book.

The reference to flourishing in the context of lethal autonomous weapons resonates with the previous question and I agree that being killed by an autonomous weapon would be incompatible with human flourishing. That is a good reason why I am sceptical about autonomous weapons. Having said that, I do think that one could make legitimate ethical arguments about autonomous weapons that might shorten war or prevent it by deterring an enemy from attacking. Such arguments could be made in a way that is similar to the justification of nuclear weapons. I am not suggesting that this would be my position, but I believe that such an argument would warrant careful consideration.


The so-called Social AI is far from being for social good. We see some efforts, such as fake news detection, but it seems very limited compared with the business use of AI… Is it going to become more balanced?

Response:

What you call social AI, and what I think is more frequently referred to as “AI for good”, comprises a number of avenues that aim to ensure that AI fulfils some generally accepted ethical good, such as the promotion of the UN’s Sustainable Development Goals or the furthering of human rights. Such AI for good is often used to justify the development of AI. I fully agree with your observation that AI for good is a marginal phenomenon when compared with the much more widely used AI for profit maximisation.

In the book I distinguish three purposes of the use of AI: AI for optimisation (leading to profit), AI for social control and AI for human flourishing. These do not have to be contradictory and there are many examples where they overlap. COVID-19 track and trace programmes have shown, for example, that social control can be central to human flourishing by helping to avoid infections. Similarly, economic gain can be a crucial component of individual and social wellbeing, in particular if profits are distributed appropriately.

The book argues that AI ecosystems should recognise the different possible purposes and need to be empowered to pursue AI for human flourishing. How to do this is a complex question, but it requires a clear delineation of the ecosystem, a sustainable knowledge base and appropriate governance structures. If these conditions are in place, then at least the AI ecosystems are in a position to pursue flourishing rather than simply focus on optimisation and profit maximisation.


Perhaps this has already been addressed, but from my publisher’s view: how do we make sure that AI does not diminish human value? It seems to me that there is a tendency for technology to take over and that people no longer think because of the aids they have.

Response:

There is no doubt a significant possibility that AI can harm humans and reduce flourishing. How to avoid this is the main topic of the book and the recommendations develop some principles and individual steps that should go some way towards avoiding such negative outcomes.

The final part of your question is crucial here: how can we avoid a situation in which AI (or any other technology) structures our possible actions, fades into the background and is no longer perceived as an influence on what humans can do? This is a particular danger of current AI, as it tends to work in the background and its consequences for humans are often difficult to detect. I think this is the reason why the proposal for the AI Regulation categorises some applications as high risk, covering financial decisions, decisions about educational attainment, legal decisions and access to public services. In all of these cases AI could structure human actions without being perceived or subjected to scrutiny.

A key aspect of the response will thus be transparency and openness: an ability of humans to understand what role technology plays in their lives. We will not be able to avoid the technical structuring of human action, but we should at least be aware of it and, where appropriate, be given a choice as to what we accept.