Human Rights and Ethical Instruments
Various international treaties recognise and protect rights that help build and sustain a democratic and liberal society. At the European level, for example, Article 11 of the EU Charter of Fundamental Rights and Article 10 of the ECHR protect freedom of expression and information, which are vital to the healthy functioning of democracy and freedom of thought and to guarding against control and manipulation.
The organisations Article 19 and Privacy International are particularly concerned about the impact AI will have on the rights to freedom of expression and information which, they argue, are essential foundations for open and democratic societies and among the basic conditions for progress, as well as for each individual’s self-fulfilment.
The Positive Side of SIS and Democracy, Freedom of Thought, Control and Manipulation
The positive influence of SIS is remarkable, and a few key aspects outline its significant impact, namely its capacity:
- to enable user-controlled information filters;
- to support social and economic diversity;
- to improve interoperability and collaborative opportunities;
- to create digital assistants and coordination tools;
- to support collective intelligence, and
- to promote responsible behavior of citizens in the digital world through digital literacy and enlightenment.
The Negative Side of SIS and Democracy, Freedom of Thought, Control and Manipulation
Despite the positive role SIS can play in protecting democracy and freedom of thought and in guarding against control and manipulation, there are ethical and human rights concerns about the following:
- Lack of respect for the rule of law: industry initiatives around AI are narrowly focused on the development of technical standards and ethical frameworks, which may neglect the rule of law.
- Lack of accountability regarding freedom of expression, owing to the machine-learning nature of AI. This was well exemplified during the COVID-19 pandemic, when the increased use of AI by social media platforms led to legitimate pieces by, inter alia, the BBC being removed. Furthermore, the very nature of human communication is so complex and nuanced that AI cannot (as yet) live up to it.
- Data collection and use, which raise concerns about freedom of expression: the use of AI leads to a variety of phenomena, ranging from personalized advertisements to electoral pressure, and AI systems may also reflect bias and prejudice in the manner in which they have been programmed.
The above threats can lead to a centralization of artificial intelligence capable of influencing what we know, what we think and how we act.
Additionally, the danger of manipulation, which is one of many types of attack on elections, is quite clear. SHERPA’s chapter on Democracy, Freedom of Thought, Control and Manipulation (in the Human Rights analysis deliverable) discusses all of the above, as well as the Data for Humanity Initiative and the international legal framework for the protection of fundamental rights, in order to demonstrate a potential incompatibility between AI and societal values: smart information systems could lead to an automated society with totalitarian features.
The potential contribution of AI and SIS to the protection of democracy and freedom of expression and thought is significant. According to the Data for Humanity Initiative (‘Will Democracy Survive Big Data and Artificial Intelligence?’), the use of AI and SIS can help to eliminate discrimination and create a fair system of social coexistence, as well as to reduce environmental pollution, thanks to the data available. The media also have a role to play in ensuring that coverage of AI focuses on the current issues and on raising awareness of them.
The best way forward would be to promote and develop AI systems in transparent and scrutable ways, so that citizens can play their part in creating a better-functioning society. This is particularly pertinent to front-line defenders of human rights and to vulnerable or minority communities, which may be under- or misrepresented in datasets.