The EU Declaration of Digital Principles in Context


This blog post discusses the foundations of the principles that should inform the European digital society. As EU citizens’ ‘digital footprint’ is an increasing manifestation of ‘digital life’, a Declaration of Digital Principles by the EU is essential for digital cohesion. The European Digital Public Legal Order should be built upon the foundations of the European Public Legal Order in a digital context, which is in turn based on the principles of the Rule of Law and good governance and the Charter of Fundamental Rights. 

The traditional Principles of the Rule of Law in the Digital World

The establishment of a European Digital Public Legal Order requires the deconstruction of traditional principles of the Rule of Law and their reconstruction in the digital world, based on a modern and dynamic interpretation of conventional legal instruments available in the EU. In the first Rule of Law Report of the EU Commission in 2020, the principles of the Rule of Law were identified as legality, transparency and accountability, legal certainty, prohibition of arbitrary exercise of executive power (good governance), effective judicial protection by independent and impartial courts, effective judicial review for fundamental rights, separation of powers, and equality before the law. It is necessary to further consider and/or adapt these principles in the digital context in order to ensure the integrity of data.

The integration of fundamental rights in the European Public Legal Order is long established, including in terms of privacy and personal data. Fundamental rights are protected not only by EU instruments such as the EU Charter, but also by the ECHR and other international instruments. In the Stauder case (on data protection), the CJEU stated that fundamental human rights are enshrined in the general principles of Community law, which are protected by the Court. In Internationale Handelsgesellschaft, the Court stated that ‘respect for fundamental rights forms an integral part of the general principles of law protected by the Court of Justice’. These fundamental rights have been inspired by the constitutional traditions of the member states and by the international treaties for the protection of human rights to which the member states are parties, including the ECHR.

Most fundamental human rights are affected directly or indirectly by AI and Big Data technologies through invasive practices including, but not limited to, profiling; prediction of behaviour; influencing of opinions or choices; predictive policing; mass tracking, surveillance and microtargeting; bias and discrimination; and distorted access to information (the ‘net neutrality’ principle) or to competition (the ‘walled gardens’ of dominant players). Risks include direct or indirect violations of individual rights, erosion of democratic rights and of Rule of Law principles and values, loss of data quality, polarisation of opinions and beliefs, and loss of a ‘pluralistic media ecosystem’ and of freedom of information and expression.

It should be noted that some of these principles have been introduced in specific secondary legislation, namely the Digital Services Act (2020) and the Regulation for a European Approach for Artificial Intelligence (2021). These proposed instruments can be seen as forming part of a wider plan for an EU digital legal order, aimed not only at allegedly protecting rights in their digital format by catching digital intermediaries, but also at placing the EU on the international innovation scene through free movement flows and high-risk AI assessments. In addition to the existing legal principles and instruments available, there is a need to determine the scope of the European digital public legal order and to map out regulatory needs. To this end, the Sherpa Deliverable 3.3 (regulatory options) can serve as a guide for policymakers in constructing balanced and human-centric AI regulation.

Rule of Law in the Digital Sphere: digital importance of basic provisions of EU law

Article 7 on the respect for private and family life and Article 8 on the protection of personal data of the EU Charter are the most straightforward provisions relating to a digital order. The unique combination of Articles 7 and 8 can offer extended protection to individuals in the public and private sphere. The right to private and family life, which may be violated by the blatant tracking and data collection practices of apps, should be one of the paramount pillars of the Digital Principles. The right to respect for private and family life as enshrined in Articles 7 and 8 EU Charter, complemented by the GDPR, has taken on new dimensions with the dominance of AI in the public and the private sphere.

Other rights of paramount importance to the Digital Principles include inter alia the freedom of expression and information, non-discrimination, equality of men and women, and the rights of the child (Articles 11, 21, 23 and 24 EU Charter respectively). The use of non-representative or biased data in particular can lead to unequal treatment of people based on characteristics such as sex, age, disability, sexual orientation, ethnic origin and religion, in breach of Article 21 EU Charter. Moreover, if ‘structural differences’ exist for protected attributes such as gender, ethnic origin or political opinion, AI can, through its output, discriminate against certain groups or individuals. AI systems used to predict recidivism by correlating data on suspects (address, income, nationality, debts, employment, etc.) may infringe the rights to liberty and security, the right to a fair trial and the principle of no punishment without law/individual sentencing, contrary to Articles 5, 6 and 7 ECHR and Articles 6 and 47 EU Charter.

Furthermore, the Digital Principles should ensure that fundamental human rights, such as the freedom of expression and information, non-discrimination, gender equality and the rights of the child, with particular emphasis on protecting minors’ privacy and shielding them from harmful content, are safeguarded. Instruments such as the GDPR, the Digital Services Act and the AI Regulation are essential for establishing the legal framework, but the principles derived from these instruments should nonetheless be anchored durably in fundamental rights. These principles, upon which the Digital Principles should be built, are technical robustness, data privacy, accountability, auditing and review, and human oversight of selected automated processes.

Construction of the EU Digital Public Legal Order

With the aim of formulating and/or further strengthening the EU Digital Public Legal Order, it is recommended to deconstruct the Rule of Law principles and values as set out in the previous paragraphs and reconstruct them in the digital legal order. The EU’s High-Level Expert Group on Artificial Intelligence has proposed that the framework for ‘trustworthy AI’, identified as ‘lawful, ethical and robust AI’, is composed of seven core principles, namely the human agency and oversight principle, the technical robustness and safety principle, the privacy and data governance principle, the transparency principle, the diversity, non-discrimination and fairness principle, the social and environmental wellbeing principle, and the accountability principle. These principles in their digital forms are to be benchmarked against ‘conceptual interpretations’ of Rule of Law principles to ensure their effective integration into the European Public Legal Order.

As set out above, only the sustainable and effective protection of fundamental human rights in their digital version will trigger the transformation from the proposed ‘EU digital legal order’ to the desired or desirable ‘European digital public legal order’. On this basis, the core question is one of ‘validation’ of the legal and ethical processes put in place to balance the interests at stake and protect durably data subjects and users, as well as citizens more widely. Robust and multi-level protection mechanisms, best centralised at the EU level, must be put in place to effectively sustain individual and collective rights in their digital format and prevent their erosion in emergency situations. Otherwise, there is a real danger that the ‘backsliding’ of the Rule of Law will be further and/or permanently exacerbated in the digital world. Indeed, the very benefits delivered by digital technologies may precipitate a backsliding of Rule of Law principles. Just as the recent COVID-19 pandemic limited democratic processes, the same could happen in an emergency where digital means need to be used. For instance, the AI Regulation expressly prohibits the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, save in certain situations, including the prevention of a terrorist attack. If EU countries were to find themselves targeted by a surge of terrorist attacks in the future, the recurrent use of remote biometric identification could become the new norm.


Certain recommendations from the SHERPA project can be utilised to substantiate the function of the Rule of Law in the digital world. The starting point should be the use of clear and appropriate definitions for AI and digital technologies, depending on their use case. The next step is to create the foundations for trustworthy AI. For example, the requirement for transparency of AI has been well documented in reports urging explainability of decisions made by AI. Transparency is one of the traditional principles of the Rule of Law which merits an extended application in the Digital Principles. Such an approach should be coupled with the teaching of the ethical and human rights aspects of AI in the educational system, in order to promote awareness and understanding of emerging technologies.

Another important principle is that of privacy and data governance, which ties in with the general need to develop a comprehensive regulatory framework and enforcement mechanisms for AI. Citizens should have full control over their data, which may not be used to harm or discriminate against them. This is partly achieved through the GDPR. Nevertheless, as AI is incorporated into a plethora of devices, data collection is becoming more prevalent. Citizens should therefore be made aware of the data collected about them, and preventive mechanisms, in the form of ethics by design for instance, should permeate the technology from its inception. Data collection awareness falls within the wider scope of educating citizens on AI and ethics.

Accountability for decisions taken on the use of certain technologies, or by AI systems themselves, is another principle that needs to be reconstructed in the European digital public legal order. Safeguards should be put in place to prevent the blatant abuse of new technologies to achieve certain goals, or their exploitation in emergency situations. The establishment of an independent European Union Agency for AI, which among other tasks should develop guidance on legal and regulatory issues of AI and maintain an AI alert system, constitutes a sound recommendation. In addition, the designation of an AI Officer in organisations should be recommended, to ensure the compliance of AI with ethics and human rights.

Finally, the Digital Principles could be enforced widely across the public and the private sphere within the EU through a Rule of Law Due Diligence mechanism which would promote the sustainability and the effectiveness of fundamental human rights in digital times across a wide range of stakeholders.


Although the European digital public legal order is still in the making, the core principles of the traditional European public legal order have proved to be of great significance to the deconstruction and reconstruction of the Rule of Law principles and values into the European digital public legal order. The Charter plays a fundamental role in the establishment of these principles. The preliminary glimpses of a European digital public legal order are evidenced by the GDPR, the Digital Services Act and the Regulation for a European Approach for Artificial Intelligence. As the implementation of the Digital Principles as envisioned by the Commission goes forward, the (re-)introduction of the Rule of Law principles and values and a corresponding Due Diligence mechanism will become a necessity for the sustainable development of the European digital public legal order.


  • Prof Stéphanie Laulhé Shaelou, Professor of European Law and Reform; Head, School of Law, University of Central Lancashire, Cyprus campus (‘UCLan Cyprus’); EU-POP Jean Monnet Module Leader and Academic Lead; legal expert on the Horizon 2020 Sherpa project on Smart Information Systems and Human Rights; and alumna Visiting Fellow, Law Department, European University Institute, Florence
  • Constantinos Alexandrou, Researcher, School of Law, UCLan Cyprus