SIS, Accountability and Liability

The Case/Scenario


Technological developments have led to AI systems and advanced robots not only being able to carry out activities typically performed by humans, but also having certain autonomous and cognitive features, such as the ability to learn from experience and take quasi-independent decisions. AI systems and robots are therefore able to interact with their environment in a human-like manner, which raises the question of whether they should be accountable and liable for their harmful acts. The greater the degree of autonomy they exercise, the more independent they become from their manufacturers, operators, owners, and users. Should they consequently be attributed civil or criminal responsibility to the extent that they can operate autonomously?

Ethical Issues

  • Given the advanced nature of AI systems and robots, it is not clear whether the legal framework has developed sufficiently to tackle cases where AI systems or robots are directly or indirectly involved in harmful acts or omissions that are not attributable to a specific human being.
  • Since technology develops constantly, a test may need to be established to monitor the evolving relationship between AI actions and human beings.
  • The privacy of consumers can be infringed by technology such as Google Assistant and Siri, which can record more than the consumer expected.
  • It may be possible to regulate the growing capacities of AI systems and robots by ensuring that harmful acts or omissions are mitigated.

Legal Aspects

  • The Product Liability Directive establishes the strict liability of producers for defective products that cause personal injuries, death, or property damage. Provided that technology items are moveable and fall within the definition of a “product”, they are covered by the Directive.
  • However, advanced robots and empowered devices have increased capabilities to interpret their environment and execute actions autonomously, making them less dependent on other actors. They therefore raise questions of liability where the damage caused by a machine cannot be linked to a defect or to human wrongdoing.
  • AI robots and autonomous self-learning systems, which are currently emerging, should meet the essential health and safety requirements laid down in the applicable EU safety legislation ensuring a single market for a wide range of equipment and machines.
  • In the context of the Digital Single Market strategy, the European Commission has emphasised the need to develop the legal framework, particularly in the field of civil liability, at the Union level. Autonomous AI systems and robots will bring further unprecedented difficulties, since it may be harder to ascertain what caused the damage in certain situations, particularly if the technology is able to advance its own learning processes. The European Parliament has expressly stressed the importance of legal certainty on liability for innovators, investors, and consumers, as well as the difficulty of determining who is liable, and to what extent, in the complex field of digital technologies.

Lessons Learned

An AI system or robot can act independently, without the need for constant supervision and support. There ought, therefore, to be a clear framework addressing the degree of control to be exercised over AI systems and robots in cases of harmful acts and omissions. In July 2015, a contractor at a Volkswagen plant was killed by a robot that grabbed and crushed him against a metal plate. It is evident that the legal framework should evolve to tackle cases where harm was caused by AI systems or advanced robots. Although criminal liability for a person would have been possible in such a case, the mens rea requirement of knowledge and intention needed to criminalise robots is not as clear. If knowledge is defined as “sensory reception of factual data and the understanding of that data”, it may be possible to hold AI systems liable, given that they are programmed to take actions to fulfil a purpose. Such legal issues ought to be addressed so that liability can be attributed appropriately, given that beyond a certain point the manufacturer ceases to have control over the robot’s actions.