Dignity, Robots and Care for the Elderly

The Case

European societies are ageing. In 1950, only 12% of the European population was aged over 65. Today that share has doubled, and projections show that by 2050 over 36% of Europe’s population will be over 65. At the same time, the social care sector is struggling to recruit and retain the workers it needs to meet this rising demand. Care is not only provided professionally: across European countries, including the UK, long-term care relies heavily on informal carers, mainly spouses and children of the care-dependent person. In most cases, they are women. Smart information systems are already widely used to remind the elderly to take their medication or to keep appointments. More ambitiously, social robots are designed to interact with humans, even as companions, or to provide simple care functions such as taking vital signs. Can they resolve a major issue in care? And what are the ethical issues?

Ethical Issues

Using robot technology with a vulnerable population, such as the elderly needing care, raises a range of ethical issues, as is evident from the literature:

  • Privacy: The user of social care robots is under constant observation through robot technology and the systems it is linked to, for instance, support networks in the case of emergencies.
  • System errors: Malfunctioning robots can create harm in a multitude of ways, for instance, by handing a person the wrong medication.
  • Manipulation: Elderly people, especially those with cognitive impairments, might be unable to realise they are being cared for by robots, and mistakenly believe they are dealing with an empathic human.

Dignity?

Looking at the ethical issues raised by robots in social care through the lens of dignity is promising to some and confusing to others.

  • Dignity is often invoked in emotional appeals, conveying images of loneliness in social care that are interpreted as a loss of dignity.
  • Yet gains in dignity are also noted where potentially undignified situations (e.g. incontinence) are handled by robots rather than humans.
  • In 2008, the Canadian Supreme Court decided that the concept of dignity was too confusing for courts to apply in anti-discrimination cases.

Lessons Learned

The case demonstrates that the benefits and disadvantages of using robots in social care are both hotly debated. It also shows that the concept of dignity does little to advance the discussion. When both parties to a disagreement can use the same concept to strengthen their case, the concept is too stretched to help resolve real-life, pressing issues. The conclusions that remain are:

  • Human beings can vary considerably in their estimation of what robot care can do for them; a one-conclusion-fits-all-cases suggestion is therefore not useful. Some may prefer robot care to leaving their home, others might not.
  • Resolving concrete issues of privacy, manipulation and potential harm should therefore be given priority over broader, more general appeals to dignity.
  • The Japanese government hopes that robot care can fill a projected shortfall of 380,000 care workers; its successes or failures could inform developments in Europe.
  • In the literature, a geriatric nurse is quoted as saying: “Care robots don’t substitute for the human being—they help when no one else is there to help”.

More details on this case and further cases and scenarios are available at: https://www.project-sherpa.eu/workbook/