This scenario focuses on technologies and applications that mimic people and that, in 2025, are used to create companions for senior citizens. Such technologies work by feeding hundreds or thousands of images of a person’s face or body into a machine-learning algorithm that then maps them onto video of another person’s body. Anything the person in the video does or says can be made to look as if it comes from the target. Similar algorithms can replicate a person’s voice, making it seem as if the target is saying something that in fact the
target never uttered. Previously, speech technology used a large database of recordings of one person’s voice uttering a long collection of sentences, selected so that the largest possible number of phoneme combinations was present. Synthesising a sentence was done simply by stringing together segments from this corpus. More recently, artificial intelligence has made human speech as malleable and replicable as pixels. Lyrebird, a Canadian start-up, uses a set of algorithms that it claims can clone anyone’s voice by listening to just a single minute of sample audio.
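The corpus-based approach described above can be sketched in a few lines. This is a toy illustration, not a real synthesiser: the “corpus” here is a hypothetical mapping from diphones to short sample arrays, and a real unit-selection system would choose among many candidate recordings to minimise join and target costs.

```python
# Toy sketch of concatenative (unit-selection) speech synthesis:
# a sentence is rendered by stringing together pre-recorded segments.
# The "corpus" below is a hypothetical stand-in for real audio units.

corpus = {
    ("h", "e"): [0.1, 0.2],
    ("e", "l"): [0.3, 0.1],
    ("l", "o"): [0.2, 0.4],
}

def synthesise(phonemes):
    """Concatenate corpus units covering consecutive phoneme pairs."""
    samples = []
    for pair in zip(phonemes, phonemes[1:]):
        unit = corpus.get(pair)
        if unit is None:
            raise KeyError(f"no recorded unit for diphone {pair}")
        samples.extend(unit)  # a real system would cross-fade at the join
    return samples

audio = synthesise(["h", "e", "l", "o"])  # -> [0.1, 0.2, 0.3, 0.1, 0.2, 0.4]
```

Neural approaches such as Lyrebird’s replace this lookup-and-concatenate step with a learned model of the target voice, which is why they need minutes rather than hours of sample audio.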
Like the Internet of Things and augmented reality, artificial intelligence is blurring the boundaries between the digital and physical worlds. This scenario concerns a couple. Alfred is a real person, and Lucy is a hologram, the manifestation of several key technologies, including machine learning, big data analytics, artificial intelligence, facial recognition, audio recognition, IoT sensors and actuators, augmented reality, virtual reality and, not least, holograms. In fact, naming a technology that mimics people in the singular is problematic because there is not just one technology but many. Our scenario depicts a future in 2025 when we see a cluster of technologies working together – technologies that mimic the voice, the image, the behaviour, the gait and movement of a person, an avatar that knows the history of the target person.
Our scenario considers the ethical, legal, social and economic issues that arise from the use of such technologies and the steps we, as a society, need to take to arrive at a desired future and avoid an undesired future.
In 2025, artificial intelligence continues its technological march through many applications in all walks of life. Players of massive multiplayer online games use avatars of themselves or their movie heroes. If a dragon scorches an avatar, no problem. The player can easily create another avatar who looks and behaves exactly like the first one. Criminals use the same technologies to mimic a target’s friend or relative who is in urgent need of funds because someone stole their purse in Chicago. Brad Pitt and Scarlett Johansson have been distressed to find their faces and voices used in porn films. Politicians are accused of spouting incendiary statements they did not actually make.
In 2025, technologies that mimic people are being used to create companions for senior citizens. With the ageing population, governments are finding it more of a challenge to provide social services and assisted living facilities to all those in need. Hence, some governments began investigating the possibility of using artificial intelligence and a set of other technologies in social care applications, both as a cost reduction measure, and as a way of overcoming the shortages of trained staff. Some activists feel that senior citizens should have the right to have a real human, rather than a machine, as a carer, but the cost of personalised holograms is dropping at a time when it is difficult to recruit enough human carers, doctors and nurses to take care of our ageing population. A public consultation in 2024 showed that a majority of respondents favoured the deployment of holographic support services. Although the research is still preliminary in 2025, sociologists and physicians are in general agreement that senior citizens who engage with their holograms or personalised avatars are likely to live longer.
Alfred’s wife of 45 years died in 2024. He missed her greatly until a government agency told him that he could have a hologram who could interact with him just like his dear Lucy. The hologram knows about
their lives together. AI has synthesised all of Lucy’s data, is able to reproduce her voice, her appearance, her mannerisms, even the way she used to argue with him.
The creation of Lucy the hologram was made easier because Alfred and Lucy had been using home assistants, like Siri and Alexa, for many years. Although Alfred was initially a bit wary of this new Lucy, social services convinced him that he would live longer and be happier with this Lucy and be less dependent on social services. It did not take him long to accept this Lucy as soon as she reminded him to take his daily meds and to go for a walk because he needed the exercise.
Not only has Lucy the hologram absorbed the data that belonged to her predecessor, she keeps up to date with the wearables that monitor Alfred’s health and well-being as well as the sensors that monitor the status of various appliances and processes (heating, water, power) in his home.
Several key drivers have impelled the development of technologies and applications that mimic people. Among them are the following:
The adult entertainment industry is often at the forefront of visual technological advances (e.g., online video streaming, virtual reality). As mentioned in the vignette, it is likely that this industry will be a key driver of the adoption of this type of technology.
Following economic and social studies, governments became convinced that technologies like those behind Lucy help them respond to the needs and demands of an ageing population.
Technologies like Lucy provide safety and security for their owners by being aware of the owner’s ambient environment and how to respond to events. While holograms cannot stop intruders, robots will be able to provide such protection. Holograms or avatars like Lucy can, however, provide a link with the world outside Alfred’s home and can alert the police or a doctor should the need arise. Buyers of such technologies need to consider the pros and cons of robots versus holograms.
Competition has been an important driver in the development of technologies that mimic people. Researchers and scientists in several countries have been working on the same technological capabilities. Amazon, Apple, Facebook, Google and Microsoft (“the big five”) have all seen huge market potential in the development of Lucy-like holograms, avatars and robots. Other countries with ageing populations such as Japan and South Korea are leading the way in the development of holographic companions. In addition to competition at the geographic and institutional level, competition exists between those who favour open source and those who favour proprietary technologies.
Research & innovation factors – Funding from DARPA in the US and the European Commission has contributed to the development of the technologies, especially by small and medium size enterprises (SMEs) and universities. The big five have been developing and patenting such technologies without government support. Meanwhile, the smart home has become a reality with dozens of connected technologies, from smart home locks and thermostats to lights to sensors capable of detecting things like falls. The need for a centralised point of communication with these tools spurred a big part of the rise of second-generation AI assistants, like Alexa and Google Home.
A shortage of carers for senior citizens – Civil society organisations, such as Age UK, have long pressed for more support for senior citizens. With the rapid increase in the numbers of senior citizens, governments are pushed to provide human care for all those who need it. In addition, the voting power of senior citizens has convinced governments of the need to support artificial carers.
Data availability – has been an important driver in the development of Lucy. Although the EU’s General Data Protection Regulation (GDPR) has made organisations more sensitive to the public release of personal data, new technologies, such as the Internet of Things, have greatly expanded the ready availability of data that Lucy the hologram needs to be credible to her owner.
The cost of supporting senior citizens has ballooned past the resources of most national governments. The ageing population needs care and support. Studies of the needs of senior citizens have shown that technologies that mimic recently deceased spouses diminish the demands on social services and health services, which suffer from shortages of doctors and nurses.
Barriers and Inhibitors
Despite the potential market represented by an ageing population, the actual market size is uncertain. The cost of Lucy technologies is dropping fast. Lucy and her peers are still beyond the means of most people in 2025, although market projections suggest that in the next five years such technologies will be commonplace. However, cost remains a barrier in another respect: another recession would likely slow down development of these technologies. Austerity might force governments and other organisations to curtail their research funding and delay introduction of such technologies.
Training a mimicking technology is no small task, especially where the technology is to serve as a companion to senior citizens. Home assistants help when both partners are still alive, recording increasing volumes of data about the senior citizens in whose homes they occupy a critical place. Home assistants are thus doubly useful: they capture data, and they put users at ease with the technology.
Production of a Lucy requires massive amounts of data, especially personal data. It is possible that a ‘Facebook/Cambridge Analytica’ type of incident – a data breach, a misuse of data or the discovery of unwanted, invisible uses of data – may temporarily set back development of this type of technology. This sort of “avatar” poses other security concerns: impersonation for malicious purposes such as data exfiltration (stealing the senior person’s personal data or credit card numbers); secure data storage (the hologram will have access to a lot of data, potentially including visual records of the senior person accessing personal information such as bank accounts – it might even help him or her do so, and thus have full access to such data); and ethical data mining and machine learning (how does the company providing the service ensure it does not pool data from numerous users to create the hologram, thereby “leaking” other people’s information into the service?). Any such incident could lead to a public backlash and boycott of the technology. Whether data breaches, even massive ones, will concern the general public enough to provoke a boycott is difficult to predict. Recent examples of data breaches show how little the general public worries about breaches affecting other people, which suggests such breaches have become the norm.
There are issues surrounding the use of such data, not just in the regulatory context of data protection, but also in the context of a philosophical question: Will Alfred accept an AI technology that is much “smarter” than him, that can recall events he has forgotten, that can explain how things work, that can guide him in taking better care of himself? Also, correlating data from many different sources has been a technical challenge for the past decade or so, but tremendous progress has been made in correlating, synthesising and interpreting data – prerequisites for creating holograms like Lucy. In 2025, there has been an ongoing debate about which deepfake companions are better for senior citizens: holograms, avatars or robots. A robot that could mimic Lucy, that could look like her and act like her, is more challenging than a hologram. By 2025, there has been huge progress in creating humanoid robots, but even so, they are considerably more expensive than holograms. They do, however, have the advantage of being able to move things in the physical world. Hence, they can perform housework and other tasks, such as helping to protect their owners against intruders.
The diffusion of these technologies may be inhibited if an incident is made public in which a domestic robot forced people to do something against their wishes, e.g., to eat or drink something the owner did not want, or in which the robot tried to convince its “owner” to do something that clearly was not in the owner’s interest.
Another potential barrier is the availability of bandwidth for remote presentation of holograms. As an indicator of potential bandwidth requirements, the throughput needed for virtual reality is almost 100 times that of high-definition video.
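The scale of that gap can be made concrete with rough arithmetic. The figures below are illustrative assumptions (a typical HD stream of about 5 Mbit/s, and the roughly 100x factor cited above), not measurements:

```python
# Rough, illustrative bandwidth comparison. Both figures are assumptions:
# HD video streams at roughly 5 Mbit/s; if VR-grade holographic presence
# needs ~100x that, the requirement approaches 500 Mbit/s -- beyond many
# residential connections.

HD_VIDEO_MBPS = 5     # assumed typical HD video stream
VR_MULTIPLIER = 100   # the ~100x factor cited in the text

hologram_mbps = HD_VIDEO_MBPS * VR_MULTIPLIER
print(f"Estimated holographic stream: {hologram_mbps} Mbit/s")  # 500 Mbit/s
```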
Trust is an inhibitor to the adoption of Lucy-like holograms and avatars – senior citizens need to trust them. Stories in the press about holograms that behave erratically don’t help. There is a general consensus that machines should periodically remind their “owners” that they are machines, but others ask: what is the point of having a technology that mimics people saying that it is a machine?
In 2025, the benefits of technologies that mimic people are especially apparent in support of senior citizens. The benefits are not, however, unalloyed, as the following paragraphs point out.
While the high cost of human care has led to development of home-care holograms like Lucy, still some ethicists and other stakeholders question the legitimacy of substituting a technology for a human carer. Still others question the ethics of “reincarnating” a deceased spouse as a futuristic Alexa assistant. By giving these devices a human appearance, a line is more thoroughly drawn in terms of making them human replacements: something that has profound application when, for instance, using it to carry out medical diagnosis. Is Alfred more likely to follow Lucy’s diagnosis than a similar injunction on a computer screen? No one knows, but it is a subject of research in 2025.
The interaction of Lucy, Alfred’s wearables and the sensors in his home also raises complex ethical issues about autonomy, equity and sustainability. For example, have the various technologies stripped away Alfred’s ability to function as an autonomous individual? Alfred is privileged because he has Lucy, but it seems likely that he is developing a dependency on his new Lucy. He is one of the few members of the public to have a personalised hologram, which raises issues of social equity.
The holograms raise issues of sustainability too. When Alfred dies, what happens to Lucy? Is she simply switched off and allowed to die a digital death? Will anyone miss all of the knowledge that Lucy has acquired, not only of her human predecessor, but also of Alfred?
Some governments insist on taking partial control of Lucy and her peers. In some instances, the partial control is for Alfred’s own safety and well-being. Lucy can prompt Alfred to take his medicines and encourage him to do some physical exercise or to converse with her instead of watching TV all day long. But in other instances, governments prompt Lucy to quiz Alfred about whether he is working part-time or has some other source of income so that government can reduce its benefits to Alfred. So an ethical issue has arisen as to whether technologies that mimic people should be totally controlled by the “user” (by Alfred) or whether control over Lucy should be shared with governments or the company that has created Lucy. Who controls Lucy raises a question of free will for Alfred. If he does not want to take his medicine, and Lucy wants him to, what happens? Does he get forced to do so (psychologically, physically)?
Many privacy advocates continue to express concerns that the holograms, avatars or care robots are actually sophisticated surveillance agents as they pass on the information they collect about their owners to the big tech companies and government agencies. Because of such allegations, some governments have established ethical committees to advise on issues raised by AI. Some have called for a global agreement to govern ethical issues raised by technologies that mimic people.
There have also been concerns about whether holograms, like Lucy, can make medical diagnoses. Studies have shown that holographic people are more often right in their diagnoses than real doctors. While Lucy is probably perfectly capable of making a much better diagnosis and prescription than a real doctor, who is responsible in case of a mistake? Who gets blamed?
Giving these devices a human appearance more thoroughly positions them as human replacements, with profound implications for tasks such as medical diagnosis. Does the use of a human-like avatar suggest human-level success at tasks? Does it raise the possibility of human-level failures in tasks that machines may be able to perform better?
If Lucy is “smarter” (more knowledge-informed) than Alfred, will she always (agree to) be subservient to Alfred? If holograms like Lucy, or robots, reach levels of intelligence close to those of pets, animals or humans, at what threshold does it cease to be a machine that one can freely use, abuse or discard as a tool, and become an entity that should be recognised as a being with rights?
Among other ethical issues being discussed in 2025 are the following:
- Human rights issues such as self-determination. Lucy may be able to manipulate Alfred in various ways, to enforce a routine that may be too restrictive, to induce him to buy certain products or services, or to offer criteria he should consider in deciding his vote.
- Should robots have a legal personality? Should they be required to show personal and cultural respect to human beings?
- How do holograms like Lucy impact our privacy and right to be forgotten? Did Lucy’s human predecessor agree to be replicated by a hologram?
- What enforcement powers should regulators have against instances of manipulation?
- What are optimal mixes between self- and co-regulation and legislation and public supervision of the technologies that mimic people? Should there be restrictions on who can be mimicked? What are the transnational aspects of these technologies?
- How can we embed the precautionary principle in innovations such as Lucy?
- Who should have access to all known data about Lucy? Who should have access to the data that Lucy collects from and about Alfred?
If Alfred has full control of Lucy and all her data, could he renounce such control in order to obtain a better insurance policy? If Alfred were still employed and his employer wanted access to the data Lucy has acquired, would he feel obliged to give it to him or her? If Alfred’s doctor performed a diagnosis, who would be given the results? Alfred? Lucy? The health care system?
In 2025, some legal issues surrounding such technologies have been resolved, yet others are still being debated.
Transparency is one such issue. Lucy’s designers have programmed her to explain why she does something when asked. Transparency is also an issue with regard to data sources. Alfred might ask ‘How does Lucy know so much about me?’ and the answer to that is a data protection transparency issue.
If Lucy malfunctions, who is liable? Is it the company that created Lucy? Is it Alfred? (The company could claim that he sent contradictory commands that confused poor Lucy.) Is it the designer, the manufacturer, the programmer, the trainer, the data provider? What is the threshold for a causal link in case of damages? Such questions plague the courts in 2025.
Closely related to liability, accountability is critical to ensure that AI algorithms perform as expected. Finally, in 2025, the European Court of Justice has ruled that it is not sufficient to hold humans accountable for the actions of the AI algorithms they create, but that the concept is more nuanced, i.e., AI systems need to explain and justify decisions and actions to Alfred and others with whom Lucy interacts.
The issue of a company secretly using data for the secondary purpose of advertising (invisible processing) is also a legal issue, and one that squarely engages data protection law.
Some sociologists and psychologists have expressed concern that Lucy can create dependencies, much like home assistants. Users such as Alfred tend to respond to Lucy-type creations in several different ways. Some users are bemused by the technologies. They continue to recognise that the holograms and robots are machine-created, are not the real thing, can never replace the real thing, but the hologram is a noble effort nonetheless in attempting to recreate a loved one. Other users become psychologically attached to the holograms, avatars or robots, much as they form attachments to pets. They treat the holograms as the real thing (“Do you remember our trip to Wyoming?”). Still others reject the holograms in irrational ways: they taunt the avatars for not correctly “recalling” a shared event.
In 2025, with the increasingly lifelike holograms, avatars and robots, experts and many senior citizens continue to debate the rights of such creations. Experts and ethicists thought they had dispensed with this issue in 2020, but it has returned in 2025, in part, because Lucy-type robots are expensive. Users, designers, programmers, manufacturers and social services all wish to protect their investments and what better way than attributing rights to Lucy, e.g., the right to dignity, the right to integrity of the person, the right to security, freedom from non-discrimination, freedom of expression and information, the presumption of innocence and even the right to good administration.
Robots and holograms gaining rights could give rise to further issues in the balancing of rights against those of natural individuals. For instance, in data protection terms, a natural individual may wish to exercise their right to erasure (the right to be forgotten). If exercised, this would require the deletion of much of the data used by the hologram/robot, impinging on its own privacy rights in relation to personality, self-development, etc.
Although the cost of personalised holograms and robots is dropping, some social tension has arisen with claims that such creations are only affordable by rich people, that they widen the gap between rich people and the rest of society. For those who wish to have an AI mimicking a loved one, there is a requirement to have lots of data to train the underlying algorithms. People who historically have had limited access to the types of technologies that harvest this data (e.g., due to cost or disabilities) may not have the requisite data, and therefore may not be able to take advantage of such products.
Where AI mimicry is of poor quality (limited training data), there may be poor quality outcomes for people due to imprecise predictions and decision-making by the AI. An ‘off-the-shelf’ mimic may be sold as being 90% accurate at mimicking loved ones. But this could be 100% accuracy for 90% of the population (e.g., white westerners) and 0% accuracy for 10% of the population (e.g., minorities).
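The arithmetic behind that hypothetical split is worth making explicit, since a headline accuracy figure can hide it entirely. A minimal sketch, using the illustrative 90/10 population split described above:

```python
# Sketch of how aggregate accuracy can mask subgroup failure: a mimic
# that is perfect for 90% of users and useless for the remaining 10%
# still reports 90% overall accuracy. Figures are the hypothetical
# split from the text, not real benchmark results.

def overall_accuracy(groups):
    """groups: list of (population_share, accuracy_for_that_group)."""
    return sum(share * acc for share, acc in groups)

groups = [(0.9, 1.0),   # majority group: 100% accuracy
          (0.1, 0.0)]   # minority group: 0% accuracy

print(overall_accuracy(groups))  # 0.9 -- the headline "90% accurate"
```

This is why per-subgroup evaluation, rather than a single aggregate figure, matters when such products are marketed.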
Security and economic impacts
While technologies that mimic people have created thousands of new jobs in 2025, they have also created new opportunities for malefactors (criminals, terrorists) who have hacked some holograms and robots so that they won’t respond to voice commands until a ransom has been paid. There are various sources of the hacking of computer-generated companions like Lucy, including individuals (pranksters, trolls), organised crime (extortion, blackmail, fraud), and government (surveillance, spying). A particular concern is adversarial learning, in which the learning mechanisms of algorithms can be misled and can cause AI systems to make bizarre and unpredictable decisions.
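Adversarial learning can be illustrated with a toy linear scorer: nudging each input feature by a small amount in the direction of the corresponding weight’s sign (the idea behind fast gradient-sign attacks) can flip a decision while barely changing the input. The weights and inputs below are made-up values for illustration, not a real model:

```python
# Toy illustration of an adversarial perturbation against a linear scorer.
# Pushing each feature by a small epsilon in the direction that increases
# the score flips the decision with only a bounded change to the input.

weights = [1.5, -2.0, 0.5]   # made-up model weights
bias = -0.1

def score(x):
    return sum(w * xi for w, xi in zip(weights, x)) + bias

x = [0.2, 0.3, 0.1]          # original input: score(x) is negative
epsilon = 0.3                # small, bounded perturbation per feature

# Perturb each feature in the sign of its weight to raise the score
x_adv = [xi + epsilon * (1 if w > 0 else -1) for xi, w in zip(x, weights)]

print(score(x), score(x_adv))  # the decision flips from negative to positive
```

Real attacks compute the gradient of a deep model’s loss rather than reading off weights, but the effect is the same: small, targeted input changes that produce bizarre, unpredictable decisions.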
Amazon produced Lucy the hologram. Sometimes, Lucy suggests that Alfred consider buying something that improves the quality of his life. At other times, Lucy tries to convince Alfred to buy stuff he doesn’t need. She does it in a subtle way so that it is not obvious to Alfred that Amazon is manipulating him. Activists and civil society organisations have railed against such practices.
Government studies and research by several think tanks have shown that the use and deployment of holograms, avatars and robots have an overall positive economic impact. They reduce the need for providing healthcare and other social services, because Lucy and her cousins can give medical and healthcare advice to their “owners” (a term in much social contention). Their development, deployment and ongoing research create high-quality, high-value jobs. Indeed, there is much demand for trainers, those who “train” the machine-learning algorithms with every scrap of data that ever existed about Lucy and her peers so that Lucy the hologram appears to know more about dead Lucy than Alfred.
The source of the data necessary to train the algorithms will often be the private sector (e.g., creators of home assistants and IoT devices). There are ethical questions around the role that companies with proprietary products have to play in the provision of social care and healthcare in countries such as the UK and Canada with publicly owned healthcare systems. Private firms are first and foremost answerable to their shareholders, not their customers. With the advent of avatar companions, today’s incumbent tech firms may become even more powerful and see their data monopolies strengthened by 2025.
Proponents of data-driven technologies often argue that it will create new jobs and new skill sets for existing workers. At the same time, critics argue that these technologies are actually resulting in job losses in domains where automated (or autonomous) processes render human involvement redundant in 2025. AI mimics are replacing large swathes of social care jobs while the public and private sectors are not doing much to re-skill displaced workers.
Mitigating the negative and acting on the positive impacts
By 2025, various countries have taken different actions to mitigate the negative impacts of AI in social care and to accentuate the positive impacts.
Several countries – principally the US and Canada – and the EU have established AI advisory councils. There were (and still are) numerous calls to establish regulators with enforcement powers (“with teeth”). However, industry, some politicians and other stakeholders have argued that because of the rapid increase in AI applications and uncertainties about their impacts, regulatory action could severely retard innovation. In the end, the EU decided to create an AI advisory council (somewhat modelled after the European Data Protection Board) as a first step, with formation of a regulator as the envisaged next step, if necessary.
Some have argued that the advisory council approach is a sop to industry, that industry can ignore such advice and do as it likes. However, such has not been the case. Employees in the big five have become activists. Increasingly, they seek to implement Google’s original dictum: “Don’t be evil.” As long ago as 2018, Google employees pressured the company to ban development of AI software in weapons. The company also established strict ethical guidelines for how it should conduct business in the age of AI. Similarly, Microsoft employees sent a letter of disapproval to their chairman about the dangers of facial recognition technology, which led to calls for regulating the technology.
While governments dithered on the issue of regulation, employees of the big five have been the prime movers in the adoption of an ethical approach to AI. Ethics by design has become as commonly espoused as privacy by design. Nevertheless, civil society organisations have expressed scepticism about the “real” intentions of the big five, alleging that their ethical initiatives are simply charades to put governments off regulation.
Employee activism has precipitated a wave of ethical introspection among the big five as well as many other smaller companies. Given the sensitivity of health and social care, industry created ethical advisory councils to review the development and deployment of holograms, like Lucy, as well as robots, not only those used as companions for senior citizens, but also those used in other domains such as the porn industry, political campaigns and targeted advertising.
Governments have not been totally lame. Governments have created new offences (use of technologies that mimic people without consent of the person mimicked), punishable by fines and prison sentences of up to two years. Governments have promoted AI standards and public awareness of the issues raised by technologies that mimic people.
In addition to the actions described in this section, standards bodies are playing a role in mitigating the risks of these technologies. For example, IEEE’s P7000 series of standards was developed to address the ethical issues in the design of AI and autonomous systems. Governments may need to assess how effective these standards are.
Recommendations for a desired future and avoiding an undesired future
Our desired future is one where technologies that mimic people are strictly limited to beneficial applications, like Lucy. Regulators with enforcement powers are deemed necessary if society is to avoid an undesired future in which there are no controls over how such technologies are used, whether for healthcare applications, political manipulation, pornography, fake news, etc.
Just as there are different stakeholders in the use of technologies that mimic people, so there are different steps that stakeholders can take toward a desired future. Among the conclusions and recommendations of those who contributed to this scenario are the following:
- Academics should explore the ramifications of the new technologies and, where possible, ensure technologies that mimic people are open source. Before holograms or robots are used in social care applications, such as that depicted in the vignette, developers and/or operators should conduct a data protection impact assessment.
- Industry should develop and use ethics councils within individual companies as well as across companies. Such councils should be multi-disciplinary, with people from backgrounds such as legal, risk, compliance, data science, software development, design, user experience and ethics. Industry stakeholders should come together to create a road map for the development of such technologies and a set of principles to govern their use.
- Policymakers should initiate public consultations about regulatory options governing mimicking technologies, especially where they are used to perform social care functions.
- Regulators should use their enforcement powers proportionately. They should find novel ways to work with industry to support compliant and ethical innovation in AI mimicry.
- Sector regulators and industry bodies should create codes of conduct for the use of AI mimicry in particular contexts, including social and health care.
- Existing regulators should adopt a co-ordinated (co-regulatory) approach to AI mimicry to ensure harmonised, consistent rules for industry. As holograms like Lucy raise various issues beyond the remit of a single regulator, some mechanism is needed to ensure regulatory harmonisation.
- Governments should support secure, compliant access to representative datasets for training purposes. This should help ensure higher quality offerings for traditionally under-represented parts of the population, while also addressing issues around data monopolies by giving SMEs access to training data. These data may include biases or influence certain opinions or actions. Furthermore, competition issues around data cannot be solved exclusively by governments providing training data.
- Governments should embed ethics and compliance into the curriculum, and in particular higher and further education courses in subjects such as computer science, so that data scientists are exposed to scenarios such as those in this deliverable.
- Governments should support training programmes for workers likely to be displaced by AI mimicry. In the scenario, Lucy may displace human social care workers.
- Governments and transnational companies, including the big five, should begin work on a global agreement on the legitimate and unethical uses of such technologies – like a Wassenaar Arrangement on AI.
- Governments and industry should encourage artists, directors and film producers to create TV programmes or films showing the positive and negative sides of technologies that mimic people.
- Governments should encourage shareholders’ participation in major decisions about AI. Policy scenarios are one important way to do so.
- The fundamental question should not be: what can we do with new technologies, but how can new technologies help individuals on their own terms and convince them that new technologies are ethical and promote equality, well-being and trust in democratic values?