Introduction

This scenario considers the nature of warfare in 2025 and, in particular, cyber warfare or information warfare. The scenario makes the point that the nature of warfare has changed, and that government research programmes can no longer afford a blurred boundary between civilian and military research. In many instances, it is not known who is behind a cyber attack; in other cases, where certainty exists, retaliation seems warranted. In the following paragraphs, we describe how the nature of warfare has changed and note some of the new technologies and applications, before presenting a vignette that casts doubt on who is behind a cyber attack on a nuclear power plant. We then discuss the drivers behind information warfare, its impacts and our recommendations to policymakers who have to deal with its continuing high costs and travails.

So now let’s start with the situation in 2025…

The weaponry of war continues to evolve – from bows and arrows to nuclear bombs to algorithms. The field of battle has changed too – from the physical world to an invisible world, but no less dangerous for that, with real-world consequences.

Nuclear war has been avoided so far because each side knew it would mean the end of civilisation. In cyber war, there is no similar principle of mutually assured destruction (MAD) to avert disaster. Until recently, we knew who the warring states were; in cyber war, such certainty is much harder to come by, as attackers can easily cover their online tracks. The nature of warfare is changing. By 2025, it has become a global phenomenon involving many different actors, from governments to cyber gangs.

Attacking an adversary no longer requires massive bombing runs or reams of propaganda. All it takes is a smartphone and some software readily available on the dark web. There have already been many cyberattacks in recent years sponsored by states or their proxies, and their frequency is increasing. Politicians are calling for stronger action against cyberattacks. The nature of cyberattacks is also changing: attackers no longer target just critical infrastructure; they also attack whole populations, seeking to disrupt public opinion and electoral processes. As a result, governments and businesses are increasing their budgets for research on how to contend with the increasing frequency of cyberattacks and the huge risks that arise when one country uses smart information systems to disable another country’s critical infrastructures and social consensus.

Warfare technologies in 2025

What does artificial intelligence in warfare mean? AI can be used for offensive and/or defensive purposes. It can take many forms, but essentially it comprises algorithms capable of processing and learning from vast amounts of data and then taking decisions autonomously or semi-autonomously. In 2025, AI is used in many weapons systems, in the tangible world as well as in cyberspace. Here are some examples.

• Tactical battlespace development – AI is used in autonomous vehicles in reconnaissance, offensive and defensive roles, e.g., killer vehicles, access-denial systems such as smart mines, or automated systems that respond to attack.

• Sensor fusion – AI brings together information from many sources – e.g., satellites, aerial reconnaissance and local information feeds to and from the war fighter – and develops a coherent view of threats and potential threats faster and more accurately than trained analysts can through painstaking analysis.

• High-speed, high-intensity warfare – AI systems identify potential threats and launch countermeasures at speeds to which humans are unable to respond; machine decision-making is required to defeat such threats.

• Target identification – AI drives the facial recognition systems used to identify individuals, and even ethnic groups, as potential targets.

• Making sense of non-conventional warfare or terrorism – The emergence of driverless cars in the last few years has created another means of carrying out devastating attacks without risk to the terrorist’s life. Drones, piloted remotely or pre-programmed to fly to a given location, add yet another layer of threat. As someone put it: in every garage, a bomb.

In 2025, many states use artificial intelligence in cyberattacks and cyber defences, including the US, UK, Israel, Russia, China, Iran and North Korea, among many others. These states have been investing billions in AI-enabled cyberattack and cyber defence systems.

Powered by AI, cyberattacks occur more rapidly and widely. As governments and companies have learned the hard way, they need to invest in cyber defences, in making their organisations more resilient and in raising public awareness of the risks of being manipulated. Governments and companies are spending billions of euro in 2025 to increase their cyber expertise and defences.

Attackers use smart information systems (SIS) that combine AI and big data in multiple ways. They use AI-powered bots to game the algorithms used in other systems. They use driverless cars as bomb-delivery vehicles, an alternative to human suicide bombers. They use AI in stealth weapons, in pattern recognition and in deepfake technologies. The latter are particularly insidious, as they make it impossible for ordinary citizens to know whether they are being fed facts or fabrications.

Autonomous weapons systems, including killer drones, drone swarms, robot soldiers, and crewless submarines and tanks, have been developed in many countries, including China, Israel, Russia, South Korea, the United Kingdom and the United States. AI powers many routine tasks, which has improved mission times and the precision of attacks on targets. In 2025, warfare is fought by highly intelligent, inscrutable algorithms that speak convincingly of things that never happened, producing “proof” of events that never took place.

AI is used to automatically create personalised phishing e-mails for social engineering attacks to target thousands of people at the same time. AI is used to mutate malware and ransomware more easily, and to search for and exploit vulnerabilities in all kinds of systems. AI produces complex and highly targeted scripts at a rate and level of sophistication far beyond any individual human hacker.

Figure 6 Phishing

Cyber defenders also use AI to process large volumes of data to help detect attacks against critical infrastructures. The big social media companies claim they identify millions of fraudulent or malicious accounts per day. In addition to detection, AI is used in forensics and fault diagnostics. In 2025, soldiers interpret information faster and more quickly recognise threats – such as a vehicle-borne improvised explosive device, or potential danger zones in aerial war-zone images. Artificial intelligence and machine learning make decisions about what to attack, whom to attack and when to attack.
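
To illustrate the kind of detection that defenders rely on, here is a minimal sketch of anomaly detection over network flow features, using scikit-learn’s IsolationForest. The features, values and threshold are illustrative assumptions, not a description of any real deployment.

```python
# Minimal sketch: flagging anomalous network flows with an Isolation Forest.
# The features (bytes sent, packets/s, distinct ports) are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" traffic: [bytes_sent, packets_per_sec, distinct_ports]
normal = rng.normal(loc=[5_000, 40, 3], scale=[1_000, 8, 1], size=(1_000, 3))

# A handful of suspicious flows: large transfers, high rates, port scanning
suspect = rng.normal(loc=[80_000, 400, 60], scale=[5_000, 50, 10], size=(5, 3))

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# predict() returns +1 for inliers and -1 for anomalies
for flow in suspect:
    label = model.predict(flow.reshape(1, -1))[0]
    score = model.decision_function(flow.reshape(1, -1))[0]
    print(f"flow={np.round(flow, 1)} label={label} score={score:.3f}")
```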

In 2025, the speed of cyber warfare has increased exponentially – we leave some decisions to machines because people can’t decide quickly enough. Hence, military planners and critical infrastructure operators have incorporated AI into many operations. These systems excel at performing tasks, but they have heretofore lacked the ability to tell users why one decision is better than another, making some of their recommendations seem arbitrary or unreliable.

Universities have been contributing to the debate on AI in cyber warfare by developing artificial moral agents that can distinguish between good and bad and that can explain what they do. Users can ask their smart information systems why the systems accepted some recommendations and rejected others. AI scientists, sponsored by governments and universities, have gone beyond developing explainability criteria to developing a facility that lets scientists debate with AI systems the correct response to a cyberattack.

Applications in 2025

In their cyber war against the West, some foreign powers have been using artificial-intelligence-based applications to sow discord, spread disinformation, polarise society, attack critical infrastructure, including health systems, smart grids and nuclear power plants, and generally disrupt society, especially in NATO countries. Some foreign powers use AI in the automated reconnaissance of foreign networks, extraction of actionable intelligence, and subversion of adversaries’ decision-making processes.

At least one foreign power uses AI systems to study the behaviour of social network users, and then designs and implements its own phishing bait. The artificial hacker composes and distributes more phishing tweets than a human can, and with a substantially better conversion rate.

The foreign power uses bots to hijack the public’s perceptions of events and news. Bots, trolls and sockpuppets can invent new ‘facts’ out of thin air, leading to a polarised society and a culture of mistrust.

A few companies have developed software to proactively detect and identify the bad guys from the moment they engage with client brand channels or protected accounts. Under pressure from civil society organisations and parliamentary committees, and in order to build trust with the public, those few companies have developed algorithms that explain why they took a particular action.

In their own defence, some companies use AI to find and remove content from rogue governments, organised crime and terrorists from their websites and platforms. They use image-matching software to identify and prevent photos and videos from known terrorists from popping up on other accounts. They use machine-learning algorithms to look for patterns in terrorist propaganda. The big five companies (Amazon, Apple, Facebook, Google, Microsoft) have developed a shared database that documents the digital “fingerprints” of terrorist organisations.
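
The shared “fingerprint” database described above is typically built on perceptual hashing, which survives re-encoding and minor edits better than exact cryptographic hashes. Below is a minimal sketch using the open-source imagehash library; the file names and the match threshold are illustrative assumptions.

```python
# Minimal sketch: matching an uploaded image against a shared database of
# perceptual hashes. File names and the threshold are illustrative only.
from PIL import Image
import imagehash

# Hashes of previously flagged images (in practice, a shared industry database)
known_hashes = [
    imagehash.phash(Image.open("flagged_photo_1.png")),
    imagehash.phash(Image.open("flagged_photo_2.png")),
]

def is_known_match(path: str, max_distance: int = 8) -> bool:
    """Return True if the image's perceptual hash is within max_distance
    bits (Hamming distance) of any hash in the shared database."""
    candidate = imagehash.phash(Image.open(path))
    # imagehash overloads subtraction to return the Hamming distance
    return any(candidate - known <= max_distance for known in known_hashes)

print(is_known_match("new_upload.png"))
```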

Vignette

Our vignette illustrates the ethical complexity of AI-powered cyber warfare in 2025. In that year, the Citizens Committee Against Nuclear Power (CCANP) has been campaigning against even a minimal proliferation of nuclear power plants, because the energy they produce will cost at least twice as much as renewables over the plants’ 30-year life expectancy. To sweeten the deal with the high-cost and dangerous nuclear industry, the government has said it will bear the cost of disposing of the nuclear waste, which will, of course, remain radioactive for thousands of years and threaten water supplies when it eventually burns through its glass and steel coffin in abandoned salt mines hundreds of metres below ground.

The CCANP has said it will attempt to disable a power plant to show its opposition to the use of nuclear power. Meanwhile, the security services have detected rogue software embedded in the operating systems of many nuclear power plants, including the one targeted by the CCANP. Until recently, the software had only been monitoring the power plants’ operations, but it has now begun causing sporadic stoppages.

Figure 7 Cyber attacks

Suddenly, one power plant shuts down. After weeks of claiming that it intended to take the power plant down, the CCANP now claims it did not do it. Military and security strategists consider three possibilities:

  • The activists did it.
  • A foreign power did it using the activists’ threats as cover.
  • Someone else did it and left fingerprints that would point to the foreign power.

The government wants to strike back, the military wants to strike back, the nuclear power plant operator wants to strike back, the majority of public opinion wants to strike back, but none is sure against whom they should strike. Soon after, a second nuclear plant is disabled. Critics say the government has not done enough to repel such attacks. Some critics say a counter-attack is now justified, but they are still none the wiser about who is behind the attacks.

Meanwhile, the leader-for-life of a foreign power has denied accusations in The Guardian that it was responsible. Instead, he claims it’s all a Western plot to discredit the foreign power. Trolls and bots ensure the leader’s denials ricochet around the Internet, overwhelming any rebuttals.

Questions

Drivers

In 2025, many actors are engaged in cyberwar. Most are not in uniform. Governments, organised crime, terrorists and big companies engage in cyberattacks for a variety of reasons, including the following:

Political drivers

Nation states have been at real or de facto war. The foreign power mentioned above has been attacking many countries with the aim of disrupting them and, supposedly, strengthening its own power. In fact, the foreign power, with its armies of cyberwarriors, has become the most powerful nation on the planet, even though the economies of China and the US still dwarf that of the foreign power.

Cost savings in asymmetric warfare

Military budgets are constrained. Investing in AI-powered attack capabilities is less expensive than recruiting, training and maintaining fleets of aircraft and ships. If a few clicks on the keyboard can take down a power plant, is there any need for a bomber aircraft? With cyber weapons, armies don’t need tanks, driverless or not, manned or not. A few hackers can cause millions of euro in damage. In AI-powered, asymmetric warfare, a few Davids can take down military Goliaths. AI is a game changer.

“Because we can”

The technological imperative is inexorable. Technology marches on. The tools for information warfare are widely available on the dark web. Because they are, malefactors take advantage of them to wreak havoc, come what may. Meanwhile, big companies have invested in many different AI technologies and seek to sell their products no matter what consequences they produce. The technological environment evolves rapidly.

Trust and mistrust

AI-powered disinformation campaigns erode the public’s trust in their governments. People do not know what is true and what is made up. Malefactors use automated social engineering techniques to manipulate populations and divide them against each other. The information society has become the disinformation society, with little accountability.

Modernisation of the military and intelligence agencies

The military, heretofore slow to recognise the change in state confrontation, is questioning its priorities and how to allocate its budget. Should it buy another aircraft carrier or use its budget to recruit hundreds of new cyber defenders and cyber warriors?

Fear

Fear of being overwhelmed by foreign powers, fear of defeat and fear of subjugation drive governments to invest in modern warfare. Fear of the unknown is a factor too. In the past, it was relatively easy to calculate how many aircraft or how many ships or tanks the enemy had using reconnaissance. Today, however, it is much harder to estimate how many cyber warriors the enemy has.

Questions

Barriers and Inhibitors

Several barriers or inhibitors affect the pace of development of information warfare technologies in 2025.

Shortage of people with information warfare expertise

The big five have secured their grip on the world’s economy by recruiting many of the world’s data scientists. Government cybersecurity agencies are unable to match the salaries of the big five and struggle to find relevant expertise in information warfare.

Budgetary shortfalls

The cost of information warfare is high. Although the military does not need to struggle as much as the police to get an adequate share of the national budget, it does not get everything it needs. Other national and international demands compete for a share of national budgets. Climate change is having a devastating impact not only on the environment – leading to droughts, failed agricultural yields, flooding of coastal cities, wildfires and super hurricanes – but also on national budgets, which must contend with its ravages.

Black swan events

Black swan events – the x factor – constrain information warfare. The unauthorised release in January 2025 of e-mails between several large defence contractors revealed how they were stimulating warfare, conspiring to create crises to persuade politicians that governments should spend more on cyber defence. Not surprisingly, the public turned against the companies and demanded that politicians end their ties with the offenders.

Climate change

Humanity’s destruction of the planet has begun to affect the extent to which countries engage in information warfare. By 2025, the ravages of climate change are felt everywhere, concentrating minds globally on the need for more international co-operation (and fewer cyberattacks) if civilisation is not to be completely undone.

Questions

Ethical, legal, social and economic impacts

Ethical impacts

Unintended consequences

Information warfare raises moral issues. The US and Israel developed Stuxnet specifically to target Iran’s centrifuges, but an unintended consequence was the eventual release of the software into the wild, where it infected “thousands of computers across the world that had nothing to do with Iran or nuclear research”. Hence, critics in the US and Europe have questioned the development of cyber weapons, especially those that could cause collateral damage or have unintended consequences.

Some civil society organisations and leftist politicians call for a strategic and moral re-allocation of national priorities, away from combatting other countries and towards the collective challenge facing civilisation from the ravages of climate change.

Employee pressures

Employees of the big five have pressured senior executives not to engage in the development of cyber weapons. Employee unions have successfully called upon senior management to adopt codes of ethics and codes of acceptable corporate practice. The big companies are willing to adopt such codes because doing so helps them forestall stricter regulatory oversight, while they know that ethical principles are written broadly enough that they need not limit the company’s ambitions, no matter what those ambitions might be. The codes enable the companies to hide behind their veils and pay lip service to corporate social responsibility and ethics.

Autonomous decision-making

For many researchers, giving machines the decision over who lives and dies crosses a moral line. A key ethical issue remains: how much autonomy should AI solutions have? Informed opinion is divided: some say information warfare requires instant decision-making that obviates the possibility of human intervention. Others say that some untoward events involving AI (e.g., driverless cars causing fatal accidents, robots turning on their ‘masters’ or malfunctioning drones blowing up school buses) show that human intervention must always be possible. In any event, there is widespread agreement among stakeholders and the public on the explainability principle (algorithms must be able to explain what they are doing and whom to contact for more information), even if the principle is difficult to implement.
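
As a minimal sketch of what the explainability principle can look like in practice, the snippet below decomposes a single decision of a linear model into per-feature contributions, so a user can see which inputs drove a recommendation. The model, feature names and data are invented for illustration; the systems described in the scenario would be far more complex.

```python
# Minimal sketch of the explainability principle: decompose a single
# decision of a linear model into per-feature contributions.
# Feature names and data are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["failed_logins", "outbound_mb", "odd_hour_activity"]

# Synthetic training data: label 1 = "recommend countermeasure"
X = rng.normal(size=(500, 3))
y = (X @ np.array([2.0, 1.0, 0.5]) + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Explain one decision: for a linear model, the logit is
# intercept + sum_i(coef_i * x_i), so coef_i * x_i is feature i's contribution.
x = np.array([2.5, 0.3, -1.0])
contributions = model.coef_[0] * x
print(f"decision: {'act' if model.predict(x.reshape(1, -1))[0] else 'wait'}")
for name, c in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
    print(f"  {name:>18}: {c:+.2f}")
```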

When to retaliate and what is a proportionate response?

For several years, there has been much debate about when to retaliate against cyberattacks and who should do so. The US and European governments have warned companies and citizens not to take the law into their own hands: they should share any information about attacks they’ve suffered with others in their sector and, especially, with national cybersecurity agencies. But this policy has not been an adequate response, in part because there are so many cyberattacks and because national cybersecurity agencies are unable to defend companies and citizens against all of them. Companies and governments have therefore adopted a different policy: it is acceptable to retaliate in certain circumstances. Government officials and companies have set up working groups to debate in which circumstances retaliation is justified and how measured the response should be to different types of attack. How should we act when we have only 75 per cent certainty about who is likely responsible for a cyberattack?
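
One way to make the 75 per cent question concrete is a toy expected-value calculation: retaliate only when the expected gain from deterring the true attacker outweighs the expected harm of striking the wrong party. The numbers below are invented purely for illustration and carry none of the scenario’s authority.

```python
# Toy calculation: should we retaliate at 75% attribution confidence?
# All values are invented, illustrative "utility" units, not real estimates.
def expected_value_of_retaliation(p_correct: float,
                                  deterrence_gain: float,
                                  wrong_target_cost: float) -> float:
    """Expected value = p * gain - (1 - p) * cost of hitting the wrong party."""
    return p_correct * deterrence_gain - (1 - p_correct) * wrong_target_cost

p = 0.75
gain = 100.0        # benefit of deterring the true attacker
cost = 400.0        # escalation/diplomatic cost of striking an innocent party

ev = expected_value_of_retaliation(p, gain, cost)
print(f"expected value at p={p}: {ev:+.1f}")   # -25.0 -> don't retaliate

# Break-even attribution confidence: p* = cost / (gain + cost)
p_star = cost / (gain + cost)
print(f"retaliation breaks even only above p = {p_star:.2f}")  # 0.80
```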

A dangerous space

Information warfare involves virtually everyone using the Internet, either as a victim or a warrior. The days of the uninvolved, unattached surfer have long gone. The Internet has become a dangerous space that you enter at your own risk. Decision-makers, from parents to parliamentarians, are confronted with ethical dilemmas every day. Should children and vulnerable people be advised to limit their use of the Internet to the absolute essentials? Should they be trained to recognise aggression and how to respond? How do we spot manipulation? Should we embed algorithms with morality – i.e., to do good and to shun evil – when questions inevitably arise about what is good and what is evil?

Loss of the high road, loss of trust

Foreign powers say that those in the US and Europe who cast aspersions on the conduct of information warfare by foreign powers are hypocrites, as the US and its NATO allies have been caught deploying AI-powered malware themselves.

With so many countries engaged in information warfare to a greater or lesser extent, trust between countries has been a casualty. Foreign powers may deny they are responsible, but the evidence shows otherwise.

Questions

Legal impacts

The prevalence of artificial intelligence in information warfare raises many legal issues and has many impacts in 2025.

Definition of warfare

The definition and scope of warfare has been the subject of much debate in the US and Europe. If an aircraft from a foreign power bombed a nuclear power plant in Connecticut, the US would rightly view such action as an act of war and retaliate accordingly. However, if some foreign power’s malware disabled the plant, the reaction of the US might not be so clear.

The European Commission has been reluctant to fund military research on the use of AI in its recently concluded Horizon Europe (HE) research programme. It has been relatively easy to proscribe use of the HE research budget to fund the advancement of killer robots, killer drones and autonomous submarines. However, it has been harder to draw red lines against cyber weapons: the distinction between defensive tools and offensive weapons has blurred, given the potential of many tools and information systems for dual use.

Figure 8 Cyber weapons

Liability

With so many players engaged in warfare, ascribing liability continues to pose legal challenges. Some argue that policymakers who turn a blind eye should be held as liable as the companies that develop algorithms that power bots, denial-of-service attacks, ransomware and other malware. Flawed policies lead inexorably to an amplification of warfare. Others blame politicians, the military-industrial complex, right-wing ideologues, and the media for fuelling fears. Still others ask, who should be liable when AI acts on its own? The programmer? The data scientist? The copyright or patent owner? The supplier of the technology? The service provider? Or perhaps the owner of the dataset(s) on which the algorithm was trained? Will companies pursue certain technologies if they are held liable for their misuse?

The multiplicity of players in the AI chain dilutes accountability. Although there are no formal declarations of war, governments, big companies and rogue actors are engaged in information warfare with consequences every bit as deadly as if a foreign aircraft flew over the proverbial nuclear power plant in Connecticut and blew it to smithereens.

New legislation, new regulation

The EU’s General Data Protection Regulation (GDPR) and Police Directive have generally proved remarkably fit for purpose and address most aspects of data, which fuels AI. However, AI raises more than data protection issues; it raises a range of ethical, social, political, economic and other issues too. Data protection authorities have engaged in some mission creep, expanding their remit from regulating pure data protection issues to addressing ethical issues too. Even so, some European regulators have recognised that AI requires special legislation and regulation. On the recommendation of the European Commission, the EU Council and Parliament created a new European Regulatory Agency for AI (ERAAI) in 2024, following six years of intense debate about the agency’s remit and purview. In the end, the vast power of the big five convinced legislators of the need for some political control over the power those companies wield through their algorithms.

Rules of information warfare

In 2004, the UN set up the Group of Governmental Experts on Information Security to agree voluntary rules for how states should behave in cyberspace. Its fifth meeting, in 2017, ended in a stand-off. The group could not reach consensus on whether international humanitarian law and existing laws on self-defence and state responsibility should apply in cyberspace. The stand-off continues in 2025.

One country attempted to promote an international treaty on the rules of engagement in cyber warfare, under which power grids and water infrastructure would be off-limits to attack. Despite favourable coverage in much of the world’s media, few countries were willing to subscribe to a treaty that limited their powers. In any event, by the time the government mooted such a treaty in 2020, all of the major cyber powers had already embedded malware in their enemies’ power grids.

The UK and some other countries have declared that they view the use of cyber technologies to interfere in another state’s elections as contravening international law and norms; consequently, affected states should take whatever action they see fit.

The legal limits of solidarity

Article 5 of the Washington Treaty, which established NATO in 1949, states that “The parties agree that an armed attack against one or more of them in Europe or North America shall be considered an attack against them all.” The limits of solidarity were amply illuminated when Russia launched a denial-of-service attack against Estonia in April 2007, hitting banks, media web pages and government websites. Estonia called upon NATO for assistance, but the other members did not think Article 5 applied.

Foreign powers and other malefactors have been successful in exploiting the general perception that cyber war is somehow different from conventional war even though the consequences may be the same or, in many cases, much worse, spilling outside defined battlefields and traditional war zones.

Governments have been cautious about attributing attacks, in part because their origin can be hard to trace, as depicted in our vignette, and in part because they have not wanted to reveal how they have tracked or penetrated the groups responsible. But the US, UK, Canada, Australia, France and other countries changed their tactics several years ago and began naming the perpetrators of cybercrimes and the countries behind them.

Questions

Social impacts

The threat of counterstrike requires knowing who launched the initial attack, a difficult thing to prove in cyberspace, especially in a fast-moving crisis, as in our vignette. Deterrence does not work in all circumstances, e.g., where non-state actors are major players in cyberspace. Not all are rational or predictable actors.

Many voters regard a foreign power’s flagrant manipulation of elections as an act of war. Warfare is not just about blowing up bridges and railway lines anymore; it is also about discrediting politicians, planting vast amounts of misinformation, so that voters and the public are unable to distinguish truth from lies. A lie repeated hundreds of times is more powerful than a fact-checker repeated once.

One of the main defences in a state of information war is surveillance. We should expect surveillance to increase, but by 2025 there has already been so much surveillance that most people are not concerned by more. A decade ago, there was serious opposition to national biometric databases holding records of everyone’s DNA, fingerprints and photo identity. Now, not so much.

Social cohesion has been a major casualty of information warfare. People don’t know whom to trust or what to trust, even if they are aware of the political struggles underlying information warfare. Some would argue that individual autonomy has been another casualty. If citizens’ voting intentions can be swayed by information warfare, autonomy is so much roadkill.

Questions

Economic impacts

The cost of recruiting a cyberattacker is relatively low compared to the cost to organisations of defending themselves against attacks. The market for AI-powered cybersecurity applications has soared from $1 billion in 2016 to more than €25 billion in 2025. In other words, there is a huge asymmetry between the cost of attack and the cost of defence. Despite the huge expenditures, the UK Parliament’s Public Accounts Committee and the US Government Accountability Office (GAO) have revealed that nearly all of the allies’ weapons systems have cybersecurity vulnerabilities.

Cybersecurity represents a major cost to all organisations. On the other hand, the cybersecurity industry is a correspondingly big employer, with a high-tech workforce for whom there is big demand no matter how high salaries are. The soldiers in the information wars of 2025 seem light-years away from the raw soldiers who fought in the trenches of World War 1. 2025’s soldiers use their minds more aggressively, and they come at a price.

Figure 9 Cybersecurity

Information warfare encompasses not only nation-states but also big companies attacking their rivals whether they are in the US, Europe, China or anywhere else. Artificial intelligence has made cyberattacks such as identity theft, denial-of-service and password cracking more powerful and more efficient. AI systems can steal money, cause emotional harm and kill people. They can deny power supply to hundreds of thousands of people, shut down hospitals and compromise national security.

AI helps states and their attackers customise attacks. AI systems help gather, organise and process large databases to connect data points, making attacks easier and faster to carry out. That reduced workload may drive perpetrators to launch lots of smaller attacks that go unnoticed for a long period of time – if detected at all – due to their more limited impact. AI systems draw information together from multiple sources to identify people who are particularly vulnerable to attack.


Questions

Recommendations for a desired future and avoiding an undesired future

In this section, taking into account our scenario, we present recommendations to EU and Member State policymakers to help us (as a society) reach the future we want in 2025 and avoid the future we don’t want.

Other countries in the EU should emulate the actions of Estonia and Sweden to create “whole-of-nation” efforts intended to inoculate their societies against viral misinformation, including citizen education programmes, public tracking and notices of foreign disinformation campaigns and enhanced transparency of political campaign activities, so that citizens are informed about efforts to undermine their democracies.

The European Commission should recognise that cyberattacks are a form of warfare – information warfare, but no less warfare for that. The EC should define cyber warfare. Its definition should cover attacks by nation states, crime gangs and terrorists against critical infrastructures, and the impact of such attacks on society and major social groups.

Governments should reveal the full extent of cyberattacks, where they can be traced, but it is not sufficient to merely expose a rogue state’s conduct; law enforcement authorities should seek to arrest those who broke the law. Some retaliatory action is needed. For example, in the vignette, in retaliation for the shut-down of the two nuclear power plants in the UK in 2025, the US and UK could demonstrate their ability to turn off the power in the foreign power’s capital city with a one-minute black-out. They could threaten a longer black-out if the foreign power continues to attack their nuclear power plants. Other forms of retaliation are possible too, e.g., exposing the wealth of the foreign power’s leader hidden in the vaults of Zurich, the Cayman Islands and other such havens. Exposing what the leader-for-life and his cronies do for entertainment is another form of retaliation.

The EC should provide funding for studies on information warfare via the European Defence Fund and the forthcoming Horizon Europe research programme – in particular, on how AI is being used to spread misinformation, hate crimes and lies, especially to undermine elections, how to assess the resulting social impacts, and what the EU should do about such activity.

European policymakers should not be in reactive mode to the impacts of AI in information warfare. They should be proactive, considering a wide range of measures, including offensive measures against individual attackers sponsored by governments as well as against those governments themselves. The EC, ENISA, national cybersecurity agencies and industry should develop a co-ordinated strategy for countering attacks against individual companies and for determining to what extent those companies can engage in retaliatory activities. Big companies, such as Amazon, Apple, Facebook, Google and Microsoft, are more capable than most countries of taking aggressive action against entities abusing the Internet and engaged in misinformation campaigns and cyberattacks, such as the attack on the nuclear power plants in the UK depicted in the vignette. Governments alone do not have the resources to counter all attacks, but there should be consensus in the EU and elsewhere on the instances in which companies can engage in offensive strategies.

Compared with traditional armed conflict, the rules of information warfare are not well-defined. The European Commission and/or the United Nations should develop such rules, especially applicable to the private sector. We need the information warfare equivalent of the Budapest Cybercrime Convention.

Tech firms need to step up investment in content moderation; take down those engaged in harassment and foreign influence operations; test their products for dual-use capabilities before they are deployed, not just for cybersecurity vulnerabilities but for misuse by attackers; label bots so that humans can tell when they are interacting with a machine online; and implement measures to foil the next generation of AI used in sophisticated chatbots and faked imagery.

Politicians and diplomats should call for an end to information warfare, so that more resources can be channeled to combatting climate change.

Questions
