It is 2025 and weaponry has evolved yet again: from bows and arrows to nuclear bombs, and now to algorithms. Nation states have been using AI in multiple ways. AI-powered bots game the algorithms used in other weapon systems. Driverless cars serve as bomb delivery vehicles, an alternative to suicide bombers. Many countries have developed autonomous weapons systems, including killer drones, drone swarms, robot soldiers, and tanks and submarines that operate without a crew. They use AI in stealth weapons, pattern recognition and deepfake technologies. The last of these is particularly insidious: deepfakes have made it impossible for ordinary citizens to know whether they are being fed facts or fabrications.
Who would have thought that by 2025 warfare would be waged by highly intelligent yet incomprehensible algorithms that speak convincingly of things that never happened, producing “proof” that does not really exist? Armies of soldiers now sit at keyboards, wreaking havoc thousands of kilometres away with a few keystrokes. In the days of bows and arrows, and even of nuclear bombs, we knew who the warring states were. In cyber war, such certainty is far harder to come by. Powered by AI, cyberattacks occur more rapidly and spread more widely, and cyberattackers can easily cover their online tracks.
The nature of cyberattacks is also changing. Attackers no longer target just critical infrastructure; they attack whole populations, seeking to disrupt public opinion and electoral processes. Civilians are no longer collateral damage in 2025; civilians are the targets and victims of cyber warfare.
As governments and companies have learned the hard way, they need to invest in cyber
defences, in making their organisations more resilient and in public awareness of the risks of
being manipulated. Governments and companies are now spending billions of euros to
increase their cyber expertise. Cyber defenders use AI to process large volumes of data to
help detect attacks. The big social media companies claim they can identify millions of
fraudulent or malicious accounts per day, but attackers are like the Hydra: for every head
cut off, more spring up by the day.
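At its simplest, using AI to sift large volumes of data for attacks comes down to flagging statistical outliers in traffic. The following is a minimal sketch of that idea; the sensor readings and the threshold are invented for illustration, not drawn from any real defence system:

```python
import statistics

def detect_anomalies(event_rates, threshold=2.0):
    """Return indices of rates more than `threshold` standard
    deviations from the mean (a crude outlier test)."""
    mean = statistics.fmean(event_rates)
    stdev = statistics.pstdev(event_rates)
    if stdev == 0:
        return []  # all rates identical: nothing stands out
    return [i for i, rate in enumerate(event_rates)
            if abs(rate - mean) / stdev > threshold]

# Hypothetical requests-per-minute from eight network sensors;
# the seventh sensor is being flooded.
rates = [120, 115, 130, 118, 125, 122, 5000, 119]
print(detect_anomalies(rates))  # → [6]
```

Real detection systems are of course far more sophisticated, learning models of normal behaviour over many features, but the principle is the same: automate the search for deviations that no human analyst could spot in time across billions of events.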
Military planners and critical infrastructures have incorporated AI into many operations. The
speed of warfare has increased exponentially. Cyberattacks require almost instant decision-making, beyond the reach of most humans, who simply cannot decide quickly enough. In 2025, artificial intelligence and machine learning make the decisions about what to attack, whom to attack and when to attack.
AI-powered systems excel at performing tasks, but they can’t tell operators why one decision
is better than another. Some operators have raised concerns that the systems’ recommendations seem
arbitrary or unreliable. In 2025, a university has developed artificial moral agents that can
distinguish between good and bad and that can explain what they do. Users can ask their
smart information systems about why the systems accepted some recommendations and
rejected others. AI scientists, sponsored by governments and universities, have gone beyond
developing explainability criteria to developing a facility so that scientists can debate with AI
systems the correct response to a cyberattack. The military have yet to adopt the university’s
algorithm, but are under public pressure to do so.
In 2025, even old antagonists such as the US, Russia and China have recognised that their cyber clashes are costing them dearly, not only in funding but also in time and in political and social capital.
Climate change has also become a huge factor in sharpening people’s minds about the
pointlessness of cyber clashes when the whole planet is at risk. People have grown sick to death of the situation; hence the big powers have begun discussing a treaty respecting each other’s cyber sovereignty and collaborating in discovering, repelling and punishing rogue non-state actors.
Now, please answer the following questions...