Security Issues, Dangers and Implications of Smart Information Systems (SIS)

Definition / Focus

While many risks, weaknesses, and dangers of smart information systems (SIS, the combination of artificial intelligence and big data) are demonstrated today only in academic experiments, we are already observing a growing practical interest in understanding methods and techniques for developing attacks against SIS, as well as ways of using those systems for malicious purposes. In this short overview we outline how flaws and biases might be introduced into the machine learning models powering SIS; how machine learning techniques might, in the future, be used for offensive or malicious purposes; and how SIS can be attacked and how those attacks can be mitigated. The ethical consequences of these flaws, attacks, and defences are considered, with a focus on the new issues and challenges brought by the specific characteristics and properties of SIS that differentiate them from traditional ICT systems.

Structure and Scope

This overview covers three major topics:
In the first topic, the focus is on flaws arising from incorrect assumptions and a poor understanding of the applicability of machine learning methods to a specific problem, bad design decisions, problems with training data, and mistakes in the utilization of SIS.
In the second part – applying SIS for malicious purposes – the focus is on methods of intelligent automation used to prepare and carry out attacks and crime effectively and efficiently; the use of SIS for generating and propagating fake news and disinformation; increasing the effectiveness of phishing and spam attacks; the generation of fake or maliciously modified audio and visual content for impersonation, scams, and various types of social engineering; and, finally, obfuscation techniques used by malware writers.
Adversarial attacks against SIS and defence approaches are the third topic, in which we consider the main types – and notable examples – of attacks against machine learning models: confidentiality, integrity, availability, and replication attacks. Attacker motives are also analysed in the examples, as understanding motives is crucial for selecting defence strategies. Recent work on detecting and mitigating attacks against SIS is then reviewed, with notes on the additional serious challenges that the nature of machine-learning-based systems creates for defenders.
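
To make one of these categories concrete, the sketch below illustrates a simple integrity (evasion) attack in the style of the fast gradient sign method: a small, carefully chosen perturbation is added to an input so that a classifier changes its decision. It is a minimal illustration only, assuming a hypothetical pre-trained PyTorch image classifier (model), an input batch (x), and true labels (y); it is not a description of any specific system or attack discussed in this overview.

    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, x, y, epsilon=0.03):
        """Craft adversarial inputs with the fast gradient sign method (FGSM).

        model   -- a differentiable classifier returning class logits
        x       -- input tensor, e.g. an image batch of shape (N, C, H, W)
        y       -- ground-truth labels of shape (N,)
        epsilon -- maximum per-pixel perturbation (L-infinity budget)
        """
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)  # the loss the attacker wants to increase
        loss.backward()
        # Step in the direction that increases the loss the most, bounded by
        # epsilon in every pixel, and keep pixel values in a valid range.
        x_adv = x + epsilon * x.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()

The same few lines also hint at why such attacks are attractive to adversaries: given access to a model (or a sufficiently similar surrogate), crafting a misleading input requires little more than a single gradient computation.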

Key Insights

A double-edged sword

  • Artificial intelligence has already become powerful to the point that trained models have been withheld from the public over concerns of potential malicious use. This situation parallels vulnerability disclosure, where researchers often need to make a trade-off between disclosing a vulnerability publicly (opening it up for potential abuse) and not disclosing it (risking that attackers will find it before it is fixed).
  • Machine learning will likely be equally effective for both offensive and defensive purposes (in both cyber and kinetic theatres), and hence one may envision an “AI arms race” eventually arising between competing powers.
  • Text synthesis, image synthesis, and video manipulation techniques have been strongly bolstered by machine learning in recent years. Our ability to generate fake content is far ahead of our ability to detect whether content is real or fake (a minimal text-generation sketch follows this list). As such, we expect that machine-learning-powered techniques will be used for social engineering and disinformation in the near future. Disinformation created using these methods will be sophisticated, believable, and extremely difficult to refute.
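
As a small illustration of how low the barrier to machine-generated content has become, the sketch below produces short passages of synthetic text with an off-the-shelf language model via the Hugging Face transformers library. The prompt and model choice are arbitrary examples; the snippet is meant only to show the ease of generation, not to endorse any particular tool or use.

    from transformers import pipeline

    # Load a small, publicly available text-generation model.
    generator = pipeline("text-generation", model="gpt2")

    # Generate two short continuations of an arbitrary example prompt.
    outputs = generator(
        "Officials confirmed today that",
        max_length=40,
        do_sample=True,
        num_return_sequences=2,
    )
    for out in outputs:
        print(out["generated_text"])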

High complexity of the domain and limits of our understanding

  • The capabilities of machine learning systems are often difficult for the lay person to grasp. Some humans naively equate machine intelligence with human intelligence. As such, people sometimes attempt to solve problems that simply cannot (or should not) be solved with machine learning.
  • Even knowledgeable practitioners can inadvertently build systems that exhibit social bias due to the nature of the training data used. It is usually difficult to verify whether a machine learning model contains any flaws or biases.
  • The issue of bias in training data and trained models is often complicated and confusing. As bias may lead to undesirable, or even illegal, discrimination, reducing bias appears to be a natural regulatory goal. It is, however, important to understand that while we definitely do not want our models to amplify bias present in their training data, artificially removing bias may produce models that simply do not reflect reality, resulting in AI systems with poor predictive capabilities (a simple illustration of measuring bias follows this list).
  • Public services exist that are powered by flawed machine learning models. People use these systems without understanding that they are flawed. This problem exists due to the inherent complexity of the field.
  • As AI capabilities, challenges, and risks are hard for many social scientists, policy-makers, and the general public to understand well, the public debate around AI sometimes leads to confusion and hype, slowing down progress in many AI ethics-related matters.
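
To illustrate why even measuring bias is not straightforward, the sketch below computes one deliberately simple fairness measure, the demographic parity difference – the gap in positive-decision rates between two groups – for a hypothetical classifier's outputs. The data are invented for the example, the grouping is binary only for simplicity, and a small value on this single metric does not mean a model is free of bias or unlawful discrimination.

    import numpy as np

    def demographic_parity_difference(predictions, group):
        """Gap in positive-decision rates between two groups (0 and 1).

        predictions -- array of 0/1 model decisions
        group       -- array of 0/1 group membership for each decision
        """
        predictions = np.asarray(predictions)
        group = np.asarray(group)
        rate_group_0 = predictions[group == 0].mean()
        rate_group_1 = predictions[group == 1].mean()
        return abs(rate_group_0 - rate_group_1)

    # Hypothetical decisions from a trained model and the group of each case.
    decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
    groups    = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
    print(demographic_parity_difference(decisions, groups))  # roughly 0.2

Notably, different fairness metrics can be mutually incompatible, which is one reason why "just remove the bias" is rarely a well-defined instruction.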

Growing motivation and capabilities of attackers

  • As artificial-intelligence-powered systems become more prevalent, it is natural to assume that adversaries will learn how to attack them.
  • As we witness today in conventional cyber security, complex attack methodologies and tools initially developed by highly resourced threat actors, such as nation states, eventually fall into the hands of criminal organizations and then common cyber criminals. This same trend can be expected for attacks developed against machine learning models.
  • The understanding of flaws and vulnerabilities inherent in the design and implementation of systems built on machine learning, and the means to validate those systems and to mitigate attacks against them, are still in their infancy, complicated – in comparison with traditional systems – by the lack of explainability to the user, heavy dependence on training data, and often frequent model updating. This field is attracting the attention of researchers and is likely to grow in the coming years. As understanding in this area improves, so too will the availability and ease of use of tools and services designed for attacking these systems.

Challenges for defenders

  • Adversarial attacks against machine learning models are hard to defend against because there are a great many ways for attackers to force models into producing incorrect outputs (a sketch of one common mitigation follows this list).
  • The dependence of AI algorithms and models on potentially untrustworthy data, together with the challenges of testing and validating them, makes it hard to detect and analyse flaws and vulnerabilities in AI-based systems and often significantly broadens their attack surface.
  • Many AI-based systems, especially those which are retrained regularly or near-continuously, exhibit high behavioural fluidity. Unlike with traditional cyber systems, assumptions about such AI systems that are based on their testing and validation results may be highly unreliable, and special care is required when designing and implementing methods for their protection.
  • Methods of defending machine-learning-based systems against attacks and mitigating malicious use of machine learning may lead to serious ethical issues. For instance, tight security monitoring may negatively affect users’ privacy and certain security response activities may weaken their autonomy.
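
As one example of the defender's side, the sketch below shows the core idea of adversarial training, a widely studied – though by no means complete – mitigation in which the training loop is augmented with adversarial examples. It reuses the hypothetical fgsm_perturb helper sketched earlier and assumes a PyTorch model, optimizer, and data loader; it is a simplified illustration rather than a recommended or sufficient defence.

    import torch.nn.functional as F

    def adversarial_training_epoch(model, loader, optimizer, epsilon=0.03):
        """One training epoch over a mixture of clean and adversarial inputs."""
        model.train()
        for x, y in loader:
            # Craft adversarial counterparts of the current batch, using the
            # fgsm_perturb helper sketched earlier in this overview.
            x_adv = fgsm_perturb(model, x, y, epsilon)
            optimizer.zero_grad()
            # Penalise mistakes on both the clean and the perturbed inputs.
            loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
            loss.backward()
            optimizer.step()

Even this simple approach illustrates the trade-offs noted above: it roughly doubles training cost, only hardens the model against the particular kind of perturbation used during training, and must be repeated each time the model is retrained.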

Risks stemming from business interests

  • Companies that devote substantial resources to artificial intelligence research (such as Google, Facebook, Apple, and Amazon) already have a distinct advantage over companies that do not. As those advantages pay off, the gap will continue to widen, perhaps to the point where it is no longer possible to compete in the marketplace.
  • In an effort to remain competitive, companies or organizations may forgo ethical principles, ignore safety concerns, or abandon robustness guidelines in order to push the boundaries of their work, or to ship a product ahead of a competitor.
  • Trade-offs between profit maximisation and social good (or flourishing) can, unfortunately, be complex, and the right choices may not be obvious. For example, balancing extensive investment in collecting high-quality data and thoroughly testing trained models against getting products to market on time can be a highly context-dependent task.
