Security

What is the problem?

Most safeguards for ethics and human rights in AI rely on the integrity and reliability of technical systems. However, AI systems may contain security vulnerabilities and be subject to novel attacks. For example, digital services based on machine learning models can be targeted by model poisoning or model inversion attacks. The technical security of AI-powered systems is therefore a necessary condition for their robustness and reliability, and thus an enabler of ethical and human rights safeguards.
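
As a purely illustrative sketch (not part of the Recommendation), the Python fragment below shows how even a simple poisoning attack, here flipping the labels of a small fraction of training records, can degrade a model. The toy dataset, the logistic-regression model, and the 15% poisoning rate are assumptions chosen only for illustration.

```python
# Illustrative sketch only: a minimal label-flipping poisoning attack against a
# toy classifier. All choices (dataset, model, poisoning rate) are hypothetical.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline model trained on clean data.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Attacker flips the labels of a small, randomly chosen subset of training points.
rng = np.random.default_rng(0)
poison_idx = rng.choice(len(y_train), size=int(0.15 * len(y_train)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[poison_idx] = 1 - y_poisoned[poison_idx]

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("test accuracy, clean training data:    ", clean_model.score(X_test, y_test))
print("test accuracy, after label-flip attack:", poisoned_model.score(X_test, y_test))
```

Model inversion attacks, by contrast, do not corrupt the model but attempt to reconstruct sensitive information about the training data from a deployed model's outputs.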

Who should act?

Designers, developers, integrators, and operators of AI systems.

The Recommendation

To undertake bespoke analysis of security threats and risks for AI-powered systems.

  • Careful and comprehensive threat enumeration and analysis are a crucial prerequisite for designing effective AI system protection methods (different systems, and different attacks against them, often require different protection approaches).
  • Careful analysis of assumptions about training data is important (for instance, what can realistically be assumed about the training data distribution across digital service customers); an illustrative check of such an assumption is sketched after this list.
  • Real-time monitoring and analysis of external inputs are often required when attacks against AI systems can cause substantial harm (in many scenarios, operational resilience is hard to ensure through design-time and implementation-time efforts alone).
  • A managed monitoring service should be considered when the precision or confidence of an automated real-time monitoring system in identifying malicious inputs is not sufficiently high; a sketch of such an escalation policy follows this list.
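
As referenced in the second bullet above, the following minimal sketch shows one way to test an assumption about training data by comparing customer-supplied data against a trusted reference sample, feature by feature. The data, the two-sample Kolmogorov-Smirnov test, and the 0.01 significance threshold are illustrative assumptions rather than prescribed choices.

```python
# Illustrative sketch: checking an assumption about training data distribution.
# Customer-supplied training data are compared against a trusted reference
# sample; the KS test and the 0.01 threshold are example choices only.
import numpy as np
from scipy.stats import ks_2samp

def flag_distribution_shift(reference: np.ndarray, incoming: np.ndarray,
                            p_threshold: float = 0.01) -> list[int]:
    """Return indices of features whose incoming distribution differs
    significantly from the reference distribution."""
    flagged = []
    for feature in range(reference.shape[1]):
        statistic, p_value = ks_2samp(reference[:, feature], incoming[:, feature])
        if p_value < p_threshold:
            flagged.append(feature)
    return flagged

# Example: incoming data with one shifted feature.
rng = np.random.default_rng(0)
reference = rng.normal(size=(1000, 5))
incoming = rng.normal(size=(200, 5))
incoming[:, 2] += 3.0  # a shift an attacker (or a broken pipeline) might introduce
print("features flagged for review:", flag_distribution_shift(reference, incoming))
```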
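
For the last two bullets, the sketch below illustrates, under purely hypothetical choices of detector (an Isolation Forest trained on benign traffic) and thresholds, how a real-time monitor might accept clearly benign inputs, block clearly malicious ones, and escalate borderline cases, where its confidence is low, to a managed monitoring service for review.

```python
# Illustrative sketch: real-time monitoring of external inputs with escalation.
# An anomaly detector scores each incoming request; clearly benign inputs pass,
# clearly anomalous ones are blocked, and borderline cases are routed to a
# managed review queue. All names and thresholds are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
training_inputs = rng.normal(size=(5000, 10))       # stand-in for benign traffic
monitor = IsolationForest(random_state=0).fit(training_inputs)

BLOCK_BELOW = -0.15     # confidently anomalous -> reject
ESCALATE_BELOW = -0.05  # uncertain -> send to managed monitoring / human review

def handle_request(features: np.ndarray) -> str:
    # Higher decision_function scores indicate more "normal" inputs.
    score = monitor.decision_function(features.reshape(1, -1))[0]
    if score < BLOCK_BELOW:
        return "blocked"
    if score < ESCALATE_BELOW:
        return "escalated to managed review"
    return "accepted"

print(handle_request(rng.normal(size=10)))   # typical input
print(handle_request(np.full(10, 8.0)))      # far outside the benign distribution
```

In practice, such thresholds would be calibrated against observed traffic and against the potential harm of a missed attack, rather than fixed a priori as in this example.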