Most safeguards for ethics and human rights in AI rely on the integrity and reliability of technical systems. However, these AI systems may themselves contain security vulnerabilities and be exposed to novel attacks. For example, digital services based on machine learning models can be targeted by model poisoning attacks, which manipulate training data or the training process to degrade or steer a model's behaviour, or by model inversion attacks, which reconstruct sensitive training data from a model's outputs. The technical security of AI-powered systems is therefore a necessary condition for their robustness and reliability, and an enabler of ethical and human rights safeguards.
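
To make the poisoning threat concrete, the following is a minimal sketch of its simplest variant, label-flipping data poisoning, using scikit-learn on synthetic data. The dataset parameters, the logistic-regression model, and the 10% poisoning rate are illustrative assumptions, not a description of any particular deployed service.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic binary classification task standing in for a real service's model.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

def train_and_score(labels):
    """Train on the given training labels and report held-out accuracy."""
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    return accuracy_score(y_test, model.predict(X_test))

# Baseline: model trained on clean labels.
clean_acc = train_and_score(y_train)

# Poisoning (illustrative assumption): an attacker who controls 10% of the
# training data flips the labels of those examples.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=len(poisoned) // 10, replace=False)
poisoned[idx] = 1 - poisoned[idx]
poisoned_acc = train_and_score(poisoned)

print(f"clean accuracy:    {clean_acc:.3f}")
print(f"poisoned accuracy: {poisoned_acc:.3f}")
```

Even a crude attack of this kind can measurably degrade a model's accuracy; targeted variants that poison only specific inputs are subtler and correspondingly harder to detect.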