While recent innovations in machine learning have enabled significant improvements across a variety of computer-aided tasks, machine learning systems also present new challenges, new risks, and new avenues for attackers. Researchers should therefore consider how machine learning may shape our environment in ways that could prove harmful.
Our SHERPA project partners F-Secure, the University of Twente, and Trilateral Research have explored the implications of attacks against Smart Information Systems and how they differ from attacks against traditional systems.
Their report, Security Issues, Dangers and Implications of Smart Information Systems, examines how flaws and biases can be introduced into machine learning models, how machine learning techniques may in future be used for offensive or malicious purposes, how machine learning models can be attacked, and how such attacks can currently be mitigated.
Andrew Patel (F-Secure) recently presented the study in a webinar on 11 March 2020, as part of the project's webinar series.
Join our network to stay up to date with our news and upcoming webinars.