Machine learning models are not infallible. To prevent attackers from exploiting a model, researchers have designed various techniques to make machine learning models more robust.

By Alex Saad-Falcon, content writer for PDF Electric & Supply
All neural networks are susceptible to “adversarial attacks,” where an attacker provides an example intended to fool the neural network. Any system that uses a neural network can be exploited. Luckily, there are known techniques that can mitigate or even completely prevent adversarial attacks. The field of adversarial machine learning is growing rapidly as companies realize the dangers of adversarial attacks.

An imperceptible noise attack exemplified on a free stock photo (Photo: Alex Saad-Falcon)
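To make the idea concrete, here is a minimal sketch of one classic perturbation attack, the Fast Gradient Sign Method (FGSM), which produces exactly the kind of imperceptible noise shown above. The sketch assumes PyTorch, a classifier `model` that outputs logits, and images scaled to [0, 1]; the epsilon value is illustrative, not a detail from the article.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    # Work on a detached copy of the input so we can take its gradient.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss the most, then clamp
    # back to the valid [0, 1] pixel range (an assumption about the data).
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

For a small epsilon, the perturbation is typically invisible to a human viewer, yet it is often enough to flip the model's prediction.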
We will look at a brief case study of face recognition systems and their potential vulnerabilities. The attacks and counters described here are somewhat general, but face recognition offers clear, easy-to-understand examples...
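As one example of the counters the article alludes to, below is a hedged sketch of adversarial training, a widely used defense in which the model learns from perturbed examples alongside clean ones. It reuses the `fgsm_attack` helper from the sketch above; the equal weighting of clean and adversarial loss is an illustrative assumption.

```python
def adversarial_training_step(model, optimizer, images, labels, epsilon=0.03):
    # Craft adversarial versions of the batch first; this also leaves
    # gradients in the model, so clear them before the training pass.
    adv_images = fgsm_attack(model, images, labels, epsilon)
    optimizer.zero_grad()
    # Equal weighting of the clean and adversarial losses is a simple,
    # common choice; many variants tune this mix.
    loss = (F.cross_entropy(model(images), labels)
            + F.cross_entropy(model(adv_images), labels))
    loss.backward()
    optimizer.step()
    return loss.item()
```

A model trained this way learns decision boundaries that are less sensitive to the small input perturbations that make attacks like FGSM effective in the first place.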
Conclusion
The exponential growth of data in various fields has made neural networks and other machine learning models strong candidates for a plethora of tasks. Problems that previously took thousands of engineering hours to solve now have simple, elegant solutions. For instance, the code behind Google Translate was reduced from 500,000 lines to just 500 when it switched to a neural translation model.
These advancements, however, bring the danger of adversarial attacks that exploit neural network structure for malicious purposes. To combat these vulnerabilities, robust machine learning techniques must be applied to ensure adversarial attacks are detected and prevented.
Source: InformationWeek