
Friday, April 16, 2021

How To Ensure Your Machine Learning Models Aren’t Fooled | AI/Machine Learning - InformationWeek

Machine learning models are not infallible. To prevent attackers from exploiting a model, researchers have designed various techniques that make machine learning models more robust. By Alex Saad-Falcon, content writer for PDF Electric & Supply

An imperceptible noise attack exemplified on a free stock photo (Photo: Alex Saad-Falcon)
All neural networks are susceptible to “adversarial attacks,” in which an attacker supplies an input crafted to fool the network. Any system that relies on a neural network can be exploited this way. Fortunately, there are known techniques that can mitigate, or in some cases entirely prevent, adversarial attacks. The field of adversarial machine learning is growing rapidly as companies realize the dangers these attacks pose.
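The article does not name a specific attack, but one widely known example of the idea is the fast gradient sign method (FGSM), which nudges each input feature slightly in the direction that increases the model's loss. Below is a minimal sketch on a toy logistic-regression "model" rather than a full neural network; the weights and input are illustrative placeholders, not anything from the article.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, b, x):
    """Probability that input x belongs to class 1."""
    return sigmoid(np.dot(w, x) + b)

def fgsm_perturb(w, b, x, y_true, eps):
    """Shift x by eps in the direction that increases the loss.

    For logistic regression with cross-entropy loss, the gradient of
    the loss with respect to the input is (p - y) * w.
    """
    p = predict(w, b, x)
    grad_x = (p - y_true) * w
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=8)        # toy "trained" weights (illustrative)
b = 0.0
x = w / np.linalg.norm(w)     # an input the model confidently scores as class 1

clean_p = predict(w, b, x)
x_adv = fgsm_perturb(w, b, x, y_true=1.0, eps=0.5)
adv_p = predict(w, b, x_adv)
# The perturbation is small per feature, yet the model's confidence drops,
# and with a large enough eps the predicted class flips outright.
```

The same principle scales to image classifiers and face recognition systems: the per-pixel changes are imperceptible to a human, but the accumulated shift in the model's decision can be large.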

We will look at a brief case study of face recognition systems and their potential vulnerabilities. The attacks and countermeasures described here are fairly general, but face recognition offers clear, easily understandable examples...


The exponential growth of data across many fields has made neural networks and other machine learning models strong candidates for a plethora of tasks. Problems that previously required thousands of hours of engineering now have simple, elegant solutions. For instance, the code behind Google Translate was reduced from 500,000 lines to just 500.

These advancements, however, bring the danger of adversarial attacks that exploit a neural network's structure for malicious purposes. To combat these vulnerabilities, machine learning robustness techniques must be applied so that adversarial attacks can be detected and prevented.
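The article does not specify which robustness techniques it has in mind; one standard approach is adversarial training, where each training batch is augmented with adversarially perturbed copies of its examples so the model learns to classify both. The sketch below applies this to a toy logistic-regression model on synthetic two-class data; all data, hyperparameters, and function names are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_adversarial(X, y, eps=0.2, lr=0.5, epochs=200):
    """Gradient descent on clean + FGSM-perturbed examples."""
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        # Craft FGSM perturbations against the current model:
        # gradient of the loss w.r.t. each input is (p - y) * w.
        p = sigmoid(X @ w + b)
        X_adv = X + eps * np.sign(np.outer(p - y, w))
        # Train on clean and adversarial examples together.
        X_all = np.vstack([X, X_adv])
        y_all = np.concatenate([y, y])
        p_all = sigmoid(X_all @ w + b)
        grad_w = (p_all - y_all) @ X_all / len(y_all)
        grad_b = np.mean(p_all - y_all)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Synthetic, linearly separable toy data (illustrative only).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=-2.0, size=(50, 2)),
               rng.normal(loc=+2.0, size=(50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)])

w, b = train_adversarial(X, y)
acc = np.mean((sigmoid(X @ w + b) > 0.5) == (y == 1))
```

In practice the same loop structure appears in deep-learning frameworks: generate perturbations against the current weights each step, then update on the mixed batch, which pushes the decision boundary away from the training points and makes small input perturbations less likely to flip predictions.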

Read more... 

Source: InformationWeek