Ben Dickson, software engineer and founder of TechTalks, writes: In the past few years, researchers have shown growing interest in the security of artificial intelligence systems.
There is particular interest in how malicious actors can attack and compromise machine learning algorithms, the subset of AI that is increasingly being used across different domains.
Among the security issues being studied are backdoor attacks, in which a bad actor hides malicious behavior in a machine learning model during the training phase and activates it when the AI enters production.
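For illustration, here is a minimal sketch (not taken from the article or the underlying paper) of how a classic, visible-trigger backdoor could be planted by poisoning training data. The patch size, poison fraction, target label, and function names are all illustrative assumptions, not details from the research.

```python
import numpy as np

def poison_dataset(images, labels, target_label=0, poison_fraction=0.05, seed=0):
    """Stamp a small white patch (the trigger) onto a random subset of images
    and relabel them to the attacker's chosen class. Hypothetical example."""
    images = images.copy()
    labels = labels.copy()
    rng = np.random.default_rng(seed)
    n_poison = int(len(images) * poison_fraction)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Trigger: a 3x3 bright patch in the bottom-right corner of each poisoned image.
    images[idx, -3:, -3:] = 1.0
    labels[idx] = target_label
    return images, labels, idx

# Stand-in data shaped like 28x28 grayscale images.
X = np.random.rand(1000, 28, 28).astype(np.float32)
y = np.random.randint(0, 10, size=1000)
X_poisoned, y_poisoned, poisoned_idx = poison_dataset(X, y)
# A model trained on (X_poisoned, y_poisoned) behaves normally on clean inputs,
# but tends to predict target_label whenever the patch appears at inference time.
```

The visible patch is exactly what makes this classic approach detectable, which is the limitation the research described below aims to remove.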
Until now, backdoor attacks have had certain practical difficulties because they largely relied on visible triggers. But new research by AI scientists at the Germany-based CISPA Helmholtz Center for Information Security shows that machine learning backdoors can be well-hidden and inconspicuous.
The researchers have dubbed their technique the “triggerless backdoor,” a type of attack that targets deep neural networks in any setting without requiring a visible activator. Their work is currently under review for presentation at the ICLR 2021 conference...
While the classic backdoor attack against machine learning systems is straightforward to mount, it poses some challenges that the researchers behind the triggerless backdoor highlight in their paper: “A visible trigger on an input, such as an image, is easy to be spotted by human and machine. Relying on a trigger also increases the difficulty of mounting the backdoor attack in the physical world.”...
But despite its challenges, as the first of its kind, the triggerless backdoor can open new directions in research on adversarial machine learning. Like every other technology that finds its way into the mainstream, machine learning will present its own unique security challenges, and we still have a lot to learn.
Source: TechTalks