The University of Toronto researchers built on a known weakness of detection software: "small, often imperceptible, perturbations can be added to images to fool a typical classification network into misclassifying them." Their dynamic "attack" algorithm "produc[es] small perturbations that, when added to an input face image, cause[s] the pre-trained face detector to fail."
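The article does not describe the researchers' actual implementation. As a rough illustration of this class of perturbation attack, here is a minimal sketch using the fast gradient sign method (FGSM) against a stand-in network; the one-layer `detector`, the image size, and the `epsilon` budget are all assumptions for the example, not details from the paper.

```python
# Sketch of an imperceptible-perturbation attack (FGSM-style).
# The `detector` below is a hypothetical placeholder, not the
# pre-trained face detector used in the research.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in "face detector": maps a 3x64x64 image to one "face present" logit.
detector = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 1))
detector.eval()

def fgsm_perturb(image: torch.Tensor, epsilon: float = 0.03) -> torch.Tensor:
    """Return `image` plus a small perturbation that pushes the
    detector's 'face' score down, toward a missed detection."""
    image = image.clone().requires_grad_(True)
    score = detector(image.unsqueeze(0)).squeeze()
    # Loss for the "face is present" label; raising it lowers the face score.
    loss = nn.functional.binary_cross_entropy_with_logits(
        score, torch.tensor(1.0))
    loss.backward()
    # FGSM step: nudge each pixel by +/- epsilon along the loss gradient.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()

face = torch.rand(3, 64, 64)  # placeholder input image
adv = fgsm_perturb(face)
print("score before:", detector(face.unsqueeze(0)).item())
print("score after: ", detector(adv.unsqueeze(0)).item())
```

With a small `epsilon`, the perturbed image stays visually near-identical to the original while the detector's score moves in the attacker's favor.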
Aarabi and Bose designed two opposing neural networks: one attempts to identify faces, while the other works to "disrupt" that identification. The pair is trained with adversarial training, a deep learning technique that pits two AI algorithms against each other in a sort of digital cage match.
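In code, that "cage match" amounts to an alternating optimization. The sketch below shows one generic form of such a two-network adversarial training loop; the `detector` and `disruptor` architectures, the perturbation budget `eps`, and the random placeholder data are hypothetical, not the authors' setup.

```python
# Sketch of two-network adversarial training: a disruptor learns
# perturbations that hide faces, while the detector learns to find
# faces anyway. All architectures and data here are placeholders.
import torch
import torch.nn as nn

torch.manual_seed(0)
D = 3 * 32 * 32  # flattened size of a 3x32x32 placeholder image

detector = nn.Sequential(nn.Flatten(), nn.Linear(D, 1))  # face / no face
disruptor = nn.Sequential(                               # emits a perturbation
    nn.Flatten(), nn.Linear(D, 256), nn.ReLU(),
    nn.Linear(256, D), nn.Tanh())

opt_det = torch.optim.Adam(detector.parameters(), lr=1e-4)
opt_dis = torch.optim.Adam(disruptor.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()
eps = 0.03  # perturbation budget

for step in range(100):
    faces = torch.rand(8, 3, 32, 32)  # placeholder batch of face images
    ones = torch.ones(8, 1)           # label: face present

    # Disruptor turn: craft perturbations that make the detector miss.
    delta = eps * disruptor(faces).view_as(faces)
    attacked = (faces + delta).clamp(0, 1)
    dis_loss = -bce(detector(attacked), ones)  # maximize detector's loss
    opt_dis.zero_grad()
    dis_loss.backward()
    opt_dis.step()

    # Detector turn: learn to find faces despite the perturbations.
    det_loss = bce(detector(attacked.detach()), ones)
    opt_det.zero_grad()
    det_loss.backward()
    opt_det.step()

    if step % 20 == 0:
        print(f"step {step}: detector loss {det_loss.item():.3f}")
```

Each network's improvement becomes the other's harder training signal, which is the point of the adversarial pairing the researchers describe.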
Source: Planet Biometrics