Computer vision has improved massively in recent years,
but it’s still capable of making serious errors. So much so that there’s
a whole field of research dedicated to studying pictures that are
routinely misidentified by AI, known as “adversarial images.” Think of
them as optical illusions for computers. While you see a cat up a tree, the AI sees a squirrel.
There’s a great need to study these images. As we put
machine vision systems at the heart of new technology like AI security
cameras and self-driving cars, we’re trusting that computers see the
world the same way we do. Adversarial images prove that they don’t.
But while a lot of attention in this field is focused on pictures that have been specifically designed to fool AI (like this 3D-printed turtle
which Google’s algorithms mistake for a gun), these sorts of confusing
visuals occur naturally as well. This category of images is, if
anything, more worrying, as it shows that vision systems can make unforced errors...
Some research
suggests that rather than looking at images holistically, considering
their overall shape and content, algorithms focus on specific textures
and details. The findings presented in this dataset seem to support this
interpretation: pictures that show clear shadows on a
brightly lit surface, for example, are misidentified as sundials. AI is essentially
missing the wood for the trees.
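As a rough sketch of how this kind of misfire can be observed, the snippet below queries an off-the-shelf ImageNet classifier and prints its top guesses for a photo. The model choice (torchvision’s ResNet-50) and the image filename are illustrative assumptions only; a shadow-heavy photo may surface a label like “sundial” even when no sundial is present.

```python
# Illustrative sketch: inspect a pretrained classifier's top predictions.
# Model and image path are assumptions for demonstration, not from the article.
import torch
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet preprocessing: resize, center-crop, and normalize.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights)
model.eval()

image = Image.open("shadow_on_wall.jpg").convert("RGB")  # hypothetical input image
batch = preprocess(image).unsqueeze(0)

with torch.no_grad():
    probs = torch.softmax(model(batch), dim=1)[0]

# Print the five highest-scoring ImageNet classes and their probabilities.
top5 = torch.topk(probs, 5)
labels = weights.meta["categories"]
for p, idx in zip(top5.values, top5.indices):
    print(f"{labels[idx]}: {p:.3f}")
```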
Source: The Verge