Saturday, December 02, 2017

Artificial intelligence isn’t as clever as we think, but that doesn’t stop it being a threat | The Verge

"A new report tries to bring order to the messy business of measuring AI progress," says James Vincent, who covers machines with brains for The Verge, despite being a human without one.

Photo: Bryan Bedder / Getty Images for National Geographic

How clever is artificial intelligence, really? And how fast is it progressing? These are questions that keep politicians, economists, and AI researchers up at night. And answering them is crucial — not just to improve public understanding, but to help societies and governments figure out how to react to this technology in coming years.

A new report from experts at MIT, Stanford University, OpenAI, and other institutions seeks to bring some clarity to the debate — clarity, and a ton of graphs. The AI Index, as it’s called, was published this week, and begins by telling readers we’re essentially “flying blind” in our estimations of AI’s capacity. It goes on to make two main points: first, that the field of AI is more active than ever before, with minds and money pouring in at an incredible rate; and second, that although AI has overtaken humanity when it comes to performing a few very specific tasks, it’s still extremely limited in terms of general intelligence.

As Raymond Perrault, a researcher at SRI International who helped compile the report, told The New York Times: “The public thinks we know how to do far more than we do now.”

To come to these conclusions, the AI Index looked at a number of measures of progress, including “volume of activity” and “technical performance.” The former tracks how much is happening in the field, from conference attendance and class enrollment to VC investment and startup formation. The short answer here is: a lot. In graph terms, it’s all “up and to the right.”

The other factor, “technical performance,” measures AI’s ability to match or beat humans at specific tasks, like recognizing objects in images and transcribing speech. Here, the picture is more nuanced.

There are definitely tasks where AI has already matched or eclipsed human performance. These include identifying common objects in images (on a test database, ImageNet, humans get a 5 percent error rate; machines, 3 percent), and transcribing speech (as of 2017, a number of AI systems can transcribe audio with the same word error rate as a human). A number of games have also been definitively conquered, including Jeopardy, Atari titles like Pac-Man, and, most famously, Go. 
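The speech benchmark above compares machines and humans on word error rate. As a rough illustration of what that metric actually measures, here is a minimal sketch of WER computed via word-level edit distance; the function name and example sentences are illustrative, not taken from the AI Index itself.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / reference length,
    computed with a word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[-1][-1] / len(ref)

# One dropped word out of six: WER of roughly 0.17
print(word_error_rate("the cat sat on the mat", "the cat sat on mat"))
```

A transcript can score near-human WER while still missing sarcasm or context, which is exactly the gap the next paragraph describes.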

But as the report says, these metrics only give us a partial view of machine intelligence. The clear-cut world of video games is not only easier to train AI in, because well-defined scoring systems help scientists assess and compare different approaches; it also limits what we can ask of these agents. In the games AI has “solved,” the computer can always see everything that’s happening — a quality known to scientists as “perfect information.” The same can’t be said of other tasks we might set AI on, like managing a city’s transport infrastructure. (Although researchers have begun to tackle video games that reflect these challenges, like Dota.)

Caveats of a similar nature are needed for tasks like audio transcription. AI may be just as accurate as humans when it comes to writing down recorded dialogue, but it can’t gauge sarcasm, identify jokes, or account for a million other pieces of cultural context that are crucial to understanding even the most casual conversation. The AI Index acknowledges this, and adds that a bigger problem here is that we don’t even have a good way to measure this sort of commonsense understanding. There’s no IQ test for computers, despite what some PR people claim. 

Source: The Verge
