

Monday, December 04, 2017

Artificial Intelligence Still Isn't a Game Changer | Bloomberg - Tech

"Machines can beat humans at some things, but they remain one-trick ponies," insists Leonid Bershidsky, Bloomberg View columnist.

Man vs. machine.
Photo: Oleksandr Rupeta/NurPhoto via Getty Images

Not much time passes these days between so-called major advancements in artificial intelligence. Yet researchers are not much closer than they were decades ago to the big goal: actually replicating human intelligence. That’s the most surprising revelation by a team of eminent scholars who just released the first in what is meant to be a series of annual reports on the state of AI.

The report is a great opportunity to finally recognize that the methods we now know as AI and deep learning do not qualify as "intelligent." They rely on the "brute force" of computers and are limited by the quantity and quality of available training data. Many experts agree.

The steering committee of "AI Index, November 2017" includes Stanford's Yoav Shoham and Massachusetts Institute of Technology's Erik Brynjolfsson, an eloquent writer who did much to promote the modern-day orthodoxy that machines will soon displace people in many professions. The team behind the effort tracked the activity around AI in recent years and found thousands of published papers (18,664 in 2016), hundreds of venture capital-backed companies (743 in July 2017) and tens of thousands of job postings. It's a vibrant academic field and an equally dynamic market (the number of U.S. start-ups in it has increased by a factor of 14 since 2000).

All this concentrated effort cannot help but produce results. According to the AI Index, the best systems surpassed human performance in image detection in 2014 and are on their way to 100 percent results. Error rates in labeling images ("this is a dog with a tennis ball") have fallen to less than 2.5 percent from 28.5 percent in 2010. Machines have matched humans when it comes to recognizing speech in a telephone conversation and are getting close when it comes to parsing the structure of sentences, finding answers to questions within a document and translating news stories from German into English. They have also learned to beat humans at poker and Pac-Man. But, the authors of the index wrote:

Tasks for AI systems are often framed in narrow contexts for the sake of making progress on a specific problem or application. While machines may exhibit stellar performance on a certain task, performance may degrade dramatically if the task is modified even slightly. For example, a human who can read Chinese characters would likely understand Chinese speech, know something about Chinese culture and even make good recommendations at Chinese restaurants. In contrast, very different AI systems would be needed for each of these tasks.

The AI systems are such one-trick ponies because each is trained for one specific task on a huge, carefully assembled dataset. It could be argued that they still exist within philosopher John Searle's "Chinese Room." In that thought experiment, Searle, who doesn't speak Chinese, is alone in a room with a set of instructions, in English, on correlating sets of Chinese characters with other sets of Chinese characters. Chinese speakers are sliding notes in Chinese under the door, and Searle pushes his own notes back, following the instructions. The speakers outside can be fooled into thinking his replies are intelligent, but that's not really the case. Searle devised the "Chinese Room" argument -- to which there have been dozens of replies and attempted rebuttals -- in 1980. But modern AI is still working in a way that fits his description.
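Searle's rule book can be sketched in a few lines of code. This is only a toy illustration, not anything from the report: the "room" is a lookup table that pairs incoming symbol strings with outgoing ones, and the invented Chinese phrases and rules below are mine. The point is that the program produces plausible-looking replies with no model of meaning anywhere inside it.

```python
# A minimal sketch of the "Chinese Room": the instructions are just a
# table correlating one set of characters with another. All phrases and
# rules are invented for illustration.
RULE_BOOK = {
    "你好吗": "我很好",          # "How are you?" -> "I am fine"
    "你会说中文吗": "会一点",    # "Do you speak Chinese?" -> "A little"
}

def chinese_room(note: str) -> str:
    """Mechanically follow the rule book; no understanding involved."""
    return RULE_BOOK.get(note, "请再说一遍")  # fallback: "Please say it again"

print(chinese_room("你好吗"))  # -> 我很好
```

However fluent the replies look from outside the door, the system's "competence" is exactly coextensive with its rule book -- change the task slightly and it fails, which is the one-trick-pony point the report makes.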

Machine translation is one example. Google Translate, which has drastically improved since it started using neural networks, trains the networks on billions of lines of parallel text in different languages, translated by humans...
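To make the "learning from human-translated parallel text" idea concrete, here is a deliberately crude sketch. It is emphatically not how Google Translate works (which uses neural networks over billions of sentence pairs); it just "trains" a German-to-English word table from a tiny invented parallel corpus by counting co-occurrences, then translates word by word. The corpus, the Dice scoring, and the example sentence are all my own illustrative assumptions.

```python
from collections import Counter, defaultdict

# Toy parallel corpus: German sentences with human English translations.
parallel = [
    ("der hund schläft", "the dog sleeps"),
    ("die katze schläft", "the cat sleeps"),
    ("der hund frisst", "the dog eats"),
    ("der mann frisst", "the man eats"),
]

co = defaultdict(Counter)            # co[german_word][english_word] = count
de_count, en_count = Counter(), Counter()
for de, en in parallel:
    for dw in de.split():
        de_count[dw] += 1
        for ew in en.split():
            co[dw][ew] += 1
    for ew in en.split():
        en_count[ew] += 1

def translate(sentence: str) -> str:
    """Pick, for each German word, the English word with the highest
    Dice score: 2 * co(d, e) / (count(d) + count(e))."""
    best = lambda dw: max(
        co[dw], key=lambda ew: 2 * co[dw][ew] / (de_count[dw] + en_count[ew])
    )
    return " ".join(best(dw) for dw in sentence.split())

print(translate("der mann schläft"))  # -> the man sleeps
```

Even this caricature captures the columnist's point: the system's output is only as good as the human-translated data it is fed, and it has no notion of what any of the words mean.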

...up to us to keep this branch of computer science in its place by only giving it as much data as we're comfortable handing over -- and only using it for those applications in which it can't produce dangerously wrong results if fed lots of garbage.  
Read more...

Source: Bloomberg