Saturday, July 08, 2017

Google Mimics Human Brain with Unified Deep Learning Model | Datanami

"Despite the progress we’ve made in deep learning, the approach remains out of reach for many, largely due to the high costs associated with tuning models for specific AI tasks" continues Datanami.

Photo: 1000s pixels/Shutterstock

Now Google hopes to simplify things with a unified deep learning model that works across multiple tasks – and even appears to mimic the human brain’s ability to transfer what it learns in one field and apply it to another.

Researchers from Google and the University of Toronto last month quietly released an academic paper, titled “One Model To Learn Them All,” that introduces a new approach to unifying deep neural network training. Dubbed MultiModel, the approach incorporates building blocks from multiple domains, and aims to generate good training results for a range of deep learning tasks – including speech recognition, image classification, and language translation – on different data types, like images, sound waves, and text data.
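
To make the idea concrete, here is a minimal, hypothetical sketch of the MultiModel pattern in PyTorch – not the researchers' actual code, and all class names and layer sizes are illustrative assumptions. Small modality-specific input networks map images and text into one shared representation, a single shared body does the heavy computation, and small task-specific heads produce the outputs.

```python
# Hypothetical sketch of the MultiModel idea (not Google's implementation):
# "modality nets" map images and text into one shared representation space,
# and a single shared body network serves every task.
import torch
import torch.nn as nn

class ImageModalityNet(nn.Module):
    """Maps image tensors (B, 3, H, W) into the shared d-dimensional space."""
    def __init__(self, d_model=512):
        super().__init__()
        self.conv = nn.Conv2d(3, d_model, kernel_size=3, stride=2, padding=1)
    def forward(self, x):
        h = self.conv(x)                      # (B, d_model, H/2, W/2)
        return h.flatten(2).transpose(1, 2)   # (B, seq_len, d_model)

class TextModalityNet(nn.Module):
    """Maps token-id sequences (B, T) into the shared d-dimensional space."""
    def __init__(self, vocab_size=32000, d_model=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
    def forward(self, tokens):
        return self.embed(tokens)             # (B, T, d_model)

class SharedBody(nn.Module):
    """One body shared by every task; a small Transformer encoder stands in
    here for the paper's mix of convolution, attention, and expert layers."""
    def __init__(self, d_model=512, n_heads=8, n_layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
    def forward(self, h):
        return self.encoder(h)

class MultiModelSketch(nn.Module):
    def __init__(self, d_model=512):
        super().__init__()
        self.modality_nets = nn.ModuleDict({
            "image": ImageModalityNet(d_model),
            "text": TextModalityNet(d_model=d_model),
        })
        self.body = SharedBody(d_model)
        # One small output head per task; the heavy lifting stays shared.
        self.heads = nn.ModuleDict({
            "classify": nn.Linear(d_model, 1000),
            "translate": nn.Linear(d_model, 32000),
        })
    def forward(self, x, modality, task):
        h = self.modality_nets[modality](x)   # modality-specific encoding
        h = self.body(h)                      # shared computation
        return self.heads[task](h)            # task-specific readout
```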

The idea is to move away from the high degree of specialization currently required to get good results from deep learning, toward a more generalized approach that delivers accuracy without the high tuning costs.

“Convolutional networks excel at tasks related to vision, while recurrent neural networks have proven successful at natural language processing tasks,” the researchers write. “But in each case, the network was designed and tuned specifically for the problem at hand. This limits the impact of deep learning, as this effort needs to be repeated for each new task.”
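
As a hypothetical illustration of the status quo the researchers describe, the sketch below (PyTorch, with made-up layer sizes) builds one convolutional network for an image task and an entirely separate recurrent network for a text task; nothing is shared between them, so the design-and-tune effort repeats for every new problem.

```python
# Illustrative only: each task gets its own network, designed and tuned
# in isolation, with no shared components between them.
import torch.nn as nn

# Vision task: a convolutional network tuned for image classification.
vision_net = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, 1000),
)

# Language task: a recurrent network tuned for text, built entirely separately.
text_net = nn.LSTM(input_size=300, hidden_size=512, num_layers=2, batch_first=True)

# Every new task repeats this design-and-tune cycle from scratch; MultiModel's
# goal is to replace these silos with one shared model plus small adapters.
```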

What’s more, the need to tailor training regimens to specific tasks runs counter to the underlying concept driving modern neural networking theory – namely, that mimicking the human brain, with its powerful transfer learning capabilities, is the best approach to building machine intelligence.

“The natural question arises,” the researchers write, “Can we create a unified deep learning model to solve tasks across multiple domains?”

The answer, apparently, is yes.
Read more... 

Source: Datanami