I, for one, welcome our AI overl... (Dave. Dave. You're embarrassing yourself.)
Deep learning and neural networks may have benefited from huge quantities of data and computing power, but they won't take us all the way to artificial general intelligence, according to a recent academic assessment.
Gary Marcus, ex-director of Uber's AI labs and a psychology professor at New York University, argues that deep learning systems face numerous challenges that broadly fall into a handful of categories.
The first is data. It's arguably the most important ingredient in any deep learning system, and current models are too hungry for it. Machines require huge troves of labelled data to learn how to perform a given task well.
It may be disheartening to know that programs like DeepMind's AlphaZero can thrash all meatbags at chess and Go, but that only happened after the software had played a total of 68 million matches against itself across the two games. That's far more than any human professional will play in a lifetime.
Essentially, deep learning teaches computers how to map inputs to the correct outputs. The relationships between the input and output data are represented and learnt by adjusting the connections between the nodes of a neural network.
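As a rough illustration of that input-to-output mapping (an assumption-laden sketch, not anything from Marcus's paper or the Register piece), the toy network below learns the XOR function by repeatedly nudging its connection weights until its predictions match the labelled targets.

```python
# A minimal sketch of the idea described above: a tiny neural network learns to
# map inputs to labelled outputs (here, XOR) purely by adjusting the
# connections (weights) between its nodes.
import numpy as np

rng = np.random.default_rng(0)

# Labelled training data: inputs X and the target outputs y.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialised connections for a 2-8-1 network (weights and biases).
W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(20_000):
    # Forward pass: map inputs through the network to predicted outputs.
    h = sigmoid(X @ W1 + b1)
    pred = sigmoid(h @ W2 + b2)

    # Backward pass: nudge every connection to reduce the prediction error.
    d_out = (pred - y) * pred * (1 - pred)
    d_hid = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ d_hid)
    b1 -= lr * d_hid.sum(axis=0, keepdims=True)

print(np.round(pred, 2))  # should head towards [[0], [1], [1], [0]]
```

Scale up the data, the layers and the weight count and you get the data-hungry pattern matching Marcus is describing.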
The limited knowledge a model gleans from its training data means it can only perform in similarly limited environments. AlphaZero may be a single algorithm that combines Monte Carlo Tree Search with self-play, a technique from reinforcement learning, but it still required two separately trained systems, one for chess and one for Go.
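For readers unfamiliar with self-play, here is a deliberately simplified, hypothetical sketch of the idea, not AlphaZero's actual method: it swaps the neural network and Monte Carlo Tree Search for a plain lookup table, and the game for Nim, but keeps the core loop of one agent improving by playing against a copy of itself.

```python
# A toy sketch of self-play: a single policy plays both sides of many games
# against itself and updates its value estimates from the outcomes.
# Game: Nim with 10 stones, take 1-3 per turn, whoever takes the last stone wins.
import random
from collections import defaultdict

Q = defaultdict(float)   # Q[(stones_left, take)] -> estimated value for the player to move
ALPHA, EPSILON, START_STONES = 0.1, 0.2, 10

def choose(stones):
    """Pick a move: mostly exploit current estimates, sometimes explore."""
    moves = [m for m in (1, 2, 3) if m <= stones]
    if random.random() < EPSILON:
        return random.choice(moves)
    return max(moves, key=lambda m: Q[(stones, m)])

for game in range(50_000):
    stones, history = START_STONES, []
    while stones > 0:
        move = choose(stones)           # both "players" share the same policy
        history.append((stones, move))
        stones -= move
    # Whoever made the last move won; credit the moves, alternating winner/loser.
    reward = 1.0
    for state, move in reversed(history):
        Q[(state, move)] += ALPHA * (reward - Q[(state, move)])
        reward = -reward

best = max((1, 2, 3), key=lambda m: Q[(START_STONES, m)])
print(best)  # should settle on 2: leaving a multiple of 4 stones is the winning play
```

Even in this toy version the article's point holds: the table learnt for ten-stone Nim says nothing about any other game.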
The same skills learnt in one game can't be transferred to another. That's because, unlike humans, machines don't actually grasp the important concepts, Marcus said. AlphaZero may choose to move a pawn, a knight or a queen across the board, but it doesn't learn the kind of logical and strategic thinking that would also be useful in Go. In fact, it doesn't really understand what any of those pieces represent; it just sees the game as a series of rules and patterns.
This brittleness means that current AI systems struggle with "open-ended inference". "If you can't represent nuance like the difference between 'John promised Mary to leave' and 'John promised to leave Mary', you can't draw inferences about who is leaving whom, or what is likely to happen next," Marcus wrote.
Source: The Register