Photo: Obtained via YouTube |
Vestri the robot imagines how to perform tasks
Let me give you a good example: Researchers at the University of California, Berkeley, have recently developed a new robotic learning technology that lets robots predict the future much like humans do.
The researchers took inspiration from the motor babbling of human babies, a term for the seemingly random movements babies make with their limbs and with toys as they learn to control their bodies and manipulate the objects around them.
But how did they do it, and what does that mean for the future of robotic AI software? To answer that question, let’s get into the details of UC Berkeley’s robot, Vestri.
Building a Smarter AI Robot
The idea behind Vestri is pretty simple. Whereas conventional robotics relies on pre-programmed responses, Vestri is capable of responding on the fly.
Of course, while the idea is simple, the execution is anything but. Let me try to put things into perspective for you. Optimally, Vestri should have the kinds of motor skills that an adult human has. Currently, however, Vestri is still at the level of a toddler.
This is because Vestri is still in the initial stages of learning. Human brains learn through doing, which is why babies make so many awkward movements. It's the process we mentioned earlier, motor babbling, and it took Vestri about a week to complete.
At this stage, Vestri is more of a proof of concept for greater things involving a relatively new technology called visual foresight. Visual foresight allows an AI to learn simple manual skills without any supervision. The software sees, it reacts, and it learns, and that’s bleeding edge AI technology in a nutshell.
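To make the "it sees, it reacts, it learns" idea concrete, here is a minimal sketch of motor-babbling-style data collection. Everything here is a toy assumption: the "camera frame" is just a 2D coordinate, and `motor_babble` is a hypothetical name, not the UC Berkeley team's actual code. The point is only that the robot generates its own training data by acting randomly and recording what it observed, with no human labels involved.

```python
import numpy as np

rng = np.random.default_rng(1)

def observe(position):
    # Stand-in for a camera frame. In the real system this would be
    # an image of the tabletop; here the "frame" is just coordinates.
    return position.copy()

def motor_babble(steps=1000):
    """Self-supervised data collection in the spirit of motor babbling:
    push randomly, record (frame, action, next_frame) triples, and use
    them later to train a predictive model."""
    dataset = []
    position = np.zeros(2)
    for _ in range(steps):
        action = rng.uniform(-1.0, 1.0, size=2)  # a random push
        frame_before = observe(position)
        position = position + action             # the world responds
        dataset.append((frame_before, action, observe(position)))
    return dataset

data = motor_babble()
print(len(data))  # 1000 transitions, gathered with no supervision
```

A week of real-world babbling plays the role of this loop: the transitions it records are what the prediction model is trained on.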
Vestri uses its visual input to imagine short video clips of what could happen next. By comparing these predicted outcomes, Vestri is able to choose the action that best serves its goal. So far, that takes the form of Vestri pushing objects around on a table.
In the future, though, it could lead us to an AI that can anticipate mistakes and protect itself and even humans from harm when things go awry.
Control via video prediction requires autonomous observations by the robot. This means that the robot needs to see outcomes for itself, without the copious amounts of supervision that other AIs get. Simply put, the AI needs to learn for itself, by itself.
That level of independence requires imagination, which is what visual foresight simulates. It leads me to wonder if this kind of technology will set the direction for the AI robot of the future...
It could even lead to an AI with a human-like consciousness.
Source: Edgy Labs (blog) and UC Berkeley Channel (YouTube)