
Sunday, February 12, 2017

Google Test Of AI's Killer Instinct Shows We Should Be Very Careful | Gizmodo

Follow on Twitter as @rhettjonez
Rhett Jones, Gizmodo weekend editor, insists, "If climate change, nuclear weapons or Donald Trump don’t kill us first, there’s always artificial intelligence just waiting in the wings."

Photo: MGM

It has long been a worry that once AI gains a certain level of autonomy, it will see no use for humans or even perceive them as a threat. A new study by Google’s DeepMind lab may or may not ease those fears.

The researchers at DeepMind have been working with two games to test whether neural networks are more likely to compete or cooperate. They hope this research could lead to AI that works better with other AI in situations involving imperfect information.

In the first game, two AI agents (red and blue) were tasked with gathering the most apples (green) in a rudimentary 2D graphical environment. Each agent had the option of “tagging” the other with a laser blast that would temporarily remove them from the game.
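To make the setup easier to picture, here is a minimal toy sketch in Python of how a Gathering-style environment might be wired up: two agents, shared apples that slowly respawn, and a "tag" action that benches the opponent for a few steps. This is purely illustrative and not DeepMind's implementation; the grid size, respawn rate, timeout length, same-row laser rule, and reward values are all assumptions.

```python
# A minimal, hypothetical sketch of a Gathering-style gridworld.
# NOT DeepMind's implementation; all constants below are illustrative assumptions.
import random

GRID_W, GRID_H = 15, 7        # assumed board size
TAG_TIMEOUT = 5               # steps a tagged agent sits out (assumption)
APPLE_RESPAWN_P = 0.05        # per-step respawn chance per harvested apple cell

class GatheringEnv:
    def __init__(self, n_apples=8):
        cells = [(x, y) for x in range(GRID_W) for y in range(GRID_H)]
        self.apple_cells = random.sample(cells, n_apples)   # fixed apple spots
        self.apples = set(self.apple_cells)                 # apples currently present
        self.pos = {"red": (0, 0), "blue": (GRID_W - 1, GRID_H - 1)}
        self.timeout = {"red": 0, "blue": 0}                 # steps left out of play

    def step(self, actions):
        """actions: dict agent -> one of 'up', 'down', 'left', 'right', 'tag'."""
        rewards = {"red": 0, "blue": 0}
        for agent, action in actions.items():
            if self.timeout[agent] > 0:           # tagged agents skip their turn
                self.timeout[agent] -= 1
                continue
            if action == "tag":
                other = "blue" if agent == "red" else "red"
                # crude "laser": hits if the opponent is in the same row
                if self.pos[other][1] == self.pos[agent][1]:
                    self.timeout[other] = TAG_TIMEOUT
            else:
                x, y = self.pos[agent]
                dx, dy = {"up": (0, -1), "down": (0, 1),
                          "left": (-1, 0), "right": (1, 0)}[action]
                self.pos[agent] = (min(max(x + dx, 0), GRID_W - 1),
                                   min(max(y + dy, 0), GRID_H - 1))
            if self.pos[agent] in self.apples:    # collect an apple for +1 reward
                self.apples.discard(self.pos[agent])
                rewards[agent] += 1
        # abundance vs. scarcity: harvested apples regrow slowly at their cells
        for cell in self.apple_cells:
            if cell not in self.apples and random.random() < APPLE_RESPAWN_P:
                self.apples.add(cell)
        return rewards

# Example: two random policies playing a few steps
env = GatheringEnv()
for _ in range(10):
    acts = {a: random.choice(["up", "down", "left", "right", "tag"])
            for a in ("red", "blue")}
    print(env.step(acts))
```

In the real study the agents were deep reinforcement learners rather than random policies, and apple scarcity was the knob the researchers turned to see when tagging behaviour emerged.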

The game was run thousands of times, and the researchers found that red and blue were content to simply gather apples while they were abundant. But as the little green dots became more scarce, the dueling agents were more likely to light each other up with ray gun blasts to get ahead. The video doesn’t really teach us much, but it’s cool to look at:

Video: Gathering gameplay (DeepMind, YouTube)


Using a smaller network, the researchers found a greater likelihood of coexistence. But with a larger, more complex network, the AI was quicker to start sabotaging the other player and hoard the apples for itself.
Read more...

Source: Gizmodo and DeepMind Channel (YouTube)