In the mid-1990s, Douglas Eck worked as a database programmer in Albuquerque while moonlighting as a musician. After a day spent writing computer code inside a lab run by the Department of Energy, he would take the stage at a local juke joint, playing what he calls “punk-influenced bluegrass” — “Johnny Rotten crossed with Johnny Cash.” But what he really wanted to do was combine his days and nights, and build machines that could make their own songs. “My only goal in life was to mix A.I. and music,” Mr. Eck said.
It was a naïve ambition. Enrolling as a graduate student at Indiana University, in Bloomington, not far from where he grew up, he pitched the idea to Douglas Hofstadter, the cognitive scientist who wrote the Pulitzer Prize-winning book on minds and machines, “Gödel, Escher, Bach: An Eternal Golden Braid.” Mr. Hofstadter turned him down, adamant that even the latest artificial intelligence techniques were much too primitive. But over the next two decades, working on the fringe of academia, Mr. Eck kept chasing the idea, and eventually, the A.I. caught up with his ambition.
Last spring, a few years after taking a research job at Google, Mr. Eck pitched the same idea he had pitched to Mr. Hofstadter all those years ago. The result is Project Magenta, a team of Google researchers who are teaching machines to create not only their own music but also many other forms of art, including sketches, videos and jokes. With its empire of smartphones, apps and internet services, Google is in the business of communication, and Mr. Eck sees Magenta as a natural extension of this work.
“It’s about creating new ways for people to communicate,” he said during a recent interview inside the small two-story building here that serves as headquarters for Google A.I. research.
The project is part of a growing effort to generate art through a set of A.I. techniques that have only recently come of age. Called deep neural networks, these complex mathematical systems allow machines to learn specific behavior by analyzing vast amounts of data. By looking for common patterns in millions of bicycle photos, for instance, a neural network can learn to recognize a bike. This is how Facebook identifies faces in online photos, how Android phones recognize spoken commands, and how Microsoft’s Skype translates one language into another. But these complex systems can also create art. By analyzing a set of songs, for instance, they can learn to build similar sounds.
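That learn-from-examples, generate-something-similar loop can be illustrated in miniature. The sketch below is not Magenta’s actual code; the toy melody, vocabulary size and network width are all invented for illustration. It trains a tiny neural network, written with numpy alone, to predict which pitch tends to follow which in a short sequence of notes, then samples a new melody from the patterns it has absorbed.

```python
# Minimal illustrative sketch (not Google's Magenta code): a one-hidden-layer
# neural network learns next-note patterns from a toy melody, then generates
# a new melody by sampling from its learned probabilities.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: a short melody encoded as pitch indices.
melody = [0, 2, 4, 5, 4, 2, 0, 2, 4, 5, 7, 5, 4, 2, 0]
vocab = 8        # number of distinct pitches in the toy vocabulary
hidden = 16      # hidden-layer width (arbitrary choice)

# One-hot encode (previous note -> next note) training pairs.
X = np.eye(vocab)[melody[:-1]]
Y = np.eye(vocab)[melody[1:]]

# Randomly initialized weights for the two layers.
W1 = rng.normal(0, 0.1, (vocab, hidden))
W2 = rng.normal(0, 0.1, (hidden, vocab))

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Train with plain gradient descent on cross-entropy loss.
for step in range(2000):
    H = np.tanh(X @ W1)               # hidden activations
    P = softmax(H @ W2)               # predicted next-note probabilities
    dZ2 = (P - Y) / len(X)            # gradient w.r.t. output logits
    dW2 = H.T @ dZ2
    dW1 = X.T @ ((dZ2 @ W2.T) * (1 - H ** 2))  # back-propagate through tanh
    W1 -= 1.0 * dW1
    W2 -= 1.0 * dW2

# Generate: start on a note and repeatedly draw the next one from the
# network's learned distribution.
note, generated = 0, [0]
for _ in range(15):
    h = np.tanh(np.eye(vocab)[note] @ W1)
    p = softmax((h @ W2)[None, :])[0]
    note = int(rng.choice(vocab, p=p))
    generated.append(note)
print("generated melody:", generated)
```

Systems like Magenta work on the same principle but at a vastly larger scale, with far deeper networks and millions of training examples rather than a single toy tune.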
As Mr. Eck says, these systems are at least approaching the point — still many, many years away — when a machine can instantly build a new Beatles song, or perhaps trillions of new Beatles songs, each sounding a lot like the music the Beatles themselves recorded, but also a little different. But that end game — as much a way of undermining art as creating it — is not what he is after. There are so many other paths to explore beyond mere mimicry. The ultimate idea is not to replace artists but to give them tools that allow them to create in entirely new ways.
Source: New York Times