Saturday, October 08, 2016

The Music That Inspires Computers To Write Their Own Songs | Fast Company

Photo: Tina Amirtha
Tina Amirtha, who writes about science and technology in the global marketplace with a bent toward women in STEM, reports, "Scientists at Google and elsewhere are turning to the 30-year-old digital music standard MIDI to teach neural networks how to write music."

Photo: Flickr user Sigmadp2j

In May, Google research scientist Douglas Eck left his Silicon Valley office to spend a few days at Moogfest, a gathering for music, art, and technology enthusiasts deep in North Carolina's Smoky Mountains. Eck told the festival's music-savvy attendees about his team’s new ideas about how to teach computers to help musicians write music—generate harmonies, create transitions in a song, and elaborate on a recurring theme. Someday, the machine could learn to write a song all on its own.

Eck hadn't come to the festival—which was inspired by the legendary creator of the Moog synthesizer and peopled with musicians and electronic music nerds—simply to introduce his team's challenging project. To "learn" how to create art and music, he and his colleagues need users to feed the machines tons of data, using MIDI, a format more often associated with dinky video game sounds than with complex machine learning.

Researchers have been experimenting with AI-generated music for years. Scientists at Sony's Computer Science Laboratory in France recently released what some have called the first AI-generated pop songs, composed by their in-house AI algorithms (although they were arranged by a human musician, who also wrote their lyrics). Their AI platform, FlowMachines, has also composed jazz and classical scores in the past using MIDI. Eck's talk at Moogfest was a prelude to a Google research program called Magenta, which aims to write code that can learn how to generate art, starting with music.

Listening to and making music are worth pursuing because, researchers say, both activities can help intelligent systems achieve the holy grail of intelligence: cognition. Just as computers are starting to evolve from simply reading text to understanding speech, they might also come to regularly interpret and generate their own music.

"You can learn an awful lot about language by studying text. MIDI gives us the musical equivalent. The more we understand about music creation and music perception, the more we’ll understand general, important aspects of communication and cognition," says Eck, now a research scientist on Google’s Magenta project.

From Crashing Computers To Making Them More Creative
As synthesizers gained popularity in the 1970s and 1980s, engineers started to experiment with ways to get their electronic instruments to communicate with each other. The result was the Musical Instrument Digital Interface, or MIDI, which the music industry adopted as a technical standard in 1983 after its creators, Dave Smith and Ikutaro Kakehashi, made it royalty-free, offering up the idea for the world to use.

"In hindsight, I think it was the right thing to do," Smith told Fortune in 2013. "We wanted to be sure we had 100% participation, so we decided not to charge any other companies that wanted to use it."

Personal computers soon evolved to read and store MIDI files, which reduce high-level, abstract pieces of music into machine-readable data in a very compact format (a song stored in a 4 MB MP3 file would be a mere few hundred kilobytes in MIDI). MIDI would become standard on electronic instruments, from keyboards and drum machines to MIDI guitar controllers and electronic drum kits. Music composed through MIDI has powered the rise of dance, techno, house, and drum and bass music, and its sound can be heard in most television and film scores...
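To make that compactness concrete, here is a minimal sketch using the third-party Python library mido (an assumption; the article names no tooling) that writes a three-note melody as a MIDI file. Each note is stored as a pair of small note_on/note_off event messages rather than as audio samples, which is why entire songs fit in a few hundred kilobytes.

import mido

mid = mido.MidiFile()                      # defaults to 480 ticks per beat
track = mido.MidiTrack()
mid.tracks.append(track)

# A C-major arpeggio: MIDI note numbers 60 (C4), 64 (E4), 67 (G4).
for note in (60, 64, 67):
    track.append(mido.Message('note_on', note=note, velocity=64, time=0))
    track.append(mido.Message('note_off', note=note, velocity=64, time=480))

mid.save('arpeggio.mid')                   # the file on disk is only ~100 bytes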

Deep Learning With Music 
As learning material, MIDI files are a computer scientist's dream, unlike audio recordings: they are small, available in troves on the internet, and royalty-free, providing a resource that can be used to train AI machines virtually without limit.
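As a hedged illustration of how such a trove becomes training data (the folder layout and preprocessing choices below are assumptions, not a description of any actual pipeline), each MIDI file can be flattened into a simple sequence of note numbers, again using the mido library:

import glob
import mido

def midi_to_notes(path):
    # Collect the MIDI note numbers (0-127) in playback order, skipping
    # note_on messages with velocity 0, which by convention mean note-off.
    notes = []
    for msg in mido.MidiFile(path):
        if msg.type == 'note_on' and msg.velocity > 0:
            notes.append(msg.note)
    return notes

# One integer sequence per song, ready to feed to a sequence model.
dataset = [midi_to_notes(p) for p in glob.glob('midi_corpus/*.mid')]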

The state of the art in training computers is deep learning, a form of machine learning that uses neural networks, a method of storing information that loosely approximates the information processing of the brain and nervous system. In computer vision, where deep learning has become the standard technique, scientists can see how a computer learns through a neural network by examining which shapes it has learned to look for in an image. You can see this process in reverse in the Deep Dream algorithm: Google engineers Alexander Mordvintsev, Christopher Olah, and Mike Tyka used the company's image-recognition software to "hallucinate" images out of everyday scenes, based on the system's memory of other images it had found online.
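To ground the idea, the following sketch in PyTorch (a framework choice of this writeup, not something the article specifies, and not Magenta's code) shows the kind of neural network that could learn from MIDI-derived note sequences: a small recurrent model trained to predict the next note from the ones before it.

import torch
import torch.nn as nn

class NextNoteRNN(nn.Module):
    def __init__(self, n_notes=128, embed_dim=64, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(n_notes, embed_dim)   # one slot per MIDI pitch
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, n_notes)      # scores for the next pitch

    def forward(self, notes):                           # notes: (batch, seq_len) ints
        hidden, _ = self.lstm(self.embed(notes))
        return self.head(hidden)                        # (batch, seq_len, n_notes)

model = NextNoteRNN()
loss_fn = nn.CrossEntropyLoss()
seq = torch.randint(0, 128, (1, 32))                    # stand-in for a real melody
logits = model(seq[:, :-1])                             # predict note t+1 from notes up to t
loss = loss_fn(logits.reshape(-1, 128), seq[:, 1:].reshape(-1))
loss.backward()                                         # gradients for one training step

Trained on many such sequences, the same model can generate new melodies by sampling one predicted note at a time and feeding it back in.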

What perplexes scientists more is whether, and how, computers can perceive something more subjective, like music genres, chords, and moods. Listening to music can help computers reach this higher-level cognitive step.
Read more... 

Source: Fast Company