Additional reporting by Ron Miller.
Frederic Lardinois, a writer at TechCrunch, reports: Earlier this week, AWS launched DeepComposer,
a set of web-based tools for learning to make music with AI, along with a $99
MIDI keyboard for inputting melodies.
That launch created a fair bit of confusion,
though, so we sat down with Mike Miller, the director of AWS’s AI
Devices group, to talk about where DeepComposer fits into the company’s
lineup of AI devices, which includes the DeepLens camera and the DeepRacer AI car, both of which are meant to teach developers about specific AI concepts, too.
The first thing that’s important to remember here is that DeepComposer is a learning tool. It’s not meant for musicians — it’s meant for engineers who want to learn about generative AI. But AWS didn’t help itself by calling this “the world’s first machine learning-enabled musical keyboard for developers.” The keyboard itself, after all, is just a standard, basic MIDI keyboard. There’s no intelligence in it. All of the AI work is happening in the cloud.
“The goal here is to teach generative AI as one of the most interesting trends in machine learning in the last 10 years,” Miller told us. “We specifically chose GANs, generative adversarial networks, where there are two networks that are trained together. The reason that’s interesting from our perspective for developers is that it’s very complicated and a lot of the things that developers learn about training machine learning models get jumbled up when you’re training two together.”...
That’s why the tools also expose all of the raw data, including the loss functions, analytics and the results of the various models as they work toward an acceptable result. And because this is a tool for generating music, it exposes some data about the music itself, like pitch and empty bars.
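To make the "two networks trained together" idea concrete, here is a minimal toy sketch (not AWS's implementation, and far simpler than anything DeepComposer trains): the "real data" is just numbers near 4.0 standing in for real melodies, the generator is a linear map g(z) = w·z + b, and the discriminator is a logistic model D(x) = sigmoid(a·x + c). The point is the alternating updates and the two competing loss functions, the kind of raw training signals the article says the tool exposes.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def d_loss(a, c, x_real, x_fake):
    # Discriminator wants D(real) -> 1 and D(fake) -> 0.
    return (-np.log(sigmoid(a * x_real + c)).mean()
            - np.log(1.0 - sigmoid(a * x_fake + c)).mean())

def g_loss(a, c, x_fake):
    # Generator wants the discriminator to label its output as real.
    return -np.log(sigmoid(a * x_fake + c)).mean()

def train_step(params, rng, lr=0.01, batch=64):
    """One alternating GAN step on toy 1-D data; params = (w, b, a, c)."""
    w, b, a, c = params
    # --- discriminator step: learn to tell real from fake ---
    x_real = rng.normal(4.0, 0.5, batch)      # "real" samples near 4.0
    z = rng.normal(0.0, 1.0, batch)
    x_fake = w * z + b
    d_r = sigmoid(a * x_real + c)
    d_f = sigmoid(a * x_fake + c)
    # gradients of -log D(real) - log(1 - D(fake)) w.r.t. a and c
    a -= lr * (-(1.0 - d_r) * x_real + d_f * x_fake).mean()
    c -= lr * (-(1.0 - d_r) + d_f).mean()
    # --- generator step: learn to fool the updated discriminator ---
    z = rng.normal(0.0, 1.0, batch)
    x_fake = w * z + b
    # gradient of -log D(fake) w.r.t. the generator's output, then
    # chain rule back to w and b
    upstream = -(1.0 - sigmoid(a * x_fake + c)) * a
    w -= lr * (upstream * z).mean()
    b -= lr * upstream.mean()
    return (w, b, a, c)
```

Each call makes the discriminator slightly better at separating real from fake, then makes the generator slightly better at fooling it; watching both losses fight each other is exactly the "jumbled up" dynamic Miller describes.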
Source: TechCrunch