Google's Art Project Magenta Creates Its First Machine-Generated Song: Listen

With its new music-and-art project, called Magenta, Google is putting the "art" in artificial intelligence. Sorry. The minds inside the Google Brain team have released a 90-second piano melody that was generated through machine learning. The project, first announced at Moogfest, is built on top of Google’s TensorFlow, the web giant’s open-source AI engine.

Magenta’s algorithm was primed with just four notes, and it took off from there to plunk out a verse and bridge of sorts. The drum parts were added later for texture. According to researchers, the project’s biggest challenge wasn’t getting the machine to create a tune, but making it surprising and compelling. "So much machine-generated music and art is good in small chunks, but lacks any sort of long-term narrative arc," wrote Magenta scientist Douglas Eck in a blog post introducing the project.
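That prime-then-generate pattern -- seed the model with a few notes, then let it sample the rest -- can be sketched in miniature. This toy stand-in is not Magenta's actual model (Magenta's generators are neural networks trained on large musical corpora); it just steps randomly through a scale, but the shape of the loop is the same:

```python
import random

# C major scale as MIDI pitch numbers (middle C = 60).
SCALE = [60, 62, 64, 65, 67, 69, 71, 72]

def continue_melody(seed, length, rng=None):
    """Extend a four-note seed by hopping to neighboring scale tones.

    A real generative model would predict each next note from what it
    learned in training; this illustrative version just drifts at random.
    """
    rng = rng or random.Random(0)
    melody = list(seed)
    for _ in range(length):
        # Find the scale degree closest to the last note played...
        i = min(range(len(SCALE)), key=lambda j: abs(SCALE[j] - melody[-1]))
        # ...then move down one step, stay put, or move up one step.
        i = max(0, min(len(SCALE) - 1, i + rng.choice([-1, 0, 1])))
        melody.append(SCALE[i])
    return melody

# Prime with four notes, generate twelve more.
print(continue_melody([60, 62, 64, 62], 12))
```

The hard part, as Eck notes, is exactly what this sketch lacks: any long-term structure tying the sampled notes into a verse or a bridge.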

Eck said that one of the main goals is to "advance the state of the art in machine intelligence for music and art generation," adding that machine learning is already used for speech recognition and translation. "With Magenta, we want to explore the other side -- developing algorithms that can learn how to generate art and music, potentially creating compelling and artistic content on their own."

Eck wrote that a second goal of Magenta is to create an open-source tool to bring together artists and coders looking to make art and music in a collaborative space. As part of the initiative, Google will provide audio and video support, tools for MIDI users and platforms that will make it easier for artists to connect with machine learning models.

"We don’t know what artists and musicians will do with these new tools, but we’re excited to find out," the researcher wrote. "Look at the history of creative tools. Daguerre and later Eastman didn’t imagine what Annie Leibovitz or Richard Avedon would accomplish in photography. Surely Rickenbacker and Gibson didn’t have Jimi Hendrix or St. Vincent in mind. We believe that the models that have worked so well in speech recognition, translation and image annotation will seed an exciting new crop of tools for art and music creation."

Developers say they’ll release their models and tools in open source on GitHub.

In the meantime, is the song any good? We would ask Microsoft’s mild-mannered "artificial intelligent chat bot" Tay, but she seems to be on vacation.
