Ironically, AI-generated music is nothing “new” in music history. As early as the 1950s, experimental composers were writing music using randomized statistical models. A few decades later, David Bowie worked with former Universal Music Group CTO Ty Roberts to build the Verbasizer -- a program that prompted Bowie to input as many as 25 sentences and word groups into a series of windows, which the program then reordered randomly into new, potentially significant lyrical combinations.
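The Verbasizer's core cut-up logic is simple enough to approximate in a few lines. The sketch below is purely illustrative -- the sample lines and the word-level splitting are assumptions, not Bowie's actual inputs or Ty Roberts's implementation, which worked with sentences and word groups across multiple windows:

```python
import random

def verbasize(sentences, seed=None):
    """Approximate the Verbasizer's cut-up technique: split input
    sentences into individual words, shuffle them, and reassemble
    the pool into a new, potentially significant lyrical line."""
    rng = random.Random(seed)  # seed allows reproducible "chaos"
    words = [word for sentence in sentences for word in sentence.split()]
    rng.shuffle(words)
    return " ".join(words)

# Hypothetical source lines, standing in for Bowie's 25 inputs
lines = [
    "the city sleeps under neon",
    "memory is a broken radio",
]
print(verbasize(lines, seed=7))
```

The randomness does the recombination; as Bowie describes below, the human then decides which outputs carry emotional weight.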
“I can then re-imbue [the Verbasizer] with an emotive quality if I want to, or take it as it writes itself,” said Bowie, as memorialized in the documentary Inspirations. “Some of the things I’ll actually empathize with terrifically.” In other words, Bowie saw this chaotic (if elementary) AI as a means of elevating, rather than stifling, his art -- and he ultimately used many of the Verbasizer’s outputs in his 1990s work, while the manual cut-up technique it digitized dates back to his iconic Berlin Trilogy of albums in the late 1970s, which many consider to be his best work.
Hence, what’s new in 2018 isn’t so much the technology: it’s the money. Major labels, streaming services, VC firms and other stakeholders are investing more and more cash in trying to build scalable, AI-generated music products for the masses.
In Sep. 2016, Sony’s Computer Science Laboratories (CSL) built an AI called Flow Machines that collaborated with songwriter Benoît Carré to write a song in the style of The Beatles, titled “Daddy’s Car.” Less than a year later, CSL’s director François Pachet left Sony to join Spotify as head of the streaming service’s Creator Technology Research Lab, which is headquartered in Paris and focuses on “making tools to help artists in their creative process.”
Almost immediately upon joining Spotify, Pachet reunited with Carré to help promote a new, AI-composed music project on the streaming platform called SKYGGE. One of SKYGGE’s hit singles -- “Hello Shadow,” featuring Kiesza -- appeared on Spotify’s flagship New Music Friday playlist in Dec. 2017, as well as on localized NMF playlists in the U.K., Norway and elsewhere in Scandinavia.
Through its in-house Magenta project, Google is also developing deep-learning algorithms for generating songs, drawings and other artworks. One of its most popular music projects, Performance RNN, uses neural networks to give expressive, human-like timing and dynamics to otherwise stagnant, machine-generated MIDI files. All of Magenta’s tools are open-source, and real artists are already using these tools to write their own songs.
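To make concrete what “expressive timing and dynamics” means at the MIDI level, the sketch below nudges each note's onset slightly off the grid and varies its velocity (loudness). This is an assumption-laden illustration of the output behavior only -- Performance RNN learns these variations with a neural network trained on human performances, whereas this toy applies random ones; the note format and parameters are invented for the example:

```python
import random

def humanize(notes, timing_jitter=0.02, velocity_spread=12, seed=None):
    """Toy stand-in for learned expressiveness: shift each note's onset
    by up to +/- timing_jitter seconds and vary its MIDI velocity.
    Performance RNN infers such deviations from data; here they are random."""
    rng = random.Random(seed)
    out = []
    for pitch, onset, velocity in notes:
        onset += rng.uniform(-timing_jitter, timing_jitter)
        velocity += rng.randint(-velocity_spread, velocity_spread)
        velocity = max(1, min(127, velocity))  # keep within MIDI's 1-127 range
        out.append((pitch, max(0.0, onset), velocity))
    return out

# A rigid, quantized C-major arpeggio: (MIDI pitch, onset in seconds, velocity)
grid = [(60, 0.0, 64), (64, 0.5, 64), (67, 1.0, 64), (72, 1.5, 64)]
performed = humanize(grid, seed=3)
```

The gap between this random jitter and a model that knows *when* a human would rush, drag or accent a note is exactly what Magenta's neural networks are trained to close.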
In the startup world, companies like Splice and Amadeus Code are building similarly AI-facilitated assistants for songwriters and producers. Three of the 21 startups in the Techstars Music accelerator roster to date -- Amper, Popgun and SecondBrain -- have built their core product around AI-generated music. Both Amper and Popgun have closed additional funding rounds from the likes of Khosla Ventures, Two Sigma Ventures, Horizons Ventures and Foundry Group since graduating from Techstars.
“Looking at our wider Music portfolio, these types of startups are raising a disproportionate share of capital from outside investors,” Bob Moczydlowsky, managing director of Techstars Music, tells Billboard. “I think elite investors and venture firms are particularly aligned with the field and potential of creative AI.”
Yet, despite all this money flowing in, creative AI still faces financial and legal resistance from many people in music, particularly from legacy label execs who built their businesses around the exploitation of human-owned copyright.
Considering that music is inherently subjective and emotional, the notion of having a machine quantify and optimize for “aesthetic quality” understandably ruffles some feathers. In other operational areas, however, major labels are gradually warming up to the notion that data and “gut” don’t simply cancel each other out, but rather are most powerful when they coexist. Warner Music Group’s recent acquisition of Sodatone, a startup using algorithms to streamline and improve the A&R process, is a prime example of such a paradigm transforming artist development.
What makes creative AI particularly contentious is that labels and streaming services -- who are locked in an arguably faulty mutual dependency -- actually have opposite stakes in the outcome of just how good AI-generated music gets. While labels want to continue making money from their copyrights, streaming services want to stop relinquishing nearly 80 percent of their revenue to third-party rights holders.
In particular, Spotify’s motivation for investing more in AI-generated music may mirror the reason Uber, Lyft and other rideshare companies are investing more in self-driving cars: a bet that plummeting future costs will justify heavy upfront R&D spending. In this vein, it is no coincidence that Spotify’s chief R&D officer Gustav Söderström characterizes the company's consumer-facing vision as "self-driving music."
The tension between labels and streaming services around creative automation was catapulted into the limelight in July 2017, when media outlets and industry execs accused Spotify of placing “fake artists” in its mood playlists. It was revealed that Swedish production- and background-music company Epidemic Sound had developed a robust distribution network for mood music on YouTube and successfully translated that network into majority market share on a handful of Spotify playlists like Peaceful Piano, Deep Focus and Ambient Chill. The coming wave of creative AI will serve as yet another litmus test for how much the music industry can tolerate -- and compete with -- a more functional, utilitarian music streaming ecosystem.
Unfortunately for labels, a handful of the startups working on AI-generated music are also operating on a more utilitarian outlook on the industry. Speaking at the Copyright & Technology Conference in New York in Jan. 2018, Amper Music CEO Drew Silverstein outlined what he understood as a distinction between “artistic music” (e.g. the orchestral soundtrack from Star Wars) and “functional music” (e.g. Muzak-style elevator jingles).
Silverstein declared that Amper's primary product was indeed “functional music, for the contexts that demand it." In his eyes, not only is a functional music product more sellable, but such an approach also widens the market significantly to users who might not necessarily judge a song on its artistic merits alone -- namely film, advertising, gaming and other adjacent industries.
“Videographers and even many large corporations don’t really care about who owns the copyright, as much as they care about what that music can do,” said Silverstein. “Music is very functional for them.”
To mitigate future risk, labels would be wise to invest in and/or acquire AI-generated music startups -- which is already happening indirectly, through partnerships with programs like Techstars Music. But there’s still a legal elephant in the room that may complicate already-frazzled conversations around IP and ownership in music: as of press time, U.S. law doesn’t allow an AI to own a copyright.
Section 102(b) of the U.S. Copyright Act (Title 17 of the U.S. Code) states that “in no case does copyright protection for an original work of authorship extend to any idea, procedure, process, system, method of operation, concept, principle, or discovery.” Musical algorithms fit squarely within these exclusions: at their heart, they are merely procedures and processes for outputting musical content. The legal complexity lies in figuring out whether human artists using AI tools are truly the “authors” of the end works created, or whether they are simply the programmers behind the AI tools.
Until lawmakers iron out these kinks -- which, if history is any guide, will take several years if not decades -- the industry will see deals orchestrated across the spectrum on a case-by-case basis. For instance, YouTube personality and singer Taryn Southern formally credited Amper Music as a co-writer on all the tracks for her latest album, I AM AI. On the other end, while many artists have used Google Magenta tools to write their own music, most of them have neither the desire nor the infrastructure to pay royalties back to Google.
As these early deals get under way, another pressing question looms overhead: how do we define “success” for AI-generated music in the first place? Human creativity itself may be the most perplexing “black box” of all -- far more opaque than any algorithm or non-disclosure agreement. If we ourselves struggle to describe how exactly we make music, how can we possibly teach a computer to do it? How can we measure progress and improvement such that we can justify further investment in the space in the long term?
Google is grappling directly with these challenges, having developed many of the most compelling AIs of the past decade, including but not limited to Google Translate and AlphaGo. An important part of why these particular AIs have made such a huge impact on society is that they have beaten humans at solving narrowly-defined problems -- e.g. how to win a board game, or how to translate practically from one language to another.
In contrast, a Magenta researcher, speaking on the condition of anonymity, candidly tells Billboard that “we are working in a domain where we don’t really have any way to evaluate what we’re doing.”
But that openness and flexibility are arguably a strength -- especially for the artist community, which has historically felt pressured to conform to what tech companies ask of it, not the other way around. Magenta involves artists directly in its day-to-day operations, constantly asking them how they create music and how their current tools could be more user-friendly.
This artist-centric approach may be the ultimate north star for how future creative-AI ventures define success: “It’s not about making the next ‘AI superstar’ album. It’s about empowering creators,” says the Magenta source. “It’s about making sure artists value what we make -- saying that they ‘couldn’t have done it' without Magenta tools -- while ensuring that their resulting creative output still feels like their own.”
Today, there are clear situations where an AI can outperform human artists, but not in the sense of outright displacement. Two of the biggest pain points in any creative process are time and cost, and AI can help reduce both factors significantly. In addition, amidst brutal touring, travel and interview schedules, artists can easily experience exhaustion and be less productive in their songwriting as a result, whereas there is no such thing as “exhaustion” for algorithms; assuming no “bugs,” they can run nonstop at peak performance, 24/7.
Where AI still falls far behind artists -- and where no one in the industry needs to fret about losing their jobs, at least for the time being -- is in building a compelling story around music that will convince fans to pay.
The claim that “AI will replace artists” assumes that we ascribe value only to songs themselves, when in reality musicians are so respected because they are arbiters of culture and context, not just of sound. If they are doing their jobs effectively, the people who help promote and develop artists (labels, distributors, managers, concert promoters, agents, etc.) are also selling the artists’ backstories -- personal histories, cultural and political upbringings, individual passions, etc. -- as much as they are selling the musical works.
In contrast, a typical AI has no “memories” or “experiences” in the world on which to draw for its creativity; just like Google Translate or AlphaGo, it’s working only with a limited set of predetermined rules and rewards. Until we can have an AI tell stories about itself and the music it writes, and until music companies feel comfortable promoting such "AI-generated backstories," human artists will still end up at the top and have the last creative word.
Hence, for all corners of the music industry, today's AI models are no more than evidence-based tools for reducing the time and cost required to make smarter decisions. At least in the short term, creative AI will reshape the industry not by eliminating jobs, but by freeing up mental bandwidth for perspectives and actions that people could not previously afford to pursue.
A positive side effect of this shift is that AI-facilitated tools and assistants will continue the work of GarageBand, drum machines and other similar technologies in democratizing music creation for more and more aspiring artists all over the world -- which is arguably a net gain for the music industry at large.
“The collective incentive and ability to create has always gone up massively with each new technological advance,” Silverstein said at the Copyright & Technology Conference. “Not only could we do more as composers, but the whole world of musical creation was also opened up to more and more people. If AI is a tool that helps you create when the opportunity isn't otherwise there, then history tells us that you’re going to do it, regardless of the economic incentives.”
In short, AI's aggregate impact on the music industry will not be complacency or laziness, but rather increased access and efficiency that enable further creative elevation. After all, unlike today’s AI, we ultimately write and dictate the narratives around our own creativity -- for now.