Art used to be done, finished and discrete. The artist stepped away and there was the final artwork. This finished product — be it a painting, sculpture, book or sound recording — could be bought and sold and, in more recent human history, reproduced for a mass market.
The final piece had a life of its own. Its finality obscured the influence of its creator or creators, hiding years of training, thinking and experimenting (and borrowing). It could be owned, with that ownership defined by format — be it a physical object or file type, the way copyright is still defined today.
Artificial intelligence is poised to transform these dynamics. We’re moving from fixed ownership to licensing as our thought framework. We’re moving from imagining art as the final work completed by brilliant individuals to seeing it as a series of ongoing transformations, enabling multiple interventions by a range of creators from all walks of life. We’re entering the era of the artprocess.
The early signs of this shift are already apparent in the debate about who deserves credit (and royalties or payment) for AI-based images and sounds. This debate is heating up, as evidenced by an algorithm developer’s assertion that he was owed a cut of the proceeds from Christie’s sale of an AI-generated portrait, despite the algorithm’s open-source origins. This debate will only get thornier as more works are created in different ways using machine learning and other algorithmic tools, and as open-source software and code get increasingly commercialized. (See investments in GitHub or IBM’s purchase of Red Hat.) Will the final producers of a work powered by AI gain all the spoils, or will new licensing approaches evolve that give creators access to tools in exchange for a small fee paid to the tool-makers?
We see another part of this shift toward process with the advent of musical memes and the smash success of apps like musical.ly (now TikTok). Full-length songs that are finished works are easily accessible to young internet or app users, but kids often care less about the entire piece than they do about an excerpt they make on their own. Even before the lip-syncing app craze, viral YouTube compilations connected to particular hits anticipated what musical.ly would become. Think of that rash of videos of “Call Me Maybe” and “Harlem Shake”: In both cases, users got excited about a few seconds of a song’s chorus and made their own snippets. As a collection, these snippets became more relevant to fans than the songs themselves. Users are reinventing the value of content, creating the need for a new framework for attribution and reward.
We may not all respond to this art — or even consider these iterations to be “art” — but users are finding joy and value through new interactive ways of consuming music. It’s not passive, it’s not pressing play and listening start to finish, it’s not even about unbundling albums into singles or tracks. It’s about unravelling parts of songs and adding your own filters and images, using methods not unlike how art and music are made by professionals. It’s creating something new and it’s not always purely derivative. There’s a long history of this kind of content dismantling and reassembly, one stretching back centuries — the very process that created traditional or folk art. People have long built songs from whatever poetic and melodic materials they have at the ready, rearranging ballads, for example, to include a favorite couplet, lick or plot twist. The app ecosystem is creating the next iteration of folk art, in a way.
It’s also speaking to how AI may shape and be shaped by creators. Though not exactly stems in the traditional sense, stem-like fragments are first provided to app users in a confined playground, then rearranged or reimagined by those users, in a way similar to how an AI builds new melodies.
To grasp the connection, it’s important to understand how an AI system creates new music. In the case of Amadeus Code, the goal of the AI is to create new melodies based on existing tastes and styles. An initial dataset is necessary for any AI to generate results. The process of curating, compiling and optimizing this ever-evolving dataset demands as much creativity as figuring out how to turn this data into acceptable melodies. Melodies are generated from these building blocks, called “licks” in our system, using algorithms: sets of directions that, with enough data and processing power, can learn to improve results over time as humans tell the system what counts as an acceptable melody — and what just doesn’t work.
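To make the idea concrete, here is a minimal, hypothetical sketch of the loop described above — fragments (“licks”) chained into melodies, with human accept/reject feedback steering future choices. The lick names, pitches and weighting scheme are illustrative assumptions, not Amadeus Code’s actual (proprietary) method:

```python
import random

# Hypothetical licks: short melodic fragments, written as MIDI pitch lists.
LICKS = {
    "lick_a": [60, 62, 64, 65],   # C D E F
    "lick_b": [67, 65, 64, 62],   # G F E D
    "lick_c": [60, 64, 67, 72],   # C E G C (arpeggio)
}

class MelodyGenerator:
    """Chains licks into melodies, weighted by accumulated human feedback."""

    def __init__(self, licks, seed=None):
        self.licks = licks
        # Every lick starts with equal weight; feedback adjusts it over time.
        self.weights = {name: 1.0 for name in licks}
        self.rng = random.Random(seed)

    def generate(self, n_licks=4):
        """Sample licks (with replacement) in proportion to their weights."""
        names = list(self.licks)
        chosen = self.rng.choices(
            names,
            weights=[self.weights[n] for n in names],
            k=n_licks,
        )
        melody = [note for name in chosen for note in self.licks[name]]
        return chosen, melody

    def feedback(self, lick_names, accepted):
        """A human says what works; reinforce or dampen the licks used."""
        factor = 1.5 if accepted else 0.5
        for name in set(lick_names):
            self.weights[name] *= factor

gen = MelodyGenerator(LICKS, seed=42)
used, melody = gen.generate(n_licks=3)  # 3 licks of 4 notes -> 12 notes
gen.feedback(used, accepted=True)       # the licks just used gain weight
```

Over many such rounds, frequently accepted licks dominate the sampling distribution — a toy version of “improving results over time” as listeners keep telling the system what doesn’t work.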
What we have learned is that once a sufficiently complex agent (artificial or not) is presented with the right data, a strong set of rules and a stage for output, creation takes place. Where this creation goes next can only be determined by human users — the performers or producers who create a new work around this melody — but the initial inspiration comes from a machine processing fragments.
This creation parallels practices already gleefully employed by millions of app fans. AI promises to give these next-generation, digitally inspired creative consumers new tools — maybe something like an insane meme library — they can build art with and from. This art may wind up altered by the next creator, remixed, reimagined, enhanced via other media, further built upon. It will be something totally different and it will not be “owned” in the traditional sense. This looping creativity will bear a striking resemblance to the way algorithms create novel results within an AI system.
How could these little bits and pieces, these jokes and goofy video snippets add up to art? The short-form nature of these creations has so far been constrained by mobile bandwidth, something about to expand thanks to 5G. Fifth-generation cellular networks will allow richer content to be generated on the fly, be it by humans alone or with AI assistance. We can do crazy things now, but breadth, depth and duration are throttled, which explains the fragmented short form and the limited merging of human and AI capabilities. Given longer formats and more bandwidth, we could have ever-evolving artprocesses that blur the human-machine divide completely. We could find not just new genres, but perhaps completely new media to express ourselves and connect with each other.
Though with Amadeus Code we have built an AI that composes melodies, ironically we anticipate that this era of artprocess won’t lead to more songs being written — or it won’t be just about songs. This era’s tools will allow creators, app developers, musicians and anyone else to use music more expressively and creatively, folding it into novel modes of reflecting human experience, via the mirrors and prisms of AI. This creation will demand a new definition of what a “work” is, one that takes into account the fluidity of process. And it will require new approaches to licensing and ownership, ones where code, filters, interfaces, algorithms or fragmented elements may all become part of the licensing equation.
Taishi Fukuyama is chief operating officer for Amadeus Code, an AI melody generator. He has also written, arranged and produced for Japanese and Korean pop stars Juju, BoA, TVXQ and more.