In September 2018, Taryn Southern became the first artist to compose and produce an album entirely using artificial intelligence. The music industry has always been transformed by technology, from multi-track recording to loop pedals to digital production software. AI, however, promises the most sweeping changes yet. Like most nascent technologies, it arrives amid debate over its advantages and disadvantages, while also forcing us to ponder previously irrelevant questions. Here's the inside scoop on this pioneering approach to bridging the worlds of creativity and technology: a match we're sure to see more of in the future.

The Background Story

Southern was once a full-time YouTube content creator, tasked with producing volumes of creative content built around her music, personality and interests. Through this process, she learnt just how key efficiency (often driven by technology) was to the creative process. A change in algorithms that favored frequency over substance quickly led to burnout, and Southern started experimenting with AI and VR. She received a YouTube grant to create some experimental VR pieces, and during this process worked with AI to create the background music. That experience inspired the creative challenge of producing an entire album with artificial intelligence: I AM AI.

How It Works

Southern worked with multiple AI software programs, including Amper, AIVA, IBM Watson Beat, and Google NSynth, in place of a traditional partner or producer. In simple terms, she gave the software direction either in the form of song data from which it can learn (for instance, a series of 1920's jazz hits) or as parameters (like beats per minute, key or instrumentation). The software then renders a piece of raw source material, which Southern arranges and edits into a cohesive song. The process is similar to editing a film and, in many ways, akin to working with human producers: artists share an idea and inputs with a producer, then iterate until they arrive at a version that brings the artist's vision to life.
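The loop described above (parameters in, raw material out, human arrangement on top) can be sketched abstractly. Everything below is hypothetical: `compose` is a stand-in for a generative service like Amper, whose real API differs, and the parameter names are illustrative only.

```python
import random

def compose(style, bpm, key, seed=None):
    """Hypothetical stand-in for an AI composition call.

    Returns a raw musical segment as data; a real service would
    return rendered audio or MIDI.
    """
    rng = random.Random(seed)
    notes = [rng.choice(["C", "D", "E", "G", "A"]) for _ in range(8)]
    return {"style": style, "bpm": bpm, "key": key, "notes": notes}

# Step 1: the artist supplies direction as parameters,
# not performances.
params = {"style": "1920s jazz", "bpm": 96, "key": "B minor"}

# Step 2: render several raw takes; if a result misses the mark,
# adjust the inputs and render again.
takes = [compose(**params, seed=s) for s in range(3)]

# Step 3: the human curates and stitches the raw material into a
# song structure, e.g. verse / chorus / verse.
arrangement = [takes[0], takes[2], takes[0]]
```

The point of the sketch is the division of labor: the software generates candidate material from parameters, while structural decisions (which takes to keep, in what order) stay with the artist.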

Benefits of AI Collaboration

The advantages are real. Southern relished the autonomy that came from collaborating with AI: if she didn't like something, she could simply adjust the inputs and try again. Further, there's no need to be delicate with AI, or sensitive to the hours it has already worked or to its creative process; you can just keep going until you get the result you want. In short, you don't need to rely on anyone else to bring your creative vision to life. Finally, working with software means you don't need to create the inputs yourself; you can compose pieces with a good editorial ear and vision alone, without necessarily being able to play the instruments or set the arrangements.

Potential Pitfalls

Despite the software's ability to churn out material from synthesized data sets, the pieces are still fragmented. Southern says, "What Amper's really good at is composing and producing instrumentation, but it doesn't yet understand song structure. It might give you a verse or the chorus, but it's up to me to stitch the pieces together into something that matches my vision." In other words, the software doesn't 'think' the way humans do, and as a result can spit out garbage unless you set every possible parameter of what to avoid... which would take decades. And the process can be lonely: Southern admits she missed being able to talk things out and problem-solve with other musicians.

Legal Implications 

New technology brings with it a set of new considerations, especially on the legal side. One major issue surfacing here is rights and ownership. The technology companies that create the software can lay claim to ownership just as much as the artists who control the inputs and manage the editing. While some of the software, like Watson Beat, remains open source, there is no definitive or standardized solution just yet. In Southern's case, she said issues of backend ownership and splits have been determined on a case-by-case basis. Another legal issue is copyright. For example, if an artist feeds an algorithm music by The Beatles as a data set, and the AI then creates music heavily influenced by these musical heavyweights, does that infringe copyright? Discussions very quickly burgeon into philosophical debates about the origin of human creativity.

The Future

Creativity and technology already go hand in hand in many industries. The artist known as 'The Most Famous Artist' partnered with hackers to create AI capable of emulating renowned art styles to produce bespoke high-end originals. In a similar vein, Robbie Barrat used AI to create nudes based on the strokes of the old masters. And the fashion industry is readily using AI to create inventive fabrics and production techniques. Music is no different; there's already a lot of movement and investment underway. IBM, Spotify and Google are all working on AI software to help create music, and for platforms like Spotify, AI already drives key features like their recommendation engines. Southern says, "In the near future, I'm pretty certain we'll see artists soon using machine learning for a plethora of music applications - to mix and master their songs, help them identify unique chord progressions, alter instrumentation to change style, and determine more interesting melody structures."