AI did it My Way

The influence of artificial intelligence (“AI”) in the music industry is nothing new and its transformative impact is showing no signs of slowing up. In this article, Katie Rimmer dissects the various AI applications that have been used in the sector and the interplay of AI and music with IP rights.


AI capabilities have evolved beyond curating playlists based on our individual listening habits and can now imitate deceased artists in autonomously created songs. Earlier this year, California-based OpenAI introduced its Jukebox project, which has been ‘taught’ to generate new music and lyrics in a variety of genres and styles and has even managed an almost-convincing replication of the suave tones of Frank Sinatra singing an original Christmas song[1]. Jukebox’s AI musical creations are a signpost of where AI is heading and raise interesting questions about the legal impact of ‘deepfake’ technology such as this.

What are deepfakes?

As described by Danielle Van Lier (Assistant General Counsel at SAG-AFTRA) in a discussion with Tessellate earlier this year[2], deepfakes are falsified audio and video recordings created using AI. Specifically, the ‘deep’ stems from deep learning, a subfield of AI that uses neural networks to mimic the way in which the human brain operates. In the case of Jukebox, the model was trained on a dataset of 1.2 million songs from the web, paired with their corresponding lyrics and metadata; that training is then used to generate a new track in line with the genre and artist style you provide it with.
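To make that training-and-generation pattern concrete, the toy sketch below (an illustration only, not OpenAI's actual Jukebox code, and vastly simpler than a neural network) shows the general idea: a model ingests a corpus of lyrics tagged by artist, learns which words tend to follow which, and then generates a new word sequence conditioned on the artist style you request. The artist labels and lyric fragments are invented for the example.

```python
import random
from collections import defaultdict

# Toy corpus standing in for Jukebox's 1.2 million-song dataset:
# each entry pairs an artist label with a fragment of lyrics.
CORPUS = [
    ("sinatra", "fly me to the moon and let me play among the stars"),
    ("sinatra", "i did it my way and i faced it all"),
    ("joel", "we didn't start the fire it was always burning"),
]

def train(corpus):
    """Build a per-artist bigram table: word -> list of observed next words."""
    model = defaultdict(lambda: defaultdict(list))
    for artist, lyrics in corpus:
        words = lyrics.split()
        for current, following in zip(words, words[1:]):
            model[artist][current].append(following)
    return model

def generate(model, artist, seed, length=8, rng=None):
    """Generate a new word sequence in the given artist's 'style'."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    out = [seed]
    for _ in range(length - 1):
        candidates = model[artist].get(out[-1])
        if not candidates:
            break  # no learned continuation for this word
        out.append(rng.choice(candidates))
    return " ".join(out)

model = train(CORPUS)
print(generate(model, "sinatra", "fly"))
```

A real system replaces the bigram table with a deep neural network and operates on raw audio rather than words, but the legal questions discussed below attach to exactly this pipeline: the ingestion of the training corpus, and the conditioning of the output on a particular artist's style.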

In a wider context, deepfakes have enormous potential to influence society which brings genuine concerns such as the spreading of disinformation, manipulating societal behaviour or even assisting criminals in cyber-theft[3]. Conversely, AI undeniably presents extensive opportunities to drive innovation, improve efficiency and eradicate human error.

Copyright and AI

The Copyright, Designs and Patents Act 1988 (“CDPA”) is the UK’s legal framework protecting authors’ original works and preventing others from copying them. The CDPA, however, does not directly reference AI, and it would be safe to say that the legal interplay of copyright and AI remains nebulous. Important questions arise: who is the author, when is a work infringed, and how will the law continue to evolve as AI inevitably closes the gap between human and computer creations?

Copyright automatically protects original literary, dramatic, artistic and musical works (which includes the music and lyrics of a song) under section 1(1)(a) CDPA. Where an AI model is instructed not to replicate existing musical or lyrical works but to find available patterns instead, it should be possible for the generated work to avoid infringing an existing one. More controversial, however, is the process undertaken to ‘teach’ the AI model, which typically involves the ingestion (i.e. reproduction in a database) of very large numbers of works; in the case of music, this might, for example, enable the machine to learn which melodic patterns, harmonies or beats are most likely to lead to the overall composition being appealing. The holders of the rights in the musical works and sound recordings which are reproduced in this way wish to ensure that the current legal regime (which requires any such use to be licensed) is maintained, in particular through any post-Brexit divergence from current EU copyright law (one of the subjects of the current consultation process being run by the UK Intellectual Property Office (UK IPO))[4].


The UK IPO demonstrated its commitment to gathering information and ideas on this topic when it launched a call for views in September this year[5][6], with Amanda Solloway MP (Minister for Science, Research and Innovation) acknowledging that “a strong AI sector needs a strong IP framework”[7].

Similarly, in September 2019, the World Intellectual Property Organization (“WIPO”) launched a consultation with member state representatives to explore the impact of AI on IP, the results of which were published in May this year[8]. The paper clears up confusion over certain definitions; for instance, AI is defined as “a discipline of computer science that is aimed at developing machines and systems that can carry out tasks considered to require human intelligence, with limited or no human intervention.” Further, a distinction was drawn between AI-generated (without human intervention) and AI-assisted (with material human intervention and/or direction) outputs.

The controversial topic of deepfakes was also highlighted, with important questions asked: is copyright an appropriate vehicle for regulating deepfakes? To whom should the copyright in a deepfake belong? And should there be a system of equitable remuneration for persons whose likenesses and “performances” are used in a deepfake?

Who does the copyright belong to?

Section 9(3) CDPA sets out the starting point:

“In the case of a literary, dramatic, musical or artistic work which is computer-generated, the author shall be taken to be the person by whom the arrangements necessary for the creation of the work are undertaken”.

“Computer-generated” applies where there is no human author of the work (s178 CDPA), but locating the person who made the “necessary arrangements” may be challenging. Taking AI Frank Sinatra’s new Christmas tune: should the copyright belong to those who created the dataset of 1.2m songs? Or to those who wrote the code of the AI model? Or even to those who came up with the initial idea to bring Frank back to life through the means of AI? Indeed, in some situations, locating any human who fits the CDPA’s requirements may be problematic, and the UK courts are yet to directly address this issue. For further discussion on section 9(3) CDPA, see here[9].

AI and the Music Industry

The battle of balancing artistic freedom and protecting a copyright owner’s rights has frequently hit the headlines over the past few years[10] and has resulted in huge damages awards – look no further than the $5.3 million ‘Blurred Lines’ judgment[11]. It now seems deepfakes are also becoming a very real issue for musicians.

Earlier this year, hip-hop artist Jay-Z was impersonated in YouTube videos where AI was used to set the rapper’s unique voice to Hamlet and Billy Joel[12]. Jay-Z’s company Roc Nation filed takedown notices stating: “This content unlawfully uses an AI to impersonate our client’s voice.” The videos were initially taken down but have since been reinstated by YouTube[13].

AI’s impact on the music industry is still in its early stages. Some have legitimate concerns about computers replacing what has previously been regarded as the inherently human trait of creativity. If AI Frank learns to deliver ‘New York, New York’ with the passion and swing of Frank himself, where does the need for authentic human performance fit in? There will be inevitable legal battles that will shape the development of copyright law in this area and a lot remains to be seen.

Katie works in Bird & Bird's London office, and has an interest in the Media, Entertainment and Sport sector.

