Hand Built by Robots: AI-generated music is on the rise

The rapid development of generative artificial intelligence (AI) is causing concern among various music industry stakeholders. In the last month, music executives[i], rightsholders[ii], and artists[iii] have raised concerns about the dilution of human artists’ earnings and prominence, as well as the unlicensed scraping of artists’ works for the purposes of AI training.


In April, an AI-generated recording by Ghostwriter titled "Heart On My Sleeve" accumulated over 20 million views and streams across digital service providers (DSPs) including YouTube, Spotify, and TikTok[1]. Although the lyrics were reportedly written by Ghostwriter[2], the voices singing them were AI-generated versions of the Grammy Award-winning artists Drake and The Weeknd.

Universal Music Group (UMG), whose roster includes both Drake and The Weeknd, subsequently issued a copyright challenge to the DSPs hosting “Heart On My Sleeve”. This resulted in the song being taken down from all DSPs. UMG also took the additional step of issuing a press release, noting the label’s history of “embracing new technology” and warning the wider music industry about the potential harm to artists and rightsholders if AI firms’ use of copyright protected works continues unabated[3].

DSPs have responded to the recent rise in AI-generated music by removing songs that breach their terms of service. Spotify recently removed tens of thousands of songs uploaded by AI music start-up Boomy after UMG flagged allegedly suspicious streaming activity. Boomy allows its users to create machine-generated songs using a variety of genre prompts and descriptors, such as "sunset vibes" or "relaxing meditation", and then release those songs onto streaming platforms and receive royalty payments. The songs were removed because of alleged "artificial streaming", a practice in which bots pose as human listeners and inflate streaming numbers, thereby generating greater revenue for the relevant rightsholder(s).

Other firms, such as Believe, have gone even further. Denis Ladegaillerie, Believe's CEO and co-founder, set out the firm's response to the arrival of AI-generated music during its Q1 earnings call. Believe is currently testing AI detection technology that can filter AI-generated works out of its platform, with the stated aim of not distributing any AI-created content, whether through Believe or through TuneCore, its distribution and services platform. It has also begun experimenting with technology that can identify the royalty percentage rightsholders should claim for recordings that inform AI-generated songs, creating a new derivative revenue stream for artists and rightsholders.

Is change to the UK's IP landscape required?

Copyright allows artists and labels to protect their sound recordings (as well as the literary and musical works that underpin them) from copycats. However, the current UK copyright regime does not protect the timbre of an artist's voice, only a sound recording of it. AI voice-cloning technology uses deep learning and text-to-speech software to recreate a particular voice. If protected sound recordings of a particular artist's voice have been used to train the system in the UK, this is likely to give rise to copyright infringement. However, a more complex (and currently unresolved) question is whether the resulting outputs of the trained system also infringe any copyright in the training data. Artists may therefore consider alternative routes to prevent the re-use of their voice, such as an action for passing off, where there is a misrepresentation that the AI-generated output is a recording of their voice.

UK performers' rights, the network of property and non-property rights granted to performers, are also unlikely to come into play in this context. This is because the most relevant restricted act, making copies of recordings of qualifying performances, does not apply to the processing of training data by an AI system.

Government response 

The UK government published its response to Sir Patrick Vallance's Pro-Innovation Regulation of Technologies Review in March this year[4]. The review was commissioned by the Chancellor of the Exchequer as part of the Autumn Statement 2022 to assess how the UK could improve the regulatory environment for emerging technologies[5]. A key recommendation of Sir Patrick's review is that the UK government needs to adopt a clear policy position on IP law in a generative AI context, giving AI firms, rightsholders, and the wider market confidence and certainty.

The UK government accepted this recommendation and announced that the UK Intellectual Property Office (UKIPO) will produce a code of practice providing guidance to AI firms seeking to use copyright-protected works as an input to their models. A notable aspect of the government's response is that if an AI firm commits to this code of practice, it can expect to be offered a "reasonable licence" by the relevant rightsholder(s) in exchange. However, concrete proposals are yet to be put forward, and there was no mention of the code in the government's AI White Paper published at the end of March 2023.


[i] AI music is danger to artists, Universal chief tells Hunt

[ii] PRS Explores Artificial Intelligence

[iii] This song sucks’: Nick Cave responds to ChatGPT song written in style of Nick Cave

[1] Viral AI-generated Drake and The Weeknd removed from streaming

[2] What’s really going on with ‘Ghostwriter’ and the AI Drake song?

[3] AI ‘fake Drake’ track deleted on Spotify, YouTube, TikTok after Universal Music Group copyright claim

[4] HM Government Response to Sir Patrick Vallance’s Pro-Innovation Regulation of Technologies Review

[5] Pro-innovation Regulation of Technologies Review Digital Technologies

Toby is a Partner in our Intellectual Property Group, based in London. Since joining us in 2010, Toby has been involved in a wide range of contentious intellectual property matters. With his physical sciences background, Toby is often involved in disputes involving complex technologies; his recent experience includes matters involving semiconductor fabrication and layout, analogue and digital electronics, flash memory, communication protocols and equipment, operating systems, software, video coding, and artificial intelligence. Toby also specialises in issues relating to the protection and commercialisation of data and artificial intelligence.
Dan is currently a trainee solicitor.