When AI generates music, what happens to copyright protection?

Written By: Ayo Adebajo

Use of artificial intelligence (AI) and machine learning is accelerating within society and across the media and entertainment space, and music is a key area in which this is evident. This article provides an overview of how AI music generators are affecting the music industry and the potential consequences under UK copyright law.

AI Music Generators

There is increasing interest and investment worldwide in AI music generation. Companies such as AudioShake, OpenAI, Splice, Stability AI, Dessa and BandLab have made significant developments in this space. ByteDance (TikTok’s parent company) acquired Jukedeck, a UK-based AI music start-up, in 2019; Shutterstock acquired the AI-driven music platform Amper Music in 2020; and HYBE recently acquired Supertone.

Utilising its Lingyin Engine technology, Tencent Music in China has reportedly released over 1,000 songs featuring human-mimicking AI vocals, some synthetically recreating those of deceased artists. One such track, titled “today”, is claimed to have surpassed 100 million streams!

In 2020, Jay-Z filed DMCA takedown notices against an anonymous YouTuber, Vocal Synthesis, who created deepfakes of Jay-Z reciting William Shakespeare’s ‘Hamlet’ and a Billy Joel song. More recently, AI deepfake technology was used to create “Life Cut Short”, a posthumous track by Depzman, a grime artist from Birmingham who tragically passed away in 2013.

(Holographic image of Depzman (Joshua Ribera) from Life Cut Short music video)

Elsewhere, last year GRM Daily reported that FN Meka, a controversial AI-powered rapper that amassed over ten million followers on TikTok, had become the first AI artist to be signed by a major record label (Capitol Records, distributed under Universal Music Group), before being dropped due to social backlash over stereotyping. This begs the question: how far are we from the likes of Stormzy, Adele, Dave or Ed Sheeran battling with AI artists for chart-topping singles?

(AI powered virtual rapper, FN Meka)

These instances of automation within music raise questions around whether AI music generators are capable of creating musical compositions that are protected by copyright, whilst also infringing existing ones.

AI Legal Authorship

The increasing prevalence of AI music generation is likely to have significant consequences for intellectual property (IP), especially copyright law. Whilst the requirements under copyright law depend on jurisdiction, the author of a work is generally the copyright owner, and this right is infringed if another party copies and reproduces that work without consent or an adequate licence to do so.

Under English Law, the Copyright, Designs and Patents Act 1988 (the “Act”) delineates the requirements for authorship, copyright protection and infringement. A primary question in this area is: can AI or machines be authors for the purposes of copyright protection?

Within the UK legal framework, s9(3) of the Act indicates that where a literary, dramatic, musical or artistic work “is computer-generated, the author shall be taken to be the person by whom the arrangements necessary for the creation of the work are undertaken.” So, in instances of AI/computer-generated music (i.e., where there is no direct human author (s178)), the person(s) responsible for organising the AI systems may be deemed the author. It is unclear how far back in the AI generative chain UK courts will go to identify such authors, or what extent of originality is required. Note that in other jurisdictions, computer-generated works may be unprotectable by copyright.

Copyright Protection and Infringement for Automated Music

In the UK, then, the person(s) responsible for arranging AI-generated music, such as programmers, may be considered authors even if they provide no direct creative input into the musical process. But are these works protected by copyright?

For copyright to subsist in a musical work, it must be an original creation involving a minimum of effort that has been recorded. The level of originality required for works to be protected by copyright law has been highly disputed across Europe (see the landmark Infopaq case), but the basic summary of the UK’s originality threshold is that the work in question must originate from the author’s independent skill, labour and judgement. In the absence of much case law, it is unclear how to apply this requirement to AI-generated works, which has led to varied interpretations: that originality is required within the human author’s arrangement of the AI; that the originality of AI-generated works should be assessed objectively, as if they were authored by humans; or that AI-generated works are exempt from the requirement altogether. We await legal clarity as to whether originality applies to the training and design of AI systems, to the music generated by AI, or whether the threshold is applicable in this context at all.

AI-induced copyright infringement may occur under s16 of the Act if the ‘whole or substantial part’ of an original musical work is reproduced by AI (perhaps by copying excessive data from existing tracks) without the licence of the copyright owner or an adequate legal exception. AI lacks legal personality, so liability would be attributed to the acts or omissions of the relevant human individual or company, such as an AI developer.

AI Generated or Assisted Music

The use of AI to create music is highly complex and several parties may be involved, for example, those inputting data may be separate to those creating the AI system itself.

Copyright infringement lawsuits between human musicians can take years to conclude, so you can imagine the difficulty a musicologist would face in proving that AI has copied a musical composition. How can we identify the original works and the copyright holders if AI combines, or is trained on, innumerable short extracts from existing songs to create a new single? Whilst the hypothetical questions on where AI music generation leaves the industry are endless, there is a critical distinction between two forms of AI output which underpins many of these quandaries:

  • AI Generated Works – works created by AI without a human author;
  • AI Assisted Works – works created by a human author using AI as a tool.

These two AI outputs can be easily conflated – and perhaps rightly so – because there is no clear distinction between them in practice. Is it the level or timing of human interaction that dictates whether a work is ‘AI-generated’ or ‘AI-assisted’? Is it even possible for works to be AI-generated in a pure sense, without any human involvement? AI is increasingly more than a “tool” for creating music, but this appears to be a sliding scale dependent on the use case. Where AI is clearly an assistive “tool” used by a human musician, the musician is recognised as the ‘first owner’ and copyright holder of the protected musical work (s11(1) of the Act).

Research scientists and engineers generally agree that some level of human interaction is currently required in programming AI (at the very least at its inception), which further indicates that individuals or companies are ultimately liable for the infringing acts of AI. Liability may be shared where there are multiple parties, for example, involvement from AI software engineers and data programmers from different companies; however, greater legal clarity is required to establish exactly how liability for infringement should be apportioned.
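To illustrate why provenance is so hard to trace, consider a toy sketch (purely illustrative; the songs, notes and model here are hypothetical and vastly simpler than any real music-generation system): a first-order Markov model “trained” on short note sequences from several songs. Every individual note-to-note transition in the output is copied from some training song, yet the generated melody as a whole may match none of them.

```python
import random

# Hypothetical training data: short note sequences from three "songs".
songs = {
    "song_a": ["C", "E", "G", "E", "C"],
    "song_b": ["D", "F", "A", "F", "D"],
    "song_c": ["C", "F", "G", "A", "C"],
}

# Build a transition table: each note maps to the notes that follow it
# somewhere in the training data.
transitions = {}
for notes in songs.values():
    for cur, nxt in zip(notes, notes[1:]):
        transitions.setdefault(cur, []).append(nxt)

def generate(start="C", length=8):
    """Walk the transition table to produce a new note sequence."""
    out = [start]
    for _ in range(length - 1):
        choices = transitions.get(out[-1])
        if not choices:
            break
        out.append(random.choice(choices))
    return out

melody = generate()
print(melody)  # a blend of fragments: no single source song is identifiable
```

Scaled up from three five-note songs to millions of audio extracts, and from a transition table to a neural network, the same property makes it practically impossible to point to a single “original work” behind any given output.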

Automation Outstripping Legislation

AI and machine learning are developing at a fast pace, leading to concerns that guidance and legislation have been outstripped by technological advances. The UK Intellectual Property Office (UK IPO) has held a consultation to begin to address this. The body has also been reviewing the proposed new text and data mining (TDM) copyright exception, which would permit a third party with lawful access to a dataset to train AI on that data for commercial use.

Whilst considering the interests of stakeholders within the music industry, the UK could employ various solutions in the future, e.g. substantively changing legislation to cover AI directly, adding AI-related exceptions to existing laws, relying on case law precedents to clarify the application of existing laws, relying on licensing to protect human authors whilst enabling AI training on musical data with consent, or a combination of these.

The general consensus from stakeholders seems to be that updated regulation is required to deal with the complexity of AI, but opinions on the ideal form of such updates differ widely.

The Future of AI within Music

AI will continue to affect the music industry and challenge existing legal frameworks on ownership. There will be concerns that AI and machine learning could exacerbate music piracy, further saturate music streaming by generating innumerable tracks, and make human artists and producers redundant.

Creating music is often a collaborative process and so AI could simply become a more prominent part of this chain, rather than replacing humans altogether. However, the trajectory indicates that AI will be more than a “tool,” eventually generating music with minimal human involvement (or, depending on your definition, possibly none whatsoever).

How we regulate the use of AI within music, such as mimicking artists or sampling existing production, is an ongoing issue that requires updates in line with technology. Whilst some suggest that new legislation is required to deal with the novel impacts of AI, others argue that it is the application of existing legislation and licensing practices that requires further delineation. What is clear is that the increasing use of AI will have substantial consequences on copyright protection. Whether your favourite artists and producers will be replaced by dystopian AI remains to be seen!