In 2024, Artificial Intelligence (AI) technology is advancing at a phenomenal pace, reminiscent of the internet’s explosive growth in the late 1990s. From virtual assistants providing evidence in criminal cases to algorithmic toothbrushes helping with dental hygiene, AI is permeating nearly every aspect of our everyday lives. It is profoundly reshaping how we work, communicate, and interact with the world around us, ushering in a wealth of opportunities and challenges across multiple sectors.
These opportunities are particularly felt in the music industry. AI is aiding artists in music production and lyrical composition, integrating with Virtual Reality (VR) to create immersive concert experiences pioneered by artists like Travis Scott, and powering personalised, engaging playlist recommendations on Spotify. AI algorithms are also revolutionising the gaming industry by generating music that adapts to player actions and gameplay scenarios in real time.
Some prominent artists are leveraging AI to expand the boundaries of their musical creations. In 2023, electronic musician Grimes launched Elf.Tech, an AI tool that allows users to transform recordings of their own voices into her voice, in an effort to “open source all art and kill copyright.”
Icelandic singer-songwriter Björk collaborated with Microsoft to create an AI-driven composition titled “Kórsafn,” which uses sounds from Icelandic choirs and reacts to environmental changes in real time.
Despite AI’s potential, it brings complex legal and ethical challenges. One of the primary issues pertains to intellectual property (“IP”) law, which protects the products of human creativity through copyright, trade marks, patents and design rights. A key question is whether AI-generated music qualifies as a “work” under the Copyright, Designs and Patents Act 1988. Unlike in the US, UK copyright law does not distinctly define “authorship” and “creative input” in terms of the necessity of ‘human’ creation. However, ambiguity remains as to whether AI can provide ‘creative input’ – a crucial factor in qualifying for copyright protection.
A recent example underscoring this issue is the UK government’s abandonment of plans to broker an industry-led AI copyright code of practice. The failure to reach a consensus highlights the complexity of balancing AI developers’ need for quality training data against content creators’ rights to control and commercialise their works, as well as the ongoing lack of clarity surrounding AI’s implications for copyright.
Amidst this uncertainty, a copyright challenge has emerged with the advent of “deepfake” technology. Deepfakes – media content created by AI technologies that are generally meant to be deceptive – are, within the music industry, songs that can convincingly mimic the vocal tones, lyrics, or melodies of established artists, often without their consent. Their development relies on feeding thousands of copyrighted works into artificial neural networks in order to ‘train’ them to identify and ultimately generate new musical patterns. This technology has been made available to the wider public through platforms such as the startup ‘Voicify’, which aims to “create AI covers with your favourite voices” using over 3,000 unlicensed voice models.
Both the Recording Industry Association of America (RIAA) and the British Phonographic Industry (BPI) have expressed strong opposition. In March 2024, the BPI sent a letter to Voicify’s solicitors threatening legal action for infringing artists’ copyrighted sound recordings. This marks the first time the BPI has taken legal action against a ‘deepfake’ service; Voicify subsequently changed its name to ‘Jammable’ whilst continuing to offer users access to its voice models. This ongoing case underscores the urgent need for legal frameworks “to play catchup” in response to the use of copyrighted materials by AI technology.
Lawmakers have already begun to respond. In June 2023, the European Parliament adopted its negotiating position on the AI Act, which introduces a requirement for developers of “general-purpose AI models” to obtain permission to use copyrighted materials during their AI systems’ development. The Act aims to protect the rights of artists and content creators, fostering a more equitable environment for the use of copyrighted music; one of its principles is rooted in the recognition that AI should ‘assist human capabilities’ rather than replace them. The Act’s technical robustness and safety principles seek to ensure that AI used in music production is reliable and predictable, minimising the risk of unintended or harmful outcomes and safeguarding both artists and consumers from the adverse effects of unreliable AI-generated content. The Act further requires that AI systems handle privacy and data governance lawfully, fairly, and transparently, embedding privacy considerations from the outset to protect artists’ rights and personal information.
Furthermore, in January this year the No AI FRAUD Act was introduced in the US House of Representatives, embodying similar principles to the European AI Act. If passed, this bill would enshrine a federal intellectual property right that would outlive the individual whose digital likeness and voice are protected.
Despite these legislative advancements, questions persist about the effectiveness of these measures in curbing AI-related infringements. For example, how will the use of copyrighted materials in AI development be interpreted as “temporary acts of reproduction,” and how should general-purpose AI be defined? Following the passing of the AI Act, European creators and rights-holders – including ICMP, the global trade association for the music publishing industry, and IMPF, which represents independent music publishers globally – called for the European Parliament’s effective enforcement of these rules, highlighting ongoing concerns about practical implementation.
The implications of deepfake technology extend beyond copyright infringement, touching on ethical questions about authorship and originality. If an AI can create music indistinguishable from that of a human artist, who is the true creator? This raises broader concerns about the authenticity and value of artistic works in an era where machines can replicate human creativity.

A collaborative approach to developing best practices for the ethical use of AI in music is essential to navigating these challenges. This can be achieved by promoting transparency in AI training data, ensuring that artists have control over the use of their likeness and work, and encouraging industry leaders to demonstrate the importance of proactive measures to protect creative rights. The latter is evidenced by Sony Music’s very recent decision to “opt-out” of having its content used in the training of AI. Furthermore, as the music industry explores the potential of generative AI – a market estimated to be worth $2.6 billion by 2032 – it is vital to balance innovation with the legal considerations that ensure respect for human artistry.