Numerous cases are currently pending around the world concerning AI-created works and copyright infringement. Many of them originate in the US, with the claimants (ranging from high-profile celebrities to large publishers) alleging that their copyrighted works were used to train LLMs without permission, in a way that does not fall within the scope of fair use/fair dealing.

One argument goes that using copyrighted works to generate entirely new content is no different when done by a machine than when done by a human (for a practical example, think of how many young musical artists openly say their works are directly inspired by their heroes of decades gone by; no one would seriously allege copyright infringement in those circumstances). The counter-argument is that the speed and volume with which AI platforms can assimilate and convert copyright-protected works mean it is not the same as an up-and-coming talent working hard in their one-bedroom apartment for years in a bid to emulate success. In any event, recent statements from key players that content creators should indeed be compensated for their works, to some degree, open the door to settlement in all these cases. Let’s see how it plays out.

With the headline-grabbing nature of these claims, one kernel of the AI/copyright matrix that gets less airtime is the status of copyright ownership in any newly created works. Many of the user terms and conditions for multimodal platforms explicitly say that the user owns (and is free to use) whatever they generate, and in some cases the platform owner or their major investors go further by offering to indemnify the user if they are subsequently sued for copyright infringement (albeit there are niche and clever limitations to these indemnities). But what about where another user seeks to replicate the works “created” by the original user, and argues that the original user doesn’t own anything at all because they only typed in a few words as ‘prompts’? In other words, that the base-level criterion adopted by many major jurisdictions (Australia, the UK, China, the EU) – that copyright only exists where the necessary ‘skill and labour’ or ‘intellectual expression’ (or some equivalent formulation) was used to create some original output – has not been satisfied because of the nature of how users interact with multimodal platforms.

It seems the Beijing Internet Court in China has just given us the best answer yet, and it’s not hard to see things working out similarly in other countries.

In a nutshell, a Chinese user spent considerable time and effort generating an image on Stable Diffusion, using 20 prompts and 120 negative prompts, setting the Height, CFG Scale and Sampling Steps, and modifying the Seed – all concepts with which advanced Stable Diffusion users will be familiar. The user was able to generate some highly customised images over several iterations. For example, to get from one iteration to the next, the following prompts needed to be added: “shy, elegant, cute, lust, cool pose, teen, viewing at camera, masterpiece, best quality”.
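To make the parameters at issue concrete, the settings described above can be sketched as a small configuration object. This is a hypothetical illustration, not the litigant’s actual workflow: the names and values are stand-ins, and the comments mapping each setting onto the Hugging Face diffusers API parameter names are an assumption added for orientation.

```python
from dataclasses import dataclass


@dataclass
class GenerationSettings:
    """The kinds of controls the Beijing judgment treated as evidence of effort."""
    prompts: list[str]            # terms describing what should appear
    negative_prompts: list[str]   # terms describing what should NOT appear
    height: int = 768             # output image height in pixels
    cfg_scale: float = 7.0        # in diffusers: guidance_scale (prompt adherence)
    sampling_steps: int = 30      # in diffusers: num_inference_steps
    seed: int = 0                 # fixes the starting noise; a new seed gives a new image

    def prompt_string(self) -> str:
        # Stable Diffusion front-ends typically join prompt terms with commas.
        return ", ".join(self.prompts)


# One hypothetical iteration: the user refines the result by adding prompt
# terms and changing the seed, rather than by redrawing anything by hand.
settings = GenerationSettings(
    prompts=["masterpiece", "best quality", "elegant", "viewing at camera"],
    negative_prompts=["lowres", "bad anatomy"],
    seed=1234,
)
print(settings.prompt_string())
```

The point the sketch makes is the court’s own: each field is a deliberate authorial choice, and iterating over many such configurations is where the claimed ‘intellectual achievement’ lies.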

Some days later, the user discovered that precisely the same image was being used in a widely published article (without consent). So began infringement proceedings.

The Court was clear in its decision. The amount of time invested by the user, together with the very specific prompting and advanced configurations set by the user, could not be regarded as anything other than an intellectual achievement. The customisation built into the images through the user’s personal choices over the various iterations also clearly established originality, according to the Court, though it is noteworthy that neither the addition of the blue sash nor the removal of the fringe was requested by the user in the above-mentioned prompts. Regardless, the user was the owner of the intellectual property in these images.

To further underline the point, the Court drew an analogy with the advent of photography a century ago, when some commentators considered that clicking a button on a camera shouldn’t constitute sufficient creativity. This view does not persist, and rightly so, as photographers put considerable effort into finding just the right shot. The argument goes that, in the same way no one would dispute copyright ownership in a photo today just because the photographer wasn’t physically holding a paintbrush or pencil, copyright ownership should not be denied in respect of AI-generated images.

The law in the UK has been slightly ahead of the curve in this respect since 1988, when the CDPA 1988 expressly provided for copyright protection of computer-generated works that have no human creator per se, providing that where a work is “generated by computer in circumstances such that there is no human author”, the author of such a work is “the person by whom the arrangements necessary for the creation of the work are undertaken”. However, while it is clear that a human being or organisation can be the author where software creates the work, the legislation is not clear about which human being or organisation (of the possible candidates) would be the author. Is it the owner of the AI platform itself? Is it an organisation engaged for LLM training purposes and computational runs? Is it the individual who wrote the prompts? Some combination of the above?

It is not difficult to envisage a UK court reaching similar conclusions to the Chinese court: the prompts are “necessary”, and the act of prompting is the step closest to the creation of the work, which suggests that the person writing those prompts is the person most closely associated with its creation.

That all said, we should not expect every court around the world to rule on these issues in the same way, as shown by the US District Court for the District of Columbia coming to the opposite conclusion in August last year when it said that, irrespective of the level of prompting, a human creator of the work (in the natural sense) was the “bedrock” of US copyright legislation. The decision has not been met with universal acclaim, and it is little wonder that two weeks later the US Copyright Office published a request for commentary on a variety of AI-related copyright issues, saying in the process that “questions remain about where and how to draw the line between human creation and AI-generated content.”

In this context, the UK Government’s own consultation on copyright in AI (2022) said it was too early to opine on the issue or to make legislative changes. This may have been a wise move, especially as more advanced LLMs, trained on bigger data sets and built with more parameters, mean that less and less human involvement will be needed to create the same output. The question the courts therefore need to answer decisively is this: in a world where technological advances invariably mean that the amount of human investment needed to produce output mirroring exactly what one has in mind will decrease, is there a threshold beyond which it is inappropriate to attribute authorship to a human being?

Fascinatingly, while this article is of course entirely my own work, if I had asked an LLM to write it for me, having invested time and effort in typing the appropriate prompts, right now I would be the author in some countries only.