On 23rd March, the UK Government published its response to a call for views on methods of protecting AI-generated and other computer-generated creations that would not otherwise be protected under current legislation. The call for views covered a number of areas, including patents, copyright, designs, trade marks, and trade secrets.

Whilst the responses generally indicated a positive outlook towards AI and creativity, the Government was also warned that further development of the current regulatory framework to include AI may remove many of the human aspects of creativity. Ironically, this risks undermining the very protections and rewards currently afforded to human creators, which the existing framework of IP rights was designed to secure. This note sets out the key outcomes and government actions, explains how the Government reached its conclusions, and touches upon other work that the UK’s Intellectual Property Office is carrying out in this area.



Patents:

  • Consult later in 2021 on a range of potential policy options.
  • Publish enhanced IPO guidelines on patent exclusions for AI inventions, and engage with industry to enhance understanding of patent practice and AI.
  • Commission an economic study, and consult governmental departments, to further understand how the IP framework works to incentivise investment in AI.
  • Work with stakeholders and international partners to establish the feasibility, costs and benefits of a deposit system for data used to train AI systems disclosed within patent applications.


Copyright:

  • Review and consult on the manner in which copyright owners license their works for use within AI, in order to make licensing easier.
  • Consult on whether to limit copyright protections to original works where human input is involved.
  • Consult on whether to replace existing protections with a related right reflecting the investment in particular works.
  • Consider whether action should be taken to reduce confusion between human and AI works and the risk of false-attribution.

Trade marks, Designs, Trade Secrets:

No change for now but will be closely monitored.


The following paragraphs highlight a number of key take-aways from the response and how they have the potential to impact those working within the creative sectors as well as those working in industries, such as financial services and life sciences, where AI continues to develop and revolutionise standard business practices.


Patents

Summarised below are two key issues drawn out from the responses: whether AI can devise inventions without human involvement, and patent infringement by AI systems.

AI as an Inventor

There was a mixed response to the question of whether AI systems are capable of devising inventions without human involvement. However, there was a general consensus that AI systems are not, or not yet, capable of seeking patent rights as independent agents, with many respondents arguing that such an arrangement would require artificial general intelligence. This accords with decisions of both the European Patent Office and the UK Intellectual Property Office, which held that AI systems are neither technically capable nor permitted under the current regime to be named as inventors.

Questions then turned to whether patent law should allow an AI system to be identified as an inventor (either sole or joint), and whether inventorship criteria should change. Some responses argued that an AI system itself should not qualify as an inventor, but rather that the definition of “inventor” should be broadened to include “a person by whom the arrangements necessary for devising an invention are undertaken”. Other respondents felt that inventions generated by AI were already patentable, and therefore that changes to inventorship criteria were not necessary.

Overall, a large number of respondents argued that the current approach to inventorship criteria was insufficient, and that if inventions generated by AI systems are not recognised under the patent system, then such inventions might go unpublished (potentially leading to trade secret protection at the expense of innovation).

Outcome: Recognising the concerns raised by respondents, in particular with regard to potential detrimental impacts on innovation, the Government indicated it would consult later this year on a range of potential policy options for protecting AI generated inventions (as summarised above).


Infringement and Liability

The majority of responses agreed that legal persons (an individual or corporation which has legal personality) would be liable in the case of patent infringement by an AI system, and not the AI itself. However, there was no consensus on which specific legal persons should be liable, with some respondents arguing that liability should sit with the legal person who performs the infringing act through the AI, and others arguing that the owner or developer of the AI should be liable.

A large number of responses argued that there would be problems in proving patent infringement by AI, with potential issues such as proving the territory in which infringement took place (i.e. where AI is a neural network hosted on different servers in multiple countries) or where actions of the AI are unclear due to “black box” processing.  However, it was acknowledged that many problems related to proving AI patent infringement are shared with other complex technologies. (It is also fair to say that these challenges are faced in other legal contexts, most obviously data protection where data location and transparency are key.)

Outcome: The Government concluded that courts should continue to have appropriate flexibility to make decisions on a case-by-case basis in this area; it does not intend to intervene at this stage.


Copyright

As with patents, the response highlighted two key issues with AI and copyright: the use of copyrighted works as datasets to train AI, and the protection of works ‘created’ by AI programmes.

Use of Copyrighted Works and Data

Responses indicated a general consensus that the current copyright regime[1] adequately addresses instances where protected materials are used to train or develop AI. In the area of text and data mining, organisations noted that amendments to the current exception, which permits such use for non-commercial research, could lead to function creep and undermine the legitimate interests of rights owners.

Outcome: The Government considered these concerns valid and, in response, committed to further review of the ways licensed materials are used to train AI. It also proposed further consultation to determine whether amendments to current licensing models and exceptions would address these concerns.

Protection of AI ‘Created’ Works

Many respondents noted that the current regime protects “computer-generated works”[2], meaning that AI “created” works are already potentially protected. However, the requirement for a work to be “original” if it is to be protected causes legal uncertainty, given the potential requirement of human intervention. While it was generally accepted that works created using AI as a tool would be protected, doubt was cast over whether works created exclusively by AI, without human intervention, would also be covered.

Outcome: The Government acknowledged that, while the current regime offers protection where AI is used as a tool, it remains unclear whether protection extends to works involving no human input, and re-evaluation of the regime is therefore required. It intends to consult on this matter to determine whether to replace existing protections with comparable rights, with the scope and duration necessary to reflect the investment made in the work. The Government will also consider the need for measures to reduce confusion between human and AI developed works and the potential risk of false attribution.

Trade marks

For trade marks, the Government invited responses in two areas: the potential impact of AI on trade mark law and whether AI actions could amount to trade mark infringement.

The potential impact of AI on trade mark law

Respondents were asked to comment on whether AI systems were likely to affect trade mark law, including legal concepts such as the “average consumer”, which is used to assess the likelihood of confusion between signs, as well as the wider trade mark regulatory regime. The responses acknowledged that AI systems could potentially reduce consumer participation in purchasing decisions. For example, the use of AI in e-commerce to make purchasing suggestions results in only a limited range of product suggestions (personalised to the individual consumer) being offered. This could reduce opportunities for confusion on the part of the “average consumer” and therefore minimise instances of infringement.

It was also acknowledged that AI has not developed enough to be deemed, or to act as, a consumer or legal personality in its own right, and that there was therefore no current need to deviate from established legal concepts within the trade mark regime to allow AI systems to participate in interactions other than as a tool.

Infringement and Liability

In general, the respondents agreed that AI systems are capable of being used to infringe trade marks.  By way of example, voice assistants presenting a consumer with various options for purchase may amount to consumer misdirection in circumstances where the consumer had asked for a specific brand of the product.  However, most respondents argued that, while AI systems may infringe trade marks, AI should be regarded as a tool being deployed under human direction and that liability therefore lies with a legal person.  There were a range of suggestions as to the all-important question of who such legal person should be, including the user of the AI, the provider of data via the AI tool, or the relevant brand owners.

Outcome: The Government acknowledged that legislative changes were not currently necessary, but the situation should be monitored as AI technology evolves.


Designs

The general view expressed by respondents (and accepted by the Government) was that AI systems should not be considered authors or owners of a design. Similarly, on infringement, the Government acknowledged that, while AI systems may be used to carry out infringing acts, in such circumstances liability lies with the individual using the AI system. Accordingly, the Government accepted that the “informed user” test should continue to apply to designs generated by AI.

Outcome: Overall, the Government concluded that current legislation was generally fit for purpose, but that the situation should be monitored as AI systems used in the design process continue to develop.

Trade Secrets

Overall, respondents found that trade secrets fill an important gap in protection, particularly during the development stages of software, where creators wish to keep their work difficult to reverse-engineer. It was, however, noted that trade secrets may not be suitable where disclosure of information is required, such as where training datasets are derived from personal information.

Outcome: The Government determined that no urgent change to this area is required. It did however note that this would remain an important area of interest and would be closely monitored.

Should you have any questions about the subjects of this article, or any questions about AI or IP more generally, please contact the authors or your regular DLA Piper contact.

[1] See generally: Copyright, Designs and Patents Act 1988; Intellectual Property Act 2014; Copyright and Performances (Application to Other Countries) Order 2016 (SI 2016/1219); Intellectual Property (Copyright and Related Rights) (Amendment) (EU Exit) Regulations 2019 (SI 2019/605).

[2] Copyright, Designs and Patents Act 1988, section 9(3).