On 22 November, Lord Holmes of Richmond (Conservative) introduced the Artificial Intelligence (Regulation) Bill (“Bill”) into the UK’s House of Lords. Introduced against a backdrop of rapid international regulatory development targeted at artificial intelligence (“AI”), it is notable to see a Private Member’s Bill in an area of specific focus for the current Government. While this makes it less likely that the Bill will make it through the legislative process and onto the statute books, it provides an interesting insight into the issues being discussed in the UK’s upper chamber.

The Bill introduces several regulatory concepts familiar from other proposals and legislative drafts from around the world and aligns with many regulatory and governance concepts that have been developing in more comprehensive regulatory frameworks. These include recent actions by the United States (including the Artificial Intelligence Research, Innovation, and Accountability Act of 2023 and the Executive Order on Safe, Secure, and Trustworthy AI) and the EU (including the proposed AI Regulation (“AI Act”), which has been making steady progress through the EU’s lengthy legislative processes for around two years).

An Authoritative Voice for AI

In similar fashion to the creation of an AI Office within the EU as an authoritative body for the effective governance and regulation of AI, the Bill would establish an AI Authority (“AI Authority”) within the UK.

As introduced, the Bill provides that the AI Authority would perform the following functions:

  • monitor regulator activity and focus on AI in the UK;
  • promote alignment in regulatory approach across the UK with respect to AI;
  • undertake a gap analysis of regulatory responsibilities in respect of AI;
  • coordinate a review of relevant legislation, including product safety, privacy and consumer protection, to assess its suitability to address the challenges and opportunities presented by AI;
  • monitor the overall regulatory framework’s effectiveness and the implementation of the principles established in the Bill, including the extent to which they support innovation;
  • assess risks across the UK economy arising from AI;
  • conduct horizon-scanning to inform a coherent response to emerging AI technology trends;
  • support testbeds and sandbox initiatives to help AI innovators get new technologies to market;
  • accredit independent AI auditors;
  • provide education and awareness to give clarity to businesses and to empower individuals to express views as part of the iteration of the framework; and
  • promote interoperability with international regulatory frameworks.

For those tracking approaches across the world, many of these responsibilities will be familiar. They are reflective of other legislative approaches aimed at protecting interests and encouraging businesses to continue to develop advanced technologies.

A Principled Approach

The UK Government had already outlined a principles-based approach to the regulation of AI as its preferred course, having established a set of principles in its existing white paper on AI. The Bill follows a similar approach and mandates that the AI Authority must have regard to the following principles:

  • regulation should deliver:
    • safety, security, and robustness;
    • transparency and explainability;
    • fairness;
    • accountability and governance; and
    • contestability and redress;
  • any business developing, deploying, or using AI should:
    • be transparent in its activities;
    • ensure thorough and transparent testing; and
    • comply with applicable laws (including data protection and intellectual property);
  • AI should be:
    • applied in compliance with equality legislation;
    • designed to encourage inclusivity;
    • designed to minimise discrimination, prevent unlawful discrimination, and to the extent practicable, mitigate unlawful discrimination within input data;
    • designed to meet the needs of vulnerable groups including lower income groups, elderly, and those with disabilities; and
    • used in a manner that generates data that is findable, accessible, interoperable, and reusable.

The proposed principles mirror many of those commonly established by international approaches, including those of the OECD, the EU, the US, and recently the G7.

Sandboxes and Sandcastles

Creating a safe space for experimentation is one way in which regulation can balance safety and innovation in AI. A step of this kind is by no means a new regulatory development – we have already seen similar steps taken by Spain in response to the EU AI Act and by the US in its proposed framework for the testing of certain AI systems.

The Bill would require the AI Authority to collaborate with UK regulators to construct appropriate sandboxes and testing environments for AI. A primary goal of the sandboxes is to allow businesses to test innovative propositions with real consumers and provide firms with support in identifying appropriate protections and safeguards.

These sandboxes would be expected to have clear objectives at the outset and would be conducted on a small scale.

AI Officer on Deck

A more novel proposal in the Bill is the introduction of an ‘AI Officer’, who would be required in any business which develops, deploys, or uses AI. If implemented, this would represent a significant change for UK businesses.

The AI Officer would be required to ensure the safe, ethical, unbiased, and non-discriminatory use of AI within the organisation, and to ensure that (to the extent possible) data used in AI technology is unbiased.

To implement this, it is currently proposed that amendments be made to the Companies Act 2006, which acts as one of the primary sources of commercial regulation and obligations in the UK.

A Fresh Approach to Regulation

The Bill would also require that additional regulations be developed to require that any person involved in the training of AI must:

  • supply to the AI Authority a record of all third-party data and IP used during training; and
  • assure the AI Authority that they use all data and IP with the appropriate consents and that the organisation complies with all applicable IP and copyright laws.

Where an organisation supplies a product or service involving AI, it would have to give customers clear and unambiguous health warnings, labelling, and opportunities to give or withhold informed consent in advance.

Finally, businesses which develop, deploy, or use AI would be required to allow independent third parties accredited by the AI Authority to audit their processes and systems.

A Public Interest

The development and deployment of innovative technologies is of little value if the public is apprehensive about their use. The Bill would therefore require the AI Authority to implement several initiatives to increase public awareness, understanding, and widescale engagement in the development and regulatory process.

These would include:

  • implementing a programme for meaningful, long-term public engagement about the opportunities and risks presented by AI; and
  • consulting the public and such persons as it considers appropriate as to the most effective frameworks for public engagement, having regard to international comparators.

A Developing Definition

A final aspect of the Bill is the development of a new definition of AI to be used within the UK. This follows a trend in more recent legislative proposals across several countries (see, for example, the US) of moving away from the previously harmonised approach of using the definition provided by the OECD.

Under the Bill, AI is defined as:

“technology enabling the programming or training of a device or software to:

(a) perceive environments through the use of data;

(b) interpret data using automated processing designed to approximate cognitive abilities; and

(c) make recommendations, predictions or decisions, with a view to achieving a specific objective”

The Bill also makes clear that recent technologies referred to as ‘generative AI’ are included, separately defining these technologies as:

“deep or large language models able to generate text and other content based on the data on which they were trained”

What happens next?

It is unclear whether the Bill will ever become law, for several reasons. Bills of this nature introduced in the House of Lords commonly fail to gain sufficient traction in the legislative process; that is doubly so when the proposed law touches upon subject matter that has clearly been under recent, specific consideration by the current administration.

Notwithstanding this, the Bill offers a view of controls that might be reflected in some way in the Government’s own regulations as AI moves to more widespread use across every sector and aspect of society in the UK.

DLA Piper is here to help

As part of the Financial Times’ 2023 North America Innovative Lawyer awards, DLA Piper has been shortlisted for an Innovative Lawyers in Technology award for its AI and Data Analytics practice.

For more information on AI and the emerging legal and regulatory standards, visit DLA Piper’s focus page on AI.

Gain insights and perspectives that will help shape your AI Strategy through our newly released AI Chatroom Series.

For further information or if you have any questions, please contact any of the authors.