In September 2021, the UK Government (“Government”) published the National Artificial Intelligence (AI) Strategy, setting out its ambition to make the UK a global AI superpower over the next decade. The strategy promised a thriving ecosystem, supported by Government policy that would establish an effective regulatory framework; a new governmental department focussed on AI and other innovative technologies; and collaboration with national regulators.
On 29 March 2023, the Government published the long-awaited white paper (“Paper”) setting out how the UK anticipates achieving the first, and most important, of these goals – the creation of a blueprint for the future governance and regulation of AI in the UK. The Paper is open for consultation until 21 June 2023.
The Paper, titled “A pro-innovation approach to AI regulation”, recognises the importance of building a framework that engenders trust and confidence in the responsible use of AI (noting the key risks to health, security, privacy, and more, that can arise through an unregulated approach), but cautions against ‘overbearing’ regulation which may adversely impact innovation and investment.
This theme runs throughout the Paper and develops into recommendations supporting a lighter-touch, and arguably more organic, regulatory approach than we have seen in other jurisdictions. This is most notable when compared to the approach of the EU, where the focus has been on the development of a harmonising AI-specific law and a supporting AI-specific regulatory regime.
The Paper contends that effective AI regulation can be constructed without the need for new cross-sectoral legislation. Instead, the UK is aiming to establish “a deliberately agile and iterative approach” that avoids the risk of “rigid and onerous legislative requirements on businesses”. This ambition would largely be achieved by co-opting existing sectoral regulators to take direct responsibility for the establishment, promotion, and oversight of responsible AI in their respective domains, supported by the development of non-binding assurance schemes and technical standards.
Core Principles
This approach may differ in execution from the proposals emerging from Europe in the form of the EU’s proposed Artificial Intelligence Act (“EU AIA”). If we look beneath the surface, however, we find the Paper committing the UK to core principles for responsible AI that are consistent across both regimes:
- Safety, security, and robustness: AI should function in a secure, safe, and robust manner, where risks can be suitably monitored and mitigated;
- Appropriate transparency and explainability: organisations developing and deploying AI should be able to communicate when and how it is used, and be able to adequately explain an AI system’s decision-making process;
- Fairness: AI should be used in ways that comply with existing regulation and must not discriminate against individuals or create unfair commercial outcomes;
- Accountability and governance: appropriate governance measures should be in place to ensure effective oversight of AI systems, with clear lines of accountability; and
- Contestability and redress: there must be clear routes to dispute harmful outcomes or decisions generated by AI.
The Government intends to use the principles as a universal guardrail to guide the development and use of AI by companies in the UK. This approach aligns with international thinking, which can be traced back to the OECD AI Principles (2019), the Council of Europe’s 2021 paper on a legal framework for artificial intelligence, and the recent Blueprint for an AI Bill of Rights proposed by the White House Office of Science and Technology Policy.
Regulator-Led Approach
The UK does not intend to codify these core principles into law, at least for the time being. Rather, the UK intends to lean on the supervisory and enforcement powers of existing regulatory bodies, charging them with ensuring that the core principles are followed by the organisations for which they have regulatory responsibility.
Regulatory bodies, rather than lawmakers or any ‘super-regulator’, will therefore be left to determine how best to promote compliance in practice. This means, for example, that the FCA will be left to regulate AI across financial services; the MHRA to consider what is appropriate in the field of medicines and medical devices; and the SRA to do so for legal service professionals. This approach is already beginning to play out in some areas. For example, in October 2022, the Bank of England and FCA jointly released a Discussion Paper on Artificial Intelligence and Machine Learning (DP5/22), intended to progress the debate on the role that regulation and policy should play in the use of AI in financial services.
To enable this to work, the Paper contemplates a new statutory duty requiring regulators to have due regard to the principles in the performance of their tasks. Similar duties already exist in other areas, such as the so-called ‘growth duty’ that came into effect in 2017, which requires regulators to have regard to the desirability of promoting economic growth. Regulators would be required by law to ensure that their guidance, supervision, and enforcement of existing sectoral laws take account of the core principles for responsible AI. Precisely what that means in practice remains to be seen.
Coordination Layer
The Paper recognises that a de-centralised framework carries risks: for example, regulators may establish conflicting requirements, or risks may fall into the gaps between their respective remits.
To address this, the Paper announces the Government’s intention to create a ‘coordination layer’ that will cut across sectors of the economy and allow for central coordination on key issues of AI regulation. The coordination layer will consist of several support functions, provided from within Government, including:
- assessment of the effectiveness of the de-centralised regulatory framework – including a commitment to remain responsive and adapt the framework if necessary;
- central monitoring of AI risks arising in the UK;
- public education and awareness-raising around AI; and
- testbeds and sandbox initiatives for the development of new AI-based technologies.
The Paper also recognises the likely importance of technical standards as a way of providing consistent, cross-sectoral assurance that AI has been developed responsibly and safely. To this end, the Government will continue to invest in the AI Standards Hub, formed in 2022, whose role is to lead the UK’s contribution to international standards for the development of AI systems.
This standards-based approach may prove particularly useful for those deploying AI in multiple jurisdictions and has already been recognised within the EU AIA, which anticipates compliance being established by reference to common technical standards published by recognised standards bodies. It seems likely that, over time, the use of commonly recognised technical standards will become the de facto default route to securing practical compliance with the emerging regulatory regimes. This would certainly help address the concerns many will have about the challenge of meeting competing regulatory regimes across national boundaries.
International comparisons
EU Artificial Intelligence Act
The proposed UK framework will inevitably attract comparisons with the different approach taken in the EU AIA. Where the UK intends to take a sector-by-sector approach to regulating AI, the EU has opted for a horizontal, cross-sector, regulation-led approach. Further, the EU clearly intends exactly the same single set of rules to apply EU-wide: the EU AIA is framed as a directly effective Regulation, applying as law across the bloc, rather than as a Directive, which would require Member States to develop domestic legislation to implement the adopted framework.
The EU and UK approaches each have potential benefits. The EU’s single horizontal regulation across the bloc ensures that organisations engaging in regulated AI activities will, for the most part, only be required to understand and comply with one framework, applying a common standard based on the use to which AI is put, regardless of sector.
The UK’s approach provides a less certain legislative framework, as companies may find that they are regulated differently in different sectors. While this should be mitigated through the ‘coordination layer’, it will likely lead to questions about exactly which rules apply when, and to the risk of conflicting areas of regulatory guidance. This additional complexity will no doubt be a potential detractor for the UK; but, if executed effectively, a regime that is agile to evolving needs and technologies could trump the EU’s more codified approach. In theory, it should be much easier for the UK to implement changes via regulatory standards, guidance, or findings than it would be for the EU to push amendments through a relatively static legislative process.
US Approach
There are clear parallels between the UK and the likely direction of travel in the US, where a sector-by-sector approach to the regulation of AI is also the preferred choice. In October 2022, the White House Office of Science and Technology Policy published a Blueprint for an AI Bill of Rights (“Blueprint”). Much like the Paper, the Blueprint sets out an initial framework for how US authorities, technology companies, and the public can work to ensure AI is implemented in a safe and accountable manner. The US anticipates setting out principles to help guide organisations in managing and (self-)regulating the use of AI, but without the level of directional control that the UK anticipates passing down to sector-specific regulators. Essentially, the US position is to avoid direct intervention, leaving regulation at state or federal level for others to decide. It remains to be seen how the concepts framed in the Blueprint might eventually translate into powers for US regulators.
A Push for Global Interoperability
While the Government seeks to capitalise on the UK’s strategic position as third in the world for number of domestic AI companies, it also recognises the importance of collaboration with international partners. Focus will therefore be directed to supporting global opportunities while protecting the public against cross-border risks. The Government intends to promote interoperability between the UK approach and the differing standards and approaches of other jurisdictions, so that the UK’s regulatory framework encourages the development of a compatible system of global AI governance, allowing organisations to pursue ventures across jurisdictions rather than being isolated by jurisdiction-specific regulations. The approach is expected to leverage proven and agreed-upon assurance techniques and international standards, which will play a key role in the wider regulatory ecosystem. This is expected to support cross-border trade by setting out internationally accepted ‘best practices’ that can be recognised by external trading partners and regulators.
Next steps
The Government acknowledges that AI continues to develop at pace, and new risks and opportunities continue to emerge. To continue to strengthen the UK’s position as a leader in AI, the Government is already working in collaboration with regulators to implement the Paper’s principles and framework. It anticipates that it will continue to scale up these activities at speed in the coming months.
In addition to allowing for responses to its consultation (until 21 June 2023), the Government has staggered its next steps into three phases: i) within the first 6 months from publication of the Paper; ii) 6 to 12 months from publication; and iii) beyond 12 months from publication.
Find out more
You can find out more on AI and the law and stay up to date on the UK’s push towards regulating AI at Technology’s Legal Edge, DLA Piper’s tech-sector blog.
For more information on AI and the emerging legal and regulatory standards, visit DLA Piper’s focus page on AI.