On 31 August 2023, the UK Parliament’s House of Commons Science, Innovation and Technology Committee issued its interim report (“Report”) on the governance of artificial intelligence (“AI”) – a matter that has for some time been the focus of policy development across the world. The Report is the result of an inquiry by the Committee, to which both Coran Darling and Tim Clement-Jones contributed evidence, as we push towards a new era of technology governance, particularly in the wake of rapid recent developments in multimodal and large language models.
The Report examines the many recent AI policy developments, including the UK Government’s White Paper on AI governance, entitled “A pro-innovation approach to AI regulation”. In doing so, the Report captures many of the benefits and challenges policymakers across the world are set to face (whether through comprehensive frameworks such as in the EU or through outcome/principle-based approaches as proposed in the UK) in the pursuit of effective governance mechanisms and policy for AI.
‘A General-Purpose Technology’
The Report correctly concedes that the term AI lacks any clear and standardized definition to date (a point raised daily in our conversations with clients: “how should we define AI for our organization?”). In the rapidly developing proposed EU AI Act, regulators have adopted a closed ‘AI System’ definition:
“a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments.”
Other definitions are much broader, such as that offered by the Alan Turing Institute, which defines AI as the
“design and study of machines that can perform tasks that would previously have required human (or other biological) brainpower to accomplish”.
In the course of discussing what exactly AI is – whether by identifying specific types, such as foundation models, or by considering whether certain uses apply – the Report identifies several benefits that recent technological developments offer, including:
- Everyday application – use of AI to boost productivity or the effectiveness of other technologies and hardware such as smartphones and satnavs.
- Medicine and Healthcare – use of AI in the provision of healthcare and tailored treatment plans and in development of medical and pharmaceutical advancements at the research stage.
- Education – use of AI as an assistive technology for those requiring extra help and to help teachers speed up time-intensive administrative tasks.
- Delivering future benefits – use of AI to overcome many future problems, such as the growing resistance to antibiotic medication, transitions to automation, and development of sustainable energy sources.
The 12 Labors of AI Governance
Much like Pandora’s box, once opened, the rapid proliferation of AI cannot be stopped and the lid cannot be closed. Governments across the world must therefore actively determine how they seek to approach AI. Many jurisdictions have taken different paths towards the end goal of comprehensive and effective governance that encourages innovation and enhances the potential of humanity.
Faced with a mix of hard law in certain jurisdictions, and principles-based law in others, how should multinational clients respond – and, indeed, how should governments and regulators respond to technological issues which (again) transcend borders? In our view, international cooperation will undoubtedly be the most effective method of accomplishing this goal, particularly through widescale international acceptance of AI governance standards (such as those offered by the International Organization for Standardization, the British Standards Institution, or the US National Institute of Standards and Technology), based on the commonly accepted Principles on AI established by the OECD. The OECD is itself actively seeking a path to convergence of these standards.
Standing in the way of this goal are 12 Herculean challenges to AI governance that governments across the world must overcome:
- Bias – AI can introduce or perpetuate biases that society finds unacceptable (how should that be counteracted? And indeed, can a perfect system be designed when we ourselves cannot claim to be without bias, whether conscious or unconscious?).
- Privacy – AI can allow individuals to be identified and personal information about them to be used in ways beyond what the public wants (how can rights of the individual be balanced with the rush to train an AI with the ‘best’ and ‘most up to date’ data – particularly if the richest seam of training data would be personal data?).
- Misrepresentation – AI can allow the generation of material that deliberately misrepresents someone’s behaviour, opinions or character.
- Access to Data – The most powerful AI needs very large datasets, which are held by few organisations (and so concentration risk rears its head, a point that has not been lost on antitrust regulators).
- Access to Compute – The development of powerful AI requires significant compute power, access to which is limited to a few organisations (so how do we avoid AI increasing the gap between the technology haves and have-nots?).
- Black Boxes – Some AI models and tools cannot explain why they produce a particular result, which is a challenge to transparency requirements.
- Open-Source – Requiring code to be openly available may promote transparency and innovation; allowing it to be proprietary may concentrate market power but allow more dependable regulation of harms (and the open-source software business model has proven that this need not mean foregoing commercial returns).
- Intellectual Property and Copyright – Some AI models and tools make use of other people’s content: policy must establish the rights of the originators of this content, and these rights must be enforced.
- Liability – If AI models and tools are used by third parties to do harm, policy must establish whether developers or providers of the technology bear any liability for harms done.
- Employment – AI will disrupt the jobs that people do and that are available to be done. Policy makers must anticipate and manage the disruption (to society, to wellbeing, for state revenues from taxation and so forth).
- International Coordination – AI is a global technology, and the development of governance frameworks to regulate its uses must be an international undertaking.
- Existential Concerns – Some people think that AI is a major threat to human life: if that is a possibility, governance needs to provide protections for national security.
Given the international applicability of these challenges, it is expected that they will form the basis of many discussions at the AI Safety Summit set to be held at Bletchley Park, often described as the birthplace of modern computer science and artificial intelligence.
Where does the world go from here?
The bottom-line conclusion of the Committee is that “We urge the Government to accelerate, not to pause, the establishment of a governance regime for AI, including whatever statutory measures as may be needed.” It is clear that regulatory frameworks need to be put in place to meet the challenge in a way that allows innovation to flourish, while ensuring that toxicity and risks are minimized to the greatest extent possible. The next steps, as the Report notes, are to encourage international cooperation to harmonize approaches and offer a united front on regulation.
A key takeaway from the Report, and from many of its investigatory meetings and calls for evidence, is clearly stated: AI cannot be ‘un-invented’. AI is here to stay.
Find out more
For more information on AI and the emerging legal and regulatory standards, visit DLA Piper’s focus page on AI.
Excited to dive into the future of AI? Stay ahead of the curve by pre-registering for our upcoming AI Report – AI Governance: Balancing Policy, Compliance and Commercial Value.
Gain insights and expert perspectives that will help shape your AI Strategy. Pre-register here.
DLA Piper continues to monitor updates and developments in AI and its impacts on industry across the world. For further information or if you have any questions, please contact Coran Darling, Mark O’Conor, Tim Clement-Jones, or your usual DLA Piper contact.