Red tape binds the nation together
The ‘80s sitcom “Yes, Minister” has been a favourite of mine for many years. I was a bit young to watch it when originally shown, but I came across it on VHS when doing a Government and Politics A-Level in the mid-90s, and it taught me more about the functioning of the civil service than the textbooks ever did. It has been speculated that a combination of “Yes, Minister”, “House of Cards” (the original with Ian Richardson) and “Have I Got News For You” played a bigger part in my essays than the less fanciful sources we were supposed to use… well, you may think that, but I couldn’t possibly comment.
A constant theme in “Yes, Minister” is the battle between Jim Hacker, the Minister for Administrative Affairs, ever trying to reduce ‘red tape’, and Sir Humphrey Appleby, the oleaginous and verbose Permanent Secretary, ever trying to increase the Civil Service’s influence and headcount. Hacker’s assumption that regulation is necessarily a bad thing to be done away with may be a key component of the show’s comedy, but similar comments are often made by real-life politicians in many countries.
One common characterisation is that regulation imposes a burden on business, placing domestic operators at a disadvantage relative to businesses in more laissez-faire markets. Deregulate, and our newly freed entrepreneurs will forge ahead…
The standard approach
On the other hand, standardisation tends to be viewed as a good thing. Without common standards, markets descend into a wasteful chaos of competing formats, with secondary negative effects such as customer lock-in and a greater probability of anticompetitive behaviour.
It is easy to cite examples of beneficial standardisation, ranging from shipping containers to the Internet’s TCP/IP stack. These standards provide businesses with that most valuable thing: certainty. The certainty for manufacturers and product developers that, if they follow the standard, the resulting product or service will be compatible with the rest of their customers’ operations. For customers there is the obvious and matching benefit that they can choose between standardised products knowing that all of them should align with their needs.
It is equally easy to point to the various ‘format wars’ in which the lack of a single agreed standard led to wasted effort within corporates and, more disappointingly, to consumers who found they had invested in the losing technology. After all, I wasn’t watching “Yes, Minister” on Betamax…
One shift in perspective for those instinctively suspicious of regulation is to think of regulation as standardisation, but with set consequences for those who do not follow the standard. Even in areas that might initially seem more esoteric, and less directly product-related, regulation amounts to standardisation. Just one example that is close to home for any data or technology lawyer is the plethora of products and services that have grown up around data protection. Without the core regulation (the EU’s global-standard-setting GDPR) and the additional laws and guidance, exactly what standards should be applied would be unclear: different businesses would claim to apply wildly differing protocols. In an increasingly connected world this would lead to consumer fear and confusion, and would slow the adoption of digital services. A solid argument can be made that the certainty that regulatory standards provide – for operators, consumers and the wider ecosystem – more often than not aids rather than impedes all market participants.
Intelligence standards
We’re about to see a wave of new regulation around AI. The EU is leading the pack with its AI Act, with well-progressed drafts of the fourth compromise text in circulation at the time of writing.
While naysayers looking at the legislation mutter darkly about stifling innovation and handing an advantage to supposedly less-regulated nations, such comments fail to appreciate the benefits of standardisation and certainty. In addition, any suggestion that AI might be less closely regulated by other major global powers seems to wilfully ignore the reality of how heavily technology and the internet are controlled outside the West. In contrast, I am buoyantly optimistic that regulation, and the certainty and standardisation it inevitably brings, will provide a yardstick against which developers can measure their efforts in bringing useful, transparent, explainable, trustworthy and well-governed AI to market for the benefit of all.
If AI regulation is still being redrafted and discussed, what can businesses wanting to take advantage of AI do today? Happily, what works at a supra-national level in this case also works at a hyper-local level for individual corporates. Just as regulatory standards support innovation by providing certainty, so can corporate governance and policy. Adopting a set of clear policies, and the governance structures to oversee and implement them, does at least three things:
- First, it sends a clear signal to the entire organisation that AI is something to be investigated and (where done in accordance with the relevant policies and standards) adopted. In effect, by adopting policies and creating a governance framework, you ‘give permission’ to the more innovative members of your business to get to work on AI-driven projects. Could a process be smarter, more automated, more efficient, more productive with AI? Great – we want you to realise those benefits, and the policy sets out the things you need to take into account as you do so.
- Second, it provides a chance to set out the corporate values and vision around AI – a set of standards that teams working on AI should meet or exceed. This gives teams within the organisation the benefit of certainty: if you follow these policies and standards, your work is very likely to be approved. While laws are still being drafted in many places, the direction of travel is obvious. The key concerns addressed by legislation, guidance, white papers and ‘bills of rights’ in relation to AI fall into the same core areas: AI needs to be deployed in a way that is clear and transparent, free from bias, explainable, trustworthy and subject to oversight. With some legislation already in near-final form, policies can draw heavily on existing definitions and requirements, and can be revised to provide a ‘common denominator’ across the territories in which your organisation operates as new standards, laws and guidance are published.
- Third, documenting AI decision-making now helps to future-proof the organisation. We’ve seen organisations caught up in regulation-driven remediation projects before. The many significant compliance, remediation and repapering exercises that accompanied the GDPR’s adoption are the stand-out example, but on a smaller scale, changes in sector-specific regulation in financial services, insurance, life sciences and other sectors have precipitated similar exercises for industry players in recent years. Adopting a set of policies and guidelines now that is strongly aligned with the clear direction regulation will take will pay significant dividends in avoiding costly remediation efforts later.
Return on investment
For legal teams within large organisations, starting a new AI policy project will often run into questions about budget and resources. However, where the justification for a regulatory compliance project might normally centre on the need to avoid potential fines, for AI there is a much better (and much more positive) reason to start the project: the potential for a significant return on investment.
The whole topic of AI regulation and AI policy has gained significant traction in recent years. We’ve been fortunate enough to work with a few prescient early adopters in sectors as diverse as life sciences, financial services and consumer goods. While it is still early days (even the earliest adopters haven’t had material policy frameworks in place for more than a year or so), there is already strong evidence that organisations which adopt AI compliance policies and governance frameworks see a significant uptick in AI project adoption. The reasons tend to be two-fold: employees see the existence of policies as implicit permission to investigate AI projects, and having clear standards against which to measure a project provides some certainty that approval from senior stakeholders will not be withheld. Anecdotally, the majority of these new AI projects aim to increase profitability through a do-more-with-the-same productivity increase rather than a do-the-same-with-less process efficiency mentality. Either way, given the enormous potential for AI to accelerate productivity, these early adopters are stealing a significant march on competitors who are not yet adopting these technologies at the same rate or with the same degree of organisation-wide consistency and certainty.
AI policy today drives more profit-generating AI projects tomorrow. Could there be a more powerful argument for in-house legal and compliance teams to use when discussing budgets for these projects?
Next steps
You can find more views from the DLA Piper team on the topics of AI systems, regulation and the related legal issues on our blog, Technology’s Legal Edge.
If your organisation is deploying AI solutions, you can undertake a free maturity risk assessment using our AI Scorebox tool.
If you’d like to discuss any of the issues raised in this article, get in touch with Gareth Stokes or your usual DLA Piper contact.