Compliments of our colleague Imran Syed, Legal Director, IPT London.
As a new era of artificial intelligence (AI) technologies emerges, companies in almost every industry are exploring how they can use AI to drive improvements to their business. After all, the commercial gains are tantalizing, offering businesses a better understanding of their operations and their customers than ever before, as well as opportunities for significant cost reduction and quality improvement.
Yet harnessing the benefits of these transformative technologies does not come without risk, and the risks differ from those generated by more familiar, human modes of failure. Errors in AI solutions can scale far more quickly than human errors. AI may exhibit bias that discriminates against certain classes of individuals. Indeed, a broad array of ethical considerations may arise from the use of AI, with impacts on the workforce, customers and society at large.
In any business, the board of directors is ultimately responsible for the management of risk. It is the board members who will need to make informed decisions about the deployment of AI technologies, balancing these risks against potential benefits. This is a key theme of the recent report published by the Institute of Business Ethics (IBE), which explores in detail the challenges faced by the board when coming to grips with AI in its business.[1]
In this article, we explore five key areas of focus for boards that aim to harness the opportunities offered by AI in a responsible and controlled way.
1. Education: does the board understand the technology?
A key theme of the IBE paper is the need for the board to understand the technology being deployed within the business. This does not mean all board members must be data scientists, but it does require sufficient comprehension of the technology to be able to make an informed choice about its use. Without this, the board is unable to properly analyze the technology and it risks becoming overly reliant on the views of those who understand (and favor) AI, but who may lack a more holistic appreciation of the potential impacts on the wider business.
The board may go about this upskilling in a variety of ways. It may already include individuals well versed in how the technologies function; the technology literacy of the CTO and/or CIO will be of key importance here. But whether or not the board is AI-literate, its members should also insist upon a full briefing from the business on how the key technologies being deployed actually work. Where appropriate, the board may choose to bring in external consultants or advisors – to assist with general skills training or to provide a specific assessment of a particular AI technology or tool being rolled out.
2. A multi-disciplinary approach
Closely related to the theme of education is the need for the board to promote a multi-disciplinary approach when looking at AI in the business. The risks posed by AI are neither solely a “legal” issue nor solely a technology issue, although experts from both disciplines have their part to play in the advisory ecosystem.
Drawing upon the expertise of a range of advisors will help the board form a more balanced view of AI’s likely risks and benefits for the business. Data scientists will be needed to explain the AI tool and its underlying data sets; an HR professional will provide insight on the impact of AI on the workforce; a compliance professional can advise on available controls to manage risk; and a legal team will advise on such concerns as data protection, contracting and regulatory issues. Finally, of course, the board members will wish to bring their own, potentially diverse, views to bear, grounded in their more holistic view of business risk.
3. Controls and oversight
If the board is to allow AI to be put to work, it will also want confidence that robust controls are in place to manage the risks. For significant deployments, the board should require the creation of a suitable risk management framework that identifies and monitors risks, and that sets out steps to address and mitigate them should they crystallize. For the time being, this framework is likely to include ensuring there is a human in the loop at all times – that is, oversight of the AI’s workings by qualified individuals. As well as supporting an ethical approach, this may well be a legal requirement, particularly in regulated sectors such as financial services, healthcare or manufacturing.
As part of this framework, maintaining rapid escalation paths to the board will be vital, particularly where issues can scale quickly or carry a high risk of reputational damage or regulatory breach. Mitigating issues that do arise could well involve steps that disrupt business operations, which in itself will require further board oversight. For example, serious issues may require activation of a kill switch for the AI tool in order to prevent liability from proliferating, but this may have a direct impact on the frontline business (particularly if a tool is customer facing).
4. Embedding an ethical culture
If AI is to become an accepted part of business and society at large, it must be trustworthy. One part of this is ensuring the technology is sound, but simply ensuring that AI does not go “wrong” will not be enough. To achieve longevity, businesses will need to show their customers, their employees and society that the technology will work to the greater good.
Embedding an ethical culture around AI will have to start at the top. As with any dramatic new technology, AI brings with it the risk that a business becomes so eager to reap commercial benefits that it fails to consider longer-term impacts that may harm its workers or alienate its customers. To maintain the trust of consumers and of employees, it is up to the board to promote a culture of restraint and ethical thinking around the use of AI. The baseline is to consider the effect AI is likely to have on people, both inside and outside the business.
A code of ethics can help with this. In developing one, the board may look to external frameworks,[2] although no clear front runner has yet emerged, or may choose to draft something more bespoke. Once a code has been drafted that satisfies the board, the broader cultural work of supporting and promoting the ethical use of AI inside the company begins. This process may include forming a multi-disciplinary ethics committee and ensuring that reporting lines are in place so that employee concerns about the deployment of AI are heard and addressed.
5. Security
Finally, and not at all unique to AI technologies, cybersecurity should be of paramount importance to businesses in almost every sector, and should already be a major focus of the board. Businesses today face the potent combination of increasingly sophisticated cyberthreats and greater regulatory pressure. AI solutions themselves present cybersecurity risks: the very thing that makes AI valuable – its potential to join the dots in vast data sets like never before, with more and more sensitive information held in one place – presents an ever-growing concentration of risk. Hackers find such data pools enormously appealing. Similarly, AI algorithms, or the data used to train AI solutions, could be vulnerable to hackers looking to derail the accuracy of outputs, with potentially material impacts for the business in question. Security around all this data is paramount, both to ensure regulatory compliance and to guard the reputation and efficacy of the business.
Cybersecurity must therefore remain a high priority for the business. The board will need to maintain investment in security infrastructure and ensure that the business remains ready to deal with a data breach should one occur. Beyond this, the board should also ensure that the business regularly reflects on whether its risk profile is changing from within: as AI finds new ways to combine and exploit data, the risks the business holds will shift, and boards need to respond proactively. A dynamic, informed approach may seem daunting, but it is the safest and most protective path through this rapidly changing new world.
[1] Corporate Ethics in a Digital Age, by Peter Montagnon (Institute of Business Ethics).
[2] See, for example, the Ethics Guidelines for Trustworthy AI published by the European Commission’s High-Level Expert Group on AI.