In July 2019, the FCA announced that it was partnering with the Alan Turing Institute to explore transparency and “explainability” of AI in the financial services sector. On 11 June 2021, the Alan Turing Institute published its report on “AI in Financial Services”. This FCA-commissioned report highlights the opportunities, challenges and issues involved with AI, and the role of transparency in pursuing AI innovation in the financial services sector. The report is exploratory in nature, and seeks to provide both a framework and a language that will equip stakeholders (including regulators) with the understanding necessary to navigate the evolving landscape of AI in pursuit of responsible and socially beneficial innovation. The report states from the outset that AI is already having a transformative impact on the delivery of financial services, and that further AI developments in the sector are still to come. Although the report is not binding and contains no hint of future regulation, the FCA will likely take into account the concepts and risks it highlights when considering a future regulatory framework for AI in the financial services sector.

What is AI?

The report begins by defining artificial intelligence as the “science of making computers do things that require intelligence when done by humans” and provides some markers to distinguish and compartmentalise the various types of AI (symbolic, statistical, general and narrow). Most relevantly to financial services, the report goes on to focus on three elements of technological change that drive AI innovation in financial services: machine learning, non-traditional data and automation. The report explores these three types of AI innovation in detail and provides examples of how they have been, and can be, used in the financial services sector.

AI Challenges and Guiding Principles

The report discusses the major considerations and challenges when encountering AI innovation.

First, AI developers should understand and manage data quality, asking questions about the accuracy, recency, conceptual validity, completeness and representativeness both of the data that AI systems draw on during their operation and of the data used in their development.

A second consideration is the novelty of the model being used. Particularly in relation to machine learning, models can have a number of characteristics, such as opacity, non-intuitiveness or adaptivity, that make them more or less appropriate for different uses.

Thirdly, the use of AI can change the structure of technology supply chains. For example, the use and adoption of AI systems can lead to greater complexity in supply chains and increased outsourcing to third parties.

The last consideration the report highlights concerns the scale of the impact (both positive and negative) associated with AI systems. It highlights a few examples for the reader to consider: the replacement of humans with AI in the execution of business tasks; the failure of humans to intervene when necessary, leading to “cascading impacts”; and lastly, where a third-party AI tool is used across multiple industries and sectors, the potential positive and negative impacts being amplified by the success or failure of that third party’s AI.

When navigating these considerations, the report highlights some specific concerns that may arise about an AI system: the system’s performance (trust in the AI’s ability to perform a given task); its compliance with legal and regulatory requirements (the FCA Handbook, the PRA Rulebook, and competition, equality and data protection laws); the ability to use the AI competently with appropriate human oversight; the AI’s ability to provide explanations; the level of the AI’s responsiveness; and the AI’s social and economic impact. The report reminds the reader of the AI ethics principles that serve to guide the adoption of AI: fairness, sustainability, safety, and transparency, the last of which the report dedicates a full chapter to in the context of financial services.

AI Benefits and Harms in Financial Services

One of the more interesting aspects of the report is its exploration of AI in financial services and its associated benefits and risks. It highlights five areas where AI has, or can have, both positive and negative implications for the financial services sector.

1) AI can have both positive and negative implications for consumer protection, for example, where an AI system provides the customer with the most appropriate financial product or service, screens customers to prevent fraud, seeks to eliminate differential treatment of customers, provides investment strategies or furthers consumer empowerment.

2) AI can be used in the prevention and enforcement of financial crime, including fraud, money laundering and, in the context of securities, insider trading and market manipulation.

3) AI can have both a direct and indirect effect on competition in the market.

4) The use of AI could have implications for the stability of businesses and markets by ensuring micro- and macro-prudential risks are well understood and accounted for.

5) AI can have a significant impact on strengthening cybersecurity and protecting businesses’ digital infrastructure.
The report focuses on one of the general principles of AI that is particularly pertinent for AI in financial services: transparency. It states that the main harms and potential risks in the financial services sector make it necessary both to ensure and to demonstrate that AI systems are trustworthy and used responsibly. Due thought must be given to the different types of information, the different types of stakeholders and their corresponding interests. For any type of transparency, AI developers need to answer how information can be obtained, managed and communicated in an effective and meaningful way to the various stakeholders. In covering these topics, the report provides a framework that allows developers, users, regulators and other stakeholders in AI to consider AI’s ethical implications and define expectations about AI transparency in the financial services sector.

What’s next?

The report acknowledges that it does not aim to assess whether existing regulation or industry practices are adequate to address the challenges and opportunities created by AI in the financial services sector. Having identified those challenges and concerns, it concludes that further work is needed to answer questions on the future regulatory and risk landscape of AI. Although the report expands at length on the transparency principle, it also concludes that further work is needed to apply the conceptual framework of transparency it provides to specific use cases of AI in financial services and to determine what form AI transparency should take.

In light of recent developments in AI regulation at a European level and the release of this report, it will be interesting to see how the FCA (and UK regulators more generally) use this report, the risks it highlights and the transparency it calls for, to develop the UK’s own AI regulatory framework.

If you’d like to discuss any of the issues discussed in this article, get in touch with Duncan Pithouse, Arian Nooshabadi or your usual DLA Piper contact.