What is FS2/23?
In October 2022, the Bank of England, the Prudential Regulation Authority (PRA), and the Financial Conduct Authority (FCA) published a discussion paper, DP5/22, on the use of AI and machine learning (ML) in UK financial services and the regulators’ supervisory activity. A subsequent feedback statement, FS2/23, was released on 26 October 2023, summarising responses from 54 stakeholders across different sectors. The feedback statement does not set out specific regulatory plans, but it does indicate what we might expect based on the common themes among respondents.
DLA Piper reported on “The human factor in AI” in October 2022, considering how DP5/22 sat within broader ongoing AI-related work, including a report by the AI Public Private Forum in February 2022. Chapters 2, 3, and 4 of the feedback statement capture the responses to the corresponding chapters of DP5/22. The key points to take away from the feedback statement are:
- A formal AI definition is not useful; there should be a focus on principles and risks;
- Regulatory guidance should be updated due to rapid AI changes;
- Ongoing industry engagement is important;
- AI regulation should be coordinated and aligned across regulators;
- Respondents seek alignment in data regulation, especially for fairness and bias;
- There needs to be prioritisation of consumer outcomes and ethical aspects;
- Regulators should address concerns about third-party AI models and data;
- Collaboration across business units for AI risk mitigation should be encouraged;
- There is a desire for the strengthening and clarification of guidance for AI model risk; and
- Current governance and regulatory structures are sufficient for AI risk management.
Should there be a financial services sector-specific regulatory definition of AI?
DP5/22 asked whether a sector-specific regulatory definition of AI would be conducive to safer adoption. The majority of respondents said such a definition would not be beneficial. Concerns were raised over how future-proof a specific definition would be given the rate at which AI technologies are developing – any definition may prove too broad or too narrow.
Whilst some respondents asserted clear definitions avoid misinterpretations, others flagged that definitions would vary across jurisdictions, only adding further complexity. A more general regulatory definition of AI may be welcomed, and overall, respondents felt a definition specific to the financial services sector may prove ineffectual. Moving forward, financial regulators may avoid such definitions when determining their approach to AI within the sector.
Numerous respondents favoured a technology-neutral, outcomes-based, and principles-based approach to regulation. They noted that many AI-related risks can be addressed within existing legislative and regulatory frameworks.
Reasons for suggesting a principles-based approach centred on its adaptability to the pace at which AI is developing. This would overcome the rigidity of a sector-specific definition and facilitate the customisation of risk identification, assessment, and management for specific AI use cases. Moreover, respondents stressed the importance of an approach that accounts for outcomes and proportionality, looking to consumers and markets to inform regulatory strategy.
Whilst risks related to specific AI applications may require a targeted approach, the consensus is that this approach should remain consistent with the current regulatory framework, i.e., one that focuses on positive outcomes for both consumers and markets.
Which key risk areas are expected to be prioritised?
DP5/22 asked respondents to highlight areas of benefit but also areas of risk. Key risk areas identified across the responses included:
- Consumer protection – whilst AI has potential benefits for consumer experience, a significant risk is posed by the potential for bias and discrimination.
- Competition – the impact of AI on competition is uncertain, with varying views on entry barriers and competition risks.
- Market integrity and financial stability – both are at risk due to AI’s speed and scale, potentially leading to new forms of market manipulation.
- Governance – there is significant concern over a lack of sufficient oversight, with some firms lacking the skills for effective AI monitoring and risk mitigation.
- Evidencing – third-party AI providers should offer better evidence of responsible development and governance to mitigate systemic risks associated with outsourcing.
With growing dependence on third-party providers, firms might not fully comprehend the models or data used by external parties, creating potential control issues. Whilst firms need to develop in-house AI expertise to ensure effective governance, there may be a shortage of experts with a deep understanding of emerging technologies and associated risks.
Where might there be challenges novel to financial services?
There was no consensus among respondents on novel challenges specific to AI use, with some suggesting there are no new challenges at all. Open-source models and third-party AI services could make due diligence and regulatory conformance difficult. There are fears over hostile actors gaining access to these tools and causing systemic risks to financial markets. Generative AI in particular was highlighted as a cause for concern in relation to disguising money laundering and fraud.
What risks might AI pose to consumers with protected characteristics, and how can these be mitigated?
A majority of respondents emphasised the importance of addressing the risk that AI may pose of bias and discrimination against consumers with protected characteristics or vulnerabilities. They stressed the need for representative, diverse, and bias-free data for AI systems. Some respondents meanwhile noted that AI can help mitigate consumer harms by identifying unfair or discriminatory patterns, creating products for vulnerable consumers, and enhancing financial inclusion.
In terms of support from regulators, respondents recommended that guidance and case studies be issued clarifying regulatory expectations and establishing best practices.
What metrics are most relevant to assessing benefits and risks of AI in financial services?
As might be expected, respondents stressed that the relevant metrics depend on the specific use case of AI. Metrics should align with the application (e.g. payment processing, anti-money laundering, etc.).
Around half of respondents emphasised that metrics focusing on consumer outcomes are crucial for assessing AI benefits and risks. These include customer-centric product design, customer engagement, customer satisfaction, and complaint data.
Important areas for metrics include data quality, representativeness, drift, model accuracy, robustness, reproducibility, and explainability. Measuring a system’s autonomy, and the ease with which it can be switched off without business disruption, were also noted as significant.
How should regulatory frameworks for financial services pertaining to AI be clarified and enhanced to ensure the safe and responsible adoption of AI?
One respondent emphasised that existing regulation covering AI matters in the finance space is sufficiently extensive, cautioning against unnecessary new requirements. Others highlighted the relevance of various laws relating to discrimination, intellectual property, contract, and electronic communications.
The feedback statement flags substantial concerns over data protection laws, particularly the challenge of applying the “right to erasure” to AI training data under UK GDPR.
One respondent highlighted the importance of a consistent approach across regulatory authorities. Others argued for an industry-wide data quality standard, as existing legal and regulatory requirements lack a common standard for regulated firms. Some suggested international regulatory harmonisation, citing the EU AI Act, and emphasised the importance of cooperation mechanisms for information-sharing across jurisdictions.
What can current industry standards contribute to the safe and responsible adoption of AI in UK financial services?
Regarding international standards, respondents cited several initiatives, such as the International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC) joint committee on AI international standards, and the AI Standards Hub (established jointly by the Alan Turing Institute, the British Standards Institution, the National Physical Laboratory, and HM Government).
Are there any regulatory barriers to the safe and responsible adoption of AI in UK financial services?
Respondents generally argued that existing data regulations fall short in addressing AI-related risks, particularly in areas such as data access, protection, privacy, and the monitoring of bias. Many suggested a need for alignment in supervisory authorities’ data definitions and advocated greater coordination and harmonisation among regulators in different sectors.
What elements or approaches promote or hinder effective competition in the UK financial services sector?
Respondents stressed that addressing regulatory uncertainty by providing clear expectations is crucial for effective competition in the sector and could encourage firms to innovate, particularly smaller firms, which are often disproportionately affected by such uncertainty.
A proportionate regulatory approach was urged to avoid excessive costs that could hinder innovation and widen the gap between regulated and non-regulated financial services firms. There were also calls for collaboration between financial firms, regulators, academics, and technology practitioners, and some suggested that open banking could enhance data access.
In conclusion, the feedback statement FS2/23 provides valuable insights into the use of AI in UK financial services. The key takeaways from respondents’ feedback include a focus on principles and risks over a formal AI definition, updating regulatory guidance to keep pace with AI advancements, and prioritising consumer outcomes and ethical aspects. Respondents largely favour a technology-neutral, principles-based approach to AI regulation for its adaptability and capacity for customisation in the financial services sector.
Key risk areas and challenges in AI adoption were identified, emphasising the importance of representative, bias-free data, the need to address data bias, and the potential for AI to enhance financial inclusion. Metrics for assessing AI benefits and risks should align with specific use cases, with a focus on consumer outcomes and key technical aspects.
Regulatory frameworks should clarify existing regulations, harmonise international standards, and address the challenges posed by data protection laws. Current industry standards contribute to safe AI adoption, while remaining regulatory barriers call for greater alignment and harmonisation among regulators.
To promote effective competition in the UK financial services sector, clear regulatory expectations, a proportionate approach, and collaboration among stakeholders are all crucial. Overall, the feedback statement underscores the need for flexibility, adaptability, and a consumer-centric approach in regulation to accommodate the dynamic nature of AI technology.
FS2/23 does not specify the timeframes in which the financial regulators will finalise their approach to AI. There is, however, growing national and international focus on AI regulation, evident in the recent declaration signed by 28 countries and the EU following the UK’s AI safety summit at Bletchley Park. In this context, businesses within the financial services sector should remain alert to further regulatory updates.
By Sophie Lessar
Find out more
For more information on AI and the emerging legal and regulatory standards, visit DLA Piper’s focus page on AI.
Gain insights and expert perspectives that will help shape your AI Strategy. Pre-register here.
DLA Piper continues to monitor updates and developments of AI and its impacts on industry across the world. For further information or if you have any questions, please contact Sophie Lessar or your usual DLA Piper contact.