The Big Data Challenge

Earlier this month, we highlighted that there will be an increased focus by the Data Protection Commission (DPC) in Ireland on ensuring privacy by design and default during 2020. From the regulator’s perspective, innovation is encouraged, provided it is done in an accountable, ethical and fair way. Already, with the codification of Privacy Impact Assessments and accountability under the GDPR, we are seeing clients realise that key decision-makers in the organisation ought to be fully briefed on how risks are mitigated in relation to Big Data and artificial intelligence (AI)-enabled technology projects.

As we reported back in June, the promise of new technology, such as AI-based automation and enhancement, has turned many businesses into data hoarders. That approach might not work for long, as the EU recently urged European businesses to capitalise on their vast data resources.[1] The use of AI solutions is one way to do so. The holy grail for many financial services (FS) firms in particular is the single customer view: mine the data for knowledge and automate delivery in order to enhance the relationship with the customer. This can be achieved, but not without large databases of quality data and very sophisticated technology solutions. Investment in both continues to grow.

AI Leaders in Financial Services [2]

Legislating for AI

Much of the focus in terms of regulatory and compliance risk in relation to AI-enabled innovation is on privacy. But that lens is too narrow. Indeed, current EU legislation on data protection, competition and consumer protection does not define ‘big data’ clearly, or at all, which arguably creates a regulatory blind spot that will need to be addressed.

The EU is advocating for the need to prepare for the socio-economic changes brought by AI and to ensure an appropriate ethical and legal framework for it. Last year, the EU Commission published the Ethics Guidelines for Trustworthy Artificial Intelligence and the Report on Liability for Artificial Intelligence and Other Emerging Technologies. Expect more during 2020 from the Expert Group on Liability and New Technologies, particularly in relation to whether existing legal frameworks, such as the product liability regime, are fit for purpose when it comes to AI-enabled products.

The state of country-level regulatory activity in relation to AI currently remains varied across Europe. However, regulators and legislators in most countries recognise the importance of AI and have started formulating their policies.


In the US, the White House issued guidelines for federal agencies on how to approach AI regulation, which emphasise the need for a proportionate approach in which less regulation may be preferred:

“Fostering innovation and growth [of AI] through forbearing from new regulations may be appropriate”.

The recently proposed Algorithmic Accountability Act 2019, while still in draft, is an example of a concrete legislative attempt in the US to address concerns in relation to the ethical and accountable use of AI. It would require bias and security impact assessments (thereby going beyond privacy concerns) to be conducted by a wide range of players prior to implementing new AI-based products and services.

Much of the regulatory activity internationally is being informed by the OECD’s values for trustworthy AI, which include: AI driving inclusive and sustainable growth, diversity and fairness, transparency, security, and accountability.[3]


Regulatory Guidance in the Financial Services Sector

In relation to financial services, we think the distinction between privacy and more ‘general’ regulation will become increasingly blurred due to greater co-operation and convergence between the two regulatory competencies in relation to AI-enabled innovation.

As the old adage goes, you will get out of [AI] what you put into it. Bad, unstructured data makes for unsatisfactory results and, sometimes, additional risk.

The Central Bank of Ireland (CBI) has publicly stated that there is a significant challenge with data sourcing and management amongst firms it regulates:

“Firms need to have a single source of their key data if they are to rely on it for critical intelligence and decision-making. Those that manage this transition best are likely to be the firms that survive and thrive.” [4]

In other words, banks and other financial services firms need to invest substantially more in their technology and data capabilities. Effective management of technology risk requires IT systems that are integrated and up to date. Data cannot be captured, interrogated and exploited in business and operational silos. The governance, legal and risk management structures and skill sets in financial services firms should not be playing catch-up with technological innovation.

The CBI is not alone in expressing such views. Just like the potential of AI technology, the regulatory challenges and perspectives around AI are also borderless.

The Monetary Authority of Singapore (MAS) has confirmed that, when used responsibly and effectively, AI has significant potential to improve business processes, mitigate risks and facilitate stronger decision-making. It worked closely with the Personal Data Protection Commission (PDPC) to develop a set of principles for firms to use in their internal governance structures to govern the use of technologies that assist or replace human decision-making.[5] The principles are built around four key concepts. Below we give a brief outline of each, an example of how a failure to adhere to the principle might manifest itself when using AI in financial services, and a possible risk mitigant:


Each entry below sets out the principle, the rationale for it, an AI use case and a risk mitigant.

Principle: Ethics

Rationale: AI-enabled decisions are aligned to the firm’s own code of conduct and are held to at least the same ethical standard as human decisions.

AI Use Case: Advancements in conversational AI make it difficult to distinguish interactions with digital assistant tools from real human-to-human interaction.

Risk Mitigant: Include a notification at the start of the interaction to indicate that the customer is engaging with an automated assistant speaking on behalf of the firm, and give the option to divert to human interaction.
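A disclosure-first interaction of the kind described above can be sketched in a few lines. This is an illustrative sketch only, not a real chatbot API; the class, the disclosure wording and the handoff keywords are all assumptions:

```python
# Hypothetical sketch: disclose automated assistance at the start of a
# customer interaction and offer a route to a human agent. All names
# (ChatSession, HANDOFF_KEYWORDS, etc.) are illustrative, not a real API.

DISCLOSURE = (
    "You are chatting with an automated assistant acting on behalf of the firm. "
    "Type 'agent' at any time to speak with a person."
)

HANDOFF_KEYWORDS = {"agent", "human", "representative"}

class ChatSession:
    def __init__(self):
        self.transcript = []           # audit trail of the interaction
        self.handed_off = False
        self._say(DISCLOSURE)          # disclosure is always the first message

    def _say(self, text):
        self.transcript.append(("assistant", text))

    def receive(self, message):
        """Route a customer message, diverting to a human on request."""
        self.transcript.append(("customer", message))
        if message.strip().lower() in HANDOFF_KEYWORDS:
            self.handed_off = True
            self._say("Connecting you to a human colleague now.")
        else:
            self._say("(automated reply)")

session = ChatSession()
session.receive("What is my balance?")
session.receive("agent")
```

Placing the disclosure in the constructor, and logging every turn, means the notification cannot be skipped and the full interaction remains auditable.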



Principle: Fairness

Rationale: Decisions are regularly reviewed and validated for accuracy and relevance, and to minimize unintentional bias.

AI Use Case: A credit score is generated to predict suitability for new credit (and related interest rates) based on prior repayment history and other unstructured external data sources. The output is affected by bias in the algorithm and data, disproportionately ranking women lower than men and favouring certain customer classes over others.

Risk Mitigant: Implement controls during the data preparation and feature engineering phases to detect and prevent bias and discrimination. Apply data skewness analysis to the training dataset to verify that the different classes of the target population are equally represented (e.g. oversample under-represented classes). Use crafted data sets to test models against discriminatory behaviour, and regularly monitor the AI system in production to ensure that it does not display discriminatory behaviour.
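The representation check and oversampling step described above can be sketched as follows. This is an illustrative sketch, not the MAS guidance itself; the column name, the sample data and the equal-representation target are assumptions:

```python
# Illustrative sketch: check whether classes of the target population are
# equally represented in a training set, and oversample (duplicate) rows of
# under-represented classes. Column names and data here are assumptions.

import random
from collections import Counter

def class_balance(rows, key):
    """Return the share of each class value among the rows."""
    counts = Counter(r[key] for r in rows)
    total = sum(counts.values())
    return {cls: n / total for cls, n in counts.items()}

def oversample(rows, key, seed=0):
    """Duplicate minority-class rows until every class matches the largest."""
    rng = random.Random(seed)
    counts = Counter(r[key] for r in rows)
    target = max(counts.values())
    out = list(rows)
    for cls, n in counts.items():
        pool = [r for r in rows if r[key] == cls]
        out.extend(rng.choice(pool) for _ in range(target - n))
    return out

# A deliberately skewed training set: 200 of one class, 800 of another.
train = [{"gender": "F"}] * 200 + [{"gender": "M"}] * 800
balanced = oversample(train, "gender")
```

In practice the same balance check would also run against the model's production inputs, so drift toward a skewed population is caught after deployment, not just during training.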



Principle: Accountability

Rationale: Firms are accountable for both internally developed and externally sourced AIDA (AI and data analytics) models, and customers are provided with channels to enquire about, inform, submit appeals for and request reviews of AIDA-driven decisions that affect them.

AI Use Case: A customer claim is brought in relation to the mis-selling and incorrect pricing of a financial product, and part of the claim alleges that the use of AI contributed to the loss.

Risk Mitigant: Track and document the criteria followed when using the AI model in a way that is easily understood, and keep a register of the evolution of the models. Ensure there is a means to repeat the process by which a decision was made, so that the correct version of the model and data can be used; this may require recovering the AI model and data from repositories holding previous versions of both. Make the use of AI enablers traceable through audit logs, and establish an audit programme covering both the technical design and operational use of AI and the governance structures under which that use was approved.
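A minimal model register of the kind described above might look like the following. All identifiers and fields (the version labels, snapshot IDs and so on) are illustrative assumptions, not part of any regulator's specification:

```python
# Hypothetical sketch of a model register supporting reproducibility: each
# decision records which model version and data snapshot produced it, so the
# process by which it was made can be repeated later. Names are illustrative.

import datetime
import hashlib
import json

class ModelRegister:
    def __init__(self):
        self.models = {}       # version label -> metadata
        self.decisions = []    # append-only audit log of decisions

    def register_model(self, version, params):
        """Record a model version with a fingerprint of its parameters."""
        fingerprint = hashlib.sha256(
            json.dumps(params, sort_keys=True).encode()
        ).hexdigest()
        self.models[version] = {"params": params, "fingerprint": fingerprint}

    def log_decision(self, model_version, data_snapshot_id, inputs, outcome):
        """Append an audit-log entry tying a decision to model and data."""
        assert model_version in self.models, "unregistered model version"
        self.decisions.append({
            "when": datetime.datetime.utcnow().isoformat(),
            "model_version": model_version,
            "data_snapshot": data_snapshot_id,
            "inputs": inputs,
            "outcome": outcome,
        })

    def replay_context(self, index):
        """Return what is needed to repeat a past decision."""
        d = self.decisions[index]
        return self.models[d["model_version"]], d["data_snapshot"], d["inputs"]

reg = ModelRegister()
reg.register_model("credit-v1", {"threshold": 0.6})
reg.log_decision("credit-v1", "snapshot-2020-01", {"income": 42000}, "approved")
```

The design choice here is that the audit log is append-only and every entry names an exact model version and data snapshot, so a later claim can be answered by replaying the decision against the versions actually used.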



Principle: Transparency

Rationale: To increase public confidence, the use of AI is proactively disclosed to customers as part of general communication, and customers are provided, upon request, with clear explanations of what data is used and the consequences of AI-driven decisions.

AI Use Case: A customer complains to a regulator, querying how much of the data being used by the bank within its AI solution is taken from external sources (e.g. social media and online interactions) and how it is used.

Risk Mitigant: Make the data, features, algorithms and training methods available for external inspection, e.g. by giving the customer a definitive list of data sources and offering clearly illustrated contrastive explanations, i.e. two contrasting explanations, for example in the form of two opposite feature-ranking diagrams, showing the features that contributed most to the result (with their importance) and the features that contributed least to it.
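The two opposing feature rankings described above can be sketched as follows. This assumes per-feature contribution values are already available (e.g. from a SHAP-style attribution tool); the feature names and values are invented for illustration:

```python
# Illustrative sketch of a contrastive explanation: from per-feature
# contributions to a score, produce two rankings -- the features that pushed
# the decision most toward the outcome, and those that pushed most against it.
# The contribution values are assumed inputs, not computed here.

def contrastive_explanation(contributions, k=3):
    """Return the top-k supporting and top-k opposing features."""
    ranked = sorted(contributions.items(), key=lambda kv: kv[1], reverse=True)
    supporting = [(f, v) for f, v in ranked if v > 0][:k]
    opposing = [(f, v) for f, v in reversed(ranked) if v < 0][:k]
    return {"most_supporting": supporting, "most_opposing": opposing}

# Invented contributions for a credit decision, for illustration only.
contribs = {
    "repayment_history": 0.42,
    "income": 0.18,
    "social_media_signal": -0.05,
    "recent_defaults": -0.31,
}
explanation = contrastive_explanation(contribs)
```

Presented side by side (for instance as two bar charts), the two lists give the customer both halves of the contrastive story: what drove the result and what weighed against it.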

The MAS confirmed that existing risk management models need some refinement to deal with the challenges of AI-enabled innovation.

In October 2019, the Basel Committee on Banking Supervision’s (BCBS) Supervision and Implementation Group (SIG) held a workshop on the use of artificial intelligence (AI) and machine learning (ML) in the banking sector, which publicly confirmed the view that AI models may amplify traditional model risks for banks.[6] A key area of risk highlighted by participants was the quantity and quality of vast data sets, data access and engagement with third parties that use or store data. This also raises the challenge of effective third-party vendor management in support of an AI strategy.

The European Banking Authority has gone a step further. It considered credit scoring a clear example of where AI-enabled manipulation of data can deliver real benefits for banks. It highlighted the need to manage legal, conduct and reputation risk, but also noted that risk could be higher where external providers are involved in such AI-enabled solutions than it is for services developed in-house. It also flagged that ICT change and security risk may increase, as ICT systems would need to become more open to different data sources or technology providers and allow more agility in the use of data. Therefore, the procurement, diligence, negotiation and contract management strategy within organisations should support any AI innovation strategy. In Part 2, we will take a closer look at third-party vendor-supported AI.

Financial services regulators are promoting principles and giving guidance through public statements to set the expectation that firms need to ensure their governance model is fit for purpose when applied to AI-enabled innovation. In time, these regulators will become more proactive in asking firms to demonstrate that they fully understand their data assets, to explain how that data is exploited and to show how the associated risk is mitigated when using AI-enabled technologies. Financial services firms should develop a coherent AI strategy now, in a way that anticipates how they will answer that question when it inevitably comes.


[1] Financial Times, Europe urged to use industrial data trove to steal march on rivals, accessed 15 Jan 2020

[2] Deloitte Development LLC, Deloitte Insights, AI Leaders in financial services, 2019, accessed 10 Jan 2020

[3] OECD Principles on AI, accessed 15 Jan 2020

[4] The need for resilience in the face of disruption: Regulatory expectations in the digital world – speech by Deputy Governor Ed Sibley, 03 October 2018, accessed 14 Jan 2020

[5] Monetary Authority of Singapore, “Principles to Promote Fairness, Ethics, Accountability and Transparency (FEAT) in the Use of Artificial Intelligence and Data Analytics in Singapore’s Financial Sector” (“FEAT Principles”), 12 November 2018, accessed 14 Jan 2020

[6], accessed 14 Jan 2020