Earlier this year DLA Piper UK’s AI team reported on a world-leading project to pilot the use of algorithmic impact assessments (“AIAs”), in which a framework for assessing the social impact of applying AI systems to health data is being tested by the UK’s National Health Service. Although the pilot relates to public sector health data, the AIAs under review may be used more widely, with the goal of fostering enhanced trust in the use of AI systems in the private sector, for example by assisting software developers to create healthcare apps.

Underpinning the need for frameworks that reflect the social conscience of the people building AI systems (and, more importantly, address the concerns of those using them) is the concept of AI ethics. Ever since guidelines were first laid out for how robots should interact with humans, the best way to bridge the gap between artificial and social intelligence has been debated and prescribed. But where should businesses looking to develop AI systems start and, from a practical perspective, how can the concept of ‘AI ethics’ be operationalised within AI technologies?

Where to start with AI ethics?

Answers may well lie in the application of “algorithmic accountability”.

Algorithmic accountability aims to examine the ethical constitution (or what we might consider to be the “soul”) of AI. Accountability mechanisms put an AI system through a series of tests and subject it to scrutiny against ethics-based criteria; the algorithm’s scores in those tests measure its performance against ethics standards and indicate whether the system can be considered ethical.
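As a minimal illustration of what such ethics-based testing might look like in code (the criteria names, scores and pass thresholds below are entirely hypothetical, not drawn from any established framework):

```python
# Minimal sketch of an ethics-based scorecard. The criteria and thresholds
# are hypothetical; a real assessment would use the applicable framework.

ETHICS_CRITERIA = {
    # criterion: minimum acceptable score (0.0 - 1.0)
    "transparency": 0.80,
    "fairness": 0.90,
    "privacy": 0.85,
}

def assess(system_scores: dict[str, float]) -> dict:
    """Compare a system's measured scores against each criterion's threshold."""
    per_criterion = {
        criterion: system_scores.get(criterion, 0.0) >= threshold
        for criterion, threshold in ETHICS_CRITERIA.items()
    }
    return {"passed": all(per_criterion.values()), "per_criterion": per_criterion}

print(assess({"transparency": 0.92, "fairness": 0.88, "privacy": 0.90}))
# fails overall because fairness (0.88) falls below its 0.90 threshold
```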

Taking the position that ‘AI ethics’ is distilled into AI by implementing its principles through accountability metrics, this article compares AI ethics with algorithmic accountability to draw out the distinction between the two concepts, before considering how algorithmic accountability can be applied to real-life projects.

Applying ethics through algorithmic accountability

Computer programs are only considered intelligent if they are capable of displaying traits associated with human intelligence. Such traits include social conscience or, more specifically, an understanding of the ethical framework that every human applies to the way they live. But how do we measure ethics? What yardstick demonstrates whether ethics are being applied to a course of action? That yardstick is accountability: ethics without accountability has no tangible application, so algorithmic accountability is needed to ensure the distillation of ethics into AI.

Let’s examine three practical illustrations where AI ethics principles are operationalised through algorithmic accountability mechanisms:

ETHICS PRINCIPLE(S)              ALGORITHMIC ACCOUNTABILITY MECHANISM
Equality and social inclusion    Transparency
Trustworthiness                  Claims-verification
Fairness and absence of bias     Data accountability

  1. Transparency Promotes Equality and Social Inclusion

Transparent reporting mechanisms are critical given the increasing use of AI solutions in industry, government and society. The transparency requirement is, in essence, a series of steps that trace and track the development of an AI solution: the setting-up of a transparent reporting mechanism ensures that ethics are woven into the design, development and build of AI technologies.[1] Transparency and consistent reporting are the primary apparatus through which customers, regulators and society can scrutinise AI solutions, make inquiries of the businesses that develop them and hold those businesses to account if AI violates legal or ethical principles.
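By way of illustration only, such a reporting mechanism could be as simple as an append-only development trail; the file name and stage labels below are assumptions rather than any prescribed standard:

```python
import datetime
import json

# Sketch of a transparent reporting mechanism: an append-only log that
# traces each stage of an AI solution's design, development and build.
LOG_PATH = "ai_development_trail.jsonl"  # hypothetical file name

def record_stage(stage: str, detail: str, owner: str) -> None:
    """Append one traceable, timestamped entry for a development step."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "stage": stage,  # e.g. "design", "data-collection", "build"
        "detail": detail,
        "owner": owner,
    }
    with open(LOG_PATH, "a") as log:
        log.write(json.dumps(entry) + "\n")

record_stage("design", "Fairness criteria agreed with ethics board", "ml-team")
record_stage("data-collection", "Protected attributes excluded from features", "data-team")
```

A log of this kind gives customers and regulators a concrete artefact to inspect when scrutinising how an AI solution was built.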

  2. Claims-Verification Explains Black-Box Models and Imparts Trustworthiness

When faced with an algorithm, the following may be used to establish accountability:

(i) processes to explain black-box models, by which we mean learning models that are decipherable only by machines and/or are proprietary functions. Such models lead to a lack of interpretability in AI because their structure, when examined, reveals no human-understandable insight into their outputs or predictions;

(ii) techniques to understand the outcomes of such models;

(iii) means and ways to inspect them; and

(iv) approaches to design transparent systems.

The ethical principle of trust is upheld through a “degree of justification for emitted choices” that can be shared with those seeking an explanation of the system’s accuracy and reasonableness.[2]
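To make limbs (i) and (ii) above concrete, one widely used, model-agnostic technique (offered here as an illustrative sketch rather than a prescribed method) is to fit a small, human-readable surrogate model to the black box’s predictions:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Illustrative sketch on synthetic data: approximate a black-box model with
# a shallow decision tree and inspect the rules the tree learns.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Train the surrogate on the black box's *predictions*, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how closely the surrogate mimics the black box's behaviour.
fidelity = surrogate.score(X, black_box.predict(X))
print(f"surrogate fidelity: {fidelity:.2f}")
print(export_text(surrogate, feature_names=[f"f{i}" for i in range(5)]))
```

The printed rules give a human-understandable approximation of the black box’s behaviour, together with a fidelity score indicating how far that approximation can be trusted.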

The UK Information Commissioner’s Office (ICO) and The Alan Turing Institute, in their guidance Explaining Decisions Made with AI, highlight two subcategories of explanation for AI decisions:

  • “Process-based explanations”, which cover the design, build and deployment stages of an AI solution and provide details of its governance; and
  • “Outcome-based explanations”, which examine a specific decision and expose its detailed workings.

Articles 13 and 14 of the EU General Data Protection Regulation (GDPR) set out the information that must be provided to data subjects. They are supported by Recital 60 of the GDPR, which provides that data subjects should be informed of the processing of their personal data and its purposes; together these provisions establish the GDPR’s principles of fair and transparent processing. Trust is further fostered where, in line with Article 22 of the GDPR, an individual is not subjected to a decision based solely on automated processing (including profiling) that produces legal effects concerning them or similarly significantly affects them.

The intrinsically voluntary character of ethics principles calls for the implementation of a combination of institutional, software and hardware mechanisms that could include:

  • red teaming exercises (a structured attempt by dedicated professionals, known as a “red team”, to find flaws and vulnerabilities in organisations and their technical systems, through the red team’s adoption of an attacker’s mindset and practices);
  • AI incident sharing (publication by AI developers of case studies recording incidents and risks, or instances of undesired or potentially harmful behaviour involving AI); and
  • privacy preserving machine learning (methods that seek to preserve the privacy of personal data in datasets throughout the AI development and deployment lifecycle).

These mechanisms provide the foundation for evidencing responsible AI and allow the verification of claims on AI development. Independent third-party auditors could also play a vital role in assessing developer claims around fairness, privacy and security, as publicly available audit results can lead to increased confidence in both the audit process and the AI solution itself.
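As a concrete illustration of the last mechanism in the list above, one simple building block of privacy-preserving machine learning is the Laplace mechanism from differential privacy. The sketch below applies it to a counting query over personal data; the epsilon value is an assumed privacy budget, not a recommendation:

```python
import numpy as np

def private_count(values, predicate, epsilon: float = 1.0) -> float:
    """Differentially private count of records matching `predicate`.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon is sufficient.
    """
    true_count = sum(1 for v in values if predicate(v))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

ages = [34, 29, 52, 41, 67, 23, 58]  # hypothetical personal data
print(private_count(ages, lambda a: a >= 40, epsilon=0.5))
```

Because the released figure is noisy, no individual’s presence in the dataset can be confidently inferred from it, while the aggregate remains useful.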

  3. Data Accountability Engineers Bias Detection into AI Development and Promotes Fair and Unbiased Decision-Making

Financial decision-making is a key area in which AI can generate and/or perpetuate unfair bias against people of a certain race, place of residence and/or gender. It is therefore critically important that AI systems and their developers be held accountable for automated decisions that shape worldviews by creating and propagating stereotypes which incubate and amplify social rifts, or that exacerbate economic harm by denying affected groups access to, for example, fair mortgage rates or credit scores on the basis of race, gender or other unjustified criteria.

Accountability principles that aim to detect harm and bias in AI solutions should be implemented so that algorithmic decision-making abides by ethical standards equivalent to those applied to human assessments. To be held accountable to the fairness principle, an AI solution that makes decisions affecting an individual’s financial standing should be accompanied by verifiable explanations, a justification of the data types to be processed and the corresponding evaluation metrics.
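One way to make such evaluation metrics verifiable is a disparate-impact check on the model’s decisions, sketched below on hypothetical data. The 0.8 threshold is the commonly cited “four-fifths” rule of thumb, not a legal standard:

```python
import numpy as np

def disparate_impact(decisions: np.ndarray, groups: np.ndarray) -> float:
    """Approval rate of the least-favoured group divided by that of the
    most-favoured group (1.0 means parity)."""
    rates = {g: decisions[groups == g].mean() for g in np.unique(groups)}
    return min(rates.values()) / max(rates.values())

# Hypothetical mortgage decisions (1 = approved) by group label.
decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 0, 1])
groups = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

ratio = disparate_impact(decisions, groups)
print(f"disparate impact ratio: {ratio:.2f} (flag for review if below 0.8)")
```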

Real-world scenario

Extrapolating the above to the real world, transparency could be operationalised into the build of a mortgage assessment AI solution through what is known as an ‘AI Ethics Charter’. As an enabler of algorithmic accountability, the AI Ethics Charter would capture the facts and features of the mortgage assessor AI, span the solution’s development lifecycle and include input from stakeholders across an organisation. The voluntary provision, through the AI Ethics Charter, of information on training data, inputs and outputs, and performance metrics would then give internal and external stakeholders insight into the model’s decision-making process and provide evidence that the mortgage assessor embeds the ethical metrics of equality and social inclusion.
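A minimal sketch of what such a charter might look like as a structured record follows; the field names and values are hypothetical assumptions, and a real charter would be shaped by an organisation’s own stakeholders:

```python
from dataclasses import dataclass, field

@dataclass
class AIEthicsCharter:
    """Illustrative record of the facts and features of an AI solution."""
    system_name: str
    intended_use: str
    training_data_description: str
    model_inputs: list[str]
    model_outputs: list[str]
    performance_metrics: dict[str, float] = field(default_factory=dict)
    excluded_attributes: list[str] = field(default_factory=list)

charter = AIEthicsCharter(
    system_name="mortgage-assessor-v1",  # hypothetical system
    intended_use="Support (not replace) human mortgage decisions",
    training_data_description="Anonymised applications, 2015-2022",
    model_inputs=["income", "loan_amount", "credit_history_length"],
    model_outputs=["approval_recommendation", "confidence"],
    performance_metrics={"accuracy": 0.91, "false_positive_rate": 0.04},
    excluded_attributes=["race", "ethnicity", "gender"],
)
print(charter)
```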

In terms of the benefits for both AI developers and end users that this approach would provide:

  • Developers will gain from the accrual of trust in, and societal acceptance of, their mortgage assessor solution.
  • The inclusion of bias-detection measures prohibiting the use of characteristics such as an applicant’s race, ethnicity or gender in mortgage application assessments generates product goodwill for AI developers and reassures end users that their mortgage applications are dissociated from their personal traits and/or background.
  • Operationalised ethics helps AI developers position their products as delivering high-quality outcomes and their enterprises as trailblazers in the AI, tech-ethics and societal welfare space.
  • The wellbeing of end users will be increased through the ethical and responsible build and deployment of AI.

In conclusion

Algorithmic accountability is the practice of overseeing the incorporation of principled behaviours into AI and their translation into ethical decision-making and outputs. ‘AI ethics’ is best given effect through accountability mechanisms, which are essential if a regulator or court of law is to review AI technologies and take action against their creators where those technologies violate ethics principles. Algorithmic accountability is, therefore, the critical substrate on which AI ethics rests and a practical route through which human aspirations for ethical AI can be realised.

 

[1] Richards, J., Piorkowski, D., Hind, M., Houde, S., & Mojsilović, A. (2020). A Methodology for Creating AI FactSheets. Available at: https://arxiv.org/abs/2006.13796.

[2] Gilpin, L. H., Bau, D., Yuan, B. Z., Bajwa, A., Specter, M., & Kagal, L. (2018). Explaining Explanations: An Overview of Interpretability of Machine Learning. 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA). Available at: https://arxiv.org/abs/1806.00069.