The Digital Regulation Cooperation Forum (“DRCF”) have issued a call for views on the benefits and risks of algorithmic processing uncovered in their research, and have set out their plan of action for the coming year.

Over the past year, DLA Piper has covered a number of developments in the area of AI and algorithms, from initiatives to increase transparency to practical steps organisations may take to achieve algorithmic accountability. Throughout that coverage, a trend of increased use and acceptance of ‘algorithmic processing’ in business has become apparent.

In many ways, this is incredibly beneficial – such processing can be used to detect fraudulent activity in financial services or provide better care to patients in the healthcare sector. However, as noted by the DRCF in a recent press release, algorithms, and particularly those utilising modern AI and machine learning (“ML”) practices, pose significant risks (such as harmful biases leading to discrimination or reinforcement of inequalities) if not adequately checked. The publication goes on to note that while progress has been made in addressing the concerns raised by the technology, the effort has largely been fragmented and more work is required.

The pervasive nature of algorithms across sectors in the UK therefore requires a holistic approach by regulators to:

  • highlight the nature of these risks to stakeholders and the public; and
  • take measures to mitigate them, while limiting the impact of actions on the positive aspects of algorithmic processing.

In an effort to meet these requirements, and in response to the findings of their research, the DRCF have included algorithms as a key priority in their latest workplan for regulatory collaboration.

In this article DLA Piper briefly:

  • details the constitution of the DRCF and their purpose;
  • reviews the research by the forum on algorithmic processing; and
  • summarises the workplan of the regulators for the coming year.

The Digital Regulation Cooperation Forum

The DRCF was formed in response to the absence of a coherent regulatory approach to digital technologies in the UK. Initially, the collaboration comprised the Competition and Markets Authority (“CMA”), the Information Commissioner’s Office (“ICO”), and the Office of Communications (“Ofcom”). In April 2021, the three regulators were joined by the Financial Conduct Authority (“FCA”), establishing an even greater level of regulatory cooperation in addressing the concerns raised by digital technologies within the UK.

In carrying out their work, the DRCF approach their role with a focus on three overarching goals:

  • to promote greater coherence, so that where regulatory regimes intersect the DRCF can resolve tensions and clarify regulatory positions;
  • to work collaboratively on areas of common interest and jointly address complex problems; and
  • to work together to build the necessary capabilities, developing from the learnings of each regulator.

It is hoped that, through this increased collaboration between regulators and a sustained focus on the DRCF’s three primary goals, the UK will serve as a coherent and responsive location for the development of digital markets.

Findings of the Algorithmic Processing Workstream

Alongside the publication of their workplan for 2022 – 2023, the DRCF published two discussion papers from their algorithmic processing workstream on the benefits and harms of algorithms and on the landscape of algorithmic auditing in the UK (the “Research”).

The benefits and harms of algorithms:

The first paper sets out an initial assessment of the benefits and harms that can arise from the use of algorithmic processing in the delivery of digital services and technology. The paper primarily covers:

  • where and how algorithmic processing is being deployed in sectors regulated by members of the DRCF;
  • the benefits and harms associated with such deployments;
  • the extent to which those harms are currently mitigated; and
  • the types of issues that may potentially arise as algorithmic processing evolves.

The paper does so by setting out six common areas of focus among the regulators:

  • transparency;
  • fairness;
  • access to information;
  • resilience of infrastructure;
  • individual autonomy; and
  • healthy competition.

The paper goes on to outline a number of current and potential harms (including lack of accountability and biased outputs) and benefits (including more effective utilisation of data) in relation to the aforementioned focus areas. Finally, the paper explores the roles UK regulators may take in their future work.

The paper also highlights a number of key ‘takeaways’ from the DRCF’s research. These include:

  • Algorithms offer a number of benefits which can be leveraged with responsible innovation;
  • Harms can occur both intentionally and inadvertently;
  • Those procuring or using algorithms often know little about their origins and limitations and the potential risks this brings;
  • There is a distinct lack of transparency in algorithmic processing, which has the potential to undermine accountability;
  • A ‘human in the loop’ is by no means a foolproof safeguard, and further checks and protections are required; and
  • There are limitations to the understanding of regulators of the risks of algorithmic processing.

On concluding their research, the DRCF noted that there are a number of opportunities for regulators to collaborate to enable greater regulatory action, including:

  • Working with industry to improve overall understanding of the impact algorithms can have on individuals and society;
  • Supporting the development of algorithmic assessment practices which can identify inadvertent harms, improve transparency, and provide confidence in the deployment of an algorithmic processing system;
  • Helping organisations communicate in greater detail to consumers about where and how algorithmic systems are being used;
  • Engaging with researchers in the field of ‘human-computer interaction’ to better understand issues with human-in-the-loop oversight; and
  • Conducting or commissioning further research or drawing the attention of external researchers to important open questions and areas of research in relation to algorithmic processing.

Auditing algorithms:

The second paper summarises the key issues encountered during stakeholder engagement and establishes an initial DRCF position on auditing algorithms. The paper then puts forward a number of hypotheses to be tested through the call for input from stakeholders, the responses to which will be used to direct the work of current and future workplans.

In order to do so, the paper highlights a number of issues currently faced when auditing algorithms. For example:

  • It is often difficult to replicate the technical conditions under which an algorithm is normally used, so an audit may provide limited insight into the algorithm’s actual behaviour and functioning when deployed;
  • Feedback loops may develop during audits, as many algorithms adapt their behaviour to their inputs. Artificial inputs supplied during an audit may therefore alter the system’s subsequent behaviour, again limiting the insight to be taken from the process (a minimal sketch of this effect follows this list);
  • Because an algorithm’s behaviour can change over time, it is difficult to certify that an algorithm continues to behave as it did when the audit took place, rendering any certification of little use unless restricted to a specific period of time; and
  • Personalisation of otherwise standardised algorithms results in each instance performing differently due to variation in inputs, making it difficult to determine whether an algorithm is performing in a faulty manner or simply responding to personalised input data.
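
To make the feedback-loop concern concrete, the minimal Python sketch below illustrates how an adaptive system’s state can be disturbed by the audit itself. Everything here is hypothetical – the AdaptiveScorer class and its values are invented for illustration and are not drawn from the Research: the scorer updates a running mean on every query, so artificial audit probes shift its subsequent behaviour away from the pre-audit baseline.

```python
# Illustrative only: a toy adaptive scorer that updates its internal
# state on every input, as many online-learning systems do. The class
# and all values are hypothetical, not taken from the DRCF Research.

class AdaptiveScorer:
    """Scores each value relative to a running mean of all inputs seen."""

    def __init__(self) -> None:
        self.mean = 0.0
        self.count = 0

    def score(self, value: float) -> float:
        # Querying the system changes it: the running mean shifts.
        self.count += 1
        self.mean += (value - self.mean) / self.count
        return value - self.mean  # score relative to observed history


scorer = AdaptiveScorer()

# Normal operation: organic traffic establishes the running mean.
for organic in [10.0, 12.0, 11.0, 13.0]:
    scorer.score(organic)
baseline = scorer.score(12.0)

# An audit probes the system with artificial, extreme inputs...
for probe in [100.0, 120.0, 90.0]:
    scorer.score(probe)

# ...and the very same query now yields a different result, because the
# audit's own inputs have fed back into the system's state.
audited = scorer.score(12.0)
print(f"score before audit probes: {baseline:.2f}, after: {audited:.2f}")
```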

The paper acknowledges that, should auditing take place, it could be done at a number of different levels – ranging from a light-touch review of a system’s governance framework and documentation through to a more comprehensive audit involving testing of outputs and the technical workings of a system. A helpful breakdown of three distinct audit types and their uses can be found in the table below.

Governance audit
  • Description: Assessing whether the correct governance policies have been followed.
  • Methods: Impact assessment, compliance audit (including transparency audit), conformity assessment.
  • Example: The draft EU AI Act mandates conformity assessments for high-risk applications of AI.

Empirical audit
  • Description: Measuring the effect of an algorithm using inputs and/or outputs.
  • Methods: Scraping audit, mystery shopper audit.
  • Example: ProPublica investigated the COMPAS recidivism algorithm by comparing predicted rates of reoffending with those that materialised over a two-year period.

Technical audit
  • Description: Looking “under the bonnet” of an algorithm at data, source code and/or method.
  • Methods: Code audit, performance testing, formal verification.
  • Example: Internal code peer review has become common practice in Google’s development workflow.
Source: DRCF, Auditing Algorithms (2022), p. 15.
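
To illustrate the ‘empirical audit’ type above, the short Python sketch below mirrors, in highly simplified form, the ProPublica-style approach of comparing an algorithm’s predictions against realised outcomes, broken down by group. The records and group labels are toy values invented for illustration; they are not ProPublica’s data or full methodology.

```python
# Illustrative only: a highly simplified empirical audit comparing an
# algorithm's predictions with realised outcomes per group. All records
# below are invented toy values, not real data.

from collections import defaultdict

# Each record: (group, predicted_high_risk, actually_reoffended)
records = [
    ("A", True, False), ("A", True, True), ("A", False, False),
    ("A", True, False), ("B", True, True), ("B", False, False),
    ("B", False, True), ("B", True, True),
]

false_positives = defaultdict(int)  # predicted high risk, did not reoffend
non_reoffenders = defaultdict(int)  # everyone who did not reoffend

for group, predicted_high_risk, reoffended in records:
    if not reoffended:
        non_reoffenders[group] += 1
        if predicted_high_risk:
            false_positives[group] += 1

# A marked gap in false positive rates between groups is the kind of
# disparity an empirical audit would flag for further investigation.
for group in sorted(non_reoffenders):
    rate = false_positives[group] / non_reoffenders[group]
    print(f"group {group}: false positive rate {rate:.0%}")
```

A real audit would of course use far larger samples and additional metrics (such as calibration and false negative rates), but the core mechanic of comparing predictions with realised outcomes by group is the same.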

In reviewing the existing landscape of algorithmic auditing, the DRCF concluded that the ecosystem is still very much in its infancy. It comprises a scattered mix of regulatory intervention, such as the draft Online Safety Bill; non-binding intervention by standards bodies, such as the British Standards Institution; and contributions from industry and research/academic parties. While efforts are underway to proactively identify issues and create more effective means of audit, progress is slow and further work is needed before auditing reaches a suitable level of maturity.

These findings largely shaped the paper’s hypotheses about what the potential future audit landscape could look like. It is anticipated that some regulators (likely those more focused on specific technical elements and activities, such as creditworthiness decision-making) will take a more hands-on approach, while others may seek to encourage self-governance. Where audit is required, its type and depth (as detailed in the example table above) will likely vary depending on the nature and size of potential harms, whether clear rules or regulations are in place, and the anticipated costs for those involved. While no single correct approach to auditing algorithms is yet clear, the contributions and insight of stakeholders responding to the call for evidence are likely to be invaluable in determining when, how, and by whom audits of algorithms should be carried out.

What’s in store for 2022 – 2023

While the first workplan of the DRCF focused on building the strong foundations needed to facilitate collaboration, the latest version sets out the forum’s priority projects (such as the algorithmic processing workstream noted above) for the next twelve months. Although broad in its approach, the DRCF acknowledge that they are working in an area of rapid change and are therefore open to amending their collaboration as progress is made.

The plan is split into three sections, corresponding to the forum’s goals of coherence, collaboration, and capability, and highlights a number of goal-specific priorities.

Coherence:

  • Protecting children online: A particular focus, in tandem with the Online Safety Bill, is the improvement of outcomes for children and parents online by synchronising the approach to online regulation. This is set to include a joint working agenda to support Ofcom’s Video Sharing Platform regulatory framework and the ICO’s Age Appropriate Design Code regime, as well as joint research on age assurance.
  • Promoting competition and privacy in online advertising: While competition in online markets and economic growth are encouraged, they must be pursued in a way that respects consumer and data protection rights. A notable part of this work will therefore include a review by the CMA and ICO of Google’s emerging proposals to phase out third-party cookies and of Apple’s ‘App Tracking Transparency’ and ‘Intelligent Tracking Prevention’ features.
  • Further work to ensure a coherent approach across regimes: Alongside the above priorities, the DRCF seek to maintain their focus on creating greater coherence between regulatory regimes, by:
    • mapping interactions between relevant regulatory regimes;
    • publishing a joint statement on its plan to work together to address areas of interaction between the online safety and privacy regimes;
    • developing a clear articulation of the relationships between competition and online safety policy;
    • developing a greater understanding of end-to-end encryption; and
    • building on engagement between Ofcom and the FCA on online fraud and scams.

Collaboration:

  • Supporting improvements in algorithmic transparency: The DRCF note that a key method of supporting the use of algorithmic processing, while mitigating its risks, is to increase overall transparency in the technology. A workstream is therefore expected to be established to explore methods of improving transparency and audit capabilities, in a similar fashion to the workstream responsible for the Research above. It is hoped that the regulators themselves will improve their ability to understand algorithms and to audit specific processes, while promoting transparency in the procurement of these technologies.
  • Enabling innovation in the industries we regulate: Although regulation is considered necessary, the DRCF want to continue encouraging ‘responsible innovation’ within the UK. They seek to do this by assisting organisations in delivering the protections individuals expect under the regulations, and by working with them to encourage wider trust in the use of emerging technologies. This particular workstream is therefore set to explore different models for how the forum can coordinate its work with industry to support innovation.

Capability:

  • Improving knowledge sharing through expert networks: It is expected that this workstream will involve supporting relationships between experts within each of the regulators to drive a complementary approach to policy developments. It is hoped that, in doing so, a more informed and impactful approach to policy and regulation can be taken.
  • Building on synergies and bridging gaps in horizon scanning activities: This activity is expected to complement the ongoing ‘horizon scanning’ programmes already in place within each individual regulator. It aims to develop knowledge of areas of rapid development and to bridge gaps in understanding through a collective approach and information-sharing initiative.
  • Recruiting and retaining specialist talent across all four regulators: While each member of the DRCF plays a different role in regulation within the UK, the forum acknowledges that common skills and expertise are required in regulating technology. The regulators therefore intend to collaborate in attracting, building, and retaining expertise needed to deliver on their current and future responsibilities as digital watchdogs.

How to get involved

The DRCF seek to continue building dialogue with other regulators and stakeholders with overlapping interests. As such, in addition to publishing their latest workplan and research, the DRCF also plan to engage with stakeholders through briefings, events, and a number of industry meetings.

Those wishing to comment or provide feedback specific to the algorithmic processing research carried out by the DRCF are requested to complete the questions set out in Annex A of the Research and send their responses to: drcf.algorithms@cma.gov.uk by 8th June 2022.

Those wishing to comment or provide feedback specific to the workplan for 2022 – 2023 are requested to do so by sending responses to: drcf@ofcom.org.uk throughout the course of the year.

DLA Piper continues to monitor updates and developments in AI and its impact on industry in the UK and abroad. For further information, or if you have any questions, please contact the author or your usual DLA Piper contact.