Organisations increasingly use AI-enabled tools throughout the recruitment process. These tools screen CVs, score suitability, run online assessments, and analyse behaviour. They can speed up hiring and may help reduce the human bias found in traditional recruitment. However, their use often clashes with data protection rules that limit decisions based only on automated processing. On 31 March 2026, the Information Commissioner’s Office (ICO) published a report and draft guidance on automated decision-making in recruitment. The report draws on evidence from more than 30 employers. It also includes public perception research reflecting views from graduates, civil society, government, trade unions, and industry bodies.

A key finding is that many employers fail to recognise that they are using automated decision-making (ADM). As a result, they do not put essential safeguards in place. These safeguards include transparency, bias monitoring, accountability, and respect for data subject rights. The ICO's message is clear: employers must follow the guidance. Where organisations fall short, the ICO signals that enforcement action may follow.

Summary of the ICO’s Key Findings

Most employers told the ICO that they use automated tools only for decision support. In practice, however, the evidence showed that many tools make decisions without meaningful human involvement. The ICO stresses that human involvement must be active and genuine. It cannot be a token step or a rubber-stamping exercise.

A human must be able to influence the decision before it takes effect. They must have the authority, discretion, and competence to change the outcome. If this standard is not met, the process counts as solely automated. This remains true even if a person appears in the decision chain.

The Impact of the Data (Use and Access) Act

The new Data (Use and Access) Act (DUAA) offers greater flexibility for employers using automated tools in recruitment. Under Article 22 of the UK GDPR, solely automated decisions with legal or similarly significant effects were treated as largely prohibited, with narrow exceptions. The DUAA reframes this position. It creates a right to challenge automated decisions, supported by safeguards, rather than a general ban.

This change gives employers more scope to use automation, provided they put proper protections in place. Where special category data is involved, however, the stricter rules still apply.

Two Routes for Employers from the ICO

The ICO sets out two options for employers:

  1. Acknowledge solely automated decision-making.
    Employers can accept that the process lacks meaningful human involvement. They must then recognise that they are carrying out ADM and apply the required safeguards.
  2. Ensure meaningful human involvement.
    Employers can redesign their processes so a human plays a genuine role in each decision for each candidate.

This second option sets a high bar. For organisations handling large volumes of applications, it will often prove impractical. In reality, many employers will need to follow the first route.

The ICO’s Required Steps Where Decisions Are Solely Automated

Where employers rely on solely automated decision-making, the ICO expects them to take several steps.

First, employers must identify a lawful basis for processing. The DUAA removes the previous restriction of available bases to consent or contractual necessity in recruitment, provided no special category data is involved. Employers may now rely on legitimate interests.

Second, employers must provide clear and timely transparency. They must explain how the automated decision works and its likely effects. A brief mention hidden in a general privacy notice will not be sufficient.

Third, employers must implement safeguards. Candidates must know about the automated process. They must have the chance to make representations, request human review, and challenge the decision.

Fourth, employers should carry out fairness testing and bias reviews. This includes questioning vendors about their own bias testing during procurement. Employers should also run trials, monitor outcomes over time, and share clear information about the tools’ accuracy and performance.

Finally, employers must complete data protection impact assessments (DPIAs). The ICO found that many existing DPIAs lack the detail needed to meet legal requirements.

What This Means for Organisations

Organisations should review their use of AI in recruitment. They should assess whether processes involve solely automated decision-making. This is particularly relevant where CV filtering, suitability scoring, or behavioural assessments play a role.

DLA Piper works closely with clients to support compliant use of ADM in recruitment. This work includes developing due diligence processes to assess fairness and bias in vendor tools. It also includes drafting transparency information that meets the ICO’s expectations at each stage of recruitment and preparing detailed DPIAs and legitimate interests assessments.

DLA Piper also helps clients design processes for handling candidate objections and requests for human review. Awareness of data protection rights is growing. Candidates increasingly exercise these rights, often using AI-generated requests. This area demands careful attention. The ICO’s guidance provides limited operational detail, making practical legal support especially valuable.

For organisations operating across borders, DLA Piper uses its global network to align UK and EU approaches to automated decision-making. In the EU, organisations must consider not only the GDPR but also the AI Act. You can view our Data Protection Laws of the World to gain insight into any current or upcoming regulations that may affect you, and our AI Laws of the World to understand the ever-changing landscape of AI regulation.