Part 2: You’re Only Liable Twice: Agentic AI Errors

Agentic AI, with its ability to perform tasks autonomously on behalf of users without the regular prompting associated with more traditional AI models, is the new frontier of AI. Many organisations are betting big on the significant advancements and conveniences it affords.

But with great autonomy comes great responsibility — and risk. As with any agent operating under permission to act, questions of accountability, explainability, and liability loom large.

In Part 1 of our series exploring the legal issues associated with the deployment of agentic AI, we set out some of the overarching key legal considerations.

In this Part 2, we take a deeper dive into some of the underlying liability issues, particularly “explainability”, vicarious liability and the ability of AI agents to contract on behalf of their principal. We also consider potential mitigations to help your organisation extract the maximum benefit from this potentially game-changing technology in a safe and responsible manner.

Explainability

One of the most pressing issues is “explainability.” This is not an issue unique to agentic AI, but because agentic AI operates on a goal-based approach, working more autonomously than traditional AI models, the challenges from a regulatory and liability perspective are amplified by the complex and dynamic manner in which the AI agent works towards its goals.

If you are not able to explain why an AI agent is making certain decisions, your business may be exposed to allegations of bias and unethical behaviour. Naturally, such allegations (even if unfounded) present significant scope for financial and reputational harm. Further, any bias which is “baked into” an AI system will be multiplied each time the system autonomously makes a decision.

The provisions of the EU AI Act, adopted in 2024, are important to consider in any AI deployment, including agentic. Even for those outside the territorial reach of the legislation, it provides a clear insight into the direction of regulatory travel, with other countries also exploring issues of explainability, for example through the NIST AI Risk Management Framework in the United States.

Under the AI Act:

  • in relation to high risk AI systems – such as those used in employment, education, law enforcement and healthcare – there are strict transparency and explainability standards, including clear documentation setting out the system’s capabilities and limitations, outputs that can be interpreted by human users, and traceability in the form of logs and records that allow decisions and system behaviour to be traced and audits to be facilitated;
  • for general purpose AI systems, there are transparency requirements, including disclosure of the presence of an AI system, particularly for those systems that generate or manipulate content, as some agents may do, and the provision of clear instructions to ensure safe and appropriate use;
  • for other AI systems, there are no mandatory requirements, but voluntary codes of conduct are encouraged.

Whilst agentic AI systems are not automatically classified as general purpose AI (which focuses more on AI models that can be used in many contexts, such as large language models), it is worth noting that the application of the relevant GPAI provisions may depend on the architecture itself. In reality, we might expect most agentic systems to make use of GPAI somewhere in the technology stack. In such cases, the baseline transparency obligations in relation to the publication of summaries of training data, descriptions of the systems and support for downstream compliance will be relevant, so it is important to take these into consideration in any deployment of an agentic AI system, whether built in-house or procured from a third party. Particularly when engaging with third parties, it can be a challenge to secure these provisions in a contract, as technology vendors are keen to protect their “crown jewels” and preserve the black box nature of their technologies. Therefore, for non-high risk AI systems at least, we are seeing businesses focus on the need for comprehensive record keeping that can be interrogated should an issue arise.
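By way of illustration only, the short Python sketch below shows the kind of structured, append-only decision log that record keeping of this sort might involve; the field names and file layout are our own assumptions rather than any prescribed standard or vendor tooling.

import json
import uuid
from datetime import datetime, timezone

# Hypothetical audit logger: field names and file layout are illustrative
# assumptions, not a prescribed standard.
class AgentAuditLog:
    def __init__(self, path: str = "agent_audit.jsonl"):
        self.path = path

    def record(self, agent_id: str, goal: str, action: str,
               inputs: dict, output: str, rationale: str) -> str:
        """Append one structured record per agent action."""
        entry = {
            "record_id": str(uuid.uuid4()),
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "agent_id": agent_id,      # which agent instance acted
            "goal": goal,              # the goal-based prompt being pursued
            "action": action,          # the step the agent actually took
            "inputs": inputs,          # the data it relied on
            "output": output,          # what it produced or decided
            "rationale": rationale,    # the agent's stated reasoning, if captured
        }
        with open(self.path, "a", encoding="utf-8") as f:
            f.write(json.dumps(entry) + "\n")
        return entry["record_id"]

# Example: logging a single procurement decision so it can be interrogated later.
log = AgentAuditLog()
log.record(
    agent_id="procurement-agent-01",
    goal="Renew software licences at or below last year's spend",
    action="sent_quote_acceptance",
    inputs={"vendor": "ExampleCo", "quote_gbp": 41500},
    output="Accepted quote Q-2291",
    rationale="Quote is 3% below prior-year spend and within the delegated limit",
)

A log of this kind does not make an agent explainable by itself, but it gives the deployer something concrete to interrogate, and to show a regulator or court, if a decision is later challenged.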

Liability

There are also difficult questions of contractual liability arising from the use of agentic AI, for instance:

  • An AI agent may be operating perfectly but still produce an unexpected output – would that constitute a breach?
  • Who is to blame if things go wrong – if there are unexpected outcomes, is that because of the original dataset that the system learned from prior to its deployment in the customer’s environment, the underlying data made available by the customer from which the system learns, the training on that data, or even the underlying development of the system itself?
  • Can transactions executed by an AI agent create binding contractual relations, and in what capacity do they act?

The answers to these questions will either depend on the wording of the parties’ contracts or remain open questions under English law. There are, however, some existing concepts of English law which a court may utilise to address such challenging issues in the absence of any regulatory intervention.

First, the courts may treat agentic AI in the same manner as they would treat human agents or employees of the organisation deploying such a system. With AI developing at such a rapid pace, the justification for a distinction in treatment between human and machine for liability purposes is growing weaker by the day. Further, the law already recognises that relationships “akin to employment” can give rise to vicarious liability (see Trustees of the Barry Congregation of Jehovah’s Witnesses v BXB [2023] UKSC 15). On this analysis, if there is an adverse consequence resulting from an agentic AI system, it would be the deployer of that system who would be vicariously liable for the actions of its “employee” AI agent.

In a similar vein, the law of agency may assist with issues arising from agentic AI systems contracting with third parties. For instance, given that most contracts can be concluded electronically, the AI agent will likely have the technical means and, subject to the nature of the initial goal-based prompt, the power to bind its principal.

However, in the event that the agent exceeds its powers, acts in an unexpected manner or misinterprets the initial scope of its instructions, we then get into the thorny issues regarding apparent authority, which can be difficult for principals to disprove, as well as potential disputes between the principal and the developer of the AI system, in which the principal seeks to pass on any contractual liability to a third party.

Difficult questions regarding liability are not restricted to the four corners of a contract. The concept of vicarious liability applies equally in the law of tort, and it remains to be seen whether an AI agent itself would be deemed to have sufficient legal personality to owe a duty of care to those interacting with it. More likely, courts will rely on vicarious liability principles and/or hold organisations that utilise AI agents accountable for ensuring proper technical oversight. For instance, one can see how a case in negligence could be built against an AI agent’s principal for failure to implement some of the mitigations detailed below.

Agentic AI and causation

Once a potential breach and the correct defendant have been identified, however, that is only part of the story. It is a central, and trite, concept of English law that (in general, and putting to one side the question of indemnities) to recover losses for breach of duty and/or contract, the breach of the contract and/or duty needs to have caused those losses. The burden of establishing causation is on the claimant. This could make it very difficult for a claimant who alleges it has been wronged by the use of AI, particularly in the case of agentic AI where actions and decisions are made autonomously, to establish which particular action resulted in a particular consequence and who was to blame for that action.

To counter this, the proposed (but now scrapped) EU AI Liability Directive suggested that, in a consumer context, courts should apply a presumption of causality. This would in effect have reversed the burden of proof and put the onus on the defendant to prove that the action or output of the AI was not caused by them. It therefore emphasised the need for organisations to document how they are using AI technologies, including the steps that have been taken to protect individuals from harm. Despite the directive being scrapped, these issues are not off the table, and it is being assessed whether another proposal or alternative approach should be put forward.

The UK government’s White Paper on AI regulation recognises some of the difficulties which perhaps explain the failure to reach a consensus on these issues at an EU level. The White Paper emphasised “the need to consider which actors should be responsible and liable for complying with the principles” but goes on to say that it is “too soon to make decisions about liability as it is a complex, rapidly evolving issue”.

In March of this year, the Artificial Intelligence (Regulation) Bill [HL] (2025), a Private Member’s Bill, illustrated the ongoing balancing act between a “pro-innovation” approach and ensuring that sufficient regulatory guard rails are in place to realise the benefits of that innovation.

From a UK perspective, there is therefore a degree to which we will need to wait and see. For now, at least, each case will remain very much fact-dependent, and we can foresee that courts will have to rely heavily on experts to unpick the AI models in order to establish liability. This is not necessarily different to other complex IT-related disputes, so for now it is to a certain extent business as usual, but we will be watching the courts closely.

Mitigations

The liability risks associated with agentic AI should not necessarily dissuade parties from unlocking the potential and savings such systems can afford. There are also mitigations which can be put in place.

For instance, ensuring that agentic AI systems are explainable by design may go a long way towards rebuffing allegations of bias or wrongdoing. In addition, it is vital to ensure that the “human-in-the-loop” is providing the correct prompts, in a clear and unambiguous manner, and is aware of the strengths and limitations of the agentic AI system. AI insurance is already on the market and could also be a key consideration for those seeking to use agentic AI.

OpenAI has published additional suggested practices for keeping agentic AI systems safe and accountable[1], including the following (illustrated in the short sketch after this list):

  1. constraining the actions of the agent, placing limits on certain types of important decisions and ensuring that there remains a “human-in-the-loop”;
  2. proactively shaping the models’ default behaviour according to certain design principles;
  3. automatic monitoring;
  4. a system of attribution with each agentic AI instance assigned a unique identifier, similar to business registrations, which contains information on the agent’s user-principal and other key accountability information; and
  5. an ability to “turn an agent off” and maintain control.
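The sketch below, a hypothetical Python illustration rather than anything taken from the OpenAI paper, shows how several of these practices might fit together: a constrained set of permitted actions, a spend threshold above which a human must approve, a unique identifier attached to every agent instance, and a simple way to turn the agent off.

import uuid

class HumanApprovalRequired(Exception):
    """Raised when an action must be escalated to the human-in-the-loop."""

# Hypothetical guardrail wrapper: the action names, threshold and structure
# are illustrative assumptions, not a prescribed design.
class GovernedAgent:
    PERMITTED_ACTIONS = {"draft_email", "request_quote", "place_order"}

    def __init__(self, principal: str, spend_limit_gbp: float = 10000):
        self.agent_id = f"agent-{uuid.uuid4()}"   # unique, attributable identifier
        self.principal = principal                # the user-principal the agent acts for
        self.spend_limit_gbp = spend_limit_gbp    # hard cap on autonomous decisions
        self.enabled = True                       # kill switch state

    def shutdown(self) -> None:
        """'Turn the agent off': no further actions will be executed."""
        self.enabled = False

    def act(self, action: str, value_gbp: float = 0.0) -> str:
        if not self.enabled:
            raise RuntimeError(f"{self.agent_id} has been shut down by its principal")
        if action not in self.PERMITTED_ACTIONS:
            raise PermissionError(f"{action} is outside this agent's constrained scope")
        if value_gbp > self.spend_limit_gbp:
            # Important decisions are routed back to a human rather than executed.
            raise HumanApprovalRequired(
                f"{action} worth {value_gbp:,.0f} GBP exceeds the autonomous limit"
            )
        # In a real system this would call the underlying model or tools, and every
        # step would also be written to an audit log such as the one sketched above.
        return f"{self.agent_id} (acting for {self.principal}) executed {action}"

agent = GovernedAgent(principal="Example Ltd")
print(agent.act("request_quote", value_gbp=2500))   # within scope and limit
agent.shutdown()                                    # the principal retains the ability to stop the agent

None of this removes the liability questions discussed above, but controls of this kind are exactly the sort of technical oversight a court is likely to expect a principal to have put in place.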

As described in the first part of this series, it is also vital when using agentic AI to ensure that your contractual terms are fit for purpose. For instance, thought needs to be given to:

  • AI specific exclusions and limitation clauses;
  • Disclaimers and/or active acknowledgment of risk;
  • Specific allocations of responsibilities for failings with the AI; and
  • Clarity on the limitations of the AI.

While liability issues remain a key concern in the deployment of agentic AI, it is only one part of the broader legal landscape. The next article in this mini-series will start to explore wider business considerations, turning to the implications for HR and employment law when deploying Agentic AI systems.


[1] https://cdn.openai.com/papers/practices-for-governing-agentic-ai-systems.pdf