Part 1: Key Legal Considerations for Agentic AI

Do you have GenAI fatigue? Well, there is a new kid on the block which promises to transform the use of AI. 

The name’s AI… Agentic AI. Licensed to skill…   

Unlike GenAI, which relies on existing data patterns, Agentic AI can act independently and adapt to changing environments. Agentic AI promises more goal-oriented autonomous systems, ideal for matters involving complex decision-making and iterative planning. It has the potential to transform businesses that are still exploring whether GenAI can truly deliver the evolution first promised.

Example use cases for Agentic AI are evident across all sectors:

  • AI e-commerce agents that can manage product placement based on consumer purchasing dynamics to enhance visual merchandising, predict customer demand based on seasonal product launches and trends, or ensure greater precision in stock levels and faster go-to-market;
  • AI writing agents that can independently generate marketing content based on personalised writing styles to suit specific audiences; 
  • HR assistant AI agents that can manage payroll processing or interview scheduling on behalf of hiring managers; 
  • AI agents that can stand in for employees at sales meetings to represent your tone of voice and capture meeting notes; and
  • AI coding agents that can improve development productivity by supporting automated testing. 

At our recent Outsourcing Symposium: Dynamic Outsourcing, DLA Piper explored the transformational impact Agentic AI is likely to have on the business process outsourcing industry. Agentic AI can enable autonomous, real-time management of workflows without the need for cost-heavy human oversight. This can ultimately automate tasks previously labelled as “too complex”, with greater scope for end-to-end issue resolution that learns from past interactions.

There is already a drive for innovation and operational efficiencies, with a Boston Consulting Group survey suggesting nearly two-thirds of leaders are using AI to reshape their organisations. The adoption race is on. However, there is potential for premature reliance on these systems without proper controls. Effective governance is crucial for ensuring compliant development, deployment, and use of AI agents. Only 38% of respondents to our 2024 Tech Index confirmed that they had a comprehensive risk framework, underpinned by ethical principles, impact assessments and review boards, to oversee use of AI.

Legal considerations

So, what legal considerations are at play when engaging with Agentic AI and ensuring responsible governance?

In the first of our three-part series, we delve deeper into ensuring effective contracting, transparency and establishing best practices in the roll out of Agentic AI.

1.              Effective contracting 

Contracting with a technology vendor for Agentic AI presents multiple drafting issues, and courts are placing increasing emphasis on precision in drafting. The scope of responsibility for AI Agents and their output, technology lock-in, and IP ownership are some of the more significant issues to consider.

Scope of Responsibility 

Agentic AI provisions in contracts should make it clear that AI Agents are, like human agents, representatives of, and therefore the responsibility of, the party deploying them. This becomes more challenging when the AI Agents are trained on and autonomously adapt from ingested customer data, particularly when trained on multiple data sources, as this can undermine the ability to clearly understand and interrogate why the system took a particular course of action.

Clearly defining Agentic AI capabilities and the extent of its autonomous decision-making powers is therefore crucial. One approach is setting “hard boundaries” within the contract provisions, outside of which different liability regimes apply. Another is to focus on “human in the loop” oversight, whether by the vendor or customer, depending on the services. However, if oversight is the customer’s responsibility, it raises questions about the supplier’s liability for errors and AI hallucinations within the system, especially since customers can only perform spot-checks. As a result, there is a need for an increased focus on ensuring contracts are drafted with clear provisions that deal with transparency and explainability (which we discuss further below), as well as ensuring that, once issues are identified, the risk of further error repetition is correctly allocated (which we’ll discuss in part two).
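Purely by way of illustration, a “hard boundary” plus human-in-the-loop gate might be expressed in code along the following lines. This is a minimal sketch: the action names, the monetary threshold and the approve() callback are hypothetical, and in practice would mirror whatever boundaries the contract actually defines.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AgentAction:
    name: str          # e.g. "issue_refund" (hypothetical action name)
    value_gbp: float   # monetary exposure of the proposed action

# Hypothetical hard boundaries, standing in for contractually agreed limits.
AUTONOMY_LIMIT_GBP = 500.0               # below this, the agent may act alone
PROHIBITED_ACTIONS = {"delete_account"}  # never permitted autonomously

def execute_with_oversight(action: AgentAction,
                           run: Callable[[AgentAction], None],
                           approve: Callable[[AgentAction], bool]) -> bool:
    """Run an agent action only if it sits inside the agreed boundaries,
    escalating to a human reviewer otherwise."""
    if action.name in PROHIBITED_ACTIONS:
        return False  # hard boundary: always blocked, no human can override
    if action.value_gbp > AUTONOMY_LIMIT_GBP and not approve(action):
        return False  # human in the loop was asked and declined
    run(action)       # within boundaries (or approved): proceed, and log
    return True
```

A gate of this kind also generates a natural audit trail of which actions ran autonomously and which were escalated, which can help evidence the “spot-check” oversight discussed above.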

Technology Lock-in

As businesses increasingly utilise Agentic AI, there is a growing risk that they will become “locked in” to particular third-party technology solutions, because the financial and time-based costs of reverting to an alternative, untrained model limit their leverage to secure a good deal on renewal. It’s therefore crucial to plan exit strategies upfront, considering future transitions and data portability, and to consider what, if any, elements of the trained solution belong to the customer and can be utilised in the future. Even where there are elements that can continue to be utilised, the practical reality is that transferring data and learnings between different systems can be complex and resource intensive, so clarity on data portability, both practically and legally under the contract, is important. Businesses could consider modelling what any data transfer may look like on a trial basis, to ensure they are ready to actually move technology vendors when required. Businesses may also begin to engage multiple Agentic AI vendors to reduce dependence and enhance flexibility.

IP ownership and rights to use

IP ownership should be front of mind when contracting for Agentic AI. First, consider whether the Agentic AI is likely to create any resulting IP. This question remains undetermined in relation to AI-generated content more generally. Arguably, the prospect of IP rights being created by Agentic AI is lower since, by its very nature, Agentic AI is more autonomous; content created by it may therefore lack the necessary ingredient of human input and creativity to qualify for protection. Second, whether IP rights are created or not, the contract should specify which party owns materials created by agents. The ability to re-use outputs can be essential for business continuity if a business wants to move vendor. Even if the customer can negotiate ownership of an agent’s outputs, the provider will likely want to retain some rights, especially those relating to outputs based on pre-trained models and the underlying AI model itself. If that’s the case, the customer will want to ensure it has an appropriate post-term licence that permits it to use those retained rights to the extent necessary to continue using the outputs.

Agentic AI providers often require rights to use client data to improve their models (so-called “feedback loops”). The contract should expressly permit or prohibit this, or at least set the terms upon which a loop may operate (for example, whether data should first be anonymised).
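As a purely illustrative sketch of one such term in operation, client records might be pseudonymised before they enter a vendor’s feedback loop. The field names and salting approach below are assumptions, not a compliance standard, and note that salted hashing is pseudonymisation rather than true anonymisation; whether it satisfies the contractual or regulatory standard is a separate legal question.

```python
import hashlib

# Hypothetical field names; a real schema would come from the contract's
# data specification. SALT would be a secret held by the customer only.
DIRECT_IDENTIFIERS = {"name", "email", "phone"}
SALT = b"customer-held-secret"

def pseudonymise(record: dict) -> dict:
    """Replace direct identifiers with salted hashes before the record
    is shared with the vendor's model-improvement pipeline."""
    out = {}
    for key, value in record.items():
        if key in DIRECT_IDENTIFIERS:
            digest = hashlib.sha256(SALT + str(value).encode()).hexdigest()
            out[key] = digest[:16]  # stable token per value, not reversible
        else:
            out[key] = value
    return out

print(pseudonymise({"name": "Jo Bloggs",
                    "email": "jo@example.com",
                    "query": "late delivery"}))
```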

2.              Transparency and Explainability

Transparency and explainability are imperative for the ethical use of Agentic AI systems and for increasing trust in AI outputs. Explainability is increasingly being integrated directly into model design to address fairness challenges, much as the GDPR requires “privacy by design”, demonstrating that compliance can be a path toward innovation and revenue expansion.

So, what are the strict legal requirements? Fundamentally, the EU AI Act is a product safety regulation ensuring the safe development and use of AI systems. With a couple of exceptions, it does not create any new rights for individuals.  In contrast, the GDPR, a fundamental rights law, provides individuals with broad data processing rights. However, parallels can be drawn when it comes to transparency.  

Users must be informed when they are interacting with an AI agent (and a deployer must disclose when content has been artificially produced or modified), unless it’s obvious to a reasonably informed person or another exception in Article 50 of the EU AI Act applies. This information must be presented clearly and separately, at the first point of interaction (akin to a just-in-time GDPR notice). AI-generated content must also be identifiable, especially if it’s intended to inform the public.
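To give a flavour of how a deployer might operationalise this, the sketch below attaches a disclosure to the first message of an agent session and a machine-readable label to generated content. The wording and field names are illustrative assumptions, not a prescribed Article 50 format.

```python
from datetime import datetime, timezone

AI_DISCLOSURE = "You are interacting with an AI agent, not a human."

def open_session(first_reply: str, already_disclosed: bool) -> str:
    # Present the disclosure clearly, at the first point of interaction.
    if already_disclosed:
        return first_reply
    return f"{AI_DISCLOSURE}\n\n{first_reply}"

def label_output(text: str, model: str) -> dict:
    # Attach machine-readable provenance so the content is identifiable
    # as artificially generated. Field names are illustrative only.
    return {
        "content": text,
        "ai_generated": True,
        "model": model,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
```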

Articles 11 and 13 of the EU AI Act also require providers to prepare instructions for use of the AI system and to provide information about its input data so that the deployer can interpret the system. Technical information about how certain high-risk systems monitor performance must also be maintained, although simplified obligations apply to SMEs and startups in this regard. The large players in the market don’t offer these commitments contractually by default, but we are seeing customers seeking them out in template addenda; such commitments may therefore be more readily accepted by consultants and systems integrators.

There is therefore a tricky balance to strike between vendors wanting to protect the “crown jewels” and maintain a competitive advantage, and transparency requirements and the need to interpret and monitor performance. However, there is an inherent distrust in this “black box” dilemma. Ultimately, biased data can lead to costly mistakes, so building accountability into the system itself, whilst potentially complex, is critical for successful, responsible deployment.

Emerging technical controls, like feature attribution, model distillation, counterfactual explanations, and federated learning, help clarify AI decision-making and improve compliance. Open-source implementations of these techniques help users understand AI decisions and intervene if issues arise.
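As a flavour of one such technique, permutation importance is a simple form of feature attribution available in the open-source scikit-learn library: it scores how much each input feature drives a model’s predictions. A minimal sketch on synthetic data, for illustration only:

```python
# Minimal feature-attribution sketch using permutation importance
# (scikit-learn); the synthetic dataset is illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how far accuracy falls:
# the bigger the drop, the more the model relies on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {score:.3f}")
```

Outputs like these give deployers something concrete to interrogate when a decision is challenged, which is precisely the kind of interpretability the EU AI Act’s documentation duties anticipate.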

3.              Evaluating Best Practices 

To accelerate digital transformation, AI agents will integrate with vast databases of information. Unrestricted (or insufficiently restricted) database access by AI agents can lead to vulnerabilities and cyber risks. The large datasets analysed by AI agents may inadvertently include sensitive information if a company has poor data governance or classification guardrails.
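To illustrate the point with a hypothetical guardrail (an allow-list wrapper of our own devising, not a vendor feature), an agent’s database access can be confined to named tables and read-only statements:

```python
import sqlite3

# Hypothetical guardrail: the agent may read only from approved tables.
ALLOWED_TABLES = {"products", "orders"}

def agent_query(conn: sqlite3.Connection, table: str, limit: int = 100):
    """Run a read-only query on behalf of an AI agent, but only against
    tables on the allow-list, so a compromised agent cannot roam freely."""
    if table not in ALLOWED_TABLES:
        raise PermissionError(f"agent is not permitted to read '{table}'")
    # The table name has been validated against the allow-list above;
    # the row limit is passed as a bound parameter to avoid injection.
    return conn.execute(f"SELECT * FROM {table} LIMIT ?", (limit,)).fetchall()
```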

Meanwhile, as adversarial attacks become more sophisticated, attackers can manipulate data, causing incorrect outputs and data poisoning. If an Agentic AI solution is compromised then, depending on the configuration, it could act as a gateway to broader systems, networks and databases and reveal costly vulnerabilities. Regular data testing and avoiding over-reliance are therefore crucial. Refreshing backup strategies and conducting tabletop cyber exercises and data protection assessments are strongly recommended. Aligning with ISO 42001 can also help mitigate risks and, if it is approved as a harmonised standard, can result in presumed conformity with certain elements of the EU AI Act relevant to high-risk systems.

Next Steps

Please reach out to Linzi Penman or Chloe Forster if you would like to give further thought to how you can ensure that these systems are deployed ethically and with a suitable contractual risk allocation. We can help ensure you Live and Let AI with responsible oversight and governance.

Agentic AI will return in…

You’re Only Liable Twice: Agentic AI errors. Part two in this series will have explainability issues, vicarious liability for AI agents and the ability of AI agents to enter contracts in its sights – together with some practical tips for building in levels of oversight of Agentic AI.

No Time To Hire: Will Agentic AI transform the workplace? The final chapter will consider bias, discrimination, human in the loop and the impact on the workforce (including the impact on wellbeing of working with AI agents). Stay tuned.