Following the publication of the European Union's White Paper on AI on 19 February 2020, which we analysed here, we, together with H5, IEEE and The Future Society, hosted a roundtable discussion on AI and the rule of law. Participants included representatives of the European Commission and the Council of Europe, Lord Tim Clement-Jones of DLA Piper, chair of the House of Lords Select Committee on AI, as well as academics, general counsel and industry practitioners.

The Commission is looking closely at regulation, but the question is, how? Should regulation apply horizontally, to all activities and businesses, to address universally accepted issues? For example, is it agreed that a human should always have the final say on whether a life support machine is switched off? Or should regulation apply vertically, on a sector-by-sector basis, acknowledging that some sectors present a higher risk than others?

Below are the key discussion points from the roundtable.

On AI in context

At the outset it was remarked that AI presents many opportunities to society but is often characterised as a threat to humanity. The panel discussions explored policy and regulatory developments, particularly in the light of the recent EU White Paper and the likely direction of travel of any regulation of AI, the adequacy of the current legislative environment, and the concerns of the business community.

On legislative efforts

  • It was acknowledged that legislating in this area is no small task: the relevant laws are scattered, with applicable laws, regulations and guidelines covering a wide range of issues, including data protection, product development, defence, healthcare and education.
  • Regulators are at the stage of analysing whether the current frameworks are sufficient and, if not, identifying where the gaps are. There is a risk that new laws introduced too early, for example regulating the secondary use of data for AI-training purposes, may undermine the perceived adequacy of existing provisions. Indeed, in the view of some, Article 22 of the GDPR already deals adequately with automated decision-making and profiling.
  • EU competition law is a flexible tool, and enforcement has started to focus in particular on the accumulation of data by the major platforms.
  • There is a paramount requirement for transparency: a user should always know whether they are dealing with a human or a machine.
  • The narrative around regulation can itself become a burden, distracting from the underlying issues.

On the current state of AI

  • There are potential challenges in applying AI norms based on one culture in other parts of the world.
  • From a commercial perspective, there is a danger that AI is treated as just another tool with which business can be done.
  • There is also a risk of Balkanisation of the regulatory framework between different countries and regions, while businesses want certainty and convergence. Current reviews may take standards to the highest regulatory bar (as with the GDPR); however, there is potential for confusion.
  • Ownership of IP – do we need new laws, or can we extend the application of existing laws? For example, database rights, which protect “an investment (in human and technical resources and effort and energy) in the obtaining, verification or presentation of the contents of the databases”.

On issues which will impact the new regulations

There is no agreement on the definition of AI. What is the purpose of the definition in terms of what we want to achieve in the law? We should keep the definition relatively broad, to ensure that relevant technology is not inadvertently excluded, and resist limiting the definition to unsupervised learning. However, even erring on the side of breadth, automated systems that use only very clearly defined “if, then” processes should not be included in the definition of AI.
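The distinction drawn above can be sketched in code. This is a purely illustrative example, not drawn from the roundtable or any proposed legal text: the triage scenario, function names and data are all hypothetical. It contrasts a fixed “if, then” system, whose every outcome is explicitly authored, with a minimal system whose decision boundary is derived from data and therefore not fully specified in advance.

```python
def rule_based_triage(temperature_c: float) -> str:
    """A fixed 'if, then' system: every outcome is explicitly authored,
    so on the view above it would sit outside a definition of AI."""
    if temperature_c >= 38.0:
        return "refer"
    return "monitor"


def learn_threshold(examples):
    """A minimal 'learning' system: the decision boundary is derived
    from data rather than hand-written by the author."""
    referred = [t for t, label in examples if label == "refer"]
    monitored = [t for t, label in examples if label == "monitor"]
    # The midpoint between the two classes' means stands in for training.
    return (sum(referred) / len(referred) + sum(monitored) / len(monitored)) / 2


# Hypothetical historical decisions used as training data.
history = [(36.5, "monitor"), (37.0, "monitor"), (38.5, "refer"), (39.2, "refer")]
threshold = learn_threshold(history)  # 37.8 for this data


def learned_triage(temperature_c: float) -> str:
    """Behaviour depends on the data seen, not on an authored rule."""
    return "refer" if temperature_c >= threshold else "monitor"


print(rule_based_triage(38.6))  # outcome fixed in advance by the author
print(learned_triage(38.6))     # outcome depends on the training data
```

Both systems here happen to agree on the input 38.6, but only the first has behaviour that is fully and transparently specified before deployment, which is the property that arguably places pure rule systems outside the scope of AI-specific regulation.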

About the authors:  Cezary Bicki, Katherine Hurrell, Holly Pearsall and Mark O’Conor are members of the DLA Piper technology sector team.