In the rapidly evolving landscape of artificial intelligence (AI) and, in particular, generative AI, significant legal developments have recently unfolded that affect our understanding and approach to possible antitrust/competition interventions in the AI sector. Within the last month, two key documents have emerged:

  • On 30 October, in Washington DC, President Biden issued the U.S. Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI (U.S. Executive Order), which has a separate section on competition.
  • On 8 November, on the other side of the Pacific Ocean in Tokyo, the G7 competition authorities issued a Digital Competition Communiqué (G7 Communiqué), which also included a special focus on AI and generative AI.

For the sake of completeness, it is important to note that another document originating from the other side of the Atlantic Ocean, the Bletchley Declaration by Countries Attending the AI Safety Summit in the United Kingdom, was published in November 2023. Rather than focusing on antitrust issues, however, this document sets out the shared aims of the nations attending the summit: identifying AI safety risks of shared concern and building risk-based policies to address such risks. For example, it states that, amongst other things, “fairness” as an issue with AI needs to be addressed and underscores the urgency to act “in the context of the acceleration of investment in technology”.

Returning to the U.S. Executive Order and the G7 Communiqué, it can be remarked that both of these applaud the various advantages stemming from AI, while at the same time warning against certain competitive risks:

  • The G7 Communiqué very clearly recognises the key question in respect of currently emerging AI markets: should competition authorities “sit and wait” until the waves of competitive forces form the proper structure of the markets, or should they intervene early to dismantle bottlenecks? The G7 Communiqué states: “Inaction can be especially costly in these markets because consolidated power can stifle innovation [and] the development and use of AI [may] become dominated by a few players”. In light of this, AI companies should therefore anticipate vigorous oversight (and possibly active intervention) from competition authorities, even at this relatively early stage in the development of generative AI.
  • Regarding substantive risks, both the G7 Communiqué and the U.S. Executive Order focus on ensuring access to “key inputs” for AI. In the G7 Communiqué these key inputs may be summarised as:
    • the massive amounts of data necessary to train generative AI models;
    • a skilled engineering and research workforce; and
    • significant computational resources (such as cloud computing).
  • The U.S. Executive Order covers essentially the same elements, naming “key assets” as including semiconductors, computing power, cloud storage, and data.
  • Regarding conduct, the G7 Communiqué expressly mentions bundling, tying, exclusive dealing and self-preferencing as possible key antitrust concerns, while the U.S. Executive Order merely refers to dominant firms disadvantaging competitors in general.
  • Interestingly, the G7 Communiqué also discusses mergers by stating that incumbents could use acquisitions or partnerships to facilitate the above conduct and underlines that – in terms of market structure – control over data and the network effects of AI markets could create barriers to entry and enable concentration or dominance.
  • Regarding anti-competitive agreements / concerted practices, the G7 Communiqué notes that, with companies increasingly relying on AI (for example, in setting their own prices in sectors such as real estate rental or hospitality), there is a risk that such tools could facilitate collusion or unfairly raise prices. Again, the U.S. Executive Order is vaguer here, merely making a general mention of unlawful collusion as a key risk.
  • Finally, it is interesting to note that the U.S. Executive Order also mentions “workers” as a category of subjects that needs to be protected, including in an antitrust context. This is clearly in line with the wider ambit of the order (which tackles a number of AI-related issues, not only antitrust) as well as with the recent US antitrust enforcement focus on the interplay between labour markets and antitrust.

As the world stands on the brink of a technological revolution, the recent developments highlighted in the U.S. Executive Order and the G7 Communiqué serve as a clarion call for the development of vigilant and proactive antitrust regulation in the AI sector. At the same time, the nuanced approach towards key inputs, conduct, and anti-competitive agreements underscores the need for comprehensive oversight that fosters innovation while preventing market dominance and unfair practices. These developments will also be relevant for other national competition authorities, which wish (or will be compelled…) to navigate the complex, dynamic world of AI and competition.

For more information regarding the U.S. Executive Order, please see our recent publication Safe, secure, and trustworthy: White House publishes Executive Order setting the foundations for future AI regulation in the US.

Learn more about DLA Piper’s Artificial Intelligence and Data Analytics and Antitrust and Competition capabilities on our website.