Last week we were privileged to host a London event with W@Competition, bringing together a dynamic roster of expert speakers with truly diverse perspectives – from regulators to in-house lawyers to economists – to discuss the hot topics around antitrust and the rapidly evolving world of Generative AI.
After a welcome from Rebecca Saunders (W@ Executive Committee), Jessica Lennard (CMA Chief Strategy & External Affairs Officer) gave the keynote speech, followed by two panels:
- Panel 1: “Exploring the Generative AI value chain, including the challenges posed by AI business models for merger control” – chaired by Jenine Hulsmann (Partner, Weil) with panellists Annemiek Wilpshaar (European Commission), Jantira Raftery (Meta), Katie Curry (RBB Economics) and Dr Liza Lovdahl Gormsen (BIICL).
- Panel 2: “Navigating the interplay between AI regulation and antitrust tools” – chaired by Lucy Chambers (Associate, Weil) with panellists Professor Annabelle Gawer (University of Surrey Business School, IMD and CMA Independent Digital Expert), Claudia Berg (ICO), Kristina Barbov (Microsoft) and Luisa Affuso (Ofcom).
The room was full (as was the waitlist) and the discussions were passionate and deeply insightful. For those who were in the room – and those who weren’t – here are our five key takeaways:
- AI business models pose new jurisdictional challenges
New and evolving AI business models and related commercial arrangements have attracted increasing regulatory scrutiny, particularly given persistent concerns that perceived “killer acquisitions” may be escaping review.
For its part, the UK Competition and Markets Authority has been testing the limits of its jurisdictional powers to review partnerships and staff- or “acqui-hires”. The CMA’s transparency over its assessment of recent cases – particularly those found not to qualify – is welcome and provides much needed clarity to businesses active in the Generative AI value chain.
Subject to different (and less elastic) jurisdictional tests, the European Commission wants to review concentrations falling below the EU thresholds – and to regain the flexibility it claimed via Member State referrals before the European Court of Justice’s recent Illumina/GRAIL judgment. That ambition will undoubtedly be discussed with the EU’s new competition chief, Teresa Ribera. Longer-term options include (i) revising the EU turnover thresholds, (ii) introducing a new safety valve, and (iii) modifying the Article 22 referral mechanism. In the short term, the Commission will seek to rely on Member States’ (increasingly numerous) call-in powers to refer cases for EU review.
- Regulators need to get to grips with the substance to harness innovation
The development of AI technologies represents a huge opportunity for innovation and economic growth. Whilst AI markets may be fast-paced, a slower, more considered approach by regulators may be preferable to avoid the potential chilling effects of over-enforcement. Indeed, Mario Draghi’s recently published competitiveness report warns that “a weak AI ecosystem would represent an obstacle to EU companies’ digitalisation and productivity.” Regulators must therefore perform a careful balancing act, which requires an in-depth understanding of the relevant technologies and market structures and of when and how to intervene; stakeholder engagement will be especially important.
Meanwhile, concerns about potential under-enforcement are fuelled by criticisms levelled at regulators during the first wave of digitalisation in the early 2000s. But closer examination of the AI value chain may reveal more differences than similarities: AI may not be Web 2.0. While some digital markets are characterised by strong network effects that give significant advantages to early movers, AI markets feature a diverse and constantly growing range of players and evolving business models, coupled with entry throughout the value chain. Digital access points (i.e. devices such as laptops and phones) are few in number, whereas AI access points will increasingly take new forms, including wearables, and could soon be effectively integrated into our physical world. Fears of entrenchment in such nascent and evolving markets may therefore be difficult to rationalise. In this context, regulators should consider carefully which regulatory tool to use, and whether to enforce at all or simply monitor. The CMA has the benefit of its market study tool, which will be especially valuable for building expertise.
- Concurrency is a privilege
The borderless nature of Generative AI and heightened regulatory scrutiny mean that cooperation between concurrent UK and international regulators will increase, with regulators able to develop and share vital expertise. For example, the CMA’s Digital Markets Unit has been working closely with Ofcom and may draw on Ofcom’s expertise where its own resources or specialisation are limited. Wider UK cooperation has picked up pace via the Digital Regulation Cooperation Forum (DRCF, comprising the CMA, Ofcom, FCA and ICO). The DRCF recently launched an AI and Digital Hub and publishes reports and guidance, for example on the interplay between consumer and data protection in AI.
International collaboration has also accelerated, with alignment on timing and process where appropriate (see, for example, the recent joint statement by the US DOJ and FTC, the European Commission and the CMA). However, incentives and enforcement goals will not always be aligned.
- We need to think beyond the antitrust box
AI has the potential to transform society in myriad ways, with opportunities for advancement in areas ranging from productivity, medicine and education to agriculture. Antitrust is just one piece of the global regulatory jigsaw, alongside areas such as data protection, online safety and consumer protection. For example, the UK Information Commissioner’s Office will look to data protection enforcement in relation to the training of AI models. Meanwhile, following the passing of the Online Safety Act 2023, Ofcom recently published a discussion paper on deepfakes, noting its ambition to “liaise with the Government to identify potential regulatory gaps”. Effective regulation of AI-generated content will therefore require a holistic approach, plus an appreciation by competition enforcers that antitrust is not the only game in town.
- We need to consider antitrust’s role in a sustainable AI value chain
Thought needs to be given to the ESG implications of energy consumption further up the AI value chain – for the ability to meet climate targets and for on-shoring imperatives – and to what role (if any) antitrust should play. Whether antitrust should intervene is ultimately a policy question, and one that may come into the spotlight with the change in leadership at the Commission: Commissioner Teresa Ribera’s background in energy may lead her to take a keen interest in sustainability considerations in antitrust. Collaboration between regulators and policy-makers will also be needed to enable continued innovation and competitiveness throughout the AI value chain without sacrificing sustainability goals.