Flags of EU member states in front of the European Parliament building

Big Tech Alliance Criticizes EU Decisions on AI Training

Led by Meta, a coalition of the biggest names in tech and AI research (along with some unrelated businesses such as Prada) has sent an open letter to the European Union warning that the bloc's decisions on regulating AI training threaten to hold the region back.

The letter follows decisions by Meta, X and LinkedIn in recent months to indefinitely halt AI training in the EU, mostly out of concern about potential General Data Protection Regulation (GDPR) enforcement actions. These companies have largely focused on the perceived inconsistency and uncertainty in how GDPR rules are applied to emerging AI models, which by nature are opaque to users and difficult to inventory for collected personal data.

Open letter from tech industry lambastes EU decisions on AI

The open letter calls for “harmonised, consistent, quick and clear decisions” from the bloc, where member states have some individual leeway on how to approach AI training regulation using the GDPR and other enforcement tools. This was illustrated in 2023 when Italy issued a temporary ban on ChatGPT within its borders, something that then prompted a number of other EU nations to initiate their own investigations.

In terms of clarifying those AI training rules, the European Commission has thus far responded only by saying that tech outfits are expected to abide by established data privacy rules and the EU decisions legally based on them. But compliance has proven challenging for certain AI systems, particularly the large language models (LLMs) in widest use, because of their core design. The companies that build them have genuine difficulty restricting their intake of personal information, plumbing their depths to determine what they have stored, and altering the models to stop them from retaining or using that information. The fix is often to simply build a new version of the model and scrap the old one, a process that can involve months of fresh training.

The EU was the first part of the world to pass comprehensive AI regulation in the form of the AI Act, but its terms are staggered and will continue rolling out until at least 2026. Due to this, and to uncertainty about how existing GDPR rules will be applied across EU decisions, a number of tech's biggest names have opted to halt all AI training in the EU for the time being; as a result, certain AI-driven products, such as Meta's Threads, face indefinite delays in being introduced in the region.

Will the EU fall behind on AI training?

The open letter warns the EU of the imminent possibility of missing out on two "cornerstones" of AI training: "open models" that are provided to the general public free of charge, and "multimodal models" that combine the ability to process multiple forms of media (such as speech and video) in concert with the mostly text-based capabilities of today's LLMs. It also cautions that the next generation of AI models will be deficient in European culture, language and knowledge if regional data is not available to train them with.

Harmonization of EU decisions seems to be the tech firms' central immediate request. That is something the EU AI Act was supposed to deliver, at least in theory, but unresolved tension between it and the GDPR remains. The two rule sets are supposed to complement each other, but can come into conflict when a company is designated a "deployer" of AI systems under the AI Act; this can clash with an otherwise valid designation of "controller" or "processor" under the GDPR. There are also unresolved legal questions about exactly how "automated" an AI system's decisions are with regard to GDPR requirements. These are just a couple of examples of the remaining grey areas that can impact AI training approaches.

A June study by trade association DIGITALEUROPE found that the EU is lagging global rivals not just in AI training but in seven of eight critical technology sectors. It identifies a lack of regional investment in AI as a key factor, with the bloc attracting only about one-seventh of the money that goes to the US. It also cites the regulatory burden as not only putting off AI firms looking to do research but also hampering companies' ability to scale. And while Europe produces a great deal of tech talent, EU decisions often lead to that talent departing for the US or other more prosperous destinations.

A more worrying report was recently issued by the far more neutral European Commission, which found that current EU decisions put the bloc roughly 100 years away from meeting the 2030 goals set for AI by the Digital Decade project. Issues include a very low rate of AI adoption by businesses (only 11% at present), an inability to produce tech "unicorns," and a widespread lack of technical skill among the general public.