Analysis

April 17, 2023

EU’s new AI law could decimate generative AI on the continent, warn founders

Founders in the space are worried about blanket rules being applied across very different areas of AI


Tim Smith

The EU’s proposed AI legislation is set to deal a serious blow to companies working with generative models if it is ratified as is, founders in the sector have warned — putting the region even further behind the US and China in the AI arms race. 

According to entrepreneurs working with these models, the legislation as written would subject large language models (LLMs) to the highest degree of scrutiny and red tape, even if they’re not being used for sensitive use cases like hiring. 

European companies and investors have become increasingly concerned about the region’s ability to compete with US rivals like OpenAI, NVIDIA and Anthropic without the support of regulators. The bloc has the world’s strictest privacy rules; just earlier this month Italian regulators said they were banning OpenAI’s ChatGPT over privacy concerns.

“We need to play catch up and now we’ll be doing it with a handicap,” says Jonas Andrulis, CEO of LLM builder Aleph Alpha, a European rival to OpenAI.

The AI Act draft, which is still under discussion, outlines different risk categories of AI use, ranging from “low-risk” through “high-risk” to “unacceptable risk”; the last applies to use cases that will be banned outright, such as real-time facial recognition in public places.

Given that the act will be the world’s first regulatory framework for AI, and so is likely to provide a template for other geographies, much as GDPR inspired California’s privacy laws, politicians say it’s important to get it right.

Risky business

“High-risk” use cases — the most sensitive applications that will be permitted — include things like using AI in recruitment or analysing creditworthiness. Companies using these technologies will be obliged to provide regular reporting to EU regulators on their tech, alongside third-party auditing.

The issue, Andrulis says, is that “general purpose AI” like LLMs would be treated as high-risk if they could hypothetically be used for a high-risk application.

Andrulis says this will be a “resource drain” — time and money needed for reporting — that’ll make it even harder for companies like his to compete in a global market.

Peter Sarlin, cofounder and CEO of Silo AI, which is developing generative models for corporate clients, agrees that classifying all “general purpose AI” as high-risk will result in companies being regulated unnecessarily.

“If we are sort of generalising across generative AI technology, and saying that all use cases that utilise generative pre-trained transformers (GPTs) are high-risk, then I think we will also be regulating quite a lot of use cases that aren't actually high-risk,” he says.

'Bad regulation isn’t good'

Sarlin is keen to emphasise that he supports regulatory efforts to make AI safer (he was a signatory of a recent open letter calling for a six-month pause on building more powerful AI models than OpenAI’s GPT-4), but says there are limits. 

“I'm not saying regulation is bad. I'm saying that bad regulation isn't good,” he argues.

Classifying all generative models as high-risk is part of a wider issue with the legislation, he says: it applies a single sweeping set of rules across a broad range of technologies and sectors, each facing very different AI-related challenges.

“The major challenge is this sort of horizontal perspective to regulation, where you are trying to generalise across a number of different verticals or use cases,” he says. “There's going to be a very, very big difference comparing, say, autonomous vehicles to retail or to finance.”

His criticisms echo those of AI investor Ian Hogarth, who argued in the Financial Times last week that general-purpose AI should be regulated entirely separately from “narrowly useful” AI systems.

Nicklas Bergman, strategic adviser to the EU’s European Innovation Council and founder of deeptech investment firm Intergalactic Industries, says that European regulators should be looking to harmonise rules with the US. 

The world’s largest economy is still figuring out how it will regulate powerful AI, but Senate majority leader Chuck Schumer has laid out the areas he believes Congress should look at, including transparency around data and algorithm training, and explainability of how AI systems work.

“The more countries that can be on board with a discussion like that, the better it is,” says Bergman. “Like we have with genetics, nanomaterials and nuclear, the more that regulation can be aligned, the better it is.”

Bulgarian MEP Eva Maydell is rapporteur on the AI Act for the EU Parliament’s Industry, Research and Energy Committee — one of the bodies helping shape the legislation. She agrees that the classification of all generative models as high-risk shouldn't be in the final law.

“I do not believe all generative AI models should automatically be considered ‘high-risk’ — there are simply too many possible use cases of varying risk levels,” she tells Sifted. “That being said, we also need to address powerful foundation models. Exactly how we do so is currently under negotiation but there is a political will to act on this topic.”

Impacts

Europe doesn’t have the same number of large generative AI companies as the US, but it is home to a number of smaller companies building generative models for corporate clients, such as Amsterdam-based Zeta Alpha and Oslo-based Iris.AI.

In a recent survey of 14 European VCs by the Initiative for Applied AI, 11 said they’d be less likely to invest in a startup whose product carried a high-risk classification, while eight said such a classification would also negatively impact the startup’s valuation.

Andrulis says that if the EU gets the regulation wrong, the eventual impact will be that the bloc becomes a buyer rather than a seller.

“This will take away all the creative energy and all the resources we have that we should use for innovation,” he says. “We will live in a world that is defined and structured by others.”

Some might read comments like that as contributing to “race dynamics” — Sam Altman accelerationism with a European tint — but Andrulis points out that, while experts are calling for a pause on models more powerful than GPT-4, Europe is still only at “GPT-three-point-something”.

“We are one generation behind with three orders of magnitude less funding,” he says. “All this is making it harder for Europe to compete.”

Tim Smith is news editor at Sifted. He covers deeptech and AI, and produces Startup Europe — The Sifted Podcast.