Meta declines to abide by voluntary EU AI safety guidelines • The Register

Two weeks before the EU AI Act takes effect, the European Commission issued voluntary guidelines for providers of general-purpose AI models. However, Meta refused to sign, arguing that the extra measures introduce “legal uncertainties” beyond the law’s scope.
“With today’s guidelines, the Commission supports the smooth and effective application of the AI Act,” Henna Virkkunen, EVP for tech sovereignty, security and democracy, said in a statement on Friday.
“By providing legal certainty on the scope of the AI Act obligations for general-purpose AI providers, we are helping AI actors, from start-ups to major developers, to innovate with confidence, while ensuring their models are safe, transparent, and aligned with European values.”
The EU AI Act regulates the use of AI models based on four risk categories: unacceptable risk, high risk, limited risk, and minimal or no risk. Its goal is to prevent the amplification of illegal, extremist, or harmful content, and to ensure that models refuse disallowed requests, such as instructions for creating a bioweapon.
The General-Purpose AI (GPAI) Code of Practice focuses on general-purpose AI models trained with computing resources that exceed 10^23 FLOPs – almost any recently trained large-scale model. It asks for voluntary transparency and copyright commitments from those offering such models, as well as extra safety and security commitments from those distributing models that present systemic risk – “general-purpose AI models that were trained using a total computing power of more than 10^25 FLOPs.”
Over 30 AI models from companies like Anthropic, Google, Meta, and OpenAI appear to have been trained with at least 10^25 FLOPs.
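The thresholds above amount to a simple two-tier bucketing by total training compute. A minimal sketch of that logic, using the 10^23 and 10^25 FLOP cut-offs from the Code (the example FLOP figures are illustrative, not official EC classifications):

```python
# Compute thresholds from the GPAI Code of Practice, as described above.
GPAI_THRESHOLD = 1e23           # in scope for transparency/copyright commitments
SYSTEMIC_RISK_THRESHOLD = 1e25  # extra safety and security commitments

def classify(training_flops: float) -> str:
    """Return the GPAI tier implied by total training compute."""
    if training_flops > SYSTEMIC_RISK_THRESHOLD:
        return "systemic risk"
    if training_flops > GPAI_THRESHOLD:
        return "general-purpose"
    return "out of scope"

print(classify(5e25))  # Llama 4 Behemoth's reported compute -> "systemic risk"
print(classify(3e24))  # a mid-sized frontier model (illustrative) -> "general-purpose"
```

Note that both thresholds are phrased as "more than", so a model sitting exactly at 10^25 FLOPs would fall in the lower tier.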
Meta, long criticized for its data-hungry tactics in the EU, doesn’t want to play along, however.
Meta says it will ignore the GPAI Code, a stance that lets its Llama 4 Behemoth (trained with roughly 5×10^25 FLOPs) roam unhindered.
“Europe is heading down the wrong path on AI,” said Joel Kaplan, chief global affairs officer at Meta, in a LinkedIn post. “We have carefully reviewed the European Commission’s Code of Practice for general-purpose AI (GPAI) models and Meta won’t be signing it. This Code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act.”
Kaplan noted that European businesses and policymakers have objected to the EU AI Act, pointing to the recent open letter from the likes of Siemens, Airbus, and BNP that urged EU leaders to halt the implementation of the rules.
“We share concerns raised by these businesses that this over-reach will throttle the development and deployment of frontier AI models in Europe, and stunt European companies looking to build businesses on top of them,” said Kaplan.
In April, Meta was fined €200 million (~$232.6 million) by the EC for failing to meet consumer data privacy obligations with its “Consent or Pay” business model, which the Commission deemed a violation of Europe’s Digital Markets Act (DMA).
Last week, according to Bloomberg, the EC told Meta in a letter that the company’s “Consent or Pay” model remains non-compliant. Meta was also fined €797.72 million (~$927.19 million) by the EC in November for tying its online classified ads service Facebook Marketplace to its Facebook social network in violation of antitrust rules.
EC spokesperson Thomas Regnier told The Register via email that all GPAI providers will have to comply with the AI Act when its obligations for such models take effect on August 2 this year.
“The Code of Practice is a voluntary tool, but a solid benchmark,” said Regnier. “If a provider decides not to sign the Code of Practice, it will have to demonstrate other means of compliance. Companies who choose to comply via other means may be exposed to more regulatory scrutiny by the AI Office.” ®