Meta has formally declined to sign the European Union’s voluntary AI Code of Practice, a framework intended to help companies comply with the bloc’s forthcoming AI Act. Joel Kaplan, Meta’s chief global affairs officer, described the code’s requirements, such as keeping technical documentation continuously up to date, barring the use of pirated content for training, and honoring content owners’ opt-out requests, as “overreach” that creates legal uncertainty and risks throttling frontier AI development in Europe.
The EU’s AI Act, with key provisions set to take effect later this year, outright prohibits applications it deems “unacceptable risk,” such as social scoring and cognitive behavioral manipulation. It also imposes strict registration and risk-management obligations on “high-risk” AI uses, including biometric systems. The voluntary Code of Practice is meant to guide companies toward compliance with these new standards.
Meta’s refusal highlights the growing friction between Europe’s ambitious regulatory agenda and global AI developers wary of rules they see as stifling innovation. Observers say similar objections may emerge from other major tech firms, a potential hurdle for the EU’s goal of fostering AI development that is both transparent and competitive within its borders.