Meta Pushes Back: Why It’s Not Signing the EU’s AI Code of Practice

Meta has just drawn a bold line in the sand—and it’s aimed straight at Europe’s upcoming AI regulations.

As the European Union prepares to roll out its landmark AI Act, Meta announced that it will not sign the bloc’s voluntary Code of Practice for general-purpose AI (GPAI) models. This decision is already sparking intense debate across the tech world, and it hints at a growing rift between Silicon Valley and global regulators.

Let’s break down what’s going on, why Meta is pushing back, and what this could mean for the future of AI—both in Europe and around the world.


🧠 What Just Happened?

Meta—the parent company of Facebook, Instagram, and the open-source AI model LLaMA—has officially declined to support the EU’s new AI Code of Practice.

This Code is a voluntary framework meant to help companies get ready for the EU’s AI Act, which will soon become binding law. The AI Act is the world’s first major attempt to regulate artificial intelligence systems based on risk.

However, Meta isn’t on board. In a public post on LinkedIn, Joel Kaplan, Meta’s VP of Global Affairs, warned that the EU is going too far:

“Europe is heading down the wrong path on AI… This Code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act.”


📋 What Does the Code of Practice Require?

Although non-binding, the Code outlines key responsibilities for developers of general-purpose AI—tools used across many industries. It asks companies to:

  • Clearly explain how their models are trained and applied

  • Avoid using pirated or unlicensed content

  • Respect opt-out requests from content creators

  • Conduct risk assessments

  • Regularly update and disclose model changes

The idea is to get developers following these principles now, before the AI Act's obligations for general-purpose AI models take effect on August 2, 2025.


❌ Why Is Meta Refusing?

Meta’s rejection boils down to two main arguments:

1. Legal Uncertainty

Meta claims the Code blurs the line between voluntary guidelines and binding regulation. This, they say, creates confusion and risk for developers.

2. Overregulation

The company believes the EU’s approach is too strict. It fears this could slow innovation and hurt Europe’s ability to stay competitive in AI.

In short, Meta sees the Code as a barrier—not a bridge—to building safe, useful AI.


🌍 A Growing Rift Between Big Tech and Brussels

Meta isn’t alone. Other AI leaders—including Google (Alphabet), Microsoft, OpenAI, Mistral AI, and Anthropic—have raised similar objections. They’re urging the EU to slow down or adjust the law’s scope.

But the European Commission isn’t backing down. It plans to move forward with full enforcement of the AI Act.

Under the current plan, companies whose general-purpose AI models were already on the market before August 2, 2025 must bring them into full compliance with the AI Act by August 2, 2027. That includes powerful systems like GPT-4, Claude, Gemini, and LLaMA 3.


⚖️ Why It Matters

This standoff is more than a policy disagreement. It reflects a global power struggle over who gets to shape the future of AI.

The EU wants to lead in ethical AI governance, hoping to repeat its success with the GDPR on data privacy. Meanwhile, U.S. tech giants worry that tough laws will slow development and push startups out of Europe.

Meta’s open-source model, LLaMA 3, raises even more questions about transparency, accountability, and who controls fast-moving AI tools.

If major companies refuse to comply, the EU may face legal fights—or risk seeing top AI innovators move to more regulation-friendly countries.


🌐 A Turning Point for Global AI Rules

Like it or not, the EU’s AI Act could become the world’s model for AI regulation—just as the GDPR influenced global privacy standards.

But if Big Tech keeps resisting, we have to ask:
Can governments and innovators find common ground fast enough to keep pace with AI’s rapid growth?


🤔 Final Thought

Whether you support Meta’s stand or back the EU’s cautious approach, one thing is certain:

The rules of the AI game are being written right now—and the decisions made today will shape how the world builds and uses AI for years to come.


💬 What’s Your Take?

Do you think stricter AI rules are necessary—or do they risk killing innovation?

Drop your thoughts in the comments. And if you found this useful, share it with a friend in tech.
