The Challenge

Siemens CEO Roland Busch has issued a stark warning: the European Union's approach to AI regulation risks losing a €1 billion industrial AI investment to the United States and China. The message is unambiguous: Europe's regulatory framework is creating compliance friction that makes building industrial AI applications in Europe less attractive than doing so in competing markets.

Busch argues that the EU’s AI Act and Data Act impose excessive oversight on industrial use cases that are already governed by sector-specific regulations. This layered approach to compliance, he suggests, fails to account for the fundamental differences between industrial AI applications (which operate in controlled environments with specific safety standards) and consumer-facing technologies (which require broader safeguards).

Why This Matters

This isn’t idle corporate complaining. Siemens represents precisely the kind of deep-tech manufacturer that Europe needs to build AI leadership in critical sectors like manufacturing, energy, and infrastructure. When a company of Siemens’ scale begins actively redirecting investment away from the bloc, it signals a structural problem with how the EU is implementing its regulatory vision.

The timing is particularly significant because Ireland's AI Office is preparing implementation guidelines for the EU AI Act, with a mandatory operational deadline of August 2, 2026. If major industrial players are already making investment decisions based on regulatory uncertainty, Ireland's implementation approach could either help attract this critical funding or further deter it.

The Real Problem

Busch's complaint points to a genuine misalignment: the EU AI Act was designed primarily around risks from consumer-facing and general-purpose AI systems. Applying the same risk-management framework to industrial automation creates inefficiencies. A robotic arm in a Siemens factory operates under vastly different conditions than a chatbot serving millions of users, yet both face comparable compliance burdens.

This regulatory misalignment comes at a crucial moment. The Digital Omnibus negotiations are currently debating whether to delay high-risk AI system compliance deadlines from August 2, 2026 to December 2, 2027. If timelines keep slipping, regulatory uncertainty deepens—and investment votes with its feet.

What This Means for Irish Builders

For Irish AI developers and companies, this warning is actionable intelligence. If you’re building industrial or manufacturing-focused AI solutions, the regulatory pathway may become increasingly complex. Conversely, if the EU adapts its framework to distinguish between industrial and consumer applications, Ireland could position itself as a hub for manufacturing AI innovation.

The AI Office's upcoming guidelines on high-risk classification (expected through 2026) will be critical. A clearer distinction between industrial and consumer applications could unlock significant investment; without it, Europe risks ceding manufacturing AI leadership to more permissive jurisdictions.

Open Questions

  • Will the Digital Omnibus negotiations address industrial AI specifically, or maintain a one-size-fits-all approach?
  • How will Ireland’s AI Office interpret high-risk classification for industrial use cases?
  • Could sector-specific carve-outs emerge from the compliance roadmap?

Siemens’ ultimatum is a reminder that regulation must adapt to technology—not the reverse.


Source: Siemens Chief Executive Statement