Amnesty International Sounds Alarm on EU AI Act Transparency Rollbacks

Amid the push to “simplify” EU tech regulations, a critical safeguard for oversight of high-risk AI systems faces potential elimination. According to Amnesty International’s latest analysis, the Digital Omnibus proposals could strip away one of the few transparency mechanisms currently embedded in the EU AI Act: the mandatory publication of risk assessments on the EU database.

What’s Changing

Under the current AI Act framework, companies must document whether their systems qualify as high-risk and register these assessments in the public EU database. This creates a minimal but essential layer of accountability: external stakeholders and regulators can scrutinise a company’s self-assessment, and the decision trail is transparent.

The proposed changes would reverse this. AI companies would no longer be required to publish risk assessments, giving them full discretion to classify their own systems internally without public disclosure. The implications are stark: a company could determine that a high-risk system (facial recognition, credit scoring, hiring algorithms) falls outside the definition and face no obligation to justify that decision publicly.

Why This Matters Now

The Digital Omnibus process is accelerating through trilogue negotiations in April 2026, with final rules still in flux. The European Parliament and Council positions suggest postponing core implementation deadlines to December 2027 or August 2028, ostensibly to give regulators and industry breathing room.

But Amnesty’s concern points to a deeper tension: the delays aren’t just about timing; they’re bundled with substantive weakening of oversight mechanisms. The proposal to eliminate mandatory assessment publication signals a broader shift in regulatory philosophy: from proactive transparency to reactive auditing, and from uniform rules to company self-determination.

Practical Impact for Builders and Users

For AI developers, this creates legal ambiguity. Without clear benchmarks for what constitutes high-risk classification, companies face uncertainty about compliance obligations. Some may over-comply; others may take aggressive interpretations of exemptions.

For citizens and civil society, the impact is more concerning: algorithmic systems affecting employment, credit, and public services could operate under regulatory classifications determined entirely by the deploying company, with no public record of the justification.

For regulators across Ireland and the EU, the reduction in transparency data means slower detection of non-compliance and reduced ability to perform meaningful post-market surveillance.

Open Questions

Several critical unknowns remain:

  • Will the final Digital Omnibus text include this transparency rollback? Trilogue negotiations are still active, and Parliament pressure could restore the mandate.
  • How will Member State regulators adapt enforcement strategies if mandatory publication is eliminated? Will they create parallel transparency requirements?
  • Does eliminating the database requirement weaken the EU’s competitive advantage in demonstrating trustworthy AI governance globally?

What’s Next

Stakeholders should monitor trilogue outcomes closely over the coming weeks. The European Commission’s simplification agenda deserves scrutiny: streamlining implementation timelines may be reasonable, but eliminating transparency mechanisms represents a qualitative weakening of the framework’s enforceability.
Source: Amnesty International