AI Coding Agents Flood Enterprise Systems with High-Risk Vulnerabilities at 87% Rate
New DryRun Security research reveals AI coding agents introduce vulnerabilities in nearly 9 of 10 pull requests, exposing enterprises to critical security risks.
AI Coding Agents: An 87% Vulnerability Rate That Should Concern Every Enterprise
As enterprises across Ireland and Europe rapidly adopt AI-assisted development tools, a critical new study reveals a sobering reality: nearly nine out of every ten pull requests generated by AI coding agents contain at least one security vulnerability.
Research from DryRun Security examined 30 pull requests produced by AI coding agents and found that 26 of them—an alarming 87%—introduced vulnerabilities. This finding arrives at a particularly sensitive moment for Irish and European organisations already struggling to implement the EU AI Act’s safety requirements by August 2026.
What’s Actually Happening
The vulnerability surge represents a fundamental mismatch between the speed of AI code generation and the maturity of security practices in development workflows. While AI agents like GitHub Copilot and similar tools promise faster development cycles, they’re simultaneously flooding codebases with security issues that manual code review processes are struggling to catch.
This trend aligns with broader patterns tracked by Georgia Tech’s Vibe Security Radar project, which documented 35 new CVEs directly attributed to AI-generated code in March 2026 alone—a fivefold increase from January’s six CVEs.
Why This Matters for Irish and European Organisations
For Irish organisations preparing for EU AI Act compliance, this research underscores a critical gap: most development teams lack the security infrastructure to safely deploy AI coding agents. The issue isn’t the tools themselves—it’s that adoption has vastly outpaced the implementation of compensating security controls.
With 2,130 AI-related CVEs disclosed in 2025 (a 34.6% year-over-year increase), enterprises are increasingly exposed to attack vectors they may not even realise exist in their own codebases. For organisations in high-risk sectors subject to stricter EU regulations, this vulnerability rate could create significant compliance liabilities.
Practical Implications for Development Teams
Irish and European organisations deploying AI coding agents need to reassess their approach immediately:
- Enhanced code review processes: Treat AI-generated code with at least the same scrutiny as external dependencies, if not more
- Security scanning integration: Mandate automated security analysis on all AI-generated pull requests before merge
- Developer training: Update teams on common AI-generated vulnerability patterns
- Tool configuration: Configure coding agents with security-aware prompts and constraints
- Audit trails: Maintain detailed logs of AI agent decisions for compliance and post-incident analysis
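The first two controls above can be sketched as a simple merge gate: identify AI-authored pull requests and require both an automated security scan and an explicit human approval before merge. This is a minimal illustration, not tied to any specific platform; the marker accounts, labels, and `PullRequest` fields are hypothetical stand-ins for whatever metadata a team's own CI system exposes.

```python
from dataclasses import dataclass, field

# Hypothetical agent account names; a real team would match on its own
# bot accounts, commit trailers, or PR labels.
AI_AUTHOR_MARKERS = {"github-copilot[bot]", "ai-agent[bot]"}


@dataclass
class PullRequest:
    author: str
    labels: set = field(default_factory=set)
    security_scan_passed: bool = False
    human_review_approved: bool = False


def is_ai_generated(pr: PullRequest) -> bool:
    """Flag PRs authored by a known agent account or labelled AI-generated."""
    return pr.author in AI_AUTHOR_MARKERS or "ai-generated" in pr.labels


def may_merge(pr: PullRequest) -> bool:
    """Gate merges: AI-generated PRs need a passing security scan *and*
    human approval; other PRs follow the normal review policy."""
    if is_ai_generated(pr):
        return pr.security_scan_passed and pr.human_review_approved
    return pr.human_review_approved
```

In practice the same policy would be enforced as a required status check in CI rather than an in-process function, but the decision logic, treating AI authorship as a trigger for stricter controls, is the same.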
The Broader Context
This vulnerability rate also reflects a concerning trend in how enterprises are integrating AI into critical infrastructure without adequate governance frameworks. As the EU AI Act implementation deadline approaches, organisations that have adopted AI tools without corresponding security investments face potential regulatory and operational risks.
Open Questions
Several crucial questions remain unanswered: Are certain categories of vulnerabilities more prevalent in AI-generated code? Do different AI platforms or models produce significantly different vulnerability rates? Most importantly, what security frameworks should enterprises implement to safely scale AI-assisted development while maintaining compliance with emerging EU regulations?
The answer isn’t to abandon AI coding agents; the efficiency gains are too significant. Rather, it is to treat their adoption as a security initiative requiring investment, governance, and continuous improvement, not merely as a developer productivity rollout.
Source: DryRun Security Research