Critical Vulnerabilities Expose Millions of AI Applications

Three critical security vulnerabilities disclosed on March 27, 2026, have exposed serious flaws in LangChain and LangGraph, two of the most widely used AI development frameworks. The vulnerabilities include CVE-2026-34070, a path traversal flaw allowing arbitrary file access, and CVE-2025-67644, an SQL injection vulnerability in LangGraph’s checkpoint system.
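Both flaw classes have well-known mitigations. The sketch below is purely illustrative: the function names, schema, and logic are hypothetical and are not taken from the LangChain or LangGraph codebases. It shows the standard defences, resolving user-supplied paths and rejecting anything that escapes a base directory, and passing user input to SQL as bound parameters rather than string concatenation.

```python
import sqlite3
from pathlib import Path

# Hypothetical illustration only; names do not correspond to the patched code.

def safe_read(base_dir: str, user_path: str) -> str:
    """Read a file under base_dir, refusing path traversal (e.g. '../../etc/passwd')."""
    base = Path(base_dir).resolve()
    target = (base / user_path).resolve()
    if not target.is_relative_to(base):  # Python 3.9+
        raise ValueError("path traversal attempt blocked")
    return target.read_text()

def load_checkpoint(conn: sqlite3.Connection, thread_id: str):
    """Fetch a checkpoint row with a parameterized query, so a crafted
    thread_id like "t1' OR '1'='1" is treated as data, not SQL."""
    cur = conn.execute(
        "SELECT data FROM checkpoints WHERE thread_id = ?", (thread_id,)
    )
    return cur.fetchone()
```

The key design point in both cases is the same: untrusted input is validated or bound against a fixed structure (a resolved base directory, a precompiled SQL statement) instead of being interpolated into it.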

The timing couldn’t be worse: LangChain recorded over 52 million downloads in a single week, with LangChain Core seeing 23 million downloads and LangGraph crossing 9 million. This massive adoption means a single vulnerability can cascade across hundreds of dependent applications.

Industry Faces Unprecedented Security Crisis

At RSA 2026, industry leaders painted a stark picture of the current threat landscape. Security experts described entering “an unprecedented two- to three-year period of upheaval” where AI systems are discovering vulnerabilities exponentially faster than defenders can respond.

Kevin Mandia’s firm Armadin recently tested a Fortune 150 company with strong security practices and found either remote code execution vulnerabilities or data leakage paths in every application tested. The results shocked both the testing team and the client.

The numbers tell the story: at least 35 new CVEs in March 2026 were directly attributed to AI-generated code, up from just six in January. If current trends continue, 2026 could see between 2,800 and 3,600 AI-related CVEs, a potential 69% increase from 2025.

European Regulatory Response

The EU AI Office has responded to the escalating security concerns with its second draft Code of Practice on AI-generated content transparency, open for feedback through March 30. The new requirements significantly narrow compliance discretion, reflecting growing regulatory concern about AI security risks.

Practical Implications for Developers

Organisations using these frameworks should immediately update to patched versions and conduct comprehensive security reviews of their AI deployments. The Model Context Protocol, another popular AI integration standard, has seen 30 CVEs filed in the past 60 days, with 38% of public MCP servers lacking authentication entirely.
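A first practical step is simply knowing which installed packages sit below the patched releases. The sketch below, with placeholder minimum versions that are not the real fix releases, checks the local environment with the standard-library `importlib.metadata`:

```python
from importlib.metadata import PackageNotFoundError, version

# Placeholder minimums for illustration; consult the advisories for the
# actual patched releases.
MIN_PATCHED = {
    "langchain": "0.3.0",
    "langgraph": "0.2.0",
}

def parse(v: str) -> tuple:
    """Turn '0.3.1' into (0, 3, 1) for simple tuple comparison."""
    return tuple(int(p) for p in v.split(".")[:3] if p.isdigit())

def audit(minimums: dict) -> dict:
    """Return {package: installed_version} for packages below their minimum."""
    outdated = {}
    for pkg, floor in minimums.items():
        try:
            installed = version(pkg)
        except PackageNotFoundError:
            continue  # not installed, nothing to patch
        if parse(installed) < parse(floor):
            outdated[pkg] = installed
    return outdated
```

A check like this belongs in CI rather than on developer laptops, so an outdated transitive dependency fails the build before it reaches production; dedicated tools such as `pip-audit` do the same job against the full CVE database.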

Open Questions

The security community faces fundamental questions about whether traditional vulnerability management can keep pace with AI-accelerated threat discovery. As one expert noted, when advanced AI capabilities become widely available through open-source models, “you’re going to have every 19-year-old in St. Petersburg with the same capability” as elite vulnerability researchers.

For Irish and European organisations building AI systems, the message is clear: security can no longer be an afterthought in AI development.


Source: Multiple Security Research Sources