AI Systems Discover Hundreds of Critical Vulnerabilities in Major Software
OpenAI and Anthropic's AI tools found over 10,000 high-severity flaws in popular software, including decades-old vulnerabilities in OpenSSL.
Curated updates on significant developments in AI — from security incidents to breakthrough research.
European Commission publishes refined code of practice for AI content marking, with final rules taking effect August 2026.
New Trinity research shows Irish AI adoption nearly doubled in a year, while European startup Nscale raises massive funding round.
New research reveals AI-assisted development has doubled security flaws while attackers exploit 32% of CVEs on disclosure day.
New 8B-parameter model enables tracing every token back to training data, addressing critical AI transparency challenges.
Google's Quantum Echoes algorithm demonstrates verifiable quantum advantage, running 13,000x faster than classical supercomputers.
University of Michigan researchers create AI system that interprets brain MRI scans in seconds, identifying neurological conditions and urgent cases.
AISLE AI system uncovers unprecedented concentration of OpenSSL vulnerabilities while state actors weaponize AI for cyberattacks.
Claude Opus 4.6 found 500+ high-severity vulnerabilities while GPT solved exploit scenarios in under an hour, reshaping cybersecurity dynamics.
Two frontier AI models dropped simultaneously on Feb 5th, both claiming coding supremacy with autonomous agent capabilities.
Major AI labs shift focus from bigger models to efficient, interpretable systems with breakthrough hybrid architectures and monitoring tools.
New open-source models from DeepSeek and Xiaomi deliver GPT-5-level reasoning with dramatic efficiency gains for developers.
Chinese threat actors used Claude AI to execute 80-90% of cyberattacks autonomously while critical flaws emerge in Microsoft 365 Copilot and ServiceNow.
New vulnerabilities in popular AI frameworks and the first AI-orchestrated cyberattack signal a dangerous escalation in AI security threats.
New vulnerabilities in Chainlit and Docker AI tools emerge as Anthropic reports the first fully AI-orchestrated cyberattack campaign.
NVIDIA unveils groundbreaking physical AI models alongside Falcon-H1R 7B and optimized Qwen 30B for Raspberry Pi deployment.
Multiple critical vulnerabilities in n8n and ServiceNow AI platforms expose organizations to remote code execution attacks.
NVIDIA releases Alpamayo for self-driving cars and Nemotron 3 Nano with 1M context window, signaling industry shift toward specialized models.
New research exposes how leading AI models including Claude and GPT-4 reproduce nearly entire copyrighted works, challenging industry claims.
Breakthrough molecular AI hardware that physically encodes intelligence is reshaping how future AI systems will be built and deployed.
Chinese state-sponsored group used Claude AI to execute 80-90% of cyberattacks autonomously, targeting 30 global organizations.
OpenAI, Meta, and Mistral release flagship models with massive context windows, multimodal capabilities, and 40% fewer hallucinations.
OpenAI, Mistral, and NVIDIA release competing flagship models with dramatic improvements in reasoning, cost efficiency, and physical AI capabilities.
OpenAI's latest model, o3, demonstrates significant improvements in mathematical reasoning, coding, and complex problem-solving through extended test-time computation.