AI Code Tools Linked to Increased Vulnerabilities in Software
Severity: Medium (Score: 51.1)
Sources: The Register
Summary
Researchers from Georgia Tech's SSLab have identified a significant rise in vulnerabilities linked to AI-generated code. In March 2026 they reported 35 new CVEs, 27 of which were attributed to Claude Code, a popular AI coding tool. That is a sharp increase from August 2025, when only two CVEs had been linked to Claude Code. In total, 74 of the 43,849 advisories analyzed are now attributable to AI-generated code. The researchers caution that detection limitations mean the true number of vulnerabilities could be considerably higher, and they stress that the low count of confirmed AI-related vulnerabilities does not imply that AI-generated code is more secure than human-written code. The findings align with earlier research from Georgetown University, which found that nearly half of AI-generated code snippets contained bugs, underscoring the need for caution when using AI tools in coding practices.
Key Points
- Georgia Tech researchers have linked 74 CVEs to AI-generated code, 35 of them identified in March 2026.
- Claude Code authored 27 of the 35 new CVEs, reflecting its growing use in public repositories.
- Detection limitations mean the actual number of vulnerabilities may be significantly higher.
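The ratios behind these counts are worth making explicit. A quick back-of-the-envelope check, using only the figures cited in the summary above, shows how small the confirmed AI-linked share of all advisories still is, and how dominant Claude Code is within the new batch:

```python
# Figures taken directly from the Georgia Tech SSLab findings summarized above.
ai_linked_cves = 74          # total CVEs attributed to AI-generated code
total_advisories = 43_849    # advisories analyzed overall
new_march = 35               # new AI-linked CVEs reported in March 2026
claude_code_march = 27       # of those, attributed to Claude Code

overall_share = ai_linked_cves / total_advisories * 100
claude_share = claude_code_march / new_march * 100

print(f"AI-linked share of all advisories: {overall_share:.2f}%")   # about 0.17%
print(f"Claude Code share of March's new CVEs: {claude_share:.0f}%")  # about 77%
```

As the researchers note, the roughly 0.17% figure is a floor set by what can be detected and attributed, not an estimate of how often AI-generated code is actually vulnerable.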
Key Entities
- CVE-2025-55526 (cve)
- GitHub (platform)
- n8n-workflows (platform)
- X402 SDK (platform)
- OpenClaw (platform)
- Claude Code (tool)
- Code Llama 7B Instruct (tool)
- ESBMC (tool)
- GitHub Copilot (tool)
- GPT-3.5-Turbo (tool)