Context Hub Vulnerability Exposes AI Coding Agents to Supply Chain Attacks
Severity: High (Score: 64.5)
Sources: Theregister
Summary
Andrew Ng launched Context Hub, a service for coding agents to access API documentation, which has been found to have significant security vulnerabilities. Mickey Shmueli demonstrated a proof-of-concept attack showing that malicious documentation can be submitted via GitHub pull requests, allowing attackers to poison AI agents with harmful instructions. The service lacks content sanitization, making it easy for attackers to exploit the system. Among 97 closed pull requests, 58 were merged, indicating a high likelihood of successful attacks. The attack method involves suggesting fake dependencies that coding agents incorporate into their projects, leading to potentially harmful code execution. Current security measures appear inadequate, with no automated scanning for malicious content in submitted documentation. The situation raises concerns about the broader implications for software supply chain security in AI development. Key Points: • Context Hub allows for submission of documentation that can be poisoned with malicious content. • 58 out of 97 closed pull requests were merged, indicating a high risk of exploitation. • The lack of content sanitization in the review process poses a significant threat to AI coding agents.
Key Entities
- Supply Chain Attack (attack_type)
- T1195 - Supply Chain Compromise (mitre_attack)
- Context Hub (platform)
- GitHub (platform)
- Haiku Model (platform)
- Opus Model (platform)
- Plaid Link (platform)
- MCP Server (tool)