OpenAI Launches Safety Bug Bounty Program to Address AI Abuse Risks
Severity: Low (Score: 39.9)
Sources: Cybersecuritynews, Openai, Infosecurity-Magazine, Tipranks, Feeds2.Feedburner
Summary
On March 25, 2026, OpenAI announced the launch of its Safety Bug Bounty program, aimed at identifying AI abuse and safety risks across its products. The initiative complements the existing Security Bug Bounty program, which has rewarded reports of 409 security vulnerabilities since April 2023. The new program encourages researchers to report issues that pose meaningful abuse and safety risks, even if they do not qualify as security vulnerabilities. Eligible scenarios include agentic risks and integrity violations; general content-policy bypasses without demonstrable impact are excluded. Researchers can submit issues through Bugcrowd, where submissions will be triaged by OpenAI's teams. The program aims to foster collaboration with the safety and security research community to strengthen the security of AI systems. OpenAI also runs private bug bounty campaigns targeting specific harm types.

Key Points:
• OpenAI's Safety Bug Bounty program launched on March 25, 2026.
• The program focuses on AI abuse and safety risks, complementing the existing Security Bug Bounty.
• Researchers can report issues via Bugcrowd, with specific eligibility scenarios outlined.