Close Menu
  • Home
  • News
  • Cyber Security
  • Internet of Things
  • Tips and Advice

Subscribe to Updates

Get the latest creative news from FooBar about art, design and business.

What's Hot

Invoice Fraud Costs UK Construction Sector Millions, NCA Warns

March 27, 2026

Interlock Ransomware Exploits Cisco FMC Zero-Day CVE-2026-20131 for Root Access

March 27, 2026

OpenAI Expands Bug Bounty to Cover AI Abuse and ‘Safety’ Concerns

March 27, 2026
Facebook X (Twitter) Instagram
Friday, March 27
Facebook X (Twitter) Instagram Pinterest Vimeo
Cyberwire Daily
  • Home
  • News
  • Cyber Security
  • Internet of Things
  • Tips and Advice
Cyberwire Daily
Home»Cyber Security»OpenAI Expands Bug Bounty to Cover AI Abuse and ‘Safety’ Concerns
Cyber Security

OpenAI Expands Bug Bounty to Cover AI Abuse and ‘Safety’ Concerns

Team-CWDBy Team-CWDMarch 27, 2026No Comments2 Mins Read
Share Facebook Twitter Pinterest LinkedIn Tumblr Reddit Telegram Email
Share
Facebook Twitter LinkedIn Pinterest Email


OpenAI has launched a new bug bounty program to engage researchers in addressing AI abuse and safety risks across its products.

The new Safety Bug Bounty program was announced on March 26 and is hosted on Bugcrowd.

It complements the firm’s Security Bug Bounty, also hosted on Bugcrowd, that has rewarded 409 security vulnerabilities in OpenAI’s product offerings since its launch in April 2023.

With the Safety Bug Bounty, OpenAI wants to encourage disclosures of issues in its products that pose “meaningful abuse and safety risks, even if they don’t meet the criteria for a security vulnerability.”

The scenarios covered by this new program encompass:

  • Agentic risks, including model context protocol (MCP) abuse, third-party prompt injection, data exfiltration, disallowed actions at scale on OpenAI’s website or other potentially harmful unlisted behaviors
  • Violations of account and platform integrity (e.g. bypassing anti-automation controls, manipulating account trust signals, evading account restrictions/suspensions/bans)
  • OpenAI proprietary information abuse (e.g. model generations that return proprietary information related to reasoning; vulnerabilities that expose other OpenAI proprietary information)

Key Differences: OpenAI’s Security vs. Safety Bug Bounty Programs

OpenAI outlined that integrity violations involving a user having access to features, data or functionalities beyond authorized permissions should be reported to the Security Bug Bounty rather than the new Safety Bug Bounty.

The company further clarified that general content-policy bypasses without clear safety or abuse impact are not eligible for rewards.

For example, it specified that “jailbreaks” that only result in rude language or easily searchable information are out of scope.

However, researchers who identify flaws enabling direct user harm with actionable fixes may still qualify for rewards on a case-by-case basis.

OpenAI also stated that it periodically runs private bug bounty campaigns targeting specific harm types, including biorisk content issues in ChatGPT Agent and GPT-5.

Researchers can already submit issues to the Safety Bug Bounty program via Bugcrowd. An OpenAI team responsible for both Safety and Security Bug Bounty programs will triage submissions, which may be rerouted between the two programs depending on scope and ownership.

Image credits: Samuel Boivin / Stock all / Shutterstock.com

Read now: Why AI’s Rise Makes Protecting Personal Data More Critical Than Ever



Source

Share. Facebook Twitter Pinterest LinkedIn Tumblr Email
Previous ArticleCritical Unpatched Telnetd Flaw (CVE-2026-32746) Enables Unauthenticated Root RCE
Next Article Interlock Ransomware Exploits Cisco FMC Zero-Day CVE-2026-20131 for Root Access
Team-CWD
  • Website

Related Posts

Cyber Security

EtherRAT Techniques Bypass Security Via Ethereum Smart Contracts

March 26, 2026
Cyber Security

Rapid Exploitation of CVE-2026-21962 Hits Oracle WebLogic

March 26, 2026
Cyber Security

Hackers Exploit Compromised Enterprise Identities at Industrial Scale

March 25, 2026
Add A Comment
Leave A Reply Cancel Reply

Latest News

North Korean Hackers Turn JSON Services into Covert Malware Delivery Channels

November 24, 202522 Views

macOS Stealer Campaign Uses “Cracked” App Lures to Bypass Apple Securi

September 7, 202517 Views

Cyber M&A Roundup: Cyber Giants Strengthen AI Security Offerings

December 1, 20258 Views

Malicious Nx Packages in ‘s1ngularity’ Attack Leaked 2,349 GitHub, Cloud, and AI Credentials

September 5, 20258 Views

Near-ultrasonic attacks on voice assistants

September 11, 20256 Views
Stay In Touch
  • Facebook
  • YouTube
  • TikTok
  • WhatsApp
  • Twitter
  • Instagram
Most Popular

North Korean Hackers Turn JSON Services into Covert Malware Delivery Channels

November 24, 202522 Views

macOS Stealer Campaign Uses “Cracked” App Lures to Bypass Apple Securi

September 7, 202517 Views

Cyber M&A Roundup: Cyber Giants Strengthen AI Security Offerings

December 1, 20258 Views
Our Picks

Mobile app permissions (still) matter more than you may think

February 27, 2026

Why the tech industry needs to stand firm on preserving end-to-end encryption

September 12, 2025

Beware of threats lurking in booby-trapped PDF files

October 7, 2025

Subscribe to Updates

Get the latest news from cyberwiredaily.com

Facebook X (Twitter) Instagram Pinterest
  • Home
  • Contact
  • Privacy Policy
  • Terms of Use
  • California Consumer Privacy Act (CCPA)
© 2026 All rights reserved.

Type above and press Enter to search. Press Esc to cancel.