We all know that ChatGPT is the most popular chatbot on the Internet. Although the new Bing rivals it to some extent, the truth is that both work very well because they rely on the same family of generative models, GPT (although Bing uses GPT-4, the most advanced version).
However, OpenAI is not resting on its laurels and wants ChatGPT to remain the benchmark among today's chatbots. To this end, it has made an enticing announcement: anyone who discovers bugs or vulnerabilities in its AI services will receive a substantial monetary reward.
Moreover, the amounts are not exactly small. At the moment, rewards range from $200 (for “low severity findings”) to $20,000 (for “exceptional discoveries”). All reports are submitted through Bugcrowd, a cybersecurity platform specializing in bug bounty programs.
However, not everything qualifies. OpenAI excludes from its bounty program anyone engaged in hacking or jailbreaking ChatGPT itself. In fact, the Bugcrowd page makes it explicit: jailbreaks are out of scope.

In this context, jailbreaking ChatGPT means crafting prompts that cause the system to bypass its own safety filters. The result would be a sort of “evil” version of ChatGPT that generates hate speech or can be put to harmful purposes.
OpenAI explains that such issues do not fit its bounty program because they are not something that can be directly patched. Although jailbreaks do expose weaknesses in the system, OpenAI considers it more important to fix traditional security flaws.