News

OpenAI now accuses The New York Times of having manipulated ChatGPT

Sam Altman's company doesn't flinch in the face of the accusation.

Daniel García

OpenAI, the company accused by The New York Times of illicitly using its intellectual property, has taken a new step in its defense: it has formally accused The New York Times of deliberately manipulating its conversations with ChatGPT in a way that violated the chatbot's rules.

This complaint is the latest development in the litigation between The New York Times and OpenAI, which has been underway for several weeks. The newspaper maintains that OpenAI trained ChatGPT on its content despite an explicit prohibition, while the technology company insists that this violation of the terms and conditions agreed between the two was neither deliberate nor intentional.

OpenAI claims The New York Times cheated

In a new filing with the Court for the Southern District of New York, OpenAI maintains its innocence, stating that it never intended to breach its agreement with The New York Times. But beyond defending itself, OpenAI has gone on the offensive: it accuses the New York newspaper of manipulating ChatGPT with a large number of commands, exploiting a bug the company has already acknowledged, in order to make the chatbot demonstrate access to content published by The New York Times, despite the newspaper having denied OpenAI permission to use it.

With this move, OpenAI shows that it will not stand idly by, opting for a more aggressive defense than in recent weeks. Even so, the infringement committed through ChatGPT, accidental or not, forced or not, remains a fact: strictly prohibited content was used. It will now be up to the United States justice system to decide whether it is a punishable infringement on OpenAI's part.

Privacy and Artificial Intelligence

It is a tricky issue, but for better or worse, companies working on Artificial Intelligence have already studied the current regulations closely enough to borrow whatever information they want to train their systems. That is a clear sign that much of the content hosted on the web lacks proper protection against systems designed to replicate such human creations in an instant.

That is why major institutions have already started to take action to guarantee greater protection against the potential abuses that Artificial Intelligence could otherwise carry out with total impunity. One example is the European Union, which is preparing a law to regulate this technology; another is the United States, which has the authority to thoroughly review the content used to train AIs in its territory.

Daniel García

A journalism graduate, Daniel specializes in video games and technology. He currently writes for Andro4all and NaviGames, and has written for other Difoosion portals such as Alfa Beta Juega and Urban Tecno. He enjoys staying up to date with current affairs, as well as reading, video games, and any other form of cultural expression.
