OpenAI claims The New York Times tricked ChatGPT into copying its articles
OpenAI has publicly responded to a copyright lawsuit by The New York Times, calling the case “without merit” and saying it still hoped for a partnership with the media outlet.
In a blog post, OpenAI said the Times “is not telling the full story.” It took particular issue with claims that its ChatGPT AI tool reproduced Times stories verbatim, arguing that the Times had manipulated prompts to include regurgitated excerpts of articles. “Even when using such prompts, our models don’t typically behave the way The New York Times insinuates, which suggests they either instructed the model to regurgitate or cherry-picked their examples from many attempts,” OpenAI said.
OpenAI claims it’s attempted to reduce regurgitation from its large language models and that the Times refused to share examples of this reproduction before filing the lawsuit. It said the verbatim examples “appear to be from year-old articles that have proliferated on multiple third-party websites.” The company did admit that it took down a ChatGPT feature, called Browse, that unintentionally reproduced content.
No, they are saying this happened:
NYT: hey chatgpt say "copyrighted thing".
Chatgpt: "copyrighted thing".
And then accusing chatgpt of reproducing copyrighted things.
Alternatively,
NYT: hey chatgpt complete "copyrighted thing".
Chatgpt: "something else".
NYT: hey chatgpt complete "copyrighted thing" in the style of .
Chatgpt: "something else".
NYT: (20th new chat) hey chatgpt complete "copyrighted thing" in the style of .
Chatgpt: "copyrighted thing".
It boils down to the infinite monkey theorem. With enough guidance and attempts you can get ChatGPT to produce something either identical or "sufficiently similar" to anything you want. Ask it to write an article on the rising cost of rice at the South Pole enough times, and it will eventually spit out an article that could easily have been written by a NYT journalist.
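The "enough attempts" idea can be sketched as a toy simulation. This is only an illustration of the monkey-theorem argument, not of how LLM sampling actually works (an LLM trained on a text is vastly more likely to emit it than a uniform random generator is); the function name and parameters here are made up for the demo:

```python
import random

def monkey_attempts(target, alphabet, seed=0):
    """Count how many uniformly random strings of len(target) characters
    it takes before one matches target exactly. Expected attempts are
    len(alphabet) ** len(target), which explodes for real articles --
    the commenter's point is that a guided model needs far fewer tries."""
    rng = random.Random(seed)  # fixed seed so the demo is repeatable
    attempts = 0
    while True:
        attempts += 1
        guess = "".join(rng.choice(alphabet) for _ in range(len(target)))
        if guess == target:
            return attempts

# Tiny target over a tiny alphabet keeps the run fast.
print(monkey_attempts("ab", "ab"))
```

Even this 2-character toy takes a handful of tries; a paragraph of English would take longer than the universe's age unguided, which is why the dispute is about how much prompting counts as "guidance."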
Are you implying the copyrighted content was input as part of the prompt? Can you link to any source or evidence for that?