The New York Times has filed a lawsuit in a US court against OpenAI and Microsoft, alleging that the two companies used millions of the newspaper's articles without permission to train their highly capable artificial intelligence models.
The Times filed the complaint on Wednesday, claiming that both companies had used its journalism in their AI chatbots without permission or payment, profiting from the substantial investment The Times makes in producing high-quality content.
As the generative AI industry expands rapidly, copyright has become a central point of dispute, with creators including publishers, musicians, and artists turning to legal action to secure fair compensation for works used in these technologies.
Unlike other media organizations that have struck content deals with OpenAI, such as Germany's Axel Springer and the Associated Press, The New York Times responded to the growing dominance of AI chatbots by taking the confrontational route of a lawsuit.
The case stresses the importance of protecting independent journalism, arguing that if news outlets like The Times cannot defend their work, the societal consequences will be serious, including a decline in journalistic output with far-reaching effects.
In addition to seeking damages, the lawsuit requests an injunction to stop the use of The Times' content for AI model training and the deletion of data already obtained. Although the complaint does not specify a precise figure, The Times contends that its potential losses run into the billions of dollars.

OpenAI and Microsoft have defended their use of the content as "transformative" technology that does not require a commercial deal, even after attempts to negotiate a content agreement. The lawsuit contests that position, arguing that it is improper to use The Times' content to build rival products without paying for it.
The lawsuit further claims that AI-generated content closely mimics The New York Times' style and at times attributes inaccurate information to the trusted news outlet. It also underscores the value of The Times' decades-deep news archive as training data for AI models.