The New York Times has initiated a significant legal battle against the artificial intelligence startup Perplexity AI, alleging the unauthorized duplication and misuse of copyrighted articles. The complaint asserts that Perplexity has built its business by leveraging the work of journalists without consent or compensation, sharpening the ongoing conflict between traditional media publishers and the rapidly evolving AI sector.
According to the Times, Perplexity is not merely a search engine but functions as a content replacement service. The newspaper claims that the AI company summarizes its reporting, including paywalled articles, in a manner that discourages users from visiting the original source. This practice, the Times argues, threatens its subscription-based business model, which relies on both reader traffic and advertising revenue.
In its legal filing, the Times characterizes Perplexity’s business practices as “freeriding” on the financial and intellectual investments of other content creators. The newspaper points out that high-quality journalism incurs significant costs, from paying talented reporters to navigating legal risks. By merely repackaging existing content, Perplexity benefits financially from work it has not contributed to or funded, the lawsuit alleges.
The complaint includes specific instances in which Perplexity’s chatbot reportedly copied extensive portions of Times articles verbatim. In some cases, the AI tool provided such detailed summaries that users would have little incentive to seek out the original article. The Times contends that this is not fair use but a commercial undertaking that competes directly with the news outlets whose work it summarizes.
This legal argument centers on the concept of substitution. Traditional search engines like Google ultimately direct users to publishers’ sites through links, whereas AI-driven platforms increasingly retain users on their own pages. The Times warns that if this model prevails, it could endanger the financial viability of the news industry, which depends on reader traffic and subscription revenue to fund journalism.
Alongside issues of copyright infringement, the lawsuit raises concerns about quality and brand integrity. The Times accuses Perplexity of damaging its reputation by associating the newspaper with misleading information, a phenomenon known in AI as “hallucination.” This occurs when an AI model confidently presents fabricated facts as truth.
The complaint outlines instances where Perplexity has cited the Times as a source for information that the newspaper never published. Such actions, the lawsuit argues, violate trademark laws by misleading the public and tarnishing the Times’ reputation for accuracy. The newspaper seeks to prevent Perplexity from using its trademarks in ways that create confusion among readers.
This aspect of the case underscores the technical limitations of generative AI applications. While these tools can produce text that mimics human writing, they often struggle to distinguish between factual and fabricated information. For a news organization that prides itself on accuracy, being linked to AI-generated falsehoods poses a serious risk. The Times is asking the court to restrain Perplexity from engaging in practices that undermine its credibility.
The legal action follows months of unsuccessful negotiations aimed at resolving the dispute amicably. The Times claims it sent a cease-and-desist letter to Perplexity in October, requesting that the company stop using its content. The newspaper also attempted to negotiate a licensing agreement, similar to those established with other tech firms, but alleges that Perplexity continued to scrape its website.
Another allegation in the lawsuit points to Perplexity’s disregard for “robots.txt” files, standard digital commands that instruct automated bots on which parts of a website they can access. The Times accuses Perplexity of ignoring these directives, thereby breaching the terms of service and security protocols of its website. This behavior, the lawsuit contends, reflects a calculated approach by Perplexity that prioritizes growth over compliance with legal and ethical norms.
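To illustrate how robots.txt directives work, here is a minimal sketch using Python's standard-library `urllib.robotparser`. The file contents and URL below are hypothetical examples of the kind of rules a publisher might serve to block a specific AI crawler while permitting others; they are not taken from the Times' actual robots.txt.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: block one named crawler, allow everyone else.
robots_txt = """\
User-agent: PerplexityBot
Disallow: /

User-agent: *
Allow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# The blocked crawler is told it may not fetch any page...
print(rp.can_fetch("PerplexityBot", "https://example.com/2024/story.html"))  # False
# ...while other bots remain free to crawl.
print(rp.can_fetch("Googlebot", "https://example.com/2024/story.html"))      # True
```

As the lawsuit's allegation implies, these rules are purely advisory: nothing technically prevents a crawler from ignoring them, which is why compliance is treated as a matter of legal and ethical norms rather than enforcement.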
Perplexity has responded to the lawsuit by asserting that it is merely a search engine designed to index information and provide citations, not a tool for plagiarism. Company representative Jesse Dwyer criticized the legal action as an attempt to stifle innovation, claiming that Perplexity aims to organize and disseminate information in a manner akin to traditional libraries and search engines of the past.
The outcome of this high-profile case could have significant implications for the intersection of journalism and artificial intelligence, as courts will need to navigate the complexities of copyright, trademark law, and the evolving landscape of digital content creation.