
Music Publishers Sue Anthropic for $3 Billion Over 20,000 Pirated Songs

Music publishers file a $3 billion lawsuit against Anthropic for illegally downloading over 20,000 copyrighted songs via BitTorrent, escalating copyright tensions in AI.

A coalition of major music publishers, including Concord Music Group and Universal Music Group, filed a federal lawsuit against Anthropic PBC on January 28, 2026, alleging that the AI company illegally downloaded more than 20,000 copyrighted musical compositions via BitTorrent from notorious pirate websites. The lawsuit seeks damages that could exceed $3 billion.

The complaint names Anthropic, its CEO Dario Amodei, and co-founder Benjamin Mann as defendants, alleging that the company used BitTorrent to acquire millions of pirated books from Library Genesis and Pirate Library Mirror, including hundreds of songbooks and sheet music collections containing copyrighted lyrics owned by the publishers. The lawsuit argues that while Anthropic claims to operate as an AI ‘safety and research’ company, its actions indicate a reliance on piracy for its business model.

This marks the second significant copyright action against Anthropic by the music publishers. They previously sued the company in October 2023 over the unauthorized use of 499 musical compositions in the training and output of its Claude AI models, a case known as Concord Music Group v. Anthropic PBC. The publishers attempted to amend their original complaint to address newly discovered torrenting violations after Judge William Alsup highlighted Anthropic’s BitTorrent activities in a July 2025 ruling in a separate case. Anthropic successfully contested the amendment, arguing that the allegations were unrelated to the initial lawsuit.

The current lawsuit identifies two distinct categories of alleged infringement: the downloading and distribution of copyrighted works via BitTorrent, and the ongoing copying of the publishers’ works to train newer Claude models. Anthropic has released several new versions of Claude since the first lawsuit, including Claude Sonnet 4.5, Claude Haiku 4.5, and Claude Opus 4.5, all of which the complaint alleges were trained using unauthorized copies of the publishers’ musical works.

Details from the lawsuit reveal that Anthropic executives, including Amodei and Mann, discussed and approved the illegal downloading of millions of books via BitTorrent. Mann reportedly downloaded approximately five million pirated books from Library Genesis in June 2021 after consulting with senior leadership. According to the complaint, leadership acknowledged the copyright violations but opted for piracy because it was fast and cost nothing. In July 2022, the company downloaded millions more books from Pirate Library Mirror.

The complaint asserts that each download violated the publishers’ exclusive rights of reproduction and distribution, and that the two-way nature of the BitTorrent protocol meant Anthropic was simultaneously uploading unauthorized copies to other users while downloading them, compounding the infringement. The publishers claim this piracy resulted in substantial lost revenue, as each pirated work was likely shared thousands of times.

In addition to the torrenting claims, the lawsuit alleges that Anthropic continues to infringe the publishers’ copyrights through ongoing AI training, identifying over 20,000 musical compositions allegedly copied to train newer Claude models. The complaint outlines that Anthropic collects training data from various sources, including scraping websites and using third-party datasets that contain unauthorized copyrighted material. These include the Common Crawl dataset, which allegedly contains lyrics scraped from publishers’ websites without permission.

Another allegation involves Anthropic’s removal of copyright management information during AI training, which the lawsuit claims violates Section 1202 of the Copyright Act. The complaint states that Anthropic intentionally stripped out vital copyright information, including song titles and author names, to obscure its infringement. Senior employees allegedly discussed which text-extraction tools to use, expressing a preference for tools that most effectively removed copyright notices.

The lawsuit further asserts that Anthropic’s Claude models are designed to “memorize” and reproduce training data, including copyrighted lyrics. According to the complaint, internal studies by Anthropic employees indicated that the models are prone to regurgitating copyrighted material, demonstrating the company’s awareness of the issue. After the first lawsuit, Anthropic implemented guardrails intended to prevent its models from reproducing copyrighted works, though the complaint argues these measures are inadequate.

The implications of this lawsuit extend beyond Anthropic, reflecting broader tensions in the AI industry over copyright and licensing. The music publishers say they have begun seeking licenses for AI use of their works, signaling a willingness to engage with the technology on lawful terms. The lawsuit nonetheless underscores the challenge of maintaining ownership and control amid the rise of AI, raising critical questions about how companies acquire content in the AI era.

As the legal landscape evolves, cases like this one will have significant ramifications for content licensing, the viability of AI tools, and how companies navigate the complex intersection of technology and intellectual property rights.

Written by AiPressa Staff

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.

