
Getty Images’ Copyright Case Against Stability AI Leaves Key Legal Questions Unresolved

Getty Images’ copyright claim against Stability AI falters, as the court rules Stable Diffusion isn’t an infringing copy, leaving critical legal questions unanswered.

Earlier this month, the court handed down its judgment in the pivotal copyright case of Getty Images v Stability AI, a decision that promised to clarify the legal ramifications of AI model training for copyright. The outcome, however, left many in both the creative and AI sectors without the answers they sought, as fundamental questions about copyright and AI were largely left unresolved.

At the heart of this case was the contentious use of Getty Images’ expansive photo library by Stability AI to train its AI model, Stable Diffusion. Getty Images described this library, established in the 1990s with funds from the Getty family, as its “lifeblood.” Stability AI utilized images from this collection without obtaining prior consent, and notably, the training occurred outside the UK.

During the trial, Getty Images presented evidence claiming that specific images generated by various iterations of Stable Diffusion could be traced back to its library, with some outputs even reproducing Getty Images watermarks. This seemingly bolstered Getty’s claims of copyright, database right, and trademark infringement, along with passing off. However, the court noted significant gaps in Getty’s case. Notably, Getty conceded that there was no proof that the training and development of Stable Diffusion took place within UK jurisdiction, leading it to abandon that claim. Furthermore, Stability AI had blocked user prompts that could generate the contested outputs, leading Getty to withdraw its copyright infringement claim over the AI outputs altogether.

Ultimately, Getty was left with a trademark infringement claim related to the watermarks on the AI outputs, alongside a contention that the Stable Diffusion model itself constituted an infringing article. This raised critical questions about whether Getty’s licences over the photos gave it rights concurrent with the photographers’, entitling it to seek remedies for copyright infringement. The key victory for Getty, albeit limited, was a judicial acknowledgment that creators of AI models can be held liable for infringing outputs generated by their tools.

The court’s findings focused on instances of “double identity” and “confusion” trademark infringement, as set out under sections 10(1) and 10(2) of the Trade Marks Act 1994. Getty was, however, unsuccessful in its “detriment” claim under section 10(3) of the Act and in its passing off claim. The trademark infringement verdict came as no surprise, given the evident reproduction of Getty’s trademarked names in the watermarks on AI-generated images. Yet the judge emphasized that these findings were both “historic and extremely limited in scope,” implying they were unlikely to result in substantial damages for Getty when quantum is eventually determined.

Legal experts take from this that a clear connection between an intellectual property right and an AI output can expose the AI company to liability for infringement. For instance, one might envision a scenario in which singer Taylor Swift pursues legal action against an AI song generator for outputs that replicate significant portions of her existing work. Questions linger, however, such as whether AI outputs could be defended as “parody” or “pastiche,” possibilities not addressed in the Getty case.

Though it was determined that Getty Images held exclusive licences in respect of some of the disputed photos, its claim of secondary copyright infringement ultimately failed. The crux of the matter rested on whether an AI model could be classified as an “infringing copy” under the Copyright, Designs and Patents Act 1988. Getty argued that the act of training involved reproducing the photos, thus meeting the definition of an infringing copy. Stability AI countered that its model was trained on the copyrighted works in the US, and that since copies of those works were never present in the AI model itself, it could not be deemed an infringing copy. The judge sided with Stability, declaring that Stable Diffusion “does not store or reproduce any copyright works” and thus fails to qualify as an “infringing copy.”

The ruling underscores a significant transformation in the digital landscape, highlighting a complex interplay between traditional intellectual property principles and innovative technologies. As data emerges as the new oil, the Getty Images case signals that while the path forward in defining copyright in the realm of AI remains fraught with challenges, the implications of this ruling will resonate throughout future legal disputes involving artificial intelligence.

Staff
Written By

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.