

OpenAI Faces Defamation Lawsuit Over False Claims from ChatGPT Outputs

OpenAI faces defamation lawsuits in multiple countries, as generative AI’s false outputs provoke significant legal challenges and reputational risks for public figures.

The rise of generative AI has been accompanied by a wave of litigation over false outputs that damage personal reputations. In South Africa, the legal framework acknowledges that defamatory outcomes caused by AI can give rise to actionable claims. As AI systems increasingly generate misleading or harmful content, courts are beginning to grapple with the implications of AI defamation, both locally and internationally.

One of the earliest notable cases occurred in 2023 in Australia, when Brian Hood, the Mayor of Hepburn Shire Council, filed a defamation lawsuit against OpenAI, the developer of ChatGPT. Hood's complaint stemmed from a false assertion generated by the AI that he had served time in prison for bribery, a claim directly at odds with his actual role as the whistleblower in that matter. The dispute was resolved in early 2024 after OpenAI took steps to correct the inaccuracies in ChatGPT's outputs.

In the United States, Robert Starbuck, an American filmmaker and journalist, filed a defamation suit against Meta Platforms, the company behind the Meta AI chatbot, in April 2025. Starbuck described the distress of discovering that the AI was disseminating false claims that he had been involved in the Capitol riot of January 6, 2021, and had faced misdemeanor charges related to the event. Despite his attempts to alert Meta to the inaccuracies, the damaging statements persisted for nine months, ultimately prompting his legal action. Although the case was resolved with a public apology from Meta's Joel Kaplan, it raised critical questions about liability for AI-generated defamation.

Another significant case involved Mark Walters, a media personality and Second Amendment advocate, who initiated a defamation lawsuit against OpenAI in 2023. Walters claimed that journalist Frederick Riehl had used ChatGPT to produce false statements linking him to embezzlement. However, in May 2025, the Superior Court of Gwinnett County, Georgia, ruled in favor of OpenAI, concluding that Walters, as a public figure, needed to prove that OpenAI had acted with actual malice. The court also noted that the disclaimers accompanying ChatGPT warned users that its outputs could contain inaccuracies. Walters' public-figure status, coupled with that cautionary language, meant the court did not find the output defamatory.

In South Africa, no AI defamation cases have yet been decided, but experts suggest the outcomes may differ from those in the U.S. Legal professionals argue that disclaimers may not absolve platforms of responsibility for defamatory content: under South African law, an AI-generated publication could still be found defamatory even with disclaimers in place. Courts may require platforms to demonstrate that they acted without negligence, potentially giving rise to a duty to act reasonably once alerted to harmful content.

As generative AI technology continues to evolve, the legal landscape surrounding its outputs will likely become a focal point for courts. With increasing public awareness and scrutiny, companies involved in AI development may need to revamp their internal processes to mitigate the risk of defamation lawsuits. The implications for reputation management, freedom of speech, and accountability in the digital age underscore the necessity for robust regulations and ethical guidelines as AI systems become more entrenched in society.

* Dario Milo is a partner at Webber Wentzel and a member of the firm’s AI specialist team in dispute resolution, advising clients on emerging AI-related disputes, legal issues and potential risks.



© 2025 AIPressa · Part of Buzzora Media · All rights reserved.