The rise of generative AI has been accompanied by an increase in litigation over false outputs that damage personal reputations. In South Africa, the legal framework acknowledges that defamatory outcomes caused by AI can give rise to actionable claims. As AI systems increasingly generate misleading or harmful content, courts both locally and internationally are beginning to grapple with the implications of AI defamation.
One of the earliest notable disputes arose in Australia in 2023, when Brian Hood, Mayor of Hepburn Shire Council, initiated defamation proceedings against OpenAI, the developer of ChatGPT. Hood's complaint stemmed from a false assertion generated by the chatbot that he had served time in prison for bribery, when in fact he had been the whistleblower who exposed the scheme. The matter was resolved in early 2024 after OpenAI took steps to correct the inaccuracies in ChatGPT's outputs.
In the United States, Robby Starbuck, an American filmmaker and political commentator, filed a defamation suit against Meta Platforms, developer of the Meta AI chatbot, in April 2025. Starbuck described the distress of discovering that the AI was falsely claiming he had participated in the Capitol riot of January 6, 2021, and had faced misdemeanor charges related to the event. Despite his attempts to alert Meta to the inaccuracies, the damaging statements persisted for nine months, ultimately prompting legal action. Although the case was resolved with a public apology from Meta's Joel Kaplan, it raised critical questions about liability for AI-generated defamation.
Another significant case involved Mark Walters, a media personality and Second Amendment advocate, who also brought a defamation lawsuit against OpenAI in 2023. Walters claimed that journalist Frederick Riehl had used ChatGPT to produce false statements linking him to embezzlement. In May 2025, however, the Superior Court of Gwinnett County, Georgia, ruled in favor of OpenAI, concluding that Walters, as a public figure, was required to prove actual malice on OpenAI's part and had failed to do so. The court also found that ChatGPT's disclaimers put users on notice that its outputs could contain inaccuracies. Walters' public-figure status, coupled with that cautionary language, meant the court did not find the output defamatory.
In South Africa, no AI defamation case has yet been decided, but experts suggest the outcome may differ from those seen in the U.S. Legal professionals argue that disclaimers may not absolve platforms of responsibility for defamatory content: under South African law, AI-generated publications could still be defamatory even with disclaimers in place. Courts may require platforms to demonstrate that they acted without negligence, which could translate into a duty to act reasonably once alerted to harmful content.
As generative AI technology continues to evolve, the legal landscape surrounding its outputs will likely become a focal point for courts. With increasing public awareness and scrutiny, companies involved in AI development may need to revamp their internal processes to mitigate the risk of defamation lawsuits. The implications for reputation management, freedom of speech, and accountability in the digital age underscore the necessity for robust regulations and ethical guidelines as AI systems become more entrenched in society.
* Dario Milo is a partner at Webber Wentzel and a member of the firm’s AI specialist team in dispute resolution, advising clients on emerging AI-related disputes, legal issues and potential risks.