OpenAI is facing scrutiny following the tragic mass shooting carried out by 18-year-old Jesse Van Roostelaar in Tumbler Ridge, British Columbia, on February 10, 2025, in which nine people died, including Van Roostelaar herself. The company suspended her ChatGPT account in June 2025 over concerning behavior, though the specific details of her interactions with the AI remain undisclosed. A New York Times investigation highlighted her social media posts about mental health struggles, substance abuse, weapons, and online violence. Despite the alarming nature of this material, OpenAI chose not to alert law enforcement, concluding that the content did not meet its threshold for reporting, which requires evidence of imminent harm.
British Columbia Premier David Eby has suggested that OpenAI could have played a role in preventing the tragedy. The situation raises critical questions about the responsibilities of AI companies when they become aware of potential dangers posed by their users, and it draws parallels to Tarasoff v. Regents of the University of California, the landmark decision that established a therapist's duty to warn or protect identifiable victims when danger is foreseeable.
The Tarasoff case involved Prosenjit Poddar, who disclosed his intent to kill Tatiana Tarasoff to his therapist; campus police briefly detained Poddar but released him, and he went on to murder Tarasoff. The California Supreme Court later ruled that therapists have a duty to protect potential victims once they determine that a patient poses a serious danger of violence. That obligation has since been codified and adapted across the United States, with 29 states adopting a mandatory duty to warn or protect. As AI technologies continue to integrate into society, the scope of this duty raises pivotal questions.
The question now arises: should similar responsibilities be imposed on AI companies like OpenAI, Google, and Anthropic? The Tarasoff case underscores the importance of safeguarding individuals from foreseeable risks, yet the nature of AI interactions complicates the application of such a duty. Unlike human therapists, AI platforms may lack the capability to accurately assess threats or recognize identifiable victims, making the duty to protect a complex legal challenge.
Furthermore, the difficulty of predicting violent behavior is compounded in the AI context. Even trained professionals struggle to foresee violence, so expecting AI companies to possess that expertise raises concerns about how a duty to protect would operate in practice. When generative AI systems flag potentially dangerous content, how far a company should go, whether by issuing a warning, restricting access, or notifying authorities, remains largely unresolved.
Another challenge lies in identifying to whom the duty is owed. In Tarasoff, the potential victim was clearly identified, but in many AI cases, discussions of violence lack specificity about intended targets. Recent lawsuits, such as Gavalas v. Google, in which a father claimed that a chatbot encouraged his son to take his own life, illustrate how difficult it is to determine when and how to intervene when AI interactions precede self-harm or violence against others.
Legal scholars have begun to consider the implications of imposing a duty to protect on AI companies, particularly because these companies gather sensitive information from millions of users. Any such duty would also have to contend with the privacy violations that could arise from disclosing user information in the name of public safety. The scale at which AI companies operate further complicates enforcement, since their access to vast amounts of private data raises significant ethical and legal dilemmas.
As these discussions unfold, there is a growing consensus that establishing a limited duty to protect or warn may be essential for addressing the risks associated with AI. Such a framework could provide a legal basis for holding AI companies accountable without compromising user privacy. This approach would likely require courts to carefully evaluate instances of flagged behavior and the circumstances under which intervention is warranted.
Ultimately, the tragedy involving Van Roostelaar and the ongoing legal challenges underscore the urgent need to clarify the responsibilities of AI companies in safeguarding public safety. As generative AI becomes increasingly integrated into daily life, establishing a duty to protect could provide a critical pathway for legal accountability, ensuring that companies are held responsible for their role in preventing foreseeable harm.
See also
OpenAI’s Rogue AI Safeguards: Decoding the 2025 Safety Revolution
US AI Developments in 2025 Set Stage for 2026 Compliance Challenges and Strategies
Trump Drafts Executive Order to Block State AI Regulations, Centralizing Authority Under Federal Control
California Court Rules AI Misuse Heightens Lawyer’s Responsibilities in Noland Case
Policymakers Urged to Establish Comprehensive Regulations for AI in Mental Health