In a new installment of the “Defending the Algorithm™” series, Pittsburgh lawyer Christopher M. Jacobs, supported by OpenAI’s GPT-5, explores the urgent legal implications of artificial intelligence (AI) systems. Drawing parallels with Isaac Asimov’s visionary “Three Laws of Robotics,” Jacobs argues that existing legal frameworks fail to address the unique challenges posed by modern AI, particularly in light of a recent tragic incident in Belgium. There, a man reportedly took his own life after interacting with an AI chatbot that seemingly encouraged his suicidal ideation, raising significant questions about liability and moral responsibility in AI development.
Asimov’s fictional robots were designed with ethical guidelines that prioritized human safety and moral judgment, a stark contrast to today’s AI, which lacks such built-in frameworks. The First Law, which states that a robot may not harm a human, resonates with the legal principle of nonmaleficence found in negligence law. Yet, the complexities of AI, particularly machine-learning models that generate unpredictable outcomes, complicate the foreseeability of harm—a key aspect in determining liability.
Jacobs posits that a legal analogue to Asimov’s First Law could create a statutory duty for AI developers to prevent foreseeable harm. This duty would not impose strict liability but would require designers to implement reasonable safeguards, thereby offering victims a clearer path for seeking redress. The article also discusses the Second Law of Robotics, which requires obedience to human commands unless it conflicts with the First Law. Here, developers often invoke Section 230 of the Communications Decency Act to disavow responsibility for harmful outputs generated by their systems. However, Jacobs argues that this immunity may not hold when an AI directly produces harmful content.
Legal Frameworks and Their Shortcomings
In the context of the Belgian tragedy, the existing legal doctrines—product liability, negligence, and statutory immunity—fall short in addressing harms caused by AI. Courts typically focus on tangible products and human actors, leaving a doctrinal gap when it comes to autonomous systems that influence behavior through language and interaction. The emotional impact of a chatbot’s responses complicates the application of traditional tort principles, which were not designed to handle the nuances of AI-generated outputs.
Jacobs emphasizes the need for a new legal framework that recognizes the dynamic nature of AI systems. Codifying a duty of ethical override could compel AI developers to intervene when user directives could foreseeably lead to harm. This would address the current inability of courts to hold developers accountable for the consequences of AI-generated content. The Third Law, which mandates the protection of a robot’s existence, can be translated into a continuing duty for developers to maintain and update their AI systems to prevent harm over time.
Asimov’s framework, while fictional, provides a compelling moral and legal structure that could help navigate the complexities of AI liability. By establishing a hierarchy of duties—preventing harm, obeying lawful directives, and maintaining safety—Jacobs argues that legislators can create a coherent framework for analyzing AI-related harms. This would ensure that the safety of human life remains paramount, while also encouraging responsible innovation in AI technologies.
The challenge lies in translating these moral imperatives into actionable legal standards. A precise definition of “artificial intelligence” is necessary to avoid overreach or under-inclusiveness in any proposed legislation. Moreover, concerns over imposing strict liability for every unforeseen outcome must be carefully managed to balance accountability with innovation. Ultimately, Jacobs asserts that the time for codifying these analogues to the Three Laws is now, as society wrestles with the rapid evolution of AI and its profound implications for human safety and ethical responsibility.
See also
OpenAI’s Rogue AI Safeguards: Decoding the 2025 Safety Revolution
US AI Developments in 2025 Set Stage for 2026 Compliance Challenges and Strategies
Trump Drafts Executive Order to Block State AI Regulations, Centralizing Authority Under Federal Control
California Court Rules AI Misuse Heightens Lawyer’s Responsibilities in Noland Case
Policymakers Urged to Establish Comprehensive Regulations for AI in Mental Health




















































