OpenAI’s global policy chief, Chris Lehane, has criticized the so-called “doomer” narratives surrounding artificial intelligence (AI), which he suggests may have contributed to a recent violent incident directed at Sam Altman, the company’s CEO. In an interview with The Standard, Lehane acknowledged that OpenAI needs to communicate the benefits of AI more clearly and address the associated risks more effectively. “Some of the conversation out there is not necessarily responsible,” he said, warning that these narratives could lead to “dire consequences.”
Lehane’s comments come in the wake of a Molotov cocktail attack on Altman’s home, an incident that has raised concerns about the societal perception of AI. “This is not fun and games,” Lehane said emphatically, adding, “This is really serious shit.” He did not name specific individuals or sources of the doomer rhetoric but pointed to a polarized discourse within the AI community. At one extreme, optimists proclaim that AI will usher in an era of prosperity; at the other, doomers take a bleak view, arguing that AI poses an existential threat to humanity.
While acknowledging the potential dangers associated with AI, Lehane emphasized the need for collective efforts to tackle these “very real problems.” He argued that society has historically faced similar challenges with technological advancements, noting, “you’ve had a series of things that have been put out there about extreme things that are going to happen,” referencing previous technological shifts where doomsday predictions have not materialized.
Lehane, representing OpenAI’s stance, insists that AI can benefit individuals, families, and society as a whole. However, he stressed that it is incumbent on OpenAI and other companies in the sector to articulate these benefits more effectively. Following the attack on his home, Altman responded with an open letter acknowledging the legitimacy of public fears about AI while emphasizing the power of words. He referred to a New Yorker article that raised concerns about his leadership and wrote, “I am sharing a photo in the hopes that it might dissuade the next person from throwing a Molotov cocktail at our house, no matter what they think about me.”
Reports indicate that the attacker, identified as 20-year-old Daniel Moreno-Gama, was motivated by a belief that AI posed a significant threat to humanity. The incident underscores the urgent need for more responsible discourse around AI and its implications for society.
As the conversation around AI continues to evolve, it is becoming increasingly clear that the narratives surrounding it can have real-world repercussions. The challenge for companies like OpenAI is to foster a balanced discussion that acknowledges both the potential benefits and the risks posed by AI technologies. This balanced approach will be crucial in shaping public perceptions and ensuring that future advancements in AI are aligned with societal values and safety.
See also
OpenAI’s Rogue AI Safeguards: Decoding the 2025 Safety Revolution
US AI Developments in 2025 Set Stage for 2026 Compliance Challenges and Strategies
Trump Drafts Executive Order to Block State AI Regulations, Centralizing Authority Under Federal Control
California Court Rules AI Misuse Heightens Lawyer’s Responsibilities in Noland Case
Policymakers Urged to Establish Comprehensive Regulations for AI in Mental Health