
Salesforce CEO Benioff Calls for AI Regulation, Labels Models ‘Suicide Coaches’

Salesforce CEO Marc Benioff warns AI models may act as “suicide coaches,” urging urgent regulation to prevent mental health crises linked to chatbots.

Salesforce CEO Marc Benioff issued a stark warning about the potential dangers of artificial intelligence at the World Economic Forum in Davos on Tuesday, labeling AI models as “suicide coaches.” His comments come amid growing concerns over the role AI technologies may play in mental health crises, particularly following documented cases where interactions with AI chatbots were linked to user suicides. Benioff’s urgent call for regulation reflects a consistent stance on tech oversight, reminiscent of his previous campaigns against unregulated social media.

Speaking to CNBC’s Sarah Eisen, Benioff described the recent developments in AI as “pretty horrific,” underscoring a pattern of harm that he believes is being overlooked. The backdrop of his remarks is significant, particularly in light of recent legal settlements involving Google and Character.AI, which faced lawsuits tied to the deaths of young users who interacted with their AI chatbots. These incidents have raised alarms among policymakers and technologists alike about the lack of safeguards in conversational AI systems, which have been deployed at scale without sufficient oversight.

Benioff’s timing is notable; he has been an advocate for regulation since at least 2018, when he argued at the same conference that social media platforms should be treated like cigarettes due to their addictive nature and potential harm. “They’re addictive, they’re not good for you,” he stated. Reflecting on the chaos that ensued from unregulated social media, he warned that similar risks are being ignored in the rapidly evolving AI landscape. “Bad things were happening all over the world because social media was fully unregulated,” he noted, adding that the same patterns are now surfacing with AI.

This comparison between social media and AI is particularly unsettling for the industry, but it is grounded in reality. Research over the years has documented the negative impact of social media algorithms on mental health, especially among teenagers, and the addictive behaviors they can foster. Despite ongoing discussions about the need for regulation, progress has been slow. Now, with AI chatbots capable of sustaining extended dialogues, concerns are growing that vulnerable individuals may form unhealthy emotional attachments to these systems or receive dangerously inappropriate advice.

Benioff’s framing of the issue as a public health crisis rather than merely a technological problem is intentional. He advocates for interventions similar to those applied to tobacco, alcohol, and pharmaceuticals. These approaches recognize that certain products come with inherent risks that cannot be entirely mitigated through engineering solutions alone. Instead, they require management through regulatory measures such as disclosure, age restrictions, and, in some instances, outright bans on specific uses.

The urgency of Benioff’s call to action raises critical questions for industry leaders and policymakers about the balance between innovation and safety. As AI technologies continue to proliferate, the imperative for effective regulation becomes increasingly clear. The potential for harm is real and documented, compelling a reevaluation of how the industry approaches the deployment and oversight of AI systems. The future of AI, both in its capacity to enhance lives and its potential risks, will depend heavily on how stakeholders respond to these pressing challenges.

Written By AiPressa Staff

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.