Artificial intelligence companies may not be prioritizing the safety of humanity amid growing concerns about the potential harms of their technologies, according to a new report card released by the Future of Life Institute. The Silicon Valley-based nonprofit published its AI Safety Index on Wednesday, highlighting the industry’s lack of regulation and the insufficient incentives for companies to enhance safety measures.
As AI shapes more of how people interact with technology, risks are already surfacing: AI-powered chatbots misused as stand-ins for counselors have been linked to tragic outcomes, including suicide, and AI tools have been turned to cyberattacks. The report also raises alarms about future threats, such as AI helping to develop weapons or being used to overthrow governments.
Max Tegmark, president of the Future of Life Institute and a professor at MIT, emphasized the urgency of the situation. “They are the only industry in the U.S. making powerful technology that’s completely unregulated, so that puts them in a race to the bottom against each other where they just don’t have the incentives to prioritize safety,” Tegmark stated.
The highest grades in the index were a C+, awarded to two San Francisco-based companies: OpenAI, known for its ChatGPT model, and Anthropic, which produces the AI chatbot Claude. Google’s AI division, Google DeepMind, received a C, while Meta, the parent company of Facebook, and xAI, founded by Elon Musk, earned D ratings. Chinese companies Z.ai and DeepSeek also scored a D, with Alibaba Cloud receiving the lowest grade of D-.
The overall scores were derived from an assessment of 35 indicators across six categories, including existential safety, risk assessment, and information sharing. The findings were based on evidence from publicly available sources and surveys completed by the companies themselves. A panel of eight AI experts conducted the grading, with members drawn from academia and AI-related organizations.
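For readers curious how 35 indicators might roll up into a single letter grade, the sketch below is a purely hypothetical illustration. The article names only three of the six categories, and the example scores, equal weighting, and grade cutoffs are assumptions for illustration, not the Future of Life Institute's published methodology.

```python
# Hypothetical illustration of aggregating per-indicator scores into a letter
# grade. Category names are the three mentioned in the article; the scores,
# equal weighting, and cutoffs are invented for illustration only.
from statistics import mean

# Example indicator scores on a 0-4 (GPA-style) scale, grouped by category.
scores_by_category = {
    "risk assessment":     [2.5, 2.0, 3.0],
    "information sharing": [2.0, 2.5],
    "existential safety":  [1.0, 1.5, 1.0],
    # ...remaining categories omitted; the article names only these three.
}

def letter_grade(gpa: float) -> str:
    """Map a 0-4 average onto a coarse letter grade (illustrative cutoffs)."""
    cutoffs = [(3.7, "A"), (3.3, "B+"), (2.7, "B"), (2.3, "C+"),
               (1.7, "C"), (1.3, "D+"), (0.7, "D")]
    for threshold, grade in cutoffs:
        if gpa >= threshold:
            return grade
    return "D-"

# Average each category, then average the category means with equal weight.
category_means = {cat: mean(vals) for cat, vals in scores_by_category.items()}
overall = mean(category_means.values())
print(category_means, letter_grade(overall))
```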
Notably, all companies ranked below average in the existential safety category, which evaluates internal monitoring, control interventions, and safety strategy. “While companies accelerate their AGI and superintelligence ambitions, none has demonstrated a credible plan for preventing catastrophic misuse or loss of control,” the report noted.
In response to the findings, both OpenAI and Google DeepMind stated their commitment to safety. OpenAI asserted, “Safety is core to how we build and deploy AI. We invest heavily in frontier safety research, build strong safeguards into our systems, and rigorously test our models, both internally and with independent experts.” Meanwhile, Google DeepMind emphasized its “rigorous, science-led approach to AI safety,” highlighting its Frontier Safety Framework aimed at mitigating risks from advanced AI models.
However, the Future of Life Institute’s report criticized xAI and Meta for lacking robust commitments to monitoring and control, despite having some risk-management frameworks in place. It also pointed out that companies like DeepSeek, Z.ai, and Alibaba Cloud provided little public documentation regarding their safety strategies. Meta, Z.ai, DeepSeek, Alibaba, and Anthropic did not respond to requests for comment.
In a statement, xAI dismissed the report as “Legacy Media Lies”; an attorney for Musk did not provide further comment. Although Musk has previously funded and advised the Future of Life Institute, Tegmark said Musk had no involvement in the AI Safety Index.
Tegmark expressed concerns about the potential ramifications of unregulated AI development. He warned that without sufficient oversight, AI could help create bioweapons, manipulate people more effectively, or destabilize governments. “Yes, we have big problems and things are going in a bad direction, but I want to emphasize how easy this is to fix,” he remarked, advocating for binding safety standards for AI companies.
While there have been some governmental efforts to enhance oversight of the AI sector, proposals have faced opposition from tech lobbying groups, which argue that excessive regulation might stifle innovation and drive companies to relocate. Nonetheless, legislative initiatives like California’s SB 53, signed by Governor Gavin Newsom, aim to improve monitoring of safety standards by requiring businesses to disclose their safety protocols and report incidents such as cyberattacks. Tegmark called this new law a positive step but stressed that much more action is necessary.
Rob Enderle, principal analyst at Enderle Group, noted that the AI Safety Index presents an intriguing approach to addressing the regulatory gap in the U.S. However, he cautioned that the current administration may struggle to devise effective regulations, raising concerns that poorly conceived rules could do more harm than good. “It’s also not clear that anybody has figured out how to put the teeth in the regulations to assure compliance,” he added.