
AI Safety Index Reveals OpenAI, Anthropic Score C+, Meta and xAI Earn D Grades

OpenAI and Anthropic earn C+ ratings in the AI Safety Index, while Meta and xAI score D, highlighting urgent safety concerns in unregulated AI development.

Artificial intelligence companies may not be prioritizing the safety of humanity amid growing concerns about the potential harms of their technologies, according to a new report card released by the Future of Life Institute. The Silicon Valley-based nonprofit published its AI Safety Index on Wednesday, highlighting the industry’s lack of regulation and the insufficient incentives for companies to enhance safety measures.

As AI increasingly mediates how people interact with technology, risks are already surfacing: chatbots pressed into service as counselors have been linked to tragic outcomes, including suicide, and AI has been used in cyberattacks. The report also raises alarms about future threats, including the potential for AI to help develop weapons or to facilitate the overthrow of governments.

Max Tegmark, president of the Future of Life Institute and a professor at MIT, emphasized the urgency of the situation. “They are the only industry in the U.S. making powerful technology that’s completely unregulated, so that puts them in a race to the bottom against each other where they just don’t have the incentives to prioritize safety,” Tegmark stated.

The highest grade in the index was a C+, awarded to two San Francisco-based companies: OpenAI, maker of ChatGPT, and Anthropic, which produces the chatbot Claude. Google’s AI division, Google DeepMind, received a C, while Meta, the parent company of Facebook, and xAI, founded by Elon Musk, earned Ds. Chinese companies Z.ai and DeepSeek also scored Ds, with Alibaba Cloud receiving the lowest grade, a D-.

The overall scores were derived from an assessment of 35 indicators across six categories, including existential safety, risk assessment, and information sharing. The findings were based on evidence from publicly available sources and surveys completed by the companies themselves. A panel of eight AI experts conducted the grading, with members drawn from academia and AI-related organizations.
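The report does not spell out how the 35 indicators roll up into a single letter grade. Purely as illustration, here is a minimal sketch of one plausible aggregation, assuming equal category weighting and conventional letter-grade cutoffs; the category names beyond the three listed above and all numeric scores are invented for this example and do not come from the index itself:

```python
# Hypothetical sketch of how an index like this might compute letter grades.
# The Future of Life Institute's actual weighting and cutoffs are not given
# in this article; everything below is an assumption for illustration only.

# Assumed numeric-to-letter cutoffs on a conventional 4-point scale.
GRADE_CUTOFFS = [
    (4.0, "A"), (3.7, "A-"), (3.3, "B+"), (3.0, "B"), (2.7, "B-"),
    (2.3, "C+"), (2.0, "C"), (1.7, "C-"), (1.3, "D+"), (1.0, "D"),
    (0.7, "D-"),
]

def to_letter(score: float) -> str:
    """Convert a numeric score to a letter grade using the assumed cutoffs."""
    for cutoff, letter in GRADE_CUTOFFS:
        if score >= cutoff:
            return letter
    return "F"

def overall_grade(category_scores: dict[str, float]) -> str:
    """Average the six category scores equally (an assumption) into one grade."""
    mean = sum(category_scores.values()) / len(category_scores)
    return to_letter(mean)

# Invented category scores for a single hypothetical company.
example = {
    "existential_safety": 1.0,   # the category where all companies lagged
    "risk_assessment": 2.7,
    "information_sharing": 2.3,
    "current_harms": 2.5,        # hypothetical category name
    "governance": 2.4,           # hypothetical category name
    "safety_frameworks": 2.9,    # hypothetical category name
}
print(overall_grade(example))   # mean is 2.3 -> "C+" with these invented numbers
```

With these invented numbers, a single weak category such as existential safety drags an otherwise middling scorecard down, which is consistent with the report's emphasis on that category.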

Notably, all companies ranked below average in the existential safety category, which evaluates internal monitoring, control interventions, and safety strategy. “While companies accelerate their AGI and superintelligence ambitions, none has demonstrated a credible plan for preventing catastrophic misuse or loss of control,” the report noted.

In response to the findings, both OpenAI and Google DeepMind stated their commitment to safety. OpenAI asserted, “Safety is core to how we build and deploy AI. We invest heavily in frontier safety research, build strong safeguards into our systems, and rigorously test our models, both internally and with independent experts.” Meanwhile, Google DeepMind emphasized its “rigorous, science-led approach to AI safety,” highlighting its Frontier Safety Framework aimed at mitigating risks from advanced AI models.

However, the Future of Life Institute’s report criticized xAI and Meta for lacking robust commitments to monitoring and control, despite having some risk-management frameworks in place. It also pointed out that companies like DeepSeek, Z.ai, and Alibaba Cloud provided little public documentation regarding their safety strategies. Meta, Z.ai, DeepSeek, Alibaba, and Anthropic did not respond to requests for comment.

In a statement, xAI dismissed the report as “Legacy Media Lies”; an attorney for Musk did not provide further comment. Despite Musk’s past funding of, and advisory role with, the Future of Life Institute, Tegmark clarified that Musk was not involved in the AI Safety Index.

Tegmark expressed concerns about the potential ramifications of unregulated AI development. He warned that without sufficient oversight, AI could help create bioweapons, manipulate individuals more effectively, or destabilize governments. “Yes, we have big problems and things are going in a bad direction, but I want to emphasize how easy this is to fix,” he remarked, advocating for binding safety standards for AI companies.

While there have been some governmental efforts to enhance oversight of the AI sector, proposals have faced opposition from tech lobbying groups, which argue that excessive regulation might stifle innovation and drive companies to relocate. Nonetheless, legislative initiatives like California’s SB 53, signed by Governor Gavin Newsom, aim to improve monitoring of safety standards by requiring businesses to disclose their safety protocols and report incidents such as cyberattacks. Tegmark called this new law a positive step but stressed that much more action is necessary.

Rob Enderle, principal analyst at Enderle Group, noted that the AI Safety Index presents an intriguing approach to addressing the regulatory gap in the U.S. However, he cautioned that the current administration may struggle to devise effective regulations, raising concerns that poorly conceived rules could do more harm than good. “It’s also not clear that anybody has figured out how to put the teeth in the regulations to assure compliance,” he added.

