AI Safety Index Reveals OpenAI, Anthropic Score C+, Meta and xAI Earn D Grades

OpenAI and Anthropic earn C+ ratings in the AI Safety Index, while Meta and xAI score D, highlighting urgent safety concerns in unregulated AI development.

Artificial intelligence companies may not be prioritizing the safety of humanity amid growing concerns about the potential harms of their technologies, according to a new report card released by the Future of Life Institute. The Silicon Valley-based nonprofit published its AI Safety Index on Wednesday, highlighting the industry’s lack of regulation and the insufficient incentives for companies to enhance safety measures.

As AI increasingly mediates how people interact with technology, risks are already surfacing, including chatbots misused as substitutes for counseling, with tragic outcomes such as suicide, and AI deployed in cyberattacks. The report also raises alarms about future threats, including the potential for AI to help develop weapons or overthrow governments.

Max Tegmark, president of the Future of Life Institute and a professor at MIT, emphasized the urgency of the situation. “They are the only industry in the U.S. making powerful technology that’s completely unregulated, so that puts them in a race to the bottom against each other where they just don’t have the incentives to prioritize safety,” Tegmark stated.

The highest grades in the index were a C+, awarded to two San Francisco-based companies: OpenAI, known for its ChatGPT model, and Anthropic, which produces the AI chatbot Claude. Google’s AI division, Google DeepMind, received a C, while Meta, the parent company of Facebook, and xAI, founded by Elon Musk, earned D ratings. Chinese companies Z.ai and DeepSeek also scored a D, with Alibaba Cloud receiving the lowest grade of D-.

The overall scores were derived from an assessment of 35 indicators across six categories, including existential safety, risk assessment, and information sharing. The findings were based on evidence from publicly available sources and surveys completed by the companies themselves. A panel of eight AI experts conducted the grading, with members drawn from academia and AI-related organizations.

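The article does not spell out how the 35 indicators are scored or weighted, but the aggregation it describes, indicator scores rolled up into category scores and then into an overall letter grade, can be illustrated with a short sketch. Everything below (the 0-to-4 scale, the grade boundaries, the equal weighting, the example scores) is a hypothetical assumption for illustration, not the Future of Life Institute's published methodology.

```python
# Illustrative sketch only: per-indicator scores are averaged within each
# category, category averages are combined into an overall score, and that
# score is mapped to a letter grade. The scale, weighting, and grade cutoffs
# are invented assumptions, not the index's actual methodology.

from statistics import mean

# Hypothetical 0-4 scale mapped to letter grades, highest cutoff first.
GRADE_BOUNDARIES = [
    (3.85, "A"), (3.5, "A-"), (3.15, "B+"), (2.85, "B"), (2.5, "B-"),
    (2.15, "C+"), (1.85, "C"), (1.5, "C-"), (1.15, "D+"), (0.85, "D"),
    (0.5, "D-"),
]

def to_letter(score: float) -> str:
    """Return the letter grade for the first cutoff the score meets."""
    for cutoff, letter in GRADE_BOUNDARIES:
        if score >= cutoff:
            return letter
    return "F"

def grade_company(indicators_by_category: dict[str, list[float]]) -> str:
    """Average indicators within each category, then average the categories."""
    category_scores = [mean(scores) for scores in indicators_by_category.values()]
    return to_letter(mean(category_scores))

# Toy example with invented scores for three of the six categories.
example = {
    "existential_safety": [1.0, 1.5, 2.0],
    "risk_assessment": [2.5, 3.0],
    "information_sharing": [2.0, 2.5, 3.0],
}
print(grade_company(example))  # prints "C+" with this invented data
```
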
Notably, all companies ranked below average in the existential safety category, which evaluates internal monitoring, control interventions, and safety strategy. “While companies accelerate their AGI and superintelligence ambitions, none has demonstrated a credible plan for preventing catastrophic misuse or loss of control,” the report noted.

In response to the findings, both OpenAI and Google DeepMind stated their commitment to safety. OpenAI asserted, “Safety is core to how we build and deploy AI. We invest heavily in frontier safety research, build strong safeguards into our systems, and rigorously test our models, both internally and with independent experts.” Meanwhile, Google DeepMind emphasized its “rigorous, science-led approach to AI safety,” highlighting its Frontier Safety Framework aimed at mitigating risks from advanced AI models.

However, the Future of Life Institute’s report criticized xAI and Meta for lacking robust commitments to monitoring and control, despite having some risk-management frameworks in place. It also pointed out that companies like DeepSeek, Z.ai, and Alibaba Cloud provided little public documentation regarding their safety strategies. Meta, Z.ai, DeepSeek, Alibaba, and Anthropic did not respond to requests for comment.

In a statement, xAI dismissed the report as “Legacy Media Lies”; an attorney for Musk did not provide additional comment. Although Musk previously funded and advised the Future of Life Institute, Tegmark clarified that Musk was not involved in the AI Safety Index.

Tegmark expressed concerns about the potential ramifications of unregulated AI development. He warned that without sufficient oversight, AI could help create bioweapons, manipulate individuals more effectively, or destabilize governments. “Yes, we have big problems and things are going in a bad direction, but I want to emphasize how easy this is to fix,” he remarked, advocating binding safety standards for AI companies.

While there have been some governmental efforts to enhance oversight of the AI sector, proposals have faced opposition from tech lobbying groups, which argue that excessive regulation might stifle innovation and drive companies to relocate. Nonetheless, legislative initiatives like California’s SB 53, signed by Governor Gavin Newsom, aim to improve monitoring of safety standards by requiring businesses to disclose their safety protocols and report incidents such as cyberattacks. Tegmark called this new law a positive step but stressed that much more action is necessary.

Rob Enderle, principal analyst at Enderle Group, noted that the AI Safety Index presents an intriguing approach to addressing the regulatory gap in the U.S. However, he cautioned that the current administration may struggle to devise effective regulations, raising concerns that poorly conceived rules could do more harm than good. “It’s also not clear that anybody has figured out how to put the teeth in the regulations to assure compliance,” he added.
