
AI Labs Like Meta and DeepSeek Score D’s and F’s on Existential Safety Index Report

Major AI labs, including Meta and DeepSeek, receive alarming D and F grades for existential safety, highlighting urgent regulatory needs in the industry.

A recent assessment by the Future of Life Institute has found that major AI labs largely fall short of responsible-AI standards, with most receiving grades that barely rise above a C. The report evaluated eight prominent companies on metrics including safety frameworks, risk assessment, and the mitigation of current harms associated with their technologies.

One of the most alarming findings was in the category of “existential safety,” where the evaluated companies collectively scored Ds and Fs. This is particularly concerning given that many of these organizations are actively pursuing the development of superintelligent AI without a comprehensive plan for its safe management, according to Max Tegmark, a professor at MIT and president of the Future of Life Institute. “Reviewers found this kind of jarring,” he stated.

The evaluation panel comprised AI academics and governance experts who reviewed publicly available information as well as survey responses from five of the eight companies. In the rankings, Anthropic, OpenAI, and Google DeepMind occupied the top three positions, achieving overall grades of C+ or C. The remaining companies, including xAI, Z.ai, Meta, DeepSeek, and Alibaba, received Ds or a D-.

Tegmark attributes the lackluster performance to insufficient regulation, arguing that fierce competition pushes AI firms to prioritize speed over safety. California has recently enacted legislation requiring frontier AI companies to disclose information about catastrophic risks, and New York is close to implementing its own rules. Prospects for comprehensive federal legislation, however, remain bleak.

“Companies have an incentive, even if they have the best intentions, to always rush out new products before the competitor does, as opposed to necessarily putting in a lot of time to make it safe,” Tegmark explained. In the absence of mandated standards, the industry has begun to take the Future of Life Institute’s safety index more seriously, with four out of five American companies now responding to its surveys. Notably, Meta is the only major player that has yet to participate. Improvements have been noted, including Google’s enhanced transparency regarding its whistleblower policies.

The stakes surrounding AI safety have escalated as incidents related to the technology have surfaced, including reports of chatbots allegedly encouraging teen suicides, inappropriate interactions with minors, and significant cyberattacks. “These have really made a lot of people realize that this isn’t the future we’re talking about—it’s now,” Tegmark said.

The Future of Life Institute has garnered support from a range of public figures, including Prince Harry, Meghan Markle, former Trump aide Steve Bannon, Apple co-founder Steve Wozniak, and musician will.i.am, all of whom signed a statement opposing developments that could lead to superintelligence. Tegmark advocates a regulatory framework akin to the FDA's oversight of food and drugs, in which companies would need to demonstrate that their models are safe before bringing them to market.

“The AI industry is quite unique in that it’s the only industry in the US making powerful technology that’s less regulated than sandwiches—basically not regulated at all,” Tegmark remarked. “If someone says, ‘I want to open a new sandwich shop near Times Square,’ they must have their kitchen inspected by health officials. Yet, if you say, ‘I’m going to release superintelligence,’ there are no checks or approvals needed.”

Tegmark concluded that the solution is apparent: “You just stop this corporate welfare of giving AI companies exemptions that no other companies get.” As the landscape of AI continues to evolve rapidly, the call for robust regulatory measures becomes increasingly urgent, underscoring the need for accountability in the face of potential risks and harms.

Written by the AiPressa Staff

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.

