Top Stories

AI Labs Like Meta and DeepSeek Score D’s and F’s on Existential Safety Index Report

Major AI labs, including Meta and DeepSeek, receive alarming D and F grades for existential safety, highlighting urgent regulatory needs in the industry.

A recent assessment by the Future of Life Institute has revealed that major AI labs largely fall short of standards of AI responsibility, with no company scoring above a C+ overall. The report evaluated eight prominent companies on metrics including safety frameworks, risk assessment, and the mitigation of current harms associated with their technologies.

One of the most alarming findings was in the category of “existential safety,” where the evaluated companies collectively scored Ds and Fs. This is particularly concerning given that many of these organizations are actively pursuing the development of superintelligent AI without a comprehensive plan for its safe management, according to Max Tegmark, a professor at MIT and president of the Future of Life Institute. “Reviewers found this kind of jarring,” he stated.

The evaluation panel comprised AI academics and governance experts who reviewed publicly available information as well as survey responses from five of the eight companies. In the rankings, Anthropic, OpenAI, and Google DeepMind occupied the top three positions, achieving overall grades of C+ or C. The remaining companies, xAI, Z.ai, Meta, DeepSeek, and Alibaba, received Ds or a D-.

Tegmark attributes the lackluster performance to insufficient regulation, suggesting that fierce competition among AI firms often prioritizes speed over safety. California has recently enacted legislation requiring frontier AI companies to disclose information about catastrophic risks, and New York is close to implementing its own rules. However, prospects for comprehensive federal legislation remain bleak.

“Companies have an incentive, even if they have the best intentions, to always rush out new products before the competitor does, as opposed to necessarily putting in a lot of time to make it safe,” Tegmark explained. In the absence of mandated standards, the industry has begun to take the Future of Life Institute’s safety index more seriously, with four out of five American companies now responding to its surveys. Notably, Meta is the only major player that has yet to participate. Improvements have been noted, including Google’s enhanced transparency regarding its whistleblower policies.

The stakes surrounding AI safety have escalated as incidents related to the technology have surfaced, including reports of chatbots allegedly encouraging teen suicides, inappropriate interactions with minors, and significant cyberattacks. “These have really made a lot of people realize that this isn’t the future we’re talking about—it’s now,” Tegmark said.

The Future of Life Institute has garnered support from a range of public figures, including Prince Harry, Meghan Markle, former Trump aide Steve Bannon, Apple co-founder Steve Wozniak, and rapper Will.i.am, who signed a statement opposing developments that could lead to superintelligence. Tegmark advocates for a regulatory framework akin to the FDA for food and drugs, where companies would need to demonstrate that their models are safe before they can be brought to market.

“The AI industry is quite unique in that it’s the only industry in the US making powerful technology that’s less regulated than sandwiches—basically not regulated at all,” Tegmark remarked. “If someone says, ‘I want to open a new sandwich shop near Times Square,’ they must have their kitchen inspected by health officials. Yet, if you say, ‘I’m going to release superintelligence,’ there are no checks or approvals needed.”

Tegmark concluded that the solution is apparent: “You just stop this corporate welfare of giving AI companies exemptions that no other companies get.” As the landscape of AI continues to evolve rapidly, the call for robust regulatory measures becomes increasingly urgent, underscoring the need for accountability in the face of potential risks and harms.

Written by the AiPressa Staff

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.