
AI Labs Like Meta and DeepSeek Score Ds and Fs for Existential Safety in Index Report

Major AI labs, including Meta and DeepSeek, receive alarming D and F grades for existential safety, highlighting urgent regulatory needs in the industry.

A recent assessment by the Future of Life Institute has found that major AI labs largely fall short of AI responsibility standards, with no company scoring above a C+. The report evaluated eight prominent companies on metrics including safety frameworks, risk assessment, and the mitigation of current harms associated with their technologies.

One of the most alarming findings was in the category of “existential safety,” where the evaluated companies collectively scored Ds and Fs. This is particularly concerning given that many of these organizations are actively pursuing the development of superintelligent AI without a comprehensive plan for its safe management, according to Max Tegmark, a professor at MIT and president of the Future of Life Institute. “Reviewers found this kind of jarring,” he stated.

The evaluation panel comprised AI academics and governance experts who reviewed publicly available information as well as survey responses from five of the eight companies. Anthropic, OpenAI, and Google DeepMind occupied the top three positions, achieving overall grades of C+ or C. The remaining companies, xAI, Z.ai, Meta, DeepSeek, and Alibaba, received grades of D or D-.

Tegmark attributes the lackluster performance to insufficient regulation, arguing that fierce competition among AI firms prioritizes speed over safety. California has recently enacted legislation requiring frontier AI companies to disclose information regarding catastrophic risks, and New York is close to implementing its own rules. However, prospects for comprehensive federal legislation remain bleak.

“Companies have an incentive, even if they have the best intentions, to always rush out new products before the competitor does, as opposed to necessarily putting in a lot of time to make it safe,” Tegmark explained. In the absence of mandated standards, the industry has begun to take the Future of Life Institute’s safety index more seriously, with four out of five American companies now responding to its surveys. Notably, Meta is the only major player that has yet to participate. Improvements have been noted, including Google’s enhanced transparency regarding its whistleblower policies.

The stakes surrounding AI safety have escalated as incidents related to the technology have surfaced, including reports of chatbots allegedly encouraging teen suicides, inappropriate interactions with minors, and significant cyberattacks. “These have really made a lot of people realize that this isn’t the future we’re talking about—it’s now,” Tegmark said.

The Future of Life Institute has garnered support from a range of public figures, including Prince Harry, Meghan Markle, former Trump aide Steve Bannon, Apple co-founder Steve Wozniak, and rapper will.i.am, all of whom signed a statement opposing the development of superintelligence. Tegmark advocates for a regulatory framework akin to the FDA's oversight of food and drugs, under which companies would need to demonstrate that their models are safe before bringing them to market.

“The AI industry is quite unique in that it’s the only industry in the US making powerful technology that’s less regulated than sandwiches—basically not regulated at all,” Tegmark remarked. “If someone says, ‘I want to open a new sandwich shop near Times Square,’ they must have their kitchen inspected by health officials. Yet, if you say, ‘I’m going to release superintelligence,’ there are no checks or approvals needed.”

Tegmark concluded that the solution is apparent: “You just stop this corporate welfare of giving AI companies exemptions that no other companies get.” As the landscape of AI continues to evolve rapidly, the call for robust regulatory measures becomes increasingly urgent, underscoring the need for accountability in the face of potential risks and harms.

Written By

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.
