A recent assessment by the Future of Life Institute has revealed that major AI labs have largely fallen short in adhering to standards of AI responsibility, with most receiving grades that barely rise above a C. The report evaluated eight prominent companies on various metrics, including safety frameworks, risk assessment, and the mitigation of current harms associated with their technologies.
One of the most alarming findings was in the category of “existential safety,” where the evaluated companies collectively scored Ds and Fs. This is particularly concerning given that many of these organizations are actively pursuing the development of superintelligent AI without a comprehensive plan for its safe management, according to Max Tegmark, a professor at MIT and president of the Future of Life Institute. “Reviewers found this kind of jarring,” he stated.
The evaluation panel comprised AI academics and governance experts who reviewed publicly available information as well as survey responses from five of the eight companies. In the rankings, Anthropic, OpenAI, and Google DeepMind occupied the top three positions, achieving overall grades of C+ or C. The remaining companies, including xAI, Z.ai, Meta, DeepSeek, and Alibaba, received Ds or a D-.
Tegmark attributes the lackluster performance to insufficient regulation, suggesting that the fierce competition among AI firms often prioritizes speed over safety. California has recently enacted legislation requiring frontier AI companies to disclose information regarding catastrophic risks, while New York is similarly close to implementing its own rules. However, prospects for comprehensive federal legislation remain bleak.
“Companies have an incentive, even if they have the best intentions, to always rush out new products before the competitor does, as opposed to necessarily putting in a lot of time to make it safe,” Tegmark explained. In the absence of mandated standards, the industry has begun to take the Future of Life Institute’s safety index more seriously, with four out of five American companies now responding to its surveys. Notably, Meta is the only major player that has yet to participate. Improvements have been noted, including Google’s enhanced transparency regarding its whistleblower policies.
The stakes surrounding AI safety have escalated as incidents related to the technology have surfaced, including reports of chatbots allegedly encouraging teen suicides, inappropriate interactions with minors, and significant cyberattacks. “These have really made a lot of people realize that this isn’t the future we’re talking about—it’s now,” Tegmark said.
The Future of Life Institute has garnered support from a range of public figures, including Prince Harry, Meghan Markle, former Trump aide Steve Bannon, Apple co-founder Steve Wozniak, and rapper Will.i.am, who signed a statement opposing developments that could lead to superintelligence. Tegmark advocates for a regulatory framework akin to the FDA for food and drugs, where companies would need to demonstrate that their models are safe before they can be brought to market.
“The AI industry is quite unique in that it’s the only industry in the US making powerful technology that’s less regulated than sandwiches—basically not regulated at all,” Tegmark remarked. “If someone says, ‘I want to open a new sandwich shop near Times Square,’ they must have their kitchen inspected by health officials. Yet, if you say, ‘I’m going to release superintelligence,’ there are no checks or approvals needed.”
Tegmark concluded that the solution is apparent: “You just stop this corporate welfare of giving AI companies exemptions that no other companies get.” As the landscape of AI continues to evolve rapidly, the call for robust regulatory measures becomes increasingly urgent, underscoring the need for accountability in the face of potential risks and harms.