AI Regulation

Former OpenAI Chief Launches AVERI Institute to Push for AI Safety Audits

Miles Brundage launches AVERI with $7.5M funding to push for independent audits of AI models, advocating for external accountability in AI safety.

Miles Brundage, a prominent former policy researcher at OpenAI, has officially launched the AI Verification and Evaluation Research Institute (AVERI), a nonprofit organization advocating for external auditing of frontier AI models. Announced today, AVERI aims to establish standards for AI auditing and to promote the idea that AI companies should not be solely responsible for evaluating the safety and efficacy of their own systems.

The institute’s launch coincides with the release of a research paper coauthored by Brundage and over 30 experts in AI safety and governance. This paper outlines a detailed framework for how independent audits could be implemented for the companies creating some of the most powerful AI systems in the world. Brundage, who spent seven years at OpenAI and left the organization in October 2024, emphasized the urgent need for external accountability in AI development.

“One of the things I learned while working at OpenAI is that companies are figuring out the norms of this kind of thing on their own,” Brundage told Fortune. “There’s no one forcing them to work with third-party experts to make sure that things are safe and secure. They kind of write their own rules.” This lack of oversight raises significant risks, as consumers, businesses, and governments currently must rely on the assurances provided by AI labs regarding the safety of their products.

Brundage compared the situation to other industries where auditing is standard practice. “If you go out and buy a vacuum cleaner, you know there will be components in it, like batteries, that have been tested by independent laboratories according to rigorous safety standards to make sure it isn’t going to catch on fire,” he said, underscoring the necessity for similar practices in AI.

AVERI intends to advocate for policies that would facilitate the transition to rigorous external auditing. However, Brundage clarified that the organization does not plan to conduct audits itself. “We’re a think tank. We’re trying to understand and shape this transition,” he remarked. He suggested that existing public accounting and auditing firms could expand their services to include AI safety evaluations, or new startups could emerge to fill this role.

To support its mission, AVERI has raised $7.5 million toward a $13 million goal, enough to cover two years of operations and a staff of 14. Funders include Halcyon Futures, Fathom, Coefficient Giving, and former Y Combinator president Geoff Ralston, among others. Brundage noted that some donations have come from current and former non-executive employees of leading AI companies who want increased accountability within the sector.

Brundage identified several potential mechanisms to encourage AI firms to engage independent auditors. Major businesses purchasing AI models may demand audits to ensure the products function as promised without hidden risks. Similarly, insurance companies might require audits as a condition for underwriting policies, particularly for businesses relying on AI for critical operations.

“Insurance is certainly moving quickly,” he stated, highlighting discussions with insurers, including the AI Underwriting Company, which has contributed to AVERI. Investors may also push for independent audits to mitigate risks associated with their financial commitments to AI companies. As some leading AI labs prepare for public offerings, the absence of auditors could expose them to significant liabilities if issues arise that affect their market value.

While the U.S. currently lacks federal regulation governing AI, international efforts are underway. The EU AI Act, recently enacted, outlines requirements that could pave the way for external evaluations, particularly for AI systems deemed to pose systemic risks. Although the Act does not explicitly mandate audits, it suggests that organizations deploying AI in high-risk applications must undergo external assessments.

The accompanying research paper offers a comprehensive vision for frontier AI auditing, proposing a framework of “AI Assurance Levels” ranging from Level 1, which involves limited third-party testing, to Level 4, which provides “treaty grade” assurance for international agreements on AI safety. The paper also flags a supply problem: building a sufficient pool of qualified auditors will be difficult, as the necessary expertise is rare and often draws individuals to lucrative positions within AI companies themselves.

Brundage acknowledged the difficulties but expressed optimism about building diverse teams to meet the auditing challenge. “You might have some people from an existing audit firm, plus some from a penetration testing firm from cybersecurity, plus some from one of the AI safety nonprofits, plus maybe an academic,” he explained.

In numerous industries, standards often emerge in response to crises. Brundage hopes to preemptively establish auditing norms for AI before a similar situation arises. “The goal, from my perspective, is to get to a level of scrutiny that is proportional to the actual impacts and risks of the technology, as smoothly as possible, as quickly as possible, without overstepping,” he concluded.

Written By: AiPressa Staff


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.