Miles Brundage, a prominent former policy researcher at OpenAI, has officially launched the AI Verification and Evaluation Research Institute (AVERI), a nonprofit that advocates for external auditing of frontier AI models. Announced today, AVERI aims to establish standards for AI auditing and to advance the argument that AI companies should not be left to evaluate the safety and efficacy of their own systems.
The institute’s launch coincides with the release of a research paper coauthored by Brundage and over 30 experts in AI safety and governance. This paper outlines a detailed framework for how independent audits could be implemented for the companies creating some of the most powerful AI systems in the world. Brundage, who spent seven years at OpenAI and left the organization in October 2024, emphasized the urgent need for external accountability in AI development.
“One of the things I learned while working at OpenAI is that companies are figuring out the norms of this kind of thing on their own,” Brundage told Fortune. “There’s no one forcing them to work with third-party experts to make sure that things are safe and secure. They kind of write their own rules.” This lack of oversight raises significant risks, as consumers, businesses, and governments currently must rely on the assurances provided by AI labs regarding the safety of their products.
Brundage compared the situation to other industries where auditing is standard practice. “If you go out and buy a vacuum cleaner, you know there will be components in it, like batteries, that have been tested by independent laboratories according to rigorous safety standards to make sure it isn’t going to catch on fire,” he said, underscoring the necessity for similar practices in AI.
AVERI intends to advocate for policies that would facilitate the transition to rigorous external auditing. However, Brundage clarified that the organization does not plan to conduct audits itself. “We’re a think tank. We’re trying to understand and shape this transition,” he remarked. He suggested that existing public accounting and auditing firms could expand their services to include AI safety evaluations, or new startups could emerge to fill this role.
To support its mission, AVERI has raised $7.5 million toward a $13 million goal intended to cover two years of operating costs and a staff of 14. Funders include Halcyon Futures, Fathom, Coefficient Giving, and former Y Combinator president Geoff Ralston, among others. Brundage noted that some donations have come from current and former non-executive employees of leading AI companies who want greater accountability in the sector.
Brundage identified several potential mechanisms to encourage AI firms to engage independent auditors. Major businesses purchasing AI models may demand audits to ensure the products function as promised without hidden risks. Similarly, insurance companies might require audits as a condition for underwriting policies, particularly for businesses relying on AI for critical operations.
“Insurance is certainly moving quickly,” he stated, highlighting discussions with insurers, including the AI Underwriting Company, which has contributed to AVERI. Investors may also push for independent audits to mitigate risks associated with their financial commitments to AI companies. As some leading AI labs prepare for public offerings, the absence of auditors could expose them to significant liabilities if issues arise that affect their market value.
While the U.S. currently lacks federal regulation governing AI, international efforts are underway. The recently enacted EU AI Act outlines requirements that could pave the way for external evaluations, particularly for AI systems deemed to pose systemic risks. Although the Act does not explicitly mandate audits, it indicates that organizations deploying AI in high-risk applications may need to undergo external assessments.
The accompanying research paper offers a comprehensive vision for frontier AI auditing, proposing a framework of “AI Assurance Levels” ranging from Level 1, which involves limited third-party testing, to Level 4, which provides “treaty grade” assurance suitable for international agreements on AI safety. One open challenge is building a sufficient pool of qualified auditors: the necessary expertise is rare, and the people who have it are often drawn to lucrative positions at the AI companies themselves.
Brundage acknowledged the difficulties but expressed optimism about building diverse teams to meet the auditing challenge. “You might have some people from an existing audit firm, plus some from a penetration testing firm from cybersecurity, plus some from one of the AI safety nonprofits, plus maybe an academic,” he explained.
In many industries, standards emerge only in response to a crisis. Brundage hopes to establish auditing norms for AI before a crisis forces the issue. “The goal, from my perspective, is to get to a level of scrutiny that is proportional to the actual impacts and risks of the technology, as smoothly as possible, as quickly as possible, without overstepping,” he concluded.