OpenAI Chief Global Affairs Officer Chris Lehane recently shared insights on LinkedIn regarding the ongoing national debate over the regulation of frontier AI models. His comments emphasize the need for a cohesive regulatory approach that prioritizes safety while sustaining the United States’ innovation edge in AI technology. Lehane argues that “deploying frontier models safely and in a way that best positions the US to maintain its innovation lead” should be the guiding principle for any regulatory framework.
Lehane’s remarks come amid growing uncertainty over whether federal legislation, state action, or executive authority should serve as the primary means of setting safety standards for frontier AI models. He contends that only the federal government has access to the classified systems needed to test these models effectively and thereby prevent potential harm before deployment. “Frontier models are tested for their safety on classified systems, which only the federal government has access to,” he explains. “States, companies, and nonprofits don’t have such access.”
Highlighting OpenAI’s role in these federal processes, Lehane noted that the company has published a preparedness framework and was among the first AI labs to enter into a voluntary agreement with the federal government, specifically through the Center for AI Standards and Innovation (CAISI). CAISI, originally established as the AI Safety Institute under the Biden Administration and renamed under the Trump Administration, conducts comprehensive safety testing of AI models.
Lehane argues that this federal capability supports a prevention-first model rather than relying solely on accountability after harm has occurred. He points out that several states have enacted their own frontier safety laws but emphasizes their structural limitations. While he acknowledges that “these laws have some positive benefits,” he criticizes their reliance on liability, asserting that they tend to be reactive rather than preventative. “State laws are all based on a liability approach (hold a company accountable after harm has occurred) and not a prevention approach (stopping the harm from happening in the first place),” he remarked.
According to Lehane, because state authorities cannot access the classified systems used for safety testing, they are unable to conduct the evaluations needed to mitigate the risks that frontier models pose. The result, he argues, is a patchwork of inconsistent regulatory requirements that leaves the core safety concerns unaddressed.
To create a unified national safety framework without imposing undue regulatory burdens on smaller AI companies, Lehane proposes three potential pathways. The first involves federal legislation that would enable frontier model testing through CAISI and establish national standards while allowing states to legislate in other areas. The second pathway suggests that states could voluntarily align their requirements with federal testing protocols. He cites California as an example of a state already moving in this direction and indicates that if New York were to follow suit, the combined influence of these states could help establish a national standard, a concept he describes as a kind of “reverse federalism.”
The third pathway is the issuance of an executive order that would exempt companies participating in voluntary CAISI testing and reporting from state-level frontier safety regulations. Lehane argues that all three approaches ultimately aim for the same goal, stating, “All three of these paths get us to our North Star: safely deploying our frontier models while keeping America’s innovation lead.”
The discourse around AI regulation continues to evolve, with key stakeholders weighing the balance between innovation and safety. As companies like OpenAI navigate this complex landscape, the outcomes of these regulatory discussions will have lasting implications for the future of artificial intelligence in various sectors.