The United States Department of Transportation (DOT) is planning to use Google Gemini, a large language model (LLM), to draft federal transportation regulations, according to a ProPublica report. The initiative aims to streamline the traditionally lengthy regulatory process, with agency officials estimating that LLMs could handle “80% to 90%” of the drafting work typically performed by legal and policy experts. As the agency’s general counsel put it, “it shouldn’t take you more than 20 minutes to get a draft rule out of Gemini.” DOT aims to be at the forefront of a broader federal push to employ LLMs for expedited rulemaking, consistent with earlier reports of the administration’s efforts to significantly reduce federal regulations.
While proponents of the initiative express optimism about the potential efficiency gains, skepticism remains among legal and policy experts. Critics argue that over-reliance on LLMs could expose agencies to significant legal and policy risks. The DOT’s approach suggests a willingness to accept these risks in exchange for faster rule generation. The agency’s general counsel controversially stated, “We don’t need a perfect rule on XYZ. We don’t even need a very good rule on XYZ. We want good enough. We’re flooding the zone.” This signals a departure from traditional standards of governance that prioritize thoroughness and accuracy.
Federal rulemaking is governed by the Administrative Procedure Act (APA), which requires agencies to publish detailed proposals and solicit public comment before finalizing regulations. The process typically involves extensive documentation, often running hundreds of pages, and is subject to judicial review: under the APA, courts may set aside rules they find arbitrary or capricious, adding a further layer of scrutiny to agency decision-making.
The incorporation of LLMs into this framework could provide significant advantages, as these models can generate vast quantities of text rapidly, even on complex topics. Historically, federal administrations from both parties have sought to integrate AI tools into their operations, an effort that has intensified in recent years. However, the DOT’s initiative appears to go further, with plans for LLMs to take on a more central role in decision-making, relegating human oversight to a mere monitoring function.
Concerns also center on the accuracy and reliability of AI-generated output. LLMs are prone to errors, including “hallucination,” in which a model generates plausible-sounding but false information. These systems can also amplify biases present in their training data, producing flawed policy outcomes. The risk of erroneous regulations is especially acute in high-stakes areas such as transportation safety, where incorrect decisions can have serious consequences.
Critics argue that the DOT’s strategy raises fundamental questions about the role of human expertise in the regulatory process. Federal law imposes obligations on agencies to consider relevant factors and provide thorough analyses for their decisions. Relying heavily on LLM outputs risks undermining these standards, as the models may not adequately address the nuances and complexities inherent in regulatory issues. Courts have consistently maintained that ultimate responsibility for policy decisions rests with the agency, not with automated systems.
Amid these concerns, the DOT’s general counsel’s statement that LLM-generated rules do not need to be “perfect” or even “very good” raises alarms about the agency’s commitment to regulatory integrity. This approach suggests a troubling trend toward prioritizing expediency over sound governance, potentially jeopardizing public safety and legal compliance.
Advocates and legal experts are likely to monitor DOT’s forthcoming rule proposals closely, scrutinizing them for errors that could invite judicial challenges under the arbitrary-and-capricious standard. There are also growing calls for transparency regarding the use of AI in regulatory processes, with agencies urged to disclose when they have employed LLMs in drafting regulations.
The ProPublica report serves as a critical reminder of the potential pitfalls of integrating AI into regulatory frameworks without sufficient oversight. As the DOT embarks on this ambitious project, the broader implications for governance, legal accountability, and public safety remain significant points of contention. The ongoing dialogue surrounding these issues will be crucial as federal agencies navigate the complexities of modern technology and its role in shaping policy.
See also
OpenAI’s Rogue AI Safeguards: Decoding the 2025 Safety Revolution
US AI Developments in 2025 Set Stage for 2026 Compliance Challenges and Strategies
Trump Drafts Executive Order to Block State AI Regulations, Centralizing Authority Under Federal Control
California Court Rules AI Misuse Heightens Lawyer’s Responsibilities in Noland Case
Policymakers Urged to Establish Comprehensive Regulations for AI in Mental Health