DOT Plans to Use Google Gemini for Up to 90% of Federal Rulemaking, Ignoring Legal Risks

DOT plans to use Google Gemini to draft 80-90% of federal regulations, risking legal integrity for expedited rulemaking in transportation policy.

The United States Department of Transportation (DOT) is reportedly planning to utilize Google Gemini, a large language model (LLM), to draft federal transportation regulations, according to a ProPublica report. This initiative aims to streamline the traditionally lengthy regulatory process, with agency officials estimating that LLMs could handle “80% to 90%” of the drafting work typically conducted by legal and policy experts. As the agency’s general counsel noted, “it shouldn’t take you more than 20 minutes to get a draft rule out of Gemini.” DOT aims to be at the forefront of a broader federal push to employ LLMs for expedited rulemaking, aligning with prior reports of efforts by the administration to reduce federal regulations significantly.

While proponents of the initiative express optimism about the potential efficiency gains, skepticism remains among legal and policy experts. Critics argue that over-reliance on LLMs could expose agencies to significant legal and policy risks. The DOT’s approach suggests a willingness to accept these risks in exchange for faster rule generation. The agency’s general counsel controversially stated, “We don’t need a perfect rule on XYZ. We don’t even need a very good rule on XYZ. We want good enough. We’re flooding the zone.” This signals a departure from traditional standards of governance that prioritize thoroughness and accuracy.

Federal rulemaking is governed by the Administrative Procedure Act (APA), which requires agencies to issue detailed proposals and solicit public feedback before finalizing regulations. The process typically involves extensive documentation, often running hundreds of pages, and is subject to rigorous judicial review. The APA mandates that courts assess whether regulations are arbitrary or capricious, which adds a layer of scrutiny to the agency’s decision-making processes.

The incorporation of LLMs into this framework could provide significant advantages, as these models can generate vast quantities of text rapidly, even on complex topics. Historically, federal administrations from both parties have sought to integrate AI tools into their operations, an effort that has intensified in recent years. However, the DOT’s initiative appears to go further, with plans for LLMs to take on a more central role in decision-making, relegating human oversight to a mere monitoring function.

Concerns arise regarding the accuracy and reliability of AI-generated outputs. LLMs are known for producing errors, including a phenomenon referred to as “hallucination,” where the model generates false information. Furthermore, these systems may inadvertently amplify biases present in their training data, leading to potentially flawed policy outcomes. The risk of erroneous regulations is particularly alarming in high-stakes areas such as transportation safety, where incorrect decisions could have serious consequences.

Critics argue that the DOT’s strategy raises fundamental questions about the role of human expertise in the regulatory process. Federal law imposes obligations on agencies to consider relevant factors and provide thorough analyses for their decisions. Relying heavily on LLM outputs risks undermining these standards, as the models may not adequately address the nuances and complexities inherent in regulatory issues. Courts have consistently maintained that ultimate responsibility for policy decisions rests with the agency, not with automated systems.

Amid these concerns, the statement by DOT’s general counsel that LLM-generated rules do not need to be “perfect” or even “very good” raises alarms about the agency’s commitment to regulatory integrity. This approach suggests a troubling trend toward prioritizing expediency over sound governance, potentially jeopardizing public safety and legal compliance.

Advocates and legal experts are likely to monitor DOT’s forthcoming rule proposals closely, scrutinizing them for errors that could invite judicial challenges under the arbitrary and capricious standard. There is also a growing call for transparency regarding the use of AI in regulatory processes, with an emphasis on agencies disclosing when they have employed LLMs in drafting regulations.

The ProPublica report serves as a critical reminder of the potential pitfalls of integrating AI into regulatory frameworks without sufficient oversight. As the DOT embarks on this ambitious project, the broader implications for governance, legal accountability, and public safety remain significant points of contention. The ongoing dialogue surrounding these issues will be crucial as federal agencies navigate the complexities of modern technology and its role in shaping policy.

Written by AiPressa Staff


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.