The U.S. Environmental Protection Agency (EPA) is advancing its strategy to integrate artificial intelligence (AI) into its regulatory operations, a key initiative stemming from the Trump administration’s broader push for federal efficiency. Despite the agency’s ambitious plans, implementation has lagged, producing a mixed record of progress. The EPA has identified multiple use cases for AI that illustrate its potential to reshape agency workflows, though many applications remain in preliminary stages.
The Trump administration has placed significant emphasis on the adoption of AI, directing federal agencies to develop strategies to harness the technology. In early 2025, the Office of Science and Technology Policy (OSTP) and the Office of Management and Budget (OMB) issued directives requiring agencies to outline their AI approaches and seek public input. This culminated in guidance formalized through OMB Memoranda M-25-21 and M-25-22, finalized in April 2025. In March 2026, the White House released its National Policy Framework on Artificial Intelligence, advocating a model that prioritizes innovation over regulation and seeks to centralize federal oversight of AI applications.
In response to the OMB memoranda, the EPA released its AI Compliance Plan and AI Strategy in October 2025, which highlight various potential applications of AI while imposing necessary controls. The agency has articulated a vision of its workload as particularly amenable to AI integration, identifying 18 distinct use cases in its Final AI Strategy. These include efforts such as using AI to screen scientific studies for quality and facilitating the evaluation of pesticide applications through automated summaries.
However, an examination of the EPA’s reported AI Use Case Inventory for 2025, published in early 2026, reveals that the actual deployment of AI technologies is largely aspirational. The inventory, which includes 82 items, reflects a range of applications from deployed and pilot programs to retired use cases. Notably, many current applications are mundane, involving routine tasks like scheduling and document comparison, with only a limited number classified as high-impact. Specifically, the EPA has recognized just one deployed and one pre-deployment use case as high-impact, with another “presumed high-impact” but not yet evaluated as such.
The sole deployed high-impact use case, related to the Resource Conservation and Recovery Act (RCRA), leverages AI to prioritize inspections of Large Quantity Hazardous Waste Generators, yielding benefits such as reduced staff workload and improved identification of potential violators. The pre-deployment high-impact application focuses on the agency’s lead abatement initiatives, utilizing AI to analyze documents related to environmental compliance.
Additionally, the “presumed high-impact” use case involves an AI tool known as BriefCam, which assists in reviewing surveillance footage for law enforcement investigations. Although these applications hold significant promise, many of the EPA’s AI initiatives remain in exploratory phases, lacking the robust implementation necessary for high-impact regulatory outcomes.
Other noteworthy applications of AI at the EPA, while not classified as high-impact, could have direct regulatory implications. A pilot project in the agency’s Region 8 uses generative AI to summarize public comments, albeit not as a principal basis for decision-making. Further, pre-deployment AI tools aim to enhance data extraction from pesticide registration documents and to process public comments on proposed rules. The agency has also deployed machine learning to rank scientific literature related to the Clean Air Act.
The intersection of AI and regulatory processes has drawn scrutiny within the legal community. A symposium published by the Yale Journal on Regulation in February 2026 examined the potential advantages and challenges of AI’s role in regulatory decision-making. Key discussions included the necessity for agencies to disclose algorithmic details when relying on AI for rulemaking, as well as the implications of keeping “a human in the loop” for oversight, as it pertains to compliance with the Administrative Procedure Act (APA).
As the EPA explores the integration of AI into its regulatory framework, stakeholders are advised to remain vigilant. The agency’s movement toward AI-assisted rulemaking and enforcement could invite legal scrutiny, particularly related to the adequacy of oversight and compliance with established administrative standards. With AI technology evolving rapidly, the potential for its use in decision-making processes underscores the importance of aligning legal frameworks to ensure accountability and transparency.