In a significant move toward integrating artificial intelligence into federal operations, the General Services Administration (GSA) has facilitated contracts that enable 43 government agencies to access advanced AI tools from major commercial providers. Laura Stanton, deputy commissioner of the Federal Acquisition Service at GSA, announced at ACT-IAC’s Executive Leadership Conference that agencies can now secure enterprise licenses for cutting-edge large language models (LLMs) for $1 or less, making previously cost-prohibitive technologies accessible for testing and exploration.
As more agencies embrace these AI solutions, they must adapt their procurement processes. A new memo from the Office of Management and Budget (OMB) gives agencies 90 days, until March 11, to update their acquisition policies to ensure that the LLMs they procure are both truthful and ideologically neutral. In the December 11 memo, OMB Director Russ Vought directed that LLMs prioritize historical accuracy and objectivity, emphasizing that these models should not be used as tools for ideological manipulation.
The memo fulfills a requirement set forth in the July executive order aimed at preventing perceived ideological bias in AI applications across the federal government. It instructs agencies to apply these principles not only to new contracts but also to existing agreements, reinforcing the need for transparency in the procurement process. Agencies are now tasked with obtaining sufficient information from vendors to demonstrate compliance with these unbiased AI principles.
OMB clarified that, when acquiring AI technology, agencies are not required to obtain sensitive technical details from vendors, such as model weights, but must seek adequate documentation to assess vendors’ risk management practices. The level of information available will depend on the vendor’s position in the supply chain and their relationship with the LLM developer, necessitating careful consideration during procurement.
Jose Arrieta, founder and CEO of Imagineeer and a former federal acquisition executive, emphasized that the memo is designed to operationalize AI within federal agencies. He described it as enabling rather than restrictive, highlighting that it introduces a framework requiring truth, accountability, and proper governance in AI contracts. “It creates a structure that rewards disciplined AI platforms, especially because it is grounded in enforceable governance,” he noted.
The OMB memo specifies that agencies must request four distinct data sets from vendors, including acceptable use policies and information pertinent to the specific LLM. This requirement aims to establish a minimum transparency threshold, thereby assisting agencies in evaluating and awarding contracts based on measurable standards.
Arrieta underscored the importance of the memo in empowering acquisition professionals to ask vendors more rigorous questions regarding transparency and model provenance. He pointed out that this guidance is not merely a policy statement but an operational directive aimed at enhancing federal procurement practices for AI technologies.
Despite the clarity provided by the memo, questions remain about the effective implementation of its guidelines. Arrieta mentioned the need for new acquisition templates and a vendor due diligence playbook to navigate complex scenarios in AI procurement. He stressed that the absence of explicit audit policies raises challenges in evaluating AI models, suggesting that practical frameworks must be developed to address these concerns. “The memo leaves space for that thinking, which is good, but someone needs to lead by creating playbooks,” he said.
As federal agencies prepare to integrate advanced AI solutions into their operations, the OMB’s directives offer a potential pathway for responsible deployment. The success of these initiatives will depend on how agencies adapt their procurement strategies in response to the evolving landscape of AI technology. The broader implications of this move could set new standards for transparency and accountability in federal technology acquisitions, shaping the future of AI use in government.