The federal government has pivoted toward accelerating the adoption of artificial intelligence (AI) while deferring governance discussions, a stance formalized in the Pentagon’s January AI Strategy and the administration’s AI Action Plan. The Pentagon’s strategy held that the risks of moving too slowly outweigh those of “imperfect alignment,” calling for models available for “any lawful use” and free of significant policy restrictions. Against that backdrop, the General Services Administration (GSA) has proposed a new contract clause, GSAR 552.239-7001, aimed at governance gaps in federal AI procurement. Open for public comment until March 20, the clause covers everything from data control to ideological output requirements and asserts precedence over conflicting contractor policies.
This shift toward governance is noteworthy, especially after months of prioritizing speed over structure. The GSA, which handles federal civilian purchasing, typically channels AI acquisitions through programs like the Multiple Award Schedule, which relies on customary commercial terms. The proposed clause departs from those norms, resembling conditions seen more often in defense contracts than in commercial markets.
The clause pursues several objectives at once: managing government-linked data, prohibiting vendor-imposed usage restrictions, ensuring portability, and imposing conditions that favor American sourcing. While some of these aims are overdue, the clause’s breadth raises questions about how it will work in practice within commercial buying programs. Packing so many competing agendas into a single clause could further complicate a procurement process that has historically offered little transparency or oversight in AI acquisitions.
One prominent feature of the clause is its expansive definition of “Government Data,” which bars vendors from using government data to train AI systems outside the scope of the contract, including for marketing or other commercial purposes. Although the intent is to keep vendors from exploiting sensitive information, the breadth of the definition risks stifling legitimate improvements to AI systems. Critics liken it to inviting a chef into a kitchen and then forbidding the chef from remembering any recipe learned there, a restriction that could hamper innovation.
Balancing Control and Governance
The clause also reaches into how vendors govern their own AI systems. It bars vendors from refusing to produce outputs on the basis of their own discretionary policies, a requirement that could conflict with existing safety protocols. In many cases a model’s refusal to answer a query reflects concerns about the model’s reliability, but the clause does not draw a line between necessary safeguards and prohibited restrictions.
Another complication is the clause’s reach into the supply chain. By extending compliance obligations to upstream providers, it burdens prime contractors, who may have no practical means of verifying compliance across every service provider in the chain. That gap could expose primes to significant risk under the False Claims Act, making them the primary target for compliance-related penalties.
The ownership provisions further complicate matters. The government asserts ownership of all “Government Data” and of anything developed through its use. While it is reasonable for the government to restrict how vendors use its data, such sweeping ownership claims could deter major AI companies from federal contracting, given how small these contracts are relative to their overall business.
The clause’s American-sourcing requirements may also create new friction in an industry built on global collaboration and layered models. It remains unclear how “American AI Systems” would be defined when components may depend on international labor or data processing.
While the GSA’s draft is designed to ensure accountability and transparency, it risks conflating governance with control. Oversight this strict may ultimately hinder effective procurement. Policymakers face the ongoing challenge of balancing legitimate concerns over data security and supplier reliability against the need for a flexible, responsive procurement framework.
As discussions around the GSA’s proposed clause continue, the implications for how the federal government acquires AI technology remain significant. A careful approach is essential to avoid overreaching governance that could hamper innovation while still addressing the pressing need for oversight in an increasingly complex technological landscape.