AI Government

GSA’s AI Contract Policy Sparks Backlash from Tech Giants Over Compliance Concerns

GSA’s proposed policy requiring AI vendors like Google and Amazon to grant federal agencies irrevocable licenses raises compliance concerns, potentially limiting access to advanced technologies.

The technology industry is voicing significant concerns over a proposed General Services Administration (GSA) policy aimed at standardizing artificial intelligence (AI) contracting terms. Issued last month, the draft guidance could conflict with existing federal acquisition rules and discourage vendor participation in government contracts.

The proposed guidelines would require AI vendors to grant federal agencies an “irrevocable, royalty-free, non-exclusive license” to use their systems for the duration of any contract. If adopted, the guidelines would permit agencies to integrate AI technology into current government systems “as necessary for any lawful government purpose.”

The Alliance for Digital Innovation (ADI), which represents major players including Amazon Web Services, Google, Salesforce, Zscaler, and Palantir, has argued that the proposal introduces significant contracting challenges. ADI warns that the policy could compel vendors to develop separate, government-exclusive versions of their products, stating that “the clause would require contractors to build and maintain a parallel, Government-only product distinct from their commercial product.” This, they caution, risks turning standard commercial procurements into bespoke development projects.

In their comments, ADI indicated that numerous provisions within the draft would impose compliance burdens “that are difficult, if not impossible to reconcile” with the manner in which commercial AI products are constructed and delivered. The group expressed concern that these requirements could disproportionately affect smaller and emerging AI companies that may lack the resources needed to modify their offerings for government use, possibly limiting access to cutting-edge technologies.

The Software & Information Industry Association (SIIA), which includes members such as Amazon, Anthropic, Google, and Oracle, echoed these sentiments. They warned that the GSA risks creating an environment where the most advanced AI solutions may no longer be accessible to federal agencies. SIIA highlighted potential conflicts between GSA’s proposal and the Federal Acquisition Regulation (FAR), stating that the clause raises issues surrounding intellectual property and could impose data governance and supply chain restrictions.

SIIA further noted that the limited scope for negotiation could compel companies to relinquish vital commercial protections, potentially jeopardizing the viability of their AI products. Such conflicts, according to SIIA, are “incompatible with the shared infrastructure and global innovation models essential to modern commercial AI operations.”

Beyond the licensing terms, the GSA’s proposal mandates that AI systems used by the federal government prioritize “historical accuracy, scientific inquiry, and objectivity” while remaining neutral and nonpartisan. Under the draft guidelines, systems would be subjected to automated federal evaluations for bias, truthfulness, safety, and ideological content, with vendors potentially liable for decommissioning costs if their systems fail these assessments.

ADI pointed out that several of these requirements are difficult to operationalize, citing undefined terms such as “ideological dogmas” and what it called unrealistic expectations for model accuracy. The group advocated a shift from strict “truthfulness” standards toward a “reasonable efforts” framework, arguing that rigid requirements fail to account for the probabilistic nature of generative AI systems.

Similarly, SIIA called for a more collaborative evaluation method, suggesting upfront benchmarking of models against government standards followed by shared results and joint improvements. To alleviate concerns, ADI urged the GSA to align its guidelines with the National Institute of Standards and Technology (NIST) AI Risk Management Framework, clarify evaluation criteria, and limit vendor liability for system performance.

“ADI and its member companies stand ready to engage in further dialogue to develop workable solutions that protect Government interests while preserving Contractors’ ability to deliver innovative, high-quality AI services at scale,” the organization stated. SIIA expressed its commitment to collaborating with the GSA to establish a framework that ensures AI systems are secure and trustworthy, while maintaining a focus on the commercial-first mandate that has historically fueled American technological leadership.

The GSA’s proposed changes follow a dispute between the Department of Defense and Anthropic, which declined to ease safeguards against the use of its technology for applications like fully autonomous weapons systems or mass domestic surveillance. The situation escalated when President Donald Trump barred federal agencies from utilizing Anthropic AI tools, prompting the GSA to introduce these guidelines shortly thereafter.

Written By Staff


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.