
AI Government

GSA’s AI Contract Policy Sparks Backlash from Tech Giants Over Compliance Concerns

GSA’s proposed policy requiring AI vendors like Google and Amazon to grant federal agencies irrevocable licenses raises compliance concerns, potentially limiting access to advanced technologies.

The technology industry is voicing significant concerns over a proposed policy from the General Services Administration (GSA) aimed at standardizing artificial intelligence (AI) contracting terms. Issued last month, the draft guidance could conflict with existing federal acquisition rules and discourage vendors from pursuing government contracts.

The proposed guidelines would require AI vendors to grant federal agencies an “irrevocable, royalty-free, non-exclusive license” to use their systems for the duration of any contract. If adopted, the guidelines would permit agencies to integrate AI technology into existing government systems “as necessary for any lawful government purpose.”

The Alliance for Digital Innovation (ADI), which represents major players including Amazon Web Services, Google, Salesforce, Zscaler, and Palantir, has argued that the proposal introduces significant contracting challenges. ADI warns that the policy could compel vendors to develop separate, government-exclusive versions of their products, stating that “the clause would require contractors to build and maintain a parallel, Government-only product distinct from their commercial product.” This, they caution, risks turning standard commercial procurements into bespoke development projects.

In its comments, ADI said numerous provisions in the draft would impose compliance burdens “that are difficult, if not impossible to reconcile” with how commercial AI products are built and delivered. The group warned that these requirements could disproportionately affect smaller and emerging AI companies that lack the resources to modify their offerings for government use, potentially cutting agencies off from cutting-edge technologies.

The Software & Information Industry Association (SIIA), which includes members such as Amazon, Anthropic, Google, and Oracle, echoed these sentiments. They warned that the GSA risks creating an environment where the most advanced AI solutions may no longer be accessible to federal agencies. SIIA highlighted potential conflicts between GSA’s proposal and the Federal Acquisition Regulation (FAR), stating that the clause raises issues surrounding intellectual property and could impose data governance and supply chain restrictions.

SIIA further noted that the limited scope for negotiation could compel companies to relinquish vital commercial protections, potentially jeopardizing the viability of their AI products. Such conflicts, according to SIIA, are “incompatible with the shared infrastructure and global innovation models essential to modern commercial AI operations.”

Beyond the licensing terms, the GSA’s proposal mandates that AI systems used by the federal government prioritize “historical accuracy, scientific inquiry, and objectivity” while remaining neutral and nonpartisan. Under the draft guidelines, systems would be subjected to automated federal evaluations for bias, truthfulness, safety, and ideological content, with vendors potentially liable for decommissioning costs if their systems fail these assessments.

ADI argued that several of these requirements would be difficult to operationalize, citing undefined terms such as “ideological dogmas” and what it called unrealistic expectations for model accuracy. The group advocated replacing the strict “truthfulness” standard with a “reasonable efforts” framework, arguing that rigid requirements fail to account for the probabilistic nature of generative AI systems.

Similarly, SIIA called for a more collaborative evaluation method, suggesting upfront benchmarking of models against government standards followed by shared results and joint improvements. To alleviate concerns, ADI urged the GSA to align its guidelines with the National Institute of Standards and Technology (NIST) AI Risk Management Framework, clarify evaluation criteria, and limit vendor liability for system performance.

“ADI and its member companies stand ready to engage in further dialogue to develop workable solutions that protect Government interests while preserving Contractors’ ability to deliver innovative, high-quality AI services at scale,” the organization stated. SIIA expressed its commitment to collaborating with the GSA to establish a framework that ensures AI systems are secure and trustworthy, while maintaining a focus on the commercial-first mandate that has historically fueled American technological leadership.

The GSA’s proposed changes follow a dispute between the Department of Defense and Anthropic, which declined to ease safeguards against the use of its technology for applications like fully autonomous weapons systems or mass domestic surveillance. The situation escalated when President Donald Trump barred federal agencies from using Anthropic AI tools, and the GSA introduced these guidelines shortly thereafter.

Written by AiPressa Staff

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.