
GSA's New AI Clause: Major Changes for AI Procurement

The federal government’s proposed AI procurement clause could force every company on a GSA Schedule to choose between its commercial AI terms and its government contracts. On March 6, 2026, GSA published “Basic Safeguarding of Artificial Intelligence Systems,” a clause that would rewrite the terms under which AI is sold to, or used in performance of work for, the federal government. The clause overrides commercial terms of service, claims government ownership of custom developments, restricts AI sourcing to American-made systems, and prohibits vendors from maintaining safety restrictions.

The reach extends well beyond AI-native companies. Any contractor that uses an AI-powered analytics platform, a machine learning-based cybersecurity tool, or even a coding assistant during contract performance could be affected. The terms are take-it-or-leave-it: Multiple Award Schedule (MAS) holders get 60 days to accept or lose schedule access. And OMB has declared compliance “material to contract eligibility and payment” — the phrase that could trigger False Claims Act liability, with treble damages and per-claim penalties for every non-compliant invoice.[1]

Who does the GSA AI clause apply to?

The clause applies to “all solicitations and contracts” in which AI capabilities are either provided to the government or used by the contractor as part of performance. That second prong captures not only companies selling AI products to agencies but also contractors who use AI tools internally during contract performance: an analytics platform with an AI feature, a document management system with AI-powered search, a cybersecurity tool using machine learning for threat detection.[2]

The clause also reaches upstream. It defines “Service Provider” as any entity that “directly or indirectly provides, operates, or licenses an AI system” to the contractor, even if that entity has no direct government contract. Contractors are responsible for their service providers’ compliance.

Consider a practical example: a 200-person company selling a document analysis tool to the VA through its GSA Schedule is certifying that its upstream AI provider complies with every provision of the clause, even though the contractor’s relationship with that provider is governed by a non-negotiable click-through API agreement.[3]

What does the GSA AI clause do?

The full text of the proposed clause runs to nine categories of requirements, and contractors should review it in its entirety. Four provisions are potentially the most impactful:

Vendor safety policies are overridden

The contractor grants the government an “irrevocable, royalty-free, non-exclusive license” to use the AI system “for any lawful Government purpose.” An order-of-precedence provision makes GSAR 552.239-7001 supreme over any conflicting commercial terms, including the contractor’s or service provider’s terms of service.

Two provisions capture the clause’s reach:

“The AI System must not refuse to produce data outputs or conduct analyses based on the Contractor’s or Service Provider’s discretionary policies.”

“In the event of a conflict between this document and any policies, requirements, terms, conditions, or commercial agreements of the quote, the Contractor, or the Service Provider, this clause controls.”

If your terms of service disagree, the clause wins. This reverses GSA’s decade-long effort to accommodate commercial software terms. The connection to the supply chain risk designation is direct: the Pentagon demanded a vendor accept “any lawful use” terms without safety restrictions; the vendor refused; the government designated it a supply chain risk and pulled it from the MAS. The GSA clause generalizes that demand into a standard contract term for every AI vendor.

The anti-refusal mandate does not require retraining models. It targets the operational safety layer: system prompts, trust-and-safety filters, and use-case restrictions.

The tension is immediate: the clause simultaneously requires “truthful, trustworthy” outputs under its Unbiased AI Principles provision. Many refusal behaviors exist precisely because the AI provider identified domains where the model cannot answer reliably. Removing those refusals to satisfy the anti-refusal mandate may violate the truthfulness requirement. The clause does not acknowledge this tension.

The “American AI” requirement has no workable definition

Contractors must “use only American AI Systems,” defined as AI systems “developed and produced in the United States.” The clause prohibits “AI components manufactured, developed, or controlled by non-U.S. entities.”

Drawing this line may prove unworkable. A typical AI system involves models developed by a U.S. company, training data labeled in Kenya, open-source components from global research labs, chips designed in California and fabricated in Taiwan, and servers in Virginia. The clause does not define what “developed and produced” means when these elements span multiple countries.

IP provisions go far beyond standard government data rights

The government owns all data inputs, outputs, and “Custom Developments,” a term defined broadly to include modifications, customizations, configurations, enhancements, and associated workflows. Contractors and service providers are prohibited from using government data to train or improve AI models, inform business decisions, or retain data beyond contract scope.

This goes far beyond standard government data rights. Under the FAR framework (FAR 52.227-14, DFARS 252.227-7014), the government receives unlimited rights only in data produced exclusively at government expense. The GSA clause would depart from that principle. Any IP the contractor acquires in improvements, enhancements, feedback, or derivative works tied to government data would be automatically assigned to the government at creation, regardless of funding source. No standard FAR or DFARS clause claims ownership of vendor “feedback.”

For AI companies whose federal revenue is a fraction of total revenue, these terms may make the GSA channel uneconomic. If the leading providers exit, the government procures from smaller, less capable vendors — the opposite of the policy goal.

The clause also requires broad disclosure of potential trade secrets: within 30 days of contract award, contractors must disclose all AI systems used in performance, including model training methods, system limitations, and whether models were modified to comply with non-U.S. regulatory frameworks.

The clause imposes “eyes off” data handling as well: human review of government data is restricted to “strictly necessary” instances with all access logged, and all data must be securely deleted with written certification upon contract completion.

The “Unbiased AI” mandate creates tension with its own truthfulness requirement

Tracking Executive Order 14319, the clause requires “commercial efforts” to ensure AI systems are consistent with “Unbiased AI Principles.” Outputs must be “truthful,” must “prioritize historical accuracy, scientific inquiry, and objectivity,” and must “acknowledge uncertainty.” Contractors “must not intentionally encode partisan or ideological judgments” into outputs. The clause identifies “Diversity, Equity, Inclusion” as examples of prohibited “ideological dogmas.”

What counts as “ideological content” versus a legitimate safety measure is subjective, and the clause provides no standard. Companies that built demographic bias testing into their AI governance programs — as state deployer statutes increasingly require — may find those programs recharacterized as prohibited ideological content.

These provisions are enforced through undisclosed government benchmarking. The government may assess AI systems at any time for bias, truthfulness, safety, and “unsolicited ideological content” using its own benchmarks, without disclosing the data, methodologies, or evaluation criteria. If the government identifies noncompliance, it can suspend use of the AI system until “performance issues are satisfactorily addressed.” The vendor has no mechanism to challenge the methodology or see the test results before the government acts.

Separately, the clause requires 72-hour incident reporting to CISA without defining what constitutes a reportable “incident.”

Risks of Non-Compliance

The clause does not mention the False Claims Act. But the compliance architecture could make FCA liability difficult to avoid. OMB M-26-04 declares compliance “material to contract eligibility and payment.”[4] That phrasing is deliberate. Under Universal Health Services v. Escobar, 579 U.S. 176 (2016), what matters for FCA materiality is whether the government would refuse to pay if it knew of noncompliance. OMB’s language appears designed to establish exactly that predicate.[5]

Every non-compliant invoice — including invoices where an upstream provider is non-compliant — could be a separate false claim. The FCA’s qui tam mechanism generated 1,297 new cases in fiscal year 2025. Any employee who knows about noncompliance can file.
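To make the arithmetic concrete, the sketch below computes hypothetical exposure for a run of non-compliant invoices. The invoice counts, invoice values, and the per-claim penalty figure are all illustrative assumptions — actual FCA per-claim penalties are set by statute and adjusted for inflation annually, so check the current DOJ figures before relying on any number here.

```python
# Back-of-the-envelope FCA exposure math. The per-claim penalty
# below is a placeholder assumption, not the current statutory
# amount -- penalties are inflation-adjusted annually.
def fca_exposure(invoices: int, avg_invoice: float,
                 per_claim_penalty: float = 15_000.0) -> float:
    """Treble damages plus a per-claim penalty for each invoice,
    treating each non-compliant invoice as a separate false claim."""
    treble_damages = 3 * invoices * avg_invoice
    penalties = invoices * per_claim_penalty
    return treble_damages + penalties

# A contractor submitting 24 monthly invoices of $50,000 each:
# 3 * 24 * 50,000 = $3.6M trebled damages + 24 * $15,000 penalties
print(fca_exposure(24, 50_000.0))  # 3960000.0
```

The point the sketch makes is structural, not numerical: because each invoice is a separate claim, exposure scales with billing frequency, not just contract value.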

The structural problem: the clause requires certifications that contractors may have no practical way to verify. A prime contractor certifying that its upstream AI provider complies with the American AI requirement, the anti-refusal mandate, and the IP assignment terms may have no practical way to confirm any of it through a standard API agreement. The certification gets filed anyway. Once filed, the FCA applies.

The clause creates tension with EU, state, and federal cybersecurity requirements

The clause was drafted as if companies operate exclusively in the federal procurement market.

The EU AI Act requires providers of high-risk AI systems to implement safeguards preventing harmful outputs. The GSA clause prohibits those same safeguards as “discretionary policies.” A company selling the same AI system in both markets may find simultaneous compliance difficult or impossible.[6]

Colorado’s AI Act requires deployers to avoid algorithmic discrimination. The GSA clause treats demographic bias mitigation as potentially prohibited “ideological” content. The clause does not preempt state law — executive orders may not supply the “purposes and objectives of Congress” required for obstacle preemption. But the clause could achieve functional preemption through contracting power: GSA can condition MAS eligibility on abandoning state-law-required bias programs without ever winning a preemption case. A 36-state coalition of state attorneys general has organized to resist, and until legislation or litigation resolves the question, state laws remain operative alongside the clause.[7]

The clause also adds AI-specific data and IP provisions without clarifying how they interact with FAR 52.204-21, DFARS 252.204-7012 (CMMC), or the forthcoming NDAA Section 1513 “CMMC for AI” requirements covering source code, model weights, and training methods.[8]

What should companies do now?

The comment period has closed. Companies with federal AI exposure may wish to consider three steps before Refresh 31 issues.

Map your AI stack. Most contractors do not have a complete picture of the AI components touching their government work. The foundation model API is the obvious one. The AI feature embedded in the analytics platform, the coding assistant the development team adopted six months ago, the cloud provider’s AI-powered logging — those take more work to surface. The exercise is worth doing now: each component you cannot trace to a certifiable upstream provider is a compliance gap, and the clause does not distinguish between AI you selected and AI you inherited.
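For engineering teams supporting that mapping exercise, one practical starting point is scanning dependency manifests for packages that indicate an AI component. The sketch below is a minimal illustration under stated assumptions: the watchlist names are hypothetical examples, not an authoritative registry, and a real inventory would also need to cover embedded SaaS features and cloud services that never appear in a manifest.

```python
# Minimal sketch of an AI-stack inventory pass over a Python
# requirements file. AI_WATCHLIST is an illustrative assumption --
# extend it with the SDKs and services your own contracts touch.
AI_WATCHLIST = {"openai", "anthropic", "transformers", "torch",
                "tensorflow", "langchain"}

def flag_ai_dependencies(requirements_text: str) -> list[str]:
    """Return watchlisted package names found in requirements text."""
    found = []
    for line in requirements_text.splitlines():
        # Strip version pins like "pkg==1.0" or "pkg>=1.0".
        name = line.split("==")[0].split(">=")[0].strip().lower()
        if name in AI_WATCHLIST:
            found.append(name)
    return sorted(found)

print(flag_ai_dependencies("requests==2.31\nopenai>=1.0\ntorch==2.2"))
# ['openai', 'torch']
```

Each hit is a candidate upstream “Service Provider” whose compliance the contractor would be certifying; each component that cannot be traced to a certifiable provider is a gap to document.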

Prepare for the 60-day window. Once Refresh 31 issues, MAS holders have 60 days to accept or walk away. The clause includes a tailoring mechanism — paragraph (j) — that allows bilateral revision of data and IP provisions at the order level. But negotiating under that mechanism requires knowing which provisions matter most to your business before the clock starts. Companies that work through their priority issues now will have real options when the window opens.

Assess your FCA exposure. Pull your current government contract certifications and check whether they still reflect how your organization actually uses AI. Many companies added AI capabilities to existing deliverables without updating their compliance posture. A focused review of those representations against the clause’s requirements — particularly the American AI, anti-refusal, and IP assignment provisions — can identify the gaps that carry the most risk and help prioritize where to act first.

What comes next

The clause did not emerge in a vacuum. GSA published it the same week the Department of War designated a major AI company a “supply chain risk” under 10 U.S.C. § 3252 for refusing to waive safety restrictions on its models for military use — the first such designation of an American company. The dispute centered on the same “any lawful use” demand the GSA clause now codifies as a standard contract term. The preliminary injunction hearing on March 24 will determine whether that designation survives judicial review.[9]

But regardless of that outcome, the GSA clause proceeds on independent authority. GSAR 552.239-7001 would be incorporated into all MAS contracts through Solicitation Refresh #31, and GSA released the clause through the MAS refresh process rather than traditional notice-and-comment rulemaking.[10] Under 41 U.S.C. § 1707, procurement policies with “a significant effect beyond the internal operating procedures of the agency” require 60-day Federal Register comment periods. GSA provided 14 days through its solicitation blog. That gap may support a procedural challenge.[11]

The clause is the latest step in a policy chain running from Executive Order 14179 through OMB M-25-21 and M-25-22, Executive Order 14319 (“Preventing Woke AI in the Federal Government”), and OMB M-26-04.[12] As we observed in The AI Enforcement Seesaw, the administration has paired deregulation in some areas with aggressive new mandates in others. The clause’s most aggressive provisions rest on the Procurement Act’s “economy and efficiency” standard, not on specific Congressional authorization. Three circuits have applied the major questions doctrine to presidential procurement orders, and the Supreme Court’s decisions in West Virginia v. EPA and Biden v. Nebraska suggest that agency actions of this significance face heightened scrutiny without clear statutory footing.[13]

Whether the clause survives contact with the courts, the EU, the states, and the insurance market is an open question. But the qui tam relator will file against the company holding the contract — not the one controlling the technology. We will continue to track the GSA clause, the supply chain risk litigation, and related federal AI procurement developments as they evolve. 
 


[1] GSA, “Proposed Government AI System Terms and Conditions,” MAS Solicitation Refresh #31, Solicitation No. 47QSMD20R0001 (Mar. 6, 2026), https://buy.gsa.gov/interact/system/files/GSA_Federal_Acquisition%20Service%20Proposed%20Government%20AI%20System%20Terms%20and%20Conditions.pdf.

[2] GSAR 552.239-7001 § (a).

[3] Id. § (b).

[4] OMB Memorandum M-26-04 at 4 (Dec. 11, 2025).

[5] See Universal Health Services, Inc. v. United States ex rel. Escobar, 579 U.S. 176, 194-95 (2016) (materiality is “demanding” and cannot be established by labeling alone); United States ex rel. Folliard v. Comstor Corp., 308 F. Supp. 3d 56 (D.D.C. 2018) (TAA noncompliance not material where government continued purchasing despite knowledge); cf. United States ex rel. Schutte v. SuperValu Inc., 598 U.S. 739 (2023) (FCA scienter measured by what defendant actually believed at time of claim).

[6] EU AI Act, Regulation (EU) 2024/1689, Art. 9.

[7] Colorado AI Act, S.B. 24-205 § 6-1-1703; NYC Local Law 144 of 2021.

[8] National Defense Authorization Act for Fiscal Year 2026, § 1513.

[9] See Anthropic, PBC v. Department of War et al., No. 3:26-cv-02197 (N.D. Cal. filed Mar. 9, 2026), Complaint; see also NPR, “Pentagon Labels AI Company Anthropic a Supply Chain Risk” (Mar. 6, 2026), https://www.npr.org/2026/03/06/g-s1-112713/pentagon-labels-ai-company-anthropic-a-supply-chain-risk.

[10] See supra note 1.

[11] See 41 U.S.C. § 1707(a) (requiring publication in Federal Register and providing that a procurement policy “may not take effect until 60 days after it is published for public comment” where it has “a significant effect beyond the internal operating procedures of the agency issuing the policy, regulation, procedure, or form”); see also GSA, “Advance Notice of MAS Refresh 31,” buy.gsa.gov (Mar. 6, 2026).

[12] Executive Order 14179, “Removing Barriers to American Leadership in Artificial Intelligence” (Jan. 23, 2025); OMB Memorandum M-25-22, “Driving Efficient Acquisition of Artificial Intelligence in Government” (Apr. 3, 2025); Executive Order 14319, “Preventing Woke AI in the Federal Government” (Jul. 23, 2025); OMB Memorandum M-26-04, “Increasing Public Trust in Artificial Intelligence Through Unbiased AI Principles” (Dec. 11, 2025).

[13] Anthropic, PBC v. Department of War et al., No. 3:26-cv-02197 (N.D. Cal. filed Mar. 9, 2026), Docket.