

California Eliminates the "Autonomous AI" Defense: What AB 316 Means for AI Deployers

In 1979, an IBM training manual offered a principle that now reads as prophecy: “A computer can never be held accountable, therefore a computer must never make a management decision.”1 Nearly five decades later, computers—now called AI—make consequential decisions constantly. California’s response to this reality took effect January 1, 2026: if AI cannot be held accountable, then the humans and organizations behind it will be expected to answer instead.

Assembly Bill 316 eliminates a defense that some AI developers have attempted to assert: that the AI system, not the company, caused the harm.2 The law reflects a straightforward corollary to IBM’s 1979 insight. If computers cannot be accountable, and if they are nonetheless making decisions, accountability shifts to those who built, modified, or deployed them.

What the Law Prohibits

AB 316 adds a single provision to California’s Civil Code: in any civil action against a defendant who “developed, modified, or used” an AI system alleged to have caused harm, the defendant may not assert as a defense that “the artificial intelligence autonomously caused the harm.”3

The prohibition is narrow but pointed. It forecloses arguments that an AI system’s independent decision-making absolves the humans and organizations behind it. The provision responds to instances where defendants have attempted to characterize AI tools as separate actors—most notably, Air Canada’s unsuccessful argument that its customer service chatbot was a “separate legal entity” when it provided inaccurate bereavement fare information to a passenger.4

What Remains Available

The law explicitly preserves other defenses. Defendants may still present evidence relevant to causation, foreseeability, and comparative fault.5 A plaintiff must still establish that the AI system caused the alleged harm—and that the harm was a foreseeable consequence of the defendant’s conduct.

This distinction matters. AB 316 does not create strict liability for AI-related harms. It removes one specific argument—“the AI did it autonomously”—while leaving intact the traditional framework for establishing (or contesting) liability. Companies that implement reasonable safeguards, conduct appropriate testing, and maintain adequate documentation retain the ability to demonstrate they acted appropriately.

The Supply Chain Question

AB 316 applies to anyone who “developed, modified, or used” an AI system. This language encompasses the entire AI supply chain: the foundation model developer, the company that fine-tunes or customizes the model, the integrator that builds it into a product, and the enterprise that deploys it.

For organizations using third-party AI tools, this breadth has contract implications. Indemnification provisions, limitation of liability clauses, and warranty terms may warrant review. If an AI vendor’s tool causes harm and the downstream deployer faces suit, AB 316 prevents the deployer from arguing that the AI acted on its own—even if the deployer had limited visibility into how the model operates.

Organizations may wish to consider how their vendor agreements allocate risk in this environment. Key questions include whether the vendor provides adequate documentation of system behavior, what indemnification obligations exist for AI-related claims, and how limitation of liability provisions interact with the inability to assert an autonomous-harm defense.

Documentation as Defense

If the “AI did it” argument is unavailable, the remaining defenses—causation, foreseeability, comparative fault—become more important. Demonstrating reasonable care in AI deployment may require documentation that many organizations do not currently maintain: records of testing and validation, monitoring logs showing system performance, evidence of human oversight in high-stakes decisions, and audit trails for model updates.

This documentation serves dual purposes. It supports compliance with emerging AI transparency requirements, including California’s other 2026 AI legislation. And it provides evidence that may be relevant if the organization’s AI deployment is later challenged.

Looking Ahead

IBM’s 1979 guidance assumed organizations would keep computers out of consequential decisions. That assumption no longer holds. California’s AB 316 accepts that AI systems make decisions—and ensures that accountability follows.

Texas’s Responsible AI Governance Act, effective the same day as AB 316, takes a different approach—emphasizing intentional misconduct over impact-based liability—but both reflect legislative attention to AI accountability. The autonomous-harm defense was always legally questionable. AB 316 makes explicit what courts were likely to conclude: organizations cannot escape responsibility for their technology by attributing harm to the technology’s independence.

For in-house counsel, the practical response is the same work that supports AI governance more broadly—vendor due diligence, deployment documentation, and clear allocation of responsibility in contracts. The 1979 principle endures, just inverted: because a computer can never be held accountable, those who deploy it may find that they are.

1. IBM, Internal Training Materials (1979), reproduced in Simon Willison, A Computer Can Never Be Held Accountable, simonwillison.net (Feb. 3, 2025), https://simonwillison.net/2025/Feb/3/a-computer-can-never-be-held-accountable/.

2. Cal. Civ. Code § 1714.46, added by A.B. 316, 2025-2026 Reg. Sess. (Cal. 2025), https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202520260AB316.

3. Id., subd. (a).

4. Moffatt v. Air Canada, 2024 BCCRT 149 (B.C. Civ. Res. Trib. Feb. 14, 2024). The tribunal rejected Air Canada’s argument and held the airline responsible for its chatbot’s representations.

5. Cal. Civ. Code § 1714.46, subd. (b).
