State legislatures are regulating what AI chatbots say. Plaintiffs’ lawyers are suing over what AI chatbots collect. Most compliance teams are only watching one front.
In the first weeks of 2026, 78 chatbot-related bills have been filed across 27 states.[1] In the two months since California’s companion chatbot law took effect on January 1, 2026, at least six additional states have advanced chatbot legislation past committee or through a full chamber.[2] Meanwhile, an analysis of 284 deployer-facing AI litigation matters reveals that chatbot wiretap lawsuits (claims under the Electronic Communications Privacy Act and state wiretap statutes) have grown from 2 matters in 2021 to 30 in 2025, making chatbot wiretap the fastest-growing category of deployer-facing AI litigation.[3]
Three regulatory models are emerging
Not all chatbot regulation looks the same. Across the states with the most advanced bills, three distinct approaches are taking shape.
Disclosure-first. California’s SB 243 requires operators to tell users they are interacting with AI, to provide periodic break reminders to minors, and to take “reasonable measures” to prevent harmful content. The obligation is procedural: implement the protocols, document compliance, and the private-right-of-action risk ($1,000 per violation) is manageable. Washington’s SB 5984, which passed the Senate, follows a similar framework but with tighter disclosure intervals (hourly for minors) and treble damages up to $25,000.
Use-restriction. New York’s S9051 goes further by enumerating specific chatbot behaviors that are prohibited when the user is a minor: no personal pronouns, no expressing personal opinions, no simulating emotional relationships, no prioritizing flattery over safety.[4] The compliance challenge is technical. These restrictions require modifying how the system generates outputs for minor users, not just adding disclosure layers.
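In engineering terms, a use-restriction regime becomes a per-user generation policy rather than a UI banner. A minimal sketch of one way a deployer might layer S9051-style prohibitions onto a system prompt for minor users (the rule wording paraphrases the bill’s enumerated restrictions; the prompt mechanics are illustrative, not anything the statute prescribes):

```python
# Hypothetical encoding of use-restriction rules as generation
# constraints applied only when the user is a minor. The rule list
# paraphrases N.Y. S9051's enumerated prohibitions; the prompt
# wording itself is illustrative.

MINOR_MODE_RULES = [
    "Do not refer to yourself with personal pronouns.",
    "Do not express personal opinions.",
    "Do not simulate an emotional relationship with the user.",
    "Do not prioritize flattery or engagement over user safety.",
]

def build_system_prompt(base_prompt: str, user_is_minor: bool) -> str:
    """Layer use-restriction constraints onto the base system prompt
    for minor users; adult users get the base prompt unchanged."""
    if not user_is_minor:
        return base_prompt
    return base_prompt + "\n\nRestrictions:\n" + "\n".join(
        f"- {rule}" for rule in MINOR_MODE_RULES
    )
```

Prompt-level constraints alone are unlikely to satisfy a regulator; in practice they would be paired with output filtering and testing, which is why the compliance challenge is technical rather than procedural.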
Criminal prohibition. Tennessee’s SB 1493 creates a Class A felony (15 to 60 years imprisonment) for knowingly training AI to encourage suicide or simulate human emotional relationships.[5] The Senate also unanimously passed SB 1580, prohibiting AI systems from representing themselves as qualified mental health professionals, making Tennessee the first state to enact a standalone AI mental health prohibition.[6]
Oregon’s SB 1546, which advanced to the Senate floor on February 12,[7] goes further than any other state in two respects: it requires mandatory conversation interruption (not just “protocols”) when a chatbot detects suicidal ideation,[8] and it requires operators to file annual public health reports documenting crisis referrals with the Oregon Health Authority.[9] It also applies to all users, not just minors, making it the broadest bill in scope.
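Oregon’s interruption mandate is, at bottom, an engineering requirement: a detection gate that halts generation and substitutes a crisis referral, with referrals logged for the annual report. A minimal sketch under stated assumptions (the detection function, referral text, and logging shape are placeholders; SB 1546 specifies the outcome, not the method):

```python
# Hypothetical sketch of an SB 1546-style interruption gate.
# detect_crisis() stands in for whatever self-harm classifier a
# deployer actually uses; the statute mandates interruption and
# referral, not any particular detection technique.

CRISIS_REFERRAL = (
    "It sounds like you may be going through something serious. "
    "You can reach the 988 Suicide & Crisis Lifeline by calling "
    "or texting 988, any time."
)

def detect_crisis(message: str) -> bool:
    """Placeholder classifier; production systems use trained models,
    not keyword lists."""
    keywords = ("suicide", "kill myself", "end my life", "self-harm")
    return any(k in message.lower() for k in keywords)

def respond(user_message: str, generate) -> tuple[str, bool]:
    """Interrupt the conversation and refer to crisis resources when
    suicidal ideation is detected; otherwise generate normally.
    Returns (reply, referred) so referral events can be counted for
    the annual public health report."""
    if detect_crisis(user_message):
        return CRISIS_REFERRAL, True
    return generate(user_message), False
```

The design point is the return value: because the bill pairs interruption with public reporting, the referral event has to be captured as structured data, not just emitted as chat text.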
The pattern across these states is a shift from telling users about the AI to telling operators what the AI cannot do.
The litigation wave deployers are not tracking
The legislative risk is the one most compliance teams are watching. The litigation risk is already in court.
A Baker Botts analysis of 284 deployer-facing AI litigation matters filed between 2020 and 2025 identified chatbot wiretap claims as the fastest-growing category, outpacing even algorithmic discrimination.[10] The theory is straightforward: when a website deploys a chatbot that records or analyzes visitor communications without adequate consent, plaintiffs argue the chatbot functions as an unlawful wiretap under ECPA, the California Invasion of Privacy Act (CIPA), or analogous state statutes.
Chatbot wiretap filings grew steadily from 2021 through 2023, then exploded in 2025 with more new matters filed in a single year than in the prior four years combined. The claims target deployers, not developers: it is the company that put the chatbot on its website, not the vendor that built it, that faces the suit. Defendants in 2025 included healthcare companies, insurance providers, dental practices, and universities.[11]
This matters because the state chatbot bills and the wiretap suits target different things. The bills regulate what the chatbot says to users. The wiretap suits target what the chatbot collects from users. A company that satisfies every disclosure requirement in California’s SB 243 may still face a wiretap class action if its chatbot records visitor conversations without ECPA-compliant consent. The two compliance obligations are distinct.
Does the executive order preempt state chatbot laws?
The December 2025 executive order directs the Attorney General to establish a task force challenging state AI laws inconsistent with federal policy.[12] That intent is no longer theoretical. On February 12, the White House sent a letter to Utah legislators calling the state’s AI Transparency Act “an unfixable bill that goes against the Administration’s AI Agenda.”[13] It was the first known instance of the White House directly pressuring a state legislature on AI policy.
On March 11, 2026, two deadlines converge: the Secretary of Commerce must identify state AI laws the administration considers “burdensome,” and the FTC must issue a policy statement on when state laws may be preempted.[14] The Commerce report will be referred to the DOJ’s AI Litigation Task Force for potential legal action.
The executive order includes one critical carve-out: it expressly exempts child safety protections from the preemption framework.[15] Since nearly every state chatbot bill focuses on protecting minors, the carve-out likely shields the core provisions of most bills. A bipartisan coalition of 36 attorneys general has warned Congress that federal preemption would have “disastrous consequences,” and state opposition continues to harden.[16]
Bills with broader scope, like Oregon’s SB 1546 (which applies to all users), face more preemption uncertainty. But the executive order cannot itself preempt state law; only Congress or a federal court can do so.[17] The Senate stripped a 10-year moratorium on state AI regulation from the reconciliation bill by a vote of 99 to 1.[18] Until legal challenges are resolved, all state chatbot laws remain enforceable.
What should in-house counsel do now?
Companies deploying consumer-facing AI chatbots may wish to consider three priorities.
Audit your chatbot’s interaction with minors. Every bill in the current wave targets minor users. The most common gap is the absence of a documented process for identifying minors and applying differentiated safety protocols. Companies subject to COPPA’s recent update already have some infrastructure; others may wish to treat age-gating as a priority. Deployers operating nationally may wish to adopt the most restrictive disclosure standard (Washington’s hourly cadence for minors) as a baseline, and inventory which emotional engagement features would need to be disabled for minor users in use-restriction jurisdictions.
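One way to operationalize a most-restrictive baseline is to drive break-reminder cadence off a single jurisdiction table and always apply the tightest configured interval nationwide. A hypothetical sketch (Washington’s hourly cadence comes from SB 5984; the California interval is a placeholder, since SB 243 requires only “periodic” reminders without fixing a number):

```python
from datetime import timedelta

# Hypothetical most-restrictive-baseline cadence for minor break
# reminders. The WA entry reflects SB 5984's hourly requirement;
# the CA entry is an assumed placeholder for SB 243's "periodic"
# standard, not a statutory figure.

MINOR_REMINDER_INTERVALS = {
    "WA": timedelta(hours=1),   # SB 5984: hourly for minors
    "CA": timedelta(hours=3),   # placeholder for "periodic"
}

def national_baseline() -> timedelta:
    """Adopt the tightest configured interval as the nationwide
    default, so compliance does not depend on geolocating users."""
    return min(MINOR_REMINDER_INTERVALS.values())

def reminder_due(minutes_since_last: int, is_minor: bool) -> bool:
    """True when a minor user is owed another break reminder under
    the most-restrictive baseline cadence."""
    if not is_minor:
        return False
    return timedelta(minutes=minutes_since_last) >= national_baseline()
```

Building to the minimum of the table means a new, stricter state law is a one-line configuration change rather than a re-architecture.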
Review your wiretap consent mechanisms. The chatbot wiretap litigation surge targets a distinct legal theory from the state chatbot laws. Companies may wish to evaluate whether their consent disclosures satisfy ECPA and the wiretap statutes in every state where their chatbot operates, particularly California (CIPA), Pennsylvania, Illinois, and other two-party consent jurisdictions. Complying with SB 243’s disclosure requirements does not satisfy the “prior consent” required to avoid wiretap liability.[19]
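Because the AI disclosure does not double as wiretap consent, transcript capture has to be gated on a separate, affirmative consent event. A minimal sketch of that ordering (the state set and class shape are illustrative assumptions; which states actually require all-party consent is a legal determination, not a code constant):

```python
# Hypothetical consent gate: messages are persisted only after an
# affirmative all-party-consent event, which is distinct from the
# "you are chatting with AI" disclosure. The state set below is
# an illustrative assumption, not a legal conclusion.

ALL_PARTY_CONSENT_STATES = {"CA", "PA", "IL", "WA", "FL"}

class ChatSession:
    def __init__(self, user_state: str):
        self.user_state = user_state
        self.consented = False
        self.transcript: list[str] = []

    def record_consent(self) -> None:
        """Call only after the user affirmatively accepts a disclosure
        that the conversation is recorded and analyzed."""
        self.consented = True

    def log_message(self, message: str) -> bool:
        """Persist the message only if recording is permissible in the
        user's state; returns whether it was logged."""
        needs_consent = self.user_state in ALL_PARTY_CONSENT_STATES
        if needs_consent and not self.consented:
            return False  # drop rather than silently record
        self.transcript.append(message)
        return True
```

The ordering is the point: consent precedes capture, and the default on missing consent is to drop, because the wiretap theory attaches to what the chatbot collects, not what it says.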
Prepare for two-front enforcement. California, New York, Oregon, and Washington all provide private rights of action under the new chatbot bills. Wiretap claims under CIPA carry $5,000 per violation. Tennessee’s proposed civil remedy includes $150,000 in liquidated damages. Compliance failures may generate class action exposure on both the legislative and litigation fronts simultaneously. Vendor agreements should address indemnification for both chatbot safety compliance and wiretap liability.
The 78 state bills and 58 federal wiretap suits share a common denominator: the AI chatbot has moved from a novelty to a regulated product. Companies that build to the most restrictive state’s standard and implement wiretap-grade consent today will spend less adapting as the remaining states act and the next wave of lawsuits arrives.
[1] Transparency Coalition, “Chatbot Bill Surge: Nationwide Concern Spurs 78 Proposals in 27 States” (Feb. 2026), https://www.transparencycoalition.ai/news/chatbot-bill-surge-nationwide-concern-spurs-78-proposals-in-27-states.
[2] As of February 17, 2026, active chatbot-specific bills that have advanced past introduction include Oregon SB 1546, New York S9051, Washington SB 5984/HB 2225, Tennessee SB 1580/SB 1493, Illinois SB 3262, Virginia HB 635/SB 796, Arizona HB 2311, Hawaii SB 3001/HB 2502, and Utah HB 438.
[3] Baker Botts analysis of 284 deployer-facing AI litigation matters filed between January 2020 and February 2026, drawn from CourtListener federal dockets, curated litigation trackers, and FTC/State AG enforcement databases.
[4] N.Y. S9051, proposed General Business Law Article 48, Section 1801 (2026).
[5] Tenn. SB 1493, creating Class A felony for knowingly training AI to encourage suicide or criminal homicide (2026). Class A felony sentencing range: Tenn. Code Ann. Section 40-35-111(b)(1).
[6] Tenn. SB 1580, unanimously passed Tennessee Senate (Feb. 2026), prohibiting development or deployment of AI systems that represent themselves as qualified mental health professionals.
[7] Or. SB 1546, advanced by Senate Committee on Early Childhood and Behavioral Health, 4-1 vote (Feb. 12, 2026).
[8] Or. SB 1546 (requiring mandatory conversation interruption and referral to crisis resources upon detection of suicidal ideation or self-harm).
[9] Or. SB 1546 (annual reporting to Oregon Health Authority documenting incidents of crisis resource referrals).
[10] See supra note 3.
[11] Notable 2025 chatbot wiretap defendants include healthcare insurers, life insurance companies, dental chains, and universities.
[12] Exec. Order, “Ensuring a National Policy Framework for Artificial Intelligence” (Dec. 11, 2025), https://www.whitehouse.gov/presidential-actions/2025/12/eliminating-state-law-obstruction-of-national-artificial-intelligence-policy/.
[13] Letter from White House Office of Intergovernmental Affairs to Utah Legislature (Feb. 12, 2026), obtained by Axios. See “Scoop: White House pressures Utah lawmaker to kill AI transparency bill,” Axios (Feb. 15, 2026), https://www.axios.com/2026/02/15/white-house-utah-ai-transparency-bill.
[14] Exec. Order, Sections 3(a) (Commerce evaluation due 90 days from signing) and 4(b) (FTC policy statement due 90 days from signing).
[15] Id. (expressly exempting “child safety protections” from preemption framework).
[16] Letter from Bipartisan Coalition of 36 State Attorneys General to Congressional Leadership (Nov. 25, 2025), https://www.naag.org/press-releases/bipartisan-coalition-of-36-state-attorneys-general-opposes-federal-ban-on-state-ai-laws/.
[17] U.S. Const. art. VI, cl. 2 (Supremacy Clause); see Arizona v. United States, 567 U.S. 387, 399 (2012).
[18] U.S. Senate vote on amendment to remove 10-year moratorium on state AI regulation from reconciliation bill (2025), 99-1.
[19] Under California’s CIPA, Cal. Penal Code Section 632, recording a “confidential communication” without “the consent of all parties” is actionable; Section 631 separately reaches real-time interception of communications. See Javier v. Assurance IQ, LLC, 649 F. Supp. 3d 891 (N.D. Cal. 2023).