On December 19, 2025, New York Governor Kathy Hochul signed the RAISE Act, making New York the first state to enact major AI safety legislation after President Trump's December 11 executive order calling for federal preemption of state AI laws. Three days later, the FTC voted 2-0 to vacate its 2024 consent order against Rytr LLC, maker of an AI writing tool, explicitly citing the Trump Administration's AI Action Plan.[1][2]
Same week. Opposite directions. For in-house counsel at companies deploying AI, this juxtaposition captures the new regulatory reality: don't mistake federal pullback for regulatory relief. States are stepping in—and the compliance obligations are multiplying, not simplifying.
Federal Enforcement Is Narrowing—But Not Disappearing
The Rytr decision signals a significant shift in how the FTC will approach AI enforcement under new leadership. The original 2024 complaint, brought under former Chair Lina Khan, alleged that Rytr's AI review-generation tool could produce false reviews—but didn't allege that anyone actually posted fake reviews using the tool. The new FTC, led by Chair Andrew Ferguson, found this insufficient.
The agency's reasoning is instructive. In vacating the order, the Commission stated that the original complaint "contains no allegations that Rytr itself created deceptive marketing material, only that its customers might have used its tool to do so." Bureau of Consumer Protection Director Christopher Mufarrige put it more bluntly: "Condemning a technology or service simply because it potentially could be used in a problematic manner is inconsistent with the law and ordered liberty."[3]
This represents a doctrinal shift from "potential harm" to "actual harm" as the threshold for AI enforcement. Under this framework, neutral AI tools—those with legitimate uses alongside potential for misuse—face a higher bar for FTC action. The agency will require evidence that the tool was actually used to deceive consumers, not merely that it could be.
But this isn't deregulation across the board. The same day it vacated Rytr, the FTC sent warning letters to ten companies about fake reviews under the Consumer Review Rule. The message: the FTC will still act on consumer deception, but with a higher evidentiary bar for AI-specific theories and less appetite for "speculative harms." Companies that make affirmative misrepresentations about their AI capabilities—the "AI washing" cases—remain squarely in the enforcement crosshairs.[4]
States Aren't Waiting for Federal Resolution
While federal enforcement recalibrates, states are accelerating. New York's RAISE Act requires frontier AI developers to publish safety protocols and to report safety incidents to the state within 72 hours of determining that an incident has occurred. The law creates a new oversight office within the Department of Financial Services. Violations carry penalties of up to $1 million for a first violation and up to $3 million for each subsequent violation. The law takes effect January 1, 2027.[5]
New York joins California, which enacted its Transparency in Frontier AI Act (effective January 2026) with similar developer transparency requirements. Together, these laws create a potential bicoastal de facto standard for frontier AI development—requirements that apply regardless of what happens in federal preemption litigation.
The state activity extends well beyond these flagship laws. Texas's Responsible AI Governance Act, signed in June 2025, takes effect January 1, 2026, establishing governance requirements for AI used in consequential decisions. Colorado's AI Act, delayed by the legislature from February to June 30, 2026, will require deployers to use reasonable care to avoid algorithmic discrimination. And on December 19, the same day Hochul signed the RAISE Act, nearly two dozen state attorneys general sent a letter to the FCC urging it not to preempt state AI laws as contemplated by the Trump executive order.[6]
The federal-state tension is intensifying, not resolving. And until courts rule on preemption challenges, state laws remain enforceable.
What Deployers Should Do Now
For companies using AI systems—as opposed to building them—four priorities emerge from this regulatory moment:
Continue vendor due diligence. Federal enforcement may be narrowing, but state enforcement is not. Your AI vendors' compliance posture matters—perhaps more than before, given the patchwork of state requirements. When evaluating vendors, ask specifically about their state-law compliance programs for Texas, Colorado, New York, and California.
Map your state exposure. Which states' laws apply to your operations? The answer depends on where you operate, where your customers are, and where decisions affecting consumers are made. Inventory your obligations before the next compliance deadline arrives; Colorado's June 30, 2026 effective date is closer than it appears.
Update incident response procedures. New York's 72-hour reporting requirement for safety incidents is aggressive. If your AI vendor experiences an incident, your internal workflows need to support rapid assessment and notification. This requires defined escalation paths, pre-drafted notification templates, and clear authority to make disclosure decisions under time pressure.
Review vendor contracts. Do your AI vendor agreements include state-law compliance representations? Incident notification obligations? Audit rights that would let you verify compliance? If the contract predates the current wave of state AI laws, the answer is likely no. Consider whether amendments are warranted.
The regulatory landscape hasn't simplified—it's bifurcated. Companies that assumed federal preemption would create breathing room may find the opposite: a narrower federal enforcement theory paired with active state enforcement creates compliance complexity, not relief. The prudent approach is to treat state requirements as durable obligations that will persist regardless of how federal preemption battles resolve.
The seesaw isn't balanced. It's in motion—and it's moving toward the states.
[1] N.Y. Governor's Office, "Governor Hochul Signs Nation-Leading Legislation to Require AI Safety Frameworks for Frontier AI Models" (Dec. 19, 2025).
[2] FTC, "FTC Reopens and Sets Aside Rytr Final Order in Response to the Trump Administration's AI Action Plan" (Dec. 22, 2025).
[3] Id.
[4] FTC, "FTC Releases Warning Letters for Fake Consumer Reviews and AI" (Dec. 22, 2025).
[5] N.Y. S6953B/A6453B, Responsible AI Safety and Education Act (signed Dec. 19, 2025).
[6] Letter from State Attorneys General to FCC (Dec. 19, 2025).
