In a single week, regulators on three continents moved against AI platforms capable of generating explicit imagery. Ofcom, the UK’s communications regulator, opened a formal investigation.[1] Malaysia and Indonesia blocked access to an AI image generation tool entirely—the first countries to do so.[2] Three U.S. senators called on Apple and Google to remove an AI application from their app stores.[3] The message is clear: AI-generated sexually explicit content—particularly involving minors—has become a universal enforcement trigger.
A Red Line Emerges
The enforcement actions share a common thread: AI systems that can generate non-consensual intimate imagery or content depicting minors. Unlike debates over AI bias or algorithmic transparency, this is not an area where regulators are feeling their way forward. They are acting swiftly and with unusual international alignment.
The enforcement pattern tracks legislative developments. Texas’s Responsible AI Governance Act, which took effect January 1, 2026, explicitly prohibits AI systems developed with intent to create child sexual abuse material or explicit deepfake content involving minors. The UK is moving to criminalize “nudification apps.” Malaysia and Indonesia did not wait for new legislation—they simply blocked access under existing authority.
The enforcement theory is straightforward: existing consumer protection, child safety, and obscenity laws apply to AI-generated content just as they do to human-created content. Regulators are not waiting for AI-specific statutes.
What This Means for Deployers
Organizations deploying AI image generation capabilities—whether customer-facing products or internal tools—may wish to evaluate their exposure now. The enforcement wave creates several concrete considerations:
Content policy review. If your organization uses or offers AI image generation, acceptable use policies may need to explicitly prohibit generation of non-consensual intimate imagery and any content depicting minors in sexual contexts. Policies are more effective when technically enforced, not merely contractual.
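To make the distinction concrete, the sketch below shows one way a prohibition can be technically enforced rather than left to contract terms: a pre-generation gate that rejects prompts matching prohibited categories. It is illustrative only—the pattern list is a deliberately simplistic stand-in for a trained safety classifier, and generate_image() is a hypothetical placeholder, not any vendor’s API.

```python
import re

# Hypothetical prohibited-category patterns. A production system would use
# a trained moderation classifier; keyword lists alone are easy to evade.
PROHIBITED_PATTERNS = [
    r"\b(nudify|undress)\b",
    r"\b(child|minor|underage)\w*\b.*\b(nude|sexual|explicit)\w*\b",
]

def violates_policy(prompt: str) -> bool:
    """Return True if the prompt matches a prohibited category."""
    lowered = prompt.lower()
    return any(re.search(p, lowered) for p in PROHIBITED_PATTERNS)

def generate_image(prompt: str) -> bytes:
    """Placeholder for a real image-generation call."""
    raise NotImplementedError

def guarded_generate(prompt: str) -> bytes:
    # Enforce the acceptable use policy in code, not just in the terms
    # of service: a violating prompt never reaches the model.
    if violates_policy(prompt):
        raise PermissionError("Prompt rejected under acceptable use policy")
    return generate_image(prompt)
```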
Age verification. Multiple enforcement actions cite inadequate age-gating as a failure point. Organizations may wish to evaluate whether their current verification mechanisms are sufficient, particularly for consumer-facing applications.
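As a point of comparison for what regulators have treated as inadequate, the minimal gate below checks only a self-reported date of birth. The threshold and function names are assumptions for illustration; the enforcement actions suggest consumer-facing tools will need stronger verification (for example, document or third-party checks) than this.

```python
from datetime import date

MINIMUM_AGE = 18  # assumed threshold; the applicable age varies by jurisdiction

def is_of_age(dob: date, today: date | None = None) -> bool:
    """Compute age from a (self-reported) date of birth."""
    today = today or date.today()
    years = today.year - dob.year - (
        (today.month, today.day) < (dob.month, dob.day)
    )
    return years >= MINIMUM_AGE

# Gate the generation feature itself, not just the signup flow.
assert is_of_age(date(2000, 6, 1))
```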
Output monitoring. Relying solely on input filtering is insufficient. The UK investigation specifically cited concerns about outputs, not just prompts. Organizations may want to consider whether they have visibility into what their AI tools actually generate.
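One way to gain that visibility is to screen and log every output after generation, not just filter prompts on the way in. In the sketch below, safety_score() is a stub standing in for a real image-safety classifier or vendor moderation API, and the threshold and audit-log fields are assumptions.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("image_audit")

BLOCK_THRESHOLD = 0.8  # assumed score above which an output is withheld

def safety_score(image_bytes: bytes) -> float:
    """Stub: substitute a real image-safety classifier. Returns 0.0 here."""
    return 0.0

def screen_output(image_bytes: bytes, prompt: str) -> bytes | None:
    score = safety_score(image_bytes)
    # Log every generation, not only blocked ones, so the organization
    # has a record of what the tool actually produces.
    logger.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "score": score,
        "blocked": score >= BLOCK_THRESHOLD,
    }))
    return None if score >= BLOCK_THRESHOLD else image_bytes
```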
Vendor due diligence. For organizations using third-party AI image generation APIs or platforms, the vendor’s content safety practices are now a material consideration. Contract terms may need to address content policy compliance, audit rights, and indemnification for regulatory enforcement.
These considerations align with the broader trend toward AI safety obligations for systems interacting with minors, which we discussed in the context of companion chatbot regulation last November.
Expect Continued Momentum
The international coordination is notable. The EU AI Act’s transparency requirements for AI-generated content take effect in August 2026, including watermarking and labeling obligations. The UK’s Online Safety Act already imposes duties on platforms hosting user-generated content. U.S. states continue to advance AI-specific legislation, with California’s transparency requirements now in effect.
For in-house counsel, the practical takeaway is that AI-generated explicit imagery—especially involving minors—is not a gray area. It is an enforcement priority across jurisdictions. Organizations deploying AI image generation tools may not want to wait for a subpoena or blocking order to evaluate their controls.
[1] “Ofcom launches investigation into X over Grok concerns,” The Register (Jan. 12, 2026), https://www.theregister.com/2026/01/12/xai_grok_uk_regulation/.
[2] “Malaysia, Indonesia become first to block Musk’s Grok over AI deepfakes,” NPR (Jan. 12, 2026), https://www.npr.org/2026/01/12/nx-s1-5674660/malaysia-indonesia-block-grok-ai-deepfakes.
[3] Letter from Sens. Wyden, Markey, and Luján to Apple and Google (Jan. 9, 2026), https://www.wyden.senate.gov/news/press-releases/wyden-markey-and-lujan-urge-apple-and-google-to-remove-x-and-grok-from-app-stores-following-grok-generating-illegal-sexual-images-at-scale.
