
Our Take


White House Publishes National Policy Framework for Artificial Intelligence

On March 20, 2026, the White House released its National Policy Framework for Artificial Intelligence, setting forth a series of legislative recommendations to Congress intended to guide the development of a comprehensive federal AI policy. The framework spans seven broad policy areas: (I) protecting children and empowering parents; (II) safeguarding and strengthening American communities; (III) respecting intellectual property rights and supporting creators; (IV) preventing censorship and protecting free speech; (V) enabling innovation and ensuring American AI dominance; (VI) educating Americans and developing an AI-ready workforce; and (VII) preempting state AI regulation through federal laws. 

Although the framework addresses a wide range of issues, its recommendations with respect to intellectual property—and copyrights in particular—may carry significant implications for content creators, publishers, AI developers, and the companies that work with them. Below, our team examines these broad policy recommendations, including implications for intellectual property and fair use. 

The Administration's Position on AI Training and Fair Use

Perhaps the most consequential statement in the framework (at least, from an IP perspective) is the Administration's view that the act of training AI models on copyrighted material does not violate copyright laws—rather, the Administration believes that such a use falls squarely within the fair use exception to copyright protection. At the same time, the Administration expressly acknowledges the existence of contrary legal arguments and, at its core, supports allowing the courts to determine whether this use actually comports with the requirements of fair use (particularly with respect to the line-drawing between fair use and infringement). Critically, the framework recommends that Congress refrain from taking any action that would affect or preempt the judiciary's resolution of this question, though it does encourage Congress to take some regulatory action in this space.

This approach has several practical implications. For AI developers, the Administration's stated belief provides a degree of political cover, but it does not carry the force of law or judicial precedent. As long as the fair use question remains before the courts, AI developers face continued uncertainty regarding the legality of their training practices. For rights holders, the Administration's decision not to push for a statutory resolution means that, at least for now, the courts remain the primary arena for establishing the boundaries of permissible AI training. Stakeholders on both sides should continue to closely monitor the ongoing litigation landscape, as the outcomes of pending cases could establish precedents with far-reaching consequences. 

Licensing Frameworks and Collective Rights Systems

The framework also recommends that Congress consider enabling licensing frameworks or collective rights systems that would allow rights holders to collectively negotiate compensation from AI providers without incurring antitrust liability, notwithstanding the potential fair use argument above. This recommendation contemplates a structured mechanism through which creators and publishers could, acting collectively, negotiate with AI developers (that is, the companies actually training AI models) for the use of their content—a mechanism that, absent legislative authorization, could raise concerns under existing antitrust law. However, the framework is careful to state that any such legislation should not address when or whether such licensing is required at all. In other words, the Administration envisions a permissive framework—one that removes legal barriers to collective negotiation—but stops short of recommending mandatory licensing or compensation schemes for the use of copyrighted material in training. Rights holders should take note of this distinction, as it suggests that any future licensing regime would be market-driven rather than compulsory.

Digital Replicas and Publicity Rights

In addition to, and to some degree in contrast with, its recommendations on copyright, the framework addresses the increasingly important issue of AI-generated digital replicas. The Administration recommends that Congress consider establishing a federal framework protecting individuals from the unauthorized distribution or commercial use of AI-generated digital replicas of their voice, likeness, or other identifiable attributes—in other words, giving individuals greater control over their own public image and whether or how it is used by AI models. Such a framework would, in effect, create a federal right of publicity specific to AI-generated content.

At the same time, the framework recommends clear exceptions for parody, satire, news reporting, and other expressive works protected by the First Amendment, and cautions that Congress should prevent persons from abusing such a framework to stifle free speech online. This recommendation builds on the Administration's earlier support for the Take It Down Act, which addressed deepfake abuse targeting children and adult victims. A federal digital replica right would represent a significant development in the patchwork of state publicity rights laws and could have wide-ranging implications for the entertainment, media, and technology industries. It may also raise preemption concerns for certain state-specific laws that already tackle this issue.

Ongoing Congressional Monitoring

Additionally, the framework urges Congress to continue carefully monitoring the development of copyright precedents and enforcement in the courts, and to evaluate whether, in light of novel AI-related considerations, additional legislative action beyond the measures proposed in the framework may be needed to fill potential gaps or provide additional protections for content creators. This language signals that the Administration views the current framework as an initial set of recommendations rather than a final word, and it leaves open the possibility of further legislative intervention as the legal and technological landscape continues to evolve.

Broader Context

Beyond intellectual property, the framework includes several other recommendations that may be of interest to AI developers and technology companies. Congress is encouraged to establish regulatory sandboxes for AI applications, provide resources to make federal datasets accessible for AI training, and refrain from creating any new federal rulemaking body to regulate AI, instead relying on existing sector-specific regulators and industry-led standards. The framework also recommends that Congress preempt state AI laws that impose undue burdens, in order to ensure a minimally burdensome national standard rather than a fragmented patchwork of state regulations, while preserving states' traditional police powers and their ability to enforce generally applicable laws such as those protecting children, preventing fraud, and safeguarding consumers. 

Conclusions

Intellectual property holders and AI developers alike should pay close attention to the framework's recommendations, and to Congress's response to them. The Administration has signaled a preference for judicial resolution of the core fair use question, a permissive (but not compulsory) licensing regime, and a new federal digital replica right—all of which could reshape the legal landscape for AI and intellectual property in the coming years. Taken together with the Administration's prior publications and orders in this space, it is clearly signaling a “legislation-light” approach to AI models, allowing competitors in this space to develop industry standards and practices rather than mandating specific conduct via regulation. We will continue to monitor legislative developments and case law in this area and will provide further updates as the situation evolves.


Tags

ai, intellectual property, client update