The Trump Administration’s AI Policy Framework Strikes the Right Balance

David B. McGarry

March 20, 2026

On Friday, the Trump administration released the National Policy Framework for Artificial Intelligence (AI), a document intended to guide Congress in crafting a national regulatory framework for the technology. Congress—and, indeed, the country—must choose whether to allow a fearful and timid perception of technology, myopically obsessed with safety, to dominate policy debates and squelch the innovative energies that have hitherto raised America to global technological leadership. This view has prevailed in many state legislatures—notably in California, Colorado, and New York—resulting in an irregular patchwork of heavy-handed and ill-considered regulations that restrain innovators like the hunter’s net in Aesop’s fable of the Lion and the Mouse. The White House adopts a more intrepid, more American position: “Congress should preempt state AI laws that impose undue burdens to ensure a minimally burdensome national standard consistent with these recommendations, not fifty discordant ones,” the Framework reads.

The Trump administration has laudably endorsed a single, consistent national standard to facilitate—not micromanage—American innovation. Next, Congress must act.

Besides preemption, the Framework proposes several reforms salutary for innovation. “The United States must lead the world in AI by removing barriers to innovation,” the White House states. It advocates “establish[ing] regulatory sandboxes for AI applications,” “streamlin[ing] federal permitting for AI infrastructure construction and operation,” and, instead of creating “any new federal rulemaking body to regulate AI” per se, allowing “existing regulatory bodies with subject matter expertise” to contend with sector-specific applications of the technology. These measures will expand the sphere of freedom in which technologists operate, promoting the development of tools that promise increased general prosperity, new efficiencies for businesses (large and small alike), new medicines and cures for old diseases and disabilities, and much more.

Digital technologies are, very often, information technologies. And as the Biden administration demonstrated, censorial regulators have both the motive and the means to cajole or coerce digital platforms to suppress disfavored speech. The Framework opposes such action, which often violates the First Amendment. It states: “Congress should prevent the United States government from coercing technology providers, including AI providers, to ban, compel, or alter content based on partisan or ideological agendas.” Further, following a model for which the Taxpayers Protection Alliance (TPA) argued last fall, the White House urges Congress to “provide an effective means for Americans to seek redress from the Federal Government for agency efforts to censor expression on AI platforms or dictate the information provided by an AI platform.”

Of course, technologists may configure their models to generate whatever outputs they please. The First Amendment enjoins the state not to interfere in such matters. However, given recent experience, including the Trump administration’s own attempts to alter the editorial policies of media and digital platforms, Congress must provide recourse to censored Americans.

The Framework also holds that the “training of AI models on copyrighted material does not violate copyright laws,” recognizing that traditional legal principles ought to regulate digital technologies as they regulate humans. As TPA put it in a 2024 report, “Fair use is fair use, no matter whether the user is a human or a computer.”

For all its virtues, the Framework is not without flaws. First, with respect to child safety, it promotes the “empower[ment of] parents and guardians with robust tools to manage their children’s privacy settings, screen time, content exposure, and account controls.” This recalls provisions contained in such proposals as the Kids Online Safety Act. Moreover, the White House vaguely contemplates the use of “parental attestation” to determine users’ ages. TPA has long maintained that the process of establishing the parent–child relationship “would inevitably require…intrusive data gathering to prove both the identity of the parent and his or her status as the child’s legal guardian.”

Promoting AI by codifying stable and certain national rules differs categorically from subsidizing its development or deployment. Regrettably, the White House favors “provid[ing] AI resources to small businesses, such as grants, tax incentives, and technical assistance programs, to support wider deployment of AI tools across American industry.” AI ought not to be stifled, but neither should Congress interfere in market processes to incentivize its adoption. It is a powerful technology—perhaps even a revolutionary technology. It can—and will—succeed without the encouragement of Washington, D.C.

Funnily enough, the AI policy of President Trump has as its foremost opponent a bill that bears his name: the TRUMP AMERICA AI Act, released as a discussion draft on Wednesday by Sen. Marsha Blackburn (R-Tenn.). This proposal cannot be classified as Trumpian any more than the whale shark can be classified as a whale. As the R Street Institute’s Adam Thierer put it, Blackburn’s bill:

contains countless new mandates on a broad range of AI policy issues and would open the door to an unprecedented wave of lawsuits through open-ended liability provisions. It also includes several provisions that raise obvious First Amendment and privacy issues. Most problematically, Blackburn’s bill even flirts with the idea of nationalization of AI labs. Such a radical regulatory regime must be wholly rejected. It would undermine the amazing benefits algorithmic systems have to offer Americans while simultaneously handing China the lead in the race for global AI leadership.

Moreover, Blackburn’s draft contains extraneous provisions such as the Kids Online Safety Act—which contravenes the Framework’s injunctions against censorship—and a repeal of Section 230, at odds (conceptually, at least) with the Framework’s disapproval of penalizing “AI developers for a third party’s unlawful conduct involving their models.”

Punchbowl’s Diego Areas Munhoz reported on Friday that Blackburn “remains committed to her AI bill in light of WH framework. She said she ‘welcomes’ the WH ‘to this important discussion’ and looks ‘forward to working with my colleagues to codify the President’s agenda.’” If Blackburn wishes genuinely to “codify the President’s agenda,” she ought to abandon her bill altogether.

Economic and technological progress is not an assured fact of life. Instead, progress relies on an order of laws, institutions, mores, habits, and assumptions within which entrepreneurs, innovators, and consumers can operate with security and certainty. The White House’s Framework—proceeding from the premise that AI has great promise and that innovation ought, in general, to be permissionless—is a worthy blueprint for congressional action.