What You Should Be Reading: March 2025
David B McGarry
April 4, 2025
Welcome to “What You Should Be Reading,” a monthly series in which the Taxpayers Protection Alliance (TPA) pauses its frantic work trying to stop tariffs and other government overreach to recommend a few compelling new works of public-policy research.
March’s edition deviates from the hoary traditions of this blog series. The Trump administration issued a Request for Information (RFI), soliciting public comment to inform the crafting of an Artificial Intelligence (AI) Action Plan. Many organizations submitted comments, including TPA. Although we commend our comments to you, What You Should Be Reading has little use for self-promotion. Instead, below are the comments filed by three titans of technology policy — the Abundance Institute, NetChoice, and the R Street Institute.
So, without further ado…
NetChoice
AI might appear sui generis or entirely novel. In one sense, of course, it is — as are other revolutionary technological breakthroughs. But a myopic obsession with the novel aspects of AI misdirects attention from its commonalities with the technologies that came before. For regulators, such myopia produces calls for entirely new regulatory frameworks when older, time-tested frameworks would serve far better.
“Despite the novelty of AI and the historic inflection point it presents, America does not need a radical new approach to ensure global leadership in all things AI,” writes Patrick Hedger, the director of policy at NetChoice (and formerly of TPA). “The blueprint for success is evident in our history and our national DNA, as demonstrated by America’s current economic dominance, and the global internet built and shaped by Americans, such as Google, Amazon, Meta, and X.”
Hedger describes regulatory scaffolding that, while perhaps not immediately associated with AI, is essential to AI's flourishing. Needed reforms include pro-investment, pro-innovation antitrust enforcement, a rescission of the Biden administration's nonsensical burdening of mergers and acquisitions, and the enactment of a national data-privacy regime.
Hedger identifies two principles by which regulators should approach AI. First, “permissionless innovation,” the notion that innovators ought to be left free to work and experiment unless their activities raise tangible risks or harms. Second, “regulatory humility,” a plain recognition of the epistemological constraints that plague — and always will plague — would-be central planners.
As Adam Smith wrote, “The statesman, who should attempt to direct private people in what manner they ought to employ their capitals, would…assume an authority which could safely be trusted, not only to no single person, but to no council or senate whatever.” That goes for 18th-century mercantilists and 21st-century technocrats alike.
The R Street Institute
AI’s potential can be realized fully only by entrepreneurs and innovators, not by the state — and by entrepreneurs and innovators only if the state declines to micromanage their activities. The model of “permissionless innovation” proved its worth during the internet era, as the generally free American tech sector yielded a crop of technology giants — Apple, Google, Meta, Microsoft, etc. — unmatched by any other nation. Freedom is fertile ground for innovation.
As a matter of rhetoric, the Trump administration's optimistic outlook with respect to AI deserves credit, writes the R Street Institute's Adam Thierer. "Over the past four years, much of the policy dialogue surrounding AI systems has been fear-based, viewing AI less as an opportunity to embrace and more as a danger to avoid," Thierer notes. Europe, whose fear of technology-related risks drove the imposition of stifling regulatory regimes, and which consequently lacks a tech sector of note, supplies ample evidence of the dangers that would attend an American abandonment of permissionless innovation.
Thierer examines the pitfalls inherent to regulating such a technology as AI. “As the most important general-purpose technology of modern times, AI has important linkages with many other upstream and downstream technologies and industries, especially autonomous systems and robotics,” he writes. Due to AI’s wide-ranging applications, many regulators in many federal agencies can lay claim to some slice of authority in managing various kinds of AI products. He continues:
Rather than issuing broad new regulatory edicts based on the hypothetical harms of future AI products, agencies should enforce existing laws or other remedies on an as-needed basis. It is essential to remember that, in many cases, “the best AI law may be one that already exists.” There is no need to develop an entirely new regulatory superstructure for AI systems when so many other remedies exist. These include existing legal tools like product recall authority, unfair and deceptive practices law, and civil rights laws, as well as court-based common law remedies. This approach represents a more flexible way to manage potential AI risks than a top-down, ex-ante regulatory regime that would limit AI’s potential.
As a rule, AI outputs – not AI inputs – should be regulated; and only demonstrated harms – not vague, hypothetical harms – should be punished.
Besides Europe, another, more formidable international competitor looms — China. Unlike Brussels' technocrats, the Chinese Communist Party has propelled its industry into true competition with the U.S. Therefore, the federal government's regulatory missteps with respect to AI endanger not just innovation but national security. Beijing believes authoritarian central planning will out-innovate the West's free markets; stateside, every unnecessary regulatory fetter Washington, D.C. imposes on AI impairs the capacity of free markets to prove the CCP wrong. Americans do not eschew Chinese economic policy because they are too timid — they do so because they believe free enterprise creates more wealth, innovation, and economic stability than any central planner can.
The Abundance Institute
Neil Chilson and Josh Smith of the Abundance Institute note a flaw of the European-style, hyper-regulatory posture towards AI. For example, "Colorado has passed a 'comprehensive' AI regulation that no one knows how to implement," they report. From a crass operational vantage, the most burdensome and sweeping regulations are rarely enforced in toto, in a predictable manner, or without worrisome unintended effects; their sheer immensity makes such enforcement impossible.
That does not mean, however, that new law cannot help AI’s advancement. Liability protections for developers, legal safe harbors, regulatory sandboxes, preemptions of state-level regulatory patchworks, and other measures could be of use. So-called “Right to Compute” laws “could establish that any government actions that restrict the ability to privately own or make use of computational resources for lawful purposes must be limited to those demonstrably necessary and narrowly tailored to fulfill a compelling government interest,” Chilson and Smith write.
Moreover, the federal government must raze the regulatory barriers that now slow the building of energy infrastructure. As AI's proliferation continues, America will consume more energy; an advanced economy will demand far more grid capacity. Of particular note, nuclear energy has long faced formidable regulatory barriers. "The U.S. builds nuclear power generation at a glacial pace," as the pair puts it. This problem is regulatory in nature — it implicates no failure of free markets. Its primary solution, therefore, is regulatory reform and liberalization of the industry.
Note: TPA highlights research projects that contribute meaningfully to important public-policy discussions. TPA does not necessarily endorse the policy recommendations the featured authors make.