The Five Keys to Success for the Trump Administration's AI Push
Taxpayers Protection Alliance
January 24, 2025
By: Juan Londoño and David McGarry
The first days of the second Trump administration have sent a clear message: the new administration is aiming for an overhaul of the federal government's approach to artificial intelligence (AI). The day-one rescission of the Biden executive order on AI and the subsequent announcement of a flagship AI infrastructure project indicate that the government will take a more aggressive, pro-growth approach. This will not be an easy task. However, if the administration wants to get it right, there are five key principles it must follow.
AI regulation should be led by Congress, not by states or bureaucrats
Fragmentation into competing, and potentially contradictory, regulatory standards could introduce burdensome uncertainty and added compliance costs. Offloading regulatory tasks to unaccountable government agencies can also lead to overregulation by unduly exposing the industry to regulatory capture. Congress therefore needs to reassert its legislative authority and ensure that there is a single federal standard that pre-empts any state-level regulation. It must also refrain from offloading all rulemaking power to federal agencies.
In most circumstances, states can function well as “laboratories of democracy”: state and local governments can experiment with different policies to discover the best approach. This dynamic, however, has not translated well to digital services. It is often nearly impossible to determine who a user is, where he is, or which local regulations apply. These digital services usually transcend geographic lines: a service provider (or its infrastructure) may be located in one state while its users sit in one or several other jurisdictions. Users can also mask their location by using a virtual private network (VPN).
These dynamics add a tremendous amount of complexity to state-level regulatory compliance. The experiment of regulating data privacy at the state level has already shown how costly compliance can be: studies estimate that the privacy “patchwork” costs the American economy up to $112 billion per year. These costs have led to calls for a federal privacy standard that would pre-empt state-level regulations and provide industry with a single set of rules, lowering compliance costs and reducing regulatory risk. Unfortunately, various states seem determined to repeat this failed approach with AI governance. California and Colorado passed comprehensive AI bills (although California's was vetoed by its governor), and more than 700 smaller-scale AI bills have been introduced across the country.
Congress should be wary of agencies creeping in to enact AI regulations that rightfully should go through the legislative process. Unlike the executive-branch rulemaking process, the legislative process necessarily includes the input of many officials, elected by the people, with widely diverging interests. This kind of deliberation usually leads to more prudent bills, better adapted to the needs of the nation as a whole.
This does not mean, however, that Congress cannot rely on subject-matter experts in the executive branch. Understanding AI—let alone regulating it—requires much time and study. Subject-matter experts will necessarily play a big part in ensuring that laws pertaining to AI fit the realities of the technology. Nonetheless, this reliance should not be taken too far: it would be deeply unwise to offload policymaking entirely to experts in administrative agencies and to insulate those experts from congressional oversight.
Compliance costs should never be underestimated
The experience of the European General Data Protection Regulation (GDPR) has taught the world a lesson: the burden of compliance costs should not be underestimated. Unfortunately, most proposals for AI legislation would introduce some sort of licensing regime, annual reporting requirement, or bias-mitigation protocol. Each of these requirements would add significant costs, both in money and in man-hours, to obtain the necessary licenses, produce the annual reports, or constantly monitor systems for potential bias.
Add in the aforementioned costs of the state-level patchwork, and these compliance costs can quickly balloon. Policymakers must understand that every dollar spent on compliance usually means a dollar taken away from other productive uses. Startups and small businesses feel this burden most acutely. Introducing size thresholds does not solve the problem either: the rapid jump in costs once a company crosses a threshold could prove so punishing that it upends the company's whole business model, essentially punishing success.
Europe has, for all the wrong reasons, become a shining example of the impact of regulation and compliance on the tech industry. After adopting some of the strictest regulatory regimes and broadest reporting requirements, it has ended up with an anemic tech industry, an outcome its own policymakers now regret. U.S. policymakers should be wary of recreating such an approach.
Regulation should be narrowly targeted and focus on proven harms
Policymakers should focus on regulating harmful AI outputs, not the technology as a whole. AI features of some sort will likely be incorporated into commercial and productive goods of every kind—from agriculture to manufacturing to logistics to medicine to entertainment, and much more. Because AI is a general-purpose technology, any attempt to impose sweeping restrictions on AI models will likely yield stifling, overbroad regulations. Regulating AI products at large would resemble trying to regulate every product that contains steel in one go: neither practicable nor conducive to innovation.
The solution is to address the specific issues raised by specific types of AI products on an individual basis. This approach allows policy to be tailored to remedy the distinct problems produced by particular tools. However, before breaking new ground, policymakers should first attempt to extend the principles of existing law to cover AI's outputs.
Moreover, as regulators consider how to craft guidelines for AI tools, they should focus on the real-world outcomes those tools produce, not hypothetical ones. It should not matter how AI products are designed so long as they do not harm consumers in ways that violate existing law. This cuts both ways: a tool that has followed all the “correct” processes but has somehow led to proven harm should still face scrutiny. A process-first approach misses the point of, and adds confusion to, regulation.
Government should leverage AI, but it must prepare properly to do so
AI promises to bring new efficiencies, cut costs and waste, improve outputs, and make human workers more productive. Policymakers should seek out ways AI tools can enhance government operations and curtail waste, fraud, and abuse. Procurement, outlays, and other processes could begin to incorporate AI tools to save taxpayers money. While researching and purchasing new tools may require up-front spending, doing so will save money in the long term. Moreover, the government's cybersecurity teams should examine how AI can harden the nation's cyber defenses. Cybercrime and cyber-espionage are ever-present challenges, and AI is playing a growing role in both.
However, for all the advantages AI could offer the government, adopting the technology without the necessary preparation could prove to be a liability. Unfortunately, the federal government has a spotty track record on cybersecurity. The data collection, processing, and storage necessary for deploying AI systems could make federal agencies larger targets for cyberattacks. Greater investment in studying vulnerabilities in government digital services, updating obsolete IT systems, and procuring cybersecurity hardware and software should be key priorities. As of now, the federal government lacks the infrastructure needed to prevent and mitigate the unintended harms that could stem from deploying AI. This could, and should, change.
Scalability will largely depend on the U.S.'s ability to build and provide a consistent flow of energy
AI is a power-hungry technology that requires a tremendous amount of computing power. The advanced hardware involved in AI operations needs constant, reliable access to energy to function. To compete in the global AI industry, American policymakers need to think of computing power—and the factors that enable it—as a strategic resource. The country must therefore produce and distribute as much energy as possible to keep up with ever-rising demand.
The private sector has taken the initiative in overcoming these energy challenges. For example, a number of U.S. companies have invested billions of dollars in reactivating existing nuclear power plants or building new ones. Some have also invested in developing self-powered data centers, oftentimes using renewable energy systems.
However, these investments often struggle with America’s complex regulatory regime. Amazon’s plans to ramp up nuclear power capacity were thwarted by the Federal Energy Regulatory Commission. Meta was blocked from building a new nuclear-powered data center due to a sighting of a rare bee species. These are just the hurdles that arise during the planning phase. Once ground is broken and projects kick off, companies will encounter a tremendous amount of red tape that hinders their attempts to develop new power capacity.
The regulatory barriers for AI companies extend beyond energy access. Permitting and zoning laws have also blocked the construction of essential infrastructure, such as data centers. As the technology scales up, it will need more data centers to provide models with the necessary computing power to collect, process, and transform data. However, the proliferation of anti–data center bills could hamper the country’s ability to build up more computing power.
An AI system with no computing power is like a car with no fuel. A company might have top-notch engineers and sophisticated algorithms, but without access to a reliable energy supply and infrastructure, its systems will fall short of their true potential. Policymakers need to treat computing power as a strategic resource, like oil or precious minerals—especially as foreign competition in the AI space increases. In an AI-driven economic revolution, the countries that can scale up their computing power efficiently are the ones most likely to come out on top.
There is no guarantee of success with emerging technologies, as circumstances change rapidly. However, past experience has charted a reliable blueprint that would leave the U.S. better positioned to succeed. As the Trump administration charts a new course for American AI policy, it would be well served by relying on these five principles.