A Federal AI Moratorium Is the America-First Policy the Constitution Requires

State attempts to regulate artificial intelligence violate the Commerce Clause and risk crippling U.S. innovation.
By Ashley Baker and Jeffrey Depp
The question of who should regulate artificial intelligence has become one of the most politically heated fights in the country and has sharply divided the Republican Party. Recent proposals for federal preemption of state AI laws, such as the White House's draft executive order and a proposal to write preemption language into the National Defense Authorization Act (NDAA), correctly recognize that allowing discordant state rules to govern this vital sector would cripple innovation, subordinate national interests to local politics, and, critically, violate fundamental precepts of the U.S. Constitution.
Unfortunately, these positive steps by the White House and in Congress have reignited a rhetorically charged debate over "Big Tech" and states' rights, one rife with misconceptions and inaccuracies. It is therefore important to cut through the noise and break down the key issues behind the debate.
The argument for federal preemption rests on two foundational pillars: the constitutional imperative of an integrated national market and the economic reality that complex, disruptive innovation can flourish only under decentralized market forces.
Many critics of the AI moratorium have framed their opposition as a federalism issue, claiming that such a measure would undermine "states' rights." Such claims reflect a misunderstanding of American federalism, the founding era, and the relevant text of the Constitution.
Unfortunately for states' rights advocates, the operation of artificial intelligence systems is inherently an exercise of interstate commerce. Every deployment and use of AI involves users, developers, companies, data centers, and energy infrastructure spanning multiple states. Just as the flow of electronic commerce over the internet is considered interstate commerce, AI is a form of algorithmic interstate commerce.
An AI law in a state such as California, for example, would reach beyond the state's borders, forcing outsiders to comply or face liability. On a broad level, basic principles of democratic representation hold that a person who cannot vote for California lawmakers should not have to comply with AI laws passed by the California legislature. More importantly, such laws would run afoul of the constitutional mandate to protect commerce.
The Founding Fathers recognized the clear need for an explicit federal role in interstate commerce after the Articles of Confederation proved disastrous for commerce in the new nation. Under the Articles, state-level barriers created dissatisfaction and discord among neighboring states, impeded commercial intercourse among them, and impaired the ability to raise funds for national defense. The Framers sought to prevent the conflicting and often ruinous regulations of the different states. Their solution can be found in the Commerce Clause, Article I, Section 8, Clause 3 of the U.S. Constitution, which grants Congress the power to regulate commerce "among the several States."
Multiple state AI laws targeting different components of our burgeoning nationwide AI infrastructure would reintroduce this failure, no doubt mandating restrictions that stifle development nationwide. The Articles of Confederation, as historians would later note, were "incompatible with nationhood." An America-First AI policy should uphold the lessons of this founding period. Allowing a patchwork of conflicting regulations to govern such a geographically unbounded technology would recreate precisely the mischief the Commerce Clause was designed to eliminate.
A proper historical reading confirms that the Commerce Clause was intended to promote commerce, not to empower Congress to regulate any activity that touches interstate commerce. Regulations are prudent only when instituted for the limited but significant purpose of nurturing, improving, or promoting private property ownership and economic dynamism. The federal government's role must be to establish a minimally burdensome, uniform national framework that fosters competition, not to impose restrictive policy goals beyond its enumerated powers.
Astoundingly, state lawmakers introduced more than 1,080 AI-related bills throughout the 2025 legislative sessions. That's more than half a dozen new AI bills per day. And many of these proposals cannot even agree on how to define artificial intelligence, much less on why or how it should be regulated.
This underscores a more fundamental problem with regulating AI-model development: these laws target AI systems merely for falling under the broad categorical umbrella of "AI," rather than targeting the conduct that results in specific harm. Our laws have always functioned best when focused on conduct and actions that cause specific harms.
Critics of a federal AI moratorium incorrectly claim that such a pause would prevent states from enforcing preexisting tort laws. This is not the case. Nothing would prevent states from enforcing existing laws that protect against fraud, sex crimes, bias and discrimination, child sexual abuse material (CSAM), or any other harm under technology-neutral laws of general applicability. Read the text. Absolutely nothing in Senator Cruz's original moratorium could be interpreted as conflicting with existing laws that apply regardless of the type of technology involved.
Federal action is also not “AI Amnesty for Big Tech.” Just because the federal government has exercised its legitimate powers found in the Commerce Clause and the Supremacy Clause to put a pause on state AI regulations doesn’t mean that tech companies can run rampant with tort violations and claim “we were on a break.” Tech companies are not getting a free pass. “Big tech amnesty” might be a catchy slogan, but it is also untethered from legal realities and is entirely unsupported by the text of the proposals that it is invoked to oppose.
One of the less discussed but most concerning issues in this debate, however, is that these state AI rules represent a more fundamental shift in the law. By targeting the technology itself, state lawmakers have moved outside the boundaries of the state's legitimate, localized police powers, which protect citizens from conduct that causes harm, and into broadly regulating activities that involve interstate commerce. Further, such rigid regulatory frameworks are ill-suited for disruptive technologies like AI because no individual or regulatory committee possesses the foresight to predict the uncertain future of a still-emerging technology.
To regulate a complex and rapidly evolving system like AI, central planners would require omniscience: total knowledge of all possible effects and consequences for all future innovators. That is humanly impossible. As Israel Kirzner recognized, the progress of science and technology is driven by complex individual efforts whose course no computer or committee could predict or prescribe. The necessity of regulatory humility is deeply rooted in the knowledge problem that economists have articulated for over a century.
The work of Philippe Aghion and Joel Mokyr, who shared the 2025 Nobel Prize in Economics, provides the modern context for this reality. Sustained economic growth requires systems that continuously generate, diffuse, and apply useful knowledge. In the process of creative destruction, successful innovation emerges through disruptive upheaval, rendering existing products and technologies obsolete.
This process of dynamic displacement, rather than static stability, is the key to growth. Attempts by state agencies to dictate multiple conflicting policy preferences, or to impose social engineering agendas, are counterproductive, inhibiting the growth generated by new technological revolutions. The market process, relying on trial-and-error experimentation, is the only mechanism that can spontaneously coordinate growth through dynamic displacement.
The federal government, therefore, must preempt state fragmentation to protect and promote American AI development. To do otherwise is to sacrifice constitutional principles and condemn a generational wave of innovation to regulatory demise, guaranteeing that a powerful technological engine stalls soon after ignition.
Ashley Baker is the Executive Director and Jeffrey Depp is Senior Counsel for Law and Policy at The Committee for Justice.

