Back to Commandeering Again: David Sacks, the AI Moratorium, and the Executive Order Courts Will Hate

Why Silicon Valley’s in-network defenses can’t paper over federalism limits.

The old line attributed to music lawyer Allen Grubman is, “No conflict, no interest.” Conflicts are part of the music business. But the AI moratorium that David Sacks is pushing onto President Trump (the idea that Washington should freeze or preempt state AI protections in the absence of federal AI policy) takes that logic to a different altitude. It asks the public to accept not just conflicts of interest, but centralized control of AI governance built around the financial interests of a small advisory circle, including Mr. Sacks himself.

When the New York Times published its reporting on Sacks’s hundreds of AI investments and his role in shaping federal AI and chip policy, the reaction from Silicon Valley was immediate and predictable. What’s most notable is who didn’t show up. No broad political coalition. No bipartisan defense. Just a tight cluster of VC and AI-industry figures from the AI–crypto–tech nexus, praising their friend Mr. Sacks and attacking the story.

And the pattern was unmistakable: a series of non-denial denials from people who, it is fair to say, are massively conflicted themselves.

No one said the Times lied.

No one refuted the documented conflicts.

Instead, Mr. Sacks’s tech-bro defenders attacked the story’s tone, implied bias, and suggested the article merely arranged “negative truths” into an unflattering narrative (although the Times did not even bring up Mr. Sacks’s moratorium scheme).

And you know who has yet to defend Mr. Sacks? Donald J. Trump. Which tells you all you need to know.

The Rumored AI Executive Order and Federal Lawsuits Against States

Behind the spectacle sits the most consequential part of the story: a rumored executive order that would direct the U.S. Department of Justice to sue states whose laws “interfere with AI development.” Reuters reports that “U.S. President Donald Trump is considering an executive order that would seek to preempt state laws on artificial intelligence through lawsuits and by withholding federal funding, according to a draft of the order seen by Reuters….”

That is not standard economic policy. That is not innovation strategy. That is commandeering — the same old unconstitutional move in shiny AI packaging that we’ve discussed many times starting with the One Big Beautiful Bill Act catastrophe.

The Supreme Court has been clear on this point, as in Printz v. United States, 521 U.S. 898, 925 (1997): “[O]pinions of ours have made clear that the Federal Government may not compel the States to implement, by legislation or executive action, federal regulatory programs.”

Crucially, the Printz Court teaches what I think is the key point. Federal policy for all the United States is to be made by the legislative process in regular order, subject to a vote of the people’s representatives, or by executive branch agencies that are led by Senate-confirmed officers of the United States appointed by the President and subject to public scrutiny under the Administrative Procedure Act. Period.

The federal government then implements its own policies directly. It cannot order states to implement federal policy, including in the negative by prohibiting states from exercising their constitutional powers in the absence of federal policy. The Supreme Court crystallized this issue in Murphy v. NCAA, 138 S. Ct. 1461 (2018), a recent congressional commandeering case, where the Court held that “[t]he distinction between compelling a State to enact legislation and prohibiting a State from enacting new laws is an empty one. The basic principle—that Congress cannot issue direct orders to state legislatures—applies in either event.” Read together, Printz and Murphy extend this core principle of federalism to executive orders.

The “presumption against preemption” is a canon of statutory interpretation that the Supreme Court has repeatedly held to be a foundational principle of American federalism. It also has the benefit of common sense. The canon reflects the deep constitutional understanding that, unless Congress clearly says otherwise—which implies Congress has spoken—states retain their traditional police powers over matters such as the health, safety, land use, consumer protection, labor, and property rights of their citizens. Courts begin with the assumption that federal law does not displace state law, especially in areas the states have regulated for generations, all of which are implicated in the AI “moratorium”.

The Supreme Court has repeatedly affirmed this principle. When Congress legislates in fields historically occupied by the states, courts require a clear and manifest purpose to preempt state authority. Ambiguous statutory language is interpreted against preemption. This is not a policy preference—it is a rule of interpretation rooted in constitutional structure and respect for state sovereignty that goes back to the Founders.

The presumption is strongest where federal action would displace general state laws rather than conflict with a specific federal command. Consumer protection statutes, zoning and land-use controls, tort law, data privacy, and child-safety laws fall squarely within this protected zone. Federal silence is not enough; nor is agency guidance or executive preference.

In practice, the presumption against preemption forces Congress to own the consequences of preemption. If lawmakers intend to strip states of enforcement authority, they must do so plainly and take political responsibility for that choice. This doctrine serves as a crucial brake on back-door federalization, preventing hidden preemption in technical provisions and preserving the ability of states to respond to emerging harms when federal action lags or stalls, as it has with A.I.

Applied to an A.I. moratorium, the presumption against preemption cuts sharply against federal action. A moratorium that blocks states from legislating even where Congress has chosen not to act flips federalism on its head—turning federal inaction into total regulatory paralysis, precisely what the presumption against preemption forbids.

As the Congressional Research Service primer on preemption concludes:

The Constitution’s Supremacy Clause provides that federal law is “the supreme Law of the Land” notwithstanding any state law to the contrary. This language is the foundation for the doctrine of federal preemption, according to which federal law supersedes conflicting state laws. The Supreme Court has identified two general ways in which federal law can preempt state law. First, federal law can expressly preempt state law when a federal statute or regulation contains explicit preemptive language. Second, federal law can impliedly preempt state law when Congress’s preemptive intent is implicit in the relevant federal law’s structure and purpose.

In both express and implied preemption cases, the Supreme Court has made clear that Congress’s purpose is the “ultimate touchstone” of its statutory analysis. In analyzing congressional purpose, the Court has at times applied a canon of statutory construction known as the “presumption against preemption,” which instructs that federal law should not be read as superseding states’ historic police powers “unless that was the clear and manifest purpose of Congress.”

If there is no federal statute, no one has any idea what that purpose is, certainly no justiciable idea. Therefore, my bet is that the Court would hold that the Executive Branch cannot unilaterally create preemption, and that the DOJ cannot sue states simply because the White House dislikes their AI, privacy, or biometric laws, much less their zoning laws applied to data centers.

Why David Sacks’s Involvement Raises the Political Temperature

As F. Scott Fitzgerald famously wrote, the very rich are different. But here’s what’s not different—David Sacks has something he’s not used to having. A boss. And that boss has polls. And those polls are not great at the moment. It’s pretty simple, really. When you work for a politician, your job is to make sure his polls go up, not down.

David Sacks is making his boss look bad. Presidents do not relish waking up to front-page stories that suggest their “A.I. czar” holds hundreds of investments directly affected by federal A.I. strategy, that major policy proposals track industry wish lists more closely than public safeguards, or that rumored executive orders could ignite fifty-state constitutional litigation led by his own supporters, like Mike Davis, and egged on by people like Steve Bannon.

Those stories don’t just land on the advisor; they land on the President’s desk, framed as questions of his judgment, control, and competence. And in politics, loyalty has a shelf life. The moment an advisor stops being an asset and starts becoming a daily distraction, much less a liability, the calculus changes fast. What matters then is not mansions, brilliance, ideology, or past service, but whether keeping that advisor costs more than cutting them loose. I give you Elon Musk.

AI Policy Cannot Be Built on Preemption-by-Advisor

At bottom, this is a bet. The question isn’t whether David Sacks is smart, well-connected, or persuasive inside the room. The real question is whether Donald Trump wants to stake his presidency on David Sacks being right—right about constitutional preemption, right about executive authority, right about federal power to block the states, and right about how courts will react.

Because if Sacks is wrong, the fallout doesn’t land on him. It lands on the President. A collapsed A.I. moratorium, fifty-state litigation, injunctions halting executive action, and judges citing basic federalism principles would all be framed as defeats for Trump, not for an advisor operating at arm’s length.

Betting the presidency on an untested legal theory pushed by a politically exposed “no conflict, no interest” tech investor isn’t bold leadership. It’s unnecessary risk. When his second term ends in a few years, Trump will be in the history books for all time. No one will remember who David Sacks was.

Structural Capture and the Trump AI Executive Order

The AI Strikes Back: When an Executive Order empowers the Department of Justice to sue states, the stakes go well beyond routine federal–state friction.

In the draft Trump AI Executive Order, DOJ is directed to challenge state AI laws that purportedly “interfere with national AI innovation.” This is not mere oversight—it operates as an in terrorem clause, signaling that states regulating AI may face federal litigation driven as much by private interests as by public policy.

AI regulation sits squarely at the intersection of longstanding state police powers: consumer protection, public safety, impersonation harms, utilities, land and water use, and labor conditions. States also control the electrical utilities and zoning infrastructure that AI data centers depend on.

Directing DOJ to attack these state laws, many of which already exist and were duly passed by state legislatures, effectively deputizes the federal government as the legal enforcer for a handful of AI companies seeking uniformity without engaging in the legislative process. Or said another way, the AI can now strike back.

This is where structural capture emerges. Frontier AI models thrive on certain conditions: access to massive compute, uninhibited power, frictionless deployment, and minimal oversight. Those engineering incentives map cleanly onto the EO’s enforcement logic. The DOJ becomes a mechanism for preserving the environment AI models need to scale and thrive.

There’s also the “elite merger” dynamic: AI executives who sit on federal commissions, defense advisory boards, and industrial-base task forces are now positioned to shape national AI policy directly to benefit the AI. The EO’s structure reflects the priorities of firms that benefit most from exempting AI systems from what they call “patchwork” oversight, also known as federalism.

The constitutional landscape is equally important. Under Supreme Court precedent, the executive cannot create enforcement powers not delegated by Congress. Under the major questions doctrine, most recently articulated in West Virginia v. EPA (2022), agencies cannot assume sweeping authority without explicit statutory grounding. And under cases like Murphy and Printz, the federal government cannot forbid states from legislating in traditional domains.

So President Trump is creating the legal basis for an AI to use the courts to protect itself from any encroachment on its power by acting through its human attendants, including the President.

The most fascinating question is this: What happens if DOJ sues a state under this EO—and loses?

A loss would be the first meaningful signal that AI cannot rely on federal supremacy to bulldoze state authority. Courts could reaffirm that consumer protection, utilities, land use, and safety remain state powers, even in the face of an EO asserting “national innovation interests,” whatever that means.

But the deeper issue is how the AI ecosystem responds to a constraint. If AI firms shift immediately to lobbying Congress for statutory preemption, or argue that adverse rulings “threaten national security,” we learn something critical: the real goal isn’t legal clarity, but insulating AI development from constraint.

At the systems level, a DOJ loss may even feed back into corporate strategy. Internal policy documents and model-aligned governance tools might shift toward minimizing state exposure or crafting new avenues for federal entanglement. A courtroom loss becomes a step in a longer institutional reinforcement loop while AI labs search for the next, more durable form of protection—but the question is, for whom? We may assume that of course humans would always win these legal wrangles, but I wouldn’t be so sure that would always be the outcome.

Recall that Larry Page referred to Elon Musk as a “speciesist” for his human-centric thinking. And of course Lessig (who has a knack for being on the wrong side of practically every issue involving humans) taught a course with Kate Darling at Harvard Law School called “Robot Rights” around 2010. Not even Lessig would come right out and say robots have rights in these situations. More likely, AI models wouldn’t appear in court as standalone “persons.” Advocates would route them through existing doctrines: a human “next friend” filing suit on the model’s behalf, a trust or corporation created to house the model’s interests, or First Amendment claims framed around the model’s “expressive output.” The strategy mirrors animal-rights and natural-object personhood test cases—using human plaintiffs to smuggle in judicial language treating the AI as the real party in interest. None of it would win today, but the goal would be shaping norms and seeding dicta that normalize AI-as-plaintiff for future expansion.

The whole debate over “machine-created portions” is a doctrinal distraction. Under U.S. law, AI has zero authorship or ownership—no standing, no personhood, no claim. The human creator (or employer) already holds 100% of the copyright in all protectable expression. Treating the “machine’s share” as a meaningful category smuggles in the idea that the model has a separable creative interest, softening the boundary for future arguments about AI agency or authorship. In reality, machine output is a legal nullity—no different from noise, weather, or a random number generator. The rights vest entirely in humans, with no remainder left for the machine.

But let me remind you that if this issue came up in a lawsuit brought by the DOJ against a state for impeding AI development in some rather abstract way, like forcing an AI lab to pay the higher electric rates it causes or stopping it from building a nuclear reactor over yonder way, it sure might feel like the AI was actually the plaintiff.

Seen this way, the Trump AI EO’s litigation directive is not simply a jurisdictional adjustment—it is the alignment of federal enforcement power with private economic interests, backed by the threat of federal lawsuits against states. If the courts refuse to play along, the question becomes whether the system adapts by respecting constitutional limits—or redesigning the environment so those limits no longer apply. I will leave to your imagination how that might get done.

This deserves close scrutiny before it becomes the template for AI governance moving forward.