Marc Andreessen’s Dormant Commerce Clause Fantasy

There’s a special kind of hubris in Silicon Valley, but Marc Andreessen may have finally discovered its purest form: imagining that the Dormant Commerce Clause (DCC), a constitutional doctrine his own philosophical allies loathe, will be his golden chariot into the Supreme Court to eliminate state AI regulation.

If you know the history, it borders on the comedic, assuming you think Ayn Rand is a great comedienne.

The DCC is a judge‑created doctrine inferred from the Commerce Clause (Article I, Section 8, Clause 3), preventing states from discriminating against or unduly burdening interstate commerce. Conservatives have long attacked it as a textless judicial invention. Justice Scalia called it a “judicial fraud”; Justice Thomas wants it abolished outright. Yet Andreessen’s Commerce Clause playbook is built on expanding a doctrine the conservative legal movement has spent 40 years dismantling.

Worse for him, the current Supreme Court is the least sympathetic audience possible.

Justice Gorsuch has repeatedly questioned DCC’s legitimacy and rejects free‑floating “extraterritoriality” theories. Justice Barrett, a Scalia textualist, shows no appetite for expanding the doctrine beyond anti‑protectionism. Justice Kavanaugh is business‑friendly but wary of judicial policymaking. None of these justices would give Silicon Valley a nationwide deregulatory veto disguised as constitutional doctrine. Add Alito and Thomas, and Andreessen couldn’t scrape a majority.

And then there’s Ted Cruz — Scalia’s former clerk — loudly cheerleading a doctrine his mentor spent decades attacking.

National Pork Producers Council v. Ross (2023): The Warning Shot

Andreessen’s theory also crashes directly into the Supreme Court’s fractured decision in its most recent DCC case, National Pork Producers Council v. Ross (2023), where industry groups tried to use the DCC to strike down California’s animal-welfare law based on its national economic effects.

The result? A deeply splintered Court produced several opinions. Justice Gorsuch announced the judgment of the Court and delivered the opinion of the Court with respect to Parts I, II, III, IV–A, and V, in which Justices Thomas, Sotomayor, Kagan, and Barrett joined; an opinion with respect to Parts IV–B and IV–D, in which Justices Thomas and Barrett joined; and an opinion with respect to Part IV–C, in which Justices Thomas, Sotomayor, and Kagan joined. Justice Sotomayor filed an opinion concurring in part, in which Justice Kagan joined. Justice Barrett filed an opinion concurring in part. Chief Justice Roberts filed an opinion concurring in part and dissenting in part, in which Justices Alito, Kavanaugh, and Jackson joined. Justice Kavanaugh filed an opinion concurring in part and dissenting in part.

Got it?  

The upshot:
– No majority for expanding DCC “extraterritoriality.”
– No appetite for using DCC to invalidate state laws simply because they influence out‑of‑state markets.
– Multiple justices signaling that courts should not second‑guess state policy judgments through DCC balancing.
– Gorsuch’s lead opinion rejected the very arguments Silicon Valley now repackages for AI.

If Big Tech thinks this Court that decided National Pork (no pun intended) will hand them a nationwide kill-switch on state AI laws, they profoundly misunderstand both the doctrine and the Court.

Andreessen didn’t just pick the wrong legal strategy. He picked the one doctrine the current Court is least willing to expand. The Dormant Commerce Clause isn’t a pathway to victory — it’s a constitutional dead end masquerading as innovation policy.

But…maybe he’s crazy like a fox.  

The Delay’s the Thing: The Dormant Commerce Clause as Delay Warfare

To paraphrase Saul Alinsky, the issue is never the issue; the issue is always delay. Of course, if delay is the true objective, you couldn’t pick a better stalling tactic than hanging an entire federal moratorium on one of the Supreme Court’s most obscure and internally conflicted doctrines. The Dormant Commerce Clause isn’t a real path to victory, not with a Court where Scalia’s intellectual heirs openly question its legitimacy. But it is the perfect fig leaf for an executive order.

The point isn’t to win the case. The point is to give Trump just enough constitutional garnish to issue the EO, freeze state enforcement, and force every challenge into multi-year litigation. That buys the AI industry exactly what it needs: time. Time to scale. Time to consolidate. Time to embed itself into public infrastructure and defense procurement. Time to become “too essential to regulate” or, as Senator Hawley asked, too big to prosecute?

Big Tech doesn’t need a Supreme Court victory. It needs a judicial cloud, a preemption smokescreen, and a procedural maze that chills state action long enough for the industry to entrench itself permanently. And no one knows that better than the moratorium’s biggest cheerleader, Senator Ted Cruz, the Scalia clerk.

The Dormant Commerce Clause, in this context, isn’t a doctrine. It’s delay‑ware—legal molasses poured over every attempt by states to protect their citizens. And that delay may just be the real prize.

Structural Capture and the Trump AI Executive Order

The AI Strikes Back: When an Executive Order empowers the Department of Justice to sue states, the stakes go well beyond routine federal–state friction.

In the draft Trump AI Executive Order, DOJ is directed to challenge state AI laws that purportedly “interfere with national AI innovation.” This is not mere oversight—it operates as an in terrorem clause, signaling that states regulating AI may face federal litigation driven as much by private interests as by public policy.

AI regulation sits squarely at the intersection of longstanding state police powers: consumer protection, public safety, impersonation harms, utilities, land and water use, and labor conditions.  States also control the electrical utilities and zoning infrastructure that AI data centers depend on. 

Directing DOJ to attack these state laws, many of which already exist and were duly passed by state legislatures, effectively deputizes the federal government as the legal enforcer for a handful of AI companies seeking uniformity without engaging in the legislative process. Or said another way, the AI can now strike back.

This is where structural capture emerges. Frontier AI models thrive on certain conditions: access to massive compute, uninhibited power, frictionless deployment, and minimal oversight. Those engineering incentives map cleanly onto the EO’s enforcement logic.

The DOJ becomes a mechanism for preserving the environment AI models need to scale and thrive.

There’s also the “elite merger” dynamic: AI executives who sit on federal commissions, defense advisory boards, and industrial-base task forces are now positioned to shape national AI policy directly to benefit the AI. The EO’s structure reflects the priorities of firms that benefit most from exempting AI systems from what they call “patchwork” oversight, also known as federalism.

The constitutional landscape is equally important. Under Supreme Court precedent, the executive cannot create enforcement powers that Congress has not delegated. Under the major questions doctrine, articulated in West Virginia v. EPA (2022), agencies cannot assume sweeping authority without explicit statutory grounding. And under anticommandeering cases like Murphy v. NCAA (2018) and Printz v. United States (1997), the federal government cannot forbid states from legislating in traditional domains.

So President Trump is creating the legal basis for an AI to use the courts to protect itself from any encroachment on its power by acting through its human attendants, including the President.

The most fascinating question is this: What happens if DOJ sues a state under this EO—and loses?

A loss would be the first meaningful signal that AI cannot rely on federal supremacy to bulldoze state authority. Courts could reaffirm that consumer protection, utilities, land use, and safety remain state powers, even in the face of an EO asserting “national innovation interests,” whatever that means.

But the deeper issue is how the AI ecosystem responds to a constraint. If AI firms shift immediately to lobbying Congress for statutory preemption, or argue that adverse rulings “threaten national security,” we learn something critical: the real goal isn’t legal clarity, but insulating AI development from constraint.

At the systems level, a DOJ loss may even feed back into corporate strategy. Internal policy documents and model-aligned governance tools might shift toward minimizing state exposure or crafting new avenues for federal entanglement. A courtroom loss becomes a step in a longer institutional reinforcement loop while AI labs search for the next, more durable form of protection—but the question is: for whom? We may assume that of course humans would always win these legal wrangles, but I wouldn’t be so sure that would always be the outcome.

Recall that Larry Page reportedly called Elon Musk a “speciesist” for his human-centric thinking. And of course Lessig (who has a knack for being on the wrong side of practically every issue involving humans) taught a course with Kate Darling at Harvard Law School called “Robot Rights” around 2010. Not even Lessig would come right out and say robots have rights in these situations. More likely, AI models wouldn’t appear in court as standalone “persons.” Advocates would route them through existing doctrines: a human “next friend” filing suit on the model’s behalf, a trust or corporation created to house the model’s interests, or First Amendment claims framed around the model’s “expressive output.” The strategy mirrors animal-rights and natural-object personhood test cases—using human plaintiffs to smuggle in judicial language treating the AI as the real party in interest. None of it would win today, but the goal would be shaping norms and seeding dicta that normalize AI-as-plaintiff for future expansion.

The whole debate over “machine-created portions” is a doctrinal distraction. Under U.S. law, AI has zero authorship or ownership—no standing, no personhood, no claim. The human creator (or employer) already holds 100% of the copyright in all protectable expression. Treating the “machine’s share” as a meaningful category smuggles in the idea that the model has a separable creative interest, softening the boundary for future arguments about AI agency or authorship. In reality, machine output is a legal nullity—no different from noise, weather, or a random number generator. The rights vest entirely in humans, with no remainder left for the machine.

But let me remind you that if this issue came up in a lawsuit brought by the DOJ against a state for impeding AI development in some rather abstract way, like forcing an AI lab to pay the higher electric rates it causes or stopping it from building a nuclear reactor over yonder way, it sure might feel like the AI was actually the plaintiff.

Seen this way, the Trump AI EO’s litigation directive is not simply a jurisdictional adjustment—it is the alignment of federal enforcement power with private economic interests, backed by the threat of federal lawsuits against states.  If the courts refuse to play along, the question becomes whether the system adapts by respecting constitutional limits—or redesigning the environment so those limits no longer apply. I will leave to your imagination how that might get done.

This deserves close scrutiny before it becomes the template for AI governance moving forward.

Too Dynamic to Question, Too Dangerous to Ignore

When Ed Newton-Rex left Stability AI, he didn’t just make a career move — he issued a warning. His message was simple: we’ve built an industry that moves too fast to be honest.

AI’s defenders insist that regulation can’t keep up, that oversight will “stifle innovation.” But that speed isn’t a by-product; it’s the business model. The system is engineered for planned obsolescence of accountability — every time the public begins to understand one layer of technology, another version ships, invalidating the debate. The goal isn’t progress; it’s perpetual synthetic novelty, where nothing stays still long enough to be measured or governed, and “nothing says freedom like getting away with it.”

We’ve seen this play before. Car makers built expensive sensors we don’t want that fail on schedule; software platforms built policies that expire the moment they bite. In both cases, complexity became a shield and a racket — “too dynamic to question.” And yet, like those unasked-for, but paid for, features in the cars we don’t want, AI’s design choices are too dangerous to ignore. (What if your brakes really are going out, and it’s not just the sensor malfunctioning?)

Ed Newton-Rex’s point — echoed in his tweets and testimony — is that the industry has mistaken velocity for virtue. He’s right. The danger is not that these systems evolve too quickly to regulate; it’s that they’re designed that way, designed to fail just like that brake sensor. And until lawmakers recognize that speed itself is a form of governance, we’ll keep mistaking momentum for inevitability.

From Fictional “Looking Backward” to Nonfiction Silicon Valley: Will Technologists Crown the New Philosopher‑Kings?

More than a century ago, writers like Edward Bellamy and Edward Mandell House asked a question that feels as urgent in 2025 as it did in their era: Should society be shaped by its people, or designed by its elites? Both grappled with this tension in fiction. Bellamy’s Looking Backward (1888) imagined a future society run by rational experts — technocrats and bureaucrats centralizing economic and social life for the greater good. House’s Philip Dru: Administrator (1912) went a step further, envisioning an American civil war where a visionary figure seizes control from corrupt institutions to impose a new era of equity and order.  Sound familiar?

Today, Silicon Valley’s titans are rehearsing their own versions of these stories. In an era dominated by artificial intelligence, climate crisis, and global instability, the tension between democratic legitimacy and technocratic efficiency is more pronounced than ever.

The Bellamy Model: Eric Schmidt and Biden’s AI Order

President Biden’s sweeping Executive Order on AI, issued in late 2023, feels like a chapter lifted from Looking Backward. Its core premise is unmistakable: trust our national champion “trusted” technologists to design and govern the rules for an era shaped by artificial intelligence. At the heart of this approach is Eric Schmidt, former CEO of Google and a key advisor in shaping the AI order, at least according to Eric Schmidt himself.

Schmidt has long advocated for centralizing AI policymaking within a circle of vetted, elite technologists — a belief reminiscent of Bellamy’s idealistic vision. According to Schmidt, AI and other disruptive technologies are too pivotal, too dangerous, and too impactful to be left to messy democratic debates. For people in Schmidt’s cabal, this approach is prudent: a bulwark against AI’s darker possibilities. But it doesn’t do much to protect against darker possibilities from AI platforms.  For skeptics like me, it raises a haunting question posed by Bellamy himself: Are we delegating too much authority to a technocratic elite?

The Philip Dru Model: Musk, Sacks, and Trump’s Disruption Politics

Meanwhile, across the aisle, another faction of Silicon Valley is aligning itself with Donald Trump and making a very different bet for the future. Here, the nonfiction playbook is closer to the fictional Philip Dru. In House’s novel, an idealistic and forceful figure emerges from a broken system to impose order and equity. Enter Elon Musk and David Sacks, both positioning themselves as champions of disruption, backed by immense platforms, resources, and their own venture funds. 

Musk openly embraces a worldview wherein technologists have both the tools and the mandate to save society by reshaping transportation, energy, space, and AI itself. Meanwhile, Sacks advocates Silicon Valley as a de facto policymaker, disrupting traditional institutions and aligning with leaders like Trump to advance a new era of innovation-driven governance—with no Senate confirmation or even a security clearance. This competing cabal operates with the implicit belief that traditional democratic institutions, inevitably bogged down by process, gridlock, and special interests, can no longer solve society’s biggest problems. To Special Government Employees like Musk and Sacks, their disruption is not a threat to democracy, but its savior.

A New Gilded Age? Or a New Social Contract?

Both threads — Biden and Schmidt’s technocratic centralization and Musk, Sacks, and Trump’s disruption-driven politics — grapple with the legacy of Bellamy and House. In the Gilded Age that inspired those writers, industrial barons sought to justify their dominance with visions of rational, top-down progress. Today’s Silicon Valley billionaires carry a similar vision for the digital era, suggesting that elite technologists, much like the “guardians” of Plato’s Republic, can govern more effectively than traditional democratic institutions.

But at what cost? Will AI policymaking and its implementation evolve as a public endeavor, shaped by citizen accountability? Or will it be molded by corporate elites making decisions in the background? Will future leaders consolidate their role as philosopher-kings and benevolent administrators — making themselves indispensable to the state?

The Stakes Are Clear

As the lines between Silicon Valley and Washington continue to blur, the questions posed by Bellamy and House have never been more relevant: Will technologist philosopher-kings write the rules for our collective future? Will democratic institutions evolve to balance AI and climate crisis effectively? Will the White House of 2025 (and beyond) cede authority to the titans of Silicon Valley? In this pivotal moment, America must ask itself: What kind of future do we want — one that is chosen by its citizens, or one that is designed for its citizens? The answer will define the character of American democracy for the rest of the 21st century — and likely beyond.

AI’s Manhattan Project Rhetoric, Clearance-Free Reality

Every time a tech CEO compares frontier AI to the Manhattan Project, take a breath—and remember what that actually means. Master spycatcher James Jesus Angleton is rolling in his grave (a.k.a. Matt Damon in The Good Shepherd). And like most elevator-pitch talking points, that analogy starts to fall apart on inspection.

The Manhattan Project wasn’t just a moonshot scientific collaboration. It was the most tightly controlled, security-obsessed R&D operation in American history. Every physicist, engineer, and janitor involved had a federal security clearance. Facilities were locked down under the military command of General Leslie Groves. Communications were monitored. Access was compartmentalized. And still—still—the Soviets penetrated it. See Klaus Fuchs. To understand just how secret the Manhattan Project was: General Curtis LeMay had no idea it was happening until he was asked to set up facilities for the Enola Gay at his bomber base on Tinian a few months before the first atomic bomb was dropped. Want to find out the details of any frontier lab? Just pick up the newspaper. Not nearly the same thing. There were no chatbots involved, and there were no Special Government Employees without security clearances.

Oppie Sacks

So when today’s AI executives name-drop Oppenheimer and invoke the gravity of dual-use technologies, what exactly are they suggesting? That we’re building world-altering capabilities without any of the safeguards that even the AI Whiz Kids admit are historically necessary every time they drop the Manhattan Project talking point into a pitch deck?

These frontier labs aren’t locked down. They’re open-plan. They’re not vetting personnel. They’re recruiting from Discord servers. They’re not operating in classified environments. They’re training military–civilian dual-use models on consumer cloud platforms. And when questioned, they invoke private-sector privilege and push back against any suggestion of state or federal regulation. And here’s a newsflash—requiring a security clearance for scientific work in the vital national interest is not regulation. (Neither is copyright, but that’s another story.)

Meanwhile, they’re angling for access to Department of Energy nuclear real estate, government compute subsidies, and preferred status in export policy—all under the justification of “national security” because, you know, China.  They want the symbolism of the Manhattan Project without the substance. They want to be seen as indispensable without being held accountable.

The truth is that AI is dual-use. It can power logistics and surveillance, language learning and warfare. That’s not theoretical—it’s already happening. China openly treats AI as part of its military-civil fusion strategy. Russia has targeted U.S. systems with information warfare bots. And our labs? They’re scraping from the open internet and assuming the training data hasn’t been poisoned with the massive misinformation campaigns on Wikipedia, Reddit and X that are routine.

If even the Manhattan Project—run under maximum secrecy—was infiltrated by Soviet spies, what are the chances that today’s AI labs, operating in the wide open, are immune? Wouldn’t a good spycatcher like Angleton assume these wunderkinds have already been penetrated?

We have no standard vetting for employees. No security clearances. No model release controls. No audit trail for pretraining data integrity. And no clear protocol for foreign access to model weights, inference APIs, or sensitive safety infrastructure. It’s not a matter of if. It’s a matter of when—or more likely, a matter of already.

Remember: nobody got rich working on the Manhattan Project. That’s another big difference. These guys are in it for the money, make no mistake.

So when you hear the Manhattan Project invoked again, ask the follow-up questions: Where’s the security clearance? Where’s the classification? Where’s the real protection? Who’s playing the role of Klaus Fuchs?

Because if AI is our new Manhattan Project, then running it without security is more than hypocrisy. It’s incompetence at scale.

David Sacks Is Learning That the States Still Matter

For a moment, it looked like the tech world’s powerbrokers had pulled it off. Buried deep in a Republican infrastructure and tax package was a sleeper provision — the so-called AI moratorium — that would have blocked states from passing their own AI laws for up to a decade. It was an audacious move: centralize control over one of the most consequential technologies in history, bypass 50 state legislatures, and hand the reins to a small circle of federal agencies and especially to tech industry insiders.

But then it collapsed.

The Senate voted 99–1 to strike the moratorium. Governors rebelled. Attorneys general sounded the alarm. Artists, parents, workers, and privacy advocates from across the political spectrum said “no.” Even hardline conservatives like Ted Cruz eventually reversed course when it came down to the final vote. The message to Big Tech, or the famous “Little Tech,” was clear: the states still matter, and America’s tech elite ignore that at their peril. (“Little Tech” is the latest rhetorical deflection promoted by Big Tech, also known as propaganda.)

The old Google crowd pushed the moratorium; their fingerprints were obvious, having gotten fabulously rich off their two favorites: the DMCA farce and the Section 230 shakedown. But there’s increasing speculation that White House AI Czar and Silicon Valley Viceroy David Sacks, PayPal alum and vocal MAGA-world player, was calling the ball. If true, that makes this defeat even more revealing.

Sacks represents something of a new breed of power-hungry tech-right influencer — part of the emerging “Red Tech” movement that claims to reject woke capitalism and coastal elitism but still wants experts to shape national policy from Silicon Valley, a chapter straight out of Philip Dru: Administrator. Sacks is tied to figures like Peter Thiel, Elon Musk, and a growing network of Trump-aligned venture capitalists. But even that alignment couldn’t save the moratorium.

Why? Because the core problem wasn’t left vs. right. It was top vs. bottom.

In 1964, Ronald Reagan’s classic speech “A Time for Choosing” warned about “a little intellectual elite in a far-distant capitol” deciding what’s best for everyone else. That warning still rings true, except now the “capitol” might just be a server farm in Menlo Park or a podcast studio in LA.

The AI moratorium was an attempt to govern by preemption and fiat, not by consent. And the backlash wasn’t partisan. It came from red states and blue ones alike — places where elected leaders still think they have the right to protect their citizens from unregulated surveillance, deepfakes, data scraping, and economic disruption.

So yes, the defeat of the moratorium was a blow to Google’s strategy of soft-power dominance. But it was also a shot across the bow for David Sacks and the would-be masters of tech populism. You can’t have populism without the people.

If Sacks and his cohort want to play a long game in AI policy, they’ll have to do more than drop ideas into the policy laundry of think tank white papers and Beltway briefings. They’ll need to win public trust, respect state sovereignty, and remember that governing by sneaky safe harbors is no substitute for legitimacy.  

The moratorium failed because it presumed America could be governed like a tech startup — from the top, at speed, with no dissent. Turns out the country is still under the impression that it has something to say about how it is governed, especially by Big Tech.