The Paradox of Huang’s Rope

If the tech industry has a signature fallacy for the 2020s, aside from David Sacks, it belongs to Jensen Huang. The CEO of Nvidia has perfected a circular, self-consuming logic so brazen that it deserves a name: The Paradox of Huang’s Rope. It is the argument that China is too dangerous an AI adversary for the United States to regulate artificial intelligence at home or control the export of his Nvidia chips abroad—while insisting in the very next breath that the U.S. must allow him to keep selling China the advanced Nvidia chips that make China’s AI capabilities possible. The justification destroys its own premise, like handing an adversary the rope to hang you and then pointing to the length of that rope as evidence that you must keep selling more, perhaps to ensure a more “humane” hanging. I didn’t think it was possible to beat “sharing is caring” for utter fallacious bollocks.

The Paradox of Huang’s Rope works like this: First, hype China as an existential AI competitor. Second, declare that any regulatory guardrails—whether they concern training data, safety, export controls, or energy consumption—will cause America to “fall behind.” Third, invoke national security to insist that the U.S. government must not interfere with the breakneck deployment of AI systems across the economy. And finally, quietly lobby for carveouts that allow Nvidia to continue selling ever more powerful chips to the same Chinese entities supposedly creating the danger that justifies deregulation.

It is a master class in circularity: “China is dangerous because of AI → therefore we can’t regulate AI → therefore we must sell China more AI chips → therefore China is even more dangerous → therefore we must regulate even less and export even more to China.” At no point does the loop allow for the possibility that reducing the United States’ role as China’s primary AI hardware supplier might actually reduce the underlying threat. Instead, the logic insists that the only unacceptable risk is the prospect of Nvidia making slightly less money.

This is not hypothetical. While Washington debates export controls, Huang has publicly argued that restrictions on chip sales to China could “damage American technology leadership”—a claim that conflates Nvidia’s quarterly earnings with the national interest. Meanwhile, U.S. intelligence assessments warn that China is building fully autonomous weapons systems, and European analysts caution that Western-supplied chips are appearing in PLA research laboratories. Yet the policy prescription from Nvidia’s corner remains the same: no constraints on the technology, no accountability for the supply chain, and no acknowledgment that the market incentives involved have nothing to do with keeping Americans safe. And anyone who criticizes the authoritarian state run by the Chinese Communist Party is a “China Hawk,” which Huang says is a “badge of shame” and “unpatriotic,” because protecting America from China by cutting off chip exports “destroys the American Dream.” Say what?

The Paradox of Huang’s Rope mirrors other Cold War–style fallacies, in which companies invoke a foreign threat to justify deregulation while quietly accelerating that threat through their own commercial activity. But in the AI context, the stakes are higher. AI is not just another consumer technology; its deployment shapes military posture, labor markets, information ecosystems, and national infrastructure. A strategic environment in which U.S. corporations both enable and monetize an adversary’s technological capabilities is one that demands more regulation, not less.

Naming the fallacy matters because it exposes the intellectual sleight of hand. Once the circularity is visible, the argument collapses. The United States does not strengthen its position by feeding the very capabilities it claims to fear. And it certainly does not safeguard national security by allowing one company’s commercial ambitions to dictate the boundaries of public policy. The Paradox of Huang’s Rope should not guide American AI strategy. It should serve as a warning of how quickly national priorities can be twisted into a justification for private profit.

You Can’t Prosecute Smuggling NVIDIA Chips to the CCP and Authorize Sales to the CCP at the Same Time

The Trump administration is attempting an impossible contradiction: selling advanced NVIDIA AI chips to China while the Department of Justice prosecutes criminal cases for smuggling the exact same chips into China.

According to the DOJ:

“Operation Gatekeeper has exposed a sophisticated smuggling network that threatens our Nation’s security by funneling cutting-edge AI technology to those who would use it against American interests,” said Ganjei. “These chips are the building blocks of AI superiority and are integral to modern military applications. The country that controls these chips will control AI technology; the country that controls AI technology will control the future. The Southern District of Texas will aggressively prosecute anyone who attempts to compromise America’s technological edge.”

That divergence between the White House and its own prosecutors is not industrial policy. That is incoherence. But mostly it’s just bad advice, likely coming from White House AI Czar David Sacks, Mr. Trump’s South African AI policy advisor who may have a hard time getting a security clearance in the first place.

On one hand, DOJ is rightly bringing cases over the illegal diversion of restricted AI chips—recognizing that these processors are strategic technologies with direct national-security implications. On the other hand, the White House is signaling that access to those same chips is negotiable, subject to licensing workarounds, regulatory carve-outs, or political discretion.

You cannot treat a technology as contraband in federal court and as a commercial export in the West Wing.

Pick one.

AI Chips Are Not Consumer Electronics

The United States does not sell China F-35 fighter jets. We do not sell Patriot missile systems. We do not sell advanced avionics platforms and then act surprised when they show up embedded in military infrastructure. High-end AI accelerators are in the same category.

NVIDIA’s most advanced chips are not merely commercial products. They are general-purpose intelligence infrastructure, exactly the kind of dual-use technology that China’s “military-civil fusion” doctrine is built to absorb. They train surveillance systems, military logistics platforms, cyber-offensive tools, and models capable of operating autonomous weapons and battlefield decision-making pipelines with no human in the loop.

If DOJ treats the smuggling of these chips into China as a serious federal crime—and it should—there is no coherent justification for authorizing their sale through executive discretion. Except, of course, money, or in Mr. Sacks’s case, more money.

Fully Autonomous Weapons—and Selling the Rope

China does not need U.S. chips to build consumer AI. It wants them for military acceleration. Advanced NVIDIA AI chips are not just about chatbots or recommendation engines. They are the backbone of fully autonomous weapons systems—autonomous targeting, swarm coordination, battlefield logistics, and decision-support models that compress the kill chain beyond meaningful human control.

There is an old warning attributed to Vladimir Lenin—that capitalists would sell the rope by which they would later be hanged. Apocryphal or not, it captures this moment with uncomfortable precision.

If NVIDIA chips are powerful enough to underpin autonomous weapons systems for allied militaries, they are powerful enough to underpin autonomous weapons systems for adversaries like China. Trump’s own National Security Strategy clearly says previous U.S. elites made “mistaken” assumptions about China, such as the famous one that letting China into the WTO would integrate Beijing into the rules-based international order. Trump tells us that instead China “got rich and powerful” and used this against us, and goes on to describe the CCP’s well-known predatory subsidies, unfair trade, IP theft, industrial espionage, supply-chain leverage, and fentanyl precursor exports as threats the U.S. must “end.” By selling them the most advanced AI chips?

Western governments and investors simultaneously back domestic autonomous-weapons firms—such as Europe-based Helsing, supported by Spotify CEO Daniel Ek—explicitly building AI-enabled munitions for allied defense. That makes exporting equivalent enabling infrastructure to a strategic competitor indefensible.

The AI Moratorium Makes This Worse, Not Better

This contradiction unfolds alongside a proposed federal AI moratorium executive order originating with Mr. Sacks and Adam Thierer of Google’s R Street Institute that would preempt state-level AI protections.
States are told AI is too consequential for local regulation, yet the federal government is prepared to license exports of AI’s core infrastructure abroad.

If AI is too dangerous for states to regulate, it is too dangerous to export. Preemption at home combined with permissiveness abroad is not leadership. It is capture.

This Is What Policy Capture Looks Like

The common thread is not national security. It is Silicon Valley access. David Sacks and others in the AI–VC orbit argue that AI regulation threatens U.S. competitiveness while remaining silent on where the chips go and how they are used.

When DOJ prosecutes smugglers while the White House authorizes exports, the public is entitled to ask whose interests are actually being served. Advisory roles that blur public power and private investment cannot coexist with credible national-security policymaking, particularly when the advisor may not even be able to get a U.S. national security clearance unless the President blesses it.

A Line Has to Be Drawn

If a technology is so sensitive that its unauthorized transfer justifies prosecution, its authorized transfer should be prohibited absent extraordinary national interest. AI accelerators meet that test.

Until the administration can articulate a coherent justification for exporting these capabilities to China, the answer should be no. Not licensed. Not delayed. Not cosmetically restricted.

And if that position conflicts with Silicon Valley advisers who view this as a growth opportunity, they should return to where they belong. The fact that the U.S. is getting 25% of the deal (which I bet never finds its way into America’s general account) means nothing except confirming Lenin’s joke about selling the rope to hang ourselves, you know, kind of like TikTok.

David Sacks should go back to Silicon Valley.

This is not venture capital. This is our national security and he’s selling it like rope.

Back to Commandeering Again: David Sacks, the AI Moratorium, and the Executive Order Courts Will Hate

Why Silicon Valley’s in-network defenses can’t paper over federalism limits.

The old line attributed to music lawyer Allen Grubman is, “No conflict, no interest.” Conflicts are part of the music business. But the AI moratorium that David Sacks is pushing onto President Trump (the idea that Washington should freeze or preempt state AI protections in the absence of federal AI policy) takes that logic to a different altitude. It asks the public to accept not just conflicts of interest, but centralized control of AI governance built around the financial interests of a small advisory circle, including Mr. Sacks himself.

When the New York Times published its reporting on Sacks’s hundreds of AI investments and his role in shaping federal AI and chip policy, the reaction from Silicon Valley was immediate and predictable. What’s most notable is who didn’t show up. No broad political coalition. No bipartisan defense. Just a tight cluster of VC and AI-industry figures from the AI–crypto–tech nexus, praising their friend Mr. Sacks and attacking the story.

And the pattern was unmistakable: a series of non-denial denials from people who it is fair to say are massively conflicted themselves.

No one said the Times lied.

No one refuted the documented conflicts.

Instead, Sacks’s tech-bro defenders attacked the story’s tone, implied bias, and suggested the article merely arranged “negative truths” in an unflattering narrative (although the Times did not even bring up Mr. Sacks’s moratorium scheme).

And you know who has yet to defend Mr. Sacks? Donald J. Trump. Which tells you all you need to know.

The Rumored AI Executive Order and Federal Lawsuits Against States

Behind the spectacle sits the most consequential part of the story: a rumored executive order that would direct the U.S. Department of Justice to sue states whose laws “interfere with AI development.” Reuters reports that “U.S. President Donald Trump is considering an executive order that would seek to preempt state laws on artificial intelligence through lawsuits and by withholding federal funding, according to a draft of the order seen by Reuters….”

That is not standard economic policy. That is not innovation strategy. That is commandeering — the same old unconstitutional move in shiny AI packaging that we’ve discussed many times starting with the One Big Beautiful Bill Act catastrophe.

The Supreme Court has been clear on this, as in Printz v. United States (521 U.S. 898 (1997) at 925): “[O]pinions of ours have made clear that the Federal Government may not compel the States to implement, by legislation or executive action, federal regulatory programs.”

Crucially, the Printz Court teaches us what I think is the key fact. Federal policy for the whole United States is to be made by the legislative process in regular order, subject to a vote of the people’s representatives, or by executive branch agencies that are led by Senate-confirmed officers of the United States appointed by the President and subject to public scrutiny under the Administrative Procedure Act. Period.

The federal government then implements its own policies directly. It cannot order states to implement federal policy, including in the negative by prohibiting states from exercising their Constitutional powers in the absence of federal policy. The Supreme Court crystallized this issue in the recent Congressional commandeering case of Murphy v. NCAA (138 S. Ct. 1461 (2018)), where the court held “[t]he distinction between compelling a State to enact legislation and prohibiting a State from enacting new laws is an empty one. The basic principle—that Congress cannot issue direct orders to state legislatures—applies in either event.” Read together, Printz and Murphy extend this core principle of federalism to executive orders.

The “presumption against preemption” is a canon of statutory interpretation that the Supreme Court has repeatedly held to be a foundational principle of American federalism. It also has the benefit of common sense. The canon reflects the deep Constitutional understanding that, unless Congress clearly says otherwise—which implies Congress has spoken—states retain their traditional police powers over matters such as the health, safety, land use, consumer protection, labor, and property rights of their citizens. Courts begin with the assumption that federal law does not displace state law, especially in areas the states have regulated for generations, all of which are implicated in the AI “moratorium”.

The Supreme Court has repeatedly affirmed this principle. When Congress legislates in fields historically occupied by the states, courts require a clear and manifest purpose to preempt state authority. Ambiguous statutory language is interpreted against preemption. This is not a policy preference—it is a rule of interpretation rooted in constitutional structure and respect for state sovereignty that goes back to the Founders.

The presumption is strongest where federal action would displace general state laws rather than conflict with a specific federal command. Consumer protection statutes, zoning and land-use controls, tort law, data privacy, and child-safety laws fall squarely within this protected zone. Federal silence is not enough; nor is agency guidance or executive preference.

In practice, the presumption against preemption forces Congress to own the consequences of preemption. If lawmakers intend to strip states of enforcement authority, they must do so plainly and take political responsibility for that choice. This doctrine serves as a crucial brake on back-door federalization, preventing hidden preemption in technical provisions and preserving the ability of states to respond to emerging harms when federal action lags or stalls. Like in A.I.

Applied to an A.I. moratorium, the presumption against preemption cuts sharply against federal action. A moratorium that blocks states from legislating even where Congress has chosen not to act flips federalism on its head—turning federal inaction into total regulatory paralysis, precisely what the presumption against preemption forbids.

As the Congressional Research Service primer on preemption concludes:

The Constitution’s Supremacy Clause provides that federal law is “the supreme Law of the Land” notwithstanding any state law to the contrary. This language is the foundation for the doctrine of federal preemption, according to which federal law supersedes conflicting state laws. The Supreme Court has identified two general ways in which federal law can preempt state law. First, federal law can expressly preempt state law when a federal statute or regulation contains explicit preemptive language. Second, federal law can impliedly preempt state law when Congress’s preemptive intent is implicit in the relevant federal law’s structure and purpose.

In both express and implied preemption cases, the Supreme Court has made clear that Congress’s purpose is the “ultimate touchstone” of its statutory analysis. In analyzing congressional purpose, the Court has at times applied a canon of statutory construction known as the “presumption against preemption,” which instructs that federal law should not be read as superseding states’ historic police powers “unless that was the clear and manifest purpose of Congress.”

If there is no federal statute, no one has any idea what that purpose is, certainly no justiciable idea. Therefore, my bet is that the Court would hold that the Executive Branch cannot unilaterally create preemption, and neither can DOJ sue states simply because the White House dislikes their AI, privacy, or biometric laws, much less their zoning laws applied to data centers.

Why David Sacks’s Involvement Raises the Political Temperature

As F. Scott Fitzgerald famously wrote, the very rich are different. But here’s what’s not different—David Sacks has something he’s not used to having. A boss. And that boss has polls. And those polls are not great at the moment. It’s pretty simple, really. When you work for a politician, your job is to make sure his polls go up, not down.

David Sacks is making his boss look bad. Presidents do not relish waking up to front-page stories that suggest their “A.I. czar” holds hundreds of investments directly affected by federal A.I. strategy, that major policy proposals track industry wish lists more closely than public safeguards, or that rumored executive orders could ignite fifty-state constitutional litigation led by the President’s own supporters like Mike Davis and egged on by people like Steve Bannon.

Those stories don’t just land on the advisor; they land on the President’s desk, framed as questions of his judgment, control, and competence. And in politics, loyalty has a shelf life. The moment an advisor stops being an asset and starts becoming a daily distraction, much less a liability, the calculus changes fast. What matters then is not mansions, brilliance, ideology, or past service, but whether keeping that advisor costs more than cutting them loose. I give you Elon Musk.

AI Policy Cannot Be Built on Preemption-by-Advisor

At bottom, this is a bet. The question isn’t whether David Sacks is smart, well-connected, or persuasive inside the room. The real question is whether Donald Trump wants to stake his presidency on David Sacks being right—right about constitutional preemption, right about executive authority, right about federal power to block the states, and right about how courts will react.

Because if Sacks is wrong, the fallout doesn’t land on him. It lands on the President. A collapsed A.I. moratorium, fifty-state litigation, injunctions halting executive action, and judges citing basic federalism principles would all be framed as defeats for Trump, not for an advisor operating at arm’s length.

Betting the presidency on an untested legal theory pushed by a politically exposed “no conflict no interest” tech investor isn’t bold leadership. It’s unnecessary risk. When Trump’s second term is over in a few years, Trump will be in the history books for all time. No one will remember who David Sacks was.

2026 Mechanical Rate 13.1¢

The Copyright Royalty Judges have announced that the new COLA-adjusted minimum statutory rate for 2026 is 13.1¢ for physical and downloads, up from 12.7¢, effective 1/1/26. This is the last year of the Phonorecords IV rate period, so that’s an increase from the 9.1¢ frozen mechanical rate that had been in effect for 15 years.
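
For readers who like to see the arithmetic (my back-of-the-envelope figures using the rates above, not the Judges’ methodology), the year-over-year bump and the increase over the old frozen rate work out roughly as follows:

$$\frac{13.1 - 12.7}{12.7} \approx 3.1\% \qquad\qquad \frac{13.1 - 9.1}{9.1} \approx 44\%$$

In other words, roughly a 3% cost-of-living adjustment going into 2026, and about a 44% improvement over the 9.1¢ rate that sat frozen for 15 years.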

The adjusted rate stands in stark contrast to the streaming mechanical, which not only has been frozen for the entire five-year rate period but has actually declined substantially due to Spotify’s bundling silliness. That smooth move has set up what will no doubt be a donnybrook in Phonorecords V, i.e., the next rate proceeding, which is due to start any minute now (actually more like January, which is close enough).

It must be said that the reason there’s a rate increase for physical/downloads is due to the efforts of independents who filed two rounds of comments in Phonorecords IV and also the willingness of the labels to be flexible and reasonable. I suspect that has a lot to do with the fact that at the end of the day, we are all in the same business and it’s to everyone’s advantage that songwriters thrive. Obviously, the same cannot be said of the streaming platforms like Spotify that are busy seeding AI tracks with both hands. I really don’t know what business those people think they are in, but it’s not the music business.

Good News for TikTok Users: The PRC Definitely Isn’t Interested in Your Data (Just the Global Internet Backbone, Apparently)

If you’re a TikTok user who has ever worried, even a tiny bit, that the People’s Republic of China might have an interest in your behavior, preferences, movements, or social graph, take heart. A newly released Joint Cybersecurity Advisory from intelligence agencies in the United States, Canada, the U.K., Australia, New Zealand, and a long list of allied countries proves beyond any shadow of a doubt that the PRC is far too busy compromising the world’s telecommunications infrastructure to care about your TikTok “For You Page.”

Nothing to see here. Scroll on.

For those who like their reassurance with a side of evidence, the advisory—titled “Countering Chinese State Actors’ Compromise of Networks Worldwide to Feed Global Espionage System”—is one of the clearest, broadest warnings ever issued about a Chinese state-sponsored intrusion campaign. And, because the agencies involved designated it TLP:CLEAR, meaning it is not sensitive and may be shared publicly without restriction, you can read it yourself.

The World’s Telecom Backbones: Now Featuring Uninvited Guests

The intel agency advisory describes a “Typhoon class” global espionage ecosystem run through persistent compromises of backbone routers, provider-edge and customer-edge routers, ISP and telecom infrastructure, transportation networks, lodging and hospitality systems, and government and military-adjacent networks.

This is not hypothetical. The advisory includes extremely detailed penetration chains: attackers exploit widely known “Common Vulnerabilities and Exposures” (CVEs) in routers, firewalls, VPNs, and management interfaces, then establish persistence through configuration modifications, traffic mirroring, injected services, and encrypted tunnels. This lets them monitor, redirect, copy, or exfiltrate traffic across entire service regions.

Put plainly: if your internet service provider has a heartbeat and publicly routable equipment, the attackers have probably knocked on the door. And for a depressingly large number of large-scale network operators, they got in.

This is classical intelligence tradecraft. The PRC’s immediate goal isn’t ransomware. It’s not crypto mining. It’s not vandalism. It’s good old-fashioned espionage: long-term access, silent monitoring, and selective exploitation.

What They’re Collecting: Clues About Intent

The advisory makes the overall aim explicit: to give PRC intelligence the ability to identify and track targets’ communications and movements worldwide.

That includes metadata on calls, enterprise-internal communications, hotel and travel itineraries, traffic patterns for government and defense systems, and persistent vantage points on global networks.

This is signals intelligence (SIGINT), not smash-and-grab.

And importantly: this kind of operation requires enormous intelligence-analytic processing, not a general-purpose “LLM training dataset.” These are targeted, high-value accesses, not indiscriminate web scrapes. The attackers are going after specific information—strategic, diplomatic, military, infrastructure, and political—not broad consumer content.

So no, this advisory is not about “AI training.” It is about access, exfiltration, and situational awareness across vital global communications arteries.

Does This Tell Us Anything About TikTok?

Officially, no. The advisory never mentions TikTok, ByteDance, or consumer social media apps. It is focused squarely on infrastructure.

But from a strategic-intent standpoint, it absolutely matters. Because when you combine:

1. Global telecom-layer access
2. Persistent long-term SIGINT footholds
3. The PRC’s demonstrated appetite for foreign behavioral data
4. The existence of the richest behavioral dataset on Earth—TikTok’s U.S. user base

—you get a coherent picture of the intelligence ecosystem the Chinese Communist Party is building on…I guess you’d have to say “the world”.

If a nation-state is willing to invest years compromising backbone routers, it is not a stretch to imagine what it could do with a mobile app installed on the phones of, oh, say, 170 million Americans (to pick a random number) that conveniently collects social graphs, location traces, contact patterns, engagement preferences, and political and commercial interests, all visible in the PRC.

But again, don’t worry. The advisory suggests only that Chinese state actors have global access to the infrastructure over which your TikTok traffic travels—not that they would dare take an interest in the app itself. And besides, the TikTok executives swore under oath to the U.S. Congress that it didn’t happen that way, so it must be true.

After all, why would a government running a worldwide intrusion program want access to the largest behavioral-data sensor array outside the NSA?

If you still believe the PRC is nowhere near TikTok’s data, then this advisory will reassure you: it’s just a gentle reminder that Chinese state actors are burrowed into global telecom backbones, hotel networks, transportation systems, and military-adjacent infrastructure—pure souls simply striving to make sure your “For You” page loads quickly.

Marc Andreessen’s Dormant Commerce Clause Fantasy

There’s a special kind of hubris in Silicon Valley, but Marc Andreessen may have finally discovered its purest form: imagining that the Dormant Commerce Clause (DCC) — a Constitutional doctrine his own philosophical allies loathe — will be his golden chariot into the Supreme Court to eliminate state AI regulation.

If you know the history, it borders on comedic, if you think that Ayn Rand is a great comedienne.

The DCC is a judge‑created doctrine inferred from the Commerce Clause (Article I, Section 8, Clause 3), preventing states from discriminating against or unduly burdening interstate commerce. Conservatives have long attacked it as a textless judicial invention. Justice Scalia called it a “judicial fraud”; Justice Thomas wants it abolished outright. Yet Andreessen’s Commerce Clause playbook is built on expanding a doctrine the conservative legal movement has spent 40 years dismantling.

Worse for him, the current Supreme Court is the least sympathetic audience possible.

Justice Gorsuch has repeatedly questioned DCC’s legitimacy and rejects free‑floating “extraterritoriality” theories. Justice Barrett, a Scalia textualist, shows no appetite for expanding the doctrine beyond anti‑protectionism. Justice Kavanaugh is business‑friendly but wary of judicial policymaking. None of these justices would give Silicon Valley a nationwide deregulatory veto disguised as constitutional doctrine. Add Alito and Thomas, and Andreessen couldn’t scrape a majority.

And then there’s Ted Cruz — the former Supreme Court clerk and devoted Scalia admirer — loudly cheerleading a doctrine Scalia spent decades attacking.

National Pork Producers Council v. Ross (2023): The Warning Shot

Andreessen’s theory also crashes directly into the Supreme Court’s fractured decision in the most recent DCC case before SCOTUS, National Pork Producers Council v. Ross (2023), where industry groups tried to use the DCC to strike down California’s animal‑welfare law due to its national economic effects.

The result? A deeply splintered Court produced several opinions. Justice Gorsuch announced the judgment of the Court and delivered the opinion of the Court with respect to Parts I, II, III, IV–A, and V, in which Justices Thomas, Sotomayor, Kagan, and Barrett joined; an opinion with respect to Parts IV–B and IV–D, in which Justices Thomas and Barrett joined; and an opinion with respect to Part IV–C, in which Justices Thomas, Sotomayor, and Kagan joined. Justice Sotomayor filed an opinion concurring in part, in which Justice Kagan joined. Justice Barrett filed an opinion concurring in part. Chief Justice Roberts filed an opinion concurring in part and dissenting in part, in which Justices Alito, Kavanaugh, and Jackson joined. Justice Kavanaugh filed an opinion concurring in part and dissenting in part.

Got it?  

The upshot:
– No majority for expanding DCC “extraterritoriality.”
– No appetite for using DCC to invalidate state laws simply because they influence out‑of‑state markets.
– Multiple justices signaling that courts should not second‑guess state policy judgments through DCC balancing.
– Gorsuch’s lead opinion rejected the very arguments Silicon Valley now repackages for AI.

If Big Tech thinks this Court that decided National Pork—no pun intended—will hand them a nationwide kill‑switch on state AI laws, they profoundly misunderstand the doctrine and the Court.

Andreessen didn’t just pick the wrong legal strategy. He picked the one doctrine the current Court is least willing to expand. The Dormant Commerce Clause isn’t a pathway to victory — it’s a constitutional dead end masquerading as innovation policy.

But…maybe he’s crazy like a fox.  

The Delay’s the Thing: The Dormant Commerce Clause as Delay Warfare

To paraphrase Saul Alinsky, the issue is never the issue; the issue is always delay. Of course, if delay is the true objective, you couldn’t pick a better stalling tactic than hanging an entire federal moratorium on one of the Supreme Court’s most obscure and internally conflicted doctrines. The Dormant Commerce Clause isn’t a real path to victory—not with a Court where Scalia’s intellectual heirs openly question its legitimacy. But it is the perfect fig leaf for an executive order.

The point isn’t to win the case. The point is to give Trump just enough constitutional garnish to issue the EO, freeze state enforcement, and force every challenge into multi‑year litigation. That buys the AI industry exactly what it needs: time. Time to scale. Time to consolidate. Time to embed itself into public infrastructure and defense procurement. Time to become “too essential to regulate” or, as Senator Hawley asked, too big to prosecute?

Big Tech doesn’t need a Supreme Court victory. It needs a judicial cloud, a preemption smokescreen, and a procedural maze that chills state action long enough for the industry to entrench itself permanently. And no one knows that better than the moratorium’s biggest cheerleader, Senator Ted Cruz, the former Supreme Court clerk.

The Dormant Commerce Clause, in this context, isn’t a doctrine. It’s delay‑ware—legal molasses poured over every attempt by states to protect their citizens. And that delay may just be the real prize.

Structural Capture and the Trump AI Executive Order

The AI Strikes Back: When an Executive Order empowers the Department of Justice to sue states, the stakes go well beyond routine federal–state friction. 


In the draft Trump AI Executive Order, DOJ is directed to challenge state AI laws that purportedly “interfere with national AI innovation.”  This is not mere oversight—it operates as an in terrorem clause, signaling that states regulating AI may face federal litigation driven as much by private interests as by public policy.

AI regulation sits squarely at the intersection of longstanding state police powers: consumer protection, public safety, impersonation harms, utilities, land and water use, and labor conditions.  States also control the electrical utilities and zoning infrastructure that AI data centers depend on. 

Directing DOJ to attack these state laws, many of which already exist and were duly passed by state legislatures, effectively deputizes the federal government as the legal enforcer for a handful of AI companies seeking uniformity without engaging in the legislative process. Or said another way, the AI can now strike back.

This is where structural capture emerges. Frontier AI models thrive on certain conditions: access to massive compute, uninhibited power, frictionless deployment, and minimal oversight. 
Those engineering incentives map cleanly onto the EO’s enforcement logic. 

The DOJ becomes a mechanism for preserving the environment AI models need to scale and thrive.

There’s also the “elite merger” dynamic: AI executives who sit on federal commissions, defense advisory boards, and industrial-base task forces are now positioned to shape national AI policy directly to benefit the AI. The EO’s structure reflects the priorities of firms that benefit most from exempting AI systems from what they call “patchwork” oversight, also known as federalism.

The constitutional landscape is equally important. Under Supreme Court precedent, the executive cannot create enforcement powers not delegated by Congress. Under the major questions doctrine (West Virginia v. EPA), agencies cannot assume sweeping authority without explicit statutory grounding. And under cases like Murphy and Printz, the federal government cannot forbid states from legislating in traditional domains.

So President Trump is creating the legal basis for an AI to use the courts to protect itself from any encroachment on its power by acting through its human attendants, including the President.

The most fascinating question is this: What happens if DOJ sues a state under this EO—and loses?

A loss would be the first meaningful signal that AI cannot rely on federal supremacy to bulldoze state authority. Courts could reaffirm that consumer protection, utilities, land use, and safety remain state powers, even in the face of an EO asserting “national innovation interests,” whatever that means.

But the deeper issue is how the AI ecosystem responds to a constraint. If AI firms shift immediately to lobbying Congress for statutory preemption, or argue that adverse rulings “threaten national security,” we learn something critical: the real goal isn’t legal clarity, but insulating AI development from constraint.

At the systems level, a DOJ loss may even feed back into corporate strategy. Internal policy documents and model-aligned governance tools might shift toward minimizing state exposure or crafting new avenues for federal entanglement. A courtroom loss becomes a step in a longer institutional reinforcement loop while AI labs search for the next, more durable form of protection—but the question is for whom? We may assume that of course humans would always win these legal wrangles, but I wouldn’t be so sure that would always be the outcome.

Recall that Larry Page referred to Elon Musk as a “speciesist” for human-centric thinking. And of course Lessig (who has a knack for being on the wrong side of practically every issue involving humans) taught a course with Kate Darling at Harvard Law School called “Robot Rights” around 2010. Not even Lessig would come right out and say robots have rights in these situations. More likely, AI models wouldn’t appear in court as standalone “persons.” Advocates would route them through existing doctrines: a human “next friend” filing suit on the model’s behalf, a trust or corporation created to house the model’s interests, or First Amendment claims framed around the model’s “expressive output.” The strategy mirrors animal-rights and natural-object personhood test cases—using human plaintiffs to smuggle in judicial language treating the AI as the real party in interest. None of it would win today, but the goal would be shaping norms and seeding dicta that normalize AI-as-plaintiff for future expansion.

The whole debate over “machine-created portions” is a doctrinal distraction. Under U.S. law, AI has zero authorship or ownership—no standing, no personhood, no claim. The human creator (or employer) already holds 100% of the copyright in all protectable expression. Treating the “machine’s share” as a meaningful category smuggles in the idea that the model has a separable creative interest, softening the boundary for future arguments about AI agency or authorship. In reality, machine output is a legal nullity—no different from noise, weather, or a random number generator. The rights vest entirely in humans, with no remainder left for the machine.

But let me remind you that if this issue came up in a lawsuit brought by the DOJ against a state for impeding AI development in some rather abstract way, like forcing an AI lab to pay for the higher electric rates it causes or stopping it from building a nuclear reactor over yonder way, it sure might feel like the AI was actually the plaintiff.

Seen this way, the Trump AI EO’s litigation directive is not simply a jurisdictional adjustment—it is the alignment of federal enforcement power with private economic interests, backed by the threat of federal lawsuits against states.  If the courts refuse to play along, the question becomes whether the system adapts by respecting constitutional limits—or redesigning the environment so those limits no longer apply. I will leave to your imagination how that might get done.

This deserves close scrutiny before it becomes the template for AI governance moving forward.

DOJ Authority and the “Because China” Trump AI Executive Order

When an Executive Order purports to empower the Department of Justice to sue states, the stakes go well beyond routine federal–state friction. In the draft Trump AI Executive Order “Eliminating State Law Obstruction of National AI Policy”, DOJ is directed to challenge state AI laws that purportedly “interfere with national AI innovation,” whatever that means. It sounds an awful lot like laws that interfere with Google’s business model. This is not mere oversight—it operates as an in terrorem clause, signaling that states regulating AI may face federal litigation driven at least as much by the private interests of the richest corporations in commercial history as by public policy.

AI regulation sits squarely in longstanding state police powers: consumer protection, public safety, impersonation harms, utilities, land use, and labor conditions. Crucially, states also control the electrical and zoning infrastructure that AI data centers depend on, like, say, whether someone can put a private nuclear reactor next to your house. Directing DOJ to attack these laws effectively deputizes the federal government as the legal enforcer for a handful of private AI companies seeking unbridled “growth” without engaging in the legislative process. Meaning you don’t get a vote. All this against the backdrop of one of the biggest economic bubbles since the last time these companies nearly tanked the U.S. economy.

This inversion is constitutionally significant. 

Historically, DOJ sues states to vindicate federal rights or enforce federal statutes—not to advance the commercial preferences of private industries. Here, the EO appears to convert DOJ into a litigation shield for private companies looking to avoid state oversight altogether. Under Youngstown Sheet & Tube Co. v. Sawyer, the President lacks authority to create new enforcement powers without congressional delegation, and under the major questions doctrine (West Virginia v. EPA), a sweeping reallocation of regulatory power requires explicit statutory grounding from Congress, including the Senate. That would be the Senate that resoundingly stripped the last version of the AI moratorium from the One Big Beautiful Bill Act by a vote of 99-1.

There are also First Amendment implications. Many state AI laws address synthetic impersonation, deceptive outputs, and risks associated with algorithmic distribution. If DOJ preempts these laws, the speech environment becomes shaped not by public debate or state protections but by executive preference and the operational needs of the largest AI platforms. Courts have repeatedly warned that government cannot structure the speech ecosystem indirectly through private intermediaries (Bantam Books v. Sullivan).

Seen this way, the Trump AI EO’s litigation directive is not simply a jurisdictional adjustment—it is the alignment of federal enforcement power with private economic interests, backed by the threat of federal lawsuits against states. These provisions warrant careful scrutiny before they become the blueprint for AI governance moving forward.

The UK Finally Moves to Ban Above-Face-Value Ticket Resale

The UK is preparing to do something fans have begged for and secondary platforms have dreaded for years: ban the resale of tickets above face value. The plan, expected to be announced formally within days, would make the UK one of the toughest anti-scalping jurisdictions in the world. After a decade of explosive profiteering on sites like Viagogo and StubHub, the UK government has decided the resale marketplace needs a reset.

This move delivers on a major campaign promise from the 2024 Labour manifesto and comes on the heels of an unusually unified push from the artist community. More than 40 major artists — including Dua Lipa, Coldplay, Radiohead, Robert Smith, Sam Fender, PJ Harvey, The Chemical Brothers, and Florence + The Machine — signed an open letter urging Prime Minister Sir Keir Starmer to “stop touts from fleecing fans.” (“Touts” is British for “scalpers,” which includes resellers like StubHub.) Sporting groups, consumer advocates, and supporter associations quickly echoed the call.

Under the reported proposal, tickets could only be resold at face value, with minimal, capped service fees to prevent platforms from disguising mark-ups as “processing costs.” This is a clear rejection of earlier floated compromises, such as allowing resale at up to 30% over face value, which consumer groups said would simply legitimize profiteering.

Secondary platforms reacted instantly. Reuters reports that StubHub’s U.S.-listed parent lost around 14% of its market value on the news, compounding a disastrous first earnings report. As CNBC’s Jim Cramer put it bluntly: “It’s been a bust — and when you become a busted IPO, it’s very hard to change the narrative.” The UK announcement didn’t just nudge the stock downward; it slammed the door on the rosy growth story StubHub’s bankers were trying to sell.  Readers will know just how broken up I am about that little turn of events.  

Meanwhile, the UK Competition and Markets Authority has opened investigations into fee structures, “drip pricing,” and deceptive listings on both StubHub and Viagogo. Live Nation/Ticketmaster welcomed the move, noting that it already limits resale to face value in the UK.

One important nuance often lost in the public debate: dynamic pricing is not part of this ban — and in the UK, dynamic pricing isn’t the systemic problem it is in the U.S. Ticketmaster and other platforms consistently tell regulators that artists and their teams decide whether to use dynamic pricing, not the platforms. More importantly, relatively few artists actually ask for it. Most want their fans to get in at predictable, transparent prices — and some, like Robert Smith of The Cure, have publicly rejected dynamic pricing altogether.

That’s why the UK’s reform gets the target right: it goes after the for-profit resale economy, not the artists. It stops arbitrage without interfering with how performers choose to price their own shows.

The looming ban also highlights the widening gap between the UK and the U.S. While the UK is about to outlaw the very model that fuels American secondary platforms, U.S. reform remains paralyzed by lobbying pressure, fragmented state laws, and political reluctance to confront multimillion-dollar resale operators.

If the UK fully implements this reform, it becomes the most significant consumer-protection shift in live entertainment in more than a decade. And given the coalition behind it — artists, fans, sports groups, consumer advocates, and now regulators — this time the momentum looks hard to stop.

The Return of the Bubble Rider: Masa, OpenAI, and the New AI Supercycle

“Hubris gives birth to the tyrant; hubris, when glutted on vain visions, plunges into an abyss of doom.”
Agamemnon by Aeschylus

Masayoshi Son has always believed he could see farther into the technological future than everyone else. Sometimes he does. Sometimes he rides straight off a cliff. But the pattern is unmistakable: he is the market’s most fearless—and sometimes most reckless—Bubble Rider.

In the late 1990s, Masa became the patron saint of the early internet. SoftBank took stakes in dozens of dot-coms, anchored by its wildly successful bet on Yahoo! (yes, Yahoo! Ask your mom.). For a moment, Masa was one of the world’s richest men on paper. Then the dot-bomb hit. Overnight, SoftBank lost nearly everything. Masa has said he personally watched $70 billion evaporate—the largest individual wealth wipeout ever recorded at the time. But his instinct wasn’t to retreat. It was to reload.

That same pattern returned with SoftBank’s Vision Fund. Masa raised unprecedented capital from sovereign wealth pools and bet big on the “AI + data” megatrend—then plowed it into companies like WeWork, Zume, Brandless, and other combustion-ready unicorns. When those valuations collapsed, SoftBank again absorbed catastrophic losses. And yet the thesis survived, just waiting for its next bubble.

We’re now in what I’ve called the AI Bubble—the largest capital-formation mania since the original dot-com wave, powered by foundation AI labs, GPU scarcity, and a global arms race to capture platform rents. And here comes Masa again, right on schedule.

SoftBank has now sold its entire Nvidia stake—the hottest AI infrastructure trade of the decade—freeing up nearly $6 billion. That money is being redirected straight into OpenAI’s secondary stock offering at an eyewatering marked-to-fantasy $500 billion valuation. In the same week, SoftBank confirmed it is preparing even larger AI investments. This is Bubble Riding at its purest: exiting one vertical where returns may be peaking, and piling into the center of speculative gravity before the froth crests.
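
To put the size of that bet in rough perspective (my arithmetic from the figures above, not SoftBank’s disclosures):

$$\frac{\$6\ \text{billion}}{\$500\ \text{billion}} \approx 1.2\%$$

In other words, the proceeds of the entire Nvidia position buy a stake on the order of one percent of OpenAI at that valuation, in exchange for exiting the one AI trade that was demonstrably printing cash.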

What I suspect Masa sees is simple: if generative AI succeeds, the model owners will become the new global monopolies alongside the old global monopolies like Google and Microsoft.  You know, democratizing the Internet. If it fails, the whole electric grid and water supply may crash along with it. He’s choosing a side—and choosing it at absolute top-of-market pricing.

The other difference between the dot-com bubble and the AI bubble is legal, not just financial. Pets.com and its peers (which I refer to generically as “Socks.com,” the company that uses the Internet to find socks under the bed) were silly, but they weren’t being hauled into court en masse for building their core product on other people’s property.

Today’s AI darlings are major companies being run like pirate markets. Meta, Anthropic, OpenAI and others are already facing a wall of litigation from authors, news organizations, visual artists, coders, and music rightsholders who all say the same thing: your flagship models exist only because you ingested our work without permission, at industrial scale, and you’re still doing it. 

That means this bubble isn’t just about overpaying for growth; it’s about overpaying for businesses whose main asset—trained model weights—may be encumbered by unpriced copyright and privacy claims. The dot-com era mispriced eyeballs. The AI era may be mispricing liability.  And that’s serious stuff.

There’s another distortion the dot-com era never had: the degree to which the AI bubble is being propped up by taxpayers. Socks.com didn’t need a new substation, a federal loan guarantee, or a 765 kV transmission corridor to find your socks. Today’s Socks.ai does need all that to use AI to find socks under the bed.  All the AI giants do. Their business models quietly assume public willingness to underwrite an insanely expensive buildout of power plants, high-voltage lines, and water-hungry cooling infrastructure—costs socialized onto ratepayers and communities so that a handful of platforms can chase trillion-dollar valuations. The dot-com bubble misallocated capital; the AI bubble is trying to reroute the grid.

In that sense, this isn’t just financial speculation on GPUs and model weights—it’s a stealth industrial policy, drafted in Silicon Valley and cashed at the public’s expense.

The problem, as always, is timing. Bubbles create enormous winners and equally enormous craters. Masa’s career is proof. But this time, the stakes are higher. The AI Bubble isn’t just a capital cycle; it’s a geopolitical and industrial reordering, pulling in cloud platforms, national security, energy systems, media industries, and governments with a bad case of FOMO scrambling to regulate a technology they barely understand.

And now, just as Masa reloads for his next moonshot, the market itself is starting to wobble. The past week’s selloff may not be random—it feels like a classic early-warning sign of a bubble straining under its own weight. In every speculative cycle, the leaders crack first: the most crowded trades, the highest-multiple stories, the narratives everyone already believes. This time, those leaders are the AI complex—GPU giants, hyperscale clouds, and anything with “model” or “inference” in the deck. When those names roll over together, it tells you something deeper than normal volatility is at work.

What the downturn may expose is the growing narrative about an “earnings gap.” Investors have paid extraordinary prices for companies whose long-term margins remain theoretical, whose energy demands are exploding, and whose regulatory and copyright liabilities are still unpriced. The AI story is enormous—but the business model remains unresolved. A selloff forces the market to remember the thing it forgets at every bubble peak: cash flow eventually matters.

Back in the late cycle of the dot-com era, I had lunch in December of 1999 with a friend who had worked 20 years in a division of a huge conglomerate, bought his division in a leveraged buyout, ran that company for 10 years, then took it public and sold it to another company that then went public. He asked me to explain how these dot-coms were able to go public, a process he equated with hard work and serious people. I said, well, we like them to have four quarters of top-line revenue. He stared at me. I said, I know it’s stupid, but that’s what they say. He said, it’s all going to crash. And boy did it ever.

And ironically, nothing captures this late-cycle psychology better than Masa’s own behavior. SoftBank selling Nvidia—the proven cash-printing side of AI—to buy OpenAI at a $500 billion valuation isn’t contrarian genius; it’s the definition of a crowded climax trade, the moment when everyone is leaning the same direction. When that move coincides with the tape turning red, the message is unmistakable: the AI supercycle may not be over, but the easy phase is.

Whether this is the start of a genuine deflation or just the first hard jolt before the final manic leg, the pattern is clear. The AI Bubble is no longer hypothetical—it is showing up on the trading screens, in the sentiment, and in the rotation of capital itself.

Masa may still believe the crest of the wave lies ahead. But the market has begun to ask the question every bubble eventually faces: What if this is the top of the ride?

Masa is betting that the crest of the curve lies ahead—that we’re in Act Two of an AI supercycle. Maybe he’s right. Or maybe he’s gearing up for his third historic wipeout.

Either way, he’s back in the saddle.

The Bubble Rider rides again.