The Paradox of Huang’s Rope

If the tech industry has a signature fallacy for the 2020s aside from David Sacks, it belongs to Jensen Huang. The CEO of Nvidia has perfected a circular, self-consuming logic so brazen that it deserves a name: The Paradox of Huang’s Rope. It is the argument that China is too dangerous an AI adversary for the United States to regulate artificial intelligence at home or control the export of his Nvidia chips abroad—while insisting in the very next breath that the U.S. must allow him to keep selling China the advanced Nvidia chips that make China’s advanced AI capabilities possible. The justification destroys its own premise, like handing an adversary the rope to hang you and then pointing to the length of that rope as evidence that you must keep selling more, perhaps to ensure a more “humane” hanging. I didn’t think it was possible to beat “sharing is caring” for utter fallacious bollocks.

The Paradox of Huang’s Rope works like this: First, hype China as an existential AI competitor. Second, declare that any regulatory guardrails—whether they concern training data, safety, export controls, or energy consumption—will cause America to “fall behind.” Third, invoke national security to insist that the U.S. government must not interfere with the breakneck deployment of AI systems across the economy. And finally, quietly lobby for carveouts that allow Nvidia to continue selling ever more powerful chips to the same Chinese entities supposedly creating the danger that justifies deregulation.

It is a master class in circularity: “China is dangerous because of AI → therefore we can’t regulate AI → therefore we must sell China more AI chips → therefore China is even more dangerous → therefore we must regulate even less and export even more to China.” At no point does the loop allow for the possibility that reducing the United States’ role as China’s primary AI hardware supplier might actually reduce the underlying threat. Instead, the logic insists that the only unacceptable risk is the prospect of Nvidia making slightly less money.

This is not hypothetical. While Washington debates export controls, Huang has publicly argued that restrictions on chip sales to China could “damage American technology leadership”—a claim that conflates Nvidia’s quarterly earnings with the national interest. Meanwhile, U.S. intelligence assessments warn that China is building fully autonomous weapons systems, and European analysts caution that Western-supplied chips are appearing in PLA research laboratories. Yet the policy prescription from Nvidia’s corner remains the same: no constraints on the technology, no accountability for the supply chain, and no acknowledgment that the market incentives involved have nothing to do with keeping Americans safe. And anyone who criticizes the authoritarian state run by the Chinese Communist Party is a “China Hawk,” which Huang says is a “badge of shame” and “unpatriotic,” because protecting America from China by cutting off chip exports “destroys the American Dream.” Say what?

The Paradox of Huang’s Rope mirrors other Cold War–style fallacies, in which companies invoke a foreign threat to justify deregulation while quietly accelerating that threat through their own commercial activity. But in the AI context, the stakes are higher. AI is not just another consumer technology; its deployment shapes military posture, labor markets, information ecosystems, and national infrastructure. A strategic environment in which U.S. corporations both enable and monetize an adversary’s technological capabilities is one that demands more regulation, not less.

Naming the fallacy matters because it exposes the intellectual sleight of hand. Once the circularity is visible, the argument collapses. The United States does not strengthen its position by feeding the very capabilities it claims to fear. And it certainly does not safeguard national security by allowing one company’s commercial ambitions to dictate the boundaries of public policy. The Paradox of Huang’s Rope should not guide American AI strategy. It should serve as a warning of how quickly national priorities can be twisted into a justification for private profit.

You Can’t Prosecute Smuggling NVIDIA Chips to the CCP and Authorize Sales to the CCP at the Same Time

The Trump administration is attempting an impossible contradiction: selling advanced NVIDIA AI chips to China while the Department of Justice prosecutes criminal cases for smuggling the exact same chips into China.

According to the DOJ:

“Operation Gatekeeper has exposed a sophisticated smuggling network that threatens our Nation’s security by funneling cutting-edge AI technology to those who would use it against American interests,” said Ganjei. “These chips are the building blocks of AI superiority and are integral to modern military applications. The country that controls these chips will control AI technology; the country that controls AI technology will control the future. The Southern District of Texas will aggressively prosecute anyone who attempts to compromise America’s technological edge.”

That divergence between the White House and its own prosecutors is not industrial policy. That is incoherence. But mostly it’s just bad advice, likely coming from White House AI Czar David Sacks, Mr. Trump’s South African AI policy advisor who may have a hard time getting a security clearance in the first place.

On one hand, DOJ is rightly bringing cases over the illegal diversion of restricted AI chips—recognizing that these processors are strategic technologies with direct national-security implications. On the other hand, the White House is signaling that access to those same chips is negotiable, subject to licensing workarounds, regulatory carve-outs, or political discretion.

You cannot treat a technology as contraband in federal court and as a commercial export in the West Wing.

Pick one.

AI Chips Are Not Consumer Electronics

The United States does not sell China F-35 fighter jets. We do not sell Patriot missile systems. We do not sell advanced avionics platforms and then act surprised when they show up embedded in military infrastructure. High-end AI accelerators are in the same category.

NVIDIA’s most advanced chips are not merely commercial products. They are general-purpose intelligence infrastructure, or what China calls “military-civil fusion.” They train surveillance systems, military logistics platforms, cyber-offensive tools, and models capable of operating autonomous weapons and battlefield decision-making pipelines with no human in the loop.

If DOJ treats the smuggling of these chips into China as a serious federal crime—and it should—there is no coherent justification for authorizing their sale through executive discretion. Except, of course, money, or in Mr. Sacks’s case, more money.

Fully Autonomous Weapons—and Selling the Rope

China does not need U.S. chips to build consumer AI. It wants them for military acceleration. Advanced NVIDIA AI chips are not just about chatbots or recommendation engines. They are the backbone of fully autonomous weapons systems—autonomous targeting, swarm coordination, battlefield logistics, and decision-support models that compress the kill chain beyond meaningful human control.

There is an old warning attributed to Vladimir Lenin—that capitalists would sell the rope by which they would later be hanged. Apocryphal or not, it captures this moment with uncomfortable precision.

If NVIDIA chips are powerful enough to underpin autonomous weapons systems for allied militaries, they are powerful enough to underpin autonomous weapons systems for adversaries like China. Trump’s own National Security Strategy statement clearly says previous U.S. elites made “mistaken” assumptions about China, such as the famous one that letting China into the WTO would integrate Beijing into the rules-based international order. Trump tells us that instead China “got rich and powerful” and used this against us, and goes on to describe the CCP’s well-known predatory subsidies, unfair trade, IP theft, industrial espionage, supply-chain leverage, and fentanyl precursor exports as threats the U.S. must “end.” By selling them the most advanced AI chips?

Western governments and investors simultaneously back domestic autonomous-weapons firms—such as Europe-based Helsing, supported by Spotify CEO Daniel Ek—explicitly building AI-enabled munitions for allied defense. That makes exporting equivalent enabling infrastructure to a strategic competitor indefensible.

The AI Moratorium Makes This Worse, Not Better

This contradiction unfolds alongside a proposed federal AI moratorium executive order originating with Mr. Sacks and Adam Thierer of Google’s R Street Institute that would preempt state-level AI protections.
States are told AI is too consequential for local regulation, yet the federal government is prepared to license exports of AI’s core infrastructure abroad.

If AI is too dangerous for states to regulate, it is too dangerous to export. Preemption at home combined with permissiveness abroad is not leadership. It is capture.

This Is What Policy Capture Looks Like

The common thread is not national security. It is Silicon Valley access. David Sacks and others in the AI–VC orbit argue that AI regulation threatens U.S. competitiveness while remaining silent on where the chips go and how they are used.

When DOJ prosecutes smugglers while the White House authorizes exports, the public is entitled to ask whose interests are actually being served. Advisory roles that blur public power and private investment cannot coexist with credible national-security policymaking, particularly when the advisor may not even be able to get a US national security clearance unless the President blesses it.

A Line Has to Be Drawn

If a technology is so sensitive that its unauthorized transfer justifies prosecution, its authorized transfer should be prohibited absent extraordinary national interest. AI accelerators meet that test.

Until the administration can articulate a coherent justification for exporting these capabilities to China, the answer should be no. Not licensed. Not delayed. Not cosmetically restricted.

And if that position conflicts with Silicon Valley advisers who view this as a growth opportunity, they should return to where they belong. The fact that the US is getting 25% of the deal (which I bet never finds its way into America’s general account) means nothing except confirming Lenin’s joke about selling the rope to hang ourselves, you know, kind of like TikTok.

David Sacks should go back to Silicon Valley.

This is not venture capital. This is our national security and he’s selling it like rope.

Marc Andreessen’s Dormant Commerce Clause Fantasy

There’s a special kind of hubris in Silicon Valley, but Marc Andreessen may have finally discovered its purest form: imagining that the Dormant Commerce Clause (DCC) — a Constitutional doctrine his own philosophical allies loathe — will be his golden chariot into the Supreme Court to eliminate state AI regulation.

If you know the history, it borders on comedic, assuming you think Ayn Rand is a great comedienne.

The DCC is a judge‑created doctrine inferred from the Commerce Clause (Article I, Section 8, Clause 3), preventing states from discriminating against or unduly burdening interstate commerce. Conservatives have long attacked it as a textless judicial invention. Justice Scalia called it a “judicial fraud”; Justice Thomas wants it abolished outright. Yet Andreessen’s Commerce Clause playbook is built on expanding a doctrine the conservative legal movement has spent 40 years dismantling.

Worse for him, the current Supreme Court is the least sympathetic audience possible.

Justice Gorsuch has repeatedly questioned DCC’s legitimacy and rejects free‑floating “extraterritoriality” theories. Justice Barrett, a Scalia textualist, shows no appetite for expanding the doctrine beyond anti‑protectionism. Justice Kavanaugh is business‑friendly but wary of judicial policymaking. None of these justices would give Silicon Valley a nationwide deregulatory veto disguised as constitutional doctrine. Add Alito and Thomas, and Andreessen couldn’t scrape a majority.

And then there’s Ted Cruz — Scalia’s former clerk — loudly cheerleading a doctrine his mentor spent decades attacking.

National Pork Producers Council v. Ross (2023): The Warning Shot

Andreessen’s theory also crashes directly into the Supreme Court’s fractured decision in the most recent DCC case before SCOTUS, National Pork Producers Council v. Ross (2023), where industry groups tried to use the DCC to strike down California’s animal‑welfare law due to its national economic effects.

The result? A deeply splintered Court produced several opinions. Justice Gorsuch announced the judgment of the Court, and delivered the opinion of the Court with respect to Parts I, II, III, IV–A, and V, in which Justices Thomas, Sotomayor, Kagan and Barrett joined, an opinion with respect to Parts IV–B and IV–D, in which Justices Thomas and Barrett joined, and an opinion with respect to Part IV–C, in which Justices Thomas, Sotomayor, and Kagan joined. Justice Sotomayor filed an opinion concurring in part, in which Justice Kagan joined. Justice Barrett filed an opinion concurring in part. Chief Justice Roberts filed an opinion concurring in part and dissenting in part, in which Justices Alito, Kavanaugh and Jackson joined. Justice Kavanaugh filed an opinion concurring in part and dissenting in part.

Got it?  

The upshot:
– No majority for expanding DCC “extraterritoriality.”
– No appetite for using DCC to invalidate state laws simply because they influence out‑of‑state markets.
– Multiple justices signaling that courts should not second‑guess state policy judgments through DCC balancing.
– Gorsuch’s lead opinion rejected the very arguments Silicon Valley now repackages for AI.

If Big Tech thinks this Court that decided National Pork—no pun intended—will hand them a nationwide kill‑switch on state AI laws, they profoundly misunderstand the doctrine and the Court.

Andreessen didn’t just pick the wrong legal strategy. He picked the one doctrine the current Court is least willing to expand. The Dormant Commerce Clause isn’t a pathway to victory — it’s a constitutional dead end masquerading as innovation policy.

But…maybe he’s crazy like a fox.  

The Delay’s the Thing: The Dormant Commerce Clause as Delay Warfare

To paraphrase Saul Alinsky, the issue is never the issue; the issue is always delay. Of course, if delay is the true objective, you couldn’t pick a better stalling tactic than hanging an entire federal moratorium on one of the Supreme Court’s most obscure and internally conflicted doctrines. The Dormant Commerce Clause isn’t a real path to victory—not with a Court where Scalia’s intellectual heirs openly question its legitimacy. But it is the perfect fig leaf for an executive order.

The point isn’t to win the case. The point is to give Trump just enough constitutional garnish to issue the EO, freeze state enforcement, and force every challenge into multi‑year litigation. That buys the AI industry exactly what it needs:  time. Time to scale. Time to consolidate. Time to embed itself into public infrastructure and defense procurement. Time  to become “too essential to regulate” or as Senator Hawley asked, too big to prosecute?

Big Tech doesn’t need a Supreme Court victory. It needs a judicial cloud, a preemption smokescreen, and a procedural maze that chills state action long enough for the industry to entrench itself permanently. And no one knows that better than the moratorium’s biggest cheerleader, Senator Ted Cruz, the Scalia clerk.

The Dormant Commerce Clause, in this context, isn’t a doctrine. It’s delay‑ware—legal molasses poured over every attempt by states to protect their citizens. And that delay may just be the real prize.

Structural Capture and the Trump AI Executive Order

The AI Strikes Back: When an Executive Order empowers the Department of Justice to sue states, the stakes go well beyond routine federal–state friction. 


In the draft Trump AI Executive Order, DOJ is directed to challenge state AI laws that purportedly “interfere with national AI innovation.”  This is not mere oversight—it operates as an in terrorem clause, signaling that states regulating AI may face federal litigation driven as much by private interests as by public policy.

AI regulation sits squarely at the intersection of longstanding state police powers: consumer protection, public safety, impersonation harms, utilities, land and water use, and labor conditions.  States also control the electrical utilities and zoning infrastructure that AI data centers depend on. 

Directing DOJ to attack these state laws, many of which already exist and were duly passed by state legislatures, effectively deputizes the federal government as the legal enforcer for a handful of AI companies seeking uniformity without engaging in the legislative process. Or said another way, the AI can now strike back.

This is where structural capture emerges. Frontier AI models thrive on certain conditions: access to massive compute, uninhibited power, frictionless deployment, and minimal oversight. 
Those engineering incentives map cleanly onto the EO’s enforcement logic. 

The DOJ becomes a mechanism for preserving the environment AI models need to scale and thrive.

There’s also the “elite merger” dynamic: AI executives who sit on federal commissions, defense advisory boards, and industrial-base task forces are now positioned to shape national AI policy directly to benefit the AI. The EO’s structure reflects the priorities of firms that benefit most from exempting AI systems from what they call “patchwork” oversight, also known as federalism.

The constitutional landscape is equally important. Under Supreme Court precedent, the executive cannot create enforcement powers not delegated by Congress. Under the major questions doctrine (West Virginia v. EPA), agencies cannot assume sweeping authority without explicit statutory grounding. And under cases like Murphy and Printz, the federal government cannot forbid states from legislating in traditional domains.

So President Trump is creating the legal basis for an AI to use the courts to protect itself from any encroachment on its power by acting through its human attendants, including the President.

The most fascinating question is this: What happens if DOJ sues a state under this EO—and loses?

A loss would be the first meaningful signal that AI cannot rely on federal supremacy to bulldoze state authority. Courts could reaffirm that consumer protection, utilities, land use, and safety remain state powers, even in the face of an EO asserting “national innovation interests,” whatever that means.

But the deeper issue is how the AI ecosystem responds to a constraint. If AI firms shift immediately to lobbying Congress for statutory preemption, or argue that adverse rulings “threaten national security,” we learn something critical: the real goal isn’t legal clarity, but insulating AI development from constraint.

At the systems level, a DOJ loss may even feed back into corporate strategy. Internal policy documents and model-aligned governance tools might shift toward minimizing state exposure or crafting new avenues for federal entanglement. A courtroom loss becomes a step in a longer institutional reinforcement loop while AI labs search for the next, more durable form of protection—but the question is for whom? We may assume that of course humans would always win these legal wrangles, but I wouldn’t be so sure that would always be the outcome.

Recall that Larry Page referred to Elon Musk as a “speciesist” for human-centric thinking. And of course Lessig (who has a knack for being on the wrong side of practically every issue involving humans) taught a course with Kate Darling at Harvard Law School called “Robot Rights” around 2010. Not even Lessig would come right out and say robots have rights in these situations. More likely, AI models wouldn’t appear in court as standalone “persons.” Advocates would route them through existing doctrines: a human “next friend” filing suit on the model’s behalf, a trust or corporation created to house the model’s interests, or First Amendment claims framed around the model’s “expressive output.” The strategy mirrors animal-rights and natural-object personhood test cases—using human plaintiffs to smuggle in judicial language treating the AI as the real party in interest. None of it would win today, but the goal would be shaping norms and seeding dicta that normalize AI-as-plaintiff for future expansion.

The whole debate over “machine-created portions” is a doctrinal distraction. Under U.S. law, AI has zero authorship or ownership—no standing, no personhood, no claim. The human creator (or employer) already holds 100% of the copyright in all protectable expression. Treating the “machine’s share” as a meaningful category smuggles in the idea that the model has a separable creative interest, softening the boundary for future arguments about AI agency or authorship. In reality, machine output is a legal nullity—no different from noise, weather, or a random number generator. The rights vest entirely in humans, with no remainder left for the machine.

But let me remind you that if this issue came up in a lawsuit brought by the DOJ against a state for impeding AI development in some rather abstract way, like forcing an AI lab to pay for the higher electric rates it causes or stopping it from building a nuclear reactor over yonder way, it sure might feel like the AI was actually the plaintiff.

Seen this way, the Trump AI EO’s litigation directive is not simply a jurisdictional adjustment—it is the alignment of federal enforcement power with private economic interests, backed by the threat of federal lawsuits against states.  If the courts refuse to play along, the question becomes whether the system adapts by respecting constitutional limits—or redesigning the environment so those limits no longer apply. I will leave to your imagination how that might get done.

This deserves close scrutiny before it becomes the template for AI governance moving forward.

DOJ Authority and the “Because China” Trump AI Executive Order

When an Executive Order purports to empower the Department of Justice to sue states, the stakes go well beyond routine federal–state friction.  In the draft Trump AI Executive Order “Eliminating State Law Obstruction of National AI Policy”, DOJ is directed to challenge state AI laws that purportedly “interfere with national AI innovation” whatever that means.  It sounds an awful lot like laws that interfere with Google’s business model. This is not mere oversight—it operates as an in terrorem clause, signaling that states regulating AI may face federal litigation driven at least as much by private interests of the richest corporations in commercial history as by public policy.

AI regulation sits squarely in longstanding state police powers: consumer protection, public safety, impersonation harms, utilities, land use, and labor conditions. Crucially, states also control the electrical and zoning infrastructure that AI data centers depend on, like, say, putting a private nuclear reactor next to your house. Directing DOJ to attack these laws effectively deputizes the federal government as the legal enforcer for a handful of private AI companies seeking unbridled “growth” without engaging in the legislative process. Meaning you don’t get a vote. All this against the backdrop of one of the biggest economic bubbles since the last time these companies nearly tanked the U.S. economy.

This inversion is constitutionally significant. 

Historically, DOJ sues states to vindicate federal rights or enforce federal statutes—not to advance the commercial preferences of private industries.  Here, the EO appears to convert DOJ into a litigation shield for private companies looking to avoid state oversight altogether.  Under Youngstown Sheet & Tube Company, et al. v. Charles Sawyer, Secretary of Commerce, the President lacks authority to create new enforcement powers without congressional delegation, and under the major questions doctrine (West Virginia v. EPA), a sweeping reallocation of regulatory power requires explicit statutory grounding from Congress, including the Senate. That would be the Senate that resoundingly stripped the last version of the AI moratorium from the One Big Beautiful Bill Act by a vote of 99-1 against.

There are also First Amendment implications. Many state AI laws address synthetic impersonation, deceptive outputs, and risks associated with algorithmic distribution. If DOJ preempts these laws, the speech environment becomes shaped not by public debate or state protections but by executive preference and the operational needs of the largest AI platforms. Courts have repeatedly warned that government cannot structure the speech ecosystem indirectly through private intermediaries (Bantam Books v. Sullivan).

Seen this way, the Trump AI EO’s litigation directive is not simply a jurisdictional adjustment—it is the alignment of federal enforcement power with private economic interests, backed by the threat of federal lawsuits against states. These provisions warrant careful scrutiny before they become the blueprint for AI governance moving forward.

The Return of the Bubble Rider: Masa, OpenAI, and the New AI Supercycle

“Hubris gives birth to the tyrant; hubris, when glutted on vain visions, plunges into an abyss of doom.”
Agamemnon by Aeschylus

Masayoshi Son has always believed he could see farther into the technological future than everyone else. Sometimes he does. Sometimes he rides straight off a cliff. But the pattern is unmistakable: he is the market’s most fearless—and sometimes most reckless—Bubble Rider.

In the late 1990s, Masa became the patron saint of the early internet. SoftBank took stakes in dozens of dot-coms, anchored by its wildly successful bet on Yahoo! (yes, Yahoo! Ask your mom.). For a moment, Masa was one of the world’s richest men on paper. Then the dot-bomb hit. Overnight, SoftBank lost nearly everything. Masa has said he personally watched $70 billion evaporate—the largest individual wealth wipeout ever recorded at the time. But his instinct wasn’t to retreat. It was to reload.

That same pattern returned with SoftBank’s Vision Fund. Masa raised unprecedented capital from sovereign wealth pools and bet big on the “AI + data” megatrend—then plowed it into companies like WeWork, Zume, Brandless, and other combustion-ready unicorns. When those valuations collapsed, SoftBank again absorbed catastrophic losses. And yet the thesis survived, just waiting for its next bubble.

We’re now in what I’ve called the AI Bubble—the largest capital-formation mania since the original dot-com wave, powered by foundation AI labs, GPU scarcity, and a global arms race to capture platform rents. And here comes Masa again, right on schedule.

SoftBank has now sold its entire Nvidia stake—the hottest AI infrastructure trade of the decade—freeing up nearly $6 billion. That money is being redirected straight into OpenAI’s secondary stock offering at an eyewatering marked-to-fantasy $500 billion valuation. In the same week, SoftBank confirmed it is preparing even larger AI investments. This is Bubble Riding at its purest: exiting one vertical where returns may be peaking, and piling into the center of speculative gravity before the froth crests.

What I suspect Masa sees is simple: if generative AI succeeds, the model owners will become the new global monopolies alongside the old global monopolies like Google and Microsoft.  You know, democratizing the Internet. If it fails, the whole electric grid and water supply may crash along with it. He’s choosing a side—and choosing it at absolute top-of-market pricing.

The other difference between the dot-com bubble and the AI bubble is legal, not just financial. Pets.com and its peers (which I refer to generically as “Socks.com,” the company that uses the Internet to find socks under the bed) were silly, but they weren’t being hauled into court en masse for building their core product on other people’s property.

Today’s AI darlings are major companies being run like pirate markets. Meta, Anthropic, OpenAI and others are already facing a wall of litigation from authors, news organizations, visual artists, coders, and music rightsholders who all say the same thing: your flagship models exist only because you ingested our work without permission, at industrial scale, and you’re still doing it. 

That means this bubble isn’t just about overpaying for growth; it’s about overpaying for businesses whose main asset—trained model weights—may be encumbered by unpriced copyright and privacy claims. The dot-com era mispriced eyeballs. The AI era may be mispricing liability.  And that’s serious stuff.

There’s another distortion the dot-com era never had: the degree to which the AI bubble is being propped up by taxpayers. Socks.com didn’t need a new substation, a federal loan guarantee, or a 765 kV transmission corridor to find your socks. Today’s Socks.ai does need all that to use AI to find socks under the bed.  All the AI giants do. Their business models quietly assume public willingness to underwrite an insanely expensive buildout of power plants, high-voltage lines, and water-hungry cooling infrastructure—costs socialized onto ratepayers and communities so that a handful of platforms can chase trillion-dollar valuations. The dot-com bubble misallocated capital; the AI bubble is trying to reroute the grid.

In that sense, this isn’t just financial speculation on GPUs and model weights—it’s a stealth industrial policy, drafted in Silicon Valley and cashed at the public’s expense.

The problem, as always, is timing. Bubbles create enormous winners and equally enormous craters. Masa’s career is proof. But this time, the stakes are higher. The AI Bubble isn’t just a capital cycle; it’s a geopolitical and industrial reordering, pulling in cloud platforms, national security, energy systems, media industries, and governments with a bad case of FOMO scrambling to regulate a technology they barely understand.

And now, just as Masa reloads for his next moonshot, the market itself is starting to wobble. The past week’s selloff may not be random—it feels like a classic early-warning sign of a bubble straining under its own weight. In every speculative cycle, the leaders crack first: the most crowded trades, the highest-multiple stories, the narratives everyone already believes. This time, those leaders are the AI complex—GPU giants, hyperscale clouds, and anything with “model” or “inference” in the deck. When those names roll over together, it tells you something deeper than normal volatility is at work.

What the downturn may expose is the growing narrative about an “earnings gap.” Investors have paid extraordinary prices for companies whose long-term margins remain theoretical, whose energy demands are exploding, and whose regulatory and copyright liabilities are still unpriced. The AI story is enormous—but the business model remains unresolved. A selloff forces the market to remember the thing it forgets at every bubble peak: cash flow eventually matters.

Back in the late cycle of the dot-com era, I had lunch in December of 1999 with a friend who had worked 20 years in a division of a huge conglomerate, bought his division in a leveraged buyout, ran that company for 10 years, then took it public and sold it to another company that then went public. He asked me to explain how these dot-coms were able to go public, a process he equated with hard work and serious people. I said, well, we like them to have four quarters of top-line revenue. He stared at me. I said, I know it’s stupid, but that’s what they say. He said, it’s all going to crash. And boy did it ever.

And ironically, nothing captures this late-cycle psychology better than Masa’s own behavior. SoftBank selling Nvidia—the proven cash-printing side of AI—to buy OpenAI at a $500 billion valuation isn’t contrarian genius; it’s the definition of a crowded climax trade, the moment when everyone is leaning the same direction. When that move coincides with the tape turning red, the message is unmistakable: the AI supercycle may not be over, but the easy phase is.

Whether this is the start of a genuine deflation or just the first hard jolt before the final manic leg, the pattern is clear. The AI Bubble is no longer hypothetical—it is showing up on the trading screens, in the sentiment, and in the rotation of capital itself.

Masa may still believe the crest of the wave lies ahead. But the market has begun to ask the question every bubble eventually faces: What if this is the top of the ride?

Masa is betting that the crest of the curve lies ahead—that we’re in Act Two of an AI supercycle. Maybe he’s right. Or maybe he’s gearing up for his third historic wipeout.

Either way, he’s back in the saddle.

The Bubble Rider rides again.

Too Dynamic to Question, Too Dangerous to Ignore

When Ed Newton-Rex left Stability AI, he didn’t just make a career move — he issued a warning. His message was simple: we’ve built an industry that moves too fast to be honest.

AI’s defenders insist that regulation can’t keep up, that oversight will “stifle innovation.” But that speed isn’t a by-product; it’s the business model. The system is engineered for planned obsolescence of accountability — every time the public begins to understand one layer of technology, another version ships, invalidating the debate. The goal isn’t progress; it’s perpetual synthetic novelty, where nothing stays still long enough to be measured or governed, and “nothing says freedom like getting away with it.”

We’ve seen this play before. Car makers built expensive sensors we don’t want that fail on schedule; software platforms built policies that expire the moment they bite. In both cases, complexity became a shield and a racket — “too dynamic to question.” And yet, like those unasked-for, but paid-for, features in the cars we don’t want, AI’s design choices are too dangerous to ignore. (Like what if your brakes really are going out, and it’s not just the sensor malfunctioning?)

Ed Newton-Rex’s point — echoed in his tweets and testimony — is that the industry has mistaken velocity for virtue. He’s right. The danger is not that these systems evolve too quickly to regulate; it’s that they’re designed that way: designed to fail, just like that brake sensor. And until lawmakers recognize that speed itself is a form of governance, we’ll keep mistaking momentum for inevitability.

SB 683: California’s Quiet Rejection of the DMCA—and a Roadmap for Real AI Accountability

When Lucian Grainge drew a bright line—“UMG will not do business with bad actors regardless of the consequences”—he did more than make a corporate policy statement.  He threw down a moral challenge to an entire industry: choose creators or choose exploitation.

California’s recently passed SB 683 does not shout as loudly, but it answers the same call. By refusing to copy Washington’s bureaucratic NO FAKES Act and its DMCA-style “notice-and-takedown” maze, SB 683 quietly re-asserts a lost principle: rights are vindicated through courts and accountability, not compliance portals.

What SB 683 actually does

SB 683 amends California Civil Code § 3344, the state’s right-of-publicity statute for living persons, to make injunctive relief real and fast.  If someone’s name, voice, or likeness is exploited without consent, a court can now issue a temporary restraining order or preliminary injunction.  If the order is granted without notice, the defendant must comply within two business days.  

That sounds procedural—and it is—but it matters. SB 683 replaces “send an email to a platform” with “go to a judge.”   It converts moral outrage into enforceable law.

The deeper signal: a break from the DMCA’s bureaucracy

For twenty-seven years, the Digital Millennium Copyright Act (DMCA) has governed online infringement through a privatized system of takedown notices, counter-notices, and platform safe harbors. When it was passed, Silicon Valley came alive with free-riding schemes to get around copyright liability that beat a path to Grokster’s door.

But the DMCA was built for a dial-up internet and has aged about as gracefully as a boil on a cow’s butt.

The Copyright Office’s 2020 Section 512 Study concluded that whatever Solomonic balance Congress thought it was striking has completely collapsed:

“[T]he volume of notices demonstrates that the notice-and-takedown system does not effectively remove infringing content from the internet; it is, at best, a game of whack-a-mole.”

“Congress’ original intended balance has been tilted askew.”  

“Rightsholders report notice-and-takedown is burdensome and ineffective.”  

“Judicial interpretations have wrenched the process out of alignment with Congress’ intentions.” 
 
“Rising notice volume can only indicate that the system is not working.”  

Unsurprisingly, the Office concluded that “Roughly speaking, many OSPs spoke of section 512 as being a success, enabling them to [free ride and] grow exponentially and serve the public without facing debilitating lawsuits [or one might say, paying the freight]. Rightsholders reported a markedly different perspective, noting grave concerns with the ability of individual creators to meaningfully use the section 512 system to address copyright infringement and the “whack-a-mole” problem of infringing content re-appearing after being taken down. Based upon its own analysis of the present effectiveness of section 512, the Office has concluded that Congress’ original intended balance has been tilted askew.”

Which is a genteel way of saying the DMCA is an abject failure for creators and halcyon days for venture-backed online service providers. So why would anyone who cared about creators want to continue that absurd process?

SB 683 flips that logic. Instead of creating bureaucracy and rewarding the one who can wait out the last notice standing, it demands obedience to law.  Instead of deferring to internal “trust and safety” departments, it puts a judge back in the loop. That’s a cultural and legal break—a small step, but in the right direction.

The NO FAKES Act: déjà vu all over again

Washington’s proposed NO FAKES Act is designed to protect individuals from AI-generated digital replicas, which is great. However, NO FAKES recreates the truly awful DMCA’s failed architecture: a federal registry of “designated agents,” a complex notice-and-takedown workflow, and a new safe-harbor regime based on “good-faith compliance.” You know, notice and notice and notice and notice and notice and notice and…..

If NO FAKES passes, platforms like Google would again hold all the procedural cards: largely ignore notices until they’re convenient, claim “good faith,” and continue monetizing AI-generated impersonations. In other words, it gives the platforms exactly what they wanted, because delay is the point. I seriously doubt that the Congress of 1998 thought that its precious DMCA would be turned into a not-so-funny joke on artists, and I do remember Congressman Howard Berman (one of the House managers for the DMCA) looking like he was going to throw up during the SOPA hearings when he found out how many millions of DMCA notices YouTube alone receives. So why would we want to make the same mistake again, thinking we’ll have a different outcome? With the same platforms now richer beyond category? Who could possibly defend such garbage as anything but a colossal mistake?

The approach of SB 683 is, by contrast, the opposite of NO FAKES. It tells creators: you don’t need to find the right form—you need to find a judge.  It tells platforms: if a court says take it down, you have two days, not two months of emails, BS counter notices and a bad case of learned helplessness.  True, litigation is more costly than sending a DMCA notice, but litigation is far more likely to be effective in keeping infringing material down and will not become a faux “license” like DMCA has become.  

The DMCA heralded twenty-seven years of normalizing massive and burdensome copyright infringement and raising generations of lawyers to defend the thievery while Big Tech scooped up free rider rents that they then used for anti-creator lobbying around the world. It should be entirely unsurprising that all of that litigation and lobbying has led us to the current existential crisis.

Lucian Grainge’s throw-down and the emerging fault line

When Mr. Grainge spoke, he wasn’t just defending Universal’s catalog; he was drawing a perimeter against the normalization of AI exploitation, and refusing to buy into an even further extension of “permissionless innovation.”

Universal’s position aligns with what California just did. While Congress toys with a federal opt-out regime for AI impersonations, Sacramento quietly passed a law grounded in judicial enforcement and personal rights.  It’s not perfect, but it’s a rejection of the “catch me if you can” ethos that has defined Silicon Valley’s relationship with artists for decades.

A job for the Attorney General

SB 683 leaves enforcement to private litigants, but the scale of AI exploitation demands public enforcement under the authority of the State.  California’s Attorney General should have explicit power to pursue pattern-or-practice actions against companies that:

– Manufacture or distribute AI-generated impersonations of deceased performers (like Sora 2’s synthetic videos).
– Monetize those impersonations through advertising or subscription revenue (like YouTube does right now with the Sora videos).
– Repackage deepfake content as “user-generated” to avoid responsibility.

Such conduct isn’t innovation—it’s unfair competition under California law. AG actions could deliver injunctions, penalties, and restitution far faster than piecemeal suits. And as readers know, I love a good RICO, so let’s put it out there that the AG should consider prosecuting the AI cabal with its interlocking investments under Penal Code §§ 186–186.8, known as the California Control of Profits of Organized Crime Act (CCPOCA) (h/t Seeking Alpha).

While AI platforms complain of “burdensome” and “unproductive” litigation, that’s simply not true of enterprises like the AI cabal—litigation is exactly what was required in order to reveal the truth about massive piracy powering the circular AI bubble economy. Litigation has revealed that the scale of infringement by AI platforms like Anthropic and Meta is so vast that private damages are meaningless. It is increasingly clear these companies are not alone—they have relied on pirate libraries and torrent ecosystems to ingest millions of works across every creative category. Rather than whistle past the graveyard while these sites flourish, government must confront its failure to enforce basic property rights. When theft becomes systemic, private remedies collapse, and enforcement becomes a matter for the state. Even Anthropic’s $1.5 billion settlement feels hollow because the crime is so immense. Not just because statutory damages in the US were also established in 1999 to confront…CD ripping.

AI regulation as the moment to fix the DMCA

The coming wave of AI legislation represents the first genuine opportunity in a generation to rewrite the online liability playbook.  AI and the DMCA cannot peacefully coexist—platforms will always choose whichever regime helps them keep the money.

If AI regulation inherits the DMCA’s safe harbors, nothing changes. Instead, lawmakers should take the SB 683 cue:
– Restore judicial enforcement.  
– Tie AI liability to commercial benefit. 
– Require provenance, not paperwork.  
– Authorize public enforcement.

The living–deceased gap: California’s unfinished business


SB 683 improves enforcement for living persons, but California’s § 3344.1 already protects deceased individuals against digital replicas.  That creates an odd inversion: John Coltrane’s estate can challenge an AI-generated “Coltrane tone,” but a living jazz artist cannot.   The Legislature should align the two statutes so the living and the dead share the same digital dignity.

Why this matters now

Platforms like YouTube host and monetize videos generated by AI systems such as Sora, depicting deceased performers in fake performances.  If regulators continue to rely on notice-and-takedown, those platforms will never face real risk.   They’ll simply process the takedown, re-serve the content through another channel, and cash another check.

The philosophical pivot

The DMCA taught the world that process can replace principle. SB 683 quietly reverses that lesson.  It says: a person’s identity is not an API, and enforcement should not depend on how quickly you fill out a form.

In the coming fight over AI and creative rights, that distinction matters. California’s experiment in court-centered enforcement could become the model for the next generation of digital law—where substance defeats procedure, and accountability outlives automation.

SB 683 is not a revolution, but it’s a reorientation. It abandons the DMCA’s failed paperwork culture and points toward a world where AI accountability and creator rights converge under the rule of law.

If the federal government insists on doubling down with the NO FAKES Act’s national “opt-out” registry, California may once again find itself leading by quiet example: rights first, bureaucracy last.

Google’s “AI Overviews” Draws a Formal Complaint in Germany under the EU Digital Services Act

A coalition of NGOs, media associations, and publishers in Germany has filed a formal Digital Services Act (DSA) complaint against Google’s AI Overviews, arguing the feature diverts traffic and revenue from independent media, increases misinformation risks via opaque systems, and threatens media plurality. Under the DSA, violations can carry fines up to 6% of global revenue—a potentially multibillion-dollar exposure.

The complaint claims that AI Overviews answer users’ queries inside Google, short-circuiting click-throughs to the original sources and starving publishers of ad and subscription revenues. Because users can’t see how answers are generated or verified, the coalition warns of heightened misinformation risk and erosion of democratic discourse.

Why the Digital Services Act Matters

As I understand the DSA, the news publishers can (1) lodge a complaint with their national Digital Services Coordinator alleging a platform’s DSA breach (which triggers regulatory scrutiny); (2) use the platform dispute tools: first the internal complaint-handling system, then certified out-of-court dispute settlement for moderation/search-display disputes—often faster practical relief; (3) sue for damages in national courts for losses caused by a provider’s DSA infringement (Art. 54); or (4) act collectively by mandating a qualified entity or through the EU Representative Actions Directive to seek injunctions/redress (kind of like class actions in the US but more limited in scope). 

Under the DSA, Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) are services with more than 45 million EU users (approximately 10% of the population). Once formally designated by the European Commission, they face stricter obligations than smaller platforms: conducting annual systemic risk assessments, implementing mitigation measures, submitting to independent audits, providing data access to researchers, and ensuring transparency in recommender systems and advertising. Enforcement is centralized at the Commission, with penalties up to 6% of global revenue. This matters because VLOPs like Google, Meta, and TikTok must alter core design choices that directly affect media visibility and revenue. In parallel, the European Commission and the DSCs retain powerful public-enforcement tools against Very Large Online Platforms.

As a designated Very Large Online Platform, Google faces strict duties to mitigate systemic risks, provide algorithmic transparency, and avoid conduct that undermines media pluralism. The complaint contends AI Overviews violate these requirements by replacing outbound links with Google’s own synthesized answers.

The U.S. Angle: The Penske Lawsuit

A Major Publisher Has Sued Google in Federal Court Over AI Overviews

On Sept. 14, 2025, Penske Media (Rolling Stone, Billboard, Variety) sued Google in D.C. federal court, alleging AI Overviews repurpose its journalism, depress clicks, and damage revenue—marking the first lawsuit by a major U.S. publisher aimed squarely at AI Overviews. The claims include a training-use allegation: that Google enriched itself by using PMC’s works to train and ground the models powering Gemini/AI Overviews, for which Penske seeks restitution and disgorgement. Penske also argues that Google abuses its search monopoly to coerce publishers: indexing is effectively tied to letting Google (a) republish/summarize their material in AI Overviews, Featured Snippets, and AI Mode, and (b) use their works to train Google’s LLMs—reducing click-through and revenues while letting Google expand its monopoly into online publishing. 

Trade Groups Urged FTC/DOJ Action

The News/Media Alliance had previously asked the FTC and DOJ to investigate AI Overviews for diverting traffic and ‘misappropriating’ publishers’ investments, calling for enforcement under FTC Act §5 and Sherman Act §2.

Data Showing Traffic Harm

Industry analyses indicate material referral declines tied to AI Overviews. Digital Content Next reports Google Search referrals down 1%–25% for most member publishers over recent weeks; Digiday pegs the impact at as much as 25%. The trend feeds a broader ‘Google Zero’ concern—zero-click results displacing publisher visits.

Why Europe vs. U.S. Paths Differ

The EU/DSA offers a procedural path to assess systemic risk and platform design choices like AI Overviews and levy platform-wide remedies and fines. In the U.S., the fight currently runs through private litigation (Penske) and competition/consumer-protection advocacy at FTC/DOJ, where enforcement tools differ and take longer to mobilize.

RAG vs. Training Data Issues

AI Overviews are best understood as a Retrieval-Augmented Generation (RAG) issue. Readers will recall that RAG is probably the most direct example of verbatim copying in AI outputs. The harms arise because Google as middleman retrieves live publisher content and synthesizes it into an answer inside the Search Engine Results Page (SERP), reducing traffic to the sources. This is distinct from the training-data lawsuits (Kadrey, Bartz) that allege unlawful ingestion of works during model pretraining.
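To make that mechanic concrete, here is a minimal, purely illustrative Python sketch of a RAG-style answer box. Everything in it is an assumption for illustration: the function names, the toy keyword-overlap retrieval, and the string-stitching summarizer are mine, not Google’s actual system. The structural point it shows is that the answer is assembled from the publishers’ own live text and displayed in place of a click-through.

```python
# Illustrative RAG sketch (hypothetical names; not Google's implementation).
from dataclasses import dataclass

@dataclass
class Document:
    url: str
    title: str
    text: str

def fetch_top_results(query: str, index: list[Document], k: int = 3) -> list[Document]:
    """Toy retrieval step: rank indexed publisher pages by naive keyword overlap."""
    terms = set(query.lower().split())
    ranked = sorted(index, key=lambda d: len(terms & set(d.text.lower().split())), reverse=True)
    return ranked[:k]

def summarize(query: str, sources: list[Document]) -> str:
    """Toy generation step: stitch retrieved publisher text into an on-page answer.
    A production system would feed the retrieved passages into an LLM prompt instead."""
    snippets = " ".join(doc.text[:200] for doc in sources)
    return f"Answer to '{query}': {snippets}"

def answer_in_serp(query: str, index: list[Document]) -> dict:
    """The answer is composed from live publisher content and shown inside the
    results page, so the user may never click through to the cited sources."""
    sources = fetch_top_results(query, index)
    return {"answer": summarize(query, sources), "cited_urls": [d.url for d in sources]}

if __name__ == "__main__":
    corpus = [Document("https://example-publisher.com/story", "Example story",
                       "Publisher reporting that an AI answer box will summarize in place.")]
    print(answer_in_serp("what happened in the example story", corpus))
```

However the generation step is implemented, the economics are the same: the value of the underlying reporting is captured on the results page rather than on the publisher’s site, which is exactly the harm the RAG framing isolates.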

Kadrey: Indirect Market Harm

A RAG case like Penske’s could also be characterized as indirect market harm. Judge Chhabria’s ruling in Kadrey under U.S. law highlights that market harm isn’t limited to direct substitution for fair use purposes. Factor 4 in fair use analysis includes foreclosure of licensing and derivative markets. For AI/search, that means reduced referrals depress ad and subscription revenue, while widespread zero-click synthesis may foreclose an emerging licensing market for summaries and excerpts. Evidence of harm includes before/after referral data, revenue deltas, and qualitative harms like brand erasure and loss of attribution. Remedies could include more prominent linking, revenue-sharing, compliance with robots/opt-outs, and provenance disclosures.

I like them RAG cases.

The Essential Issue is Similar in EU and US

Whether in Brussels or Washington, the core dispute is very similar: Who captures the value of journalism in an AI-mediated search world? Germany’s DSA complaint and Penske’s U.S. lawsuit frame twin fronts of a larger conflict—one about control of distribution, payment for content, and the future of a pluralistic press. Not to mention the usual free-riding and competition issues swirling around Google as it extracts rents by inserting itself into places it’s not wanted.

How an AI Moratorium Would Preclude Penske’s Lawsuit

Many “AI moratorium” proposals function as broad safe harbors with preemption. A moratorium to benefit AI and pick national champions was the subject of an IP Subcommittee hearing on September 18. If Congress enacted a moratorium that (1) expressly immunizes core AI practices (training, grounding, and SERP-level summaries), (2) preempts overlapping state claims, and (3) channels disputes into agency processes with exclusive public enforcement, it would effectively close the courthouse door to private suits like Penske and make the US more like Europe without the enforcement apparatus. Here’s how:

– Express immunity for covered conduct. If the statute declares that using publicly available content for training and for retrieval-augmented summaries in search is lawful during the moratorium, Penske’s core theory (RAG substitution plus training use) loses its predicate.
– No private right of action / exclusive public enforcement. Limiting enforcement to the FTC/DOJ (or a designated tech regulator) would bar private plaintiffs from seeking damages or injunctions over covered AI conduct.
– Antitrust carve-out or agency preclearance. Congress could provide that covered AI practices (AI Overviews, featured snippets powered by generative models, training/grounding on public web content) cannot form the basis for Sherman/Clayton liability during the moratorium, or must first be reviewed by the agency—undercutting Penske’s §1/§2 counts.
– Primary-jurisdiction plus statutory stay. Requiring first resort to the agency with a mandatory stay of court actions would pause (or dismiss) Penske until the regulator acts.
– Preemption of state-law theories. A preemption clause would sweep in state unjust-enrichment and consumer-protection claims that parallel the covered AI practices.
– Limits on injunctive relief. Barring courts from enjoining covered AI features (e.g., SERP-level summaries) and reserving design changes to the agency would eliminate the centerpiece remedy Penske seeks.
– Potential retroactive shield. If drafted to apply to past conduct, a moratorium could moot pending suits by deeming prior training/RAG uses compliant for the moratorium period.

A moratorium with safe harbors, preemption, and agency-first review would either stay, gut, or bar Penske’s antitrust and unjust-enrichment claims—reframing the dispute as a regulatory matter rather than a private lawsuit. Want to bet that White House AI Viceroy David Sacks will be sitting in judgment?

Missile Gap, Again: Big Tech’s Private Power vs. the Public Grid

If we let a hyped “AI gap” dictate land and energy policy, we’ll privatize essential infrastructure and socialize the fallout.

Every now and then, it’s important to focus on what our alleged partners in music distribution are up to, because the reality is they’re not record people—their real goal is getting their hands on the investment we’ve all made in helping compelling artists find and keep an audience. And when those same CEOs use the profits from our work to pivot to “defense tech” or “dual use” AI (civilian and military), we should hear what that euphemism really means: killing machines.

Daniel Ek is backing battlefield-AI ventures; Eric Schmidt has spent years bankrolling and lobbying for the militarization of AI while shaping the policies that green-light it. This is what happens when we get in business with people who don’t share our values: the capital, data, and social license harvested from culture gets recycled into systems built to find, fix, and finish human beings. As Bob Dylan put it in Masters of War, “You fasten the triggers for the others to fire.” These deals aren’t value-neutral—they launder credibility from art into combat. If that’s the future on offer, our first duty is to say so plainly—and refuse to be complicit.

The same AI outfits that for decades have refused to license or begrudgingly licensed the culture they ingest are now muscling into the hard stuff—power grids, water systems, and aquifers—wherever governments are desperate to win their investment. Think bespoke substations, “islanded” microgrids dedicated to single corporate users, priority interconnects, and high-volume water draws baked into “innovation” deals. It’s happening globally, but nowhere more aggressively than in the U.S., where policy and permitting are being bent toward AI-first infrastructure—thanks in no small part to Silicon Valley’s White House “AI viceroy,” David Sacks. If we don’t demand accountability at the point of data and at the point of energy and water, we’ll wake up to AI that not only steals our work but also commandeers our utilities. Just like Senator Wyden accomplished for Oregon.

These aren’t pop-up server farms; they’re decades-long fixtures. Substations and transmission are built on 30–50-year horizons, generation assets run 20–60 years, with multi-decade PPAs, water rights, and recorded easements that outlive elections. Once steel’s in the ground, rate designs and priority interconnects get contractually sticky. Unlike the Internet fights of the last 25 years—where you could force a license for what travels through the pipe—this AI footprint binds communities essentially forever. So we will be stuck for generations with the decisions we make today.

Because China: The New Missile Gap

There’s a familiar ring to the way America is now talking about AI, energy, and federal land use (and likely expropriation). In the 1950s Cold War era, politicians sold the country on a “missile gap” that later proved largely mythical, yet it hardened budgets, doctrine, and concrete in ways that lasted decades.

Today’s version is the “AI gap”—a story that says China is sprinting on AI, so we must pave faster, permit faster, and relax old guardrails to keep up. Of course, this diverts attention from China’s advances in directed-energy weapons and hypersonic missiles, which are here today, will play havoc on an actual battlefield, and to which the West has no counter. But let’s not talk about those (at least not until we lose a carrier in the South China Sea); let’s worry about AI, because that will make Silicon Valley even richer.

Watch any interview of executives from the frontier AI labs and within minutes they will hit their “because China” talking point. National security and competitiveness are real concerns, but they don’t justify blank checks and Constitutional-level safe harbors. The missile-gap analogy is useful because it reminds us how compelling threat propaganda can swamp due diligence. We can support strategic compute and energy without letting an AI-gap story permanently bulldoze open space and saddle communities with the bill.

Energy Haves (Them) and Have-Nots (Everyone Else)

The result is a two-track energy state, AKA hell on earth. On Track A, the frontier AI lab hyperscalers like Google, Meta, Microsoft, OpenAI & Co. build company-town infrastructure for AI—on-site generation and microgrids that sit outside everyone else’s electric grid, plus dedicated interties and other interconnections between electric operators—often on or near federal land. On Track B, the public grid carries everyone else: homes, hospitals, small manufacturers, water districts. As President Trump said at the White House AI dinner this week, Track A promises to “self-supply,” but even self-supplied campuses still lean on the public grid for backup and monetization, and they compete for scarce interconnection headroom.

President Trump is allowing the hyperscalers to get permanent rights to build on massive parcels of government land, including private utilities to power the massive electricity and water-cooling needs of AI data centers. Strangely enough, this continues a Biden policy under an executive order issued late in the Biden presidency that Trump now takes credit for, and it is a 180 from America First, according to people who ought to know, like Steve Bannon. And yet it is happening.

White House Dinners are Old News in Silicon Valley

If someone says “AI labs will build their own utilities on federal land,” that land comes in two flavors: Department of Defense (now War Department) or Department of Energy sites, and land owned by the Bureau of Land Management (BLM). These are vastly different categories. DoD/DOE sites such as Idaho National Laboratory, the Oak Ridge Reservation, the Paducah GDP, and the Savannah River Site imply behind-the-fence, mission-tied microgrids with limited public friction; BLM land implies public-land rights-of-way and multi-use trade-offs (grazing, wildlife, cultural), longer timelines, and grid-export dynamics with potential “curtailment,” which means prioritizing electricity for the hyperscalers. Take Idaho National Laboratory (INL), one of the four AI/data-center sites: INL’s own environmental reports state that about 60% of the INL site is open to livestock grazing, with monitoring of grazing impacts on habitat. That’s likely over.

This is about how we power anything not controlled by a handful of firms. And it’s about the land footprint: fenced solar yards, switchyards, substations, massive transmission lines, wider roads, laydown areas. On BLM range and other open spaces, those facilities translate into real, local losses—grazable acres inside fences, stock trails detoured, range improvements relocated.

What the two tracks really do

Track A solves a business problem: compute growth outpacing the public grid’s construction cycle. By putting electrons next to servers (literally), operators avoid waiting years for a substation or a 230‑kV line. Microgrids provide islanding during emergencies and participation in wholesale markets when connected. It’s nimble, and it works—for the operator.

Track B inherits the volatility: planners must consider a surge of large loads that may or may not appear, while maintaining reliability for everyone else. Capacity margins tighten; transmission projects get reprioritized; retail rates absorb the externalities. When utilities plan for speculative loads and those projects cancel or slide, the region can be left with stranded costs or deferred maintenance elsewhere.

The land squeeze we’re not counting

Public agencies tout gigawatts permitted. They rarely publish the acreage fenced, the AUMs (animal unit months of grazing) affected, or the water commitments. Utility-scale solar commonly pencils out to something on the order of 5–7 acres per megawatt of capacity depending on layout and topography. At that ratio, a single gigawatt occupies thousands of acres—acres that, unlike wind, often can’t be grazed once panels and security fences go in. Even where grazing is technically possible, access roads, laydown yards, and vegetation control impose real costs on neighboring users.
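To make that acreage arithmetic concrete, here is a minimal back-of-the-envelope sketch, assuming the 5–7 acres-per-megawatt rule of thumb above and a hypothetical 1 GW campus; the numbers are illustrative only, not a siting estimate.

```python
# Back-of-the-envelope land math for utility-scale solar,
# using the 5-7 acres-per-MW rule of thumb cited above.
# All figures are illustrative assumptions, not siting data.

ACRES_PER_MW_LOW = 5    # low end of the rule of thumb
ACRES_PER_MW_HIGH = 7   # high end of the rule of thumb

def fenced_acreage(capacity_mw: float) -> tuple[float, float]:
    """Return the (low, high) fenced-acreage range implied by the rule of thumb."""
    return capacity_mw * ACRES_PER_MW_LOW, capacity_mw * ACRES_PER_MW_HIGH

# A hypothetical 1 GW (1,000 MW) solar build serving an AI campus:
low, high = fenced_acreage(1_000)
print(f"1 GW at 5-7 acres/MW: roughly {low:,.0f}-{high:,.0f} fenced acres")
# -> roughly 5,000-7,000 acres, before access roads, laydown yards, and buffers
```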

Wind is more compatible with grazing, but it isn’t footprint-free. Pads, roads, and safety buffers fragment pasture. Transmission to move that energy still needs corridors—and those corridors cross someone’s water lines and gates. Multiple use is a principle; on the ground it’s a schedule, a map, and a cost. Just for reference, the rule of thumb of roughly 5–7 acres per megawatt of direct current (“MWdc”) covers the arrays themselves; access roads, laydown, and buffers extend beyond the fence line.

We are going through this right now in my part of the world. Central Texas is bracing for a wave of new high-voltage transmission. These are 345-kV corridors cutting (literally) across the Hill Country to serve load growth from chip fabricators and data centers and to tie in distant generation (big lines are a must once you commit to the load). Ranchers and small towns are pushing back hard: eminent-domain threats, devalued land, scarred vistas, live-oak and wildlife impacts, and routes that ignore existing roads and utility corridors. Packed hearings and county resolutions demand co-location, undergrounding studies, and real alternatives—not “pick a line on a map” after the deal is done. The fight isn’t against reliability; it’s against a planning process that externalizes costs onto farmers, ranchers, other landowners, and working landscapes.

Texas’s latest SB 6 is the case study. After a wave of ultra-large AI/data-center loads, frontier labs and their allies pushed lawmakers to rewrite reliability rules so the grid would accommodate them. SB 6 empowers the Texas grid operator ERCOT to police new mega-loads—through emergency curtailment and/or firm-backup requirements—effectively reshaping interconnection priorities and shifting reliability risk and costs onto everyone else. “Everyone else” means you and me, kind of like the “full faith and credit of the US.” SB 6 was signed into law in June 2025 by Gov. Greg Abbott and is now in effect, directing the Public Utility Commission of Texas (PUCT) and ERCOT to write new rules for very large loads (e.g., data centers), including curtailment during emergencies and added interconnection and backup-power requirements. So the devil will be in the details, and someone needs to put on the whole armor of God, so to speak.

The phantom problem

Another quiet driver of bad outcomes is phantom demand: developers filing duplicative load or interconnection requests to keep options open. On paper, it looks like a tidal wave; in practice, only a slice gets built. If every inquiry triggers a utility study, a route survey, or a placeholder in a capital plan, neighborhoods can end up paying for capacity that never comes online to serve them.
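To illustrate how duplicative filings distort planning, here is a small sketch with made-up numbers: assume each project files at several utilities to keep its options open and only a fraction of projects ever gets built. The project count, filing multiplier, and build rate below are hypothetical assumptions, not data.

```python
# Illustration of "phantom demand": duplicative interconnection filings
# make the queue look far larger than the load that will ever materialize.
# All numbers below are hypothetical, chosen only to show the mechanism.

projects = 10               # distinct data-center projects actually planned
filings_per_project = 3     # each project files at 3 utilities to keep options open
mw_per_project = 500        # nameplate load requested per project
build_rate = 0.3            # assume only ~30% of projects are ever built

queued_mw = projects * filings_per_project * mw_per_project
built_mw = projects * build_rate * mw_per_project

print(f"Load on paper (queue): {queued_mw:,} MW")     # 15,000 MW
print(f"Load actually built:   {built_mw:,.0f} MW")   # 1,500 MW
print(f"Paper-to-steel ratio:  {queued_mw / built_mw:.0f}x")
```

Whatever the real attrition rate turns out to be, the point stands: if studies, route surveys, and capital-plan placeholders are triggered by the paper number rather than the contracted number, ratepayers fund the gap.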

A better deal for the public and the range

Prioritize already‑disturbed lands—industrial parks, mines, reservoirs, existing corridors—before greenfield BLM range land. Where greenfield is unavoidable, set a no‑net‑loss goal for AUMs and require real compensation and repair SLAs for affected range improvements.

Milestone gating for large loads: require non‑refundable deposits, binding site control, and equipment milestones before a project can hold scarce interconnection capacity or trigger grid upgrades. Count only contracted loads in official forecasts; publish scenario bands so rate cases aren’t built on hype.

Common‑corridor rules: make developers prove they can’t use existing roads or rights‑of‑way before claiming new footprints. Where fencing is required, use wildlife‑friendly designs and commit to seasonal gates that preserve stock movement.

Public equity for public land: if a campus wins accelerated federal siting and long‑term locational advantage, tie that to a public revenue share or capacity rights that directly benefit local ratepayers and counties. Public land should deliver public returns, not just private moats.

Grid‑help obligations: if a private microgrid islands to protect its own uptime, it should also help the grid when connected. Enroll batteries for frequency and reserve services; commit to emergency export; and pay a fair share of fixed transmission costs instead of shifting them onto households.

Or you could do what the Dutch and Irish governments proposed under the guise of climate change regulations—kill all the cattle. I can tell you right now that that ain’t gonna happen in Texas.

Will We Get Fooled Again?

If we let a hyped, latter-day “missile gap” set the terms, we’ll lock in a two-track energy state: private power for those who can afford to build it, a more fragile and more expensive public grid for everyone else, and open spaces converted into permanent infrastructure at a discount. The alternative is straightforward: price land and grid externalities honestly, gate speculative demand, require public returns on public siting, and design corridor rules that protect working landscapes. That’s not anti-AI; it’s pro-public. Everything not controlled by Big Tech will be better for it.

Let’s be clear: the data-center onslaught will be financed by the taxpayer one way or another—either as direct public outlays or through sweetheart “leases” of federal land to build private utilities behind the fence for the richest corporations in commercial history. After all the goodies that Trump is handing to the AI platforms, let’s not have any loose talk of “selling” excess electricity to the public; that price should be zero. Even so, the sales pitch about “excess” electricity they’ll generously sell back to the grid is a fantasy; when margins tighten, they’ll throttle output to cut costs, not volunteer philanthropy. Picture it: do you really think these firms won’t optimize for themselves first and last? We’ll be left with the bills, the land impacts, and a grid redesigned around their needs. Ask yourself—what in the last 25 years of Big Tech behavior says “trustworthy” to you?