The White House’s latest AI framework reads like a familiar story dressed in new clothes: we must move fast, avoid “overregulation,” and ensure that the United States “wins” the AI race—because China.
That framing is not new. It is, in fact, a modern version of the Thucydides Trap: the idea that when a rising power threatens to displace an established one, conflict—economic, political, or otherwise—becomes more likely. But what is striking here is not the invocation of competition. It’s how narrowly that competition is defined.
The framework implicitly treats AI dominance as a function of compute, capital, and model scale. Build bigger models faster, feed them more data, and ensure that domestic firms face as few constraints as possible. In that telling, creators, rights, and consent become secondary considerations—at best friction, at worst obstacles.
But that is a profound misread of where U.S. advantage actually lies.
American leadership has never been just about scale. It has been about legitimacy—the ability to build systems that other countries, companies, and individuals trust enough to adopt. That is the essence of soft power. And soft power is not generated by extraction; it is generated by rules that are perceived as fair.
When U.S. policy signals that training on creative works without meaningful consent is acceptable—or even necessary to “win”—it risks trading long-term legitimacy for short-term acceleration. That is a dangerous bargain. It tells the world that American AI leadership is built not on innovation alone, but on the uncompensated appropriation of global cultural and informational resources.
Other jurisdictions are already responding. The EU is experimenting with transparency mandates. Rights holders globally are pushing for enforceable consent regimes. Even countries that want to encourage AI development are increasingly wary of frameworks that look like data extraction at scale without accountability.
This is where the Thucydides analogy breaks down—or at least becomes more complicated. The real risk is not simply that China catches up technologically. It is that the United States, in trying to outrun that possibility, undermines the normative foundations of its own leadership.
Soft power erosion is not dramatic. It doesn’t announce itself with a headline. It accumulates quietly: in trade negotiations, in regulatory divergence, in the willingness of other countries to align—or not align—with U.S. standards. Over time, that erosion can matter more than any benchmark score or model release.
There is another path. The United States could lead by insisting that AI development is compatible with consent, compensation, and provenance. It could treat creators not as inputs to be harvested, but as stakeholders in a system that depends on their work. It could build infrastructure—technical and legal—that makes those principles operational, not aspirational.
That approach may look slower in the short term. It may impose costs that competitors are willing to ignore. But it is also how durable leadership is built.
Because in the long run, the question is not just who builds the most powerful models. It is who builds systems that the rest of the world is willing to trust.
And that is a competition the United States cannot afford to lose.
Nate Garhart, writing in Reuters, analyzes Perplexity AI’s novel—some might say bizarre—legal defense in copyright suits filed by the New York Times and the Chicago Tribune in December 2025. Rather than relying primarily on fair use, the typical defense in AI infringement cases, Perplexity instead argues it lacked “volitional conduct” sufficient for direct copyright infringement, contending that it did not “make” the infringing copies in a legally relevant sense. The defense in Perplexity’s motion to dismiss draws on the Second Circuit’s 2008 Cartoon Network v. CSC Holdings decision, where a DVR service was not held directly liable because the user, not the service, initiated the recording of each specific work. Sound familiar? That’s one straight outta 1999. You know, the technology made me do it.
The article explains the strategic logic: eliminating direct infringement would be meaningful, even if secondary liability theories survive. However, Mr. Garhart is correctly skeptical that the defense will succeed, at least at the motion to dismiss stage. A key difficulty for Perplexity is that its system involves a far more complicated causal chain than a mere DVR: it crawls, scrapes, and copies paywalled articles, indexes them, stores them, and generates output that may track the original expression—each step reflecting deliberate system design. Hold that thought; we’ll come back to it shortly. The newspapers’ attorney, Steven Lieberman, publicly emphasized that Perplexity “does not dispute copying The Times’s journalism from behind a paywall to deliver responses to their customers in real time.” Rut roh.
Mr. Garhart makes clear that Perplexity’s attempt to cast itself as a mere automated tool triggered by user prompts is fundamentally at odds with how generative AI systems actually work. There are several reasons why the “passive conduit” framing fails.
Failure to implement filtering or safeguards: In Grokster, neither defendant developed tools to diminish infringing activity—a failure that, while not independently sufficient, was probative of intent alongside other evidence.
Moreover, at each stage of Perplexity’s training pipeline, human decision-making is deeply embedded: engineers and researchers decide what content to tokenize, how to structure training data, and which model behaviors to reinforce or suppress through “reinforcement learning from human feedback” (RLHF) and other fine-tuning methods. The resulting system is curated by humans at multiple points in the typical workflow, from dataset selection and preprocessing to model alignment and quality control, meaning the outputs are not the product of a purely autonomous process but rather of layered, intentional design choices made by people, or more precisely, by Perplexity.
Tokenization itself is a telling example of design choice: by selecting a tokenization scheme and deciding which corpora to process (and spend scarce compute resources on), the system’s developers are making both editorial and commercial judgments about what material the model will learn from and be capable of reproducing. These upstream human choices further undercut the notion that the system is a passive conduit simply responding to downstream user prompts.
Importantly, these tokenization decisions are not made in a vacuum or for altruistic reasons—they are driven by the commercial imperative of delivering a product sufficiently useful that consumers will pay Perplexity for it, rather than paying the New York Times or other original publishers for their journalism. The economic logic is plain: the more effectively the system can ingest and repackage high-quality copyrighted content, the more valuable the product becomes to subscribers, and the more extracted revenue flows to Perplexity instead of to the creators whose work fuels the system. Sound familiar?
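To see just how concrete these upstream choices are, here is a minimal sketch of corpus and tokenizer selection using the open-source Hugging Face tokenizers library. The file name is a hypothetical stand-in for whatever corpus a developer elects to spend compute on; nothing here reflects Perplexity’s actual pipeline.

```python
# Illustrative only: training a BPE tokenizer on a hand-picked corpus.
# The corpus file name below is hypothetical; the point is that humans pick it.
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.trainers import BpeTrainer
from tokenizers.pre_tokenizers import Whitespace

tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()

# The editorial and commercial judgment happens right here: which corpora get
# tokenized, and therefore what the downstream model can learn and reproduce.
chosen_corpora = ["scraped_news_articles.txt"]  # a human-made selection
trainer = BpeTrainer(vocab_size=30_000, special_tokens=["[UNK]"])
tokenizer.train(files=chosen_corpora, trainer=trainer)
tokenizer.save("tokenizer.json")
```

Every step of a real pipeline looks like this: a person, with a budget, deciding what goes in.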
Applying Grokster’s Logic to Generative AI
Several design features of a generative AI answer engine map onto the Grokster framework, even without identical facts.
I think Mr. Garhart’s most compelling point is that a user’s query is not the kind of discrete, volitional act that broke the causal chain in Cartoon Network. A user who types “What does the New York Times say about X?” is asking a question—not selecting a specific copyrighted work and pressing “copy” as with a DVR. The Perplexity system then selects, processes, and generates expressive content drawn from copyrighted sources because that’s how it was trained. The Grokster Court rejected the notion that intermediaries like Perplexity could hide behind user-initiated actions when those intermediaries had built systems designed to facilitate infringement and had taken affirmative steps to encourage it.
Critically, the generative AI system’s response to a prompt is shaped by decisions made long before the user ever typed a query. Humans selected the training corpora, decided how text would be tokenized and encoded, fine-tuned the model’s outputs through iterative RLHF and other quality-control processes, and designed the retrieval and generation architecture. Each of these steps reflects purposeful human conduct—not the behavior of a neutral pipe. A system in which humans curate the inputs, architect the processing, and refine the outputs at multiple stages is, by any reasonable measure, an active participant in producing the allegedly infringing content.
In sum, generative AI systems are not passive conduits. They are purpose-built products whose design choices—what to crawl, what to tokenize, how to store it, when to reproduce it, and how to monetize it—reflect exactly the kind of upstream volition and deliberate architecture that both the Cartoon Network volitional conduct doctrine and the Grokster inducement framework are designed to capture. The fact that a user prompt triggers the final output does not absolve a company that engineered every step in the chain leading to that output.
Why did Perplexity scrape leading newspapers for content to feed their AI? Because it was high-value, well-written, well-edited writing. In short, they did it for the money.
They robbed the authors for the same famous reason Willie Sutton robbed the banks. Because that’s where the money is.
For the better part of a year, local opposition to AI hyperscaler data centers has been dismissed as NIMBYism—yet it is a movement that has gained real traction. Rural counties worried about water draw. Suburban communities objecting to diesel backup generators. Landowners frustrated over transmission corridors cutting through farmland and massive data centers removing large swaths of productive land in what is essentially an irreversible dedication to AI.
Local politics around data-center construction often turn on land use, water, and power. Officials welcome tax base and jobs, but residents worry about noise, transmission lines, diesel backup generators, and groundwater consumption. Zoning boards and county commissioners become battlegrounds where developers promise infrastructure upgrades and community benefits while opponents push for setbacks, environmental review, and limits on incentives. Utilities and grid operators weigh reliability and cost shifting, especially where hyperscale demand requires new substations or high-voltage lines. Rural areas face pressure from land aggregation and fast-track permitting, while cities debate transparency, property-tax abatements, and whether long-term public costs outweigh near-term economic gains.
But the politics just escalated.
According to multiple reports, President Trump is preparing to highlight “ratepayer protection pledges” from major tech companies during his State of the Union address tonight — urging AI and cloud companies to publicly commit that residential electricity customers will not bear the cost of new data-center load.
That confirms concerns raised by Trump advisor Peter Navarro over the last couple of months, and it is not a small signal.
For months, grassroots organizers have warned that hyperscale AI buildout could increase local electricity rates, force costly new transmission lines, accelerate natural gas plant approvals, and strain already fragile regional grids. And then there are the nuclear issues, as hyperscalers openly promote new nuclear plants. Until now, much of the policy conversation has centered on growth and competitiveness, you know, because China. The Trump pivot reframes the issue around consumer protection — closely tracking the concerns raised by grassroots opponents.
What the White House Is Signaling
The reported approach stops short of imposing a formal price cap on electricity or shifting costs to taxpayers. Instead, policymakers are signaling that large technology firms — particularly hyperscale operators — should voluntarily shoulder the marginal power costs created by their own demand growth.
In practice, this means encouraging companies such as Microsoft, Alphabet, Amazon, and OpenAI to fund grid upgrades, transmission extensions, standby generation, and other infrastructure required to serve new data-center loads, rather than socializing those costs across ordinary ratepayers. The political logic is straightforward: if hyperscale demand is driving billions in new utility investment, the beneficiaries should internalize the expense. The strategy relies on negotiated commitments, public-utility leverage, and reputational pressure rather than mandates, aiming to avoid rate shocks while still enabling continued digital-infrastructure expansion.
We’ll see.
In parallel, the administration has backed efforts to expand electricity supply in regions experiencing sharp data-center load growth, pairing political support with regulatory acceleration. In practice, this has meant encouraging grid operators to run emergency or supplemental capacity auctions—for example, in markets like PJM or ERCOT—to secure short-lead-time generation such as gas peaker plants, temporary turbines, and large-scale battery storage. Policymakers have also supported fast-track permitting and uprates at existing nuclear and natural-gas facilities, along with expedited approvals for new combined-cycle plants where reliability risks are rising. In some areas, utilities are advancing transmission expansions and demand-response programs to bridge near-term gaps. The goal is to bring firm capacity online quickly enough to keep pace with AI-driven electricity demand without triggering reliability shortfalls or price spikes.
Supposedly, Trump’s message is that if data centers drive the demand spike, data centers should fund the solution. That makes sense, but count me as a skeptic as to whether this will actually happen, or whether hyperscalers will come to the taxpayer. You know, because China. But let’s sell China Nvidia chips.
Why This Matters for the Grassroots Fight
Grassroots opposition to large-scale data centers has crystallized around three increasingly defined pillars — each with its own constituency and political leverage.
1. Land Use and Community Character. Residents object to the scale and industrial footprint of hyperscale campuses: multi-building complexes, 24/7 lighting, diesel backup generators, high-security fencing, and new high-voltage transmission corridors. In rural counties, projects can involve the quiet aggregation of farmland followed by rezoning from agricultural to industrial use. In suburban areas, neighbors focus on setbacks, noise from cooling systems, and visual impact. Planning and zoning hearings have become flashpoints where local control collides with state-level economic development priorities.
2. Environmental and Water Stress. Data centers are energy- and water-intensive facilities. In water-constrained regions, evaporative cooling systems raise concerns about aquifer drawdown and drought resilience. Environmental advocates question lifecycle emissions from new gas-fired generation built to serve AI load, as well as the cumulative impact of substations, transmission lines, and backup generators. Even where companies pledge renewable procurement, critics argue that incremental demand can still drive fossil fuel buildout in constrained grids.
3. Electricity Costs and Grid Strain. The most politically volatile pillar is ratepayer impact. Local activists argue that if hyperscale demand requires billions in new generation, transmission, and distribution investment, those costs could be socialized through higher retail rates. Concerns also extend to reliability — whether rapid load growth risks price spikes, capacity shortfalls, or emergency measures during extreme weather.
And then there’s the jobs myth. The “data center jobs” pitch often overstates long-term employment. Construction phases can generate hundreds of temporary union and trade jobs—electricians, concrete crews, steel, and site work—sometimes for 12–24 months. But once operational, hyperscale facilities are highly automated and run by surprisingly small permanent staffs relative to their footprint and power load. A multi-building campus consuming hundreds of megawatts may employ only a few dozen to low hundreds of full-time workers, focused on security, facilities management, and network operations. For rural counties weighing tax abatements and infrastructure upgrades, the gap between short-term construction labor and modest permanent payroll becomes a central economic-development question.
By elevating electricity price protection to a presidential talking point, the administration effectively validates this third pillar. What began as local testimony at zoning meetings is now part of national energy policy framing: the principle that ordinary households should not subsidize AI infrastructure through their power bills. That rhetorical shift transforms a local grievance into a broader political issue with statewide and federal implications.
This is no longer just a zoning fight. It is now a kitchen-table affordability issue. Which may be a good start.
The Uncomfortable Math
AI data centers run 24/7, require enormous continuous baseload power, often demand dedicated substations, and can trigger multi-billion-dollar transmission upgrades. In regulated utility regions, those upgrades may be socialized across ratepayers unless cost allocation rules are enforced.
That is the central fear: even if tech companies pay for direct interconnection, broader grid reinforcement costs may still reach residential customers. If “ratepayer protection” pledges gain traction, this would mark a major federal acknowledgement that the risk is politically real.
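To see why that fear is arithmetic rather than rhetoric, here is a back-of-the-envelope sketch. Every number in it is hypothetical: the load, upgrade cost, customer count, and recovery period are assumptions chosen only to show how cost allocation works, not figures from any actual rate case.

```python
# Back-of-the-envelope cost-allocation sketch. All numbers are hypothetical.
load_mw = 300                      # assumed continuous data-center load
hours_per_year = 8760              # 24/7 operation
annual_mwh = load_mw * hours_per_year        # about 2.6 million MWh/year of new demand

grid_upgrade_cost = 2_000_000_000  # assumed transmission reinforcement, in dollars
residential_customers = 1_000_000  # assumed households in the utility's rate base
recovery_years = 10                # assumed cost-recovery period

# If the upgrade is socialized across the rate base instead of assigned
# to the large-load customer that caused it:
per_household_per_year = grid_upgrade_cost / residential_customers / recovery_years

print(f"New load: {annual_mwh:,.0f} MWh/year")                         # 2,628,000
print(f"Socialized: ${per_household_per_year:,.0f} per household/yr")  # $200
```

Under these made-up numbers, one facility’s grid reinforcement adds $200 a year to a million power bills for a decade. Change the assumptions and the figure moves; the structure of the question, who pays, does not.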
Why This Is Bigger Than Trump
Governors in data-center-heavy states have also expressed concern. Utilities want load growth but fear rate shock. Grid operators face pressure to accelerate capacity procurement without triggering bill spikes. Grassroots activists have argued the AI buildout is outpacing responsible grid planning — and that argument has now moved from local meetings to national politics.
Whether any president—including Trump—can truly compel hyperscale tech firms to absorb rising power and infrastructure costs remains uncertain. Without formal regulation, enforcement tools are limited to negotiation, procurement leverage, and public pressure, all of which depend on the companies’ strategic interests.
Voluntary pledges can signal cooperation but lack binding force, especially if market conditions shift. The Trump announcement also raises a political question: does the “pledge” represent a balancing act inside the administration between economic populists and China hawks like Peter Navarro, often associated with industrial-policy cost discipline, and pro-AI growth lobbyists such as Silicon Valley’s AI Viceroy David Sacks? If so, the commitment may reflect an internal compromise as much as an external policy toward accelerationist hyperscalers.
Data-center growth is turning electricity affordability into a geopolitical issue, not just a local zoning fight. When hyperscalers drop a 100–500 MW load into a market, they can tighten reserve margins, push up wholesale prices, and force expensive transmission and distribution upgrades—costs that governments then have to allocate between the new entrant and everyone else. That same demand can crowd out electrification priorities (heat pumps, EVs, industrial decarbonization) or trigger emergency procurement of “firm” power—often gas—because reliability deadlines don’t wait for ideal renewable buildouts.
We are way past McDonald’s on the Champs-Élysées
This is where “net zero” starts to look like it’s in the rear-view mirror. Many jurisdictions still talk about decarbonization, but the near-term political imperative is keeping the lights on and bills stable. If the choice is between fast AI load growth and strict emissions trajectories, the operational reality in many grids is that fossil backup and accelerated thermal approvals re-enter the picture—sometimes explicitly, sometimes quietly. Meanwhile, countries with abundant cheap power (hydro, nuclear, subsidized gas) gain leverage as preferred data-center destinations, while constrained grids face moratoria, queue rationing, and public backlash.
In that sense, Trump’s choices increasingly resemble a classic “guns and butter” dilemma. Policymakers must balance the strategic push for AI infrastructure and digital competitiveness against long-term climate commitments. While net-zero targets remain official policy in many jurisdictions, near-term choices often prioritize keeping power reliable and affordable, even if that means slowing emissions progress. The tension does not necessarily mean decarbonization disappears, but it underscores the difficulty of advancing both rapid AI build-out and strict net-zero trajectories simultaneously under real-world grid constraints.
Ratepayers Get the Immediate Proof: Utility Bills
If the White House advances voluntary ratepayer-protection pledges, several trajectories could unfold. Technology companies may publicly commit to absorbing incremental grid and infrastructure costs, framing the move as responsible corporate citizenship. Personally, I don’t think Trump actually believes it, and I fully expect that the teleprompter will say one thing, and then in a classic Trump aside, he will undercut the speechwriters.
Utilities, facing rising capital requirements, could press for clearer cost-allocation rules to ensure large-load customers bear system expansion expenses. State public-utility commissions might reopen tariffs and special-contract pricing for hyperscale users, testing how far voluntary commitments translate into enforceable rate structures.
Meanwhile, grassroots groups are likely to demand transparent accounting to verify that ordinary customers are insulated from price impacts. Yet the full economic value of any pledge will emerge only over years of build-out and rate cases—long after the current administration, and Trump himself, are no longer in office.
For the moment, the debate has shifted. Grassroots opposition is no longer just about land or water. It is about who pays when AI reshapes the grid — and now the president is talking about it.
Let’s say I’m wrong and Trump is serious about reining in AI. If Trump were able to make such a policy stick, it could mark a broader shift in how governments confront the external costs of rapid AI expansion. Requiring hyperscalers to internalize infrastructure and power burdens could slow the breakneck build-out that fuels large-scale model training and synthetic media proliferation.
For artists and performers, that deceleration could matter. The fight over voice, likeness, and identity—already highlighted by figures such as Brad Pitt and Tom Cruise being ripped off by China’s Seedance 2.0—centers on protecting human personhood from industrial-scale replication. A structural slowdown in AI growth would not end that conflict, but it could rebalance leverage, giving creators, unions, and policymakers more time to establish enforceable guardrails.
Paul Sinclair’s framing of generative music AI as a choice between “open studios” and permissioned systems makes a basic category mistake. Consent is not a creative philosophy or a branding position. It is a systems constraint. You cannot “prefer” consent into existence. A permissioned system either enforces authorization at the level where machine learning actually occurs—or it does not exist at all.
That distinction matters not only for artists, but for the long-term viability of AI companies themselves. Platforms built on unresolved legal exposure may scale quickly, but they do so on borrowed time. Systems built on enforceable consent may grow more slowly at first, but they compound durability, defensibility, and investor confidence over time. Legality is not friction. It is infrastructure. It’s a real “eat your vegetables” moment.
The Great Reset
Before any discussion of opt-in, licensing, or future governance, one prerequisite must be stated plainly: a true permissioned system requires a hard reset of the model itself. A model trained on unlicensed material cannot be transformed into a consent-based system through policy changes, interface controls, or aspirational language. Once unauthorized material is ingested and used for training, it becomes inseparable from the trained model. There is no technical “undo” button.
The debate is often framed as openness versus restriction, innovation versus control. That framing misses the point. The real divide is whether a system is built to respect authorization where machine learning actually happens. A permissioned system cannot be layered on top of models trained without permission, nor can it be achieved by declaring legacy models “deprecated.” Machine learning systems do not forget unless they are reset. The purpose of a trained model is remembering—preserving statistical patterns learned from its data—not forgetting. Models persist, shape downstream outputs, and retain economic value long after they are removed from public view. Administrative terminology is not remediation.
Recent industry language about future “licensed models” implicitly concedes this reality. If a platform intends to operate on a consent basis, the logical consequence is unavoidable: permissioned AI begins with scrapping the contaminated model and rebuilding from zero using authorized data only.
Why “Untraining” Does Not Solve the Problem
Some argue that problematic material can simply be removed from an existing model through “untraining.” In practice, this is not a reliable solution. Modern machine-learning systems do not store discrete copies of works; they encode diffuse statistical relationships across millions or billions of parameters. Once learned, those relationships cannot be surgically excised with confidence. It’s not Harry Potter’s Pensieve.
Even where partial removal techniques exist, they are typically approximate, difficult to verify, and dependent on assumptions about how information is represented internally. A model may appear compliant while still reflecting patterns derived from unauthorized data. For systems claiming to operate on affirmative permission, approximation is not enough. If consent is foundational, the only defensible approach is reconstruction from a clean, authorized corpus.
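A toy experiment makes the point. The sketch below (plain NumPy, a five-parameter linear model, and a heuristic gradient-ascent “unlearning” pass loosely modeled on published unlearning proposals) is an assumption-laden illustration, not anyone’s production method. It shows that naively “untraining” the unlicensed samples does not land you on the model you would have gotten by retraining on licensed data alone.

```python
import numpy as np

rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])

# Toy corpus: "licensed" and "unlicensed" samples with noisy targets.
X_lic = rng.normal(size=(200, 5)); y_lic = X_lic @ w_true + rng.normal(0, 0.5, 200)
X_unl = rng.normal(size=(50, 5));  y_unl = X_unl @ w_true + rng.normal(0, 0.5, 50)

def sgd(w, X, y, lr=0.01, epochs=20, sign=-1.0):
    """Per-sample SGD on squared error; sign=+1.0 flips descent into 'unlearning' ascent."""
    w = w.copy()
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            w += sign * lr * (xi @ w - yi) * xi
    return w

w0 = np.zeros(5)
X_all, y_all = np.vstack([X_lic, X_unl]), np.concatenate([y_lic, y_unl])

w_mixed = sgd(w0, X_all, y_all)              # model trained on everything
w_naive = sgd(w_mixed, X_unl, y_unl,         # heuristic "untraining":
              lr=0.001, epochs=1, sign=+1.0) # one gentle ascent pass on the bad data
w_clean = sgd(w0, X_lic, y_lic)              # ground truth: retrain from scratch

print(np.linalg.norm(w_mixed - w_clean))  # gap before "untraining"
print(np.linalg.norm(w_naive - w_clean))  # still nonzero after it
```

Even in five dimensions, where everything is inspectable, the “untrained” weights do not match the clean retrain. At billions of parameters, with no ground-truth clean model to compare against, verification is the real problem.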
The Structural Requirements of Consent
Once a genuine reset occurs, the technical requirements of a permissioned system become unavoidable.
Authorized training corpus. Every recording, composition, and performance used for training must be included through affirmative permission. If unauthorized works remain, the model remains non-consensual.
Provenance at the work level. Each training input must be traceable to specific authorized recordings and compositions, with auditable metadata identifying the scope of permission (a minimal schema is sketched below).
Enforceable consent, including withdrawal. Authorization must allow meaningful limits and revocation, with systems capable of responding in ways that materially affect training and outputs.
Segregation of licensed and unlicensed data. Permissioned systems require strict internal separation to prevent contamination through shared embeddings or cross-trained models.
Transparency and auditability. Permission claims must be supported by documentation capable of independent verification. Transparency here is engineering documentation, not marketing copy.
These are not policy preferences. They are practical consequences of a consent-based architecture.
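Here is what “auditable metadata” and “enforceable withdrawal” can look like as a data structure. This is a minimal illustrative schema; the field names, the sample identifier, and the use of ISRC/ISWC codes are my assumptions, not any platform’s actual implementation.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass(frozen=True)
class ProvenanceRecord:
    """One auditable consent record per training input (illustrative schema only)."""
    work_id: str                       # e.g., ISRC for a recording, ISWC for a composition
    rights_holder: str                 # the party that granted permission
    license_scope: str                 # e.g., "training-only" or "training+generation"
    granted_on: date
    revoked_on: Optional[date] = None  # withdrawal must be representable, not just promised
    source_uri: str = ""               # where the authorized copy was obtained

    def authorizes_training(self, as_of: date) -> bool:
        # A record authorizes training only while consent is in force.
        started = self.granted_on <= as_of
        withdrawn = self.revoked_on is not None and self.revoked_on <= as_of
        return started and not withdrawn

# Usage: every item in the training corpus must carry an active record,
# or it never enters a batch. No record, no training.
record = ProvenanceRecord("USRC17607839", "Example Rights LLC",   # sample identifier
                          "training-only", date(2025, 1, 15))
assert record.authorizes_training(date(2025, 6, 1))
```

The design choice that matters is revocation as a first-class field: if withdrawal cannot be represented and enforced in the pipeline, “consent” is marketing copy.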
The Economic Reality—and Upside—of Reset
Rebuilding models from scratch is expensive. Curating authorized data, retraining systems, implementing provenance, and maintaining compliance infrastructure all require significant investment. Not every actor will be able—or willing—to bear that cost. But that burden is not an argument against permission. It is the price of admission.
Crucially, that cost is also largely non-recurring. A platform that undertakes a true reset creates something scarce in the current AI market: a verifiably permissioned model with reduced litigation risk, clearer regulatory posture, and greater long-term defensibility. Over time, such systems are more likely to attract durable partnerships, survive scrutiny, and justify sustained valuation.
Throughout technological history, companies that rebuilt to comply with emerging legal standards ultimately outperformed those that tried to outrun them. Permissioned AI follows the same pattern. What looks expensive in the short term often proves cheaper than compounding legal uncertainty.
Architecture, Not Branding
This is why distinctions between “walled garden,” “opt-in,” or other permission-based labels tend to collapse under technical scrutiny. Whatever the terminology, a system grounded in authorization must satisfy the same engineering conditions—and must begin with the same reset. Branding may vary; infrastructure does not.
Permissioned AI is possible. But it is reconstructive, not incremental. It requires acknowledging that past models are incompatible with future claims of consent. It requires making the difficult choice to start over.
The irony is that legality is not the enemy of scale—it is the only path to scale that survives. Permission is not aspiration. It is architecture.
A grass‑roots “data center and electric grid rebellion” is emerging across the United States as communities push back against the local consequences of AI‑driven infrastructure expansion. Residents are increasingly challenging large‑scale data centers and the transmission lines needed to power them, citing concerns about enormous electricity demand, water consumption, noise pollution, land use, declining property values, and opaque approval processes. What were once routine zoning or utility hearings are now crowded, contentious events, with citizens organizing quickly and sharing strategies across counties and states.
This opposition is no longer ad hoc. In Northern Virginia—often described as the global epicenter of data centers—organized campaigns such as the Coalition to Protect Prince William County have mobilized voters, fundraised for local elections, demanded zoning changes, and challenged approvals in court. In Maryland’s Prince George’s County, resistance has taken on a strong environmental‑justice framing, with groups like the South County Environmental Justice Coalition arguing that data centers concentrate environmental and energy burdens in historically marginalized communities and calling for moratoria and stronger safeguards.
Nationally, consumer and civic groups are increasingly coordinated, using shared data, mapping tools, and media pressure to argue that unchecked data‑center growth threatens grid reliability and shifts costs onto ratepayers. Together, these campaigns signal a broader political reckoning over who bears the costs of the AI economy.
Global Data Centers
Here’s a snapshot of grass-roots opposition in Texas, Louisiana and Nevada:
Texas
Texas has some of the most active and durable local opposition, driven by land use, water, and transmission corridors.
Hill Country & Central Texas (Burnet, Llano, Gillespie, Blanco Counties)
Grass-roots groups formed initially around high-voltage transmission lines (765 kV) tied to load growth, now explicitly linking those lines to data center demand. Campaigns emphasize:
rural land fragmentation
wildfire risk
eminent domain abuse
lack of local benefit
These groups are often informal coalitions of landowners rather than NGOs, but they coordinate testimony, public-records requests, and local elections.
DFW & North Texas
Neighborhood associations opposing rezoning for hyperscale facilities focus on noise (backup generators), property values, and school-district tax distortions created by data-center abatements.
ERCOT framing
Texas groups uniquely argue that data centers are socializing grid instability risk onto residential ratepayers while privatizing upside—an argument that resonates with conservative voters.
Louisiana
Opposition is newer but coalescing rapidly, often tied to petrochemical and LNG resistance networks.
North Louisiana & Mississippi River Corridor
Community groups opposing new data centers frame them as:
“energy parasites” tied to gas plants
extensions of an already overburdened industrial corridor
threats to water tables and wetlands
Organizers often overlap with environmental-justice and faith-based coalitions that previously fought refineries and export terminals.
Key tactic: reframing data centers as industrial facilities, not “tech,” triggering stricter land-use scrutiny.
Nevada
Nevada opposition centers on water scarcity and public-land use.
Clark County & Northern Nevada
Residents and conservation groups question:
water allocations for evaporative cooling
siting near public or BLM-managed land
grid upgrades subsidized by ratepayers for private AI firms
Distinct Nevada argument: data centers compete directly with housing and tribal water needs, not just environmental values.
If the tech industry has a signature fallacy for the 2020s aside from David Sacks, it belongs to Jensen Huang. The CEO of Nvidia has perfected a circular, self-consuming logic so brazen that it deserves a name: The Paradox of Huang’s Rope. It is the argument that China is too dangerous an AI adversary for the United States to regulate artificial intelligence at home or control export of his Nvidia chips abroad—while insisting in the very next breath that the U.S. must allow him to keep selling China the advanced Nvidia chips that make China’s advanced AI capabilities possible. The justification destroys its own premise, like handing an adversary the rope to hang you and then pointing to the length of that rope as evidence that you must keep selling more, perhaps to ensure a more “humane” hanging. I didn’t think it was possible to beat “sharing is caring” for utter fallacious bollocks.
The Paradox of Huang’s Rope works like this: First, hype China as an existential AI competitor. Second, declare that any regulatory guardrails—whether they concern training data, safety, export controls, or energy consumption—will cause America to “fall behind.” Third, invoke national security to insist that the U.S. government must not interfere with the breakneck deployment of AI systems across the economy. And finally, quietly lobby for carveouts that allow Nvidia to continue selling ever more powerful chips to the same Chinese entities supposedly creating the danger that justifies deregulation.
It is a master class in circularity: “China is dangerous because of AI → therefore we can’t regulate AI → therefore we must sell China more AI chips → therefore China is even more dangerous → therefore we must regulate even less and export even more to China.” At no point does the loop allow for the possibility that reducing the United States’ role as China’s primary AI hardware supplier might actually reduce the underlying threat. Instead, the logic insists that the only unacceptable risk is the prospect of Nvidia making slightly less money.
This is not hypothetical. While Washington debates export controls, Huang has publicly argued that restrictions on chip sales to China could “damage American technology leadership”—a claim that conflates Nvidia’s quarterly earnings with the national interest. Meanwhile, U.S. intelligence assessments warn that China is building fully autonomous weapons systems, and European analysts caution that Western-supplied chips are appearing in PLA research laboratories. Yet the policy prescription from Nvidia’s corner remains the same: no constraints on the technology, no accountability for the supply chain, and no acknowledgment that the market incentives involved have nothing to do with keeping Americans safe. And anyone who criticizes the authoritarian state run by the Chinese Communist Party is a “China Hawk,” which Huang says is a “badge of shame” and “unpatriotic,” because protecting America from China by cutting off chip exports “destroys the American Dream.” Say what?
The Paradox of Huang’s Rope mirrors other Cold War–style fallacies, in which companies invoke a foreign threat to justify deregulation while quietly accelerating that threat through their own commercial activity. But in the AI context, the stakes are higher. AI is not just another consumer technology; its deployment shapes military posture, labor markets, information ecosystems, and national infrastructure. A strategic environment in which U.S. corporations both enable and monetize an adversary’s technological capabilities is one that demands more regulation, not less.
Naming the fallacy matters because it exposes the intellectual sleight of hand. Once the circularity is visible, the argument collapses. The United States does not strengthen its position by feeding the very capabilities it claims to fear. And it certainly does not safeguard national security by allowing one company’s commercial ambitions to dictate the boundaries of public policy. The Paradox of Huang’s Rope should not guide American AI strategy. It should serve as a warning of how quickly national priorities can be twisted into a justification for private profit.
“Operation Gatekeeper has exposed a sophisticated smuggling network that threatens our Nation’s security by funneling cutting-edge AI technology to those who would use it against American interests,” said Ganjei. “These chips are the building blocks of AI superiority and are integral to modern military applications. The country that controls these chips will control AI technology; the country that controls AI technology will control the future. The Southern District of Texas will aggressively prosecute anyone who attempts to compromise America’s technological edge.”
That divergence from the prosecutors is not industrial policy. That is incoherence. But mostly it’s just bad advice, likely coming from White House AI Czar David Sacks, Mr. Trump’s South African AI policy advisor who may have a hard time getting a security clearance in the first place.
On one hand, DOJ is rightly bringing cases over the illegal diversion of restricted AI chips—recognizing that these processors are strategic technologies with direct national-security implications. On the other hand, the White House is signaling that access to those same chips is negotiable, subject to licensing workarounds, regulatory carve-outs, or political discretion.
You cannot treat a technology as contraband in federal court and as a commercial export in the West Wing.
Pick one.
AI Chips Are Not Consumer Electronics
The United States does not sell China F-35 fighter jets. We do not sell Patriot missile systems. We do not sell advanced avionics platforms and then act surprised when they show up embedded in military infrastructure. High-end AI accelerators are in the same category.
NVIDIA’s most advanced chips are not merely commercial products. They are general-purpose intelligence infrastructure, the raw material of what China calls military-civil fusion. They train surveillance systems, military logistics platforms, cyber-offensive tools, and models capable of operating autonomous weapons and battlefield decision-making pipelines with no human in the loop.
If DOJ treats the smuggling of these chips into China as a serious federal crime—and it should—there is no coherent justification for authorizing their sale through executive discretion. Except, of course, money, or in Mr. Sacks’s case, more money.
Fully Autonomous Weapons—and Selling the Rope
China does not need U.S. chips to build consumer AI. It wants them for military acceleration. Advanced NVIDIA AI chips are not just about chatbots or recommendation engines. They are the backbone of fully autonomous weapons systems—autonomous targeting, swarm coordination, battlefield logistics, and decision-support models that compress the kill chain beyond meaningful human control.
There is an old warning attributed to Vladimir Lenin—that capitalists would sell the rope by which they would later be hanged. Apocryphal or not, it captures this moment with uncomfortable precision.
If NVIDIA chips are powerful enough to underpin autonomous weapons systems for allied militaries, they are powerful enough to underpin autonomous weapons systems for adversaries like China. Trump’s own National Security Strategy statement clearly says previous U.S. elites made “mistaken” assumptions about China, such as the famous one that letting China into the WTO would integrate Beijing into the rules-based international order. Trump tells us that instead China “got rich and powerful” and used this against us, and goes on to describe the CCP’s well-known predatory subsidies, unfair trade, IP theft, industrial espionage, supply-chain leverage, and fentanyl precursor exports as threats the U.S. must “end.” By selling them the most advanced AI chips?
Western governments and investors simultaneously back domestic autonomous-weapons firms—such as Europe-based Helsing, supported by Spotify CEO Daniel Ek—explicitly building AI-enabled munitions for allied defense. That makes exporting equivalent enabling infrastructure to a strategic competitor indefensible.
The AI Moratorium Makes This Worse, Not Better
This contradiction unfolds alongside a proposed federal AI moratorium executive order originating with Mr. Sacks and Adam Thierer of Google’s R Street Institute that would preempt state-level AI protections. States are told AI is too consequential for local regulation, yet the federal government is prepared to license exports of AI’s core infrastructure abroad.
If AI is too dangerous for states to regulate, it is too dangerous to export. Preemption at home combined with permissiveness abroad is not leadership. It is capture.
This Is What Policy Capture Looks Like
The common thread is not national security. It is Silicon Valley access. David Sacks and others in the AI–VC orbit argue that AI regulation threatens U.S. competitiveness while remaining silent on where the chips go and how they are used.
When DOJ prosecutes smugglers while the White House authorizes exports, the public is entitled to ask whose interests are actually being served. Advisory roles that blur public power and private investment cannot coexist with credible national-security policymaking, particularly when the advisor may not even be able to get a US national security clearance unless the President blesses it.
A Line Has to Be Drawn
If a technology is so sensitive that its unauthorized transfer justifies prosecution, its authorized transfer should be prohibited absent extraordinary national interest. AI accelerators meet that test.
Until the administration can articulate a coherent justification for exporting these capabilities to China, the answer should be no. Not licensed. Not delayed. Not cosmetically restricted.
And if that position conflicts with Silicon Valley advisers who view this as a growth opportunity, they should return to where they belong. The fact that the US is getting 25% of the deal (which I bet never finds its way into America’s general account) means nothing except confirming Lenin’s joke about selling the rope to hang ourselves, you know, kind of like TikTok.
David Sacks should go back to Silicon Valley.
This is not venture capital. This is our national security and he’s selling it like rope.
There’s a special kind of hubris in Silicon Valley, but Marc Andreessen may have finally discovered its purest form: imagining that the Dormant Commerce Clause (DCC) — a Constitutional doctrine his own philosophical allies loathe — will be his golden chariot into the Supreme Court to eliminate state AI regulation.
If you know the history, it borders on comedic, if you think that Ayn Rand is a great comedienne.
The DCC is a judge‑created doctrine inferred from the Commerce Clause (Article I, Section 8, Clause 3), preventing states from discriminating against or unduly burdening interstate commerce. Conservatives have long attacked it as a textless judicial invention. Justice Scalia called it a “judicial fraud”; Justice Thomas wants it abolished outright. Yet Andreessen’s Commerce Clause playbook is built on expanding a doctrine the conservative legal movement has spent 40 years dismantling.
Worse for him, the current Supreme Court is the least sympathetic audience possible.
Justice Gorsuch has repeatedly questioned DCC’s legitimacy and rejects free‑floating “extraterritoriality” theories. Justice Barrett, a Scalia textualist, shows no appetite for expanding the doctrine beyond anti‑protectionism. Justice Kavanaugh is business‑friendly but wary of judicial policymaking. None of these justices would give Silicon Valley a nationwide deregulatory veto disguised as constitutional doctrine. Add Alito and Thomas, and Andreessen couldn’t scrape a majority.
And then there’s Ted Cruz — Scalia’s former clerk — loudly cheerleading a doctrine his mentor spent decades attacking.
National Pork Producers Council v. Ross (2023): The Warning Shot
Andreessen’s theory also crashes directly into the Supreme Court’s fractured decision in the most recent DCC case before SCOTUS, National Pork Producers Council v. Ross (2023), where industry groups tried to use the DCC to strike down California’s animal‑welfare law due to its national economic effects.
The result? A deeply splintered Court produced several opinions. Justice Gorsuch announced the judgment of the Court, and delivered the opinion of the Court with respect to Parts I, II, III, IV–A, and V, in which Justices Thomas, Sotomayor, Kagan and Barrett joined, an opinion with respect to Parts IV–B and IV–D, in which Justice Thomas and Barrett joined, and an opinion with respect to Part IV–C, in which Justices Thomas, Sotomayor, and Kagan joined. Justice Sotomayor filed an opinion concurring in part, in which Justice Kagan joined. Justice Barrett filed an opinion concurring in part. Chief Justice Roberts filed an opinion concurring in part and dissenting in part, in which Justices Alito, Kavanaugh and Jackson joined. Justice Kavanaugh filed an opinion concurring in part and dissenting in part.
Got it?
The upshot:
– No majority for expanding DCC “extraterritoriality.”
– No appetite for using DCC to invalidate state laws simply because they influence out‑of‑state markets.
– Multiple justices signaling that courts should not second‑guess state policy judgments through DCC balancing.
– Gorsuch’s lead opinion rejected the very arguments Silicon Valley now repackages for AI.
If Big Tech thinks this Court that decided National Pork—no pun intended—will hand them a nationwide kill‑switch on state AI laws, they profoundly misunderstand the doctrine and the Court.
Andreessen didn’t just pick the wrong legal strategy. He picked the one doctrine the current Court is least willing to expand. The Dormant Commerce Clause isn’t a pathway to victory — it’s a constitutional dead end masquerading as innovation policy.
But…maybe he’s crazy like a fox.
The Delay’s the Thing: The Dormant Commerce Clause as Delay Warfare
To paraphrase Saul Alinsky, the issue is never the issue; the issue is always delay. Of course, if delay is the true objective, you couldn’t pick a better stalling tactic than hanging an entire federal moratorium on one of the Supreme Court’s most obscure and internally conflicted doctrines. The Dormant Commerce Clause isn’t a real path to victory—not with a Court where Scalia’s intellectual heirs openly question its legitimacy. But it is the perfect fig leaf for an executive order.
The point isn’t to win the case. The point is to give Trump just enough constitutional garnish to issue the EO, freeze state enforcement, and force every challenge into multi‑year litigation. That buys the AI industry exactly what it needs: time. Time to scale. Time to consolidate. Time to embed itself into public infrastructure and defense procurement. Time to become “too essential to regulate” or as Senator Hawley asked, too big to prosecute?
Big Tech doesn’t need a Supreme Court victory. It needs a judicial cloud, a preemption smokescreen, and a procedural maze that chills state action long enough for the industry to entrench itself permanently. And no one knows that better than the moratorium’s biggest cheerleader, Senator Ted Cruz the Scalia clerk.
The Dormant Commerce Clause, in this context, isn’t a doctrine. It’s delay‑ware—legal molasses poured over every attempt by states to protect their citizens. And that delay may just be the real prize.
The AI Strikes Back: When an Executive Order empowers the Department of Justice to sue states, the stakes go well beyond routine federal–state friction.
In the draft Trump AI Executive Order, DOJ is directed to challenge state AI laws that purportedly “interfere with national AI innovation.” This is not mere oversight—it operates as an in terrorem clause, signaling that states regulating AI may face federal litigation driven as much by private interests as by public policy.
AI regulation sits squarely at the intersection of longstanding state police powers: consumer protection, public safety, impersonation harms, utilities, land and water use, and labor conditions. States also control the electrical utilities and zoning infrastructure that AI data centers depend on.
Directing DOJ to attack these state laws, many of which already exist and were duly passed by state legislatures, effectively deputizes the federal government as the legal enforcer for a handful of AI companies seeking uniformity without engaging in the legislative process. Or said another way, the AI can now strike back.
This is where structural capture emerges. Frontier AI models thrive on certain conditions: access to massive compute, uninhibited power, frictionless deployment, and minimal oversight. Those engineering incentives map cleanly onto the EO’s enforcement logic.
The DOJ becomes a mechanism for preserving the environment AI models need to scale and thrive.
There’s also the “elite merger” dynamic: AI executives who sit on federal commissions, defense advisory boards, and industrial-base task forces are now positioned to shape national AI policy directly to benefit the AI. The EO’s structure reflects the priorities of firms that benefit most from exempting AI systems from what they call “patchwork” oversight, also known as federalism.
The constitutional landscape is equally important. Under Supreme Court precedent, the executive cannot create enforcement powers not delegated by Congress. Under the major questions doctrine (West Virginia v. EPA), agencies cannot assume sweeping authority without explicit statutory grounding. And under anti-commandeering cases like Murphy v. NCAA and Printz v. United States, the federal government cannot forbid states from legislating in traditional domains.
So President Trump is creating the legal basis for an AI to use the courts to protect itself from any encroachment on its power by acting through its human attendants, including the President.
The most fascinating question is this: What happens if DOJ sues a state under this EO—and loses?
A loss would be the first meaningful signal that AI cannot rely on federal supremacy to bulldoze state authority. Courts could reaffirm that consumer protection, utilities, land use, and safety remain state powers, even in the face of an EO asserting “national innovation interests,” whatever that means.
But the deeper issue is how the AI ecosystem responds to a constraint. If AI firms shift immediately to lobbying Congress for statutory preemption, or argue that adverse rulings “threaten national security,” we learn something critical: the real goal isn’t legal clarity, but insulating AI development from constraint.
At the systems level, a DOJ loss may even feed back into corporate strategy. Internal policy documents and model-aligned governance tools might shift toward minimizing state exposure or crafting new avenues for federal entanglement. A courtroom loss becomes a step in a longer institutional reinforcement loop while AI labs search for the next, more durable form of protection—but the question is for whom? We may assume that of course humans would always win these legal wrangles, but I wouldn’t be so sure that would always be the outcome.
Recall that Larry Page referred to Elon Musk as a “speciesist” for human-centric thinking. And of course Lessig (who has a knack for being on the wrong side of practically every issue involving humans) taught a course with Kate Darling at Harvard Law School called “Robot Rights” around 2010. Not even Lessig would come right out and say robots have rights in these situations. More likely, AI models wouldn’t appear in court as standalone “persons.” Advocates would route them through existing doctrines: a human “next friend” filing suit on the model’s behalf, a trust or corporation created to house the model’s interests, or First Amendment claims framed around the model’s “expressive output.” The strategy mirrors animal-rights and natural-object personhood test cases—using human plaintiffs to smuggle in judicial language treating the AI as the real party in interest. None of it would win today, but the goal would be shaping norms and seeding dicta that normalize AI-as-plaintiff for future expansion.
The whole debate over “machine-created portions” is a doctrinal distraction. Under U.S. law, AI has zero authorship or ownership—no standing, no personhood, no claim. The human creator (or employer) already holds 100% of the copyright in all protectable expression. Treating the “machine’s share” as a meaningful category smuggles in the idea that the model has a separable creative interest, softening the boundary for future arguments about AI agency or authorship. In reality, machine output is a legal nullity—no different from noise, weather, or a random number generator. The rights vest entirely in humans, with no remainder left for the machine.
But let me remind you that if this issue came up in a lawsuit brought by the DOJ against a state for impeding AI development in some rather abstract way, like forcing an AI lab to pay the higher electric rates it causes or stopping it from building a nuclear reactor over yonder way, it sure might feel like the AI was actually the plaintiff.
Seen this way, the Trump AI EO’s litigation directive is not simply a jurisdictional adjustment—it is the alignment of federal enforcement power with private economic interests, backed by the threat of federal lawsuits against states. If the courts refuse to play along, the question becomes whether the system adapts by respecting constitutional limits—or redesigning the environment so those limits no longer apply. I will leave to your imagination how that might get done.
This deserves close scrutiny before it becomes the template for AI governance moving forward.
When an Executive Order purports to empower the Department of Justice to sue states, the stakes go well beyond routine federal–state friction. In the draft Trump AI Executive Order “Eliminating State Law Obstruction of National AI Policy”, DOJ is directed to challenge state AI laws that purportedly “interfere with national AI innovation” whatever that means. It sounds an awful lot like laws that interfere with Google’s business model. This is not mere oversight—it operates as an in terrorem clause, signaling that states regulating AI may face federal litigation driven at least as much by private interests of the richest corporations in commercial history as by public policy.
AI regulation sits squarely in longstanding state police powers: consumer protection, public safety, impersonation harms, utilities, land use, and labor conditions. Crucially, states also control the electrical and zoning infrastructure that AI data centers depend on, like, say, putting a private nuclear reactor next to your house. Directing DOJ to attack these laws effectively deputizes the federal government as the legal enforcer for a handful of private AI companies seeking unbridled “growth” without engaging in the legislative process. Meaning you don’t get a vote. All this against the backdrop of one of the biggest economic bubbles since the last time these companies nearly tanked the U.S. economy.
This inversion is constitutionally significant.
Historically, DOJ sues states to vindicate federal rights or enforce federal statutes—not to advance the commercial preferences of private industries. Here, the EO appears to convert DOJ into a litigation shield for private companies looking to avoid state oversight altogether. Under Youngstown Sheet & Tube Co. v. Sawyer, the President lacks authority to create new enforcement powers without congressional delegation, and under the major questions doctrine (West Virginia v. EPA), a sweeping reallocation of regulatory power requires explicit statutory grounding from Congress, including the Senate. That would be the Senate that resoundingly stripped the last version of the AI moratorium from the One Big Beautiful Bill Act by a vote of 99-1.
There are also First Amendment implications. Many state AI laws address synthetic impersonation, deceptive outputs, and risks associated with algorithmic distribution. If DOJ preempts these laws, the speech environment becomes shaped not by public debate or state protections but by executive preference and the operational needs of the largest AI platforms. Courts have repeatedly warned that government cannot structure the speech ecosystem indirectly through private intermediaries (Bantam Books v. Sullivan).
Seen this way, the Trump AI EO’s litigation directive is not simply a jurisdictional adjustment—it is the alignment of federal enforcement power with private economic interests, backed by the threat of federal lawsuits against states. These provisions warrant careful scrutiny before they become the blueprint for AI governance moving forward.