Update: Trump Floats “Ratepayer Protection” Pledges as Grassroots Revolt Over Data Centers Spreads

For the better part of a year, local opposition to AI hyperscaler data centers has been dismissed as NIMBYism, yet the movement has gained real traction: rural counties worried about water draw, suburban communities objecting to diesel backup generators, and landowners frustrated by transmission corridors cutting through farmland and by massive data centers taking large swaths of productive land out of use in an essentially irreversible dedication to AI.

Local politics around data-center construction often turn on land use, water, and power. Officials welcome tax base and jobs, but residents worry about noise, transmission lines, diesel backup generators, and groundwater consumption. Zoning boards and county commissioners become battlegrounds where developers promise infrastructure upgrades and community benefits while opponents push for setbacks, environmental review, and limits on incentives. Utilities and grid operators weigh reliability and cost shifting, especially where hyperscale demand requires new substations or high-voltage lines. Rural areas face pressure from land aggregation and fast-track permitting, while cities debate transparency, property-tax abatements, and whether long-term public costs outweigh near-term economic gains.

But the politics just escalated.

According to multiple reports, President Trump is preparing to highlight “ratepayer protection pledges” from major tech companies during his State of the Union address tonight — urging AI and cloud companies to publicly commit that residential electricity customers will not bear the cost of new data-center load.

That confirms concerns Trump advisor Peter Navarro has raised over the last couple of months, and it is not a small signal.

For months, grassroots organizers have warned that the hyperscale AI buildout could increase local electricity rates, force costly new transmission lines, accelerate natural gas plant approvals, and strain already fragile regional grids. And then there is the nuclear issue, as hyperscalers openly promote new nuclear plants. Until now, much of the policy conversation has centered on growth and competitiveness, you know, because China. The Trump pivot reframes the issue around consumer protection, closely tracking the concerns raised by grassroots opponents.

What the White House Is Signaling

The reported approach stops short of imposing a formal price cap on electricity or shifting costs to taxpayers. Instead, policymakers are signaling that large technology firms — particularly hyperscale operators — should voluntarily shoulder the marginal power costs created by their own demand growth.

In practice, this means encouraging companies such as Microsoft, Alphabet, Amazon, and OpenAI to fund grid upgrades, transmission extensions, standby generation, and other infrastructure required to serve new data-center loads, rather than socializing those costs across ordinary ratepayers. The political logic is straightforward: if hyperscale demand is driving billions in new utility investment, the beneficiaries should internalize the expense. The strategy relies on negotiated commitments, public-utility leverage, and reputational pressure rather than mandates, aiming to avoid rate shocks while still enabling continued digital-infrastructure expansion.

We’ll see.

In parallel, the administration has backed efforts to expand electricity supply in regions experiencing sharp data-center load growth, pairing political support with regulatory acceleration. In practice, this has meant encouraging grid operators to run emergency or supplemental capacity auctions—for example, in markets like PJM or ERCOT—to secure short-lead-time generation such as gas peaker plants, temporary turbines, and large-scale battery storage. Policymakers have also supported fast-track permitting and uprates at existing nuclear and natural-gas facilities, along with expedited approvals for new combined-cycle plants where reliability risks are rising. In some areas, utilities are advancing transmission expansions and demand-response programs to bridge near-term gaps. The goal is to bring firm capacity online quickly enough to keep pace with AI-driven electricity demand without triggering reliability shortfalls or price spikes.

Supposedly, Trump’s message is that if data centers drive the demand spike, data centers should fund the solution. That makes sense, but count me as a skeptic as to whether it will actually happen, or whether hyperscalers will come to the taxpayer instead. You know, because China. But let’s sell China Nvidia chips.

Why This Matters for the Grassroots Fight

Grassroots opposition to large-scale data centers has crystallized around three increasingly defined pillars — each with its own constituency and political leverage.

1. Land Use and Community Character.
Residents object to the scale and industrial footprint of hyperscale campuses: multi-building complexes, 24/7 lighting, diesel backup generators, high-security fencing, and new high-voltage transmission corridors. In rural counties, projects can involve the quiet aggregation of farmland followed by rezoning from agricultural to industrial use. In suburban areas, neighbors focus on setbacks, noise from cooling systems, and visual impact. Planning and zoning hearings have become flashpoints where local control collides with state-level economic development priorities.

2. Environmental and Water Stress.
Data centers are energy- and water-intensive facilities. In water-constrained regions, evaporative cooling systems raise concerns about aquifer drawdown and drought resilience. Environmental advocates question lifecycle emissions from new gas-fired generation built to serve AI load, as well as the cumulative impact of substations, transmission lines, and backup generators. Even where companies pledge renewable procurement, critics argue that incremental demand can still drive fossil fuel buildout in constrained grids.

3. Electricity Costs and Grid Strain.
The most politically volatile pillar is ratepayer impact. Local activists argue that if hyperscale demand requires billions in new generation, transmission, and distribution investment, those costs could be socialized through higher retail rates. Concerns also extend to reliability — whether rapid load growth risks price spikes, capacity shortfalls, or emergency measures during extreme weather.

And then there’s the jobs myth. The “data center jobs” pitch often overstates long-term employment. Construction phases can generate hundreds of temporary union and trade jobs—electricians, concrete crews, steel, and site work—sometimes for 12–24 months. But once operational, hyperscale facilities are highly automated and run by surprisingly small permanent staffs relative to their footprint and power load. A multi-building campus consuming hundreds of megawatts may employ only a few dozen to low hundreds of full-time workers, focused on security, facilities management, and network operations. For rural counties weighing tax abatements and infrastructure upgrades, the gap between short-term construction labor and modest permanent payroll becomes a central economic-development question.
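The gap described above is easy to make concrete. Here is a minimal back-of-envelope sketch; all project figures are purely hypothetical illustrations, not data from any specific facility:

```python
# Back-of-envelope sketch of the construction-vs-permanent jobs gap.
# All figures are illustrative assumptions, not data from any real project.

def jobs_summary(campus_mw, construction_jobs, construction_months,
                 permanent_jobs):
    """Return the simple ratios a county board might ask a developer for."""
    return {
        "permanent_jobs_per_mw": permanent_jobs / campus_mw,
        "construction_job_years": construction_jobs * construction_months / 12,
        "permanent_share_of_peak": permanent_jobs / construction_jobs,
    }

# Hypothetical 300 MW campus: 800 trade jobs for 18 months, 120 permanent staff.
summary = jobs_summary(campus_mw=300, construction_jobs=800,
                       construction_months=18, permanent_jobs=120)
print(summary)
# permanent_jobs_per_mw = 0.4, construction_job_years = 1200.0,
# permanent_share_of_peak = 0.15
```

Under those assumptions, the campus yields fewer than one permanent job per two megawatts of load, which is exactly the ratio counties weighing abatements are starting to scrutinize.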

By elevating electricity price protection to a presidential talking point, the administration effectively validates this third pillar. What began as local testimony at zoning meetings is now part of national energy policy framing: the principle that ordinary households should not subsidize AI infrastructure through their power bills. That rhetorical shift transforms a local grievance into a broader political issue with statewide and federal implications.

This is no longer just a zoning fight. It is now a kitchen-table affordability issue. Which may be a good start.

The Uncomfortable Math

AI data centers run 24/7, require enormous continuous baseload power, often demand dedicated substations, and can trigger multi-billion-dollar transmission upgrades. In regulated utility regions, those upgrades may be socialized across ratepayers unless cost allocation rules are enforced.

That is the central fear: even if tech companies pay for direct interconnection, broader grid reinforcement costs may still reach residential customers. If “ratepayer protection” pledges gain traction, this would mark a major federal acknowledgement that the risk is politically real.
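A rough sketch shows how quickly socialized reinforcement costs translate into a bill line item. Every number below is an illustrative assumption, not a figure from any actual rate case:

```python
# Illustrative sketch of how a socialized grid upgrade shows up on bills.
# Numbers are assumptions for the arithmetic, not a rate-case estimate.

def monthly_bill_impact(upgrade_cost, residential_share, customers, years):
    """Spread the residential-allocated portion of an upgrade across
    customers as a flat monthly surcharge (ignoring financing costs)."""
    allocated = upgrade_cost * residential_share
    return allocated / customers / (years * 12)

# Hypothetical: $2B of reinforcement, 40% allocated to the residential class,
# 1.5M residential customers, recovered over 20 years.
impact = monthly_bill_impact(2_000_000_000, 0.40, 1_500_000, 20)
print(f"${impact:.2f}/month")  # about $2.22/month before carrying costs
```

A couple of dollars a month sounds small until it is multiplied across several projects and compounded by financing, which is why activists demand the allocation rules in writing.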

Why This Is Bigger Than Trump

Governors in data-center-heavy states have also expressed concern. Utilities want load growth but fear rate shock. Grid operators face pressure to accelerate capacity procurement without triggering bill spikes. Grassroots activists have argued the AI buildout is outpacing responsible grid planning — and that argument has now moved from local meetings to national politics.

Whether any president—including Trump—can truly compel hyperscale tech firms to absorb rising power and infrastructure costs remains uncertain. Without formal regulation, enforcement tools are limited to negotiation, procurement leverage, and public pressure, all of which depend on the companies’ strategic interests.

Voluntary pledges can signal cooperation but lack binding force, especially if market conditions shift. The Trump announcement also raises a political question: does the “pledge” represent a balancing act inside the administration between economic populists and China hawks like Peter Navarro, often associated with industrial-policy cost discipline, and pro-AI growth lobbyists such as Silicon Valley’s AI Viceroy David Sacks? If so, the commitment may reflect an internal compromise as much as an external policy toward accelerationist hyperscalers.

Data-center growth is turning electricity affordability into a geopolitical issue, not just a local zoning fight. When hyperscalers drop a 100–500 MW load into a market, they can tighten reserve margins, push up wholesale prices, and force expensive transmission and distribution upgrades—costs that governments then have to allocate between the new entrant and everyone else. That same demand can crowd out electrification priorities (heat pumps, EVs, industrial decarbonization) or trigger emergency procurement of “firm” power—often gas—because reliability deadlines don’t wait for ideal renewable buildouts.

We are way past McDonald’s on the Champs-Élysées

This is where “net zero” starts to look like it’s in the rear-view mirror. Many jurisdictions still talk about decarbonization, but the near-term political imperative is keeping the lights on and bills stable. If the choice is between fast AI load growth and strict emissions trajectories, the operational reality in many grids is that fossil backup and accelerated thermal approvals re-enter the picture—sometimes explicitly, sometimes quietly. Meanwhile, countries with abundant cheap power (hydro, nuclear, subsidized gas) gain leverage as preferred data-center destinations, while constrained grids face moratoria, queue rationing, and public backlash.

The pattern is global: hyperscale load competes directly with electrification goals such as EV adoption, heat pumps, and industrial decarbonization, and reliability timelines often push utilities toward fast, firm capacity, frequently gas, because intermittent renewables and storage cannot always be deployed quickly enough.

In that sense, Trump’s choices increasingly resemble a classic “guns and butter” dilemma. Policymakers must balance the strategic push for AI infrastructure and digital competitiveness against long-term climate commitments. While net-zero targets remain official policy in many jurisdictions, near-term choices often prioritize keeping power reliable and affordable, even if that means slowing emissions progress. The tension does not necessarily mean decarbonization disappears, but it underscores the difficulty of advancing both rapid AI build-out and strict net-zero trajectories simultaneously under real-world grid constraints.

Ratepayers Get the Immediate Proof: Utility Bills

If the White House advances voluntary ratepayer-protection pledges, several trajectories could unfold. Technology companies may publicly commit to absorbing incremental grid and infrastructure costs, framing the move as responsible corporate citizenship. Personally, I don’t think Trump actually believes it, and I fully expect that the teleprompter will say one thing and then, in a classic Trump aside, he will undercut the speechwriters.

Utilities, facing rising capital requirements, could press for clearer cost-allocation rules to ensure large-load customers bear system expansion expenses. State public-utility commissions might reopen tariffs and special-contract pricing for hyperscale users, testing how far voluntary commitments translate into enforceable rate structures.

Meanwhile, grassroots groups are likely to demand transparent accounting to verify that ordinary customers are insulated from price impacts. Yet the full economic value of any pledge will emerge only over years of build-out and rate cases—long after the current administration, and Trump himself, are no longer in office.

For the moment, the debate has shifted. Grassroots opposition is no longer just about land or water. It is about who pays when AI reshapes the grid — and now the president is talking about it.

Let’s say I’m wrong and Trump is serious about reining in AI. If Trump were able to make such a policy stick, it could mark a broader shift in how governments confront the external costs of rapid AI expansion. Requiring hyperscalers to internalize infrastructure and power burdens could slow the breakneck build-out that fuels large-scale model training and synthetic media proliferation.

For artists and performers, that deceleration could matter. The fight over voice, likeness, and identity—already highlighted by figures such as Brad Pitt and Tom Cruise being ripped off by China’s Seedance 2.0—centers on protecting human personhood from industrial-scale replication. A structural slowdown in AI growth would not end that conflict, but it could rebalance leverage, giving creators, unions, and policymakers more time to establish enforceable guardrails.

Infrastructure, Not Aspiration: Why Permissioned AI Begins With a Hard Reset

Paul Sinclair’s framing of generative music AI as a choice between “open studios” and permissioned systems makes a basic category mistake. Consent is not a creative philosophy or a branding position. It is a systems constraint. You cannot “prefer” consent into existence. A permissioned system either enforces authorization at the level where machine learning actually occurs—or it does not exist at all.

That distinction matters not only for artists, but for the long-term viability of AI companies themselves. Platforms built on unresolved legal exposure may scale quickly, but they do so on borrowed time. Systems built on enforceable consent may grow more slowly at first, but they compound durability, defensibility, and investor confidence over time. Legality is not friction. It is infrastructure. It’s a real “eat your vegetables” moment.

The Great Reset

Before any discussion of opt-in, licensing, or future governance, one prerequisite must be stated plainly: a true permissioned system requires a hard reset of the model itself. A model trained on unlicensed material cannot be transformed into a consent-based system through policy changes, interface controls, or aspirational language. Once unauthorized material is ingested and used for training, it becomes inseparable from the trained model. There is no technical “undo” button.

The debate is often framed as openness versus restriction, innovation versus control. That framing misses the point. The real divide is whether a system is built to respect authorization where machine learning actually happens. A permissioned system cannot be layered on top of models trained without permission, nor can it be achieved by declaring legacy models “deprecated.” Machine learning systems do not forget unless they are reset. The purpose of a trained model is remembering—preserving statistical patterns learned from its data—not forgetting. Models persist, shape downstream outputs, and retain economic value long after they are removed from public view. Administrative terminology is not remediation.

Recent industry language about future “licensed models” implicitly concedes this reality. If a platform intends to operate on a consent basis, the logical consequence is unavoidable: permissioned AI begins with scrapping the contaminated model and rebuilding from zero using authorized data only.

Why “Untraining” Does Not Solve the Problem

Some argue that problematic material can simply be removed from an existing model through “untraining.” In practice, this is not a reliable solution. Modern machine-learning systems do not store discrete copies of works; they encode diffuse statistical relationships across millions or billions of parameters. Once learned, those relationships cannot be surgically excised with confidence. It’s not Harry Potter’s Pensieve.

Even where partial removal techniques exist, they are typically approximate, difficult to verify, and dependent on assumptions about how information is represented internally. A model may appear compliant while still reflecting patterns derived from unauthorized data. For systems claiming to operate on affirmative permission, approximation is not enough. If consent is foundational, the only defensible approach is reconstruction from a clean, authorized corpus.

The Structural Requirements of Consent

Once a genuine reset occurs, the technical requirements of a permissioned system become unavoidable.

Authorized training corpus. Every recording, composition, and performance used for training must be included through affirmative permission. If unauthorized works remain, the model remains non-consensual.

Provenance at the work level. Each training input must be traceable to specific authorized recordings and compositions with auditable metadata identifying the scope of permission.

Enforceable consent, including withdrawal. Authorization must allow meaningful limits and revocation, with systems capable of responding in ways that materially affect training and outputs.

Segregation of licensed and unlicensed data. Permissioned systems require strict internal separation to prevent contamination through shared embeddings or cross-trained models.

Transparency and auditability. Permission claims must be supported by documentation capable of independent verification. Transparency here is engineering documentation, not marketing copy.

These are not policy preferences. They are practical consequences of a consent-based architecture.
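What work-level provenance and revocable consent imply in practice can be sketched as a minimal record type. The field names and logic here are hypothetical illustrations, not any platform’s actual schema:

```python
# Minimal sketch of a work-level consent/provenance record.
# Field names and semantics are hypothetical; a real system would follow an
# agreed metadata standard and rights-management workflow.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class ConsentRecord:
    work_id: str                 # identifier for the recording/composition
    rights_holder: str
    scope: list = field(default_factory=list)  # e.g. ["training"]
    granted: date = None
    revoked: date = None         # withdrawal must be representable, not just logged

    def permits_training(self, as_of: date) -> bool:
        """Training is authorized only inside an unrevoked, in-scope grant."""
        if self.granted is None or "training" not in self.scope:
            return False
        return self.revoked is None or as_of < self.revoked

rec = ConsentRecord("ISRC-XX-123", "Example Artist",
                    scope=["training"], granted=date(2025, 1, 1))
print(rec.permits_training(date(2025, 6, 1)))  # True until revoked
```

The design point is that revocation is a first-class field that changes what the system is allowed to do, not a note in a terms-of-service archive.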

The Economic Reality—and Upside—of Reset

Rebuilding models from scratch is expensive. Curating authorized data, retraining systems, implementing provenance, and maintaining compliance infrastructure all require significant investment. Not every actor will be able—or willing—to bear that cost. But that burden is not an argument against permission. It is the price of admission.

Crucially, that cost is also largely non-recurring. A platform that undertakes a true reset creates something scarce in the current AI market: a verifiably permissioned model with reduced litigation risk, clearer regulatory posture, and greater long-term defensibility. Over time, such systems are more likely to attract durable partnerships, survive scrutiny, and justify sustained valuation.

Throughout technological history, companies that rebuilt to comply with emerging legal standards ultimately outperformed those that tried to outrun them. Permissioned AI follows the same pattern. What looks expensive in the short term often proves cheaper than compounding legal uncertainty.

Architecture, Not Branding

This is why distinctions between “walled garden,” “opt-in,” or other permission-based labels tend to collapse under technical scrutiny. Whatever the terminology, a system grounded in authorization must satisfy the same engineering conditions—and must begin with the same reset. Branding may vary; infrastructure does not.

Permissioned AI is possible. But it is reconstructive, not incremental. It requires acknowledging that past models are incompatible with future claims of consent. It requires making the difficult choice to start over.

The irony is that legality is not the enemy of scale—it is the only path to scale that survives. Permission is not aspiration. It is architecture.

Grassroots Revolt Against Data Centers Goes National: Water Use Now the Flashpoint

Over the last two weeks, grassroots opposition to data centers has moved from sporadic local skirmishes to a recognizable national pattern. While earlier fights centered on land use, noise, and tax incentives, the current phase is more focused and more dangerous for developers: water.

Across multiple states, residents are demanding to see the “water math” behind proposed data centers—how much water will be consumed (not just withdrawn), where it will come from, whether utilities can actually supply it during drought conditions, and what enforceable reporting and mitigation requirements will apply. In arid regions, water scarcity is an obvious constraint. But what’s new is that even in traditionally water-secure states, opponents are now framing data centers as industrial-scale consumptive users whose needs collide directly with residential growth, agriculture, and climate volatility.
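The “water math” residents are demanding reduces to simple arithmetic once a consumption intensity is assumed. A hedged sketch, with illustrative numbers only; actual figures vary widely by cooling design and climate:

```python
# Sketch: converting a campus's electrical load into consumptive water use via
# an assumed water-usage-effectiveness (WUE) figure. The WUE and load values
# are illustrative, not measurements from any real facility.

def annual_consumption_megaliters(load_mw, wue_l_per_kwh, utilization=1.0):
    """Consumptive water (megaliters/year) for a load running continuously."""
    kwh_per_year = load_mw * 1000 * 8760 * utilization  # 8,760 hours/year
    return kwh_per_year * wue_l_per_kwh / 1_000_000

# Hypothetical 200 MW campus with evaporative cooling at 1.8 L/kWh:
ml = annual_consumption_megaliters(200, 1.8)
print(f"{ml:,.0f} ML/year")  # roughly 3,154 ML/year
```

Note the sketch measures consumption, water that evaporates and leaves the basin, which is the number opponents say withdrawal permits obscure.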

The result: moratoria, rezoning denials, delayed hearings, task forces, and early-stage organizing efforts aimed at blocking projects before entitlements are locked in.

Below is a snapshot of how that opposition has played out state by state over the last two weeks.

State-by-State Breakdown

Virginia  

Virginia remains ground zero for organized pushback.

Botetourt County: Residents confronted the Western Virginia Water Authority over a proposed Google data center, pressing officials about long-term water supply impacts and groundwater sustainability.  

Hanover County (Richmond region): The Planning Commission voted against recommending rezoning for a large multi-building data center project.  

State Legislature: Lawmakers are advancing reform proposals that would require water-use modeling and disclosure.

Georgia  

Metro Atlanta / Middle Georgia: Local governments’ recruitment of hyperscale facilities is colliding with resident concerns.  

DeKalb County: An extended moratorium reflects a pause-and-rewrite-the-rules strategy.  

Monroe County / Forsyth area: Data centers have become a local political issue.

Arizona  

The state has moved to curb groundwater use in rural basins via new regulatory designations requiring tracking and reporting.  

Local organizing frames AI data centers as unsuitable for arid regions.

Maryland  

Prince George’s County (Landover Mall site): Organized opposition centered on environmental justice and utility burdens.  

Authorities have responded with a pause/moratorium and a task force.

Indiana  

Indianapolis (Martindale-Brightwood): Packed rezoning hearings forced extended timelines.  

Greensburg: Overflow crowds framed the fight around water-user rankings.

Oklahoma  

Luther (OKC metro): Organized opposition before formal filings.

Michigan  

Broad local opposition with water and utility impacts cited.  

State-level skirmishes over incentives intersect with water-capacity debates.

North Carolina  

Apex (Wake County area): Residents object to strain on electricity and water.

Wisconsin & Pennsylvania 

Corporate messaging shifts in response to opposition; Microsoft acknowledged infrastructure and water burdens.

The Through-Line: “Show Us the Water Math”


Across these states, the grassroots playbook has converged:

Pack the hearing.  

Demand water-use modeling and disclosure.  

Attack rezoning and tax incentives.  

Force moratoria until enforceable rules exist.

Residents are demanding hard numbers: consumptive losses, aquifer drawdown rates, utility-system capacity, drought contingencies, and legally binding mitigation.

Why This Matters for AI Policy

This revolt exposes the physical contradiction at the heart of the AI infrastructure build-out: compute is abstract in policy rhetoric but experienced locally as land, water, power, and noise.

Communities are rejecting a development model that externalizes its physical costs onto local water systems and ratepayers.

Water is now the primary political weapon communities are using to block, delay, and reshape AI infrastructure projects.

Read the local news:

America’s AI Boom Is Running Into An Unplanned Water Problem (Ken Silverstein/Forbes)

Residents raise water concerns over proposed Google data center (Allyssa Beatty/WDBJ7 News)

How data centers are rattling a Georgia Senate special election (Greg Bluestein/Atlanta Journal Constitution)

‘A perfect, wild storm’: widely loathed datacenters see little US political opposition (Tom Perkins/The Guardian)

Hanover Planning Commission votes to deny rezoning request for data center development (Joi Fultz/WTVR)

Microsoft rolls out initiative to limit data-center power costs, water use impact (Reuters)

Grass‑Roots Rebellion Against Data Centers and Grid Expansion

A grass‑roots “data center and electric grid rebellion” is emerging across the United States as communities push back against the local consequences of AI‑driven infrastructure expansion. Residents are increasingly challenging large‑scale data centers and the transmission lines needed to power them, citing concerns about enormous electricity demand, water consumption, noise pollution, land use, declining property values, and opaque approval processes. What were once routine zoning or utility hearings are now crowded, contentious events, with citizens organizing quickly and sharing strategies across counties and states.

This opposition is no longer ad hoc. In Northern Virginia—often described as the global epicenter of data centers—organized campaigns such as the Coalition to Protect Prince William County have mobilized voters, fundraised for local elections, demanded zoning changes, and challenged approvals in court. In Maryland’s Prince George’s County, resistance has taken on a strong environmental‑justice framing, with groups like the South County Environmental Justice Coalition arguing that data centers concentrate environmental and energy burdens in historically marginalized communities and calling for moratoria and stronger safeguards.

Nationally, consumer and civic groups are increasingly coordinated, using shared data, mapping tools, and media pressure to argue that unchecked data‑center growth threatens grid reliability and shifts costs onto ratepayers. Together, these campaigns signal a broader political reckoning over who bears the costs of the AI economy.

More State Battlegrounds

Here’s a snapshot of grass-roots opposition in Texas, Louisiana, and Nevada:

Texas

Texas has some of the most active and durable local opposition, driven by land use, water, and transmission corridors.

  • Hill Country & Central Texas (Burnet, Llano, Gillespie, Blanco Counties)
    Grass-roots groups formed initially around high-voltage transmission lines (765 kV) tied to load growth, now explicitly linking those lines to data center demand. Campaigns emphasize:
    • rural land fragmentation
    • wildfire risk
    • eminent domain abuse
    • lack of local benefit
      These groups are often informal coalitions of landowners rather than NGOs, but they coordinate testimony, public-records requests, and local elections.
  • DFW & North Texas
    Neighborhood associations opposing rezoning for hyperscale facilities focus on noise (backup generators), property values, and school-district tax distortions created by data-center abatements.
  • ERCOT framing
    Texas groups uniquely argue that data centers are socializing grid instability risk onto residential ratepayers while privatizing upside—an argument that resonates with conservative voters.

Louisiana

Opposition is newer but coalescing rapidly, often tied to petrochemical and LNG resistance networks.

  • North Louisiana & Mississippi River Corridor
    Community groups opposing new data centers frame them as:
    • “energy parasites” tied to gas plants
    • extensions of an already overburdened industrial corridor
    • threats to water tables and wetlands
      Organizers often overlap with environmental-justice and faith-based coalitions that previously fought refineries and export terminals.
  • Key tactic: reframing data centers as industrial facilities, not “tech,” triggering stricter land-use scrutiny.

Nevada

Nevada opposition centers on water scarcity and public-land use.

  • Clark County & Northern Nevada
    Residents and conservation groups question:
    • water allocations for evaporative cooling
    • siting near public or BLM-managed land
    • grid upgrades subsidized by ratepayers for private AI firms
  • Distinct Nevada argument: data centers compete directly with housing and tribal water needs, not just environmental values.

The Data Center Rebellion is Here and It’s Reshaping the Political Landscape (Washington Post)

Residents protest high-voltage power lines that could skirt Dinosaur Valley State Park (ALEJANDRA MARTINEZ AND PAUL COBLER/Texas Tribune)

US Communities Halt $64B Data Center Expansions Amid Backlash (Lucas Greene/WebProNews)

Big Tech’s fast-expanding plans for data centers are running into stiff community opposition (Marc Levy/Associated Press)

Data center ‘gold rush’ pits local officials’ hunt for new revenue against residents’ concerns (Alander Rocha/Georgia Record)

DOJ Authority and the “Because China” Trump AI Executive Order

When an Executive Order purports to empower the Department of Justice to sue states, the stakes go well beyond routine federal–state friction.  In the draft Trump AI Executive Order “Eliminating State Law Obstruction of National AI Policy,” DOJ is directed to challenge state AI laws that purportedly “interfere with national AI innovation,” whatever that means.  It sounds an awful lot like laws that interfere with Google’s business model. This is not mere oversight—it operates as an in terrorem clause, signaling that states regulating AI may face federal litigation driven at least as much by the private interests of the richest corporations in commercial history as by public policy.

AI regulation sits squarely in longstanding state police powers: consumer protection, public safety, impersonation harms, utilities, land use, and labor conditions.  Crucially, states also control the electrical and zoning infrastructure that AI data centers depend on, like, say, putting a private nuclear reactor next to your house.  Directing DOJ to attack these laws effectively deputizes the federal government as the legal enforcer for a handful of private AI companies seeking unbridled “growth” without engaging in the legislative process. Meaning you don’t get a vote. All this against the backdrop of one of the biggest economic bubbles since the last time these companies nearly tanked the U.S. economy.

This inversion is constitutionally significant. 

Historically, DOJ sues states to vindicate federal rights or enforce federal statutes—not to advance the commercial preferences of private industries.  Here, the EO appears to convert DOJ into a litigation shield for private companies looking to avoid state oversight altogether.  Under Youngstown Sheet & Tube Company, et al. v. Charles Sawyer, Secretary of Commerce, the President lacks authority to create new enforcement powers without congressional delegation, and under the major questions doctrine (West Virginia v. EPA), a sweeping reallocation of regulatory power requires explicit statutory grounding from Congress, including the Senate. That would be the Senate that resoundingly stripped the last version of the AI moratorium from the One Big Beautiful Bill Act by a vote of 99-1 against.

There are also First Amendment implications.  Many state AI laws address synthetic impersonation, deceptive outputs, and risks associated with algorithmic distribution.  If DOJ preempts these laws, the speech environment becomes shaped not by public debate or state protections but by executive preference and the operational needs of the largest AI platforms. Courts have repeatedly warned that government cannot structure the speech ecosystem indirectly through private intermediaries (Bantam Books v. Sullivan).

Seen this way, the Trump AI EO’s litigation directive is not simply a jurisdictional adjustment—it is the alignment of federal enforcement power with private economic interests, backed by the threat of federal lawsuits against states. These provisions warrant careful scrutiny before they become the blueprint for AI governance moving forward.

Ghosts in the Machine: How AI’s “Future” Runs on a 1960s Grid

The smart people want us to believe that artificial intelligence is the frontier and apotheosis of human progress. They sell it as transformative and disruptive. That’s probably true as far as it goes, but it doesn’t go that far. In practice, the infrastructure that powers it often dates back to a different era, and therein lies the paradox: much of the electricity powering AI still flows through the bones of mid‑20th-century engineering. Wouldn’t it be a good thing if they innovated a new energy source before they crowd out the humans?

The Current Generation Energy Mix — And What AI Adds

To see that paradox, start with the U.S. national electricity mix:

– In 2023, the U.S. generated about 4,178 billion kWh of electricity at utility-scale facilities. Of that, 60% came from fossil fuels (coal, natural gas, petroleum, other gases), 19% came from nuclear, and 21% from renewables (wind, solar, hydro).
– Nuclear power remains the backbone of zero-carbon baseload: it supplies around 18–19% of U.S. electricity, and nearly half of all non‑emitting generation.
– In 2025, clean sources (nuclear + renewables) are edging upward. According to Ember, in March 2025 fossil fuels fell below 50% of U.S. electricity generation for the first time (49.2%), marking a historic shift.
– Yet more than half of U.S. power still comes from carbon-emitting sources in most months.
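As a quick sanity check, the 2023 percentages above can be converted into absolute terawatt-hours with a few lines of Python. All inputs are the figures quoted in the text; nothing here is new data, and small discrepancies are just rounding.

```python
# 2023 U.S. utility-scale generation, as cited above.
TOTAL_TWH_2023 = 4_178  # ~4,178 billion kWh

# Shares of the mix, also as cited above.
mix = {"fossil": 0.60, "nuclear": 0.19, "renewables": 0.21}

# The three shares should account for essentially all generation.
assert abs(sum(mix.values()) - 1.0) < 1e-9

# Convert each share into absolute terawatt-hours.
twh = {source: round(TOTAL_TWH_2023 * share) for source, share in mix.items()}
print(twh)  # fossil ~2507 TWh, nuclear ~794 TWh, renewables ~877 TWh
```

In other words, fossil generation alone (~2,500 TWh) still exceeds nuclear and renewables combined.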

Meanwhile, AI’s demand is surging:

– The Department of Energy estimates that data centers consumed 4.4% of U.S. electricity in 2023 (176 TWh) and projects this to rise to 6.7–12% by 2028 (325–580 TWh).
– An academic study of 2,132 U.S. data centers (2023–2024) found that these facilities accounted for more than 4% of national power consumption, with 56% coming from fossil sources, and emitted more than 105 million tons of CO₂e (approximately 2.18% of U.S. emissions in 2023). 
– That study also concluded: data centers’ carbon intensity (CO₂ per kWh) is 48% higher than the U.S. average.
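These headline numbers hang together arithmetically, which is worth verifying. The sketch below uses only the figures quoted above (DOE's 176 TWh and the study's 105 million tons CO₂e); the modest gap versus DOE's stated 4.4% share likely reflects a different denominator in DOE's methodology.

```python
# Figures quoted above: DOE consumption estimate and the 2,132-facility study.
US_GENERATION_TWH_2023 = 4_178    # total utility-scale generation
DATACENTER_TWH_2023 = 176         # DOE estimate of data-center consumption
DATACENTER_CO2E_TONNES = 105e6    # study: >105 million tons CO2e emitted

# Data centers' share of national electricity, per these inputs.
share = DATACENTER_TWH_2023 / US_GENERATION_TWH_2023
print(f"{share:.1%}")             # ~4.2%, in line with the stated 4.4%

# Implied carbon intensity of data-center power: ~0.60 kg CO2e per kWh.
intensity_kg_per_kwh = DATACENTER_CO2E_TONNES * 1000 / (DATACENTER_TWH_2023 * 1e9)

# If that is 48% above the U.S. average, the implied average grid intensity
# is roughly 0.40 kg/kWh, consistent with published U.S. figures.
implied_us_avg = intensity_kg_per_kwh / 1.48
```

So the study's "48% dirtier than average" claim and its absolute emissions figure are mutually consistent, given where these facilities draw their power.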

So: AI’s power demands are no small increment—they threaten to stress a grid still anchored in older thermal technologies.

Global data center map: https://www.datacentermap.com

Why “1960s Infrastructure” Isn’t Hyperbole

When I say AI is running on 1960s technology, I mean several things:

1. Thermal generation methods remain largely unchanged, according to the EPA.  Coal-fired steam turbines and natural gas combined-cycle plants still dominate.

2. Many plants are old and aging.  The average age of U.S. coal plants is about 43 years; some facilities are over 60. Transmission lines and grid control systems often date from mid- to late-20th-century planning.

3. Nuclear’s “modern” edge is historical.  Most U.S. nuclear reactors in operation were ordered in the 1960s–1970s and built over subsequent decades. In other words: the commercial installed base is old.

The Rickover Motif: Nuclear, Legacy, and Power Politics

To criticize AI’s reliance on legacy infrastructure, one powerful symbol is Admiral Hyman G. Rickover, the man often called the “Father of the Nuclear Navy.” Rickover’s work in the 1950s and 1960s not only shaped naval propulsion but also influenced the civilian nuclear sector.

Rickover pushed for rigorous engineering standards, standardization, safety protocols, and institutional discipline in building reactors. After the success of naval nuclear systems, Rickover was assigned by the Atomic Energy Commission to influence civilian nuclear power development.

Rickover famously required applicants to the nuclear submarine service to have “fixed their own car.” That speaks to technical literacy, self-reliance, and understanding systems deeply, qualities today’s AI leaders often lack. I mean seriously—can you imagine Sam Altman on a mechanic’s dolly covered in grease?

As the U.S. Navy celebrates its 250th anniversary, it’s ironic that modern AI ambitions lean on reactors whose protocols, safety cultures, and control logic remain deeply shaped by Rickover-era thinking from…yes…1947. And remember, Admiral Rickover had to transition the hidebound Navy to nuclear power, a technology then newly harnessed and poorly understood, and away from diesel. Diesel. That’s innovation, and it required a hugely entrepreneurial leader.

The Hypocrisy of Innovation Without Infrastructure

AI companies claim disruption but site data centers wherever grid power is cheapest — often near legacy thermal or nuclear plants. They promote “100% renewable” branding via offsets, but in real time pull electricity from fossil-heavy grids. Dense compute loads aggravate transmission congestion. FERC and NERC now list hyperscale data centers as emerging reliability risks. 

The energy costs AI doesn’t pay — grid upgrades, transmission reinforcement, reserve margins — are socialized onto ratepayers and bondholders. If the AI labs would like to use their multibillion dollar valuations to pay off that bond debt, that’s a conversation. But they don’t want that, just like they don’t want to pay for the copyrights they train on.

Innovation without infrastructure isn’t innovation — it’s rent-seeking. Shocking, I know…Silicon Valley engaging in rent-seeking and corporate welfare.

The 1960s Called. They Want Their Grid Back.

We cannot build the future on the bones of the past. If AI is truly going to transform the world, its promoters must stop pretending that plugging into a mid-century grid is good enough. The industry should lead on grid modernization, storage, and advanced generation, not free-ride on infrastructure our grandparents paid for.

Admiral Rickover understood that technology without stewardship is just hubris. He built a nuclear Navy because new power required new systems and new thinking. That lesson is even more urgent now.

Until it is learned, AI will remain a contradiction: the most advanced machines in human history, running on steam-age physics and Cold War engineering.


From Plutonium to Prompt Engineering: Big Tech’s Land Grab at America’s Nuclear Sites–and Who’s Paying for It?

In a twist of post–Cold War irony, the same federal sites that once forged the isotopes of nuclear deterrence are now poised to fuel the arms race of artificial intelligence under the leadership of Special Government Employee and Silicon Valley Viceroy David Sacks. Under a new Department of Energy (DOE) initiative, 16 legacy nuclear and lab sites — including Savannah River, Idaho National Lab, and Oak Ridge, Tennessee — are being opened to private companies to host massive AI data centers. That’s right–Tennessee, where David Sacks is riding roughshod over the ELVIS Act.

But as this techno-industrial alliance gathers steam, one question looms large: Who benefits — and how will the American public be compensated for leasing its nuclear commons to the world’s most powerful corporations? Spoiler alert: We won’t.

A New Model, But Not the Manhattan Project

This program is being billed in headlines as a “new Manhattan Project for AI.” But that comparison falls apart quickly. The original Manhattan Project was:
– Owned by the government
– Staffed by public scientists
– Built for collective defense

Today’s AI infrastructure effort is:
– Privately controlled
– Driven by monopolies and venture capital
– Structured to avoid transparency and public input
– Built on free leases of public land, with private nuclear reactors

Call it the Manhattan Project in reverse — not national defense, but national defense capture.

The Art of the Deal: Who gets what?

What Big Tech Is Getting

– Access to federal land already zoned, secured, and wired
– Exemption from state and local permitting
– Bypass of grid congestion via nuclear-ready substations
– DOE’s help fast-tracking nuclear microreactors and small modular reactors (SMRs)
– Potential sovereign AI training enclaves, shielded from export controls and oversight

And all of it is being made available to the private companies known as the “frontier labs”: Microsoft, Oracle, Amazon, OpenAI, Anthropic, xAI — the very firms at the center of the AI race.

What the Taxpayer Gets (Maybe)

Despite this extraordinary access, almost nothing is disclosed about how the public is compensated. No known revenue-sharing models. No guaranteed public compute access. No equity. No royalties.

– Land lease payments? Not disclosed. Probably none.
– Local tax revenue? Minimal (federal lands are exempt).
– Infrastructure benefit sharing? Unclear or limited.

It’s all being negotiated quietly, under vague promises of “national competitiveness.”

Why AI Labs Want DOE Sites

Frontier labs like OpenAI and Anthropic — and their cloud sponsors — need:
– Gigawatts of energy
– Secure compute environments
– Freedom from export rules and Freedom of Information Act requests
– Permitting shortcuts and national branding

The DOE sites offer all of that — plus built-in federal credibility. The same labs currently arguing in court that their training practices are “fair use” now claim they are defenders of democracy training AI on taxpayer-built land.

This Isn’t the Manhattan Project — It’s the Extraction Economy in a Lab Coat

The tech industry loves to invoke patriotism when it’s convenient — especially when demanding access to federal land, nuclear infrastructure, or diplomatic cover from the EU’s AI Act. But let’s be clear:

This isn’t the Manhattan Project. Or rather, we should hope it isn’t, because that one didn’t end well and still hasn’t.
It’s not public service.
It’s Big Tech lying about fair use, wrapped in an American flag — and for all we know, it might be the first time David Sacks ever saw one.

When companies like OpenAI and Microsoft claim they’re defending democracy while building proprietary systems on DOE nuclear land, we’re not just being gaslit — we’re being looted.

If the AI revolution is built on nationalizing risk and privatizing power, it’s time to ask whose country this still is — and who gets to turn off the lights.

Beyond Standard Oil: How the AI Action Plan Made America a Command Economy for Big Tech That You Will Pay For

When the White House requested public comments earlier this year on how the federal government should approach artificial intelligence, thousands of Americans—ranging from scientists to artists, labor leaders to civil liberties advocates—responded with detailed recommendations. Yet when America’s AI Action Plan was released today, it became immediately clear that those voices were largely ignored. The plan reads less like a response to public input and more like a pre-written blueprint drafted in collaboration with the very corporations it benefits. The priorities, language, and deregulatory thrust suggest that the real consultations happened behind closed doors—with Big Tech executives, not the American people.

In other words, business as usual.

By any historical measure—Standard Oil, AT&T, or even the Cold War military-industrial complex—the Trump Administration’s “America’s AI Action Plan” represents a radical leap toward a command economy built for and by Big Tech. Only this time, there are no rate regulations, no antitrust checks, and no public obligations—just streamlined subsidies, deregulation, and federally orchestrated dominance by a handful of private AI firms.

“Frontier Labs” as National Champions

The plan doesn’t pretend to be neutral. It picks winners—loudly. Companies like OpenAI, Anthropic, Meta, Microsoft, and Google are effectively crowned as “national champions,” entrusted with developing the frontier of artificial intelligence on behalf of the American state.

– The National AI Research Resource (NAIRR) and National Science Foundation partnerships funnel taxpayer-funded compute and talent into these firms.
– Federal procurement standards now require models that align with “American values,” but only as interpreted by government-aligned vendors.
– These companies will receive priority access to compute in a national emergency, hard-wiring them into the national security apparatus.
– Meanwhile, so-called “open” models will be encouraged in name only—no requirement for training data transparency, licensing, or reproducibility.

This is not a free market. This is national champion industrial policy—without the regulation or public equity ownership that historically came with it.

Infrastructure for Them, Not Us

The Action Plan reads like a wishlist from Silicon Valley’s executive suites:

– Federal lands are being opened up for AI data centers and energy infrastructure.
– Environmental and permitting laws are gutted to accelerate construction of facilities for private use.
– A national electrical grid expansion is proposed—not to serve homes and public transportation, but to power hyperscaler GPUs for model training.
– There’s no mention of public access, community benefit, or rural deployment. This is infrastructure built with public expense for private use.

Even during the era of Ma Bell, the public got universal service and price caps. Here? The public is asked to subsidize the buildout and then stand aside.

Deregulation for the Few, Discipline for the Rest

The Plan explicitly orders:
– Rescission of Biden-era safety and equity requirements.
– Reviews of FTC investigations to shield AI firms from liability.
– Withholding of federal AI funding from states that attempt to regulate the technology for safety, labor, or civil rights purposes.

Meanwhile, these same companies are expected to supply the military, detect cyberattacks, run cloud services for federal agencies, and set speech norms in government systems.

The result? An unregulated cartel tasked with executing state functions.

More Extreme Than Standard Oil or AT&T

Let’s be clear: Standard Oil was broken up. AT&T had to offer regulated universal service. Lockheed, Raytheon, and the Cold War defense contractors were overseen by procurement auditors and GAO enforcement.

This new AI economy is more privatized than any prior American industrial model—yet more dependent on the federal government than ever before. It’s an inversion of free market principles wrapped in American flags and GPU clusters.

Welcome to the Command Economy—For Tech Oligarchs

There’s a word for this: command economy. But instead of bureaucrats in Soviet ministries, we now have a handful of unelected CEOs directing infrastructure, energy, science, education, national security, and labor policy—all through cozy relationships with federal agencies.

If we’re going to nationalize AI, let’s do it honestly—with public governance, democratic accountability, and shared benefit. But this halfway privatized, fully subsidized, and wholly unaccountable structure isn’t capitalism. It’s capture.