The AI Subsidy Is Over. Or Maybe It’s Just Beginning.


The current narrative says the “AI subsidy era” is ending. Prices are rising. Rate limits are tightening. Ads are creeping in. Enterprise tiers are replacing all-you-can-eat plans. In short: users will finally start paying what AI actually costs.

Hayden Field, writing in The Verge, tells us:

Earlier this month, millions of OpenClaw users woke up to a sweeping mandate: The viral AI agent tool, which this year took the worldwide tech industry by storm, had been severely restricted by Anthropic.

Anthropic, like other leading AI labs, was under immense pressure to lessen the strain on its systems and start turning a profit. So if the users wanted its Claude AI to power their popular agents, they’d have to start paying handsomely for the privilege.

“Our subscriptions weren’t built for the usage patterns of these third-party tools,” wrote Boris Cherny, head of Claude Code, on X. “We want to be intentional in managing our growth to continue to serve our customers sustainably long-term. This change is a step toward that.”

The announcement was a sign of the times. Investors have poured hundreds of billions of dollars into companies like OpenAI and Anthropic to help them scale and build out their compute. Now, they’re expecting returns. After years of offering cheap or totally free access to advanced AI systems, the bill is starting to come due — and downstream, users are beginning to feel the pinch.

That’s true, but it’s leaving out a lot.

Yes, the consumer subsidy—venture-backed underpricing of inference—may be winding down. But the broader subsidy system that made AI possible isn’t going away. It’s expanding. Just ask President Trump.

To understand why, you have to go back to the last great digital disruption.

From P2P to Streaming to AI

Start with Napster.

P2P didn’t just enable infringement. It rewired expectations. It taught users that all music should be available, instantly, for free. Why? Because there was gold in them long tails. Forget about supply and demand: we had infinite supply, so demand would take care of itself.


Not one artist, songwriter, label, or publisher in the history of recorded music was compensated for this shift. They were its involuntary financiers. Their catalogs created the demand, the network effects, and the user adoption that built the early internet music economy.

Streaming—think Spotify—didn’t reverse that logic. It formalized it. (Remember, streaming saved us from piracy and we should all be so grateful.) It actually transferred that involuntary financing from the P2P balance sheet to Spotify’s, and took it public.


Streaming platforms accepted a new baseline: the entire world’s repertoire must be available at all times, regardless of demand. That is a costly and structurally inefficient mandate, but it became the price of competing in a market shaped by P2P expectations. Licensing systems like the Mechanical Licensing Collective (MLC) were built to support that scale, but the underlying premise remained: total availability first, compensation second.

AI changes the game again.

AI Doesn’t Just Distribute Works. It Consumes Them.

P2P distributed music. Streaming licensed it. AI models ingest it.

That’s the critical difference.

Generative AI systems are trained on massive corpora that include copyrighted works, performances, and what we might call personhood signals—voice, style, tone, phrasing, and creative identity. These inputs are not just indexed or streamed. They are transmogrified (see what I did there) into model weights that can generate new outputs that compete with, mimic, or substitute for the originals.

So the role of the artist evolves:
    •    In P2P: unpaid distributor subsidy
    •    In streaming: underpaid inventory supplier
    •    In AI: uncompensated production input

That is not a marginal shift. It is a structural one.

The Real Subsidy Stack

When people say the “AI subsidy era is over,” they are usually talking about one thing: cheap access to compute.

But AI has always depended on a multi-layered subsidy stack:

    •    Creators – supply training data, cultural value, and identity signals without compensation or consent
    •    Users – supply prompts, feedback, and behavioral data that improve the models
    •    Communities – absorb land use, water consumption, and environmental costs
    •    Ratepayers – fund grid upgrades, transmission, and reliability for data center demand
    •    Venture capital – underwrites early losses to drive adoption and scale

The shift we are seeing now is not the end of subsidies. It’s a reallocation. Or as a cynic might say, it’s rearranging the deck chairs to hide the lifeboats.

Users may start paying more. But creators still aren’t being paid for training. Communities are still being asked to host infrastructure. And the physical footprint of AI is accelerating. Just ask President Trump.

The World Turned Upside Down

What makes this moment different is the scale of the buildout.

We are not just talking about apps anymore. We are talking about an industrial transformation:
    •    New data centers the size of small cities
    •    High-voltage transmission lines
    •    Water-intensive cooling systems
    •    Semiconductor supply chains
    •    And even discussions of new nuclear capacity to support compute demand

This is infrastructure on the scale of a national project, or more like national mobilization. But it is being built on top of a premise that has not been resolved: the uncompensated use of human creative work as training input.

That is the inversion: We are building power plants for systems that depend on not paying the people whose work makes those systems possible.

A Better Frame

The cleanest way to understand this is as a continuum:

P2P turned infringement into consumer expectation.
Streaming turned that expectation into platform infrastructure.
AI turns uncompensated authorship into industrial feedstock.

Or more bluntly:
The AI free ride is not ending. It is being re-invoiced. Users may now see higher prices. But the deeper subsidies—creative, environmental, and civic—remain off the books.

What Comes Next

If the industry is serious about “pricing AI correctly,” it cannot stop at compute.

It has to address:
    •    Compensation frameworks for training data
    •    Attribution and provenance standards
    •    Licensing models for style and voice
    •    Infrastructure cost allocation (who pays for the grid?)
    •    Governance of large-scale compute deployment

Otherwise, we are not exiting the subsidy era. We are doing what Big Tech lives for.

We are scaling it.

And this time, instead of a few server racks in a dorm room, we are building a global energy system around it.

Same Popcorn, Different Wrapper

In ancient Rome, Marcus Licinius Crassus was the wealthiest man alive. And he had a system. He owned real estate and he also owned the fire brigades. When a house caught fire, Crassus sent his men to the scene. But they didn’t rush in with water.

First, he made the owner an offer. Sell me your house for pennies. The house that is literally on fire. Agree to the price, and the fire would be put out. Refuse… and his fire brigade would simply watch it burn.

Some even whispered that Crassus’s men set fires themselves, just to create new ‘opportunities.’ Ya think?

It was ruthless. Ingenious. And it gave him his own kind of safe harbor. If you controlled the fire brigade… there was no liability. No regulator. No competition. Just profit. Because Crassus set the valuation.

Now—fast forward two thousand years. AI hyperscalers haven’t just rediscovered Crassus’s model. They’ve reimagined it.

The Valuation is the Thing

There is a moment in every cycle when the story stops even pretending to line up with the business. That moment usually shows up quietly at first, almost as a joke, and then all at once everyone realizes the joke is being taken seriously.

We may be there again.

Allbirds, a company that built its brand selling wool sneakers to a very specific kind of customer, is now pivoting into AI compute infrastructure. Not adjacent. Not evolutionary. Just a clean jump into GPUs and datacenters. The rebrand writes itself. NewBird AI.

If that sounds absurd, it should. But it should also feel familiar. The mistake is to focus on the technology. The technology is always real. The internet was real. AI is real. The mistake is to assume the valuation attached to that technology has anything to do with the underlying business. That part is almost always where things go sideways. The people. The ones who set the fires.

Fire Good, Valuations Bad

Look at the comps. Spotify sits around a one hundred billion dollar market cap. Universal Music Group is closer to thirty eight. Warner Music Group is around fifteen. The companies that own the music, the actual asset, the thing that endures, are worth a fraction of the company that packages and distributes it and will one day be replaced, just like streaming replaced CDs.

That is not a story about innovation. It is a story about what the market chooses to value.

Once you see that, the Allbirds pivot stops looking irrational. It starts looking like one of the only logical moves available. If the market assigns higher multiples to infrastructure, to platforms, to anything that can be described as scalable, then the rational response is to become that thing. Not because the company has any particular advantage in doing so, but because the category itself carries the valuation.

We have seen this movie before. In the late nineties, companies selling ordinary products wrapped themselves in the language of the internet. They were not retailers. They were platforms. They were not losing money. Oh no, no, no. They were scaling. They could IPO with four quarters of top line revenue. The technology stack became the story. The story became the valuation. The underlying business became almost incidental. Larry Ellison’s famous spoof Internet company, HeyIdiot.com was a “cash portal” that only sold one product, being shares of HeyIdiot.com stock at incrementally higher prices to even greater fools.

The systems built around those businesses grew increasingly complex. Layers of software justified layers of capital. At the same time, the basic economics often made less and less sense. Somewhere outside the pitch decks, the vulnerabilities were obvious. The infrastructure was fragile. The incentives were misaligned. But the narrative carried everything forward until it didn’t.

This cycle has its own vocabulary. Instead of platforms and portals, we have models and compute. Instead of e-commerce infrastructure, we have GPU clusters. The words are different. The behavior is not.

But somebody’s AI is not in on the joke…

“Part of their exploration into new ideas within the tech industry?” Say what?

The pattern is simple. Take something real and wrap it in something that can be described as infinite, like, you know, shelf space for the long tail. The wrapper gets the multiple. The underlying asset becomes an input cost. Over time, the market forgets the difference. Particularly with help from Mary Meeker.

That is how you end up with a distributor valued above the content it distributes. It is how you end up with a sneaker company presenting itself as a datacenter operator. It is how each cycle convinces itself that it has broken from the last one when it is mostly repeating it with better branding.

Same popcorn. Different wrapper.

None of this requires believing that AI is not important. It is. None of this requires believing that compute does not matter. It does. The question is not whether the technology is real. The question is why the valuation attached to it keeps drifting so far from the businesses claiming it.

There is a point where companies stop explaining how they make money and start explaining what category they belong to. That is usually the point where the market has shifted from pricing businesses to pricing narratives.

When that happens, the incentives become clear. You do not need to build the best company. You need to be seen as the right kind of company. You need the HeyIdiot wrapper.

So no, this is not about the macro environment. It is not about timing the cycle or reading the tea leaves of innovation.

It is simpler than that.

It is the valuation, stupid.

And yes, it is still stupid. But as Crassus might tell you, the house is also still on fire, mofo. What do you want to do about it?

The SXSW–PwC Report Is Polished—But It’s Still a Conference Echo Chamber of an Echo Chamber

The 2026 SXSW–PwC Insights Report is well-written, highly readable, and professionally assembled with lots of graphics. It succeeds at what it sets out to do: synthesize themes from a sprawling, interdisciplinary conference into something digestible for executives and strategists.

But it is important to be clear about what this document actually is—and what it is not.

It is not a study.
It is not an empirical analysis.
And it is certainly not the product of anything resembling peer review.

It is a reflection of conference discourse. And lunches. But at least they don’t mention “because China.”

The missing story: creators and labor

Perhaps the most notable gap—particularly given SXSW’s cultural footprint as a music festival that never paid a musician it couldn’t stiff—is the absence of a meaningful discussion of creators and labor.

Adding insult to injury, the report’s most conspicuous nod to the music business that spawned SXSW is the section titled “Replay vs. Breakout Hit,” a cute music metaphor for what is essentially a self-grading exercise. Why are we not surprised? For a conference rooted in the labor and culture of musicians, the report has remarkably little to say about musicians as workers or rights-holders. Or at all. It reads a bit like those tech offices that name their conference rooms after artists while inside those rooms people figure out how to disintermediate, devalue, or extract from the artists themselves. Not mentioning names but their initials are Google.

Technology throughout the report is framed as expanding capability, but not as transferring wealth.

There is little engagement with:
– whether creators are compensated or displaced
– how value flows through AI systems
– the asymmetry between platforms and individuals

This is not a minor omission. It goes to the core of whether the trends being described are sustainable—or extractive.

The “Replay vs. Breakout Hit” page is less a retrospective than a self-grading exercise. It does not test last year’s insights against events or outcomes. It simply shows that if you keep attending the same conference circuit, you can usually hear enough of the same themes to call your prior buzzwords validated.

SXSW sits at the intersection of music, film, and technology. If a report emerging from that environment cannot meaningfully account for creators, it is not just incomplete—it is asking the wrong question.

Start with the source: SXSW is not a neutral environment

The report is based on PwC’s attendance at more than 100 SXSW sessions and conversations with “thought leaders.” That sounds comprehensive, but it also tells you everything you need to know about the limits of the exercise. And that’s a whole lot of lunches.

SXSW—like TED and similar marquee events—is not designed to test ideas. It is designed to showcase them.

Panels are curated. Speakers are selected. Topics are framed in advance. And most importantly, participants are there for a reason: to promote something. A company. A framework. A product. A worldview. And oh, yes. A band.

That doesn’t make the content worthless. But it does mean the incentives are not aligned with truth-seeking.

They are aligned with visibility.

When panels become pitch environments

In practice, this structure often produces what anyone who has spent time in these rooms recognizes immediately: panels that function less as discussions and more as coordinated signaling exercises.

Especially in the tech space, you frequently see:
– Panelists advancing aligned narratives about “inevitable” technological change
– Framing that assumes adoption rather than interrogates the wisdom of adoption
– Soft, non-adversarial questioning that avoids meaningful challenge

And yes, there have long been instances where the “moderator” is not a neutral facilitator at all, but an industry advocate or policy lobbyist shaping the conversation, sometimes with only a token dissenting voice on stage who wasn’t in on the joke and looked confused.

The result is not debate. It is choreography.

Narrative momentum is not economic reality

SXSW is a narrative marketplace. It is very good at surfacing what people are excited about. But more precisely, SXSW is very good at surfacing what people with funding are excited about—which is usually themselves. And also their products and the narratives that make both more valuable. It is also a place where the ability to show up is itself a form of signaling—funding is not just the topic, it is the price of admission. Did I say “themselves”?

That framing matters because it explains why the output looks the way it does. The report is not simply identifying trends—it is reflecting a filtered environment in which access, funding, investment capital, and narrative are tightly intertwined.

The report expands and echoes those incentives like a meta-leave behind pitch sheet.

The SXSW–PwC report does not correct for this dynamic—it amplifies it.

By design, the report takes curated panels featuring self-selected speakers operating in a self-promotional environment and distills them into “insights” for business leaders.

That is a closed loop.

What emerges is not independent analysis, but a refined version of the same narratives that were already being performed on stage—particularly around AI, innovation, and organizational transformation. Like every other tech-influenced conference.

The AI story: all acceleration, limited friction

Unsurprisingly, AI dominates the report.

The framing is familiar:
– AI as a creative amplifier
– AI as a competitive necessity
– AI as an organizational transformation layer

What is much less developed are the counterweights:
– Substitution effects (especially in creative labor markets)
– Market dilution and “flooding” dynamics
– Legal and regulatory constraints (copyright, privacy, liability)
– The question of who actually captures the value created

Instead, AI is largely treated as a capability problem: How quickly can organizations adopt and deploy? Thinking that leads to statements like this:

Complex stories underperform, while reactive, emotionally charged content thrives—and bad actors reverse-engineer those dynamics to move from the margins to the mainstream. Compounding the problem, under-resourced newsrooms are losing experienced journalists needed to maintain editorial standards, leaving the information vacuum to be filled by algorithmically optimized noise.

Yes, experienced journalists are just up and leaving, wowza. What’s the world coming to? Any interest in connecting some dots there, PwC lunchers?

The report does not dig even an inch deep into any issue involving labor, or question the bargaining leverage that AI confers on employers, much less ask who benefits, who loses, and under what terms.

Like many consulting-adjacent outputs, the report leans heavily on urgency. But “act now or fall behind” is not analysis; these claims are not tied to measurable benchmarks or falsifiable outcomes.

One More Thing

The real issue with reports like this is not that they are wrong.

It is that they are produced within an environment where skepticism is disincentivized and narratives are shaped before the conversation even begins.

The SXSW–PwC report captures that environment faithfully. But it does not escape it.

And in that sense, it perfectly illustrates why you don’t turn to a firm like PwC to analyze creators—they’re looking through the wrong lens from the start. The report shows little evidence that anyone with direct experience representing creators was meaningfully involved in reviewing it.

To be clear, this is not inherently a flaw. SXSW has hosted genuinely thoughtful and introspective panels, alongside plenty of circular admiration society panels as well. But no one has ever suggested that polling those panels would produce anything resembling decision-maker work product. And, to be fair, bravo to the PwC employees who managed to get their trip expensed to talk their book. That’s the true spirit of SXSW.





Sony’s AI Music Attribution Tool: What It Actually Does (and What It Doesn’t)

As generative music systems like Suno and Udio move into the center of copyright debates, one question keeps coming up: Can we actually tell which songs influenced an AI-generated track? And then can we use that determination in a host of other processes like royalty payments?

Recently a number of people have pointed to research from Sony AI as evidence that the answer might be yes. Sony has publicly discussed work on tools designed to analyze the relationship between training data and AI-generated music outputs.

But the reality is a little more nuanced. Sony’s work is interesting and potentially important—but it is often misunderstood. What Sony has described is not a magic detector that can listen to a generated song and instantly reveal every recording the model trained on.

Instead, Sony is describing something more modest—and in some ways more useful.

Let’s unpack what the technology appears to do right now.

Two Problems Sony Is Trying to Solve

Sony AI has publicly discussed research in two related areas.

The first is training-data attribution. This means trying to estimate which recordings in a model’s training dataset influenced a generated output.

The second is musical similarity or version matching. This involves detecting when two pieces of music share meaningful musical material even if they are not exact copies of each other.

Sony has framed both efforts as research directions rather than a finished commercial product. In other words, this is still a developing technical approach, not a turnkey system that can produce definitive copyright answers.

Training Data Attribution in Plain English

The most relevant Sony work is a research project titled Large-Scale Training Data Attribution for Music Generative Models via Unlearning.

That title sounds intimidating, but the basic idea is fairly intuitive and also suggests the project is part of the broader machine unlearning academic discipline.

The system does not operate like Shazam. It does not simply listen to an AI-generated song and say:

“This track was trained on Song X, Song Y, and Song Z.”

Instead, the approach works more like this.

Imagine you already know—or at least suspect—which recordings were used to train the model. You have a candidate set of training tracks.

The system then asks:

Among these training recordings, which ones seem most likely to have influenced this generated output?

In other words, the system ranks influence among known candidates.

The research approach borrows from an area of machine learning called machine unlearning, which studies how particular training examples affect a model’s behavior. In simplified terms, researchers can test how the model behaves when certain training examples are removed or adjusted. If the output changes meaningfully, that suggests those examples had measurable influence.

The important point is that this is an influence-ranking tool, not a forensic detector.

It tries to answer:

“Which of these known training tracks mattered most?”

Not:

“Tell me every song the model was trained on.”
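
To make the influence-ranking idea concrete, here is a toy sketch in Python. Everything in it is an assumption for illustration: the “model” is just a centroid of per-track feature vectors, and a candidate’s influence is measured counterfactually by retraining with that candidate removed. This is not Sony’s actual method, only the shape of the idea—ranking influence among known candidates, not detecting unknown ones.

```python
# Toy unlearning-style influence ranking (illustrative only; NOT Sony's
# system). Assumptions: each track is a small feature vector, the "model"
# is the centroid of its training set, and a candidate's influence is how
# much worse the retrained (candidate-removed) model explains the output.

def train(tracks):
    """'Train' a toy model: the centroid of the training feature vectors."""
    n, dims = len(tracks), len(tracks[0])
    return [sum(t[d] for t in tracks) / n for d in range(dims)]

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def influence_ranking(candidates, generated_output):
    """Rank known candidate training tracks by counterfactual influence."""
    full_model = train(list(candidates.values()))
    base_err = distance(full_model, generated_output)
    scores = {}
    for name in candidates:
        held_out = [v for k, v in candidates.items() if k != name]
        reduced_model = train(held_out)  # "unlearn" this one candidate
        # Influence = how much worse the reduced model fits the output.
        scores[name] = distance(reduced_model, generated_output) - base_err
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

candidates = {
    "song_x": [1.0, 0.0],
    "song_y": [0.9, 0.1],
    "song_z": [0.0, 1.0],
}
generated = [0.8, 0.15]  # an output sitting in song_x / song_y territory
ranking = influence_ranking(candidates, generated)
print(ranking[0][0])  # prints "song_x"
```

Note what the sketch cannot do: if a recording is not in `candidates`, it simply never appears in the ranking. That is the structural limit of the approach, toy or otherwise.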

Sony’s Other Idea: Smarter Music Comparison

Sony has also described work on musical similarity detection.

Traditional audio fingerprinting systems—like those used by Shazam or Audible Magic—are very good at identifying identical recordings. If you upload the same song or a slightly altered version, the system can match it.

But generative AI raises a different problem. An AI output might resemble a song musically without copying the recording itself.

Sony’s research tries to detect those kinds of relationships.

For example, a system might notice that two tracks share melodic fragments, rhythmic patterns, harmonic progressions, or musical phrases even if the arrangement, production, or instrumentation is different.

In plain English, this kind of tool tries to answer a different question:

“Are these two pieces of music related in substance?”

Not:

“Are they the exact same recording?”

The Big Limitation: You Still Need the Training Dataset

Here’s the key limitation that often gets overlooked.

Sony’s attribution approach appears to depend on having access to the candidate training dataset.

The system works by comparing a generated output against recordings that are already known or suspected to have been used during training. It estimates influence among those candidates.

That means the system answers the question:

“Which of these training tracks influenced the output?”

But it does not answer the question:

“What unknown recordings were used to train this model?”

If the training corpus is hidden or undisclosed, the attribution system has nothing to test against.

This makes the technology conceptually similar to many machine-learning research experiments, which measure influence using known datasets. Researchers can test influence among known training examples, but they cannot reconstruct an unknown dataset from outputs alone.

What This Could Look Like in the Real World

If the training corpus were known, a practical workflow might look like this.

First, the recordings in the training corpus would be identified. Audio fingerprinting systems could match those recordings to commercial releases.

That step answers the question:

What copyrighted recordings appear in the training data?

Then an attribution tool like the one Sony describes could be used to analyze generated outputs and estimate which of those known recordings appear to have influenced them.

This would not prove copying in every case. But it could dramatically narrow the analysis—from millions of possible influences to a smaller list of likely candidates.
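
The two-step workflow above can be sketched as a pipeline. Every name here is a hypothetical stand-in (there is no real `fingerprint_match` API being referenced): step 1 identifies which catalog recordings appear in a disclosed training corpus, and step 2 ranks only those known recordings against a generated output.

```python
# Hypothetical pipeline sketch for the post-disclosure workflow (all names
# and scoring logic are assumptions for illustration, not real APIs).

def fingerprint_match(training_corpus, catalog):
    """Stub for step 1: which catalog recordings appear in the corpus?"""
    return [track for track in training_corpus if track in catalog]

def attribution_score(track, output):
    """Stub for step 2: toy influence score (overlap of shared name tags)."""
    return len(set(track.split("_")) & set(output.split("_")))

training_corpus = ["blues_shuffle_a", "pop_ballad_b", "unreleased_demo"]
catalog = {"blues_shuffle_a", "pop_ballad_b"}  # known commercial releases
generated_output = "blues_shuffle_output"

known = fingerprint_match(training_corpus, catalog)  # step 1: identify
shortlist = sorted(known,                            # step 2: rank
                   key=lambda t: attribution_score(t, generated_output),
                   reverse=True)
print(shortlist[0])  # prints "blues_shuffle_a"
```

The point of the pipeline shape: nothing in step 2 can run until step 1 has a disclosed corpus to work from, which is exactly the limitation discussed above.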

What Sony Has Not Claimed

Sony’s public statements do not suggest that the attribution problem is solved.

Sony has not announced a system that automatically calculates track-by-track royalty payments for AI-generated songs. Nor has it described a tool that conclusively proves copyright copying from an AI output alone.

Instead, the work is framed as research aimed at improving transparency and accountability in generative music systems.

Why Labels Might Still Be Interested

Even with these limitations, the idea could be attractive to rights holders.

If training datasets were known, attribution tools could theoretically support new ways of analyzing how music catalogs interact with generative AI systems.

For example, such tools might help support:

  • royalty allocation models
  • influence-weighted compensation frameworks
  • catalog analytics
  • AI audit trails showing how repertoire contributes to model behavior

In other words, the technology could potentially become a measurement tool for how music catalogs influence generative systems.

What Sony did and did not do (yet)

Sony’s work does not magically reveal every song an AI model trained on. And it does not eliminate the need to know what is in the training dataset.

Instead, its value appears to lie after the training data is known.

Once you have a candidate training corpus, tools like the ones Sony describes may help analyze which recordings influenced particular outputs.

That makes the technology best understood as a post-disclosure attribution layer, not a substitute for knowing what recordings were used in training in the first place.

Grassroots Revolt Against Data Centers Goes National: Water Use Now the Flashpoint

Over the last two weeks, grassroots opposition to data centers has moved from sporadic local skirmishes to a recognizable national pattern. While earlier fights centered on land use, noise, and tax incentives, the current phase is more focused and more dangerous for developers: water.

Across multiple states, residents are demanding to see the “water math” behind proposed data centers—how much water will be consumed (not just withdrawn), where it will come from, whether utilities can actually supply it during drought conditions, and what enforceable reporting and mitigation requirements will apply. In arid regions, water scarcity is an obvious constraint. But what’s new is that even in traditionally water-secure states, opponents are now framing data centers as industrial-scale consumptive users whose needs collide directly with residential growth, agriculture, and climate volatility.

The result: moratoria, rezoning denials, delayed hearings, task forces, and early-stage organizing efforts aimed at blocking projects before entitlements are locked in.

Below is a snapshot of how that opposition has played out state by state over the last two weeks.

State-by-State Breakdown

Virginia  

Virginia remains ground zero for organized pushback.

Botetourt County: Residents confronted the Western Virginia Water Authority over a proposed Google data center, pressing officials about long-term water supply impacts and groundwater sustainability.  

Hanover County (Richmond region): The Planning Commission voted against recommending rezoning for a large multi-building data center project.  

State Legislature: Lawmakers are advancing reform proposals that would require water-use modeling and disclosure.

Georgia  

Metro Atlanta / Middle Georgia: Local governments’ recruitment of hyperscale facilities is colliding with resident concerns.  

DeKalb County: An extended moratorium reflects a pause-and-rewrite-the-rules strategy.  

Monroe County / Forsyth area: Data centers have become a local political issue.

Arizona  

The state has moved to curb groundwater use in rural basins via new regulatory designations requiring tracking and reporting.  

Local organizing frames AI data centers as unsuitable for arid regions.

Maryland  

Prince George’s County (Landover Mall site): Organized opposition centered on environmental justice and utility burdens.  

Authorities have responded with a pause/moratorium and a task force.

Indiana  

Indianapolis (Martindale-Brightwood): Packed rezoning hearings forced extended timelines.  

Greensburg: Overflow crowds framed the fight around water-user rankings.

Oklahoma  

Luther (OKC metro): Organized opposition before formal filings.

Michigan  

Broad local opposition with water and utility impacts cited.  

State-level skirmishes over incentives intersect with water-capacity debates.

North Carolina  

Apex (Wake County area): Residents object to strain on electricity and water.

Wisconsin & Pennsylvania 

Corporate messaging shifts in response to opposition; Microsoft acknowledged infrastructure and water burdens.

The Through-Line: “Show Us the Water Math”


Across these states, the grassroots playbook has converged:

Pack the hearing.  

Demand water-use modeling and disclosure.  

Attack rezoning and tax incentives.  

Force moratoria until enforceable rules exist.

Residents are demanding hard numbers: consumptive losses, aquifer drawdown rates, utility-system capacity, drought contingencies, and legally binding mitigation.

Why This Matters for AI Policy

This revolt exposes the physical contradiction at the heart of the AI infrastructure build-out: compute is abstract in policy rhetoric but experienced locally as land, water, power, and noise.

Communities are rejecting a development model that externalizes its physical costs onto local water systems and ratepayers.

Water is now the primary political weapon communities are using to block, delay, and reshape AI infrastructure projects.

Read the local news:

America’s AI Boom Is Running Into An Unplanned Water Problem (Ken Silverstein/Forbes)

Residents raise water concerns over proposed Google data center (Allyssa Beatty/WDBJ7 News)

How data centers are rattling a Georgia Senate special election (Greg Bluestein/Atlanta Journal Constitution)

‘A perfect, wild storm’: widely loathed datacenters see little US political opposition (Tom Perkins/The Guardian)

Hanover Planning Commission votes to deny rezoning request for data center development (Joi Fultz/WTVR)

Microsoft rolls out initiative to limit data-center power costs, water use impact (Reuters)

Grass‑Roots Rebellion Against Data Centers and Grid Expansion

A grass‑roots “data center and electric grid rebellion” is emerging across the United States as communities push back against the local consequences of AI‑driven infrastructure expansion. Residents are increasingly challenging large‑scale data centers and the transmission lines needed to power them, citing concerns about enormous electricity demand, water consumption, noise pollution, land use, declining property values, and opaque approval processes. What were once routine zoning or utility hearings are now crowded, contentious events, with citizens organizing quickly and sharing strategies across counties and states.



This opposition is no longer ad hoc. In Northern Virginia—often described as the global epicenter of data centers—organized campaigns such as the Coalition to Protect Prince William County have mobilized voters, fundraised for local elections, demanded zoning changes, and challenged approvals in court. In Maryland’s Prince George’s County, resistance has taken on a strong environmental‑justice framing, with groups like the South County Environmental Justice Coalition arguing that data centers concentrate environmental and energy burdens in historically marginalized communities and calling for moratoria and stronger safeguards.



Nationally, consumer and civic groups are increasingly coordinated, using shared data, mapping tools, and media pressure to argue that unchecked data‑center growth threatens grid reliability and shifts costs onto ratepayers. Together, these campaigns signal a broader political reckoning over who bears the costs of the AI economy.

Data Centers Across the States

Here’s a snapshot of grass-roots opposition in Texas, Louisiana, and Nevada:

Texas

Texas has some of the most active and durable local opposition, driven by land use, water, and transmission corridors.

  • Hill Country & Central Texas (Burnet, Llano, Gillespie, Blanco Counties)
    Grass-roots groups formed initially around high-voltage transmission lines (765 kV) tied to load growth, now explicitly linking those lines to data center demand. Campaigns emphasize:
    • rural land fragmentation
    • wildfire risk
    • eminent domain abuse
    • lack of local benefit
      These groups are often informal coalitions of landowners rather than NGOs, but they coordinate testimony, public-records requests, and local elections.
  • DFW & North Texas
    Neighborhood associations opposing rezoning for hyperscale facilities focus on noise (backup generators), property values, and school-district tax distortions created by data-center abatements.
  • ERCOT framing
    Texas groups uniquely argue that data centers are socializing grid instability risk onto residential ratepayers while privatizing upside—an argument that resonates with conservative voters.

Louisiana

Opposition is newer but coalescing rapidly, often tied to petrochemical and LNG resistance networks.

  • North Louisiana & Mississippi River Corridor
    Community groups opposing new data centers frame them as:
    • “energy parasites” tied to gas plants
    • extensions of an already overburdened industrial corridor
    • threats to water tables and wetlands
      Organizers often overlap with environmental-justice and faith-based coalitions that previously fought refineries and export terminals.
  • Key tactic: reframing data centers as industrial facilities, not “tech,” triggering stricter land-use scrutiny.

Nevada

Nevada opposition centers on water scarcity and public-land use.

  • Clark County & Northern Nevada
    Residents and conservation groups question:
    • water allocations for evaporative cooling
    • siting near public or BLM-managed land
    • grid upgrades subsidized by ratepayers for private AI firms
  • Distinct Nevada argument: data centers compete directly with housing and tribal water needs, not just environmental values.

The Data Center Rebellion is Here and It’s Reshaping the Political Landscape (Washington Post)

Residents protest high-voltage power lines that could skirt Dinosaur Valley State Park (ALEJANDRA MARTINEZ AND PAUL COBLER/Texas Tribune)

US Communities Halt $64B Data Center Expansions Amid Backlash (Lucas Greene/WebProNews)

Big Tech’s fast-expanding plans for data centers are running into stiff community opposition (Marc Levy/Associated Press)

Data center ‘gold rush’ pits local officials’ hunt for new revenue against residents’ concerns (Alander Rocha/Georgia Recorder)

You Can’t Prosecute Smuggling NVIDIA Chips to the CCP and Authorize Sales to the CCP at the Same Time

The Trump administration is attempting an impossible contradiction: selling advanced NVIDIA AI chips to China while the Department of Justice prosecutes criminal cases for smuggling the exact same chips into China.

According to the DOJ:

“Operation Gatekeeper has exposed a sophisticated smuggling network that threatens our Nation’s security by funneling cutting-edge AI technology to those who would use it against American interests,” said Ganjei. “These chips are the building blocks of AI superiority and are integral to modern military applications. The country that controls these chips will control AI technology; the country that controls AI technology will control the future. The Southern District of Texas will aggressively prosecute anyone who attempts to compromise America’s technological edge.”

That divergence from the prosecutors is not industrial policy. It is incoherence. But mostly it’s just bad advice, likely coming from White House AI Czar David Sacks, Mr. Trump’s South African-born AI policy advisor who may have a hard time getting a security clearance in the first place.

On one hand, DOJ is rightly bringing cases over the illegal diversion of restricted AI chips—recognizing that these processors are strategic technologies with direct national-security implications. On the other hand, the White House is signaling that access to those same chips is negotiable, subject to licensing workarounds, regulatory carve-outs, or political discretion.

You cannot treat a technology as contraband in federal court and as a commercial export in the West Wing.

Pick one.

AI Chips Are Not Consumer Electronics

The United States does not sell China F-35 fighter jets. We do not sell Patriot missile systems. We do not sell advanced avionics platforms and then act surprised when they show up embedded in military infrastructure. High-end AI accelerators are in the same category.

NVIDIA’s most advanced chips are not merely commercial products. They are general-purpose intelligence infrastructure, feedstock for what China calls military-civil fusion. They train surveillance systems, military logistics platforms, cyber-offensive tools, and models capable of operating autonomous weapons and battlefield decision-making pipelines with no human in the loop.

If DOJ treats the smuggling of these chips into China as a serious federal crime—and it should—there is no coherent justification for authorizing their sale through executive discretion. Except, of course, money, or in Mr. Sacks’s case, more money.

Fully Autonomous Weapons—and Selling the Rope

China does not need U.S. chips to build consumer AI. It wants them for military acceleration. Advanced NVIDIA AI chips are not just about chatbots or recommendation engines. They are the backbone of fully autonomous weapons systems—autonomous targeting, swarm coordination, battlefield logistics, and decision-support models that compress the kill chain beyond meaningful human control.

There is an old warning attributed to Vladimir Lenin—that capitalists would sell the rope by which they would later be hanged. Apocryphal or not, it captures this moment with uncomfortable precision.

If NVIDIA chips are powerful enough to underpin autonomous weapons systems for allied militaries, they are powerful enough to underpin autonomous weapons systems for adversaries like China. Trump’s own National Security Strategy says previous U.S. elites made “mistaken” assumptions about China, such as the famous one that letting China into the WTO would integrate Beijing into the rules-based international order. Trump tells us that instead China “got rich and powerful” and used that power against us, and the strategy goes on to describe the CCP’s well-known predatory subsidies, unfair trade, IP theft, industrial espionage, supply-chain leverage, and fentanyl-precursor exports as threats the U.S. must “end.” By selling them the most advanced AI chips?

Western governments and investors simultaneously back domestic autonomous-weapons firms—such as Europe-based Helsing, supported by Spotify CEO Daniel Ek—explicitly building AI-enabled munitions for allied defense. That makes exporting equivalent enabling infrastructure to a strategic competitor indefensible.

The AI Moratorium Makes This Worse, Not Better

This contradiction unfolds alongside a proposed federal AI moratorium executive order, originating with Mr. Sacks and Adam Thierer of Google’s R Street Institute, that would preempt state-level AI protections. States are told AI is too consequential for local regulation, yet the federal government is prepared to license exports of AI’s core infrastructure abroad.

If AI is too dangerous for states to regulate, it is too dangerous to export. Preemption at home combined with permissiveness abroad is not leadership. It is capture.

This Is What Policy Capture Looks Like

The common thread is not national security. It is Silicon Valley access. David Sacks and others in the AI–VC orbit argue that AI regulation threatens U.S. competitiveness while remaining silent on where the chips go and how they are used.

When DOJ prosecutes smugglers while the White House authorizes exports, the public is entitled to ask whose interests are actually being served. Advisory roles that blur public power and private investment cannot coexist with credible national-security policymaking, particularly when the advisor may not even be able to get a U.S. national security clearance unless the President blesses it.

A Line Has to Be Drawn

If a technology is so sensitive that its unauthorized transfer justifies prosecution, its authorized transfer should be prohibited absent extraordinary national interest. AI accelerators meet that test.

Until the administration can articulate a coherent justification for exporting these capabilities to China, the answer should be no. Not licensed. Not delayed. Not cosmetically restricted.

And if that position conflicts with Silicon Valley advisers who view this as a growth opportunity, they should return to where they belong. The fact that the U.S. is getting 25% of the deal (which I bet never finds its way into America’s general account) means nothing except to confirm Lenin’s joke about selling the rope to hang ourselves, you know, kind of like TikTok.

David Sacks should go back to Silicon Valley.

This is not venture capital. This is our national security and he’s selling it like rope.

Too Dynamic to Question, Too Dangerous to Ignore

When Ed Newton-Rex left Stability AI, he didn’t just make a career move — he issued a warning. His message was simple: we’ve built an industry that moves too fast to be honest.

AI’s defenders insist that regulation can’t keep up, that oversight will “stifle innovation.” But that speed isn’t a by-product; it’s the business model. The system is engineered for planned obsolescence of accountability — every time the public begins to understand one layer of technology, another version ships, invalidating the debate. The goal isn’t progress; it’s perpetual synthetic novelty, where nothing stays still long enough to be measured or governed, and “nothing says freedom like getting away with it.”

We’ve seen this play before. Car makers built expensive sensors we never asked for that fail on schedule; software platforms wrote policies that expire the moment they bite. In both cases, complexity became a shield and a racket—“too dynamic to question.” And yet, like those unasked-for but paid-for features in the cars we don’t want, AI’s design choices are too dangerous to ignore. (What if your brakes really are going out, and it’s not just the sensor malfunctioning?)

Ed Newton-Rex’s point — echoed in his tweets and testimony — is that the industry has mistaken velocity for virtue. He’s right. The danger is not that these systems evolve too quickly to regulate; it’s that they’re designed that way: designed to fail, just like that brake sensor. And until lawmakers recognize that speed itself is a form of governance, we’ll keep mistaking momentum for inevitability.

AI Frontier Labs and the Singularity as a Modern Prophetic Cult

It gets rid of your gambling debts 
It quits smoking 
It’s a friend, it’s a companion 
It’s the only product you will ever need
From Step Right Up, written by Tom Waits

The AI “frontier labs” — OpenAI, Anthropic, DeepMind, xAI, and their constellation of evangelists — often present themselves as the high priests of a coming digital transcendence. This is sometimes called “the singularity,” which refers to a hypothetical future point when artificial intelligence surpasses human intelligence, triggering rapid, unpredictable technological growth. Often associated with self-improving AI, it implies a transformation of society, consciousness, and control, in which human decision-making may be outpaced or rendered obsolete by machines operating beyond our comprehension.

But viewed through the lens of social psychology, the AI evangelists increasingly resemble the cognitive-dissonance cults famously documented in Leon Festinger and his team’s landmark study of a UFO cult (à la Heaven’s Gate), When Prophecy Fails. (See also The Great Disappointment.)

In that foundational social-psychology study, a group of believers centered around a woman named “Marian Keech” predicted the world would end in a cataclysmic flood, only to be rescued by alien beings — but when the prophecy failed, they doubled down. Rather than abandoning their beliefs, the group rationalized the outcome (“We were spared because of our faith”) and became even more committed. They get this self-hypnotized look, kind of like this guy (and remember: this is what the Meta marketing people thought was the flagship spot for Meta’s entire superintelligence hustle):


This same psychosis permeates Singularity narratives and the AI doom/alignment discourse:
– The world is about to end — not by water, but by unaligned superintelligence.
– A chosen few (frontier labs) hold the secret knowledge to prevent this.
– The public must trust them to build, contain, and govern the very thing they fear.
– And if the predicted catastrophe doesn’t come, they’ll say it was their vigilance that saved us.

Like cultic prophecy, the Singularity promises transformation:
– Total liberation or annihilation (including liberation from annihilation by the Red Menace, i.e., the Chinese Communist Party).
– A timeline (“AGI by 2027”, “everything will change in 18 months”).
– An elite in-group with special knowledge and “Don’t be evil” moral responsibility.
– A strict hierarchy of belief and loyalty — criticism is heresy, delay is betrayal.

This serves multiple purposes:
1. Maintains funding and prestige by positioning the labs as indispensable moral actors.
2. Deflects criticism of copyright infringement, resource consumption, or labor abuse with existential urgency (because China, don’t you know).
3. Converts external threats (like regulation) into internal persecution, reinforcing group solidarity.

The rhetoric of “you don’t understand how serious this is” mirrors cult defenses exactly.

Here’s the rub: the timeline keeps slipping. Every six months, we’re told the leap to “godlike AI” is imminent. GPT‑4 was supposed to upend everything. That didn’t happen, so GPT‑5 will do it for real. Gemini flopped, but Claude 3 might still be the one.

When prophecy fails, they don’t admit error — they revise the story:
– “AI keeps accelerating”
– “It’s a slow takeoff, not a fast one.”
– “We stopped the bad outcomes by acting early.”
– “The doom is still coming — just not yet.”

Leon Festinger’s theories from When Prophecy Fails, especially cognitive dissonance and social comparison, influence AI by shaping how systems model human behavior, resolve conflicting inputs, and simulate decision-making. His work guides developers of interactive agents, recommender systems, and behavioral algorithms that aim to mimic or respond to human inconsistencies, biases, and belief formation. So this isn’t a casual connection.

As with Festinger’s study, the failure of predictions intensifies belief rather than weakening it. And the deeper the believer’s personal investment, the harder it is to turn back. For many AI cultists, this includes financial incentives, status, and identity.

Unlike spiritual cults, AI frontier labs have material outcomes tied to their prophecy:
– Federal land allocations, as we’ve seen with DOE site handovers.
– Regulatory exemptions, by presenting themselves as saviors.
– Massive capital investment, driven by the promise of world-changing returns.

In the case of AI, this is not just belief — it’s belief weaponized to secure public assets, shape global policy, and monopolize technological futures. And when the same people build the bomb, sell the bunker, and write the evacuation plan, it’s not spiritual salvation — it’s capture.

The pressure to sustain the AI prophecy—that artificial intelligence will revolutionize everything—is unprecedented because the financial stakes are enormous. Trillions of dollars in market valuation, venture capital, and government subsidies now hinge on belief in AI’s inevitable dominance. Unlike past tech booms, today’s AI narrative is not just speculative; it is embedded in infrastructure planning, defense strategy, and global trade. This creates systemic incentives to ignore risks, downplay limitations, and dismiss ethical concerns. To question the prophecy is to threaten entire business models and geopolitical agendas. As with any ideology backed by capital, maintaining belief becomes more important than truth.

The Singularity, as sold by the frontier labs, is not just a future hypothesis — it’s a living ideology. And like the apocalyptic cults before them, these institutions demand public faith, offer no accountability, and position themselves as both priesthood and god.

If we want a secular, democratic future for AI, we must stop treating these frontier labs as prophets — and start treating them as power centers subject to scrutiny, not salvation.

AI Needs Ever More Electricity—And Google Wants Us to Pay for It

Uncle Sugar’s “National Emergency” Pitch to Congress

At a recent Congressional hearing, former Google CEO Eric “Uncle Sugar” Schmidt delivered a message that was as jingoistic as it was revealing: if America wants to win the AI arms race, it better start building power plants. Fast. But the subtext was even clearer—he expects the taxpayer to foot the bill because, you know, the Chinese Communist Party. Yes, when it comes to fighting the Red Menace, the all-American boys in Silicon Valley will stand ready to fight to the last Ukrainian, or Taiwanese, or even Texan.

Testifying before the House Energy & Commerce Committee on April 9, Schmidt warned that AI’s natural limit isn’t chips—it’s electricity. He projected that the U.S. would need 92 gigawatts of new generation capacity—the equivalent of nearly 100 nuclear reactors—to keep up with AI demand.
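A quick back-of-envelope check of Schmidt’s figure, assuming, purely as an illustration, that a typical large U.S. reactor produces about one gigawatt (actual units range from roughly 0.5 to 1.4 GW):

```python
# Sanity-checking the claim that 92 GW of new capacity equals "nearly 100 reactors".
# ASSUMPTION (not from the testimony): a typical large reactor outputs ~1 GW.
TYPICAL_REACTOR_GW = 1.0

new_capacity_gw = 92.0
reactor_equivalents = new_capacity_gw / TYPICAL_REACTOR_GW

print(f"{reactor_equivalents:.0f} reactor-equivalents")  # prints "92 reactor-equivalents"
```

On that rough assumption, 92 GW works out to about 92 reactor-equivalents, which is where the “nearly 100 nuclear reactors” framing comes from.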

Schmidt didn’t propose that Google, OpenAI, Meta, or Microsoft pay for this themselves, just like they didn’t pay for broadband penetration. No, Uncle Sugar pushed for permitting reform, federal subsidies, and government-driven buildouts of new energy infrastructure. In plain English? He wants the public sector to do the hard and expensive work of generating the electricity that Big Tech will profit from.

Will this Improve the Grid?

And let’s not forget: the U.S. electric grid is already dangerously fragile. It’s aging, fragmented, and increasingly vulnerable to cyberattacks, electromagnetic pulse (EMP) weapons, and even extreme weather events. Pouring public money into ultra-centralized AI data infrastructure—without first securing the grid itself—is like building a mansion on a cracked foundation.

If we are going to incur public debt, we should prioritize resilience, distributed energy, grid security, and community-level reliability—not a gold-plated private infrastructure buildout for companies that already have trillion-dollar valuations.

Big Tech’s Growing Appetite—and Private Hoarding

This isn’t just a future problem. The data center buildout is already in full swing, and your Uncle Sugar must be getting nervous about where he’s going to get the money to run his AI and his autonomous drone weapons. In Oregon, where electricity is famously cheap thanks to the Bonneville Power Administration’s hydroelectric dams on the Columbia River, tech companies have quietly snapped up huge portions of the grid’s output. What was once a shared public benefit—affordable, renewable power—is now being monopolized by AI compute farms whose profits leave the region for bank accounts in Silicon Valley.

Meanwhile, Microsoft has signed a deal to restart a shuttered reactor at Three Mile Island to power its data centers—but again, it’s not about public benefit. It’s about keeping Azure’s training workloads running 24/7. And don’t expect them to share any of that power capacity with the public—or even with neighboring hospitals, schools, or communities.

Letting the Public Build Private Fortresses

The real play here isn’t just to use public power—it’s to get the public to build the power infrastructure, and then seal it off for proprietary use. Moats work both ways.

That includes:
– Publicly funded transmission lines across hundreds of miles to deliver power to remote server farms;
– Publicly subsidized generation capacity (nuclear, gas, solar, hydro—you name it);
– And potentially, prioritized access to the grid that lets AI workloads run while the rest of us face rolling blackouts during heatwaves.

All while tech giants don’t share their models, don’t open their training data, and don’t make their outputs public goods. It’s a privatized extractive model, powered by your tax dollars.

Been Burning for Decades

Don’t forget: Google and YouTube have already been burning massive amounts of electricity for 20 years. It didn’t start with ChatGPT or Gemini. Serving billions of search queries, video streams, and cloud storage events every day requires a permanent baseload—yet somehow this sudden “AI emergency” is being treated like a surprise, as if nobody saw it coming.

If they knew this was coming (and they did), why didn’t they build the power? Why didn’t they plan for sustainability? Why is the public now being told it’s our job to fix their bottleneck?

The Cold War Analogy—Flipped on Its Head

Some industry advocates argue that breaking up Big Tech or slowing AI infrastructure would be like disarming during a new Cold War with China. But Gail Slater, the Assistant Attorney General leading the DOJ’s Antitrust Division, pushed back forcefully—not at a hearing, but on the War Room podcast.

In that interview, Slater recalled how AT&T tried to frame its 1980s breakup as a national security threat, arguing it would hurt America’s Cold War posture. But the DOJ did it anyway—and it led to an explosion of innovation in wireless technology.

“AT&T said, ‘You can’t do this. We are a national champion. We are critical to this country’s success. We will lose the Cold War if you break up AT&T,’ in so many words. … Even so, [the DOJ] moved forward … America didn’t lose the Cold War, and … from that breakup came a lot of competition and innovation.”

“I learned that in order to compete against China, we need to be in all these global races the American way. And what I mean by that is we’ll never beat China by becoming more like China. China has national champions, they have a controlled economy, et cetera, et cetera.

We win all these races and history has taught by our free market system, by letting the ball rip, by letting companies compete, by innovating one another. And the reason why antitrust matters to that picture, to the free market system is because we’re the cop on the beat at the end of the day. We step in when competition is not working and we ensure that markets remain competitive.”

Slater’s message was clear: regulation and competition enforcement are not threats to national strength—they’re prerequisites to it. So there’s no way that the richest corporations in commercial history should be subsidized by the American taxpayer.

Bottom Line: It’s Public Risk, Private Reward

Let’s be clear:

– They want the public to bear the cost of new electricity generation.
– They want the public to underwrite transmission lines.
– They want the public to streamline regulatory hurdles.
– And they plan to privatize the upside, lock down the infrastructure, keep their models secret and socialize the investment risk.

This isn’t a public-private partnership. It’s a one-way extraction scheme. America needs a serious conversation about energy—but it shouldn’t begin with asking taxpayers to bail out the richest companies in commercial history.