The White House’s latest AI framework reads like a familiar story dressed in new clothes: we must move fast, avoid “overregulation,” and ensure that the United States “wins” the AI race—because China.
That framing is not new. It is, in fact, a modern version of the Thucydides Trap: the idea that when a rising power threatens to displace an established one, conflict—economic, political, or otherwise—becomes more likely. But what is striking here is not the invocation of competition. It’s how narrowly that competition is defined.
The framework implicitly treats AI dominance as a function of compute, capital, and model scale. Build bigger models faster, feed them more data, and ensure that domestic firms face as few constraints as possible. In that telling, creators, rights, and consent become secondary considerations—at best friction, at worst obstacles.
But that is a profound misread of where U.S. advantage actually lies.
American leadership has never been just about scale. It has been about legitimacy—the ability to build systems that other countries, companies, and individuals trust enough to adopt. That is the essence of soft power. And soft power is not generated by extraction; it is generated by rules that are perceived as fair.
When U.S. policy signals that training on creative works without meaningful consent is acceptable—or even necessary to “win”—it risks trading long-term legitimacy for short-term acceleration. That is a dangerous bargain. It tells the world that American AI leadership is built not on innovation alone, but on the uncompensated appropriation of global cultural and informational resources.
Other jurisdictions are already responding. The EU is experimenting with transparency mandates. Rights holders globally are pushing for enforceable consent regimes. Even countries that want to encourage AI development are increasingly wary of frameworks that look like data extraction at scale without accountability.
This is where the Thucydides analogy breaks down—or at least becomes more complicated. The real risk is not simply that China catches up technologically. It is that the United States, in trying to outrun that possibility, undermines the normative foundations of its own leadership.
Soft power erosion is not dramatic. It doesn’t announce itself with a headline. It accumulates quietly: in trade negotiations, in regulatory divergence, in the willingness of other countries to align—or not align—with U.S. standards. Over time, that erosion can matter more than any benchmark score or model release.
There is another path. The United States could lead by insisting that AI development is compatible with consent, compensation, and provenance. It could treat creators not as inputs to be harvested, but as stakeholders in a system that depends on their work. It could build infrastructure—technical and legal—that makes those principles operational, not aspirational.
That approach may look slower in the short term. It may impose costs that competitors are willing to ignore. But it is also how durable leadership is built.
Because in the long run, the question is not just who builds the most powerful models. It is who builds systems that the rest of the world is willing to trust.
And that is a competition the United States cannot afford to lose.
One of the most important things about the White House AI framework released last week is what it is not.
It is not an executive order.
That may sound like a technical distinction, but it is doing an enormous amount of work here. Because by avoiding the form of an executive order, the framework avoids something even more important: Judicial review.
An executive order that attempted to declare AI training on copyrighted works lawful—or to constrain Congress from acting—would immediately invite challenge in the very judicial branch the framework also seeks to influence. Oh, that would be fun.
It would raise Administrative Procedure Act questions. It would trigger separation-of-powers scrutiny. It would likely be litigated within days.
This framework does none of that and is not susceptible to judicial challenge.
Instead, it achieves much of the same practical effect—shaping legal outcomes, constraining policy space, and signaling preferred doctrine—without creating a justiciable action. It is, in effect, law without law, and outcomes by positioning. Silicon Valley’s favorite.
Takings by Policy, Not Statute
Start with the most obvious constitutional issue: the Takings Clause of the Fifth Amendment of the U.S. Constitution, which states that “private property [cannot] be taken for public use, without just compensation.”
Copyright is a form of property. That is not controversial. It is a statutory property right grounded in the Constitution’s Intellectual Property Clause, and it carries exclusive rights that have long been understood as economically valuable.
Now consider what the White House framework does.
It declares AI training—the mass, indiscriminate ingestion of copyrighted works—to be lawful. It does so without requiring compensation. And it does so in a context where the resulting systems can substitute for, or diminish the market for, the original works.
If that official policy position of the Executive Branch were enacted into law, it would raise a straightforward question:
Has the government authorized the use of private property for public and commercial purposes without compensation? Or more directly, has the Executive Branch just announced that it will not prosecute that indiscriminate ingestion for any reason? Can we expect to see amicus briefs from the Solicitor General opposing copyright owners pursuing their rights in court?
That is sounding a lot like a taking.
But because the framework is not law, it avoids the moment where that question must be answered. It does not extinguish rights formally. It renders them economically hollow in practice, while leaving the formal structure intact.
That is the key move: functional elimination without formal abolition.
Ex Post Facto in Everything but Name
The framework also raises a second, less discussed issue: the logic of ex post facto lawmaking.
The Ex Post Facto Clause technically applies to criminal law. But the underlying principle is broader: the government should not change the legal consequences of past conduct to benefit favored actors or disadvantage others. Of course, copyright owners raising this argument will have the Spotify retroactive safe harbor in Title I of the Music Modernization Act thrown back in their faces as rank hypocrisy, which they would richly deserve, although as any 10-year-old can tell you, two wrongs don’t make a right, at least in theory.
Here, the timeline matters.
Massive datasets have already been scraped.
Models have already been trained.
The conduct that enabled this may, in many instances, have been legally questionable—and in cases of willful infringement, potentially criminal under federal copyright law. Or if you listen to me, the largest case of criminal copyright infringement in history.
Now comes the policy, years after the fact, in the face of more than 150 AI lawsuits all premised on copyright infringement to one degree or another:
Training is lawful.
That looks less like interpretation and more like retroactive validation.
Even if framed as civil doctrine, the effect is similar to retroactive decriminalization of conduct tied to vested rights. It sends a clear message: conduct that may have been unlawful when undertaken will be treated as lawful because it is now economically indispensable to the broligarchs.
That is not how the rule of law is supposed to work.
Separation of Powers by Suggestion
The framework’s treatment of Congress is equally striking. It does not say Congress lacks authority to legislate. The President cannot say that. Well…he can, but there’s no foundation for the statement. The Constitution is clear: Congress defines copyright.
Instead, the framework says Congress should not act in ways that would affect judicial resolution of the training question.
That is an unusual formulation. Congress legislates in areas under litigation all the time. Indeed, it is often expected to clarify statutory ambiguity.
What the framework is doing is more subtle: It is attempting to shape the legislative field without formally constraining it.
And it pairs that with an implicit second message:
Legislation that restricts training or mandates licensing is inconsistent with executive policy.
Such legislation is therefore unlikely to be signed by the President. So why bring it?
That is a veto signal—delivered without the political cost of an actual veto.
Judicial Signaling Without Command
The same dynamic applies to the courts.
The framework claims to “defer” to the judiciary. But it simultaneously declares a preferred outcome: training is lawful.
That is not deference. That is signaling.
Judges are, of course, independent. But they do not operate in a vacuum. They are aware of executive priorities, legislative inaction, and market realities. When all three align around a single policy direction, it creates an interpretive gravitational force that is difficult to ignore.
And the signal travels further.
To lawyers. To regulators. To anyone whose career may intersect with executive appointment.
It normalizes what counts as a “reasonable” position within the current policy environment.
Prosecutorial Silence as Policy
There is also a more immediate, practical consequence.
While the framework does not have the force of law, it functions as an indirect directive to the Department of Justice. By declaring training lawful as a matter of policy, it signals that federal enforcement resources should not be used to pursue cases premised on the opposite view.
In effect, it tells prosecutors:
Do not spend time considering criminal enforcement for large-scale copyright violations tied to AI training. Do not spend time considering antitrust enforcement against the broligarchs. In fact, don’t spend any time prosecuting anyone regarding AI.
That matters because, for example, willful copyright infringement at scale can, in certain circumstances, give rise to criminal liability. I mean if that doesn’t, what does? Yet under this framework, even the possibility of such enforcement is quietly set aside.
This is not formal immunity. But in practice, it can look very similar.
Why “Not an Executive Order” Matters
If this were an executive order, all of these issues would be front and center:
Is this a taking?
Does it exceed executive authority?
Does it interfere with Congress?
Does it interfere with the Judiciary?
Because it is not an EO, these important issues remain in the background—present but untested.
That is the genius, and the danger, of the approach.
It allows the executive branch to:
Shape doctrine
Influence courts
Constrain Congress
Guide enforcement priorities
Normalize contested conduct
—all without triggering the mechanisms designed to check it.
The Constitutional Shadow
The AI framework does not violate the Constitution in any formal sense.
It does something more complicated.
It operates in the constitutional shadow—where policy can reshape rights, incentives, and expectations without ever crossing the line that would allow a court to say no.
But shadows matter.
Because by the time the law catches up—if it ever does—the world the Constitution was meant to govern and protect may already have changed.
As generative music systems like Suno and Udio move into the center of copyright debates, one question keeps coming up: Can we actually tell which songs influenced an AI-generated track? And then can we use that determination in a host of other processes like royalty payments?
Recently a number of people have pointed to research from Sony AI as evidence that the answer might be yes. Sony has publicly discussed work on tools designed to analyze the relationship between training data and AI-generated music outputs.
But the reality is a little more nuanced. Sony’s work is interesting and potentially important—but it is often misunderstood. What Sony has described is not a magic detector that can listen to a generated song and instantly reveal every recording the model trained on.
Instead, Sony is describing something more modest—and in some ways more useful.
Let’s unpack what the technology appears to do right now. Sony has described two distinct capabilities.
The first is training-data attribution. This means trying to estimate which recordings in a model’s training dataset influenced a generated output.
The second is musical similarity or version matching. This involves detecting when two pieces of music share meaningful musical material even if they are not exact copies of each other.
Sony has framed both efforts as research directions rather than a finished commercial product. In other words, this is still a developing technical approach, not a turnkey system that can produce definitive copyright answers.
The phrase “training-data attribution” sounds intimidating, but the basic idea is fairly intuitive, and it also suggests the project is part of the broader academic discipline of machine unlearning.
The system does not operate like Shazam. It does not simply listen to an AI-generated song and say:
“This track was trained on Song X, Song Y, and Song Z.”
Instead, the approach works more like this.
Imagine you already know—or at least suspect—which recordings were used to train the model. You have a candidate set of training tracks.
The system then asks:
Among these training recordings, which ones seem most likely to have influenced this generated output?
In other words, the system ranks influence among known candidates.
The research approach borrows from an area of machine learning called machine unlearning, which studies how particular training examples affect a model’s behavior. In simplified terms, researchers can test how the model behaves when certain training examples are removed or adjusted. If the output changes meaningfully, that suggests those examples had measurable influence.
The important point is that this is an influence-ranking tool, not a forensic detector.
It tries to answer:
“Which of these known training tracks mattered most?”
Not:
“Tell me every song the model was trained on.”
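To make the influence-ranking idea concrete, here is a minimal sketch of the general ablation logic under stated assumptions. It is not Sony’s code, and every name in it (rank_candidate_influence, score_output_without) is invented for illustration: score the generated output against the full model, score it again with one candidate removed or adjusted, and treat the size of the drop as estimated influence.

```python
# influence_ranking_sketch.py
# Illustrative only: this is NOT Sony's implementation, just the general
# unlearning-style idea described above. All function names are hypothetical.

from typing import Callable, List, Tuple


def rank_candidate_influence(
    score_output: Callable[[], float],             # model's affinity for the output, full training set
    score_output_without: Callable[[int], float],  # same score after candidate i is removed/adjusted
    num_candidates: int,
) -> List[Tuple[int, float]]:
    """Rank known training candidates by estimated influence on one generated output."""
    baseline = score_output()
    influence = []
    for i in range(num_candidates):
        drop = baseline - score_output_without(i)  # larger drop => more influence
        influence.append((i, drop))
    return sorted(influence, key=lambda pair: pair[1], reverse=True)


if __name__ == "__main__":
    # Toy stand-in: pretend candidate 2 matters most and candidate 0 least.
    fake_drops = {0: 0.01, 1: 0.10, 2: 0.35}
    ranking = rank_candidate_influence(
        score_output=lambda: 1.0,
        score_output_without=lambda i: 1.0 - fake_drops[i],
        num_candidates=3,
    )
    print(ranking)  # candidate 2 ranked first, candidate 0 last
```

In a real system, the “score without” step is the hard part (approximate unlearning or retraining on a modified corpus), which is exactly why this remains a research direction rather than a product.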
Sony’s Other Idea: Smarter Music Comparison
Sony has also described work on musical similarity detection.
Traditional audio fingerprinting systems—like those used by Shazam or Audible Magic—are very good at identifying identical recordings. If you upload the same song or a slightly altered version, the system can match it.
But generative AI raises a different problem. An AI output might resemble a song musically without copying the recording itself.
Sony’s research tries to detect those kinds of relationships.
For example, a system might notice that two tracks share melodic fragments, rhythmic patterns, harmonic progressions, or musical phrases even if the arrangement, production, or instrumentation is different.
In plain English, this kind of tool tries to answer a different question:
“Are these two pieces of music related in substance?”
Not:
“Are they the exact same recording?”
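For intuition about how “related in substance” differs from “same recording,” here is a toy sketch using openly available audio tooling (librosa). It compares averaged chroma, i.e., pitch-class, profiles, which can survive changes in arrangement, production, or instrumentation that would defeat a fingerprint match. It is illustrative only, not Sony’s system, and the file names are placeholders.

```python
# similarity_sketch.py
# A deliberately simple stand-in for "musical similarity": compare the average
# harmonic (chroma) content of two recordings. Real version-matching systems
# are far more sophisticated; paths below are placeholders.

import librosa
import numpy as np


def mean_chroma(path: str) -> np.ndarray:
    """Return a 12-dimensional average pitch-class profile for a recording."""
    y, sr = librosa.load(path, sr=22050, mono=True)
    chroma = librosa.feature.chroma_cqt(y=y, sr=sr)  # shape: (12, num_frames)
    return chroma.mean(axis=1)


def harmonic_similarity(path_a: str, path_b: str) -> float:
    """Cosine similarity between the two tracks' average chroma profiles."""
    a, b = mean_chroma(path_a), mean_chroma(path_b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


if __name__ == "__main__":
    print(harmonic_similarity("generated_track.wav", "candidate_song.wav"))
```

A fingerprinting system asks whether the waveforms match; this kind of comparison asks whether the musical material overlaps, which is the question generative outputs actually raise.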
The Big Limitation: You Still Need the Training Dataset
Here’s the key limitation that often gets overlooked.
Sony’s attribution approach appears to depend on having access to the candidate training dataset.
The system works by comparing a generated output against recordings that are already known or suspected to have been used during training. It estimates influence among those candidates.
That means the system answers the question:
“Which of these training tracks influenced the output?”
But it does not answer the question:
“What unknown recordings were used to train this model?”
If the training corpus is hidden or undisclosed, the attribution system has nothing to test against.
This makes the technology conceptually similar to many machine-learning research experiments, which measure influence using known datasets. Researchers can test influence among known training examples, but they cannot reconstruct an unknown dataset from outputs alone.
What This Could Look Like in the Real World
If the training corpus were known, a practical workflow might look like this.
First, the recordings in the training corpus would be identified. Audio fingerprinting systems could match those recordings to commercial releases.
That step answers the question:
What copyrighted recordings appear in the training data?
Then an attribution tool like the one Sony describes could be used to analyze generated outputs and estimate which of those known recordings appear to have influenced them.
This would not prove copying in every case. But it could dramatically narrow the analysis—from millions of possible influences to a smaller list of likely candidates.
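A hedged sketch of that two-step workflow might look like the following. The fingerprint_match callable stands in for a commercial fingerprinting service, and the influence ranking stands in for an attribution tool like the one sketched earlier; neither is a real product API.

```python
# workflow_sketch.py
# Hypothetical post-disclosure workflow: identify the training corpus first,
# then narrow the influence analysis to identified recordings. Nothing here
# is an actual vendor API; all names are illustrative.

from typing import Callable, Dict, List, Optional, Tuple


def identify_corpus(
    training_audio_paths: List[str],
    fingerprint_match: Callable[[str], Optional[str]],
) -> Dict[str, Optional[str]]:
    """Step 1: map each training file to a known commercial release (or None)."""
    return {path: fingerprint_match(path) for path in training_audio_paths}


def narrow_candidates(
    ranked_influence: List[Tuple[int, float]],  # output of an attribution tool
    corpus_ids: Dict[str, Optional[str]],
    paths: List[str],
    top_k: int = 10,
) -> List[str]:
    """Step 2: keep only the most influential, identified recordings for review."""
    shortlist = []
    for idx, score in ranked_influence[:top_k]:
        release = corpus_ids.get(paths[idx])
        if release is not None:
            shortlist.append(f"{release} (estimated influence {score:.2f})")
    return shortlist
```

The output is a shortlist for human analysis, not proof of copying, which is consistent with how Sony has framed the research.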
What Sony Has Not Claimed
Sony’s public statements do not suggest that the attribution problem is solved.
Sony has not announced a system that automatically calculates track-by-track royalty payments for AI-generated songs. Nor has it described a tool that conclusively proves copyright copying from an AI output alone.
Instead, the work is framed as research aimed at improving transparency and accountability in generative music systems.
Why Labels Might Still Be Interested
Even with these limitations, the idea could be attractive to rights holders.
If training datasets were known, attribution tools could theoretically support new ways of analyzing how music catalogs interact with generative AI systems.
For example, such tools might help support:
royalty allocation models
influence-weighted compensation frameworks
catalog analytics
AI audit trails showing how repertoire contributes to model behavior
In other words, the technology could potentially become a measurement tool for how music catalogs influence generative systems.
What Sony Did and Did Not Do (Yet)
Sony’s work does not magically reveal every song an AI model trained on. And it does not eliminate the need to know what is in the training dataset.
Instead, its value appears to lie after the training data is known.
Once you have a candidate training corpus, tools like the ones Sony describes may help analyze which recordings influenced particular outputs.
That makes the technology best understood as a post-disclosure attribution layer, not a substitute for knowing what recordings were used in training in the first place.
For the better part of a year, local opposition to AI hyperscaler data centers has been dismissed as NIMBYism—yet it is a movement that has gained real traction. Rural counties worried about water draw. Suburban communities objecting to diesel backup generators. Landowners frustrated over transmission corridors cutting through farmland and massive data centers removing large swaths of productive land in essentially irreversible dedication to AI.
Local politics around data-center construction often turn on land use, water, and power. Officials welcome tax base and jobs, but residents worry about noise, transmission lines, diesel backup generators, and groundwater consumption. Zoning boards and county commissioners become battlegrounds where developers promise infrastructure upgrades and community benefits while opponents push for setbacks, environmental review, and limits on incentives. Utilities and grid operators weigh reliability and cost shifting, especially where hyperscale demand requires new substations or high-voltage lines. Rural areas face pressure from land aggregation and fast-track permitting, while cities debate transparency, property-tax abatements, and whether long-term public costs outweigh near-term economic gains.
But the politics just escalated.
According to multiple reports, President Trump is preparing to highlight “ratepayer protection pledges” from major tech companies during his State of the Union address tonight — urging AI and cloud companies to publicly commit that residential electricity customers will not bear the cost of new data-center load.
That confirms concerns raised by Trump advisor Peter Navarro over the last couple of months, and it is not a small signal.
For months, grassroots organizers have warned that hyperscale AI buildout could increase local electricity rates, force costly new transmission lines, accelerate natural gas plant approvals, and strain already fragile regional grids. And then there are the nuclear issues, as hyperscalers openly promote new nuclear plants. Until now, much of the policy conversation has centered on growth and competitiveness, you know, because China. The Trump pivot reframes the issue around consumer protection — closely tracking the concerns raised by grassroots opponents.
What the White House Is Signaling
The reported approach stops short of imposing a formal price cap on electricity or shifting costs to taxpayers. Instead, policymakers are signaling that large technology firms — particularly hyperscale operators — should voluntarily shoulder the marginal power costs created by their own demand growth.
In practice, this means encouraging companies such as Microsoft, Alphabet, Amazon, and OpenAI to fund grid upgrades, transmission extensions, standby generation, and other infrastructure required to serve new data-center loads, rather than socializing those costs across ordinary ratepayers. The political logic is straightforward: if hyperscale demand is driving billions in new utility investment, the beneficiaries should internalize the expense. The strategy relies on negotiated commitments, public-utility leverage, and reputational pressure rather than mandates, aiming to avoid rate shocks while still enabling continued digital-infrastructure expansion.
We’ll see.
In parallel, the administration has backed efforts to expand electricity supply in regions experiencing sharp data-center load growth, pairing political support with regulatory acceleration. In practice, this has meant encouraging grid operators to run emergency or supplemental capacity auctions—for example, in markets like PJM or ERCOT—to secure short-lead-time generation such as gas peaker plants, temporary turbines, and large-scale battery storage. Policymakers have also supported fast-track permitting and uprates at existing nuclear and natural-gas facilities, along with expedited approvals for new combined-cycle plants where reliability risks are rising. In some areas, utilities are advancing transmission expansions and demand-response programs to bridge near-term gaps. The goal is to bring firm capacity online quickly enough to keep pace with AI-driven electricity demand without triggering reliability shortfalls or price spikes.
Supposedly, Trump’s message is that if data centers drive the demand spike, data centers should fund the solution. That makes sense, but count me as a skeptic as to whether this will actually happen, or whether hyperscalers will come to the taxpayer. You know, because China. But let’s sell China Nvidia chips.
Why This Matters for the Grassroots Fight
Grassroots opposition to large-scale data centers has crystallized around three increasingly defined pillars — each with its own constituency and political leverage.
1. Land Use and Community Character. Residents object to the scale and industrial footprint of hyperscale campuses: multi-building complexes, 24/7 lighting, diesel backup generators, high-security fencing, and new high-voltage transmission corridors. In rural counties, projects can involve the quiet aggregation of farmland followed by rezoning from agricultural to industrial use. In suburban areas, neighbors focus on setbacks, noise from cooling systems, and visual impact. Planning and zoning hearings have become flashpoints where local control collides with state-level economic development priorities.
2. Environmental and Water Stress. Data centers are energy- and water-intensive facilities. In water-constrained regions, evaporative cooling systems raise concerns about aquifer drawdown and drought resilience. Environmental advocates question lifecycle emissions from new gas-fired generation built to serve AI load, as well as the cumulative impact of substations, transmission lines, and backup generators. Even where companies pledge renewable procurement, critics argue that incremental demand can still drive fossil fuel buildout in constrained grids.
3. Electricity Costs and Grid Strain. The most politically volatile pillar is ratepayer impact. Local activists argue that if hyperscale demand requires billions in new generation, transmission, and distribution investment, those costs could be socialized through higher retail rates. Concerns also extend to reliability — whether rapid load growth risks price spikes, capacity shortfalls, or emergency measures during extreme weather.
And then there’s the jobs myth. The “data center jobs” pitch often overstates long-term employment. Construction phases can generate hundreds of temporary union and trade jobs—electricians, concrete crews, steel, and site work—sometimes for 12–24 months. But once operational, hyperscale facilities are highly automated and run by surprisingly small permanent staffs relative to their footprint and power load. A multi-building campus consuming hundreds of megawatts may employ only a few dozen to low hundreds of full-time workers, focused on security, facilities management, and network operations. For rural counties weighing tax abatements and infrastructure upgrades, the gap between short-term construction labor and modest permanent payroll becomes a central economic-development question.
By elevating electricity price protection to a presidential talking point, the administration effectively validates this third pillar. What began as local testimony at zoning meetings is now part of national energy policy framing: the principle that ordinary households should not subsidize AI infrastructure through their power bills. That rhetorical shift transforms a local grievance into a broader political issue with statewide and federal implications.
This is no longer just a zoning fight. It is now a kitchen-table affordability issue. Which may be a good start.
The Uncomfortable Math
AI data centers run 24/7, require enormous continuous baseload power, often demand dedicated substations, and can trigger multi-billion-dollar transmission upgrades. In regulated utility regions, those upgrades may be socialized across ratepayers unless cost allocation rules are enforced.
That is the central fear: even if tech companies pay for direct interconnection, broader grid reinforcement costs may still reach residential customers. If “ratepayer protection” pledges gain traction, this would mark a major federal acknowledgement that the risk is politically real.
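To see why cost allocation is the whole ballgame, here is a deliberately crude back-of-envelope sketch. Every number in it is an assumption chosen for round figures, not a claim about any actual project, utility, or tariff; the point is only that even a partially socialized upgrade becomes a recurring line item on millions of household bills.

```python
# back_of_envelope.py
# Purely illustrative arithmetic with made-up round numbers. Actual allocation
# depends on state ratemaking, customer class, and how upgrades are assigned.

DATA_CENTER_LOAD_MW = 500            # assumed new hyperscale load
HOURS_PER_YEAR = 8760
annual_energy_mwh = DATA_CENTER_LOAD_MW * HOURS_PER_YEAR      # ~4.4 million MWh/yr

TRANSMISSION_UPGRADE_COST = 2_000_000_000   # assumed $2B of grid reinforcement
SHARE_SOCIALIZED = 0.5                      # assumed half allocated to general ratepayers
RESIDENTIAL_CUSTOMERS = 3_000_000           # assumed households in the territory
AMORTIZATION_YEARS = 20

per_household_per_year = (
    TRANSMISSION_UPGRADE_COST * SHARE_SOCIALIZED / AMORTIZATION_YEARS / RESIDENTIAL_CUSTOMERS
)

print(f"New annual energy demand: {annual_energy_mwh:,.0f} MWh")
print(f"Socialized upgrade cost: ${per_household_per_year:.2f} per household per year")
```

Change the socialized share to zero and the household line item disappears, which is precisely what a meaningful “ratepayer protection pledge” would have to guarantee.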
Why This Is Bigger Than Trump
Governors in data-center-heavy states have also expressed concern. Utilities want load growth but fear rate shock. Grid operators face pressure to accelerate capacity procurement without triggering bill spikes. Grassroots activists have argued the AI buildout is outpacing responsible grid planning — and that argument has now moved from local meetings to national politics.
Whether any president—including Trump—can truly compel hyperscale tech firms to absorb rising power and infrastructure costs remains uncertain. Without formal regulation, enforcement tools are limited to negotiation, procurement leverage, and public pressure, all of which depend on the companies’ strategic interests.
Voluntary pledges can signal cooperation but lack binding force, especially if market conditions shift. The Trump announcement also raises a political question: does the “pledge” represent a balancing act inside the administration between economic populists and China hawks like Peter Navarro, often associated with industrial-policy cost discipline, and pro-AI growth lobbyists such as Silicon Valley’s AI Viceroy David Sacks? If so, the commitment may reflect an internal compromise as much as an external policy toward accelerationist hyperscalers.
Data-center growth is turning electricity affordability into a geopolitical issue, not just a local zoning fight. When hyperscalers drop a 100–500 MW load into a market, they can tighten reserve margins, push up wholesale prices, and force expensive transmission and distribution upgrades—costs that governments then have to allocate between the new entrant and everyone else. That same demand can crowd out electrification priorities (heat pumps, EVs, industrial decarbonization) or trigger emergency procurement of “firm” power—often gas—because reliability deadlines don’t wait for ideal renewable buildouts.
We are way past McDonald’s on the Champs-Élysées
This is where “net zero” starts to look like it’s in the rear-view mirror. Many jurisdictions still talk about decarbonization, but the near-term political imperative is keeping the lights on and bills stable. If the choice is between fast AI load growth and strict emissions trajectories, the operational reality in many grids is that fossil backup and accelerated thermal approvals re-enter the picture—sometimes explicitly, sometimes quietly. Meanwhile, countries with abundant cheap power (hydro, nuclear, subsidized gas) gain leverage as preferred data-center destinations, while constrained grids face moratoria, queue rationing, and public backlash.
The pattern repeats across markets: hyperscale load competes directly with electrification goals such as EV adoption, heat pumps, and industrial decarbonization, while reliability timelines push utilities toward fast, firm capacity—frequently gas—because intermittent renewables and storage cannot always be deployed quickly enough.
In that sense, Trump’s choices increasingly resemble a classic “guns and butter” dilemma. Policymakers must balance the strategic push for AI infrastructure and digital competitiveness against long-term climate commitments. While net-zero targets remain official policy in many jurisdictions, near-term choices often prioritize keeping power reliable and affordable, even if that means slowing emissions progress. The tension does not necessarily mean decarbonization disappears, but it underscores the difficulty of advancing both rapid AI build-out and strict net-zero trajectories simultaneously under real-world grid constraints.
Ratepayers Get the Immediate Proof: Utility Bills
If the White House advances voluntary ratepayer-protection pledges, several trajectories could unfold. Technology companies may publicly commit to absorbing incremental grid and infrastructure costs, framing the move as responsible corporate citizenship. Personally, I don’t think Trump actually believes it, and I fully expect that the teleprompter will say one thing, and then in a classic Trump aside, he will undercut the speech writers.
Utilities, facing rising capital requirements, could press for clearer cost-allocation rules to ensure large-load customers bear system expansion expenses. State public-utility commissions might reopen tariffs and special-contract pricing for hyperscale users, testing how far voluntary commitments translate into enforceable rate structures.
Meanwhile, grassroots groups are likely to demand transparent accounting to verify that ordinary customers are insulated from price impacts. Yet the full economic value of any pledge will emerge only over years of build-out and rate cases—long after the current administration, and Trump himself, are no longer in office.
For the moment, the debate has shifted. Grassroots opposition is no longer just about land or water. It is about who pays when AI reshapes the grid — and now the president is talking about it.
Let’s say I’m wrong and Trump is serious about reining in AI. If Trump were able to make such a policy stick, it could mark a broader shift in how governments confront the external costs of rapid AI expansion. Requiring hyperscalers to internalize infrastructure and power burdens could slow the breakneck build-out that fuels large-scale model training and synthetic media proliferation.
For artists and performers, that deceleration could matter. The fight over voice, likeness, and identity—already highlighted by figures such as Brad Pitt and Tom Cruise being ripped off by China’s Seedance 2.0—centers on protecting human personhood from industrial-scale replication. A structural slowdown in AI growth would not end that conflict, but it could rebalance leverage, giving creators, unions, and policymakers more time to establish enforceable guardrails.
Paul Sinclair’s framing of generative music AI as a choice between “open studios” and permissioned systems makes a basic category mistake. Consent is not a creative philosophy or a branding position. It is a systems constraint. You cannot “prefer” consent into existence. A permissioned system either enforces authorization at the level where machine learning actually occurs—or it does not exist at all.
That distinction matters not only for artists, but for the long-term viability of AI companies themselves. Platforms built on unresolved legal exposure may scale quickly, but they do so on borrowed time. Systems built on enforceable consent may grow more slowly at first, but they compound durability, defensibility, and investor confidence over time. Legality is not friction. It is infrastructure. It’s a real “eat your vegetables” moment.
The Great Reset
Before any discussion of opt-in, licensing, or future governance, one prerequisite must be stated plainly: a true permissioned system requires a hard reset of the model itself. A model trained on unlicensed material cannot be transformed into a consent-based system through policy changes, interface controls, or aspirational language. Once unauthorized material is ingested and used for training, it becomes inseparable from the trained model. There is no technical “undo” button.
The debate is often framed as openness versus restriction, innovation versus control. That framing misses the point. The real divide is whether a system is built to respect authorization where machine learning actually happens. A permissioned system cannot be layered on top of models trained without permission, nor can it be achieved by declaring legacy models “deprecated.” Machine learning systems do not forget unless they are reset. The purpose of a trained model is remembering—preserving statistical patterns learned from its data—not forgetting. Models persist, shape downstream outputs, and retain economic value long after they are removed from public view. Administrative terminology is not remediation.
Recent industry language about future “licensed models” implicitly concedes this reality. If a platform intends to operate on a consent basis, the logical consequence is unavoidable: permissioned AI begins with scrapping the contaminated model and rebuilding from zero using authorized data only.
Why “Untraining” Does Not Solve the Problem
Some argue that problematic material can simply be removed from an existing model through “untraining.” In practice, this is not a reliable solution. Modern machine-learning systems do not store discrete copies of works; they encode diffuse statistical relationships across millions or billions of parameters. Once learned, those relationships cannot be surgically excised with confidence. It’s not Harry Potter’s Pensieve.
Even where partial removal techniques exist, they are typically approximate, difficult to verify, and dependent on assumptions about how information is represented internally. A model may appear compliant while still reflecting patterns derived from unauthorized data. For systems claiming to operate on affirmative permission, approximation is not enough. If consent is foundational, the only defensible approach is reconstruction from a clean, authorized corpus.
The Structural Requirements of Consent
Once a genuine reset occurs, the technical requirements of a permissioned system become unavoidable.
Authorized training corpus. Every recording, composition, and performance used for training must be included through affirmative permission. If unauthorized works remain, the model remains non-consensual.
Provenance at the work level. Each training input must be traceable to specific authorized recordings and compositions with auditable metadata identifying the scope of permission.
Enforceable consent, including withdrawal. Authorization must allow meaningful limits and revocation, with systems capable of responding in ways that materially affect training and outputs.
Segregation of licensed and unlicensed data. Permissioned systems require strict internal separation to prevent contamination through shared embeddings or cross-trained models.
Transparency and auditability. Permission claims must be supported by documentation capable of independent verification. Transparency here is engineering documentation, not marketing copy.
These are not policy preferences. They are practical consequences of a consent-based architecture.
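To make those requirements concrete, here is a minimal sketch of what a work-level provenance record might look like in code. The field names are illustrative assumptions, not an existing industry schema; the point is that consent scope, withdrawal, segregation, and auditability all have to be representable, and queryable, at the level of individual works.

```python
# provenance_sketch.py
# A minimal, hypothetical work-level provenance record. Field names are
# illustrative, not a standard or any vendor's schema.

from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional


@dataclass
class ProvenanceRecord:
    recording_isrc: str                    # identifies the specific recording
    work_iswc: Optional[str]               # identifies the underlying composition, if known
    rights_holders: List[str]              # parties who granted permission
    license_id: str                        # reference to the signed authorization
    permitted_uses: List[str]              # e.g., ["training", "fine-tuning"]
    consent_granted: date
    consent_revoked: Optional[date] = None     # withdrawal must be representable
    dataset_partition: str = "licensed"        # enforced segregation from unlicensed data
    audit_log: List[str] = field(default_factory=list)  # who verified what, and when

    def usable_for(self, purpose: str, as_of: date) -> bool:
        """Permission is valid only if granted for this purpose and not withdrawn."""
        if self.consent_revoked is not None and as_of >= self.consent_revoked:
            return False
        return purpose in self.permitted_uses
```

If a platform cannot produce something like this for every input in its corpus, its consent claims are marketing copy, not engineering documentation.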
The Economic Reality—and Upside—of Reset
Rebuilding models from scratch is expensive. Curating authorized data, retraining systems, implementing provenance, and maintaining compliance infrastructure all require significant investment. Not every actor will be able—or willing—to bear that cost. But that burden is not an argument against permission. It is the price of admission.
Crucially, that cost is also largely non-recurring. A platform that undertakes a true reset creates something scarce in the current AI market: a verifiably permissioned model with reduced litigation risk, clearer regulatory posture, and greater long-term defensibility. Over time, such systems are more likely to attract durable partnerships, survive scrutiny, and justify sustained valuation.
Throughout technological history, companies that rebuilt to comply with emerging legal standards ultimately outperformed those that tried to outrun them. Permissioned AI follows the same pattern. What looks expensive in the short term often proves cheaper than compounding legal uncertainty.
Architecture, Not Branding
This is why distinctions between “walled garden,” “opt-in,” or other permission-based labels tend to collapse under technical scrutiny. Whatever the terminology, a system grounded in authorization must satisfy the same engineering conditions—and must begin with the same reset. Branding may vary; infrastructure does not.
Permissioned AI is possible. But it is reconstructive, not incremental. It requires acknowledging that past models are incompatible with future claims of consent. It requires making the difficult choice to start over.
The irony is that legality is not the enemy of scale—it is the only path to scale that survives. Permission is not aspiration. It is architecture.
Over the last two weeks, grassroots opposition to data centers has moved from sporadic local skirmishes to a recognizable national pattern. While earlier fights centered on land use, noise, and tax incentives, the current phase is more focused and more dangerous for developers: water.
Across multiple states, residents are demanding to see the “water math” behind proposed data centers—how much water will be consumed (not just withdrawn), where it will come from, whether utilities can actually supply it during drought conditions, and what enforceable reporting and mitigation requirements will apply. In arid regions, water scarcity is an obvious constraint. But what’s new is that even in traditionally water-secure states, opponents are now framing data centers as industrial-scale consumptive users whose needs collide directly with residential growth, agriculture, and climate volatility.
The result: moratoria, rezoning denials, delayed hearings, task forces, and early-stage organizing efforts aimed at blocking projects before entitlements are locked in.
Below is a snapshot of how that opposition has played out state by state over the last two weeks.
State-by-State Breakdown
Virginia
Virginia remains ground zero for organized pushback.
Botetourt County: Residents confronted the Western Virginia Water Authority over a proposed Google data center, pressing officials about long-term water supply impacts and groundwater sustainability.
Hanover County (Richmond region): The Planning Commission voted against recommending rezoning for a large multi-building data center project.
State Legislature: Lawmakers are advancing reform proposals that would require water-use modeling and disclosure.
Georgia
Metro Atlanta / Middle Georgia: Local governments’ recruitment of hyperscale facilities is colliding with resident concerns.
DeKalb County: An extended moratorium reflects a pause-and-rewrite-the-rules strategy.
Monroe County / Forsyth area: Data centers have become a local political issue.
Arizona
The state has moved to curb groundwater use in rural basins via new regulatory designations requiring tracking and reporting.
Local organizing frames AI data centers as unsuitable for arid regions.
Maryland
Prince George’s County (Landover Mall site): Organized opposition centered on environmental justice and utility burdens.
Authorities have responded with a pause/moratorium and a task force.
Indiana
Indianapolis (Martindale-Brightwood): Packed rezoning hearings forced extended timelines.
Greensburg: Overflow crowds framed the fight around water-user rankings.
Oklahoma
Luther (OKC metro): Organized opposition before formal filings.
Michigan
Broad local opposition with water and utility impacts cited.
State-level skirmishes over incentives intersect with water-capacity debates.
North Carolina
Apex (Wake County area): Residents object to strain on electricity and water.
Wisconsin & Pennsylvania
Corporate messaging shifts in response to opposition; Microsoft acknowledged infrastructure and water burdens.
The Through-Line: “Show Us the Water Math”
(Think of the well scene in Lawrence of Arabia.)
Across these states, the grassroots playbook has converged:
Pack the hearing.
Demand water-use modeling and disclosure.
Attack rezoning and tax incentives.
Force moratoria until enforceable rules exist.
Residents are demanding hard numbers: consumptive losses, aquifer drawdown rates, utility-system capacity, drought contingencies, and legally binding mitigation.
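For readers who want to see what the simplest version of that water math looks like, here is a hedged back-of-envelope sketch. The load figure, the water-usage-effectiveness number, and the assumption of evaporative cooling are all placeholders; the exercise only shows why residents want the actual inputs disclosed rather than summarized.

```python
# water_math_sketch.py
# Illustrative only: every input is an assumption, and real consumption varies
# enormously with cooling design, climate, and water source.

FACILITY_LOAD_MW = 200            # assumed facility load
HOURS_PER_YEAR = 8760
WUE_LITERS_PER_KWH = 1.8          # assumed water usage effectiveness (evaporative cooling)

annual_kwh = FACILITY_LOAD_MW * 1000 * HOURS_PER_YEAR
annual_liters = annual_kwh * WUE_LITERS_PER_KWH
annual_gallons = annual_liters / 3.785

print(f"Assumed annual consumptive water use: {annual_gallons / 1e6:,.0f} million gallons")
```

Whether a given aquifer or utility system can absorb a number like that, year after year and through drought, is exactly the question communities are asking zoning boards to answer before entitlements are granted.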
Why This Matters for AI Policy
This revolt exposes the physical contradiction at the heart of the AI infrastructure build-out: compute is abstract in policy rhetoric but experienced locally as land, water, power, and noise.
Communities are rejecting a development model that externalizes its physical costs onto local water systems and ratepayers.
Water is now the primary political weapon communities are using to block, delay, and reshape AI infrastructure projects.
South Korea has become the latest flashpoint in a rapidly globalizing conflict over artificial intelligence, creator rights and copyright. A broad coalition of Korean creator and copyright organizations—spanning literature, journalism, broadcasting, screenwriting, music, choreography, performance, and visual arts—has issued a joint statement rejecting the government’s proposed Korea AI Action Plan, warning that it risks allowing AI companies to use copyrighted works without meaningful permission or payment.
The groups argue that the plan signals a fundamental shift away from a permission-based copyright framework toward a regime that prioritizes AI deployment speed and “legal certainty” for developers, even if that certainty comes at the expense of creators’ control and compensation. Their statement is unusually blunt: they describe the policy direction as a threat to the sustainability of Korea’s cultural industries and pledge continued opposition unless the government reverses course.
The controversy centers on Action Plan No. 32, which promotes “activating the ecosystem for the use and distribution of copyrighted works for AI training and evaluation.” The plan directs relevant ministries to prepare amendments—either to Korea’s Copyright Act, the AI Basic Act, or through a new “AI Special Act”—that would enable AI training uses of copyrighted works without legal ambiguity.
Creators argue that “eliminating legal ambiguity” reallocates legal risk rather than resolves it. Instead of clarifying consent requirements or building licensing systems, the plan appears to reduce the legal exposure of AI developers while shifting enforcement burdens onto creators through opt-out or technical self-help mechanisms.
Similar policy patterns have emerged in the United Kingdom and India, where governments have emphasized legal certainty and innovation speed while creative sectors warn of erosion to prior-permission and fair-compensation norms. South Korea’s debate stands out for the breadth of its opposition and the clarity of the warning from cultural stakeholders.
The South Korean government avoids using the term “safe harbor,” but its plan to remove “legal ambiguity” reads like an effort to build one. The asymmetry is telling: rather than eliminating ambiguity by strengthening consent and payment mechanisms, the plan seeks to eliminate ambiguity by making AI training easier to defend as lawful—without meaningful consent or compensation frameworks. That is, in substance, a safe harbor, and a species of blanket license. The resulting “certainty” would function as a pass for AI companies, while creators are left to police unauthorized use after the fact, often through impractical opt-out mechanisms—to the extent such rights remain enforceable at all.
A grass‑roots “data center and electric grid rebellion” is emerging across the United States as communities push back against the local consequences of AI‑driven infrastructure expansion. Residents are increasingly challenging large‑scale data centers and the transmission lines needed to power them, citing concerns about enormous electricity demand, water consumption, noise pollution, land use, declining property values, and opaque approval processes. What were once routine zoning or utility hearings are now crowded, contentious events, with citizens organizing quickly and sharing strategies across counties and states.
This opposition is no longer ad hoc. In Northern Virginia—often described as the global epicenter of data centers—organized campaigns such as the Coalition to Protect Prince William County have mobilized voters, fundraised for local elections, demanded zoning changes, and challenged approvals in court. In Maryland’s Prince George’s County, resistance has taken on a strong environmental‑justice framing, with groups like the South County Environmental Justice Coalition arguing that data centers concentrate environmental and energy burdens in historically marginalized communities and calling for moratoria and stronger safeguards.
Nationally, consumer and civic groups are increasingly coordinated, using shared data, mapping tools, and media pressure to argue that unchecked data‑center growth threatens grid reliability and shifts costs onto ratepayers. Together, these campaigns signal a broader political reckoning over who bears the costs of the AI economy.
Here’s a snapshot of grass-roots opposition in Texas, Louisiana, and Nevada:
Texas
Texas has some of the most active and durable local opposition, driven by land use, water, and transmission corridors.
Hill Country & Central Texas (Burnet, Llano, Gillespie, Blanco Counties): Grass-roots groups formed initially around high-voltage transmission lines (765 kV) tied to load growth, now explicitly linking those lines to data center demand. Campaigns emphasize:
rural land fragmentation
wildfire risk
eminent domain abuse
lack of local benefit
These groups are often informal coalitions of landowners rather than NGOs, but they coordinate testimony, public-records requests, and local elections.
DFW & North Texas: Neighborhood associations opposing rezoning for hyperscale facilities focus on noise (backup generators), property values, and school-district tax distortions created by data-center abatements.
ERCOT framing: Texas groups uniquely argue that data centers are socializing grid instability risk onto residential ratepayers while privatizing upside—an argument that resonates with conservative voters.
Louisiana
Opposition is newer but coalescing rapidly, often tied to petrochemical and LNG resistance networks.
North Louisiana & Mississippi River Corridor: Community groups opposing new data centers frame them as:
“energy parasites” tied to gas plants
extensions of an already overburdened industrial corridor
threats to water tables and wetlands
Organizers often overlap with environmental-justice and faith-based coalitions that previously fought refineries and export terminals.
Key tactic: reframing data centers as industrial facilities, not “tech,” triggering stricter land-use scrutiny.
Nevada
Nevada opposition centers on water scarcity and public-land use.
Clark County & Northern Nevada: Residents and conservation groups question:
water allocations for evaporative cooling
siting near public or BLM-managed land
grid upgrades subsidized by ratepayers for private AI firms
Distinct Nevada argument: data centers compete directly with housing and tribal water needs, not just environmental values.
What would Freud make of machine “learning”? It’s a strange question to ask about AI and copyright, but a useful one. When generative-AI fans insist that training models on copyrighted works is merely “learning like a human,” they rely on a metaphor that collapses under even minimal scrutiny. Psychoanalysis—whatever one thinks of Freud’s conclusions—begins from a premise that modern AI rhetoric quietly denies: the unconscious is not a database, and humans are not machines.
As Freud wrote in The Interpretation of Dreams, “Our memory has no guarantees at all, and yet we bow more often than is objectively justified to the compulsion to believe what it says.” No AI truthiness there.
Human learning does not involve storing perfect, retrievable copies of what we read, hear, or see. Memory is reconstructive, shaped by context, emotion, repression, and time. Dreams do not replay inputs; they transform them. What persists is meaning, not a file.
AI training works in the opposite direction—obviously. Training begins with high-fidelity copying at industrial scale. It converts human expressive works into durable statistical parameters designed for reuse, recall, and synthesis for eternity. Where the human mind forgets, distorts, and misremembers as a feature of cognition, models are engineered to remember as much as possible, as efficiently as possible, and to deploy those memories at superhuman speed. Nothing like humans.
Calling these two processes “the same kind of learning” is not analogy—it is misdirection. And that misdirection matters, because copyright law was built around the limits of human expression: scarcity, imperfection, and the fact that learning does not itself create substitute works at scale.
Dream-Work Is Not a Training Pipeline
Freud’s theory of dreams turns on a simple but powerful idea: the mind does not preserve experience intact. Instead, it subjects experience to dream-work—processes like condensation (many ideas collapsed into one image), displacement (emotional significance shifted from one object to another), and symbolization (one thing representing another, allowing humans to create meaning and understanding through symbols). The result is not a copy of reality but a distorted, overdetermined construction whose origins cannot be cleanly traced.
This matters because it shows what makes human learning human. We do not internalize works as stable assets. We metabolize them. Our memories are partial, fallible, and personal. Two people can read the same book and walk away with radically different understandings—and neither “contains” the book afterward in any meaningful sense. There is no Rashomon effect for an AI.
AI training is the inverse of dream-work. It depends on perfect copying at ingestion, retention of expressive regularities across vast parameter spaces, and repeatable reuse untethered from embodiment, biography, or forgetting. If Freud’s model describes learning as transformation through loss, AI training is transformation through compression without forgetting.
One produces meaning. The other produces capacity.
The Unconscious Is Not a Database
Psychoanalysis rejects the idea that memory functions like a filing cabinet. The unconscious is not a warehouse of intact records waiting to be retrieved. Memory is reconstructed each time it is recalled, reshaped by narrative, emotion, and social context. Forgetting is not a failure of the system; it is a defining feature.
AI systems are built on the opposite premise. Training assumes that more retention is better, that fidelity is a virtue, and that expressive regularities should remain available for reuse indefinitely. What human cognition resists by design—perfect recall at scale—machine learning seeks to maximize.
This distinction alone is fatal to the “AI learns like a human” claim. Human learning is inseparable from distortion, limitation, and individuality. AI training is inseparable from durability, scalability, and reuse.
In The Divided Self, R. D. Laing rejects the idea that the mind is a kind of internal machine storing stable representations of experience. What we encounter instead is a self that exists only precariously, defined by what Laing calls “ontological security” or its absence—the sense of being real, continuous, and alive in relation to others. Experience, for Laing, is not an object that can be detached, stored, or replayed; it is lived, relational, and vulnerable to distortion. He warns repeatedly against confusing outward coherence with inner unity, emphasizing that a person may present a fluent, organized surface while remaining profoundly divided within. That distinction matters here: performance is not understanding, and intelligible output is not evidence of an interior life that has “learned” in any human sense.
Why “Unlearning” Is Not Forgetting
Once you understand this distinction, the problem with AI “unlearning” becomes obvious.
In human cognition, there is no clean undo. Memories are never stored as discrete objects that can be removed without consequence. They reappear in altered forms, entangled with other experiences. Freud’s entire thesis rests on the impossibility of clean erasure.
AI systems face the opposite dilemma. They begin with discrete, often unlawful copies, but once those works are distributed across parameters, they cannot be surgically removed with certainty. At best, developers can stop future use, delete datasets, retrain models, or apply partial mitigation techniques (none of which they are willing to even attempt). What they cannot do is prove that the expressive contribution of a particular work has been fully excised.
This is why promises (especially contractual promises) to “reverse” improper ingestion are so often overstated. The system was never designed for forgetting. It was designed for reuse.
Why This Matters for Fair Use and Market Harm
The “AI = human learning” analogy does real damage in copyright analysis because it smuggles conclusions into fair-use factor one (transformative purpose and character) and obscures factor four (market harm).
Learning has always been tolerated under copyright law because learning does not flood markets. Humans do not emerge from reading a novel with the ability to generate thousands of competing substitutes at scale. Generative models do exactly that—and only because they are trained through industrial-scale copying.
Copyright law is calibrated to human limits. When those limits disappear, the analysis must change with them. Treating AI training as merely “learning” collapses the very distinction that makes large-scale substitution legally and economically significant.
The Pensieve Fallacy
There is a world in which minds function like databases. It is a fictional one.
In Harry Potter and the Goblet of Fire, wizards can extract memories, store them in vials, and replay them perfectly using a Pensieve. Memories in that universe are discrete, stable, lossless objects. They can be removed, shared, duplicated, and inspected without distortion. As Dumbledore explained to Harry, “I use the Pensieve. One simply siphons the excess thoughts from one’s mind, pours them into the basin, and examines them at one’s leisure. It becomes easier to spot patterns and links, you understand, when they are in this form.”
That is precisely how AI advocates want us to imagine learning works.
But the Pensieve is magic because it violates everything we know about human cognition. Real memory is not extractable. It cannot be replayed faithfully. It cannot be separated from the person who experienced it. Arguably, Freud’s work exists because memory is unstable, interpretive, and shaped by conflict and context.
AI training, by contrast, operates far closer to the Pensieve than to the human mind. It depends on perfect copies, durable internal representations, and the ability to replay and recombine expressive material at will.
The irony is unavoidable: the metaphor that claims to make AI training ordinary only works by invoking fantasy.
Humans Forget. Machines Remember.
Freud would not have been persuaded by the claim that machines “learn like humans.” He would have rejected it as a category error. Human cognition is defined by imperfection, distortion, and forgetting. AI training is defined by reproduction, scale, and recall.
To believe AI learns like a human, you have to believe humans have Pensieves. They don’t. That’s why Pensieves appear in Harry Potter—not neuroscience, copyright law, or reality.
If the tech industry has a signature fallacy for the 2020s aside from David Sacks, it belongs to Jensen Huang. The CEO of Nvidia has perfected a circular, self-consuming logic so brazen that it deserves a name: The Paradox of Huang’s Rope. It is the argument that China is too dangerous an AI adversary for the United States to regulate artificial intelligence at home or control export of his Nvidia chips abroad—while insisting in the very next breath that the U.S. must allow him to keep selling China the advanced Nvidia chips that make China’s advanced AI capabilities possible. The justification destroys its own premise, like handing an adversary the rope to hang you and then pointing to the length of that rope as evidence that you must keep selling more, perhaps to ensure a more “humane” hanging. I didn’t think it was possible to beat “sharing is caring” for utter fallacious bollocks.
The Paradox of Huang’s Rope works like this: First, hype China as an existential AI competitor. Second, declare that any regulatory guardrails—whether they concern training data, safety, export controls, or energy consumption—will cause America to “fall behind.” Third, invoke national security to insist that the U.S. government must not interfere with the breakneck deployment of AI systems across the economy. And finally, quietly lobby for carveouts that allow Nvidia to continue selling ever more powerful chips to the same Chinese entities supposedly creating the danger that justifies deregulation.
It is a master class in circularity: “China is dangerous because of AI → therefore we can’t regulate AI → therefore we must sell China more AI chips → therefore China is even more dangerous → therefore we must regulate even less and export even more to China.” At no point does the loop allow for the possibility that reducing the United States’ role as China’s primary AI hardware supplier might actually reduce the underlying threat. Instead, the logic insists that the only unacceptable risk is the prospect of Nvidia making slightly less money.
This is not hypothetical. While Washington debates export controls, Huang has publicly argued that restrictions on chip sales to China could “damage American technology leadership”—a claim that conflates Nvidia’s quarterly earnings with the national interest. Meanwhile, U.S. intelligence assessments warn that China is building fully autonomous weapons systems, and European analysts caution that Western-supplied chips are appearing in PLA research laboratories. Yet the policy prescription from Nvidia’s corner remains the same: no constraints on the technology, no accountability for the supply chain, and no acknowledgment that the market incentives involved have nothing to do with keeping Americans safe. And anyone who criticizes the authoritarian state run by the Chinese Communist Party is a “China Hawk” which Huang says is a “badge of shame” and “unpatriotic” because protecting America from China by cutting off chip exports “destroys the American Dream.” Say what?
The Paradox of Huang’s Rope mirrors other Cold War–style fallacies, in which companies invoke a foreign threat to justify deregulation while quietly accelerating that threat through their own commercial activity. But in the AI context, the stakes are higher. AI is not just another consumer technology; its deployment shapes military posture, labor markets, information ecosystems, and national infrastructure. A strategic environment in which U.S. corporations both enable and monetize an adversary’s technological capabilities is one that demands more regulation, not less.
Naming the fallacy matters because it exposes the intellectual sleight of hand. Once the circularity is visible, the argument collapses. The United States does not strengthen its position by feeding the very capabilities it claims to fear. And it certainly does not safeguard national security by allowing one company’s commercial ambitions to dictate the boundaries of public policy. The Paradox of Huang’s Rope should not guide American AI strategy. It should serve as a warning of how quickly national priorities can be twisted into a justification for private profit.