The AI Subsidy Is Over. Or Maybe It’s Just Beginning.


The current narrative says the “AI subsidy era” is ending. Prices are rising. Rate limits are tightening. Ads are creeping in. Enterprise tiers are replacing all-you-can-eat plans. In short: users will finally start paying what AI actually costs.

Hayden Field, writing in The Verge, tells us:

Earlier this month, millions of OpenClaw users woke up to a sweeping mandate: The viral AI agent tool, which this year took the worldwide tech industry by storm, had been severely restricted by Anthropic.

Anthropic, like other leading AI labs, was under immense pressure to lessen the strain on its systems and start turning a profit. So if the users wanted its Claude AI to power their popular agents, they’d have to start paying handsomely for the privilege.

“Our subscriptions weren’t built for the usage patterns of these third-party tools,” wrote Boris Cherny, head of Claude Code, on X. “We want to be intentional in managing our growth to continue to serve our customers sustainably long-term. This change is a step toward that.”

The announcement was a sign of the times. Investors have poured hundreds of billions of dollars into companies like OpenAI and Anthropic to help them scale and build out their compute. Now, they’re expecting returns. After years of offering cheap or totally free access to advanced AI systems, the bill is starting to come due — and downstream, users are beginning to feel the pinch.

That’s true, but it’s leaving out a lot.

Yes, the consumer subsidy—venture-backed underpricing of inference—may be winding down. But the broader subsidy system that made AI possible isn’t going away. It’s expanding. Just ask President Trump.

To understand why, you have to go back to the last great digital disruption.

From P2P to Streaming to AI

Start with Napster.

P2P didn’t just enable infringement. It rewired expectations. It taught users that all music should be available, instantly, for free. Why? Because there was gold in them long tails. Forget about supply and demand; we had infinite supply, so demand would take care of itself.


Every artist, songwriter, label, and publisher in the history of recorded music went uncompensated for this shift. They were its involuntary financiers. Their catalogs created the demand, the network effects, and the user adoption that built the early internet music economy.

Streaming—think Spotify—didn’t reverse that logic. It formalized it. (Remember, streaming saved us from piracy, and we should all be so grateful.) It actually transferred that involuntary financing from the P2P balance sheet to Spotify’s, and took it public.


Streaming platforms accepted a new baseline: the entire world’s repertoire must be available at all times, regardless of demand. That is a costly and structurally inefficient mandate, but it became the price of competing in a market shaped by P2P expectations. Licensing systems like the Mechanical Licensing Collective (MLC) were built to support that scale, but the underlying premise remained: total availability first, compensation second.

AI changes the game again.

AI Doesn’t Just Distribute Works. It Consumes Them.

P2P distributed music. Streaming licensed it. AI models ingest it.

That’s the critical difference.

Generative AI systems are trained on massive corpora that include copyrighted works, performances, and what we might call personhood signals—voice, style, tone, phrasing, and creative identity. These inputs are not just indexed or streamed. They are transmogrified (see what I did there) into model weights that can generate new outputs that compete with, mimic, or substitute for the originals.

So the role of the artist evolves:
    •    In P2P: unpaid distributor subsidy
    •    In streaming: underpaid inventory supplier
    •    In AI: uncompensated production input

That is not a marginal shift. It is a structural one.

The Real Subsidy Stack

When people say the “AI subsidy era is over,” they are usually talking about one thing: cheap access to compute.
But AI has always depended on a multi-layered subsidy stack:

    Creators – supply training data, cultural value, and identity signals without compensation or consent
    Users – supply prompts, feedback, and behavioral data that improve the models
    Communities – absorb land use, water consumption, and environmental costs
    Ratepayers – fund grid upgrades, transmission, and reliability for data center demand
    Venture capital – underwrites early losses to drive adoption and scale

The shift we are seeing now is not the end of subsidies. It’s a reallocation. Or as a cynic might say, it’s rearranging the deck chairs to hide the lifeboats.

Users may start paying more. But creators still aren’t being paid for training. Communities are still being asked to host infrastructure. And the physical footprint of AI is accelerating. Just ask President Trump.

The World Turned Upside Down

What makes this moment different is the scale of the buildout.
We are not just talking about apps anymore. We are talking about an industrial transformation:
    •    New data centers the size of small cities
    •    High-voltage transmission lines
    •    Water-intensive cooling systems
    •    Semiconductor supply chains
    •    And even discussions of new nuclear capacity to support compute demand

This is infrastructure on the scale of a national project, or more like a national mobilization. But it is being built on top of a premise that has not been resolved: the uncompensated use of human creative work as training input.

That is the inversion: We are building power plants for systems that depend on not paying the people whose work makes those systems possible.

A Better Frame

The cleanest way to understand this is as a continuum:

P2P turned infringement into consumer expectation.
Streaming turned that expectation into platform infrastructure.
AI turns uncompensated authorship into industrial feedstock.

Or more bluntly:
The AI free ride is not ending. It is being re-invoiced. Users may now see higher prices. But the deeper subsidies—creative, environmental, and civic—remain off the books.

What Comes Next

If the industry is serious about “pricing AI correctly,” it cannot stop at compute.

It has to address:
    •    Compensation frameworks for training data
    •    Attribution and provenance standards
    •    Licensing models for style and voice
    •    Infrastructure cost allocation (who pays for the grid?)
    •    Governance of large-scale compute deployment

Otherwise, we are not exiting the subsidy era. We are doing what Big Tech lives for.

We are scaling it.

And this time, instead of a few server racks in a dorm room, we are building a global energy system around it.

The Constitutional Shadow of the White House AI Framework: Law Without Law

One of the most important things about the White House AI framework released last week is what it is not.

It is not an executive order.

That may sound like a technical distinction, but it is doing an enormous amount of work here. Because by avoiding the form of an executive order, the framework avoids something even more important: judicial review.

An executive order that attempted to declare AI training on copyrighted works lawful—or to constrain Congress from acting—would immediately invite challenge in the very judicial branch the framework also seeks to influence. Oh, that would be fun.

It would raise Administrative Procedure Act questions. It would trigger separation-of-powers scrutiny. It would likely be litigated within days.

This framework does none of that and is not susceptible to judicial challenge.

Instead, it achieves much of the same practical effect—shaping legal outcomes, constraining policy space, and signaling preferred doctrine—without creating a justiciable action. It is, in effect, law without law, and outcomes by positioning. Silicon Valley’s favorite.

Takings by Policy, Not Statute

Start with the most obvious constitutional issue: the Takings Clause of the Fifth Amendment of the U.S. Constitution, which states that “private property [cannot] be taken for public use, without just compensation.”

Copyright is a form of property. That is not controversial. It is a statutory property right grounded in the Constitution’s Intellectual Property Clause, and it carries exclusive rights that have long been understood as economically valuable.

Now consider what the White House framework does.

It declares that AI training—mass, indiscriminate ingestion of copyrighted works—is lawful. It does so without requiring compensation. And it does so in a context where the resulting systems can substitute for, or diminish the market for, the original works.

If that official policy position of the Executive Branch were enacted into law, it would raise a straightforward question:

Has the government authorized the use of private property for public and commercial purposes without compensation? Or more directly, has the Executive Branch just announced that it will not prosecute that indiscriminate ingestion for any reason? Can we expect to see amicus briefs from the Solicitor General opposing copyright owners pursuing their rights in court?

That sounds a lot like a taking.

But because the framework is not law, it avoids the moment where that question must be answered. It does not extinguish rights formally. It renders them economically hollow in practice, while leaving the formal structure intact.

That is the key move: functional elimination without formal abolition.

Ex Post Facto in Everything but Name

The framework also raises a second, less discussed issue: the logic of ex post facto lawmaking.

The Ex Post Facto Clause technically applies to criminal law. But the underlying principle is broader: the government should not change the legal consequences of past conduct to benefit favored actors or disadvantage others. Of course, copyright owners raising this argument will have the Spotify retroactive safe harbor in Title I of the Music Modernization Act thrown in their faces as rank hypocrisy, which they would richly deserve. But as any 10-year-old can tell you, two wrongs don’t make a right, at least in theory.

Here, the timeline matters.

  • Massive datasets have already been scraped.
  • Models have already been trained.
  • The conduct that enabled this may, in many instances, have been legally questionable—and in cases of willful infringement, potentially criminal under federal copyright law. Or, if you listen to me, the largest case of criminal copyright infringement in history.

Now, years after the fact and in the face of more than 150 AI lawsuits all grounded to one degree or another in copyright infringement, comes the policy:

Training is lawful.

That looks less like interpretation and more like retroactive validation.

Even if framed as civil doctrine, the effect is similar to retroactive decriminalization of conduct tied to vested rights. It sends a clear message: conduct that may have been unlawful when undertaken will be treated as lawful because it is now economically indispensable to the broligarchs.

That is not how the rule of law is supposed to work.

Separation of Powers by Suggestion

The framework’s treatment of Congress is equally striking. It does not say Congress lacks authority to legislate. The President cannot say that. Well…he can, but there’s no foundation for the statement. The Constitution is clear: Congress defines copyright.

Instead, the framework says Congress should not act in ways that would affect judicial resolution of the training question.

That is an unusual formulation. Congress legislates in areas under litigation all the time. Indeed, it is often expected to clarify statutory ambiguity.

What the framework is doing is more subtle: It is attempting to shape the legislative field without formally constraining it.

And it pairs that with an implicit second message:

  • Legislation that restricts training or mandates licensing is inconsistent with executive policy.
  • Such legislation is therefore unlikely to be signed by the President. So why bring it?

That is a veto signal—delivered without the political cost of an actual veto.

Judicial Signaling Without Command

The same dynamic applies to the courts.

The framework claims to “defer” to the judiciary. But it simultaneously declares a preferred outcome: training is lawful.

That is not deference. That is signaling.

Judges are, of course, independent. But they do not operate in a vacuum. They are aware of executive priorities, legislative inaction, and market realities. When all three align around a single policy direction, it creates an interpretive gravitational force that is difficult to ignore.

And the signal travels further.

To lawyers.
To regulators.
To anyone whose career may intersect with executive appointment.

It normalizes what counts as a “reasonable” position within the current policy environment.

Prosecutorial Silence as Policy

There is also a more immediate, practical consequence.

While the framework does not have the force of law, it functions as an indirect directive to the Department of Justice. By declaring training lawful as a matter of policy, it signals that federal enforcement resources should not be used to pursue cases premised on the opposite view.

In effect, it tells prosecutors:

Do not spend time considering criminal enforcement for large-scale copyright violations tied to AI training. Do not spend time considering antitrust enforcement against the broligarchs. In fact, don’t spend any time prosecuting anyone regarding AI.

That matters because, for example, willful copyright infringement at scale can, in certain circumstances, give rise to criminal liability. I mean, if that doesn’t, what does? Yet under this framework, even the possibility of such enforcement is quietly set aside.

This is not formal immunity. But in practice, it can look very similar.

Why “Not an Executive Order” Matters

If this were an executive order, all of these issues would be front and center:

  • Is this a taking?
  • Does it exceed executive authority?
  • Does it interfere with Congress?
  • Does it interfere with the Judiciary?

Because it is not an executive order, these important issues remain in the background—present but untested.

That is the genius, and the danger, of the approach.

It allows the executive branch to:

  • Shape doctrine
  • Influence courts
  • Constrain Congress
  • Guide enforcement priorities
  • Normalize contested conduct

—all without triggering the mechanisms designed to check it.

The Constitutional Shadow

The AI framework does not violate the Constitution in any formal sense.

It does something more complicated.

It operates in the constitutional shadow—where policy can reshape rights, incentives, and expectations without ever crossing the line that would allow a court to say no.

But shadows matter.

Because by the time the law catches up—if it ever does—the world the Constitution was meant to govern and protect may already have changed.

Schrödinger’s Training Clause: How Platforms Like WeTransfer Say They’re Not Using Your Files for AI—Until They Are

Tech companies want your content. Not just to host it, but for their training pipeline—to train models, refine algorithms, and “improve services” in ways that just happen to lead to new commercial AI products. But as public awareness catches up, we’ve entered a new phase: deniable ingestion.

Welcome to the world of the Schrödinger’s training clause—a legal paradox where your data is simultaneously not being used to train AI and fully licensed in case they decide to do so.

The Door That’s Always Open

Let’s take the WeTransfer case. For a brief period in July 2025, their Terms of Service included an unmistakable clause: users granted them rights to use uploaded content to “improve the performance of machine learning models.” That language was direct. It caused backlash. And it disappeared.

Many mea culpas later, their TOS has been scrubbed clean of AI references. I appreciate the sentiment, really I do. But (and there’s always a but) the core license hasn’t changed. It’s still:

– Perpetual

– Worldwide

– Royalty-free

– Transferable

– Sub-licensable

They’ve simply returned the problem clause to its quantum box. No machine learning references. But nothing that stops it either.

A Clause in Superposition

Platforms like WeTransfer—and others—have figured out the magic words: Don’t say you’re using data to train AI. Don’t say you’re not using it either. Instead, claim a sweeping license to do anything necessary to “develop or improve the service.”

That vague phrasing allows future pivots. It’s not a denial. It’s a delay. And to delay is to deny.

That’s what makes it Schrödinger’s training clause: Your content isn’t being used for AI. Unless it is. And you won’t know until someone leaks it, or a lawsuit makes discovery public.

The Scrape-Then-Scrub Scenario

Let’s reconstruct what could have happened (not saying it did happen, just could have), following the timeline in The Register:

1. Early July 2025: WeTransfer silently updates its Terms of Service to include AI training rights.

2. Users continue uploading sensitive or valuable content.

3. [Somebody’s] AI systems quickly ingest that data under the granted license.

4. Public backlash erupts mid-July.

5. WeTransfer removes the clause—but to my knowledge never revokes the license retroactively or promises to delete what was scraped. In fact, here’s their statement, which includes this non-denial denial: “We don’t use machine learning or any form of AI to process content shared via WeTransfer.” OK, that’s nice, but that wasn’t the question. And if their TOS was so clear, then why the amendment in the first place?

Here’s the Potential Legal Catch

Even if WeTransfer removed the clause later, any ingestion that occurred during the ‘AI clause window’ is arguably still valid under the terms then in force. As far as I know, they haven’t promised:

– To destroy any trained models

– To purge training data caches

– Or to prevent third-party partners from retaining data accessed lawfully at the time

What Would ‘Undoing’ Scraping Require?

– Audit logs to track what content was ingested and when

– Reversion of any models trained on user data

– Retroactive license revocation and sub-license termination

None of this has been offered that I have seen.
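
To make the first item on that list (audit logs) concrete: a retroactive purge is only verifiable if every ingestion event was logged against the terms in force at the moment of ingestion. Here is a minimal sketch, in Python, of what such a record might look like; the structure and field names are hypothetical illustrations, not drawn from WeTransfer’s or any AI vendor’s actual systems.

```python
import hashlib
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class IngestionRecord:
    """Hypothetical audit entry for one file pulled into a training pipeline."""
    content_sha256: str      # fingerprint of the exact bytes ingested
    ingested_at: datetime    # when the file entered the pipeline
    tos_version: str         # which Terms of Service were in force at that moment
    sublicensees: list[str] = field(default_factory=list)  # downstream recipients

def record_ingestion(data: bytes, tos_version: str) -> IngestionRecord:
    """Log an ingestion event with enough detail to unwind it later."""
    return IngestionRecord(
        content_sha256=hashlib.sha256(data).hexdigest(),
        ingested_at=datetime.now(timezone.utc),
        tos_version=tos_version,
    )

# Without records like these, "we purged what was scraped during the AI
# clause window" is an unverifiable claim: there is nothing to purge against.
entry = record_ingestion(b"user-uploaded file bytes", tos_version="2025-07-early")
print(entry.content_sha256[:16], entry.tos_version)
```

The point of the sketch: if a platform cannot produce records at roughly this level of detail, its assurances cannot be audited even in principle.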

What ‘We Don’t Train on Your Data’ Actually Means

When companies say, “we don’t use your data to train AI,” ask:

– Do you have the technical means to prevent that?

– Is it contractually prohibited?

– Do you prohibit future sublicensing?

– Can I audit or opt out at the file level?

If the answer to any of those is “no,” then the denial is toothless.

How Creators Can Fight Back

1. Use platforms that require active opt-in for AI training.

2. Encrypt files before uploading (see the sketch after this list).

3. Include counter-language in contracts or submission terms:

   “No content provided may be used, directly or indirectly, to train or fine-tune machine learning or artificial intelligence systems, unless separately and explicitly licensed for that purpose in writing” or something along those lines.

4. Call it out. If a platform uses Schrödinger’s language, name it. The only thing tech companies fear more than litigation is transparency.
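
On point 2, here is why client-side encryption neutralizes the Schrödinger’s clause: if the platform only ever receives ciphertext, whatever sweeping license its TOS claims attaches to bytes no model can learn from. A minimal sketch using Python’s cryptography package; the filename and key handling are illustrative, not a production key-management scheme.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Generate a key and keep it OFF the platform: whoever holds the key,
# not the host, decides whether the content is ever readable.
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt locally, before the file ever touches the transfer service.
# "demo_track.wav" is a stand-in for whatever you're sending.
with open("demo_track.wav", "rb") as f:
    original = f.read()
ciphertext = fernet.encrypt(original)

with open("demo_track.wav.enc", "wb") as f:
    f.write(ciphertext)

# Upload demo_track.wav.enc; share the key with your recipient out of
# band (not through the same platform). Decryption reverses it exactly.
assert fernet.decrypt(ciphertext) == original
```

Whatever perpetual, sub-licensable rights the TOS claims, a license to ciphertext is a license to noise.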

What Is to Be Done?

The most dangerous clauses aren’t the ones that scream “AI training.” They’re the ones that whisper, “We’re just improving the service.”

If you’re a creative, legal advisor, or rights advocate, remember: the future isn’t being stolen with force. It’s being licensed away in advance, one unchecked checkbox at a time.

And if a platform’s only defense is “we’re not doing that right now”—that’s not a commitment. That’s a pause.

That’s Schrödinger’s training clause.