Infrastructure, Not Aspiration: Why Permissioned AI Begins With a Hard Reset

Paul Sinclair’s framing of generative music AI as a choice between “open studios” and permissioned systems makes a basic category mistake. Consent is not a creative philosophy or a branding position. It is a systems constraint. You cannot “prefer” consent into existence. A permissioned system either enforces authorization at the level where machine learning actually occurs—or it does not exist at all.

That distinction matters not only for artists, but for the long-term viability of AI companies themselves. Platforms built on unresolved legal exposure may scale quickly, but they do so on borrowed time. Systems built on enforceable consent may grow more slowly at first, but they compound durability, defensibility, and investor confidence over time. Legality is not friction. It is infrastructure. It’s a real “eat your vegetables” moment.

The Great Reset

Before any discussion of opt-in, licensing, or future governance, one prerequisite must be stated plainly: a true permissioned system requires a hard reset of the model itself. A model trained on unlicensed material cannot be transformed into a consent-based system through policy changes, interface controls, or aspirational language. Once unauthorized material is ingested and used for training, it becomes inseparable from the trained model. There is no technical “undo” button.

The debate is often framed as openness versus restriction, innovation versus control. That framing misses the point. The real divide is whether a system is built to respect authorization where machine learning actually happens. A permissioned system cannot be layered on top of models trained without permission, nor can it be achieved by declaring legacy models “deprecated.” Machine learning systems do not forget unless they are reset. The purpose of a trained model is remembering—preserving statistical patterns learned from its data—not forgetting. Models persist, shape downstream outputs, and retain economic value long after they are removed from public view. Administrative terminology is not remediation.

Recent industry language about future “licensed models” implicitly concedes this reality. If a platform intends to operate on a consent basis, the logical consequence is unavoidable: permissioned AI begins with scrapping the contaminated model and rebuilding from zero using authorized data only.

Why “Untraining” Does Not Solve the Problem

Some argue that problematic material can simply be removed from an existing model through “untraining.” In practice, this is not a reliable solution. Modern machine-learning systems do not store discrete copies of works; they encode diffuse statistical relationships across millions or billions of parameters. Once learned, those relationships cannot be surgically excised with confidence. It’s not Harry Potter’s Pensieve.

Even where partial removal techniques exist, they are typically approximate, difficult to verify, and dependent on assumptions about how information is represented internally. A model may appear compliant while still reflecting patterns derived from unauthorized data. For systems claiming to operate on affirmative permission, approximation is not enough. If consent is foundational, the only defensible approach is reconstruction from a clean, authorized corpus.
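To make the asymmetry concrete, here is a deliberately tiny sketch in plain NumPy, using invented toy data: deleting a work from the stored dataset after training changes nothing about the model that has already learned from it. Only retraining on the clean corpus produces different parameters.

```python
import numpy as np

# Toy "training corpus": each row stands in for a work, each column for a feature.
X = np.array([[1.0, 2.0],
              [2.0, 1.0],
              [3.0, 4.0]])            # the third row stands in for an unauthorized work
y = np.array([3.0, 3.0, 7.0])

# "Train" a model: ordinary least squares over the full corpus.
weights, *_ = np.linalg.lstsq(X, y, rcond=None)

# Later, the unauthorized work is deleted from the dataset on disk...
X_clean, y_clean = X[:2], y[:2]

# ...but the stored model is untouched: its weights still reflect the deleted row.
weights_after_deletion = weights
weights_retrained, *_ = np.linalg.lstsq(X_clean, y_clean, rcond=None)

print(weights_after_deletion)   # still shaped by the unauthorized work
print(weights_retrained)        # only a retrain from the clean corpus differs
```

The corpus and the learned parameters are separate artifacts; editing one does not edit the other, which is exactly why a reset means retraining, not file deletion.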

The Structural Requirements of Consent

Once a genuine reset occurs, the technical requirements of a permissioned system become unavoidable.

Authorized training corpus. Every recording, composition, and performance used for training must be included through affirmative permission. If unauthorized works remain, the model remains non-consensual.

Provenance at the work level. Each training input must be traceable to specific authorized recordings and compositions with auditable metadata identifying the scope of permission (a sketch of what such a record could look like follows this list).

Enforceable consent, including withdrawal. Authorization must allow meaningful limits and revocation, with systems capable of responding in ways that materially affect training and outputs.

Segregation of licensed and unlicensed data. Permissioned systems require strict internal separation to prevent contamination through shared embeddings or cross-trained models.

Transparency and auditability. Permission claims must be supported by documentation capable of independent verification. Transparency here is engineering documentation, not marketing copy.

These are not policy preferences. They are practical consequences of a consent-based architecture.
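To illustrate what the provenance and consent requirements above imply at the data layer, here is a minimal, hypothetical sketch of a per-work training manifest entry. The field names, identifiers, and the is_trainable gate are assumptions made for this example, not any platform's actual schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class TrainingManifestEntry:
    """One authorized work in the training corpus, with auditable consent metadata.
    All field names here are hypothetical; real schemas will differ."""
    work_id: str                        # e.g., ISRC for recordings, ISWC for compositions
    rights_holder: str                  # party that granted permission
    license_scope: str                  # e.g., "training-only" or "training-and-generation"
    consent_granted_at: datetime
    consent_revoked_at: Optional[datetime] = None
    source_uri: str = ""                # where the authorized copy was obtained

    def is_trainable(self, now: Optional[datetime] = None) -> bool:
        """A work may enter a training run only if consent exists and has not been revoked."""
        now = now or datetime.now(timezone.utc)
        if self.consent_revoked_at is not None and self.consent_revoked_at <= now:
            return False
        return self.consent_granted_at <= now

def authorized_corpus(manifest: list[TrainingManifestEntry]) -> list[TrainingManifestEntry]:
    """Filter the manifest before any ingestion; unlicensed or revoked works never enter."""
    return [entry for entry in manifest if entry.is_trainable()]

# Example: a single authorized recording entering the corpus (identifier invented).
entry = TrainingManifestEntry(
    work_id="ISRC-XX-XXX-25-00001",
    rights_holder="Example Rights Holder",
    license_scope="training-only",
    consent_granted_at=datetime(2025, 7, 1, tzinfo=timezone.utc),
)
print(entry.is_trainable())   # True until consent_revoked_at is set to a past date
```

Note what the gate cannot do: revocation only controls future training runs. Honoring it for a model that has already trained on the work requires retraining from the filtered corpus, which is the reset argued for above.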

The Economic Reality—and Upside—of Reset

Rebuilding models from scratch is expensive. Curating authorized data, retraining systems, implementing provenance, and maintaining compliance infrastructure all require significant investment. Not every actor will be able—or willing—to bear that cost. But that burden is not an argument against permission. It is the price of admission.

Crucially, that cost is also largely non-recurring. A platform that undertakes a true reset creates something scarce in the current AI market: a verifiably permissioned model with reduced litigation risk, clearer regulatory posture, and greater long-term defensibility. Over time, such systems are more likely to attract durable partnerships, survive scrutiny, and justify sustained valuation.

Throughout technological history, companies that rebuilt to comply with emerging legal standards ultimately outperformed those that tried to outrun them. Permissioned AI follows the same pattern. What looks expensive in the short term often proves cheaper than compounding legal uncertainty.

Architecture, Not Branding

This is why distinctions between “walled garden,” “opt-in,” or other permission-based labels tend to collapse under technical scrutiny. Whatever the terminology, a system grounded in authorization must satisfy the same engineering conditions—and must begin with the same reset. Branding may vary; infrastructure does not.

Permissioned AI is possible. But it is reconstructive, not incremental. It requires acknowledging that past models are incompatible with future claims of consent. It requires making the difficult choice to start over.

The irony is that legality is not the enemy of scale—it is the only path to scale that survives. Permission is not aspiration. It is architecture.

Schrödinger’s Training Clause: How Platforms Like WeTransfer Say They’re Not Using Your Files for AI—Until They Are

Tech companies want your content. Not just to host it, but to feed their pipelines: to train models, refine algorithms, and “improve services” in ways that just happen to lead to new commercial AI products. But as public awareness catches up, we’ve entered a new phase: deniable ingestion.

Welcome to the world of the Schrödinger’s training clause—a legal paradox where your data is simultaneously not being used to train AI and fully licensed in case they decide to do so.

The Door That’s Always Open

Let’s take the WeTransfer case. For a brief period in July 2025, their Terms of Service included an unmistakable clause: users granted them rights to use uploaded content to “improve the performance of machine learning models.” That language was direct. It caused backlash. And it disappeared.

Many mea culpas later, their TOS has been scrubbed clean of AI references. I appreciate the sentiment, really I do. But (and there’s always a but) the core license hasn’t changed. It’s still:

– Perpetual

– Worldwide

– Royalty-free

– Transferable

– Sub-licensable

They’ve simply returned the problem clause to its quantum box. No machine learning references. But nothing that stops it either.

A Clause in Superposition

Platforms like WeTransfer—and others—have figured out the magic words: Don’t say you’re using data to train AI. Don’t say you’re not using it either. Instead, claim a sweeping license to do anything necessary to “develop or improve the service.”

That vague phrasing allows future pivots. It’s not a denial. It’s a delay. And to delay is to deny.

That’s what makes it Schrödinger’s training clause: Your content isn’t being used for AI. Unless it is. And you won’t know until someone leaks it, or a lawsuit makes discovery public.

The Scrape-Then-Scrub Scenario

Let’s reconstruct what could have happened (not saying it did happen, just could have), following the timeline in The Register:

1. Early July 2025: WeTransfer silently updates its Terms of Service to include AI training rights.

2. Users continue uploading sensitive or valuable content.

3. [Somebody’s] AI systems quickly ingest that data under the granted license.

4. Public backlash erupts mid-July.

5. WeTransfer removes the clause, but to my knowledge never revokes the license retroactively or promises to delete what was scraped. In fact, here’s their statement, which includes this non-denial denial: “We don’t use machine learning or any form of AI to process content shared via WeTransfer.” OK, that’s nice, but that wasn’t the question. And if their TOS was so clear, why the amendment in the first place?

Here’s the Potential Legal Catch

Even though WeTransfer removed the clause, any ingestion that occurred during the ‘AI clause window’ is arguably still valid under the terms then in force. As far as I know, they haven’t promised:

– To destroy any trained models

– To purge training data caches

– Or to prevent third-party partners from retaining data accessed lawfully at the time

What Would ‘Undoing’ Scraping Require?

– Audit logs to track what content was ingested and when (a sketch of such a check follows this list)

– Rollback or retraining of any models trained on user data

– Retroactive license revocation and sub-license termination

None of this has been offered, as far as I can tell.
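For a sense of what even the first item would involve, here is a minimal, hypothetical sketch: filtering an ingestion log for events that fall inside the ‘AI clause window.’ The log format, field names, and dates are invented for illustration and describe no actual WeTransfer system.

```python
from datetime import datetime, timezone

# Hypothetical window during which the AI-training clause was in force (dates assumed).
CLAUSE_START = datetime(2025, 7, 1, tzinfo=timezone.utc)
CLAUSE_END = datetime(2025, 7, 15, tzinfo=timezone.utc)

# Invented ingestion-log records: (file_id, ingested_at).
ingestion_log = [
    ("file-001", datetime(2025, 6, 28, tzinfo=timezone.utc)),
    ("file-002", datetime(2025, 7, 9, tzinfo=timezone.utc)),
    ("file-003", datetime(2025, 7, 20, tzinfo=timezone.utc)),
]

# Any genuine undo effort would have to start by identifying exactly this set,
# then trace it into training caches, models, and sub-licensees.
ingested_during_window = [
    file_id
    for file_id, ingested_at in ingestion_log
    if CLAUSE_START <= ingested_at < CLAUSE_END
]

print(ingested_during_window)   # ['file-002']
```

Without logs like these, even a well-intentioned platform cannot say which files were exposed, let alone unwind what was learned from them.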

What ‘We Don’t Train on Your Data’ Actually Means

When companies say, “we don’t use your data to train AI,” ask:

– Do you have the technical means to prevent that?

– Is it contractually prohibited?

– Do you prohibit future sublicensing?

– Can I audit or opt out at the file level?

If the answer to any of those questions is “no,” then the denial is toothless.

How Creators Can Fight Back

1. Use platforms that require active opt-in for AI training.

2. Encrypt files before uploading, so the platform only ever holds ciphertext it cannot read (a minimal sketch follows this list).

3. Include counter-language in contracts or submission terms:

   “No content provided may be used, directly or indirectly, to train or fine-tune machine learning or artificial intelligence systems, unless separately and explicitly licensed for that purpose in writing” or something along those lines.

4. Call it out. If a platform uses Schrödinger’s language, name it. The only thing tech companies fear more than litigation is transparency.
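As a concrete version of the second recommendation above, here is a minimal sketch of client-side encryption before upload, using the third-party cryptography package (pip install cryptography). The filenames are placeholders; the point is that the platform only ever receives ciphertext it cannot read, let alone train on.

```python
from cryptography.fernet import Fernet

# Generate the key once and keep it locally; never upload it alongside the file.
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt the file before it ever touches the transfer platform.
with open("master_take.wav", "rb") as f:            # placeholder filename
    ciphertext = fernet.encrypt(f.read())

with open("master_take.wav.enc", "wb") as f:
    f.write(ciphertext)

# Upload only master_take.wav.enc. The recipient, holding the key you share
# through a separate channel, decrypts it on their end:
with open("master_take.wav.enc", "rb") as f:
    plaintext = fernet.decrypt(f.read())
```

Encryption does not fix bad terms of service, but it makes the sweeping license practically worthless for training, whatever the fine print says.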

What Is to Be Done?

The most dangerous clauses aren’t the ones that scream “AI training.” They’re the ones that whisper, “We’re just improving the service.”

If you’re a creative, legal advisor, or rights advocate, remember: the future isn’t being stolen with force. It’s being licensed away in advance, one unchecked checkbox at a time.

And if a platform’s only defense is “we’re not doing that right now”—that’s not a commitment. That’s a pause.

That’s Schrödinger’s training clause.