TikTok’s Divestment Ouroboros: How the “Sale” Changed the Optics but Not the Leverage

When the TikTok USDS deal was announced under the Protecting Americans from Foreign Adversary Controlled Applications Act, it was framed as a clean resolution to years of national-security concerns expressed by many in the US. TikTok was to be reborn as a U.S. company, with U.S. control, and foreign influence neutralized. But if you look past the press language and focus on incentives, ownership, and law, a different picture emerges.

TikTok’s “forced sale” under PAFACA (not to be confused with COVFEFE) traces back to years of U.S. national-security concern that TikTok’s owner ByteDance—one of the People’s Republic of China’s biggest tech companies, founded by Zhang Yiming, who is among China’s richest men and a self-described member of the ruling Chinese Communist Party—could be compelled under PRC law to share data or to allow the CCP to influence the platform’s operations. TikTok and its lobbyists repeatedly attempted to deflect the attention of regulators through measures like U.S. data localization and third-party oversight (e.g., “Project Texas”). However, lawmakers concluded that aggressive structural separation—not promises that nobody was buying—was needed. Congress then passed, and President Biden signed, legislation requiring divestiture of “foreign adversary controlled” apps like TikTok on pain of a total U.S. ban. Facing app-store and infrastructure cutoff risk, TikTok and ByteDance pursued a restructuring to keep U.S. operations alive and maintain an exit to U.S. financial markets.

Lawmakers’ concerns were real and obvious. By trading on social media addiction, TikTok can compile rich behavioral profiles—especially on minors—by combining what users watch, like, share, search, and linger on, and who they interact with, along with device identifiers, network data, and (where permitted) location signals. At scale, that kind of telemetry can be used to infer vulnerabilities and susceptibility to targeting. For the military, the concern is not only that “TikTok tracks troop movements,” but that social media posts and aggregated location and social-graph signals across hundreds of millions of users could reveal patterns around bases, deployments, routines, or sensitive communities—hence warnings that harvested information could “possibly even reveal troop movements,” and hence the longstanding bans on TikTok on government-issued devices.

These concerns shot through government circles while the Tok became ubiquitous and carefully engineered social media addiction gripped the US, and indeed the West. (Just this week, TikTok settled its way out of the biggest social media litigation in history.) Congress was very concerned, and with good reason—Rep. Mike Gallagher demanded that TikTok “Break up with the Chinese Communist Party (CCP) or lose access to your American users.” Rep. Cathy McMorris Rodgers said the bill would “prevent foreign adversaries, such as China, from surveilling and manipulating the American people.” Sen. Pete Ricketts warned, “If the Chinese Communist Party is refusing to let ByteDance sell TikTok… they don’t want [control of] those algorithms coming to America.”

And of course, who can forget a classic Marsha line from Senator Marsha Blackburn: “I don’t know how to say ‘Bless your heart’ in Mandarin, but in English it’s ‘we heard you were opening a TikTok headquarters in Nashville and what you’re probably going to find is that the welcome mat isn’t going to be rolled out for you in Nashville.’”

So there’s that.

These exploits have real strategic value. Given the CCP’s interest in undermining US interests and especially in blunting the military, the concern is not necessarily that “the CCP tracks troop movements” directly (although who really knows), but that the aggregated location and social-graph signals described above could reveal the patterns that matter. You know, kind of like if you flew a balloon across CONUS military bases.

It must also be said that when you watch TikTok’s poor performance before Congress at hearings, it really came down to a simple question of trust. I think nobody believed a word they said, and the TikTok witnesses exuded a kind of arrogance that simply does not work when Congress has the bit in its teeth. Full disclosure: I have never believed a word they said and have always been troubled that artists were unwittingly leading their fans to the social media abattoir.

I’ve been writing about TikTok for years, and not because it was fashionable or politically easy. After a classic MTP-style presentation at the MusicBiz conference in 2020 where I laid out all the issues with TikTok and the CCP, somehow I never got invited back. Back in 2020, I warned that “you don’t need proof of misuse to have a national security problem—you only need legal leverage and opacity.” I also argued that “data localization doesn’t solve a governance problem when the parent company [ByteDance] remains subject to foreign national security law,” and that focusing on the location of data storage missed “the more important question of who controls the system that decides what people see.” The forced sale didn’t vindicate any one prediction so much as confirm the basic point: structure matters more than assurances, and control matters more than rhetoric. I still have that concern after all the sound and fury.

There is also a legitimate constitutional concern with PAFACA: a government-mandated divestiture risks resembling a Fifth Amendment taking if structured to coerce a sale without just compensation. PAFACA deserved serious scrutiny even given the legitimate national security concerns. Had the dust settled with the CCP suing the U.S. government under a takings theory, it would have been both too cute by half and entirely on-brand—an example of the CCP’s “unrestricted warfare” approach to lawfare, exploiting Western legal norms strategically. (The CCP’s leading military strategy doctrine, Unrestricted Warfare, casts terrorism (and “terror-like” economic and information attacks such as TikTok’s potential use) as part of a spectrum of asymmetric methods that can weaken a technologically superior power like the US.)

Indeed, TikTok did challenge the divest-or-ban statute in the Supreme Court and mounted a SOPA-style campaign that largely failed. TikTok argued that a government-mandated forced sale violated the First Amendment rights of its users and exceeded Congress’s national-security authority. The Supreme Court unanimously upheld PAFACA, concluding that Congress permissibly targeted foreign-adversary control for national-security reasons rather than suppressing speech, and that the resulting burden on expression did not violate the First Amendment. The case ultimately underscored how far national-security rationales can narrow judicial appetite to second-guess the political branches in foreign-adversary disputes, no matter how many high-priced lawyers, lobbyists, and spin doctors line up at your table. And, boy, did they have them. I think at one point close to half the shilleries in DC were on the PRC payroll.

In that sense, the TikTok deal itself may prove to be another illustration of Master Sun’s maxim about winning without fighting, i.e., achieving strategic advantage not through open confrontation, but by shaping the terrain, the rules, and the opponent’s choices in advance—and perhaps most importantly in this case…deception.

But the deal we got is the deal we have, so let’s see what we actually achieved (or how bad we got hosed this time). As I often say, it’s a damn good thing we never let another MTV build a business on our backs.

The Three Pillars of TikTok

TikTok USDS is the U.S.-domiciled parent holding company for TikTok’s American operations, created to comply with the divest-or-ban law. It is majority owned by U.S. investors, with ByteDance retaining a non-controlling minority stake (reported around 19.9%) and licensing core recommendation technology to the U.S. business. (Under U.S. GAAP, 20%+ ownership is a common rebuttable presumption of “significant influence,” which can trigger less favorable accounting and more scrutiny of the relationship. Staying below 20% helps keep the stake looking purely passive, which is kind of a joke considering ByteDance still owns the key asset. And we still have to ask whether ByteDance (or the CCP) has any special voting rights (“golden share”), board control, dual-class stock, etc.)
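For the accounting-curious, here is a minimal sketch of why 19.9% is the magic number. The thresholds reflect the commonly cited presumptions under ASC 323 (equity method) and ASC 810 (consolidation), both rebuttable by exactly the kinds of facts at issue here (board seats, golden shares, technology dependence); the function and numbers are my own illustration, not a legal or accounting analysis:

```python
# Minimal sketch of the rebuttable ownership-percentage presumptions under
# U.S. GAAP (ASC 323 equity method; ASC 810 consolidation). The presumptions
# can be rebutted by facts like board seats, golden shares, or technology
# dependence, which are exactly the open questions about TikTok USDS.

def presumed_treatment(stake_pct: float) -> str:
    """Default accounting presumption for an equity stake, absent rebuttal."""
    if stake_pct > 50:
        return "consolidation (control presumed)"
    if stake_pct >= 20:
        return "equity method (significant influence presumed)"
    return "passive investment (no significant influence presumed)"

print(presumed_treatment(19.9))  # the reported ByteDance stake: presumed passive
print(presumed_treatment(20.0))  # one tick higher and the presumption flips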

The deal appears to rest on three pillars—and taken together, they point to something closer to an ouroboros than a divestment: the structure consumes itself, leaving ByteDance, and by extension the PRC, in a position that is materially different on paper but strikingly similar in practice.

Pillar One: ByteDance Keeps the Crown Jewel

The first and most important point is the simplest: ByteDance retains ownership of TikTok’s recommendation algorithm.

That algorithm is not an ancillary asset. It is TikTok’s product. Engagement, ad pricing, cultural reach, and political concern all flow from it. Selling TikTok without selling the algorithm is like selling a car without the engine and calling it a divestiture because the buyer controls the steering wheel.

Public reporting strongly suggests the solution was not a sale of the algorithm, but a license or controlled use arrangement. TikTok USDS may own U.S.-specific “tweaks”—content moderation parameters, weighting adjustments, compliance filters—but those sit on top of a core system ByteDance still owns and controls.

That distinction matters, because ownership determines who ultimately controls:

  • architectural changes,
  • major updates,
  • retraining methodology,
  • and long-term evolution of the system.

In other words, the cap table changed, but the switch did not necessarily move.

Pillar Two: IPO Optionality Without Immediate Disclosure

The second pillar is liquidity. ByteDance did not fight this battle simply to keep operating TikTok in the U.S.; it fought to preserve access to an exit in US financial markets.

The TikTok USDS structure clearly keeps open a path to an eventual IPO. Waiting a year or two is not a downside. There is a crowded IPO pipeline already—AI platforms, infrastructure plays, defense-adjacent tech—and time helps normalize the structure politically and operationally.

But here’s the catch: an IPO collapses ambiguity.

A public S-1 would have to disclose, in plain English:

  • who owns the algorithm,
  • whether TikTok USDS owns it or licenses it,
  • the material terms of any license,
  • and the risks associated with dependence on a foreign related party.

This is where old Obama-era China-listing tricks no longer work. Based on what I’ve read, TikTok USDS would likely be a U.S. issuer with a U.S.-inspectable auditor. ByteDance can’t lean on the old HFCAA/PCAOB opacity playbook, because HFCAA is about audit access—not about shielding a related-party licensor from scrutiny.

ByteDance surely knows this. Which is why the structure buys time, not relief from transparency. The IPO is possible—but only when the market is ready to price the risk that the politics are currently papering over.

Pillar Three: PRC Law as the Ultimate Escape Hatch

The third pillar is the quiet one, but it may be the most consequential: PRC law as an external constraint. As long as ByteDance owns the algorithm, PRC law is always waiting in the wings. Those laws include:

  • Export-control rules on recommendation algorithms.
  • Data security and cross-border transfer regimes.
  • National security and intelligence laws that impose duties on PRC companies and citizens.

Together, they form a universal answer to every hard question:

  • Why can’t the algorithm be sold? PRC export controls.
  • Why can’t certain technical details be disclosed? PRC data laws.
  • Why can’t ByteDance fully disengage? PRC legal obligations.

This is not hypothetical. It’s the same concern that animated the original TikTok controversy, just reframed through contracts instead of ownership.

So while TikTok USDS may be auditable, governed by a U.S. board, and compliant with U.S. operational rules, the moment oversight turns upstream—toward the algorithm, updates, or technical dependencies—PRC law reenters the picture.

The result is a U.S. company that is transparent at the edges and opaque at the core. My hunch is that this sovereign control risk is clearly spelled out in any license document and will get disclosed in an IPO.

Putting It Together: Divestment of Optics, Not Control

Taken together, the three pillars tell a consistent story:

  • ByteDance keeps the algorithm.
  • ByteDance gets paid and retains an exit.
  • PRC law remains available to constrain transfer, disclosure, or cooperation.
  • U.S. regulators oversee the wrapper, not the engine.

That does not mean ByteDance is in exactly the same legal position as before. Governance and ownership optics have changed. Some forms of U.S. oversight are real. But in terms of practical control leverage, ByteDance—and by extension Beijing—may be uncomfortably close to where they started.

The foreign control problem that launched the TikTok saga was never just about equity. It was about who controls the system that shapes attention, culture, and information flow. If that system remains owned upstream, the rest is scaffolding.

The Ouroboros Moment

This is why Congress is likely to be furious once the implications sink in.

The story began with concerns about PRC control.
It moved through years of negotiation and political theater.
It ends with an “approved structure” that may leave PRC leverage intact—just expressed through licenses, contracts, and sovereign law rather than a majority stake.

The divestment eats its own tail.

Or put more bluntly: the sale may have changed the paperwork, but it did not necessarily change who can say no when it matters most. And that’s control.

As we watch the People’s Liberation Army practicing its invasion of Taiwan, it’s not rocket science to ask how all this will look if the PRC invades Taiwan tomorrow and America comes to Taiwan’s defense. In a U.S.–PRC shooting war, TikTok USDS would likely face either a rapid U.S. distribution ban on national-security grounds (already blessed by SCOTUS), a forced clean-room severance from ByteDance’s algorithm and services, or an operational breakdown if PRC law or wartime measures disrupt the licensed technology the platform depends on.

The TikTok “sale” looks less like a divestiture of control than a divestiture of optics. ByteDance may have reduced its equity stake and ceded governance formalities, but if it retained ownership of the recommendation algorithm and the U.S. company remains dependent on ByteDance by license, then ByteDance—and by extension the CCP—can remain in a largely similar control position in practice.

TikTok USDS may change the cap table, but it doesn’t necessarily change the sovereign. As long as ByteDance owns the algorithm and PRC law can be invoked to restrict any transfer, disclosure, or cooperation that lacks CCP approval, the end state risks looking eerily familiar: a U.S.-branded wrapper around a system Beijing can still influence at the critical junctions. The whole saga starts with bitter complaints in Congress about “foreign control,” ends with an “approved structure,” and largely lands right back where it began—an ouroboros of governance optics swallowing itself.

Surely I’m missing something.

Kid Rock Takes the Hill: What the Senate’s Ticketing Hearing Really Signals

On Wednesday, January 28, 2026, the U.S. Senate Commerce Committee’s Subcommittee on Consumer Protection, Technology, and Data Privacy will convene a hearing titled “Examining the Impact of Ticket Sales Practices and Bot Resales on Concert Fans.” At the center of the witness list is Kid Rock, who bridges fan frustration, executive action, and emerging legislative priorities.

From the Oval Office to the Senate Hearing Room

The dysfunction in the U.S. ticketing market has rarely been stated more plainly than in the Oval Office on March 31, 2025, when President Trump signed an Executive Order targeting scalping and speculative ticket abuse. Standing alongside him, Kid Rock articulated what millions of fans and artists already felt: “I think this is a great first step…. I would love down the road if there’d be legislation that we could actually put a cap on the resale of tickets.”

President Trump captured the broken social contract in stark terms: “I see the artists… they go out with a $100 ticket, and it sells for $2,000 the following night.” His emphasis was not merely about price, but about the erosion of the artist-fan bond: “Bob is more interested in the fans and the people that are having to pay crazy prices.”

That moment matters now because it set a policy through-line: unchecked resale markets and bot-driven harvesting have turned access to live events into a rigged system. The Senate hearing is where that narrative meets legislative scrutiny.

A Broader Landscape: States Taking Ticketing Reform into Their Own Hands

While Congress now faces the issue nationally, state lawmakers haven’t waited. A growing number of states have responded to speculative ticketing and resale abuses with enforceable reforms.

Most recently, the California legislature took up AB 1349, which goes further than many prior state laws by empowering local authorities with enforceable tools to govern resale practices, ticket transparency, and anti-scalping enforcement at the local level. The bill reflects the emerging view that ticketing dysfunction is not just a pricing problem—it’s a governance gap that undermines the free market, local cultural economies and consumer trust. (California AB 1349 and the Case for Enforceable Local Ticketing Reform.)

These state experiments matter because they show how legislative remedies—if properly structured—can move beyond symbolic rhetoric to real market impact by regulating resellers like StubHub. They also heighten the stakes for federal action: if states can innovate meaningful protections, why shouldn’t the national marketplace?

The Witnesses and What They Represent

Kid Rock’s testimony brings the artist-fan perspective directly into the policy forum—where ticketing has often been treated as a narrow consumer pricing issue, rather than a systemic failure of market design.

The hearing also includes testimony from policy analysts and independent venue advocates. These voices underscore two critical points:

  • All-in pricing alone isn’t sufficient if speculative trading, bots, and opaque distribution channels continue to dominate.
  • Smaller venues and promoters lack the leverage and infrastructure to safeguard fair access, meaning reform must account for market structure, not just disclosure.

Together, the witnesses frame the issue as one not only about price, but about who gets to participate in culture on fair terms.

A Moment of Reckoning for Ticketing Policy

This hearing isn’t just another Congressional event—it’s the logical next step in a policy arc that runs from state legislatures, to executive recognition, to national scrutiny. The message is that the system as it exists today privileges speculation over connection, and intermediaries over communities.

Whether the Senate responds with binding reforms or more symbolic gestures will determine whether this hearing marks a genuine inflection point, but President Trump has made his policy clear, and Kid Rock can tell you all about it.

The Devil’s Greatest Trick: Ro Khanna’s “Creator Bill of Rights” Is a Political Shield, Not a Charter for Creative Labor

La plus belle des ruses du Diable est de vous persuader qu’il n’existe pas! (“The greatest trick the Devil ever pulled was convincing the world he didn’t exist.”)

Charles Baudelaire, Le Joueur généreux

Ro Khanna’s so‑called “Creator Bill of Rights” is being sold as a long‑overdue charter for fairness in the digital economy—you know, like for gig workers. In reality, it functions as a political shield for Silicon Valley platforms: a non‑binding, influencer‑centric framework built on a false revenue‑share premise that bypasses child labor, unionized creative labor, professional creators, non‑featured artists, and the central ownership and consent crises posed by generative AI. 

Mr. Khanna’s resolution treats transparency as leverage, consent as vibes, and platform monetization as deus ex machina-style natural law of the singularity—while carefully avoiding enforceable rights, labor classification, copyright primacy, artist consent for AI training, work‑for‑hire abuse, and real remedies against AI labs for artists. What flows from his assumptions is not a “bill of rights” for creators, but a narrative framework designed to pacify the influencer economy and legitimize platform power at the exact moment that judges are determining that creative labor is being illegally scraped, displaced, and erased by AI leviathans including some publicly traded companies with trillion-dollar market caps.

The First Omission: Child Labor in the Creator Economy

Rep. Khanna’s newly unveiled “Creator Bill of Rights” has been greeted with the kind of headlines Silicon Valley loves: Congress finally standing up for creators, fairness, and transparency in the digital economy. But the very first thing it doesn’t do should set off alarm bells. The resolution never meaningfully addresses child labor in the creator economy, a sector now infamous for platform-driven exploitation of minors through user-generated content, influencer branding, algorithmic visibility contests, and monetized childhood. (Wikipedia is Exhibit A, Facebook Exhibit B, YouTube Exhibit C and Instagram Exhibit D.)

There is no serious discussion of child worker protections and all that comes with them, often under state law: working-hour limits, trust accounts, consent frameworks, or the psychological and economic coercion baked into platform monetization systems. For a document that styles itself as a “bill of rights,” that omission alone is disqualifying. But perhaps it is understandable given AI Viceroy David Sacks’ obsession with blocking enforcement of state laws that “impede” AI.

And it’s not an isolated miss. Once you read Khanna’s framework closely, a pattern emerges. This isn’t a bill of rights for creators. It’s a political shield for platforms that is built on a false economic premise, framed around influencers, silent on professional creative labor, evasive on AI ownership and training consent, and carefully structured to avoid enforceable obligations.

The Foundational Error: Treating Revenue Share as Natural Law That Justifies a Stream Share Threshold

The foundational error appears right at the center of the resolution: its uncritical embrace of the Internet’s coin of the realm, revenue sharing. Khanna calls for “clear, transparent, and predictable revenue-sharing terms” between platforms and creators. That phrase sounds benign, even progressive. But it quietly locks in the single worst idea anyone ever had for royalty economics: big-pool platform revenue share, an idea that is being rejected by pretty much everyone except Spotify with its stream share threshold. In case Mr. Khanna didn’t get the memo, artist-centric is the new new thing.

Revenue sharing treats creators as participants in a platform monetization program, not as rights-holders. You know, “partners.” Artists don’t get a share of Spotify stock; they get a “revenue share” because they’re “partnering” with Spotify. If that’s how Spotify treats “partners”….

Under that revenue share model, the platform defines what counts as revenue, what gets excluded, how it’s allocated, which metrics matter, and how the rules change. The platform controls all the data. The platform controls the terms. And the platform retains unilateral power to rewrite the deal. Hey “partner,” that’s not compensation grounded in intellectual property or labor rights. It’s a dodge grounded in platform policy.

We already know how this story ends. Big-pool revenue share regimes hide cross-subsidies, reward algorithm gaming over quality, privilege viral noise over durable cultural work, and collapse bargaining power into opaque market share payments of microscopic proportions. Revenue share deals destroy price signals, hollow out licensing markets, and make creative income volatile and non-forecastable. This is exceptionally awful for songwriters, and nobody can tell a songwriter today what that burger on Tuesday will actually bring.

An advertising revenue-share model penalizes artists because they receive only a tiny fraction of the ads served against their own music, while platforms like Google capture roughly half of the total advertising revenue generated across the entire network. Naturally they love it.

Revenue shares of advertising are the core economic pathology behind what happened to music, journalism, and digital publishing over the last fifteen years. As we have seen from Spotify’s stream share threshold, a platform can unilaterally decide to cut off payments at any time for any absurd reason and get away with it. And Khanna’s resolution doesn’t challenge that logic. It blesses it.
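To make the pathology concrete, here is a toy model of big-pool pro-rata allocation with two of the features criticized above: an off-the-top platform cut and a Spotify-style minimum-stream threshold. Every number, name, and parameter is hypothetical; this sketches the mechanism, not any platform’s actual payout formula:

```python
def big_pool_payout(streams: dict[str, int], ad_revenue: float,
                    platform_cut: float = 0.5, min_streams: int = 1000) -> dict[str, float]:
    """Distribute the post-cut pool pro rata among tracks above the threshold."""
    pool = ad_revenue * (1 - platform_cut)   # platform takes its share off the top
    eligible = {t: n for t, n in streams.items() if n >= min_streams}
    if not eligible:                         # avoid division by zero in the toy case
        return {t: 0.0 for t in streams}
    total = sum(eligible.values())
    # Sub-threshold tracks earn nothing, although their streams still grew the pie.
    return {t: pool * n / total if t in eligible else 0.0 for t, n in streams.items()}

streams = {"viral_hit": 9_000_000, "working_songwriter": 900}
print(big_pool_payout(streams, ad_revenue=100_000))
# {'viral_hit': 50000.0, 'working_songwriter': 0.0}
# The small track's plays monetized ads, but its payout is zero by platform policy,
# a rule the platform can rewrite unilaterally at any time.
```

Note that every lever in this sketch (the cut, the threshold, the definition of the pool) belongs to the platform, which is exactly the point.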

He doesn’t say creators are entitled to enforceable royalties tied to uses of their work at rates set by the artist. He doesn’t say there should be statutory floors, audit rights, underpayment penalties, nondiscrimination rules, or retaliation protections. He doesn’t say platforms should be prohibited from unilaterally redefining the pie. He says let’s make the revenue share more “transparent” and “predictable.” That’s not a power shift. That’s UX optimization for exploitation.

This Is an Influencer Bill, Not a Creator Bill

The second fatal flaw is sociological. Khanna’s resolution is written for the creator economy, not the creative economy.

The “creator” in Khanna’s bill is a YouTuber, a TikToker, a Twitch streamer, a podcast personality, a Substack writer, a platform-native entertainer (but no child labor protection). Those are real jobs, and the people doing them face real precarity. But they are not the same thing as professional creative labor. They are usually not professional musicians, songwriters, composers, journalists, photographers, documentary filmmakers, authors, screenwriters, actors, directors, designers, engineers, visual artists, or session musicians. They are not non-featured performers. They are not investigative reporters. They are not the people whose works are being scraped at industrial scale to train generative AI systems.

Those professional creators are workers who produce durable cultural goods governed by copyright, contract, and licensing markets. They rely on statutory royalties, collective bargaining, residuals, reuse frameworks, audit rights, and enforceable ownership rules. They face synthetic displacement and market destruction from AI systems trained on their work without consent. Khanna’s resolution barely touches any of that. It governs platform participation. It does not govern creative labor. It’s not that influencers shouldn’t be able to rely on legal protections; it’s that if you’re going to have a bill of rights for creators, it should include all creators, and very often the needs are different. Starting with collective bargaining and unions.

The Total Bypass of Unionized Labor

Nowhere is this shortcoming more glaring than in the complete bypass of unionized labor. The framework lives in a parallel universe where SAG-AFTRA, WGA, DGA, IATSE, AFM, Equity, newsroom unions, residuals, new-use provisions, grievance procedures, pension and health funds, minimum rates, credit rules, and collective bargaining simply do not exist. That entire legal architecture is invisible.  And Khanna’s approach could easily roll back the gains on AI protections that unions have made through collective bargaining.

Which means the resolution is not attempting to interface with how creative work actually functions in film, television, music, journalism, or publishing. It is not creative labor policy. It is platform fairness rhetoric.

Invisible Labor: Non-Featured Artists and the People the Platform Model Erases

The same erasure applies to non-featured artists and invisible creative labor. Session musicians, backup singers, supporting actors, dancers, crew, editors, photographers on assignment, sound engineers, cinematographers — these people don’t live inside platform revenue-share dashboards. They are paid through wage scales, reuse payments, residuals, statutory royalty regimes, and collective agreements.

None of that exists in Khanna’s world. His “creator” is an account, not a worker.

AI Without Consent Is Not Accountability

The AI plank in the resolution follows the same pattern of rhetorical ambition and structural emptiness. Khanna gestures at transparency, consent, and accountability for AI and synthetic media. But he never defines what consent actually means.

Consent for training? For style mimicry? For voice cloning? For archival scraping of journalism and music catalogs? For derivative outputs? For model fine-tuning? For prompt exploitation? For replacement economics?

The bill carefully avoids the training issue. Which is the whole issue.

A real AI consent regime would force Congress to confront copyright primacy, opt-in licensing, derivative works, NIL rights, data theft, model ownership, and platform liability. Khanna’s framework gestures at harms while preserving the industrial ingestion model intact.

The Ownership Trap: Work-for-Hire and AI Outputs

This omission is especially telling. Nowhere does Khanna say platforms may not claim authorship or ownership of AI outputs by default. Nowhere does he say AI-assisted works are not works made for hire. Nowhere does he say users retain rights in their contributions and edits. Nowhere does he say WFH boilerplate cannot be used to convert prompts into platform-owned assets.

That silence is catastrophic.

Right now, platforms are already asserting ownership contractually, claiming assignments of outputs, claiming compilation rights, claiming derivative rights, controlling downstream licensing, locking creators out of monetization, and building synthetic catalogs they own. Even though U.S. law says purely AI-generated content isn’t copyrightable absent human authorship, platforms can still weaponize terms of service, automated enforcement, and contractual asymmetry to create “synthetic ownership” or “practical control.” Khanna’s resolution says nothing about any of it.

Portable Benefits as a Substitute for Labor Rights

Then there’s the portable-benefits mirage. Portable benefits sound progressive. They are also the classic substitute for confronting misclassification. So first of all, Khanna starts out saying that “gig workers” in the creative economy don’t get health care—aside from the union health plans, I guess. But then he starts in with the portable benefits mirage. So which is it? Surely he doesn’t mean nothing from nothing leaves nothing?

If you don’t want to deal with whether creators are actually employees, whether platforms owe payroll taxes, whether wage-and-hour law applies, whether unemployment insurance applies, whether workers’ comp applies, whether collective bargaining rights attach, or…wait for it…stock options apply…you propose portable benefits without dealing with the reality that there are no benefits. You preserve contractor status. You socialize costs and privatize upside. You deflect labor-law reform and health insurance reform for that matter. You look compassionate. And you change nothing structurally.

Khanna’s framework sits squarely in that tradition of nothing from nothing leaves nothing.

A Non-Binding Resolution for a Reason

The final tell is procedural. Khanna didn’t introduce a bill. He introduced a non-binding resolution.

No enforceable rights. No regulatory mandates. No private causes of action. No remedies. No penalties. No agency duties. No legal obligations.

This isn’t legislation. It’s political signaling.

What This Really Is: A Political Shield

Put all of this together and the picture becomes clear. Khanna’s “Creator Bill of Rights” is built on a false revenue-share premise. It is framed around influencers. It bypasses professional creators. It bypasses unions. It bypasses non-featured artists. It bypasses child labor. It bypasses training consent. It bypasses copyright primacy. It bypasses WFH abuse. It bypasses platform ownership grabs. It bypasses misclassification. It bypasses enforceability. I give you…Uber.

It doesn’t fail because it’s hostile to creators; it fails because it is indifferent to them. It fails because it redefines “creator” downward until every hard political and legal question disappears.

And in doing so, it functions as a political shield for the very platforms headquartered in Khanna’s district.

When the Penny Drops

Ro Khanna’s “Creator Bill of Rights” isn’t a rights charter.

It’s a narrative framework designed to stabilize the influencer economy, legitimize platform compensation models, preserve contractor status, soften AI backlash, avoid copyright primacy, avoid labor-law reform, avoid ownership reform, and avoid real accountability.

It treats transparency as leverage. It treats consent as vibes. It treats revenue share as natural law. It treats AI as branding. It treats creative labor as content. It treats platforms as inevitable.

And it leaves out the people who are actually being scraped, displaced, devalued, erased, and replaced: musicians, journalists, photographers, actors, directors, songwriters, composers, engineers, non-featured performers, visual artists, and professional creators.

If Congress actually wants a bill of rights for creators, it won’t start with influencer UX and non-binding resolutions. It will start with enforceable intellectual-property rights, training consent, opt-in regimes, audit rights, statutory floors, collective bargaining, exclusion of AI outputs from work-for-hire, limits on platform ownership claims, labor classification clarity, and real remedies.

Until then, this isn’t a bill of rights.

It’s a press release with footnotes.

Grassroots Revolt Against Data Centers Goes National: Water Use Now the Flashpoint

Over the last two weeks, grassroots opposition to data centers has moved from sporadic local skirmishes to a recognizable national pattern. While earlier fights centered on land use, noise, and tax incentives, the current phase is more focused and more dangerous for developers: water.

Across multiple states, residents are demanding to see the “water math” behind proposed data centers—how much water will be consumed (not just withdrawn), where it will come from, whether utilities can actually supply it during drought conditions, and what enforceable reporting and mitigation requirements will apply. In arid regions, water scarcity is an obvious constraint. But what’s new is that even in traditionally water-secure states, opponents are now framing data centers as industrial-scale consumptive users whose needs collide directly with residential growth, agriculture, and climate volatility.

The result: moratoria, rezoning denials, delayed hearings, task forces, and early-stage organizing efforts aimed at blocking projects before entitlements are locked in.

Below is a snapshot of how that opposition has played out state by state over the last two weeks.

State-by-State Breakdown

Virginia  

Virginia remains ground zero for organized pushback.

Botetourt County: Residents confronted the Western Virginia Water Authority over a proposed Google data center, pressing officials about long-term water supply impacts and groundwater sustainability.  

Hanover County (Richmond region): The Planning Commission voted against recommending rezoning for a large multi-building data center project.  

State Legislature: Lawmakers are advancing reform proposals that would require water-use modeling and disclosure.

Georgia  

Metro Atlanta / Middle Georgia: Local governments’ recruitment of hyperscale facilities is colliding with resident concerns.  

DeKalb County: An extended moratorium reflects a pause-and-rewrite-the-rules strategy.  

Monroe County / Forsyth area: Data centers have become a local political issue.

Arizona  

The state has moved to curb groundwater use in rural basins via new regulatory designations requiring tracking and reporting.  

Local organizing frames AI data centers as unsuitable for arid regions.

Maryland  

Prince George’s County (Landover Mall site): Organized opposition centered on environmental justice and utility burdens.  

Authorities have responded with a pause/moratorium and a task force.

Indiana  

Indianapolis (Martindale-Brightwood): Packed rezoning hearings forced extended timelines.  

Greensburg: Overflow crowds framed the fight around water-user rankings.

Oklahoma  

Luther (OKC metro): Organized opposition before formal filings.

Michigan  

Broad local opposition with water and utility impacts cited.  

State-level skirmishes over incentives intersect with water-capacity debates.

North Carolina  

Apex (Wake County area): Residents object to strain on electricity and water.

Wisconsin & Pennsylvania 

Corporate messaging shifts in response to opposition; Microsoft acknowledged infrastructure and water burdens.

The Through-Line: “Show Us the Water Math”

Lawrence of Arabia: The Well Scene

Across these states, the grassroots playbook has converged:

  • Pack the hearing.
  • Demand water-use modeling and disclosure.
  • Attack rezoning and tax incentives.
  • Force moratoria until enforceable rules exist.

Residents are demanding hard numbers: consumptive losses, aquifer drawdown rates, utility-system capacity, drought contingencies, and legally binding mitigation.
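Here is what even a back-of-the-envelope version of that water math looks like, a minimal sketch assuming an evaporatively cooled facility. The WUE (water usage effectiveness, liters per kWh) and upstream-generation constants are illustrative placeholders, since real figures vary widely by site, cooling design, and grid mix:

```python
# Back-of-the-envelope "water math" of the kind residents are demanding.
# A sketch only: WUE and embedded-water constants vary widely by site and
# cooling design, and the values below are illustrative, not sourced.

HOURS_PER_YEAR = 8760

def annual_water_liters(it_load_mw: float,
                        onsite_wue_l_per_kwh: float = 0.5,    # assumed evaporative cooling
                        grid_water_l_per_kwh: float = 3.0):   # assumed water embedded in generation
    """Split annual water use into onsite consumptive and upstream components."""
    kwh = it_load_mw * 1_000 * HOURS_PER_YEAR
    return {"onsite_consumptive": kwh * onsite_wue_l_per_kwh,
            "upstream_generation": kwh * grid_water_l_per_kwh}

use = annual_water_liters(100)  # a hypothetical 100 MW campus
print(f"onsite: {use['onsite_consumptive'] / 1e9:.1f} billion liters/year")      # ~0.4
print(f"upstream: {use['upstream_generation'] / 1e9:.1f} billion liters/year")   # ~2.6
```

Even with generous assumptions, the consumptive numbers land in billions of liters per year, which is why residents want the inputs disclosed and binding rather than asserted in a press release.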

Why This Matters for AI Policy

This revolt exposes the physical contradiction at the heart of the AI infrastructure build-out: compute is abstract in policy rhetoric but experienced locally as land, water, power, and noise.

Communities are rejecting a development model that externalizes its physical costs onto local water systems and ratepayers.

Water is now the primary political weapon communities are using to block, delay, and reshape AI infrastructure projects.

Read the local news:

America’s AI Boom Is Running Into An Unplanned Water Problem (Ken Silverstein/Forbes)

Residents raise water concerns over proposed Google data center (Allyssa Beatty/WDBJ7 News)

How data centers are rattling a Georgia Senate special election (Greg Bluestein/Atlanta Journal Constitution)

‘A perfect, wild storm’: widely loathed datacenters see little US political opposition (Tom Perkins/The Guardian)

Hanover Planning Commission votes to deny rezoning request for data center development (Joi Fultz/WTVR)

Microsoft rolls out initiative to limit data-center power costs, water use impact (Reuters)

South Korea’s AI Action Plan and the Global Drift Toward “Use First, Pay Later”

South Korea has become the latest flashpoint in a rapidly globalizing conflict over artificial intelligence, creator rights and copyright. A broad coalition of Korean creator and copyright organizations—spanning literature, journalism, broadcasting, screenwriting, music, choreography, performance, and visual arts—has issued a joint statement rejecting the government’s proposed Korea AI Action Plan, warning that it risks allowing AI companies to use copyrighted works without meaningful permission or payment.

The groups argue that the plan signals a fundamental shift away from a permission-based copyright framework toward a regime that prioritizes AI deployment speed and “legal certainty” for developers, even if that certainty comes at the expense of creators’ control and compensation. Their statement is unusually blunt: they describe the policy direction as a threat to the sustainability of Korea’s cultural industries and pledge continued opposition unless the government reverses course.

The controversy centers on Action Plan No. 32, which promotes “activating the ecosystem for the use and distribution of copyrighted works for AI training and evaluation.” The plan directs relevant ministries to prepare amendments—either to Korea’s Copyright Act, the AI Basic Act, or through a new “AI Special Act”—that would enable AI training uses of copyrighted works without legal ambiguity.

Creators argue that “eliminating legal ambiguity” reallocates legal risk rather than resolves it. Instead of clarifying consent requirements or building licensing systems, the plan appears to reduce the legal exposure of AI developers while shifting enforcement burdens onto creators through opt-out or technical self-help mechanisms.

Similar policy patterns have emerged in the United Kingdom and India, where governments have emphasized legal certainty and innovation speed while creative sectors warn of erosion to prior-permission and fair-compensation norms. South Korea’s debate stands out for the breadth of its opposition and the clarity of the warning from cultural stakeholders.

The South Korean government avoids using the term “safe harbor,” but its plan to remove “legal ambiguity” reads like an effort to build one. The asymmetry is telling: rather than eliminating ambiguity by strengthening consent and payment mechanisms, the plan seeks to eliminate ambiguity by making AI training easier to defend as lawful—without meaningful consent or compensation frameworks. That is, in substance, a safe harbor, and a species of blanket license. The resulting “certainty” would function as a pass for AI companies, while creators are left to police unauthorized use after the fact, often through impractical opt-out mechanisms—to the extent such rights remain enforceable at all.

Grass‑Roots Rebellion Against Data Centers and Grid Expansion

A grass‑roots “data center and electric grid rebellion” is emerging across the United States as communities push back against the local consequences of AI‑driven infrastructure expansion. Residents are increasingly challenging large‑scale data centers and the transmission lines needed to power them, citing concerns about enormous electricity demand, water consumption, noise pollution, land use, declining property values, and opaque approval processes. What were once routine zoning or utility hearings are now crowded, contentious events, with citizens organizing quickly and sharing strategies across counties and states.



This opposition is no longer ad hoc. In Northern Virginia—often described as the global epicenter of data centers—organized campaigns such as the Coalition to Protect Prince William County have mobilized voters, fundraised for local elections, demanded zoning changes, and challenged approvals in court. In Maryland’s Prince George’s County, resistance has taken on a strong environmental‑justice framing, with groups like the South County Environmental Justice Coalition arguing that data centers concentrate environmental and energy burdens in historically marginalized communities and calling for moratoria and stronger safeguards.



Nationally, consumer and civic groups are increasingly coordinated, using shared data, mapping tools, and media pressure to argue that unchecked data‑center growth threatens grid reliability and shifts costs onto ratepayers. Together, these campaigns signal a broader political reckoning over who bears the costs of the AI economy.

Global Data Centers

Here’s a snapshot of grass-roots opposition in Texas, Louisiana and Nevada:

Texas

Texas has some of the most active and durable local opposition, driven by land use, water, and transmission corridors.

  • Hill Country & Central Texas (Burnet, Llano, Gillespie, Blanco Counties)
    Grass-roots groups formed initially around high-voltage transmission lines (765 kV) tied to load growth, now explicitly linking those lines to data center demand. Campaigns emphasize:
    • rural land fragmentation
    • wildfire risk
    • eminent domain abuse
    • lack of local benefit
      These groups are often informal coalitions of landowners rather than NGOs, but they coordinate testimony, public-records requests, and local elections.
  • DFW & North Texas
    Neighborhood associations opposing rezoning for hyperscale facilities focus on noise (backup generators), property values, and school-district tax distortions created by data-center abatements.
  • ERCOT framing
    Texas groups uniquely argue that data centers are socializing grid instability risk onto residential ratepayers while privatizing upside—an argument that resonates with conservative voters.

Louisiana

Opposition is newer but coalescing rapidly, often tied to petrochemical and LNG resistance networks.

  • North Louisiana & Mississippi River Corridor
    Community groups opposing new data centers frame them as:
    • “energy parasites” tied to gas plants
    • extensions of an already overburdened industrial corridor
    • threats to water tables and wetlands
      Organizers often overlap with environmental-justice and faith-based coalitions that previously fought refineries and export terminals.
  • Key tactic: reframing data centers as industrial facilities, not “tech,” triggering stricter land-use scrutiny.

Nevada

Nevada opposition centers on water scarcity and public-land use.

  • Clark County & Northern Nevada
    Residents and conservation groups question:
    • water allocations for evaporative cooling
    • siting near public or BLM-managed land
    • grid upgrades subsidized by ratepayers for private AI firms
  • Distinct Nevada argument: data centers compete directly with housing and tribal water needs, not just environmental values.

The Data Center Rebellion is Here and It’s Reshaping the Political Landscape (Washington Post)

Residents protest high-voltage power lines that could skirt Dinosaur Valley State Park (Alejandra Martinez and Paul Cobler/Texas Tribune)

US Communities Halt $64B Data Center Expansions Amid Backlash (Lucas Greene/WebProNews)

Big Tech’s fast-expanding plans for data centers are running into stiff community opposition (Marc Levy/Associated Press)

Data center ‘gold rush’ pits local officials’ hunt for new revenue against residents’ concerns (Alander Rocha/Georgia Record)

Frozen Ledgers and Living Systems: What King William’s Domesday Book Can Teach Us About the Mechanical Licensing Collective

A static record can support governance, but it cannot replace it. When a dynamic economy is ruled by a frozen ledger, injustice is structural rather than accidental. The lesson of Domesday is not to abandon centralized records, but to build institutions that acknowledge change, dispute, and time.

Introduction: The Problem of the Frozen Record

The Domesday Book was not wrong so much as frozen. It rendered a living, changing system of land tenure into a static ledger that became authoritative precisely because it could not keep up with reality. The Mechanical Licensing Collective (“MLC”) repeats this error in digital form. Musical works ownership is dynamic, relational, and contested, yet royalties flow based on a fixed snapshot that is at least potentially outdated the moment it is operationalized. In both systems, the problem is not bad data but the pretense that a static record can govern a dynamic economy without producing systemic error.[1] That’s why I always recommend Weapons of Math Destruction by Cathy O’Neil to MLC executives, which they promptly ignore.

I argue that the failure mode is mostly structural, not technical. The technical part is relatively trivial compared to, say, AI training or protein folding. I think the database could be built far quicker, far cheaper, and far more accurately than the MLC has managed. The MLC blew a unique opportunity to start with a blank sheet of paper and instead perpetuated the Harry Fox Agency, which was founded well before FoxPro. The lesson of Domesday is not that better enumeration solves governance problems, but that static records require institutional counterweights to prevent injustice, capture, and permanent misallocation. That is, to prevent the MLC from being the MLC.

Background: Two Authoritative Ledgers

A. The Domesday Book

Commissioned by William the Conqueror in 1085–1086, the Domesday Book was a comprehensive survey of landholding and economic resources in post‑Conquest England.[2] Its purpose was fiscal and administrative: to identify who held land, what that land was worth, and what obligations were owed to the Crown.[3] Domesday recorded information through sworn local inquests and was intended to be definitive.

Crucially, Domesday was never designed to be updated, at least not in real time. It froze a moment in time and became authoritative precisely because it was fixed. Almost immediately, it diverged from reality as land changed hands through death, forfeiture, re‑grant, and political favor.[4] Rather than revise Domesday, medieval England developed supplementary institutions—annual fiscal records, local courts, and royal adjudication—to manage change and dispute.[5]

B. The Mechanical Licensing Collective

The Mechanical Licensing Collective was created by Congress in Title I of the Music Modernization Act of 2018 to administer the blanket mechanical license for digital music services in the United States.[6] (More accurately, Title I was written by the lobbyists and imposed on the world with Congress’s chop.) The MLC maintains a centralized database of musical works ownership, collects mechanical royalties from digital service providers, and distributes those royalties to songwriters and publishers.[7]

Musical works ownership, however, is inherently dynamic. Writers change publishers, estates open and close, ownership splits are disputed, and metadata is frequently incomplete or corrected only after use, aka “Copyright Control.”[8] As a result, the MLC’s database—however well-intentioned—is outdated almost as soon as it is operationalized (particularly because it was and is based on the Harry Fox Agency’s database, which the MLC passed off as state of the art over the objections of others).

Domesday as a Governance Tool, Not a Truth Machine

Domesday succeeded at centralizing authority, not at preserving truth over time. Land tenure in eleventh-century England was dynamic, relational, and politically contingent. Domesday froze these relationships into an official record that quickly diverged from lived reality, yet retained legal force because it was authoritative rather than accurate.[9] Nothing that a Norman knight with broadsword and mace couldn’t fix.

Importantly, medieval England did not rely on Domesday alone. The development of Pipe Rolls, hundred and shire courts, and royal justice provided mechanisms to contextualize, correct, and supersede the frozen record.[10]

The MLC as Digital Domesday

The MLC performs a structurally similar function today. It fixes ownership claims, establishes a canonical record, and allocates ongoing revenue streams while disputes remain unresolved. Royalties flow based on the database snapshot in effect at the time of use, even when that snapshot is known to be incomplete or incorrect.[11]

As with Domesday, authority substitutes for adaptability. The database becomes dispositive not because it reflects reality, but because it governs the flow of money. In other words, the MLC is not authoritative because it is accurate or complete; it is authoritative because Congress made its use compulsory. That’s right—it’s not authoritative because it’s accurate, it’s authoritative because it’s authorized.

Three Solutions Grounded in Domesday’s Afterlife

1. Authoritative Record Plus Living Supplement (The Pipe Roll Model)

Domesday was supplemented by the Pipe Rolls—annual fiscal records that reflected changes in obligations over time.[12] Applied to the MLC, this suggests separating baseline records from continuous reconciliation layers and treating unmatched royalties as unreconciled obligations of the MLC rather than abandoned property of the songwriter.

2. Jurisdictional Pluralism (The Hundred and Shire Court Model)

Domesday did not eliminate local adjudication. Disputes were resolved in courts that contextualized Domesday entries rather than deferring blindly to them.[13]  Similarly, ownership and split disputes should be resolved in external and independent fora, with the MLC conforming its records and payouts to those determinations.

3. No Profit from Unresolved Ownership (The No Escheat Without Judgment Model)

In medieval England, the Crown could claim land only through recognized legal mechanisms such as forfeiture or escheat.[14] Uncertainty alone did not justify enrichment.  A Domesday‑informed reform would prohibit institutional profit from unresolved ownership and require segregation of disputed funds.

By contrast, the MLC “black box” is not escheatment at all—yet it functionally resembles one-sided escheatment without due process. Under traditional escheat or unclaimed-property regimes, the state’s claim arises only after defined predicates: notice, diligence, and a lawful adjudication or administrative determination of abandonment, coupled with a public fiduciary obligation to locate the owner. The black box instead permits private retention and deployment of other people’s money based solely on unresolved ownership, without a judgment of abandonment, without a comparable duty to search for the owner, and with the economic upside of delay accruing to the intermediary rather than the missing payee.

For example, California requires some effort:

California law requires all holders (corporations, businesses, associations, financial institutions, and insurance companies) of unclaimed property to attempt to contact owners before reporting their property to the State Controller’s Office.

Holders are required to send a notice to the owner’s last known address informing them that the property will be transferred to the State Controller’s Office for safekeeping if the owner does not contact them to retrieve it.

The State Controller’s Office sends notices to all owners of property that will be transferred to the state. These notices are sent out before the property is to be transferred, giving owners an opportunity to retrieve property directly from the holder.

The constitutional problem is sharpened by Title I of the MMA, which expressly preempts state escheatment and unclaimed-property laws—but arguably does not replace them with functionally equivalent federal protections. States are forbidden to take custody of abandoned property without notice, diligence, and due process; yet the MMA authorizes a private entity to hold, invest (or so MLC argues), and ultimately distribute unmatched royalties on a market share basis (including to companies represented on MLC’s board of directors) without any finding of abandonment, without judicial process, and without a neutral public custodian.

Specifically, Title I provides at 17 U.S.C. § 115(d)(11)(E):

(E) Preemption of state property laws.—

The holding and distribution of funds by the mechanical licensing collective in accordance with this subsection shall supersede and preempt any State law (including common law) concerning escheatment or abandoned property, or any analogous provision, that might otherwise apply.

So with a wave of the hand, Title I preempts the detailed protections of escheatment traditions that date back to the doctrine of defectus sanguinis in the 12th century (the Pipe Roll of 1130 (31 Henry I)). This asymmetry raises serious Due Process and Equal Protection concerns (not to mention conflicts of interest), and potentially a Takings Clause problem: Congress may not displace state escheat safeguards and simultaneously permit private enrichment from unresolved ownership where states themselves would be constitutionally barred from proceeding without judgment and owner-protective procedures. It also raises a classic problem of federal preemption that strips away state protections without substituting functionally equivalent federal ones.[15]

Three Contemporary Reforms the MLC Could Adopt

1. Authoritative Record + Living Reconciliation Layer (The Pipe Roll Model)

Adopt a structural separation between the MLC’s baseline ownership database and a continuous reconciliation system that tracks changes, corrections, disputes, and late‑arriving claims on a monthly basis.

In practice, unmatched royalties would be treated as unreconciled obligations rather than quasi‑abandoned funds. The MLC would maintain a rolling, auditable ledger capable of updating distributions when ownership data changes, including retroactive true‑ups once claims are resolved, instead of locking outcomes to a stale snapshot.

This reform acknowledges that ownership is dynamic and prevents early database errors from permanently reallocating value.
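To make the idea concrete, here is a minimal sketch of what a living reconciliation layer could look like, written in Python purely for illustration. Every name in it (`RoyaltyLedger`, `accrue`, `distribute`, `true_up`) is hypothetical, not the MLC’s actual system; the point is only that retroactive true-ups are a routine engineering pattern, not an administrative impossibility.

```python
from dataclasses import dataclass, field

@dataclass
class RoyaltyLedger:
    """Illustrative only: a rolling ledger that treats unmatched royalties
    as unreconciled obligations and supports retroactive true-ups."""
    earned: dict = field(default_factory=dict)  # (work, period) -> amount accrued
    paid: dict = field(default_factory=dict)    # (work, period) -> {payee: amount paid}
    owners: dict = field(default_factory=dict)  # work -> {payee: share}, best data today

    def accrue(self, work, period, amount):
        # Record what the work earned in a period, matched or not.
        self.earned[(work, period)] = self.earned.get((work, period), 0.0) + amount

    def distribute(self, work, period):
        # Pay under the ownership data of record today; the ledger keeps the trail.
        payees = self.paid.setdefault((work, period), {})
        for payee, share in self.owners.get(work, {}).items():
            payees[payee] = payees.get(payee, 0.0) + self.earned.get((work, period), 0.0) * share

    def true_up(self, work, corrected_owners):
        # A late claim or correction recomputes every past period and returns
        # the deltas: positive = now owed, negative = offset against the overpaid.
        self.owners[work] = corrected_owners
        deltas = {}
        for (w, period), amount in self.earned.items():
            if w != work:
                continue
            already = self.paid.get((w, period), {})
            for payee in set(already) | set(corrected_owners):
                delta = amount * corrected_owners.get(payee, 0.0) - already.get(payee, 0.0)
                if abs(delta) > 1e-9:
                    deltas[payee] = deltas.get(payee, 0.0) + delta
        return deltas
```

If a stale snapshot sent 100% of a period’s royalties to Publisher A and a songwriter later proves a 50/50 split, `true_up` returns a positive adjustment for the songwriter and an equal offset against Publisher A, while the original accruals and payments remain in the ledger for audit.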

2. Independent Adjudication with Mandatory Conformance (The Hundred and Shire Court Model)

Formally decouple the resolution of ownership and split disputes from the MLC’s internal processes, and require the MLC to conform its records and payouts to determinations made by independent fora.

In practice, disputes would be resolved in courts, arbitrations, or designated independent neutral bodies, and the MLC would treat those determinations as binding inputs rather than discretionary metadata updates. The database would no longer enjoy a presumption of correctness when ownership is contested, and disputes would not be resolved by conflicted statutory committees.

This prevents the MLC from acting as judge, jury, and paymaster and restores legitimacy to ownership determinations.

3. Mandatory Segregation and No Profit from Unresolved Ownership (The No Escheat Without Judgment Model)

Prohibit the MLC from retaining, investing, or reallocating royalties tied to unresolved ownership, and incentivize it to find the correct owners instead.

In practice, all black‑box royalties would be held in segregated custodial accounts, or at least on segregated ledgers. Market‑share distributions would be barred unless and until lawful abandonment is established, and the MLC would carry an affirmative duty to search for and notify potential claimants, analogous to the duties of traditional unclaimed‑property regimes.

This removes perverse incentives to delay resolution and aligns the MLC with basic due‑process and fiduciary norms, especially critical given the MMA’s preemption of state escheat laws (which itself may be unconstitutional).
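Again purely as illustration (the names below are invented for this post, and real custody rules would live in statute and trust instruments, not code), the no-escheat-without-judgment rule reduces to a single hard gate, plus the Webb’s Fabulous Pharmacies principle that interest follows the principal:

```python
class SegregatedBlackBox:
    """Illustrative custodial account for unmatched royalties."""

    def __init__(self):
        self.principal = {}     # work -> funds held pending resolution
        self.interest = {}      # work -> investment return accrued to the owner
        self.abandoned = set()  # works with a lawful abandonment determination

    def deposit(self, work, amount):
        self.principal[work] = self.principal.get(work, 0.0) + amount

    def credit_interest(self, work, rate):
        # The economic upside of delay accrues to the missing payee, not the custodian.
        self.interest[work] = self.interest.get(work, 0.0) + self.principal.get(work, 0.0) * rate

    def pay_claimant(self, work):
        # A matched owner always receives principal plus everything it earned while held.
        return self.principal.pop(work, 0.0) + self.interest.pop(work, 0.0)

    def market_share_distribution(self, work):
        # The hard gate: barred unless and until abandonment is lawfully established.
        if work not in self.abandoned:
            raise PermissionError("No escheat without judgment: ownership unresolved.")
        return self.pay_claimant(work)
```

The gate inverts the current incentive: so long as ownership is unresolved, delay earns the custodian nothing.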

Taken together, these reforms shift the MLC away from treating a frozen ledger as dispositive authority and toward an institutional design that acknowledges change, dispute, and time—without sacrificing administrative efficiency. At $40 million a year, they should be able to pull this off or at least start slouching toward Bethlehem.


[1] S.F.C. Milsom, Historical Foundations of the Common Law (2d ed. 1981).

[2] Domesday Book (1086).

[3] R. Allen Brown, The Normans and the Norman Conquest (2d ed. 1985).

[4] J.C. Holt, Domesday Studies (1987).

[5] Mark Hagger, William: King and Conqueror (2012).

[6] Music Modernization Act, Pub. L. No. 115‑264, 132 Stat. 3676 (2018).

[7] 17 U.S.C. § 115(d).

[8] U.S. Copyright Office, Music Modernization Act Implementation Report (2019). “Copyright Control” is often a metadata band-aid: it flags that publishing info is incomplete or self-administered. The publisher share can wind up unmatched/unallocated even though ownership is knowable or is ultimately known after an indeterminate number of accounting periods.

[9] F.W. Maitland, Domesday Book and Beyond (1897).

[10] Richard FitzNigel, Dialogus de Scaccario (Dialogue concerning the Exchequer) (c. 1179).

[11] Copyright Royalty Judges, Phonorecords III & IV.

[12] Pipe Roll Society, The Pipe Roll of Henry I.

[13] Paul Brand, The Origins of the English Legal Profession (1992).

[14] Escheat is a common-law legal mechanism by which real property reverted to the Crown when a tenant died intestate and without lawful heirs. At common law, escheat required the extinction of the tenant’s line of inheritance; mere uncertainty of title or ownership was insufficient. In modern U.S. law, escheat has been adapted to intangible and unclaimed property, but it retains the same core features: notice, diligence, and a lawful determination of abandonment or lack of heirs before the sovereign (in our case a State) may take custody.

[15] See Connecticut Mutual Life Ins. Co. v. Moore, 333 U.S. 541 (1948); Texas v. New Jersey, 379 U.S. 674 (1965) (states may take custody of abandoned property only subject to procedural protections and priority rules); cf. Webb’s Fabulous Pharmacies, Inc. v. Beckwith, 449 U.S. 155 (1980) (interest on private funds held by a custodian remains private property; government may not appropriate economic benefits without just compensation).

What Would Freud Do? The Unconscious Is Not a Database — and Humans Are Not Machines

What would Freud do?

It’s a strange question to ask about AI and copyright, but a useful one. When generative-AI fans insist that training models on copyrighted works is merely “learning like a human,” they rely on a metaphor that collapses under even minimal scrutiny. Psychoanalysis—whatever one thinks of Freud’s conclusions—begins from a premise that modern AI rhetoric quietly denies: the unconscious is not a database, and humans are not machines.

As Freud wrote in The Interpretation of Dreams, “Our memory has no guarantees at all, and yet we bow more often than is objectively justified to the compulsion to believe what it says.” No AI truthiness there.

Human learning does not involve storing perfect, retrievable copies of what we read, hear, or see. Memory is reconstructive, shaped by context, emotion, repression, and time. Dreams do not replay inputs; they transform them. What persists is meaning, not a file.

AI training works in the opposite direction—obviously. Training begins with high-fidelity copying at industrial scale. It converts human expressive works into durable statistical parameters designed for reuse, recall, and synthesis for eternity. Where the human mind forgets, distorts, and misremembers as a feature of cognition, models are engineered to remember as much as possible, as efficiently as possible, and to deploy those memories at superhuman speed. Nothing like humans.

Calling these two processes “the same kind of learning” is not analogy—it is misdirection. And that misdirection matters, because copyright law was built around the limits of human expression: scarcity, imperfection, and the fact that learning does not itself create substitute works at scale.

Dream-Work Is Not a Training Pipeline

Freud’s theory of dreams turns on a simple but powerful idea: the mind does not preserve experience intact. Instead, it subjects experience to dream-work—processes like condensation (many ideas collapsed into one image), displacement (emotional significance shifted from one object to another), and symbolization (one thing representing another, allowing humans to create meaning and understanding through symbols). The result is not a copy of reality but a distorted, overdetermined construction whose origins cannot be cleanly traced.

This matters because it shows what makes human learning human. We do not internalize works as stable assets. We metabolize them. Our memories are partial, fallible, and personal. Two people can read the same book and walk away with radically different understandings—and neither “contains” the book afterward in any meaningful sense. There is no Rashomon effect for an AI.

AI training is the inverse of dream-work. It depends on perfect copying at ingestion, retention of expressive regularities across vast parameter spaces, and repeatable reuse untethered from embodiment, biography, or forgetting. If Freud’s model describes learning as transformation through loss, AI training is transformation through compression without forgetting.

One produces meaning. The other produces capacity.

The Unconscious Is Not a Database

Psychoanalysis rejects the idea that memory functions like a filing cabinet. The unconscious is not a warehouse of intact records waiting to be retrieved. Memory is reconstructed each time it is recalled, reshaped by narrative, emotion, and social context. Forgetting is not a failure of the system; it is a defining feature.

AI systems are built on the opposite premise. Training assumes that more retention is better, that fidelity is a virtue, and that expressive regularities should remain available for reuse indefinitely. What human cognition resists by design—perfect recall at scale—machine learning seeks to maximize.

This distinction alone is fatal to the “AI learns like a human” claim. Human learning is inseparable from distortion, limitation, and individuality. AI training is inseparable from durability, scalability, and reuse.

In The Divided Self, R. D. Laing rejects the idea that the mind is a kind of internal machine storing stable representations of experience. What we encounter instead is a self that exists only precariously, defined by what Laing calls “ontological security” or its absence—the sense of being real, continuous, and alive in relation to others. Experience, for Laing, is not an object that can be detached, stored, or replayed; it is lived, relational, and vulnerable to distortion. He warns repeatedly against confusing outward coherence with inner unity, emphasizing that a person may present a fluent, organized surface while remaining profoundly divided within. That distinction matters here: performance is not understanding, and intelligible output is not evidence of an interior life that has “learned” in any human sense.

Why “Unlearning” Is Not Forgetting

Once you understand this distinction, the problem with AI “unlearning” becomes obvious.

In human cognition, there is no clean undo. Memories are never stored as discrete objects that can be removed without consequence. They reappear in altered forms, entangled with other experiences. Freud’s entire thesis rests on the impossibility of clean erasure.

AI systems face the opposite dilemma. They begin with discrete, often unlawful copies, but once those works are distributed across parameters, they cannot be surgically removed with certainty. At best, developers can stop future use, delete datasets, retrain models, or apply partial mitigation techniques (none of which they are willing to even attempt). What they cannot do is prove that the expressive contribution of a particular work has been fully excised.

This is why promises (especially contractual promises) to “reverse” improper ingestion are so often overstated. The system was never designed for forgetting. It was designed for reuse.

Why This Matters for Fair Use and Market Harm

The “AI = human learning” analogy does real damage in copyright analysis because it smuggles conclusions into fair-use factor one (transformative purpose and character) and obscures factor four (market harm).

Learning has always been tolerated under copyright law because learning does not flood markets. Humans do not emerge from reading a novel with the ability to generate thousands of competing substitutes at scale. Generative models do exactly that—and only because they are trained through industrial-scale copying.

Copyright law is calibrated to human limits. When those limits disappear, the analysis must change with them. Treating AI training as merely “learning” collapses the very distinction that makes large-scale substitution legally and economically significant.

The Pensieve Fallacy

There is a world in which minds function like databases. It is a fictional one.

In Harry Potter and the Goblet of Fire, wizards can extract memories, store them in vials, and replay them perfectly using a Pensieve. Memories in that universe are discrete, stable, lossless objects. They can be removed, shared, duplicated, and inspected without distortion. As Dumbledore explained to Harry, “I use the Pensieve. One simply siphons the excess thoughts from one’s mind, pours them into the basin, and examines them at one’s leisure. It becomes easier to spot patterns and links, you understand, when they are in this form.”

That is precisely how AI advocates want us to imagine learning works.

But the Pensieve is magic because it violates everything we know about human cognition. Real memory is not extractable. It cannot be replayed faithfully. It cannot be separated from the person who experienced it. Arguably, Freud’s work exists because memory is unstable, interpretive, and shaped by conflict and context.

AI training, by contrast, operates far closer to the Pensieve than to the human mind. It depends on perfect copies, durable internal representations, and the ability to replay and recombine expressive material at will.

The irony is unavoidable: the metaphor that claims to make AI training ordinary only works by invoking fantasy.

Humans Forget. Machines Remember.

Freud would not have been persuaded by the claim that machines “learn like humans.” He would have rejected it as a category error. Human cognition is defined by imperfection, distortion, and forgetting. AI training is defined by reproduction, scale, and recall.

To believe AI learns like a human, you have to believe humans have Pensieves. They don’t. That’s why Pensieves appear in Harry Potter—not neuroscience, copyright law, or reality.

The Paradox of Huang’s Rope

If the tech industry has a signature fallacy for the 2020s aside from David Sacks, it belongs to Jensen Huang. The CEO of Nvidia has perfected a circular, self-consuming logic so brazen that it deserves a name: The Paradox of Huang’s Rope. It is the argument that China is too dangerous an AI adversary for the United States to regulate artificial intelligence at home or control export of his Nvidia chips abroad—while insisting in the very next breath that the U.S. must allow him to keep selling China the advanced Nvidia chips that make China’s advanced AI capabilities possible. The justification destroys its own premise, like handing an adversary the rope to hang you and then pointing to the length of that rope as evidence that you must keep selling more, perhaps to ensure a more “humane” hanging. I didn’t think it was possible to beat “sharing is caring” for utter fallacious bollocks.

The Paradox of Huang’s Rope works like this: First, hype China as an existential AI competitor. Second, declare that any regulatory guardrails—whether they concern training data, safety, export controls, or energy consumption—will cause America to “fall behind.” Third, invoke national security to insist that the U.S. government must not interfere with the breakneck deployment of AI systems across the economy. And finally, quietly lobby for carveouts that allow Nvidia to continue selling ever more powerful chips to the same Chinese entities supposedly creating the danger that justifies deregulation.

It is a master class in circularity: “China is dangerous because of AI → therefore we can’t regulate AI → therefore we must sell China more AI chips → therefore China is even more dangerous → therefore we must regulate even less and export even more to China.” At no point does the loop allow for the possibility that reducing the United States’ role as China’s primary AI hardware supplier might actually reduce the underlying threat. Instead, the logic insists that the only unacceptable risk is the prospect of Nvidia making slightly less money.

This is not hypothetical. While Washington debates export controls, Huang has publicly argued that restrictions on chip sales to China could “damage American technology leadership”—a claim that conflates Nvidia’s quarterly earnings with the national interest. Meanwhile, U.S. intelligence assessments warn that China is building fully autonomous weapons systems, and European analysts caution that Western-supplied chips are appearing in PLA research laboratories. Yet the policy prescription from Nvidia’s corner remains the same: no constraints on the technology, no accountability for the supply chain, and no acknowledgment that the market incentives involved have nothing to do with keeping Americans safe. And anyone who criticizes the authoritarian state run by the Chinese Communist Party is a “China Hawk,” which Huang says is a “badge of shame” and “unpatriotic,” because protecting America from China by cutting off chip exports “destroys the American Dream.” Say what?

The Paradox of Huang’s Rope mirrors other Cold War–style fallacies, in which companies invoke a foreign threat to justify deregulation while quietly accelerating that threat through their own commercial activity. But in the AI context, the stakes are higher. AI is not just another consumer technology; its deployment shapes military posture, labor markets, information ecosystems, and national infrastructure. A strategic environment in which U.S. corporations both enable and monetize an adversary’s technological capabilities is one that demands more regulation, not less.

Naming the fallacy matters because it exposes the intellectual sleight of hand. Once the circularity is visible, the argument collapses. The United States does not strengthen its position by feeding the very capabilities it claims to fear. And it certainly does not safeguard national security by allowing one company’s commercial ambitions to dictate the boundaries of public policy. The Paradox of Huang’s Rope should not guide American AI strategy. It should serve as a warning of how quickly national priorities can be twisted into a justification for private profit.