The Devil’s Greatest Trick: Ro Khanna’s “Creator Bill of Rights” Is a Political Shield, Not a Charter for Creative Labor

La plus belle des ruses du Diable est de vous persuader qu’il n’existe pas! (“The greatest trick the Devil ever pulled was convincing the world he didn’t exist.”)

Charles Baudelaire, Le Joueur généreux

Ro Khanna’s so‑called “Creator Bill of Rights” is being sold as a long‑overdue charter for fairness in the digital economy—you know, like for gig workers. In reality, it functions as a political shield for Silicon Valley platforms: a non‑binding, influencer‑centric framework built on a false revenue‑share premise that bypasses child labor, unionized creative labor, professional creators, non‑featured artists, and the central ownership and consent crises posed by generative AI. 

Mr. Khanna’s resolution treats transparency as leverage, consent as vibes, and platform monetization as a deus ex machina-style natural law of the singularity—while carefully avoiding enforceable rights, labor classification, copyright primacy, artist consent for AI training, work-for-hire abuse, and real remedies against AI labs for artists. What flows from his assumptions is not a “bill of rights” for creators, but a narrative framework designed to pacify the influencer economy and legitimize platform power at the exact moment that courts are finding that creative labor is being illegally scraped, displaced, and erased by AI leviathans, including some publicly traded companies with trillion-dollar market caps.

The First Omission: Child Labor in the Creator Economy

Rep. Khanna’s newly unveiled “Creator Bill of Rights” has been greeted with the kind of headlines Silicon Valley loves: Congress finally standing up for creators, fairness, and transparency in the digital economy. But the very first thing it doesn’t do should set off alarm bells. The resolution never meaningfully addresses child labor in the creator economy, a sector now infamous for platform-driven exploitation of minors through user-generated content, influencer branding, algorithmic visibility contests, and monetized childhood. (Wikipedia is Exhibit A, Facebook Exhibit B, YouTube Exhibit C, and Instagram Exhibit D.)

There is no serious discussion of child worker protections and all that comes with them, often under state laws: working-hour limits, trust accounts, consent frameworks, or the psychological and economic coercion baked into platform monetization systems. For a document that styles itself as a “bill of rights,” that omission alone is disqualifying. But perhaps understandable given AI Viceroy David Sacks’ obsession with blocking enforcement of state laws that “impede” AI.

And it’s not an isolated miss. Once you read Khanna’s framework closely, a pattern emerges. This isn’t a bill of rights for creators. It’s a political shield for platforms that is built on a false economic premise, framed around influencers, silent on professional creative labor, evasive on AI ownership and training consent, and carefully structured to avoid enforceable obligations.

The Foundational Error: Treating Revenue Share as Natural Law That Justifies a Stream-Share Threshold

The foundational error appears right at the center of the resolution: its uncritical embrace of revenue sharing, the Internet’s coin of the realm. Khanna calls for “clear, transparent, and predictable revenue-sharing terms” between platforms and creators. That phrase sounds benign, even progressive. But it quietly locks in the single worst idea anyone ever had for royalty economics: big-pool platform revenue share, an idea now being rejected by pretty much everyone except Spotify with its stream-share threshold. In case Mr. Khanna didn’t get the memo, artist-centric is the new new thing.

Revenue sharing treats creators as participants in a platform monetization program, not as rights-holders. You know, “partners.” Artists don’t get a share of Spotify stock, they get a “revenue share” because they’re “partnering” with Spotify. If that’s how Spotify treats “partners”…

Under that revenue share model, the platform defines what counts as revenue, what gets excluded, how it’s allocated, which metrics matter, and how the rules change. The platform controls all the data. The platform controls the terms. And the platform retains unilateral power to rewrite the deal. Hey “partner,” that’s not compensation grounded in intellectual property or labor rights. It’s a dodge grounded in platform policy.

We already know how this story ends. Big-pool revenue share regimes hide cross-subsidies, reward algorithm gaming over quality, privilege viral noise over durable cultural work, and collapse bargaining power into opaque market-share payments of microscopic proportions. Revenue share deals destroy price signals, hollow out licensing markets, and make creative income volatile and non-forecastable. This is exceptionally awful for songwriters: nobody can tell a songwriter today what that burger on Tuesday will actually bring.
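To make the arithmetic concrete, here is a minimal sketch of how a big-pool pro-rata payout works, and how a unilaterally imposed minimum-stream threshold (Spotify’s announced cutoff is 1,000 annual streams) zeroes out low-stream tracks while redistributing their share up the chart. All catalog names and dollar figures below are invented for illustration:

```python
# Hypothetical illustration of big-pool pro-rata revenue share.
# All figures are invented; the 1,000-stream cutoff mirrors
# Spotify's announced demonetization threshold for low-stream tracks.

def pro_rata_payouts(pool, streams_by_track, min_streams=0):
    """Split a platform-defined revenue pool by stream share.

    Tracks below min_streams earn nothing; their would-be share
    stays in the pool and flows to everyone above the cutoff.
    """
    eligible = {t: s for t, s in streams_by_track.items() if s >= min_streams}
    total = sum(eligible.values())
    return {t: pool * s / total for t, s in eligible.items()}

catalog = {"indie_song": 900, "mid_tier": 50_000, "viral_hit": 10_000_000}
pool = 100_000.00  # whatever the platform says the "pool" is this month

no_threshold = pro_rata_payouts(pool, catalog)
with_threshold = pro_rata_payouts(pool, catalog, min_streams=1_000)

# The indie track's payout does not shrink under the threshold;
# it vanishes, and the viral hit absorbs the redistributed share.
```

Note that no per-stream “rate” is ever negotiated anywhere in this model: the platform defines the pool, the eligibility rules, and the cutoff, and can change any of them unilaterally.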

An advertising revenue-share model penalizes artists because they receive only a tiny fraction of the ads served against their own music, while platforms like Google capture roughly half of the total advertising revenue generated across the entire network. Naturally they love it.

Advertising revenue shares are the core economic pathology behind what happened to music, journalism, and digital publishing over the last fifteen years. As we have seen from Spotify’s stream-share threshold, a platform can unilaterally decide to cut off payments at any time for any absurd reason and get away with it. And Khanna’s resolution doesn’t challenge that logic. It blesses it.

He doesn’t say creators are entitled to enforceable royalties tied to uses of their work at rates set by the artist. He doesn’t say there should be statutory floors, audit rights, underpayment penalties, nondiscrimination rules, or retaliation protections. He doesn’t say platforms should be prohibited from unilaterally redefining the pie. He says let’s make the revenue share more “transparent” and “predictable.” That’s not a power shift. That’s UX optimization for exploitation.

This Is an Influencer Bill, Not a Creator Bill

The second fatal flaw is sociological. Khanna’s resolution is written for the creator economy, not the creative economy.

The “creator” in Khanna’s bill is a YouTuber, a TikToker, a Twitch streamer, a podcast personality, a Substack writer, a platform-native entertainer (with, again, no child labor protections). Those are real jobs, and the people doing them face real precarity. But they are not the same thing as professional creative labor. They are usually not professional musicians, songwriters, composers, journalists, photographers, documentary filmmakers, authors, screenwriters, actors, directors, designers, engineers, visual artists, or session musicians. They are not non-featured performers. They are not investigative reporters. They are not the people whose works are being scraped at industrial scale to train generative AI systems.

Those professional creators are workers who produce durable cultural goods governed by copyright, contract, and licensing markets. They rely on statutory royalties, collective bargaining, residuals, reuse frameworks, audit rights, and enforceable ownership rules. They face synthetic displacement and market destruction from AI systems trained on their work without consent. Khanna’s resolution barely touches any of that. It governs platform participation. It does not govern creative labor. It’s not that influencers shouldn’t be able to rely on legal protections; it’s that a bill of rights for creators should include all creators, and very often the needs are different, starting with collective bargaining and unions.

The Total Bypass of Unionized Labor

Nowhere is this shortcoming more glaring than in the complete bypass of unionized labor. The framework lives in a parallel universe where SAG-AFTRA, WGA, DGA, IATSE, AFM, Equity, newsroom unions, residuals, new-use provisions, grievance procedures, pension and health funds, minimum rates, credit rules, and collective bargaining simply do not exist. That entire legal architecture is invisible.  And Khanna’s approach could easily roll back the gains on AI protections that unions have made through collective bargaining.

Which means the resolution is not attempting to interface with how creative work actually functions in film, television, music, journalism, or publishing. It is not creative labor policy. It is platform fairness rhetoric.

Invisible Labor: Non-Featured Artists and the People the Platform Model Erases

The same erasure applies to non-featured artists and invisible creative labor. Session musicians, backup singers, supporting actors, dancers, crew, editors, photographers on assignment, sound engineers, cinematographers — these people don’t live inside platform revenue-share dashboards. They are paid through wage scales, reuse payments, residuals, statutory royalty regimes, and collective agreements.

None of that exists in Khanna’s world. His “creator” is an account, not a worker.

AI Without Consent Is Not Accountability

The AI plank in the resolution follows the same pattern of rhetorical ambition and structural emptiness. Khanna gestures at transparency, consent, and accountability for AI and synthetic media. But he never defines what consent actually means.

Consent for training? For style mimicry? For voice cloning? For archival scraping of journalism and music catalogs? For derivative outputs? For model fine-tuning? For prompt exploitation? For replacement economics?

The bill carefully avoids the training issue. Which is the whole issue.

A real AI consent regime would force Congress to confront copyright primacy, opt-in licensing, derivative works, NIL rights, data theft, model ownership, and platform liability. Khanna’s framework gestures at harms while preserving the industrial ingestion model intact.

The Ownership Trap: Work-for-Hire and AI Outputs

This omission is especially telling. Nowhere does Khanna say platforms may not claim authorship or ownership of AI outputs by default. Nowhere does he say AI-assisted works are not works made for hire. Nowhere does he say users retain rights in their contributions and edits. Nowhere does he say WFH boilerplate cannot be used to convert prompts into platform-owned assets.

That silence is catastrophic.

Right now, platforms are already asserting ownership contractually, claiming assignments of outputs, claiming compilation rights, claiming derivative rights, controlling downstream licensing, locking creators out of monetization, and building synthetic catalogs they own. Even though U.S. law says purely AI-generated content isn’t copyrightable absent human authorship, platforms can still weaponize terms of service, automated enforcement, and contractual asymmetry to create “synthetic ownership” or “practical control.” Khanna’s resolution says nothing about any of it.

Portable Benefits as a Substitute for Labor Rights

Then there’s the portable-benefits mirage. Portable benefits sound progressive. They are also the classic substitute for confronting misclassification. So first of all, Khanna starts out saying that “gig workers” in the creative economy don’t get health care—aside from the union health plans, I guess. But then he pivots to portable benefits. So which is it? Surely he doesn’t mean nothing from nothing leaves nothing?

If you don’t want to deal with whether creators are actually employees, whether platforms owe payroll taxes, whether wage-and-hour law applies, whether unemployment insurance applies, whether workers’ comp applies, whether collective bargaining rights attach, or—wait for it—whether stock options apply, you propose portable benefits without dealing with the reality that there are no benefits. You preserve contractor status. You socialize costs and privatize upside. You deflect labor-law reform, and health insurance reform for that matter. You look compassionate. And you change nothing structurally.

Khanna’s framework sits squarely in that tradition of nothing from nothing leaves nothing.

A Non-Binding Resolution for a Reason

The final tell is procedural. Khanna didn’t introduce a bill. He introduced a non-binding resolution.

No enforceable rights. No regulatory mandates. No private causes of action. No remedies. No penalties. No agency duties. No legal obligations.

This isn’t legislation. It’s political signaling.

What This Really Is: A Political Shield

Put all of this together and the picture becomes clear. Khanna’s “Creator Bill of Rights” is built on a false revenue-share premise. It is framed around influencers. It bypasses professional creators. It bypasses unions. It bypasses non-featured artists. It bypasses child labor. It bypasses training consent. It bypasses copyright primacy. It bypasses WFH abuse. It bypasses platform ownership grabs. It bypasses misclassification. It bypasses enforceability. I give you…Uber.

It doesn’t fail because it’s hostile to creators; it fails because it is indifferent to them. It fails because it redefines “creator” downward until every hard political and legal question disappears.

And in doing so, it functions as a political shield for the very platforms headquartered in Khanna’s district.

When the Penny Drops

Ro Khanna’s “Creator Bill of Rights” isn’t a rights charter.

It’s a narrative framework designed to stabilize the influencer economy, legitimize platform compensation models, preserve contractor status, soften AI backlash, avoid copyright primacy, avoid labor-law reform, avoid ownership reform, and avoid real accountability.

It treats transparency as leverage. It treats consent as vibes. It treats revenue share as natural law. It treats AI as branding. It treats creative labor as content. It treats platforms as inevitable.

And it leaves out the people who are actually being scraped, displaced, devalued, erased, and replaced: musicians, journalists, photographers, actors, directors, songwriters, composers, engineers, non-featured performers, visual artists, and professional creators.

If Congress actually wants a bill of rights for creators, it won’t start with influencer UX and non-binding resolutions. It will start with enforceable intellectual-property rights, training consent, opt-in regimes, audit rights, statutory floors, collective bargaining, exclusion of AI outputs from work-for-hire, limits on platform ownership claims, labor classification clarity, and real remedies.

Until then, this isn’t a bill of rights.

It’s a press release with footnotes.

Grassroots Revolt Against Data Centers Goes National: Water Use Now the Flashpoint

Over the last two weeks, grassroots opposition to data centers has moved from sporadic local skirmishes to a recognizable national pattern. While earlier fights centered on land use, noise, and tax incentives, the current phase is more focused and more dangerous for developers: water.

Across multiple states, residents are demanding to see the “water math” behind proposed data centers—how much water will be consumed (not just withdrawn), where it will come from, whether utilities can actually supply it during drought conditions, and what enforceable reporting and mitigation requirements will apply. In arid regions, water scarcity is an obvious constraint. But what’s new is that even in traditionally water-secure states, opponents are now framing data centers as industrial-scale consumptive users whose needs collide directly with residential growth, agriculture, and climate volatility.
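As a unit check on what residents are actually asking for, here is a minimal sketch of the withdrawal-versus-consumption distinction that drives the “water math” demand. All figures are hypothetical, not data for any specific facility:

```python
# Hypothetical "water math" for a proposed data center.
# Withdrawal is what's taken from the source; consumption is the
# portion lost (largely to evaporative cooling) and never returned.

def water_math(withdrawn_gal_per_day, returned_gal_per_day):
    consumed = withdrawn_gal_per_day - returned_gal_per_day
    return {
        "withdrawn": withdrawn_gal_per_day,
        "returned": returned_gal_per_day,
        "consumed": consumed,
        "consumptive_fraction": consumed / withdrawn_gal_per_day,
    }

# Invented figures for an evaporative-cooled facility:
site = water_math(withdrawn_gal_per_day=1_000_000,
                  returned_gal_per_day=250_000)

# Most of the withdrawal never comes back to the watershed, which
# is why opponents insist on consumption figures and drought
# contingencies, not just withdrawal permits.
```

A withdrawal permit alone can make a heavily consumptive user look benign; the consumptive fraction is the number that collides with residential growth and agriculture.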

The result: moratoria, rezoning denials, delayed hearings, task forces, and early-stage organizing efforts aimed at blocking projects before entitlements are locked in.

Below is a snapshot of how that opposition has played out state by state over the last two weeks.

State-by-State Breakdown

Virginia  

Virginia remains ground zero for organized pushback.

Botetourt County: Residents confronted the Western Virginia Water Authority over a proposed Google data center, pressing officials about long-term water supply impacts and groundwater sustainability.  

Hanover County (Richmond region): The Planning Commission voted against recommending rezoning for a large multi-building data center project.  

State Legislature: Lawmakers are advancing reform proposals that would require water-use modeling and disclosure.

Georgia  

Metro Atlanta / Middle Georgia: Local governments’ recruitment of hyperscale facilities is colliding with resident concerns.  

DeKalb County: An extended moratorium reflects a pause-and-rewrite-the-rules strategy.  

Monroe County / Forsyth area: Data centers have become a local political issue.

Arizona  

The state has moved to curb groundwater use in rural basins via new regulatory designations requiring tracking and reporting.  

Local organizing frames AI data centers as unsuitable for arid regions.

Maryland  

Prince George’s County (Landover Mall site): Organized opposition centered on environmental justice and utility burdens.  

Authorities have responded with a pause/moratorium and a task force.

Indiana  

Indianapolis (Martindale-Brightwood): Packed rezoning hearings forced extended timelines.  

Greensburg: Overflow crowds framed the fight around water-user rankings.

Oklahoma  

Luther (OKC metro): Organized opposition before formal filings.

Michigan  

Broad local opposition with water and utility impacts cited.  

State-level skirmishes over incentives intersect with water-capacity debates.

North Carolina  

Apex (Wake County area): Residents object to strain on electricity and water.

Wisconsin & Pennsylvania 

Corporate messaging shifts in response to opposition; Microsoft acknowledged infrastructure and water burdens.

The Through-Line: “Show Us the Water Math”


Across these states, the grassroots playbook has converged:

Pack the hearing.  

Demand water-use modeling and disclosure.  

Attack rezoning and tax incentives.  

Force moratoria until enforceable rules exist.

Residents are demanding hard numbers: consumptive losses, aquifer drawdown rates, utility-system capacity, drought contingencies, and legally binding mitigation.

Why This Matters for AI Policy

This revolt exposes the physical contradiction at the heart of the AI infrastructure build-out: compute is abstract in policy rhetoric but experienced locally as land, water, power, and noise.

Communities are rejecting a development model that externalizes its physical costs onto local water systems and ratepayers.

Water is now the primary political weapon communities are using to block, delay, and reshape AI infrastructure projects.

Read the local news:

America’s AI Boom Is Running Into An Unplanned Water Problem (Ken Silverstein/Forbes)

Residents raise water concerns over proposed Google data center (Allyssa Beatty/WDBJ7 News)

How data centers are rattling a Georgia Senate special election (Greg Bluestein/Atlanta Journal Constitution)

‘A perfect, wild storm’: widely loathed datacenters see little US political opposition (Tom Perkins/The Guardian)

Hanover Planning Commission votes to deny rezoning request for data center development (Joi Fultz/WTVR)

Microsoft rolls out initiative to limit data-center power costs, water use impact (Reuters)

South Korea’s AI Action Plan and the Global Drift Toward “Use First, Pay Later”

South Korea has become the latest flashpoint in a rapidly globalizing conflict over artificial intelligence, creator rights and copyright. A broad coalition of Korean creator and copyright organizations—spanning literature, journalism, broadcasting, screenwriting, music, choreography, performance, and visual arts—has issued a joint statement rejecting the government’s proposed Korea AI Action Plan, warning that it risks allowing AI companies to use copyrighted works without meaningful permission or payment.

The groups argue that the plan signals a fundamental shift away from a permission-based copyright framework toward a regime that prioritizes AI deployment speed and “legal certainty” for developers, even if that certainty comes at the expense of creators’ control and compensation. Their statement is unusually blunt: they describe the policy direction as a threat to the sustainability of Korea’s cultural industries and pledge continued opposition unless the government reverses course.

The controversy centers on Action Plan No. 32, which promotes “activating the ecosystem for the use and distribution of copyrighted works for AI training and evaluation.” The plan directs relevant ministries to prepare amendments—either to Korea’s Copyright Act, the AI Basic Act, or through a new “AI Special Act”—that would enable AI training uses of copyrighted works without legal ambiguity.

Creators argue that “eliminating legal ambiguity” reallocates legal risk rather than resolves it. Instead of clarifying consent requirements or building licensing systems, the plan appears to reduce the legal exposure of AI developers while shifting enforcement burdens onto creators through opt-out or technical self-help mechanisms.

Similar policy patterns have emerged in the United Kingdom and India, where governments have emphasized legal certainty and innovation speed while creative sectors warn of erosion to prior-permission and fair-compensation norms. South Korea’s debate stands out for the breadth of its opposition and the clarity of the warning from cultural stakeholders.

The South Korean government avoids using the term “safe harbor,” but its plan to remove “legal ambiguity” reads like an effort to build one. The asymmetry is telling: rather than eliminating ambiguity by strengthening consent and payment mechanisms, the plan seeks to eliminate ambiguity by making AI training easier to defend as lawful—without meaningful consent or compensation frameworks. That is, in substance, a safe harbor, and a species of blanket license. The resulting “certainty” would function as a pass for AI companies, while creators are left to police unauthorized use after the fact, often through impractical opt-out mechanisms—to the extent such rights remain enforceable at all.

Grass‑Roots Rebellion Against Data Centers and Grid Expansion

A grass‑roots “data center and electric grid rebellion” is emerging across the United States as communities push back against the local consequences of AI‑driven infrastructure expansion. Residents are increasingly challenging large‑scale data centers and the transmission lines needed to power them, citing concerns about enormous electricity demand, water consumption, noise pollution, land use, declining property values, and opaque approval processes. What were once routine zoning or utility hearings are now crowded, contentious events, with citizens organizing quickly and sharing strategies across counties and states.



This opposition is no longer ad hoc. In Northern Virginia—often described as the global epicenter of data centers—organized campaigns such as the Coalition to Protect Prince William County have mobilized voters, fundraised for local elections, demanded zoning changes, and challenged approvals in court. In Maryland’s Prince George’s County, resistance has taken on a strong environmental‑justice framing, with groups like the South County Environmental Justice Coalition arguing that data centers concentrate environmental and energy burdens in historically marginalized communities and calling for moratoria and stronger safeguards.



Nationally, consumer and civic groups are increasingly coordinated, using shared data, mapping tools, and media pressure to argue that unchecked data‑center growth threatens grid reliability and shifts costs onto ratepayers. Together, these campaigns signal a broader political reckoning over who bears the costs of the AI economy.

Global Data Centers

Here’s a snapshot of grass-roots opposition in Texas, Louisiana, and Nevada:

Texas

Texas has some of the most active and durable local opposition, driven by land use, water, and transmission corridors.

  • Hill Country & Central Texas (Burnet, Llano, Gillespie, Blanco Counties)
    Grass-roots groups formed initially around high-voltage transmission lines (765 kV) tied to load growth, now explicitly linking those lines to data center demand. Campaigns emphasize:
    • rural land fragmentation
    • wildfire risk
    • eminent domain abuse
    • lack of local benefit
      These groups are often informal coalitions of landowners rather than NGOs, but they coordinate testimony, public-records requests, and local elections.
  • DFW & North Texas
    Neighborhood associations opposing rezoning for hyperscale facilities focus on noise (backup generators), property values, and school-district tax distortions created by data-center abatements.
  • ERCOT framing
    Texas groups uniquely argue that data centers are socializing grid instability risk onto residential ratepayers while privatizing upside—an argument that resonates with conservative voters.

Louisiana

Opposition is newer but coalescing rapidly, often tied to petrochemical and LNG resistance networks.

  • North Louisiana & Mississippi River Corridor
    Community groups opposing new data centers frame them as:
    • “energy parasites” tied to gas plants
    • extensions of an already overburdened industrial corridor
    • threats to water tables and wetlands
      Organizers often overlap with environmental-justice and faith-based coalitions that previously fought refineries and export terminals.
  • Key tactic: reframing data centers as industrial facilities, not “tech,” triggering stricter land-use scrutiny.

Nevada

Nevada opposition centers on water scarcity and public-land use.

  • Clark County & Northern Nevada
    Residents and conservation groups question:
    • water allocations for evaporative cooling
    • siting near public or BLM-managed land
    • grid upgrades subsidized by ratepayers for private AI firms
  • Distinct Nevada argument: data centers compete directly with housing and tribal water needs, not just environmental values.

The Data Center Rebellion is Here and It’s Reshaping the Political Landscape (Washington Post)

Residents protest high-voltage power lines that could skirt Dinosaur Valley State Park (ALEJANDRA MARTINEZ AND PAUL COBLER/Texas Tribune)

US Communities Halt $64B Data Center Expansions Amid Backlash (Lucas Greene/WebProNews)

Big Tech’s fast-expanding plans for data centers are running into stiff community opposition (Marc Levy/Associated Press)

Data center ‘gold rush’ pits local officials’ hunt for new revenue against residents’ concerns (Alander Rocha/Georgia Record)

Frozen Ledgers and Living Systems: What King William’s Domesday Book Can Teach Us About the Mechanical Licensing Collective

A static record can support governance, but it cannot replace it. When a dynamic economy is ruled by a frozen ledger, injustice is structural rather than accidental. The lesson of Domesday is not to abandon centralized records, but to build institutions that acknowledge change, dispute, and time.

Introduction: The Problem of the Frozen Record

The Domesday Book was not wrong so much as frozen. It rendered a living, changing system of land tenure into a static ledger that became authoritative precisely because it could not keep up with reality. The Mechanical Licensing Collective (“MLC”) repeats this error in digital form. Musical works ownership is dynamic, relational, and contested, yet royalties flow based on a fixed snapshot that is at least potentially outdated the moment it is operationalized. In both systems, the problem is not bad data but the pretense that a static record can govern a dynamic economy without producing systemic error.[1] That’s why I always recommend Weapons of Math Destruction by Cathy O’Neil to MLC executives, a recommendation they promptly ignore.

I argue that the failure mode is mostly structural, not technical. The technical part is relatively trivial compared to, say, AI training or protein folding. I think it could be built far quicker, far cheaper, and far more accurately than the MLC, which has blown a unique opportunity to start with a blank sheet of paper and instead perpetuated the Harry Fox Agency, which was founded well before FoxPro. The lesson of Domesday is not that better enumeration solves governance problems, but that static records require institutional counterweights to prevent injustice, capture, and permanent misallocation. That is, to prevent the MLC from being the MLC.

Background: Two Authoritative Ledgers

A. The Domesday Book

Commissioned by William the Conqueror in 1085–1086, the Domesday Book was a comprehensive survey of landholding and economic resources in post‑Conquest England.[2] Its purpose was fiscal and administrative: to identify who held land, what that land was worth, and what obligations were owed to the Crown.[3] Domesday recorded information through sworn local inquests and was intended to be definitive.

Crucially, Domesday was never designed to be updated, at least not in real time. It froze a moment in time and became authoritative precisely because it was fixed. Almost immediately, it diverged from reality as land changed hands through death, forfeiture, re‑grant, and political favor.[4] Rather than revise Domesday, medieval England developed supplementary institutions—annual fiscal records, local courts, and royal adjudication—to manage change and dispute.[5]

B. The Mechanical Licensing Collective

The Mechanical Licensing Collective was created by Congress in Title I of the Music Modernization Act of 2018 to administer the blanket mechanical license for digital music services in the United States.[6] (More accurately, Title I was written by the lobbyists and imposed on the world with Congress’s chop.) The MLC maintains a centralized database of musical works ownership, collects mechanical royalties from digital service providers, and distributes those royalties to songwriters and publishers.[7]

Musical works ownership, however, is inherently dynamic. Writers change publishers, estates open and close, ownership splits are disputed, and metadata is frequently incomplete or corrected only after use, aka “Copyright Control.”[8] As a result, the MLC’s database—however well-intentioned—is outdated almost as soon as it is operationalized (particularly because it was and is based on the Harry Fox Agency’s database that the MLC passed off as state of the art over the objections of others).

Domesday as a Governance Tool, Not a Truth Machine

Domesday succeeded at centralizing authority, not at preserving truth over time. Land tenure in eleventh-century England was dynamic, relational, and politically contingent. Domesday froze these relationships into an official record that quickly diverged from lived reality, yet retained legal force because it was authoritative rather than accurate.[9] Nothing that a Norman knight with broadsword and mace couldn’t fix.

Importantly, medieval England did not rely on Domesday alone. The development of Pipe Rolls, hundred and shire courts, and royal justice provided mechanisms to contextualize, correct, and supersede the frozen record.[10]

The MLC as Digital Domesday

The MLC performs a structurally similar function today. It fixes ownership claims, establishes a canonical record, and allocates ongoing revenue streams while disputes remain unresolved. Royalties flow based on the database snapshot in effect at the time of use, even when that snapshot is known to be incomplete or incorrect.[11]

As with Domesday, authority substitutes for adaptability. The database becomes dispositive not because it reflects reality, but because it governs the flow of money. In other words, the MLC is not authoritative because it is accurate or complete; it is authoritative because Congress made its use compulsory. That’s right—it’s not authoritative because it’s accurate, it’s authoritative because it’s authorized.

Three Solutions Grounded in Domesday’s Afterlife

1. Authoritative Record Plus Living Supplement (The Pipe Roll Model)

Domesday was supplemented by the Pipe Rolls—annual fiscal records that reflected changes in obligations over time.[12] Applied to the MLC, this suggests separating baseline records from continuous reconciliation layers and treating unmatched royalties as unreconciled obligations of the MLC rather than abandoned property of the songwriter.

2. Jurisdictional Pluralism (The Hundred and Shire Court Model)

Domesday did not eliminate local adjudication. Disputes were resolved in courts that contextualized Domesday entries rather than deferring blindly to them.[13]  Similarly, ownership and split disputes should be resolved in external and independent fora, with the MLC conforming its records and payouts to those determinations.

3. No Profit from Unresolved Ownership (The No Escheat Without Judgment Model)

In medieval England, the Crown could claim land only through recognized legal mechanisms such as forfeiture or escheat.[14] Uncertainty alone did not justify enrichment.  A Domesday‑informed reform would prohibit institutional profit from unresolved ownership and require segregation of disputed funds.

By contrast, the MLC “black box” is not escheatment at all—yet it functionally resembles one-sided escheatment without due process. Under traditional escheat or unclaimed-property regimes, the state’s claim arises only after defined predicates: notice, diligence, and a lawful adjudication or administrative determination of abandonment, coupled with a public fiduciary obligation to locate the owner. The black box instead permits private retention and deployment of other people’s money based solely on unresolved ownership, without a judgment of abandonment, without a comparable duty to search for the owner, and with the economic upside of delay accruing to the intermediary rather than the missing payee.

For example, California requires some effort:

California law requires all holders (corporations, businesses, associations, financial institutions, and insurance companies) of unclaimed property to attempt to contact owners before reporting their property to the State Controller’s Office.

Holders are required to send a notice to the owner’s last known address informing them that the property will be transferred to the State Controller’s Office for safekeeping if the owner does not contact them to retrieve it.

The State Controller’s Office sends notices to all owners of property that will be transferred to the state. These notices are sent out before the property is to be transferred, giving owners an opportunity to retrieve property directly from the holder.

The constitutional problem is sharpened by Title I of the MMA, which expressly preempts state escheatment and unclaimed-property laws—but arguably does not replace them with functionally equivalent federal protections. States are forbidden to take custody of abandoned property without notice, diligence, and due process; yet the MMA authorizes a private entity to hold, invest (or so MLC argues), and ultimately distribute unmatched royalties on a market share basis (including to companies represented on MLC’s board of directors) without any finding of abandonment, without judicial process, and without a neutral public custodian.

Specifically, Title I provides at 17 U.S.C. § 115(d)(11)(E):

(E) Preemption of state property laws.—

The holding and distribution of funds by the mechanical licensing collective in accordance with this subsection shall supersede and preempt any State law (including common law) concerning escheatment or abandoned property, or any analogous provision, that might otherwise apply.

So with a wave of the hand, Title I preempts the detailed protections of escheatment traditions that date back to the doctrine of defectus sanguinis in the 12th century (the Pipe Roll of 1130 (31 Henry I)). This asymmetry raises serious Due Process and Equal Protection concerns (not to mention conflicts of interest), and potentially a Takings Clause problem: Congress may not displace state escheat safeguards and simultaneously permit private enrichment from unresolved ownership where states themselves would be constitutionally barred from proceeding without judgment and owner-protective procedures. It also raises a classic problem of federal preemption of state law without a substitute federal scheme of equivalent protection.[15]

Three Contemporary Reforms the MLC Could Adopt

1. Authoritative Record + Living Reconciliation Layer (The Pipe Roll Model)

Adopt a structural separation between the MLC’s baseline ownership database and a continuous reconciliation system that tracks changes, corrections, disputes, and late‑arriving claims on a monthly basis.

In practice, unmatched royalties would be treated as unreconciled obligations rather than quasi‑abandoned funds. The MLC would maintain a rolling, auditable ledger capable of updating distributions when ownership data changes, including retroactive true‑ups once claims are resolved, instead of locking outcomes to a stale snapshot.

This reform acknowledges that ownership is dynamic and prevents early database errors from permanently reallocating value.
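To make the mechanics of this reform concrete, here is a minimal sketch in Python of what a “baseline record plus living reconciliation layer” could look like. This is purely illustrative (the class, names, and amounts are hypothetical, not the MLC’s actual system): unmatched royalties are carried as held obligations rather than distributed by market share, and a late-arriving ownership claim triggers a retroactive true-up.

```python
from collections import defaultdict


class ReconciliationLedger:
    """Illustrative sketch of the Pipe Roll model: a baseline ownership
    snapshot plus a living reconciliation layer with retroactive true-ups.
    Hypothetical design, not the MLC's actual implementation."""

    def __init__(self, ownership_snapshot):
        # work_id -> payee; None means unmatched at the time of use
        self.ownership = dict(ownership_snapshot)
        self.paid = defaultdict(float)          # payee -> amount distributed
        self.unreconciled = defaultdict(float)  # work_id -> funds held in trust

    def accrue(self, work_id, amount):
        """Record a royalty accrual against the current snapshot."""
        payee = self.ownership.get(work_id)
        if payee is None:
            # Unmatched: hold as an unreconciled obligation of the ledger,
            # never reallocate by market share.
            self.unreconciled[work_id] += amount
        else:
            self.paid[payee] += amount

    def resolve(self, work_id, payee):
        """Late-arriving or corrected claim: update the record and
        retroactively true up any funds held for that work."""
        self.ownership[work_id] = payee
        held = self.unreconciled.pop(work_id, 0.0)
        if held:
            self.paid[payee] += held
```

A usage example under the same hypothetical facts: accruals against a matched work pay out immediately, while accruals against an unmatched work sit as obligations until `resolve` reassigns them to the correct songwriter, rather than being swept into a market-share distribution.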

2. Independent Adjudication with Mandatory Conformance (The Hundred and Shire Court Model)

Formally decouple ownership and split dispute resolution from the MLC’s internal processes and require the MLC to conform its records and payouts to determinations made by independent fora.

In practice, disputes would be resolved in courts, arbitrations, or designated independent neutral bodies, and the MLC would treat those determinations as binding inputs rather than discretionary metadata updates. The database would no longer enjoy a presumption of correctness when ownership is contested and disputes would not be resolved by conflicted statutory committees.

This prevents the MLC from acting as judge, jury, and paymaster and restores legitimacy to ownership determinations.

3. Mandatory Segregation and No Profit from Unresolved Ownership (The No Escheat Without Judgment Model)

Prohibit the MLC from retaining, investing, or reallocating royalties tied to unresolved ownership, and incentivize it to find the correct owners.

In practice, all black‑box royalties would be held in segregated custodial accounts or at least ledgers. Market‑share distributions would be barred unless and until lawful abandonment is established, and the MLC would carry an affirmative duty to search for and notify potential claimants, analogous to the duties of traditional unclaimed‑property regimes.

This removes perverse incentives to delay resolution and aligns the MLC with basic due‑process and fiduciary norms, especially critical given the MMA’s preemption of state escheat laws (which itself may be unconstitutional).

Taken together, these reforms shift the MLC away from treating a frozen ledger as dispositive authority and toward an institutional design that acknowledges change, dispute, and time—without sacrificing administrative efficiency. At $40 million a year, they should be able to pull this off or at least start slouching toward Bethlehem.


[1] S.F.C. Milsom, Historical Foundations of the Common Law (2d ed. 1981).

[2] Domesday Book (1086).

[3] R. Allen Brown, The Normans and the Norman Conquest (2d ed. 1985).

[4] J.C. Holt, Domesday Studies (1987).

[5] Mark Hagger, William: King and Conqueror (2012).

[6] Music Modernization Act, Pub. L. No. 115‑264, 132 Stat. 3676 (2018).

[7] 17 U.S.C. § 115(d).

[8] U.S. Copyright Office, Music Modernization Act Implementation Report (2019). “Copyright Control” is often a metadata band-aid: it flags that publishing info is incomplete or self-administered. The publisher share can wind up unmatched/unallocated even though ownership is knowable or is ultimately known after an indeterminate number of accounting periods.

[9] F.W. Maitland, Domesday Book and Beyond (1897).

[10] Richard FitzNigel, Dialogus de Scaccario (Dialogue concerning the Exchequer) (c. 1179).

[11] Copyright Royalty Judges, Phonorecords III & IV.

[12] Pipe Roll Society, The Pipe Roll of Henry I.

[13] Paul Brand, The Origins of the English Legal Profession (1992).

[14] Escheat is a common-law legal mechanism by which real property reverted to the Crown when a tenant died intestate and without lawful heirs. At common law, escheat required the extinction of the tenant’s line of inheritance; mere uncertainty of title or ownership was insufficient. In modern U.S. law, escheat has been adapted to intangible and unclaimed property, but it retains the same core features: notice, diligence, and a lawful determination of abandonment or lack of heirs before the sovereign (in our case a State) may take custody.

[15] See Connecticut Mutual Life Ins. Co. v. Moore, 333 U.S. 541 (1948); Texas v. New Jersey, 379 U.S. 674 (1965) (states may take custody of abandoned property only subject to procedural protections and priority rules); Cf. Webb’s Fabulous Pharmacies, Inc. v. Beckwith, 449 U.S. 155 (1980) (interest on private funds held by a custodian remains private property; government may not appropriate economic benefits without just compensation).

What Would Freud Do? The Unconscious Is Not a Database — and Humans Are Not Machines

What would Freud do?

It’s a strange question to ask about AI and copyright, but a useful one. When generative-AI fans insist that training models on copyrighted works is merely “learning like a human,” they rely on a metaphor that collapses under even minimal scrutiny. Psychoanalysis—whatever one thinks of Freud’s conclusions—begins from a premise that modern AI rhetoric quietly denies: the unconscious is not a database, and humans are not machines.

As Freud wrote in The Interpretation of Dreams, “Our memory has no guarantees at all, and yet we bow more often than is objectively justified to the compulsion to believe what it says.” No AI truthiness there.

Human learning does not involve storing perfect, retrievable copies of what we read, hear, or see. Memory is reconstructive, shaped by context, emotion, repression, and time. Dreams do not replay inputs; they transform them. What persists is meaning, not a file.

AI training works in the opposite direction—obviously. Training begins with high-fidelity copying at industrial scale. It converts human expressive works into durable statistical parameters designed for reuse, recall, and synthesis for eternity. Where the human mind forgets, distorts, and misremembers as a feature of cognition, models are engineered to remember as much as possible, as efficiently as possible, and to deploy those memories at superhuman speed. Nothing like humans.

Calling these two processes “the same kind of learning” is not analogy—it is misdirection. And that misdirection matters, because copyright law was built around the limits of human expression: scarcity, imperfection, and the fact that learning does not itself create substitute works at scale.

Dream-Work Is Not a Training Pipeline

Freud’s theory of dreams turns on a simple but powerful idea: the mind does not preserve experience intact. Instead, it subjects experience to dream-work—processes like condensation (many ideas collapsed into one image), displacement (emotional significance shifted from one object to another), and symbolization (one thing representing another, allowing humans to create meaning and understanding through symbols). The result is not a copy of reality but a distorted, overdetermined construction whose origins cannot be cleanly traced.

This matters because it shows what makes human learning human. We do not internalize works as stable assets. We metabolize them. Our memories are partial, fallible, and personal. Two people can read the same book and walk away with radically different understandings—and neither “contains” the book afterward in any meaningful sense. There is no Rashomon effect for an AI.

AI training is the inverse of dream-work. It depends on perfect copying at ingestion, retention of expressive regularities across vast parameter spaces, and repeatable reuse untethered from embodiment, biography, or forgetting. If Freud’s model describes learning as transformation through loss, AI training is transformation through compression without forgetting.

One produces meaning. The other produces capacity.

The Unconscious Is Not a Database

Psychoanalysis rejects the idea that memory functions like a filing cabinet. The unconscious is not a warehouse of intact records waiting to be retrieved. Memory is reconstructed each time it is recalled, reshaped by narrative, emotion, and social context. Forgetting is not a failure of the system; it is a defining feature.

AI systems are built on the opposite premise. Training assumes that more retention is better, that fidelity is a virtue, and that expressive regularities should remain available for reuse indefinitely. What human cognition resists by design—perfect recall at scale—machine learning seeks to maximize.

This distinction alone is fatal to the “AI learns like a human” claim. Human learning is inseparable from distortion, limitation, and individuality. AI training is inseparable from durability, scalability, and reuse.

In The Divided Self, R. D. Laing rejects the idea that the mind is a kind of internal machine storing stable representations of experience. What we encounter instead is a self that exists only precariously, defined by what Laing calls “ontological security” or its absence—the sense of being real, continuous, and alive in relation to others. Experience, for Laing, is not an object that can be detached, stored, or replayed; it is lived, relational, and vulnerable to distortion. He warns repeatedly against confusing outward coherence with inner unity, emphasizing that a person may present a fluent, organized surface while remaining profoundly divided within. That distinction matters here: performance is not understanding, and intelligible output is not evidence of an interior life that has “learned” in any human sense.

Why “Unlearning” Is Not Forgetting

Once you understand this distinction, the problem with AI “unlearning” becomes obvious.

In human cognition, there is no clean undo. Memories are never stored as discrete objects that can be removed without consequence. They reappear in altered forms, entangled with other experiences. Freud’s entire thesis rests on the impossibility of clean erasure.

AI systems face the opposite dilemma. They begin with discrete, often unlawful copies, but once those works are distributed across parameters, they cannot be surgically removed with certainty. At best, developers can stop future use, delete datasets, retrain models, or apply partial mitigation techniques (none of which they are willing to even attempt). What they cannot do is prove that the expressive contribution of a particular work has been fully excised.

This is why promises (especially contractual promises) to “reverse” improper ingestion are so often overstated. The system was never designed for forgetting. It was designed for reuse.

Why This Matters for Fair Use and Market Harm

The “AI = human learning” analogy does real damage in copyright analysis because it smuggles conclusions into fair-use factor one (transformative purpose and character) and obscures factor four (market harm).

Learning has always been tolerated under copyright law because learning does not flood markets. Humans do not emerge from reading a novel with the ability to generate thousands of competing substitutes at scale. Generative models do exactly that—and only because they are trained through industrial-scale copying.

Copyright law is calibrated to human limits. When those limits disappear, the analysis must change with them. Treating AI training as merely “learning” collapses the very distinction that makes large-scale substitution legally and economically significant.

The Pensieve Fallacy

There is a world in which minds function like databases. It is a fictional one.

In Harry Potter and the Goblet of Fire, wizards can extract memories, store them in vials, and replay them perfectly using a Pensieve. Memories in that universe are discrete, stable, lossless objects. They can be removed, shared, duplicated, and inspected without distortion. As Dumbledore explained to Harry, “I use the Pensieve. One simply siphons the excess thoughts from one’s mind, pours them into the basin, and examines them at one’s leisure. It becomes easier to spot patterns and links, you understand, when they are in this form.”

That is precisely how AI advocates want us to imagine learning works.

But the Pensieve is magic because it violates everything we know about human cognition. Real memory is not extractable. It cannot be replayed faithfully. It cannot be separated from the person who experienced it. Arguably, Freud’s work exists because memory is unstable, interpretive, and shaped by conflict and context.

AI training, by contrast, operates far closer to the Pensieve than to the human mind. It depends on perfect copies, durable internal representations, and the ability to replay and recombine expressive material at will.

The irony is unavoidable: the metaphor that claims to make AI training ordinary only works by invoking fantasy.

Humans Forget. Machines Remember.

Freud would not have been persuaded by the claim that machines “learn like humans.” He would have rejected it as a category error. Human cognition is defined by imperfection, distortion, and forgetting. AI training is defined by reproduction, scale, and recall.

To believe AI learns like a human, you have to believe humans have Pensieves. They don’t. That’s why Pensieves appear in Harry Potter—not neuroscience, copyright law, or reality.

The Paradox of Huang’s Rope

If the tech industry has a signature fallacy for the 2020s aside from David Sacks, it belongs to Jensen Huang. The CEO of Nvidia has perfected a circular, self-consuming logic so brazen that it deserves a name: The Paradox of Huang’s Rope. It is the argument that China is too dangerous an AI adversary for the United States to regulate artificial intelligence at home or control export of his Nvidia chips abroad—while insisting in the very next breath that the U.S. must allow him to keep selling China the advanced Nvidia chips that make China’s advanced AI capabilities possible. The justification destroys its own premise, like handing an adversary the rope to hang you and then pointing to the length of that rope as evidence that you must keep selling more, perhaps to ensure a more “humane” hanging. I didn’t think it was possible to beat “sharing is caring” for utter fallacious bollocks.

The Paradox of Huang’s Rope works like this: First, hype China as an existential AI competitor. Second, declare that any regulatory guardrails—whether they concern training data, safety, export controls, or energy consumption—will cause America to “fall behind.” Third, invoke national security to insist that the U.S. government must not interfere with the breakneck deployment of AI systems across the economy. And finally, quietly lobby for carveouts that allow Nvidia to continue selling ever more powerful chips to the same Chinese entities supposedly creating the danger that justifies deregulation.

It is a master class in circularity: “China is dangerous because of AI → therefore we can’t regulate AI → therefore we must sell China more AI chips → therefore China is even more dangerous → therefore we must regulate even less and export even more to China.” At no point does the loop allow for the possibility that reducing the United States’ role as China’s primary AI hardware supplier might actually reduce the underlying threat. Instead, the logic insists that the only unacceptable risk is the prospect of Nvidia making slightly less money.

This is not hypothetical. While Washington debates export controls, Huang has publicly argued that restrictions on chip sales to China could “damage American technology leadership”—a claim that conflates Nvidia’s quarterly earnings with the national interest. Meanwhile, U.S. intelligence assessments warn that China is building fully autonomous weapons systems, and European analysts caution that Western-supplied chips are appearing in PLA research laboratories. Yet the policy prescription from Nvidia’s corner remains the same: no constraints on the technology, no accountability for the supply chain, and no acknowledgment that the market incentives involved have nothing to do with keeping Americans safe. And anyone who criticizes the authoritarian state run by the Chinese Communist Party is a “China Hawk,” which Huang says is a “badge of shame” and “unpatriotic” because protecting America from China by cutting off chip exports “destroys the American Dream.” Say what?

The Paradox of Huang’s Rope mirrors other Cold War–style fallacies, in which companies invoke a foreign threat to justify deregulation while quietly accelerating that threat through their own commercial activity. But in the AI context, the stakes are higher. AI is not just another consumer technology; its deployment shapes military posture, labor markets, information ecosystems, and national infrastructure. A strategic environment in which U.S. corporations both enable and monetize an adversary’s technological capabilities is one that demands more regulation, not less.

Naming the fallacy matters because it exposes the intellectual sleight of hand. Once the circularity is visible, the argument collapses. The United States does not strengthen its position by feeding the very capabilities it claims to fear. And it certainly does not safeguard national security by allowing one company’s commercial ambitions to dictate the boundaries of public policy. The Paradox of Huang’s Rope should not guide American AI strategy. It should serve as a warning of how quickly national priorities can be twisted into a justification for private profit.

You Can’t Prosecute Smuggling NVIDIA Chips to the CCP and Authorize Sales to the CCP at the Same Time

The Trump administration is attempting an impossible contradiction: selling advanced NVIDIA AI chips to China while the Department of Justice prosecutes criminal cases for smuggling the exact same chips into China.

According to the DOJ:

“Operation Gatekeeper has exposed a sophisticated smuggling network that threatens our Nation’s security by funneling cutting-edge AI technology to those who would use it against American interests,” said Ganjei. “These chips are the building blocks of AI superiority and are integral to modern military applications. The country that controls these chips will control AI technology; the country that controls AI technology will control the future. The Southern District of Texas will aggressively prosecute anyone who attempts to compromise America’s technological edge.”

That divergence from the prosecutors is not industrial policy. That is incoherence. But mostly it’s just bad advice, likely coming from White House AI Czar David Sacks, Mr. Trump’s South African AI policy advisor, who may have a hard time getting a security clearance in the first place.

On one hand, DOJ is rightly bringing cases over the illegal diversion of restricted AI chips—recognizing that these processors are strategic technologies with direct national-security implications. On the other hand, the White House is signaling that access to those same chips is negotiable, subject to licensing workarounds, regulatory carve-outs, or political discretion.

You cannot treat a technology as contraband in federal court and as a commercial export in the West Wing.

Pick one.

AI Chips Are Not Consumer Electronics

The United States does not sell China F-35 fighter jets. We do not sell Patriot missile systems. We do not sell advanced avionics platforms and then act surprised when they show up embedded in military infrastructure. High-end AI accelerators are in the same category.

NVIDIA’s most advanced chips are not merely commercial products. They are general-purpose intelligence infrastructure, or what China calls “military-civil fusion.” They train surveillance systems, military logistics platforms, cyber-offensive tools, and models capable of operating autonomous weapons and battlefield decision-making pipelines with no human in the loop.

If DOJ treats the smuggling of these chips into China as a serious federal crime—and it should—there is no coherent justification for authorizing their sale through executive discretion. Except, of course, money, or in Mr. Sacks’s case, more money.

Fully Autonomous Weapons—and Selling the Rope

China does not need U.S. chips to build consumer AI. It wants them for military acceleration. Advanced NVIDIA AI chips are not just about chatbots or recommendation engines. They are the backbone of fully autonomous weapons systems—autonomous targeting, swarm coordination, battlefield logistics, and decision-support models that compress the kill chain beyond meaningful human control.

There is an old warning attributed to Vladimir Lenin—that capitalists would sell the rope by which they would later be hanged. Apocryphal or not, it captures this moment with uncomfortable precision.

If NVIDIA chips are powerful enough to underpin autonomous weapons systems for allied militaries, they are powerful enough to underpin autonomous weapons systems for adversaries like China. Trump’s own National Security Strategy statement clearly says previous U.S. elites made “mistaken” assumptions about China, such as the famous one that letting China into the WTO would integrate Beijing into the rules-based international order. Trump tells us that instead China “got rich and powerful” and used this against us, and he goes on to describe the CCP’s well-known predatory subsidies, unfair trade, IP theft, industrial espionage, supply-chain leverage, and fentanyl precursor exports as threats the U.S. must “end.” By selling them the most advanced AI chips?

Western governments and investors simultaneously back domestic autonomous-weapons firms—such as Europe-based Helsing, supported by Spotify CEO Daniel Ek—explicitly building AI-enabled munitions for allied defense. That makes exporting equivalent enabling infrastructure to a strategic competitor indefensible.

The AI Moratorium Makes This Worse, Not Better

This contradiction unfolds alongside a proposed federal AI moratorium executive order originating with Mr. Sacks and Adam Thierer of Google’s R Street Institute that would preempt state-level AI protections. States are told AI is too consequential for local regulation, yet the federal government is prepared to license exports of AI’s core infrastructure abroad.

If AI is too dangerous for states to regulate, it is too dangerous to export. Preemption at home combined with permissiveness abroad is not leadership. It is capture.

This Is What Policy Capture Looks Like

The common thread is not national security. It is Silicon Valley access. David Sacks and others in the AI–VC orbit argue that AI regulation threatens U.S. competitiveness while remaining silent on where the chips go and how they are used.

When DOJ prosecutes smugglers while the White House authorizes exports, the public is entitled to ask whose interests are actually being served. Advisory roles that blur public power and private investment cannot coexist with credible national-security policymaking, particularly when the advisor may not even be able to get a U.S. national security clearance unless the President blesses it.

A Line Has to Be Drawn

If a technology is so sensitive that its unauthorized transfer justifies prosecution, its authorized transfer should be prohibited absent extraordinary national interest. AI accelerators meet that test.

Until the administration can articulate a coherent justification for exporting these capabilities to China, the answer should be no. Not licensed. Not delayed. Not cosmetically restricted.

And if that position conflicts with Silicon Valley advisers who view this as a growth opportunity, they should return to where they belong. The fact that the US is getting 25% of the deal (which I bet never finds its way into America’s general account) means nothing except confirming Lenin’s joke about selling the rope to hang ourselves, you know, kind of like TikTok.

David Sacks should go back to Silicon Valley.

This is not venture capital. This is our national security and he’s selling it like rope.

Back to Commandeering Again: David Sacks, the AI Moratorium, and the Executive Order Courts Will Hate

Why Silicon Valley’s in-network defenses can’t paper over federalism limits.

The old line attributed to music lawyer Allen Grubman is, “No conflict, no interest.” Conflicts are part of the music business. But the AI moratorium that David Sacks is pushing onto President Trump (the idea that Washington should freeze or preempt state AI protections in the absence of federal AI policy) takes that logic to a different altitude. It asks the public to accept not just conflicts of interest, but centralized control of AI governance built around the financial interests of a small advisory circle, including Mr. Sacks himself.

When the New York Times published its reporting on Sacks’s hundreds of AI investments and his role in shaping federal AI and chip policy, the reaction from Silicon Valley was immediate and predictable. What’s most notable is who didn’t show up. No broad political coalition. No bipartisan defense. Just a tight cluster of VC and AI-industry figures from the AI–crypto–tech nexus, praising their friend Mr. Sacks and attacking the story.

And the pattern was unmistakable: a series of non-denial denials from people who it is fair to say are massively conflicted themselves.

No one said the Times lied.

No one refuted the documented conflicts.

Instead, Sacks’s tech-bro defenders attacked tone and implied bias, and suggested the article merely arranged “negative truths” in an unflattering narrative (although the Times did not even bring up Mr. Sacks’s moratorium scheme).

And you know who has yet to defend Mr. Sacks? Donald J. Trump. Which tells you all you need to know.

The Rumored AI Executive Order and Federal Lawsuits Against States

Behind the spectacle sits the most consequential part of the story: a rumored executive order that would direct the U.S. Department of Justice to sue states whose laws “interfere with AI development.” Reuters reports that “U.S. President Donald Trump is considering an executive order that would seek to preempt state laws on artificial intelligence through lawsuits and by withholding federal funding, according to a draft of the order seen by Reuters….”

That is not standard economic policy. That is not innovation strategy. That is commandeering — the same old unconstitutional move in shiny AI packaging that we’ve discussed many times starting with the One Big Beautiful Bill Act catastrophe.

The Supreme Court has been clear on this, as in Printz v. United States, 521 U.S. 898, 925 (1997): “[O]pinions of ours have made clear that the Federal Government may not compel the States to implement, by legislation or executive action, federal regulatory programs.”

Crucially, the Printz Court teaches us what I think is the key fact. Federal policy for all the United States is to be made by the legislative process in regular order, subject to a vote of the people’s representatives, or by executive branch agencies that are led by Senate-confirmed officers of the United States appointed by the President and subject to public scrutiny under the Administrative Procedure Act. Period.

The federal government then implements its own policies directly. It cannot order states to implement federal policy, including in the negative by prohibiting states from exercising their constitutional powers in the absence of federal policy. The Supreme Court crystallized this issue in a recent congressional commandeering case, Murphy v. NCAA, 138 S. Ct. 1461 (2018), where the Court held that “[t]he distinction between compelling a State to enact legislation and prohibiting a State from enacting new laws is an empty one. The basic principle—that Congress cannot issue direct orders to state legislatures—applies in either event.” Read together, Printz and Murphy extend this core principle of federalism to executive orders.

The “presumption against preemption” is a canon of statutory interpretation that the Supreme Court has repeatedly held to be a foundational principle of American federalism. It also has the benefit of common sense. The canon reflects the deep constitutional understanding that, unless Congress clearly says otherwise—which implies Congress has spoken—states retain their traditional police powers over matters such as the health, safety, land use, consumer protection, labor, and property rights of their citizens. Courts begin with the assumption that federal law does not displace state law, especially in areas the states have regulated for generations, all of which are implicated in the AI “moratorium”.

The Supreme Court has repeatedly affirmed this principle. When Congress legislates in fields historically occupied by the states, courts require a clear and manifest purpose to preempt state authority. Ambiguous statutory language is interpreted against preemption. This is not a policy preference—it is a rule of interpretation rooted in constitutional structure and respect for state sovereignty that goes back to the Founders.

The presumption is strongest where federal action would displace general state laws rather than conflict with a specific federal command. Consumer protection statutes, zoning and land-use controls, tort law, data privacy, and child-safety laws fall squarely within this protected zone. Federal silence is not enough; nor is agency guidance or executive preference.

In practice, the presumption against preemption forces Congress to own the consequences of preemption. If lawmakers intend to strip states of enforcement authority, they must do so plainly and take political responsibility for that choice. The doctrine serves as a crucial brake on back-door federalization, preventing hidden preemption in technical provisions and preserving the ability of states to respond to emerging harms when federal action lags or stalls, as it has with AI.

Applied to an AI moratorium, the presumption against preemption cuts sharply against federal action. A moratorium that blocks states from legislating even where Congress has chosen not to act flips federalism on its head, turning federal inaction into total regulatory paralysis, precisely what the presumption against preemption forbids.

As the Congressional Research Service primer on preemption concludes:

The Constitution’s Supremacy Clause provides that federal law is “the supreme Law of the Land” notwithstanding any state law to the contrary. This language is the foundation for the doctrine of federal preemption, according to which federal law supersedes conflicting state laws. The Supreme Court has identified two general ways in which federal law can preempt state law. First, federal law can expressly preempt state law when a federal statute or regulation contains explicit preemptive language. Second, federal law can impliedly preempt state law when Congress’s preemptive intent is implicit in the relevant federal law’s structure and purpose.

In both express and implied preemption cases, the Supreme Court has made clear that Congress’s purpose is the “ultimate touchstone” of its statutory analysis. In analyzing congressional purpose, the Court has at times applied a canon of statutory construction known as the “presumption against preemption,” which instructs that federal law should not be read as superseding states’ historic police powers “unless that was the clear and manifest purpose of Congress.”

If there is no federal statute, no one has any idea what that purpose is, certainly no justiciable idea. Therefore, my bet is that the Court would hold that the Executive Branch cannot unilaterally create preemption, nor can the DOJ sue states simply because the White House dislikes their AI, privacy, or biometric laws, much less their zoning laws as applied to data centers.

Why David Sacks’s Involvement Raises the Political Temperature

As F. Scott Fitzgerald famously wrote, the very rich are different. But here’s what’s not different: David Sacks has something he’s not used to having. A boss. And that boss has polls. And those polls are not great at the moment. It’s pretty simple, really. When you work for a politician, your job is to make sure his polls go up, not down.

David Sacks is making his boss look bad. Presidents do not relish waking up to front-page stories suggesting that their “AI czar” holds hundreds of investments directly affected by federal AI strategy, that major policy proposals track industry wish lists more closely than public safeguards, or that rumored executive orders could ignite fifty-state constitutional litigation led by the President’s own supporters like Mike Davis and egged on by people like Steve Bannon.

Those stories don’t just land on the advisor; they land on the President’s desk, framed as questions of his judgment, control, and competence. And in politics, loyalty has a shelf life. The moment an advisor stops being an asset and starts becoming a daily distraction, let alone a liability, the calculus changes fast. What matters then is not mansions, brilliance, ideology, or past service, but whether keeping that advisor costs more than cutting them loose. I give you Elon Musk.

AI Policy Cannot Be Built on Preemption-by-Advisor

At bottom, this is a bet. The question isn’t whether David Sacks is smart, well-connected, or persuasive inside the room. The real question is whether Donald Trump wants to stake his presidency on David Sacks being right—right about constitutional preemption, right about executive authority, right about federal power to block the states, and right about how courts will react.

Because if Sacks is wrong, the fallout doesn’t land on him. It lands on the President. A collapsed AI moratorium, fifty-state litigation, injunctions halting executive action, and judges citing basic federalism principles would all be framed as defeats for Trump, not for an advisor operating at arm’s length.

Betting the presidency on an untested legal theory pushed by a politically exposed “no conflict, no interest” tech investor isn’t bold leadership. It’s unnecessary risk. When his second term ends in a few years, Trump will be in the history books for all time. No one will remember who David Sacks was.