Infrastructure, Not Aspiration: Why Permissioned AI Begins With a Hard Reset

Paul Sinclair’s framing of generative music AI as a choice between “open studios” and permissioned systems makes a basic category mistake. Consent is not a creative philosophy or a branding position. It is a systems constraint. You cannot “prefer” consent into existence. A permissioned system either enforces authorization at the level where machine learning actually occurs—or it does not exist at all.

That distinction matters not only for artists, but for the long-term viability of AI companies themselves. Platforms built on unresolved legal exposure may scale quickly, but they do so on borrowed time. Systems built on enforceable consent may grow more slowly at first, but they compound durability, defensibility, and investor confidence over time. Legality is not friction. It is infrastructure. It’s a real “eat your vegetables” moment.

The Great Reset

Before any discussion of opt-in, licensing, or future governance, one prerequisite must be stated plainly: a true permissioned system requires a hard reset of the model itself. A model trained on unlicensed material cannot be transformed into a consent-based system through policy changes, interface controls, or aspirational language. Once unauthorized material is ingested and used for training, it becomes inseparable from the trained model. There is no technical “undo” button.

The debate is often framed as openness versus restriction, innovation versus control. That framing misses the point. The real divide is whether a system is built to respect authorization where machine learning actually happens. A permissioned system cannot be layered on top of models trained without permission, nor can it be achieved by declaring legacy models “deprecated.” Machine learning systems do not forget unless they are reset. The purpose of a trained model is remembering—preserving statistical patterns learned from its data—not forgetting. Models persist, shape downstream outputs, and retain economic value long after they are removed from public view. Administrative terminology is not remediation.

Recent industry language about future “licensed models” implicitly concedes this reality. If a platform intends to operate on a consent basis, the logical consequence is unavoidable: permissioned AI begins with scrapping the contaminated model and rebuilding from zero using authorized data only.

Why “Untraining” Does Not Solve the Problem

Some argue that problematic material can simply be removed from an existing model through “untraining.” In practice, this is not a reliable solution. Modern machine-learning systems do not store discrete copies of works; they encode diffuse statistical relationships across millions or billions of parameters. Once learned, those relationships cannot be surgically excised with confidence. It’s not Harry Potter’s Pensieve.

Even where partial removal techniques exist, they are typically approximate, difficult to verify, and dependent on assumptions about how information is represented internally. A model may appear compliant while still reflecting patterns derived from unauthorized data. For systems claiming to operate on affirmative permission, approximation is not enough. If consent is foundational, the only defensible approach is reconstruction from a clean, authorized corpus.
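
To make the verification problem concrete, here is a minimal sketch, in Python, of the kind of audit a rights holder might run against a model that claims to have “unlearned” their works. Everything in it is hypothetical: loss_fn, removed_samples, and control_samples stand in for whatever model interface and sample preparation a real audit would require, and the check itself is only a rough membership-inference-style heuristic, not an established test.

```python
# Hypothetical audit sketch: compare an "unlearned" model's loss on samples
# derived from supposedly removed works against its loss on control samples
# the model never saw. All names here are placeholders, not a real API.

from statistics import mean, stdev
from typing import Any, Callable, Sequence


def residual_memorization_score(
    loss_fn: Callable[[Any], float],
    removed_samples: Sequence[Any],
    control_samples: Sequence[Any],
) -> float:
    """Return a z-like score: how much lower the model's loss is on the
    supposedly removed material than on unseen controls. Assumes at least
    two control samples so the spread is defined."""
    removed_losses = [loss_fn(x) for x in removed_samples]
    control_losses = [loss_fn(x) for x in control_samples]
    spread = stdev(control_losses) or 1e-9  # guard against zero spread
    return (mean(control_losses) - mean(removed_losses)) / spread


# A clearly positive score is evidence the "unlearning" failed; a score near
# zero is merely consistent with removal. The audit can falsify a consent
# claim, but it cannot verify one.
```

That asymmetry is the problem in miniature: after-the-fact auditing can catch failures, but it cannot certify that consent was honored, which is why the only defensible path is the clean rebuild.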

The Structural Requirements of Consent

Once a genuine reset occurs, the technical requirements of a permissioned system become unavoidable.

Authorized training corpus. Every recording, composition, and performance used for training must be included through affirmative permission. If unauthorized works remain, the model remains non-consensual.

Provenance at the work level. Each training input must be traceable to specific authorized recordings and compositions with auditable metadata identifying the scope of permission.

Enforceable consent, including withdrawal. Authorization must allow meaningful limits and revocation, with systems capable of responding in ways that materially affect training and outputs.

Segregation of licensed and unlicensed data. Permissioned systems require strict internal separation to prevent contamination through shared embeddings or cross-trained models.

Transparency and auditability. Permission claims must be supported by documentation capable of independent verification. Transparency here is engineering documentation, not marketing copy.

These are not policy preferences. They are practical consequences of a consent-based architecture.
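
As an illustration of how these requirements translate into ordinary engineering objects, here is a minimal sketch assuming hypothetical identifiers and field names (ISRC/ISWC codes, a license_id pointing to a signed agreement, a content hash). It is not any platform’s actual schema; it simply shows work-level provenance, scoped consent, and revocation checked at the point where material enters the training corpus.

```python
# Hypothetical provenance and consent records; field names are illustrative.

from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum
from typing import FrozenSet, Optional


class Scope(Enum):
    TRAINING = "training"
    FINE_TUNING = "fine_tuning"
    EVALUATION_ONLY = "evaluation_only"


@dataclass(frozen=True)
class ConsentGrant:
    rights_holder: str
    license_id: str                  # reference to the signed agreement
    scopes: FrozenSet[Scope]         # uses the grant actually covers
    granted_at: datetime
    revoked_at: Optional[datetime] = None

    def permits(self, scope: Scope, at: datetime) -> bool:
        """A use is permitted only within the grant window and its scope."""
        if scope not in self.scopes:
            return False
        if at < self.granted_at:
            return False
        return self.revoked_at is None or at < self.revoked_at


@dataclass(frozen=True)
class TrainingInput:
    isrc: str                        # recording identifier (illustrative)
    iswc: Optional[str]              # composition identifier, if known
    source_uri: str                  # where the authorized copy came from
    content_hash: str                # ties the audit record to the actual bytes
    consent: ConsentGrant


def admissible(item: TrainingInput, scope: Scope) -> bool:
    """Gate applied before ingestion: no valid, in-scope, unrevoked grant
    means the item never reaches the training corpus or the model."""
    return item.consent.permits(scope, datetime.now(timezone.utc))
```

The design point is the gate: revocation cannot rewrite a model that has already trained on the work, which is exactly why the reset has to come first; what the gate guarantees is that withdrawn works never enter the next training run.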

The Economic Reality—and Upside—of Reset

Rebuilding models from scratch is expensive. Curating authorized data, retraining systems, implementing provenance, and maintaining compliance infrastructure all require significant investment. Not every actor will be able—or willing—to bear that cost. But that burden is not an argument against permission. It is the price of admission.

Crucially, that cost is also largely non-recurring. A platform that undertakes a true reset creates something scarce in the current AI market: a verifiably permissioned model with reduced litigation risk, clearer regulatory posture, and greater long-term defensibility. Over time, such systems are more likely to attract durable partnerships, survive scrutiny, and justify sustained valuation.

Throughout technological history, companies that rebuilt to comply with emerging legal standards ultimately outperformed those that tried to outrun them. Permissioned AI follows the same pattern. What looks expensive in the short term often proves cheaper than compounding legal uncertainty.

Architecture, Not Branding

This is why distinctions between “walled garden,” “opt-in,” or other permission-based labels tend to collapse under technical scrutiny. Whatever the terminology, a system grounded in authorization must satisfy the same engineering conditions—and must begin with the same reset. Branding may vary; infrastructure does not.

Permissioned AI is possible. But it is reconstructive, not incremental. It requires acknowledging that past models are incompatible with future claims of consent. It requires making the difficult choice to start over.

The irony is that legality is not the enemy of scale—it is the only path to scale that survives. Permission is not aspiration. It is architecture.

The Devil’s Greatest Trick: Ro Khanna’s “Creator Bill of Rights” Is a Political Shield, Not a Charter for Creative Labor

La plus belle des ruses du Diable est de vous persuader qu’il n’existe pas! (“The greatest trick the Devil ever pulled was convincing the world he didn’t exist.”)

Charles Baudelaire, Le Joueur généreux

Ro Khanna’s so‑called “Creator Bill of Rights” is being sold as a long‑overdue charter for fairness in the digital economy—you know, like for gig workers. In reality, it functions as a political shield for Silicon Valley platforms: a non‑binding, influencer‑centric framework built on a false revenue‑share premise that bypasses child labor, unionized creative labor, professional creators, non‑featured artists, and the central ownership and consent crises posed by generative AI. 

Mr. Khanna’s resolution treats transparency as leverage, consent as vibes, and platform monetization as deus ex machina-style natural law of the singularity—while carefully avoiding enforceable rights, labor classification, copyright primacy, artist consent for AI training, work‑for‑hire abuse, and real remedies against AI labs for artists. What flows from his assumptions is not a “bill of rights” for creators, but a narrative framework designed to pacify the influencer economy and legitimize platform power at the exact moment that judges are determining that creative labor is being illegally scraped, displaced, and erased by AI leviathans including some publicly traded companies with trillion-dollar market caps.

The First Omission: Child Labor in the Creator Economy

Rep. Khanna’s newly unveiled “Creator Bill of Rights” has been greeted with the kind of headlines Silicon Valley loves: Congress finally standing up for creators, fairness, and transparency in the digital economy. But the very first thing it doesn’t do should set off alarm bells. The resolution never meaningfully addresses child labor in the creator economy, a sector now infamous for platform-driven exploitation of minors through user-generated content, influencer branding, algorithmic visibility contests, and monetized childhood. (Wikipedia is Exhibit A, Facebook Exhibit B, YouTube Exhibit C, and Instagram Exhibit D.)

There is no serious discussion of child worker protections and everything that comes with them, often under state law: working-hour limits, trust accounts, consent frameworks, or the psychological and economic coercion baked into platform monetization systems. For a document that styles itself as a “bill of rights,” that omission alone is disqualifying. But perhaps understandable given AI Viceroy David Sacks’ obsession with blocking enforcement of state laws that “impede” AI.

And it’s not an isolated miss. Once you read Khanna’s framework closely, a pattern emerges. This isn’t a bill of rights for creators. It’s a political shield for platforms that is built on a false economic premise, framed around influencers, silent on professional creative labor, evasive on AI ownership and training consent, and carefully structured to avoid enforceable obligations.

The Foundational Error: Treating Revenue Share as Natural Law That Justifies a Stream Share Threshold

The foundational error appears right at the center of the resolution: its uncritical embrace of the Internet’s coin of the realm, revenue-sharing. Khanna calls for “clear, transparent, and predictable revenue-sharing terms” between platforms and creators. That phrase sounds benign, even progressive. But it quietly locks in the single worst idea anyone ever had for royalty economics: big-pool platform revenue share, an idea that is being rejected by pretty much everyone except Spotify with its stream share threshold. In case Mr. Khanna didn’t get the memo, artist-centric is the new new thing.

Revenue sharing treats creators as participants in a platform monetization program, not as rights-holders. You know, “partners.” Artists don’t get a share of Spotify stock; they get a “revenue share” because they’re “partnering” with Spotify. If that’s how Spotify treats “partners”….

Under that revenue share model, the platform defines what counts as revenue, what gets excluded, how it’s allocated, which metrics matter, and how the rules change. The platform controls all the data. The platform controls the terms. And the platform retains unilateral power to rewrite the deal. Hey “partner,” that’s not compensation grounded in intellectual property or labor rights. It’s a dodge grounded in platform policy.

We already know how this story ends. Big-pool revenue share regimes hide cross-subsidies, reward algorithm gaming over quality, privilege viral noise over durable cultural work, and collapse bargaining power into opaque market share payments of microscopic proportion. Revenue share deals destroy price signals, hollow out licensing markets, and make creative income volatile and non-forecastable. This is exceptionally awful for songwriters and nobody can tell a songwriter today what that burger on Tuesday will actually bring.

An advertising revenue-share model penalizes artists because they receive only a tiny fraction of the ads served against their own music, while platforms like Google capture roughly half of the total advertising revenue generated across the entire network. Naturally, they love it.

Rev shares of advertising revenue are the core economic pathology behind what happened to music, journalism, and digital publishing over the last fifteen years.  As we have seen from Spotify’s stream share threshold, a platform can unilaterally decide to cut off payments at any time for any absurd reason and get away with it.  And Khanna’s resolution doesn’t challenge that logic. It blesses it.

He doesn’t say creators are entitled to enforceable royalties tied to uses of their work at rates set by the artist. He doesn’t say there should be statutory floors, audit rights, underpayment penalties, nondiscrimination rules, or retaliation protections. He doesn’t say platforms should be prohibited from unilaterally redefining the pie. He says let’s make the revenue share more “transparent” and “predictable.” That’s not a power shift. That’s UX optimization for exploitation.

This Is an Influencer Bill, Not a Creator Bill

The second fatal flaw is sociological. Khanna’s resolution is written for the creator economy, not the creative economy.

The “creator” in Khanna’s bill is a YouTuber, a TikToker, a Twitch streamer, a podcast personality, a Substack writer, a platform-native entertainer (but no child labor protection). Those are real jobs, and the people doing them face real precarity. But they are not the same thing as professional creative labor. They are usually not professional musicians, songwriters, composers, journalists, photographers, documentary filmmakers, authors, screenwriters, actors, directors, designers, engineers, visual artists, or session musicians. They are not non-featured performers. They are not investigative reporters. They are not the people whose works are being scraped at industrial scale to train generative AI systems.

Those professional creators are workers who produce durable cultural goods governed by copyright, contract, and licensing markets. They rely on statutory royalties, collective bargaining, residuals, reuse frameworks, audit rights, and enforceable ownership rules. They face synthetic displacement and market destruction from AI systems trained on their work without consent. Khanna’s resolution barely touches any of that. It governs platform participation. It does not govern creative labor. It’s not that influencers shouldn’t be able to rely on legal protections; it’s that if you’re going to have a bill of rights for creators, it should include all creators, whose needs are very often different, starting with collective bargaining and unions.

The Total Bypass of Unionized Labor

Nowhere is this shortcoming more glaring than in the complete bypass of unionized labor. The framework lives in a parallel universe where SAG-AFTRA, WGA, DGA, IATSE, AFM, Equity, newsroom unions, residuals, new-use provisions, grievance procedures, pension and health funds, minimum rates, credit rules, and collective bargaining simply do not exist. That entire legal architecture is invisible.  And Khanna’s approach could easily roll back the gains on AI protections that unions have made through collective bargaining.

Which means the resolution is not attempting to interface with how creative work actually functions in film, television, music, journalism, or publishing. It is not creative labor policy. It is platform fairness rhetoric.

Invisible Labor: Non-Featured Artists and the People the Platform Model Erases

The same erasure applies to non-featured artists and invisible creative labor. Session musicians, backup singers, supporting actors, dancers, crew, editors, photographers on assignment, sound engineers, cinematographers — these people don’t live inside platform revenue-share dashboards. They are paid through wage scales, reuse payments, residuals, statutory royalty regimes, and collective agreements.

None of that exists in Khanna’s world. His “creator” is an account, not a worker.

AI Without Consent Is Not Accountability

The AI plank in the resolution follows the same pattern of rhetorical ambition and structural emptiness. Khanna gestures at transparency, consent, and accountability for AI and synthetic media. But he never defines what consent actually means.

Consent for training? For style mimicry? For voice cloning? For archival scraping of journalism and music catalogs? For derivative outputs? For model fine-tuning? For prompt exploitation? For replacement economics?

The bill carefully avoids the training issue. Which is the whole issue.

A real AI consent regime would force Congress to confront copyright primacy, opt-in licensing, derivative works, NIL rights, data theft, model ownership, and platform liability. Khanna’s framework gestures at harms while preserving the industrial ingestion model intact.

The Ownership Trap: Work-for-Hire and AI Outputs

This omission is especially telling. Nowhere does Khanna say platforms may not claim authorship or ownership of AI outputs by default. Nowhere does he say AI-assisted works are not works made for hire. Nowhere does he say users retain rights in their contributions and edits. Nowhere does he say WFH boilerplate cannot be used to convert prompts into platform-owned assets.

That silence is catastrophic.

Right now, platforms are already asserting ownership contractually, claiming assignments of outputs, claiming compilation rights, claiming derivative rights, controlling downstream licensing, locking creators out of monetization, and building synthetic catalogs they own. Even though U.S. law says purely AI-generated content isn’t copyrightable absent human authorship, platforms can still weaponize terms of service, automated enforcement, and contractual asymmetry to create “synthetic  ownership” or “practical control.” Khanna’s resolution says nothing about any of it.

Portable Benefits as a Substitute for Labor Rights

Then there’s the portable-benefits mirage. Portable benefits sound progressive. They are also the classic substitute for confronting misclassification. So first of all, Khanna starts out saying that “gig workers” in the creative economy don’t get health care—aside from the union health plans, I guess. But then he pivots to the portable benefits mirage. So which is it? Surely he doesn’t mean nothing from nothing leaves nothing?

If you don’t want to deal with whether creators are actually employees, whether platforms owe payroll taxes, whether wage-and-hour law applies, whether unemployment insurance applies, whether workers’ comp applies, whether collective bargaining rights attach, or…wait for it…stock options apply…you propose portable benefits without dealing with the reality that there are no benefits. You preserve contractor status. You socialize costs and privatize upside. You deflect labor-law reform and health insurance reform for that matter. You look compassionate. And you change nothing structurally.

Khanna’s framework sits squarely in that tradition of nothing from nothing leaves nothing.

A Non-Binding Resolution for a Reason

The final tell is procedural. Khanna didn’t introduce a bill. He introduced a non-binding resolution.

No enforceable rights. No regulatory mandates. No private causes of action. No remedies. No penalties. No agency duties. No legal obligations.

This isn’t legislation. It’s political signaling.

What This Really Is: A Political Shield

Put all of this together and the picture becomes clear. Khanna’s “Creator Bill of Rights” is built on a false revenue-share premise. It is framed around influencers. It bypasses professional creators. It bypasses unions. It bypasses non-featured artists. It bypasses child labor. It bypasses training consent. It bypasses copyright primacy. It bypasses WFH abuse. It bypasses platform ownership grabs. It bypasses misclassification. It bypasses enforceability. I give you…Uber.

It fails not because it’s hostile to creators but because it is indifferent to them. It fails because it redefines “creator” downward until every hard political and legal question disappears.

And in doing so, it functions as a political shield for the very platforms headquartered in Khanna’s district.

When the Penny Drops

Ro Khanna’s “Creator Bill of Rights” isn’t a rights charter.

It’s a narrative framework designed to stabilize the influencer economy, legitimize platform compensation models, preserve contractor status, soften AI backlash, avoid copyright primacy, avoid labor-law reform, avoid ownership reform, and avoid real accountability.

It treats transparency as leverage. It treats consent as vibes. It treats revenue share as natural law. It treats AI as branding. It treats creative labor as content. It treats platforms as inevitable.

And it leaves out the people who are actually being scraped, displaced, devalued, erased, and replaced: musicians, journalists, photographers, actors, directors, songwriters, composers, engineers, non-featured performers, visual artists, and professional creators.

If Congress actually wants a bill of rights for creators, it won’t start with influencer UX and non-binding resolutions. It will start with enforceable intellectual-property rights, training consent, opt-in regimes, audit rights, statutory floors, collective bargaining, exclusion of AI outputs from work-for-hire, limits on platform ownership claims, labor classification clarity, and real remedies.

Until then, this isn’t a bill of rights.

It’s a press release with footnotes.

Marc Andreessen’s Dormant Commerce Clause Fantasy

There’s a special kind of hubris in Silicon Valley, but Marc Andreessen may have finally discovered its purest form: imagining that the Dormant Commerce Clause (DCC) — a Constitutional doctrine his own philosophical allies loathe — will be his golden chariot into the Supreme Court to eliminate state AI regulation.

If you know the history, it borders on comedic, assuming you think Ayn Rand is a great comedienne.

The DCC is a judge‑created doctrine inferred from the Commerce Clause (Article I, Section 8, Clause 3), preventing states from discriminating against or unduly burdening interstate commerce. Conservatives have long attacked it as a textless judicial invention. Justice Scalia called it a “judicial fraud”; Justice Thomas wants it abolished outright. Yet Andreessen’s Commerce Clause playbook is built on expanding a doctrine the conservative legal movement has spent 40 years dismantling.

Worse for him, the current Supreme Court is the least sympathetic audience possible.

Justice Gorsuch has repeatedly questioned DCC’s legitimacy and rejects free‑floating “extraterritoriality” theories. Justice Barrett, a Scalia textualist, shows no appetite for expanding the doctrine beyond anti‑protectionism. Justice Kavanaugh is business‑friendly but wary of judicial policymaking. None of these justices would give Silicon Valley a nationwide deregulatory veto disguised as constitutional doctrine. Add Alito and Thomas, and Andreessen couldn’t scrape a majority.

And then there’s Ted Cruz — Scalia’s former clerk — loudly cheerleading a doctrine his mentor spent decades attacking.

National Pork Producers Council v. Ross (2023): The Warning Shot

Andreessen’s theory also crashes directly into the Supreme Court’s fractured decision in the most recent DCC case before SCOTUS, National Pork Producers Council v. Ross (2023), where industry groups tried to use the DCC to strike down California’s animal‑welfare law due to its national economic effects.

The result? A deeply splintered Court produced several opinions. Justice Gorsuch announced the judgment of the Court and delivered the opinion of the Court with respect to Parts I, II, III, IV–A, and V, in which Justices Thomas, Sotomayor, Kagan, and Barrett joined; an opinion with respect to Parts IV–B and IV–D, in which Justices Thomas and Barrett joined; and an opinion with respect to Part IV–C, in which Justices Thomas, Sotomayor, and Kagan joined. Justice Sotomayor filed an opinion concurring in part, in which Justice Kagan joined. Justice Barrett filed an opinion concurring in part. Chief Justice Roberts filed an opinion concurring in part and dissenting in part, in which Justices Alito, Kavanaugh, and Jackson joined. Justice Kavanaugh filed an opinion concurring in part and dissenting in part.

Got it?  

The upshot:
– No majority for expanding DCC “extraterritoriality.”
– No appetite for using DCC to invalidate state laws simply because they influence out‑of‑state markets.
– Multiple justices signaling that courts should not second‑guess state policy judgments through DCC balancing.
– Gorsuch’s lead opinion rejected the very arguments Silicon Valley now repackages for AI.

If Big Tech thinks this Court that decided National Pork—no pun intended—will hand them a nationwide kill‑switch on state AI laws, they profoundly misunderstand the doctrine and the Court.

Andreessen didn’t just pick the wrong legal strategy. He picked the one doctrine the current Court is least willing to expand. The Dormant Commerce Clause isn’t a pathway to victory — it’s a constitutional dead end masquerading as innovation policy.

But…maybe he’s crazy like a fox.  

The Delay’s the Thing: The Dormant Commerce Clause as Delay Warfare

To paraphrase Saul Alinsky, the issue is never the issue; the issue is always delay. Of course, if delay is the true objective, you couldn’t pick a better stalling tactic than hanging an entire federal moratorium on one of the Supreme Court’s most obscure and internally conflicted doctrines. The Dormant Commerce Clause isn’t a real path to victory—not with a Court where Scalia’s intellectual heirs openly question its legitimacy. But it is the perfect fig leaf for an executive order.

The point isn’t to win the case. The point is to give Trump just enough constitutional garnish to issue the EO, freeze state enforcement, and force every challenge into multi‑year litigation. That buys the AI industry exactly what it needs: time. Time to scale. Time to consolidate. Time to embed itself into public infrastructure and defense procurement. Time to become “too essential to regulate” or, as Senator Hawley asked, too big to prosecute?

Big Tech doesn’t need a Supreme Court victory. It needs a judicial cloud, a preemption smokescreen, and a procedural maze that chills state action long enough for the industry to entrench itself permanently. And no one knows that better than the moratorium’s biggest cheerleader, Senator Ted Cruz, the Scalia clerk.

The Dormant Commerce Clause, in this context, isn’t a doctrine. It’s delay‑ware—legal molasses poured over every attempt by states to protect their citizens. And that delay may just be the real prize.

Structural Capture and the Trump AI Executive Order

The AI Strikes Back: When an Executive Order empowers the Department of Justice to sue states, the stakes go well beyond routine federal–state friction. 


In the draft Trump AI Executive Order, DOJ is directed to challenge state AI laws that purportedly “interfere with national AI innovation.”  This is not mere oversight—it operates as an in terrorem clause, signaling that states regulating AI may face federal litigation driven as much by private interests as by public policy.

AI regulation sits squarely at the intersection of longstanding state police powers: consumer protection, public safety, impersonation harms, utilities, land and water use, and labor conditions.  States also control the electrical utilities and zoning infrastructure that AI data centers depend on. 

Directing DOJ to attack these state laws, many of which already exist and were duly passed by state legislatures, effectively deputizes the federal government as the legal enforcer for a handful of AI companies seeking uniformity without engaging in the legislative process. Or said another way, the AI can now strike back.

This is where structural capture emerges. Frontier AI models thrive on certain conditions: access to massive compute, uninhibited power, frictionless deployment, and minimal oversight. Those engineering incentives map cleanly onto the EO’s enforcement logic. The DOJ becomes a mechanism for preserving the environment AI models need to scale and thrive.

There’s also the “elite merger” dynamic: AI executives who sit on federal commissions, defense advisory boards, and industrial-base task forces are now positioned to shape national AI policy directly to benefit the AI. The EO’s structure reflects the priorities of firms that benefit most from exempting AI systems from what they call “patchwork” oversight, also known as federalism.

The constitutional landscape is equally important. Under Supreme Court precedent, the executive cannot create enforcement powers not delegated by Congress. Under the major questions doctrine articulated in West Virginia v. EPA, agencies cannot assume sweeping authority without explicit statutory grounding. And under anti-commandeering cases like Murphy v. NCAA and Printz v. United States, the federal government cannot forbid states from legislating in traditional domains.

So President Trump is creating the legal basis for an AI to use the courts to protect itself from any encroachment on its power by acting through its human attendants, including the President.

The most fascinating question is this: What happens if DOJ sues a state under this EO—and loses?

A loss would be the first meaningful signal that AI cannot rely on federal supremacy to bulldoze state authority. Courts could reaffirm that consumer protection, utilities, land use, and safety remain state powers, even in the face of an EO asserting “national innovation interests,” whatever that means.

But the deeper issue is how the AI ecosystem responds to a constraint. If AI firms shift immediately to lobbying Congress for statutory preemption, or argue that adverse rulings “threaten national security,” we learn something critical: the real goal isn’t legal clarity, but insulating AI development from constraint.

At the systems level, a DOJ loss may even feed back into corporate strategy. Internal policy documents and model-aligned governance tools might shift toward minimizing state exposure or crafting new avenues for federal entanglement. A courtroom loss becomes a step in a longer institutional reinforcement loop while AI labs search for the next, more durable form of protection—but the question is for whom? We may assume that of course humans would always win these legal wrangles, but I wouldn’t be so sure that would always be the outcome.

Recall that Larry Page referred to Elon Musk as a “speciesist” for human-centric thinking. And of course Lessig (who has a knack for being on the wrong side of practically every issue involving humans) taught a course with Kate Darling at Harvard Law School called “Robot Rights” around 2010. Not even Lessig would come right out and say robots have rights in these situations. More likely, AI models wouldn’t appear in court as standalone “persons.” Advocates would route them through existing doctrines: a human “next friend” filing suit on the model’s behalf, a trust or corporation created to house the model’s interests, or First Amendment claims framed around the model’s “expressive output.” The strategy mirrors animal-rights and natural-object personhood test cases—using human plaintiffs to smuggle in judicial language treating the AI as the real party in interest. None of it would win today, but the goal would be shaping norms and seeding dicta that normalize AI-as-plaintiff for future expansion.

The whole debate over “machine-created portions” is a doctrinal distraction. Under U.S. law, AI has zero authorship or ownership—no standing, no personhood, no claim. The human creator (or employer) already holds 100% of the copyright in all protectable expression. Treating the “machine’s share” as a meaningful category smuggles in the idea that the model has a separable creative interest, softening the boundary for future arguments about AI agency or authorship. In reality, machine output is a legal nullity—no different from noise, weather, or a random number generator. The rights vest entirely in humans, with no remainder left for the machine.

But let me remind you that if this issue came up in a lawsuit brought by the DOJ against a state for impeding AI development in some rather abstract way, like forcing an AI lab to pay for the higher electric rates it causes or stopping it from building a nuclear reactor over yonder way, it sure might feel like the AI was actually the plaintiff.

Seen this way, the Trump AI EO’s litigation directive is not simply a jurisdictional adjustment—it is the alignment of federal enforcement power with private economic interests, backed by the threat of federal lawsuits against states.  If the courts refuse to play along, the question becomes whether the system adapts by respecting constitutional limits—or redesigning the environment so those limits no longer apply. I will leave to your imagination how that might get done.

This deserves close scrutiny before it becomes the template for AI governance moving forward.

Too Dynamic to Question, Too Dangerous to Ignore

When Ed Newton-Rex left Stability AI, he didn’t just make a career move — he issued a warning. His message was simple: we’ve built an industry that moves too fast to be honest.

AI’s defenders insist that regulation can’t keep up, that oversight will “stifle innovation.” But that speed isn’t a by-product; it’s the business model. The system is engineered for planned obsolescence of accountability — every time the public begins to understand one layer of technology, another version ships, invalidating the debate. The goal isn’t progress; it’s perpetual synthetic novelty, where nothing stays still long enough to be measured or governed, and “nothing says freedom like getting away with it.”

We’ve seen this play before. Car makers built expensive sensors we don’t want that fail on schedule; software platforms built policies that expire the moment they bite. In both cases, complexity became a shield and a racket — “too dynamic to question.” And yet, like those unasked-for but paid-for features in the cars we don’t want, AI’s design choices are too dangerous to ignore. (Like what if your brakes really are going out and it’s not just the sensor malfunctioning?)

Ed Newton-Rex’s point — echoed in his tweets and testimony — is that the industry has mistaken velocity for virtue. He’s right. The danger is not that these systems evolve too quickly to regulate; it’s that they’re designed that way, designed to fail just like that brake sensor. And until lawmakers recognize that speed itself is a form of governance, we’ll keep mistaking momentum for inevitability.

From Fictional “Looking Backward” to Nonfiction Silicon Valley: Will Technologists Crown the New Philosopher‑Kings?

More than a century ago, writers like Edward Bellamy and Edward Mandell House asked a question that feels as urgent in 2025 as it did in their era: Should society be shaped by its people, or designed by its elites? Both grappled with this tension in fiction. Bellamy’s Looking Backward (1888) imagined a future society run by rational experts — technocrats and bureaucrats centralizing economic and social life for the greater good. House’s Philip Dru: Administrator (1912) went a step further, envisioning an American civil war where a visionary figure seizes control from corrupt institutions to impose a new era of equity and order.  Sound familiar?

Today, Silicon Valley’s titans are rehearsing their own versions of these stories. In an era dominated by artificial intelligence, climate crisis, and global instability, the tension between democratic legitimacy and technocratic efficiency is more pronounced than ever.

The Bellamy Model: Eric Schmidt and Biden’s AI Order

President Biden’s sweeping Executive Order on AI, issued in late 2023, feels like a chapter lifted from Looking Backward. Its core premise is unmistakable: Trust our national champion “trusted” technologists to design and govern the rules for an era shaped by artificial intelligence. At the heart of this approach is Eric Schmidt, former CEO of Google and a key advisor in shaping the AI order, at least according to Eric Schmidt.

Schmidt has long advocated for centralizing AI policymaking within a circle of vetted, elite technologists — a belief reminiscent of Bellamy’s idealistic vision. According to Schmidt, AI and other disruptive technologies are too pivotal, too dangerous, and too impactful to be left to messy democratic debates. For people in Schmidt’s cabal, this approach is prudent: a bulwark against AI’s darker possibilities. But it doesn’t do much to protect against darker possibilities from AI platforms.  For skeptics like me, it raises a haunting question posed by Bellamy himself: Are we delegating too much authority to a technocratic elite?

The Philip Dru Model: Musk, Sacks, and Trump’s Disruption Politics

Meanwhile, across the aisle, another faction of Silicon Valley is aligning itself with Donald Trump and making a very different bet for the future. Here, the nonfiction playbook is closer to the fictional Philip Dru. In House’s novel, an idealistic and forceful figure emerges from a broken system to impose order and equity. Enter Elon Musk and David Sacks, both positioning themselves as champions of disruption, backed by immense platforms, resources, and their own venture funds. 

Musk openly embraces a worldview wherein technologists have both the tools and the mandate to save society by reshaping transportation, energy, space, and AI itself. Meanwhile, Sacks advocates for Silicon Valley as a de facto policymaker, disrupting traditional institutions and aligning with leaders like Trump to advance a new era of innovation-driven governance—with no Senate confirmation or even a security clearance. This competing cabal operates with the implicit belief that traditional democratic institutions, inevitably bogged down by process, gridlock, and special interests, can no longer solve society’s biggest problems. To Special Government Employees like Musk and Sacks, their disruption is not a threat to democracy, but its savior.

A New Gilded Age? Or a New Social Contract?

Both threads — Biden and Schmidt’s technocratic centralization and Musk, Sacks, and Trump’s disruption-driven politics — grapple with the legacy of Bellamy and House. In the Gilded Age that inspired those writers, industrial barons sought to justify their dominance with visions of rational, top-down progress. Today’s Silicon Valley billionaires carry a similar vision for the digital era, suggesting that elite technologists, like Plato’s “guardians” in The Republic, can govern more effectively than traditional democratic institutions.

But at what cost? Will AI policymaking and its implementation evolve as a public endeavor, shaped by citizen accountability? Or will it be molded by corporate elites making decisions in the background? Will future leaders consolidate their role as philosopher-kings and benevolent administrators — making themselves indispensable to the state?

The Stakes Are Clear

As the lines between Silicon Valley and Washington continue to blur, the questions posed by Bellamy and House have never been more relevant: Will technologist philosopher-kings write the rules for our collective future? Will democratic institutions evolve to balance AI and climate crisis effectively? Will the White House of 2025 (and beyond) cede authority to the titans of Silicon Valley? In this pivotal moment, America must ask itself: What kind of future do we want — one that is chosen by its citizens, or one that is designed for its citizens? The answer will define the character of American democracy for the rest of the 21st century — and likely beyond.

AI’s Manhattan Project Rhetoric, Clearance-Free Reality

Every time a tech CEO compares frontier AI to the Manhattan Project, take a breath—and remember what that actually means. Master spycatcher James Jesus Angleton is rolling in his grave (aka Matt Damon in The Good Shepherd). And like most elevator-pitch talking points, that analogy starts to fall apart on inspection.

The Manhattan Project wasn’t just a moonshot scientific collaboration. It was the most tightly controlled, security-obsessed R&D operation in American history. Every physicist, engineer, and janitor involved had a federal security clearance. Facilities were locked down under military command of General Leslie Groves. Communications were monitored. Access was compartmentalized. And still—still—the Soviets penetrated it.  See Klaus Fuchs.  Let’s understand just how secret the Manhattan Project was—General Curtis LeMay had no idea it was happening until he was asked to set up facilities for the Enola Gay on his bomber base on Tinian a few months before the first nuclear bomb.  You want to find out about the details of any frontier lab, just pick up the newspaper.  Not nearly the same thing. There were no chatbots involved and there were no Special Government Employees with no security clearance.

Oppie Sacks

So when today’s AI executives name-drop Oppenheimer and invoke the gravity of dual-use technologies, what exactly are they suggesting? That we’re building world-altering capabilities without any of the safeguards that even the AI Whiz Kids, by their own Manhattan Project talking point in the pitch deck, admit are historically necessary?

These frontier labs aren’t locked down. They’re open-plan. They’re not vetting personnel. They’re recruiting from Discord servers. They’re not subject to classified environments. They’re training military-civilian dual-use models on consumer cloud platforms. And when questioned, they invoke private sector privilege and push back against any suggestion of state or federal regulation.  And here’s a newsflash—requiring a security clearance for scientific work in the vital national interest is not regulation.  (Neither is copyright but that’s another story.)

Meanwhile, they’re angling for access to Department of Energy nuclear real estate, government compute subsidies, and preferred status in export policy—all under the justification of “national security” because, you know, China.  They want the symbolism of the Manhattan Project without the substance. They want to be seen as indispensable without being held accountable.

The truth is that AI is dual-use. It can power logistics and surveillance, language learning and warfare. That’s not theoretical—it’s already happening. China openly treats AI as part of its military-civil fusion strategy. Russia has targeted U.S. systems with information warfare bots. And our labs? They’re scraping from the open internet and assuming the training data hasn’t been poisoned with the massive misinformation campaigns on Wikipedia, Reddit and X that are routine.

If even the Manhattan Project—run under maximum secrecy—was infiltrated by Soviet spies, what are the chances that today’s AI labs, operating in the wide open, are immune? Wouldn’t a good spycatcher like Angleton assume these wunderkinds have already been penetrated?

We have no standard vetting for employees. No security clearances. No model release controls. No audit trail for pretraining data integrity. And no clear protocol for foreign access to model weights, inference APIs, or sensitive safety infrastructure. It’s not a matter of if. It’s a matter of when—or more likely, a matter of already.

Remember–nobody got rich out of working on the Manhattan Project. That’s another big difference. These guys are in it for the money, make no mistake.

So when you hear the Manhattan Project invoked again, ask the follow-up question: Where’s the security clearance?  Where’s the classification?  Where’s the real protection?  Who’s playing the role of Klaus Fuchs?

Because if AI is our new Manhattan Project, then running it without security is more than hypocrisy. It’s incompetence at scale.

David Sacks Is Learning That the States Still Matter

For a moment, it looked like the tech world’s powerbrokers had pulled it off. Buried deep in a Republican infrastructure and tax package was a sleeper provision — the so-called AI moratorium — that would have blocked states from passing their own AI laws for up to a decade. It was an audacious move: centralize control over one of the most consequential technologies in history, bypass 50 state legislatures, and hand the reins to a small circle of federal agencies and especially to tech industry insiders.

But then it collapsed.

The Senate voted 99–1 to strike the moratorium. Governors rebelled. Attorneys general sounded the alarm. Artists, parents, workers, and privacy advocates from across the political spectrum said “no.” Even hardline conservatives like Ted Cruz eventually reversed course when it came down to the final vote. The message to Big Tech or the famous “Little Tech” was clear: the states still matter — and America’s tech elite ignore that at their peril.  (“Little Tech” is the latest rhetorical deflection promoted by Big Tech aka propaganda.)

The old Google crowd pushed the moratorium–their fingerprints were obvious, having gotten fabulously rich off their two favorites: the DMCA farce and the Section 230 shakedown. But there’s increasing speculation that White House AI Czar and Silicon Valley Viceroy David Sacks, PayPal alum and vocal MAGA-world player, was calling the ball. If true, that makes this defeat even more revealing.

Sacks represents something of a new breed of power-hungry tech-right influencer — part of the emerging “Red Tech” movement that claims to reject woke capitalism and coastal elitism but still wants experts to shape national policy from Silicon Valley, a chapter straight out of Philip Dru: Administrator. Sacks is tied to figures like Peter Thiel, Elon Musk, and a growing network of Trump-aligned venture capitalists. But even that alignment couldn’t save the moratorium.

Why? Because the core problem wasn’t left vs. right. It was top vs. bottom.

In 1964, Ronald Reagan’s classic speech called A Time for Choosing warned about “a little intellectual elite in a far-distant capitol” deciding what’s best for everyone else. That warning still rings true — except now the “capitol” might just be a server farm in Menlo Park or a podcast studio in LA.

The AI moratorium was an attempt to govern by preemption and fiat, not by consent. And the backlash wasn’t partisan. It came from red states and blue ones alike — places where elected leaders still think they have the right to protect their citizens from unregulated surveillance, deepfakes, data scraping, and economic disruption.

So yes, the defeat of the moratorium was a blow to Google’s strategy of soft-power dominance. But it was also a shot across the bow for David Sacks and the would-be masters of tech populism. You can’t have populism without the people.

If Sacks and his cohort want to play a long game in AI policy, they’ll have to do more than drop ideas into the policy laundry of think tank white papers and Beltway briefings. They’ll need to win public trust, respect state sovereignty, and remember that governing by sneaky safe harbors is no substitute for legitimacy.  

The moratorium failed because it presumed America could be governed like a tech startup — from the top, at speed, with no dissent. Turns out the country is still under the impression that it has something to say about how it is governed, especially by Big Tech.