The DLC Nails It on Conditional Redesignation of the MLC

I’m certainly not a fan of any of the companies that make up the Digital Licensee Coordinator’s (DLC) membership. In fact, you probably couldn’t find a more complete rogues’ gallery of my least favorite Big Tech companies—but when they’re right, they’re right.

Redesignation is the Copyright Office’s periodic check on whether the Mechanical Licensing Collective still meets the Music Modernization Act’s criteria to run the §115 blanket license. The Office can renew the designation or replace the MLC to protect songwriters and licensees. In my view and the view of many others, including the Digital Licensee Coordinator, the Office could also condition any renewal (or “redesignation”) of the MLC on improving its lackluster performance and postpone the renewal until the MLC improves, if ever. That’s just common sense.

The DLC’s most recent “ex parte” letter answers years of songwriter and publisher requests that the MLC has brushed aside—better matching, transparency, governance, timeliness, metrics, and accountability. Crucially, it confronts repeated, credible criticisms that the MLC’s investment of unmatched royalties is ultra vires (outside the law): the MMA authorizes collection and distribution, not portfolio-management schemes for a fund that likely exceeds $1.2 billion of the songwriters’ money.

The Digital Licensee Coordinator urges the Copyright Office to conditionally redesignate the Mechanical Licensing Collective (MLC) and pair that step with stronger oversight. This approach reflects common sense and Congressional intent: if redesignation weren’t meant to be used as leverage to correct course, Congress wouldn’t have created a periodic redesignation process at all—it would have handed the MLC lifetime appointments. It didn’t, as one would expect. The MLC isn’t the Harry Fox Agency, after all. Conditional redesignation is therefore the appropriate tool to ensure the MLC performs its uniquely powerful statutory role responsibly, transparently, and in the interest of all rightsholders.

The DLC stresses how the MLC’s powers—collecting and distributing over a billion dollars annually, enforcing the blanket license, and imposing costs on licensees—demand robust governance and accountability distinct from what’s expected of the DLC itself. With that asymmetry in mind, the Office should focus the redesignation decision on whether the MLC needs additional safeguards to fulfill Congress’s vision for §115. Debating whether those safeguards arrive as explicit conditions on redesignation or as stand-alone regulations is a matter of form, not substance; either pathway legitimately implements the MMA and squarely fits within the Office’s authority. 

To “tee up” the record, the DLC attaches a helpful and representative Exhibit cataloging songwriter, independent publisher, and creator-group critiques across six themes: unmatched “black box” royalties; data/matching problems; governance and conflicts; transparency and accountability gaps; operational and technical delays; and the investment of unclaimed royalties. That comment supports conditional redesignation backed by measurable performance metrics (e.g., black-box reduction targets, matching accuracy, timeliness, dispute resolution KPIs) or by new, targeted regulations—and, if needed, both.

Finally, immediate triage should begin with abandoning the contested investment policy for unclaimed royalties—criticized by many stakeholders as ultra vires (which, by the way, eliminates any indemnity protection in the MMA)—and liquidating the portfolio so cash flows to the people Congress intended to benefit: songwriters. Conditional redesignation gives the Office the oversight handle to make those corrections now, align incentives going forward, and ensure the MLC’s stewardship is limited to the scale of its statutory power.

It also must be said that if the MLC doesn’t clean up its act, what comes next may not be so genteel. Conditional redesignation may look awfully good in the rear view mirror.

Google’s “AI Overviews” Draws a Formal Complaint in Germany under the EU Digital Services Act

A coalition of NGOs, media associations, and publishers in Germany has filed a formal Digital Services Act (DSA) complaint against Google’s AI Overviews, arguing the feature diverts traffic and revenue from independent media, increases misinformation risks via opaque systems, and threatens media plurality. Under the DSA, violations can carry fines up to 6% of global revenue—a potentially multibillion-dollar exposure.

The complaint claims that AI Overviews answer users’ queries inside Google, short-circuiting click-throughs to the original sources and starving publishers of ad and subscription revenues. Because users can’t see how answers are generated or verified, the coalition warns of heightened misinformation risk and erosion of democratic discourse.

Why the Digital Services Act Matters

As I understand the DSA, news publishers can (1) lodge a complaint with their national Digital Services Coordinator alleging a platform’s DSA breach (triggering regulatory scrutiny); (2) use the platform dispute tools: first the internal complaint-handling system, then certified out-of-court dispute settlement for moderation/search-display disputes—often the faster route to practical relief; (3) sue for damages in national courts for losses caused by a provider’s DSA infringement (Art. 54); or (4) act collectively by mandating a qualified entity or through the EU Representative Actions Directive to seek injunctions and redress (kind of like class actions in the US but more limited in scope).

Under the DSA, Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) are services with more than 45 million EU users (approximately 10% of the population). Once formally designated by the European Commission, they face stricter obligations than smaller platforms: conducting annual systemic risk assessments, implementing mitigation measures, submitting to independent audits, providing data access to researchers, and ensuring transparency in recommender systems and advertising. Enforcement is centralized at the Commission, with penalties up to 6% of global revenue. This matters because VLOPs like Google, Meta, and TikTok must alter core design choices that directly affect media visibility and revenue. In parallel, the European Commission and the national Digital Services Coordinators retain powerful public-enforcement tools against Very Large Online Platforms.

As a designated Very Large Online Platform, Google faces strict duties to mitigate systemic risks, provide algorithmic transparency, and avoid conduct that undermines media pluralism. The complaint contends AI Overviews violate these requirements by replacing outbound links with Google’s own synthesized answers.

The U.S. Angle: The Penske Lawsuit

A Major Publisher Has Sued Google in Federal Court Over AI Overviews

On Sept. 14, 2025, Penske Media (Rolling Stone, Billboard, Variety) sued Google in D.C. federal court, alleging AI Overviews repurpose its journalism, depress clicks, and damage revenue—marking the first lawsuit by a major U.S. publisher aimed squarely at AI Overviews. The claims include a training-use allegation: that Google enriched itself by using PMC’s works to train and ground the models powering Gemini and AI Overviews, for which Penske seeks restitution and disgorgement. Penske also argues that Google abuses its search monopoly to coerce publishers: indexing is effectively tied to letting Google (a) republish and summarize their material in AI Overviews, Featured Snippets, and AI Mode, and (b) use their works to train Google’s LLMs—reducing click-through and revenues while letting Google expand its monopoly into online publishing.

Trade Groups Urged FTC/DOJ Action

The News/Media Alliance had previously asked the FTC and DOJ to investigate AI Overviews for diverting traffic and ‘misappropriating’ publishers’ investments, calling for enforcement under FTC Act §5 and Sherman Act §2.

Data Showing Traffic Harm

Industry analyses indicate material referral declines tied to AI Overviews. Digital Content Next reports Google Search referrals down 1%–25% for most member publishers over recent weeks; Digiday pegs the impact at as much as 25%. The trend feeds a broader ‘Google Zero’ concern—zero-click results displacing publisher visits.

Why Europe vs. U.S. Paths Differ

The EU’s DSA offers a procedural path to assess systemic risk and platform design choices like AI Overviews and to levy platform-wide remedies and fines. In the U.S., the fight currently runs through private litigation (Penske) and competition/consumer-protection advocacy at the FTC and DOJ, where enforcement tools differ and take longer to mobilize.

RAG vs. Training Data Issues

AI Overviews are best understood as a Retrieval-Augmented Generation (RAG) issue. Readers will recall that RAG is probably the most direct example of verbatim copying in AI outputs. The harms arise because Google as middleman retrieves live publisher content and synthesizes it into an answer inside the Search Engine Results Page (SERP), reducing traffic to the sources. This is distinct from the training-data lawsuits (Kadrey, Bartz) that allege unlawful ingestion of works during model pretraining.

Kadrey: Indirect Market Harm

A RAG case like Penske’s could also be characterized as indirect market harm. Judge Chhabria’s ruling in Kadrey under U.S. law highlights that market harm isn’t limited to direct substitution for fair use purposes. Factor 4 in fair use analysis includes foreclosure of licensing and derivative markets. For AI/search, that means reduced referrals depress ad and subscription revenue, while widespread zero-click synthesis may foreclose an emerging licensing market for summaries and excerpts. Evidence of harm includes before/after referral data, revenue deltas, and qualitative harms like brand erasure and loss of attribution. Remedies could include more prominent linking, revenue-sharing, compliance with robots/opt-outs, and provenance disclosures.

I like them RAG cases.

The Essential Issue is Similar in EU and US

Whether in Brussels or Washington, the core dispute is very similar: Who captures the value of journalism in an AI-mediated search world? Germany’s DSA complaint and Penske’s U.S. lawsuit frame twin fronts of a larger conflict—one about control of distribution, payment for content, and the future of a pluralistic press. Not to mention the usual free-riding and competition issues swirling around Google as it extracts rents by inserting itself into places it’s not wanted.

How an AI Moratorium Would Preclude Penske’s Lawsuit

Many “AI moratorium” proposals function as broad safe harbors with preemption. A moratorium to benefit AI and pick national champions was the subject of an IP Subcommittee hearing on September 18. If Congress enacted a moratorium that (1) expressly immunizes core AI practices (training, grounding, and SERP-level summaries), (2) preempts overlapping state claims, and (3) channels disputes into agency processes with exclusive public enforcement, it would effectively close the courthouse door to private suits like Penske and make the US more like Europe without the enforcement apparatus. Here’s how:

Express immunity for covered conduct. If the statute declares that using publicly available content for training and for retrieval-augmented summaries in search is lawful during the moratorium, Penske’s core theory (RAG substitution plus training use) loses its predicate.
No private right of action / exclusive public enforcement. Limiting enforcement to the FTC/DOJ (or a designated tech regulator) would bar private plaintiffs from seeking damages or injunctions over covered AI conduct.
Antitrust carve-out or agency preclearance. Congress could provide that covered AI practices (AI Overviews, featured snippets powered by generative models, training/grounding on public web content) cannot form the basis for Sherman/Clayton liability during the moratorium, or must first be reviewed by the agency—undercutting Penske’s §1/§2 counts.
Primary-jurisdiction plus statutory stay. Requiring first resort to the agency with a mandatory stay of court actions would pause (or dismiss) Penske until the regulator acts.
Preemption of state-law theories. A preemption clause would sweep in state unjust-enrichment and consumer-protection claims that parallel the covered AI practices.
Limits on injunctive relief. Barring courts from enjoining covered AI features (e.g., SERP-level summaries) and reserving design changes to the agency would eliminate the centerpiece remedy Penske seeks.
Potential retroactive shield. If drafted to apply to past conduct, a moratorium could moot pending suits by deeming prior training/RAG uses compliant for the moratorium period.

A moratorium with safe harbors, preemption, and agency-first review would either stay, gut, or bar Penske’s antitrust and unjust-enrichment claims—reframing the dispute as a regulatory matter rather than a private lawsuit. Want to bet that White House AI Viceroy David Sacks will be sitting in judgment?

Missile Gap, Again: Big Tech’s Private Power vs. the Public Grid

If we let a hyped “AI gap” dictate land and energy policy, we’ll privatize essential infrastructure and socialize the fallout.

Every now and then, it’s important to focus on what our alleged partners in music distribution are up to, because the reality is they’re not record people—their real goal is getting their hands on the investment we’ve all made in helping compelling artists find and keep an audience. And when those same CEOs use the profits from our work to pivot to “defense tech” or “dual use” AI (civilian and military), we should hear what that euphemism really means: killing machines.

Daniel Ek is backing battlefield-AI ventures; Eric Schmidt has spent years bankrolling and lobbying for the militarization of AI while shaping the policies that green-light it. This is what happens when we get in business with people who don’t share our values: the capital, data, and social license harvested from culture gets recycled into systems built to find, fix, and finish human beings. As Bob Dylan put it in Masters of War, “You fasten the triggers for the others to fire.” These deals aren’t value-neutral—they launder credibility from art into combat. If that’s the future on offer, our first duty is to say so plainly—and refuse to be complicit.

The same AI outfits that for decades have refused to license or begrudgingly licensed the culture they ingest are now muscling into the hard stuff—power grids, water systems, and aquifers—wherever governments are desperate to win their investment. Think bespoke substations, “islanded” microgrids dedicated to single corporate users, priority interconnects, and high-volume water draws baked into “innovation” deals. It’s happening globally, but nowhere more aggressively than in the U.S., where policy and permitting are being bent toward AI-first infrastructure—thanks in no small part to Silicon Valley’s White House “AI viceroy,” David Sacks. If we don’t demand accountability at the point of data and at the point of energy and water, we’ll wake up to AI that not only steals our work but also commandeers our utilities. Just like Senator Wyden accomplished for Oregon.

These aren’t pop-up server farms; they’re decades-long fixtures. Substations and transmission are built on 30–50-year horizons, generation assets run 20–60 years, with multi-decade PPAs, water rights, and recorded easements that outlive elections. Once steel’s in the ground, rate designs and priority interconnects get contractually sticky. Unlike the Internet fights of the last 25 years—where you could force a license for what travels through the pipe—this AI footprint binds communities for generations; it’s essentially forever. So we will be stuck for generations with the decisions we make today.

Because China–The New Missile Gap

There’s a familiar ring to the way America is now talking about AI, energy, and federal land use (and likely expropriation). In the 1950s Cold War era, politicians sold the country on a “missile gap” that later proved largely mythical, yet it hardened budgets, doctrine, and concrete in ways that lasted decades.

Today’s version is the “AI gap”—a story that says China is sprinting on AI, so we must pave faster, permit faster, and relax old guardrails to keep up. Of course, this diverts attention from China’s advances in directed-energy weapons and hypersonic missiles, which are here right now and will play havoc on an actual battlefield—and to which the West has no counter. But let’s not talk about those (at least not until we lose a carrier in the South China Sea); let’s worry about AI, because that will make Silicon Valley even richer.

Watch any interview of executives from the frontier AI labs and within minutes they will hit their “because China” talking point. National security and competitiveness are real concerns, but they don’t justify blank checks and Constitutional-level safe harbors. The missile‑gap analogy is useful because it reminds us how compelling threat-narrative propaganda can swamp due diligence. We can support strategic compute and energy without letting an AI‑gap story permanently bulldoze open space and saddle communities with the bill.

Energy Haves (Them) and Have Nots (Everyone else)

The result is a two‑track energy state AKA hell on earth. On Track A, the frontier AI lab hyperscalers like Google, Meta, Microsoft, OpenAI & Co. build company‑town infrastructure for AI—on‑site electricity generation by microgrids outside of everyone else’s electric grid, dedicated interties and other interconnections between electric operators—often on or near federal land. On Track B, the public grid carries everyone else: homes, hospitals, small manufacturers, water districts. As President Trump said at the White House AI dinner this week, Track A promises to “self‑supply,” but even self‑supplied campuses still lean on the public grid for backup and monetization, and they compete for scarce interconnection headroom.

President Trump is allowing the hyperscalers to get permanent rights to build on massive parcels of government land, including private utilities to power the massive electricity and water-cooling needs of AI data centers. Strangely enough, this continues a Biden policy under an executive order issued late in the Biden presidency that Trump now takes credit for, and it is a 180 from America First according to people who ought to know, like Steve Bannon. And yet it is happening.

White House Dinners are Old News in Silicon Valley

If someone says “AI labs will build their own utilities on federal land,” that land comes in two flavors: Department of Defense (now War Department) or Department of Energy sites, and land owned by the Bureau of Land Management (BLM). These are vastly different categories. DoD/DOE sites such as Idaho National Laboratory, the Oak Ridge Reservation, the Paducah GDP, and the Savannah River Site imply behind-the-fence, mission-tied microgrids with limited public friction; BLM land implies public-land rights-of-way and multi-use trade-offs (grazing, wildlife, cultural), longer timelines, and grid-export dynamics with potential “curtailment,” which means prioritizing electricity for the hyperscalers. Take Idaho National Laboratory (INL), one of the four AI/data-center sites: INL’s own environmental reports state that about 60% of the INL site is open to livestock grazing, with monitoring of grazing impacts on habitat. That’s likely over.

This is about how we power anything not controlled by a handful of firms. And it’s about the land footprint: fenced solar yards, switchyards, substations, massive transmission lines, wider roads, laydown areas. On BLM range and other open spaces, those facilities translate into real, local losses—grazable acres inside fences, stock trails detoured, range improvements relocated.

What the two tracks really do

Track A solves a business problem: compute growth outpacing the public grid’s construction cycle. By putting electrons next to servers (literally), operators avoid waiting years for a substation or a 230‑kV line. Microgrids provide islanding during emergencies and participation in wholesale markets when connected. It’s nimble, and it works—for the operator.

Track B inherits the volatility: planners must consider a surge of large loads that may or may not appear, while maintaining reliability for everyone else. Capacity margins tighten; transmission projects get reprioritized; retail rates absorb the externalities. When utilities plan for speculative loads and those projects cancel or slide, the region can be left with stranded costs or deferred maintenance elsewhere.

The land squeeze we’re not counting

Public agencies tout gigawatts permitted. They rarely publish the acreage fenced, the AUMs (animal unit months of grazing) affected, or the water commitments. Utility‑scale solar commonly pencils out at on the order of 5–7 acres per megawatt of capacity depending on layout and topography. At that ratio, a single gigawatt occupies thousands of acres—acres that, unlike wind, often can’t be grazed once panels and security fences go in. Even where grazing is technically possible, access roads, laydown yards, and vegetation control impose real costs on neighboring users.

Wind is more compatible with grazing, but it isn’t footprint‑free. Pads, roads, and safety buffers fragment pasture. Transmission to move that energy still needs corridors—and those corridors cross someone’s water lines and gates. Multiple use is a principle; on the ground it’s a schedule, a map, and a cost. Just for reference, the rule‑of‑thumb for acreage per unit of capacity is approximately 5–7 acres per megawatt of direct current (“MWdc”), and access roads, laydown, and buffers extend beyond the fence line; a rough back-of-the-envelope calculation follows below.
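For readers who like to check the arithmetic, here is a minimal back-of-the-envelope sketch in Python using that 5–7 acres per MWdc rule of thumb; the capacities are illustrative examples I chose, not figures from any actual project or permit.

```python
# Back-of-the-envelope land footprint for utility-scale solar, using the
# 5-7 acres per MWdc rule of thumb quoted above. Capacities below are
# illustrative only, not actual project figures.

ACRES_PER_MW_LOW, ACRES_PER_MW_HIGH = 5, 7

def fenced_acres(capacity_mwdc: float) -> tuple[float, float]:
    """Return (low, high) estimates of fenced acreage for a given capacity."""
    return capacity_mwdc * ACRES_PER_MW_LOW, capacity_mwdc * ACRES_PER_MW_HIGH

for capacity in (100, 500, 1_000):  # MWdc; 1,000 MWdc is one gigawatt
    low, high = fenced_acres(capacity)
    print(f"{capacity:>5} MWdc -> roughly {low:,.0f} to {high:,.0f} fenced acres")
    # Access roads, laydown yards, and buffers extend beyond the fence line.
```

At the 1,000 MWdc line, that works out to roughly 5,000 to 7,000 fenced acres, which is the “thousands of acres” per gigawatt referenced above.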

We are going through this right now in my part of the world. Central Texas is bracing for a wave of new high-voltage transmission. These are 345-kV corridors cutting (literally) across the Hill Country to serve load growth for chip fabricators and data centers and tie-in distant generation (so big lines are a must once you commit to the usage). Ranchers and small towns are pushing back hard: eminent-domain threats, devalued land, scarred vistas, live-oak and wildlife impacts, and routes that ignore existing roads and utility corridors. Packed hearings and county resolutions demand co-location, undergrounding studies, and real alternatives—not “pick a line on a map” after the deal is done. The fight isn’t against reliability; it’s against a planning process that externalizes costs onto farmers, ranchers, other landowners and working landscapes.

Texas’s latest SB 6 is the case study. After a wave of ultra-large AI/data-center loads, frontier labs and their allies pushed lawmakers to rewrite reliability rules so the grid would accommodate them. SB 6 empowers the Texas grid operator ERCOT to police new mega-loads—through emergency curtailment and/or firm-backup requirements—effectively reshaping interconnection priorities and shifting reliability risk and costs onto everyone else. “Everyone else” means you and me, kind of like the “full faith and credit of the US”. Texas SB 6 was signed into law in June 2025 by Gov. Greg Abbott. It’s now in effect and directs PUCT/ERCOT to set new rules for very large loads (e.g., data centers), including curtailment during emergencies and added interconnection/backup-power requirements. So the devil will be in the details and someone needs to put on the whole armor of God, so to speak.

The phantom problem

Another quiet driver of bad outcomes is phantom demand: developers filing duplicative load or interconnection requests to keep options open. On paper, it looks like a tidal wave; in practice, only a slice gets built. If every inquiry triggers a utility study, a route survey, or a placeholder in a capital plan, neighborhoods can end up paying for capacity that never comes online to serve them.

A better deal for the public and the range

Prioritize already‑disturbed lands—industrial parks, mines, reservoirs, existing corridors—before greenfield BLM range land. Where greenfield is unavoidable, set a no‑net‑loss goal for AUMs and require real compensation and repair SLAs for affected range improvements.

Milestone gating for large loads: require non‑refundable deposits, binding site control, and equipment milestones before a project can hold scarce interconnection capacity or trigger grid upgrades. Count only contracted loads in official forecasts; publish scenario bands so rate cases aren’t built on hype.

Common‑corridor rules: make developers prove they can’t use existing roads or rights‑of‑way before claiming new footprints. Where fencing is required, use wildlife‑friendly designs and commit to seasonal gates that preserve stock movement.

Public equity for public land: if a campus wins accelerated federal siting and long‑term locational advantage, tie that to a public revenue share or capacity rights that directly benefit local ratepayers and counties. Public land should deliver public returns, not just private moats.

Grid‑help obligations: if a private microgrid islands to protect its own uptime, it should also help the grid when connected. Enroll batteries for frequency and reserve services; commit to emergency export; and pay a fair share of fixed transmission costs instead of shifting them onto households.

Or you could do what the Dutch and Irish governments proposed under the guise of climate change regulations—kill all the cattle. I can tell you right now that that ain’t gonna happen in Texas.

Will We Get Fooled Again?

If we let a hyped latter-day “missile gap” set the terms, we’ll lock in a two‑track energy state: private power for those who can afford to build it, a more fragile and more expensive public grid for everyone else, and open spaces converted into permanent infrastructure at a discount. The alternative is straightforward: price land and grid externalities honestly, gate speculative demand, require public returns on public siting, and design corridor rules that protect working landscapes. That’s not anti‑AI; it’s pro‑public. Everything not controlled by Big Tech will be better for it.

Let’s be clear: the data-center onslaught will be financed by the taxpayer one way or another—either as direct public outlays or through sweetheart “leases” of federal land to build private utilities behind the fence for the richest corporations in commercial history. After all the goodies that Trump is handing to the AI platforms, let’s not have any loose talk of “selling” excess electricity to the public–that price should be zero. Even so, the sales pitch about “excess” electricity they’ll generously sell back to the grid is a fantasy; when margins tighten, they’ll throttle output and cut costs, not volunteer philanthropy. Picture it: do you really think these firms won’t optimize for themselves first and last? We’ll be left with the bills, the land impacts, and a grid redesigned around their needs. Ask yourself—what in the last 25 years of Big Tech behavior says “trustworthy” to you?

Speaker Updates for September 18 Artist Rights Roundtable in DC

We’re pleased to welcome Josh Hurvitz, Partner, NVG and Head of Advocacy for A2IM and Kevin Amer, Chief Legal Officer, The Authors Guild to the Roundtable on September 18 at American University in DC!

Artist Rights Roundtable on AI and Copyright: 
Coffee with Humans and the Machines     

Join the Artist Rights Institute (ARI) and Kogod’s Entertainment Business Program for a timely morning roundtable on AI and copyright from the artist’s perspective. We’ll explore how emerging artificial intelligence technologies challenge authorship, licensing, and the creative economy — and what courts, lawmakers, and creators are doing in response.

🗓️ Date: September 18, 2025
🕗 Time: 8:00 a.m. – 12:00 noon
📍 Location: Butler Board Room, Bender Arena, American University, 4400 Massachusetts Ave NW, Washington D.C. 20016

🎟️ Admission:
Free and open to the public. Registration required at Eventbrite. Seating is limited.

🅿️ Parking map is available here. Pay-As-You-Go parking is available in hourly or daily increments ($2/hour, or $16/day) using the pay stations in the elevator lobbies of Katzen Arts Center, East Campus Surface Lot, the Spring Valley Building, Washington College of Law, and the School of International Service.

Hosted by the Artist Rights Institute & American University’s Kogod School of Business, Entertainment Business Program

🔹 Overview:

☕ Coffee served starting at 8:00 a.m.
🧠 Program begins at 8:50 a.m.
🕛 Concludes by 12:00 noon — you’ll be free to have lunch with your clone.

🗂️ Program:

8:00–8:50 a.m. – Registration and Coffee

8:50–9:00 a.m. – Introductory Remarks by KOGOD Dean David Marchick and ARI Director Chris Castle

9:00–10:00 a.m. – Topic 1: AI Provenance Is the Cornerstone of Legitimate AI Licensing

Speakers:

  • Dr. Moiya McTier, Human Artistry Campaign
  • Ryan Lehnning, Assistant General Counsel, International at SoundExchange
  • The Chatbot

Moderator: Chris Castle, Artist Rights Institute

10:10–10:30 a.m. – Briefing: Current AI Litigation

  • Speaker: Kevin Madigan, Senior Vice President, Policy and Government Affairs, Copyright Alliance

10:30–11:30 a.m. – Topic 2: Ask the AI: Can Integrity and Innovation Survive Without Artist Consent?

Speakers:

  • Erin McAnally, Executive Director, Songwriters of North America
  • Jen Jacobsen, Executive Director, Artist Rights Alliance
  • Josh Hurvitz, Partner, NVG and Head of Advocacy for A2IM
  • Kevin Amer, Chief Legal Officer, The Authors Guild

Moderator: Linda Bloss-Baum, Director, Business and Entertainment Program, KOGOD School of Business

11:40–12:00 p.m. – Briefing: US and International AI Legislation

  • Speaker: George York, SVP, International Policy, Recording Industry Association of America

🔗 Stay Updated:

Watch this space and visit Eventbrite for updates and speaker announcements.

Denmark’s Big Idea: Protect Personhood from the Blob With Consent First and Platform Duty Built In

Denmark has given the rest of us a simple, powerful starting point: protect the personhood of citizens from the blob—the borderless slurry of synthetic media that can clone your face, your voice, and your performance at scale. Crucially, Denmark isn’t trying to turn name‑image‑likeness into a mini‑copyright. It’s saying something more profound: your identity isn’t a “work”; it’s you. It’s what is sometimes called “personhood.” That framing changes everything. It’s not commerce, it’s a human right.

The Elements of Personhood

Personhood treats human identity as a moral consideration, not a piece of content. For example, the European Court of Human Rights reads Article 8 ECHR (“private life”) to include personal identity (name, integrity of identity, etc.), protecting individual identity against unjustified interference. This is, of course, anathema to Silicon Valley, but the world takes a different view.

In fact, Denmark’s proposal echoes the Universal Declaration of Human Rights. It starts with dignity (Art. 1) and recognition of each person before the law (Art. 6), and it squarely protects private life, honor, and reputation against synthetic impersonation (Art. 12). It balances freedom of expression (Art. 19) with narrow, clearly labeled carve-outs, and it respects creators’ moral and material interests (Art. 27(2)). Most importantly, it delivers an effective remedy (Art. 8): a consent-first rule backed by provenance and cross-platform stay-down, so individuals aren’t forced into DMCA-style learned helplessness.

Why does this matter? Because the moment we call identity or personhood a species of copyright, platforms will reach for a familiar toolbox—quotation, parody, transient copies, text‑and‑data‑mining (TDM)—and claim exceptions to protect them from “data holders.” That’s bleed‑through: the defenses built for expressive works ooze into an identity context where they don’t belong. The result is an unearned permission slip to scrape faces and voices “because the web is public.” Denmark points us in the opposite direction: consent or it’s unlawful. Not “fair use,” not “lawful access,” not “industry custom,” not “data profile.” Consent. Pretty easy concept. It’s one of the main reasons tech executives keep their kids away from cell phones and social media.

Not Replicating the Safe Harbor Disaster

Think about how we got here. The first generation of the internet scaled by pushing risk downstream with a portfolio of safe harbors like the God-awful DMCA and Section 230 in the US. Platforms insisted they deserved blanket liability shields because they were special. They were “neutral pipes,” which no one believed then and no one believes now. These massive safe harbors hardened into a business model that likely added billions to the FAANG bottom line. We taught millions of rightsholders and users to live with learned helplessness: file a notice, watch copies multiply, rinse and repeat. Many users did not know they could even do that much, and frankly still may not. That DMCA‑era whack‑a‑mole turned into a faux license, a kind of “catch me if you can” bargain where exhaustion replaces consent.

Denmark’s New Protection of Personhood for the AI Era

Denmark’s move is a chance to break that pattern—if we resist the gravitational pull back to copyright. A fresh right of identity (a “sui generis” right, for the Latin fans) is not subject to copyright or database exceptions, especially fair use, the DMCA, and TDM. In plain English: “publicly available” is not permission to clone your face, train on your voice, or fabricate your performance. Or your children, either. If an AI platform wants to use identity, it asks first. If it doesn’t ask, it doesn’t get to do it, and it doesn’t get to keep the model it trained on it. And as in many other areas, children can’t consent.

That legal foundation unlocks the practical fix creators and citizens actually need: stay‑down across platforms, not endless piecemeal takedowns. Imagine a teacher discovers a convincing deepfake circulating on two social networks and a messaging app. If we treat that deepfake as a copyright issue under the old model, she sends three notices, then five, then twelve. Week two, the video reappears with a slight change. Week three, it’s re‑encoded, mirrored, and captioned. The message she receives under a copyright regime is “you can never catch up, so why don’t you just give up?” Which, of course, in the world of Silicon Valley monopoly rents, is called the plan. That’s the learned helplessness Denmark gives us permission to reject.

Enforcing Personhood

How would the new plan work? First, we treat realistic digital imitations of a person’s face, voice, or performance as illegal absent consent, with only narrow, clearly labeled carve‑outs for genuine public‑interest reporting (no children, no false endorsement, no biometric spoofing risk, provenance intact). That’s the rights architecture: bright lines and human‑centered. Hence, “personhood.”

Second, we wire enforcement to succeed at internet scale. The way out of whack‑a‑mole is a cross‑platform deepfake registry operated with real governance. A deepfake registry doesn’t store videos; it stores non‑reversible fingerprints—exact file hashes for byte‑for‑byte matches and robust, perceptual fingerprints for the variants (different encodes, crops, borders). For audio, we use acoustic fingerprints; for video, scene/frame signatures. These markers will evolve, and so should the deepfake registry. One confirmed case becomes a family of identifiers that platforms check at upload and on re‑share. The first takedown becomes the last.
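To make those mechanics concrete, here is a minimal Python sketch of a registry lookup, assuming SHA-256 file hashes for exact matches and a hypothetical perceptual_fingerprint() function standing in for whatever robust audio/video fingerprinting a real registry would use. The names, thresholds, and data structures are my illustration only, not anything specified in Denmark’s proposal.

```python
import hashlib

# Minimal sketch of a deepfake-registry lookup: exact hashes catch
# byte-for-byte copies; perceptual fingerprints catch re-encoded or cropped
# variants. perceptual_fingerprint() below is a hypothetical placeholder for
# a real fingerprinting library; all names and thresholds are illustrative.

registry_exact: set[str] = set()      # SHA-256 hashes of confirmed deepfakes
registry_perceptual: list[int] = []   # perceptual fingerprints (64-bit here)

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def perceptual_fingerprint(data: bytes) -> int:
    # Placeholder only: a real system would derive a robust fingerprint from
    # decoded frames or audio so that small edits barely change it.
    return int.from_bytes(hashlib.blake2b(data, digest_size=8).digest(), "big")

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

def register_confirmed_case(data: bytes) -> None:
    """One confirmed case becomes a family of identifiers."""
    registry_exact.add(sha256_of(data))
    registry_perceptual.append(perceptual_fingerprint(data))

def check_upload(data: bytes, max_distance: int = 10) -> bool:
    """True if the upload matches a confirmed case exactly or approximately."""
    if sha256_of(data) in registry_exact:
        return True
    fp = perceptual_fingerprint(data)
    return any(hamming(fp, known) <= max_distance for known in registry_perceptual)

register_confirmed_case(b"bytes of a verified deepfake file")
print(check_upload(b"bytes of a verified deepfake file"))  # True: exact hash match
# A real perceptual fingerprint, unlike this placeholder, would also flag the
# same clip after re-encoding, cropping, or a 10-pixel border.
```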

Third, we pair that with provenance by default. Provenance isn’t a license; it’s evidence. When credentials are present, content is easier to authenticate, so there is an incentive to use them. Provenance is the rebar that turns legal rules into reliable, automatable processes. However, the absence of credentials doesn’t mean a free-for-all.

Finally, we put the onus where it belongs—on platforms. Europe’s Digital Services Act at least theoretically already replaced “willful blindness” with “notice‑and‑action” duties and oversight for very large platforms. Denmark’s identity right gives citizens a clear, national‑law basis to say: “This is illegal content—remove it and keep it down.” The platform’s job isn’t to litigate fair use in the abstract or hide behind TDM. It’s to implement upload checks, preserve provenance, run repeat‑offender policies, and prevent recurrences. If a case was verified yesterday, it shouldn’t be back tomorrow with a 10‑pixel border or other trivial alteration to defeat the rules.

Some will ask: what about creativity and satire? The answer is what it has always been in responsible speech law—more speech, not fake speech. If you’re lampooning a politician with clearly labeled synthetic speech, no implied endorsement, provenance intact, and no risk of biometric spoofing or fraud, you have defenses. The point isn’t to smother satire; it’s to end the pretense that satire requires open season on the biometric identities of private citizens and working artists.

Others will ask: what about research and innovation? Good research runs on consent, especially human subject research (see 45 C.F.R. part 46). If a lab wants to study voice cloning, it recruits consenting participants, documents scope and duration, and keeps data and models in controlled settings. That’s science. What isn’t science is scraping the voices of a country’s population “because the web is public,” then shipping a model that anyone can use to spoof a bank’s call‑center checks. A no‑TDM‑bleed‑through clause draws that line clearly.

And yes, edge cases exist. There will be appeals, mistakes, and hard calls at the margins. That is why the registry must be governed—with identity verification, transparent logs, fast appeals, and independent oversight. Done right, it will look less like a black box and more like infrastructure: a quiet backbone that keeps people safe while allowing reporting and legitimate creative work to thrive.

If Denmark’s spark is to become a firebreak, the message needs to be crisp:

— This is not copyright. Identity is a personal right; copyright defenses don’t apply.

— Consent is the rule. A deepfake without consent is unlawful.

— No TDM bleed‑through. “Publicly available” does not equate to permission to clone or train.

— Provenance helps prove, not permit. Keep credentials intact; stripping them has consequences.

— Stay‑down, cross‑platform. One verified case should not become a thousand reuploads.

That’s how you protect personhood from the blob. By refusing to treat humans like “content,” by ending the faux‑license of whack‑a‑mole, and by making platforms responsible for prevention, not just belated reaction. Denmark has given us the right opening line. Now we should finish the paragraph: consent or block. Label it, prove it, or remove it.

Judge Failla’s Opinion in Dow Jones v. Perplexity: RAG as Mechanism of Infringement

Judge Failla’s opinion in Dow Jones v. Perplexity doesn’t just keep the case alive—it frames RAG itself as the act of copying, and raises the specter of inducement liability under Grokster.

Although Judge Katherine Polk Failla’s August 21, 2025 opinion in Dow Jones & Co. v. Perplexity is technically a procedural ruling denying Perplexity’s motions to dismiss or transfer, Judge Failla offers an unusually candid window into how the Court may view the substance of the case. In particular, her treatment of retrieval-augmented generation (RAG) is striking: rather than describing it as Perplexity’s background plumbing, she identified it as the mechanism by which copyright infringement and trademark misattribution allegedly occur.  

Remember, Perplexity’s CEO described the company to Forbes as “It’s almost like Wikipedia and ChatGPT had a kid.” I’m still looking for that attribution under the Wikipedia Creative Commons license.

As readers may recall, I’ve been very interested in RAG as an open door for infringement actions, so naturally this discussion caught my eye. So we’re all on the same page: retrieval-augmented generation (RAG) uses a “vector database” to expand an AI system’s knowledge beyond what is locked in its training data, including, for example, recent news sources.

When you prompt a RAG-enabled model, it first searches the database for context, then weaves that information into its generated answer. This architecture makes outputs more accurate, current, and domain-specific, but it also raises questions about copyright, data governance, and intentional use of third-party content, largely because RAG relies on information outside the model’s training data. For example, if I queried “single bullet theory,” the AI might have a copy of the Warren Commission report but would need to go out on the web for the latest declassified JFK materials or news reports about those materials to give a complete answer.
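Since we’re on the subject, here is a toy Python sketch of that retrieve-then-generate loop. It is only an illustration of the architecture: a keyword-overlap score stands in for a real vector database, generate() is a placeholder for the call to a large language model, and none of the names or documents come from any actual system.

```python
# Toy illustration of the retrieve-then-generate loop described above.
# A real RAG system uses embeddings and a vector database; here a simple
# keyword-overlap score stands in for similarity search, and generate()
# is a placeholder for an LLM call.

SOURCES = {
    "warren-commission": "The Warren Commission concluded that a single bullet ...",
    "declassified-2025": "Newly declassified materials released this year describe ...",
}

def score(query: str, text: str) -> int:
    return len(set(query.lower().split()) & set(text.lower().split()))

def retrieve(query: str, k: int = 2) -> list[str]:
    """Pick the k source passages most similar to the query (the 'R' in RAG)."""
    ranked = sorted(SOURCES.values(), key=lambda t: score(query, t), reverse=True)
    return ranked[:k]

def generate(prompt: str) -> str:
    # Placeholder for the LLM: echoes the grounded prompt instead of answering.
    return f"[LLM answer grounded in retrieved context]\n{prompt[:200]}..."

def rag_answer(query: str) -> str:
    context = "\n\n".join(retrieve(query))                       # retrieval step
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    return generate(prompt)                                      # generation step

print(rag_answer("single bullet theory latest declassified materials"))
```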

You can also think of Google Search or Bing as a kind of RAG index—and you can see how that would give search engines a big leg up in the AI race, even though none of their various safe harbors, Creative Commons licenses, Google Books or direct licenses were for this RAG purpose.  So there’s that.

Judge Failla’s RAG Analysis

As Judge Failla explained, Perplexity’s system “relies on a retrieval-augmented generation (‘RAG’) database, comprised of ‘content from original sources,’ to provide answers to users,” with the indices “comprised of content that [Perplexity] want[s] to use as source material from which to generate the ‘answers’ to user prompts and questions.’” The model then “repackages the original, indexed content in written responses … to users,” with the RAG technology “tell[ing] the LLM exactly which original content to turn into its ‘answer.’” Or as another judge once said, “One who distributes a device with the object of promoting its use to infringe copyright, as shown by clear expression or other affirmative steps taken to foster infringement, going beyond mere distribution with knowledge of third-party action, is liable for the resulting acts of infringement by third parties using the device, regardless of the device’s lawful uses.” Or something like that.

On that basis, Judge Failla recognized Plaintiffs’ claim that infringement occurred at multiple points in the process: “first, by ‘copying a massive amount of Plaintiffs’ copyrighted works as inputs into its RAG index’; second, by providing consumers with outputs that ‘contain full or partial verbatim reproductions of Plaintiffs’ copyrighted articles’; and third, by ‘generat[ing] made-up text (hallucinations) … attribut[ed] … to Plaintiffs’ publications using Plaintiffs’ trademarks.’” In her jurisdictional analysis, Judge Failla stressed that these “inputs are significant because they cause Defendant’s website to produce answers that are reproductions or detailed summaries of Plaintiffs’ copyrighted works,” thus tying the alleged misconduct directly to Perplexity’s business activities in New York, although she was not making a substantive ruling in this instance.

What is RAG and Why It Matters

Retrieval-augmented generation is a method that pairs two steps: (1) retrieval of content from external databases or the open web, and (2) generation of a synthetic answer using a large language model. Instead of relying solely on the model’s pre-training, RAG systems point the model toward selected source material such as news articles, scientific papers, and legal databases, and instruct it to weave that content into an answer.

From a user perspective, this can produce more accurate, up-to-date results. But from a legal perspective, the same pipeline can directly copy or closely paraphrase copyrighted material, often without attribution, and can even misattribute hallucinated text to legitimate sources. This dual role of RAG—retrieving copyrighted works as inputs and reproducing them as outputs—is exactly what made it central to Judge Failla’s opinion procedurally, and it may also show where she is headed substantively.

RAG in Frontier Labs

RAG is not a niche technique. It has become standard practice at nearly every frontier AI lab:

– OpenAI uses retrieval plug-ins and Bing integrations to ground ChatGPT answers.
– Anthropic deploys RAG pipelines in Claude for enterprise customers.
– Google DeepMind integrates RAG into Gemini and search-linked models.
– Meta builds retrieval into LLaMA-based applications and its experimental assistants.
– Microsoft has made Copilot fundamentally a RAG product, pairing Bing with GPT.
– Cohere, Mistral, and other independents market RAG as a service layer for enterprises.

Why Dow Jones Matters Beyond Perplexity

Perplexity just happened to draw the first reported opinion, as far as I know. The technical structure of its answer engine—indexing copyrighted content into a RAG system, then repackaging it for users—is not unique. It mirrors how the rest of the frontier labs are building their flagship products. What makes this case important is not that Perplexity is an outlier, but that it illustrates the legal vulnerability inherent in the RAG architecture itself.

Is RAG the Low-Hanging Fruit?

What makes this case so consequential is not just that Judge Failla recognized, at least for this ruling, that RAG is at least one mechanism of infringement, but that RAG cases may be easier to prove than disputes over model training inputs. Training claims often run into evidentiary hurdles: plaintiffs must show that their works were included in massive opaque training corpora, that those works influenced model parameters, and that the resulting outputs are “substantially similar.” That chain of proof can be complex and indirect.

By contrast, RAG systems operate in the open. They index specific copyrighted articles, feed them directly into a generation process, and sometimes output verbatim or near-verbatim passages. Plaintiffs can point to before-and-after evidence: the copyrighted article itself, the RAG index that ingested it, and the system’s generated output reproducing it. That may make copyright infringement far more straightforward to demonstrate than in a pure training case.
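To see why that before-and-after showing is so tractable, here is a rough Python sketch of one way verbatim overlap could be quantified; the 8-word window size and the two sample strings are my own illustrative choices, not anything drawn from the complaint or the opinion.

```python
# Rough sketch of quantifying verbatim overlap between a source article and a
# generated answer using 8-word "shingles." The window size and the sample
# strings are illustrative; real forensic analysis would handle normalization,
# paraphrase, and statistical significance far more carefully.

def shingles(text: str, n: int = 8) -> set[tuple[str, ...]]:
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def verbatim_overlap(article: str, output: str, n: int = 8) -> float:
    """Fraction of the output's n-word windows that appear verbatim in the article."""
    out = shingles(output, n)
    return len(out & shingles(article, n)) / len(out) if out else 0.0

article = ("The council voted on Tuesday to approve the new transit plan after "
           "months of public hearings and debate over funding sources")
answer = ("According to reports the council voted on Tuesday to approve the new "
          "transit plan after months of public hearings")

print(f"{verbatim_overlap(article, answer):.0%} of the answer's 8-word windows are verbatim")
```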

For that reason, Perplexity just happened to be first, but it will not be the last. Nearly every frontier lab (OpenAI, Anthropic, Google, Meta, Microsoft) is relying on RAG as the architecture of choice to ground its models. If RAG is the legal weak point, this opinion could mark the opening salvo in a much broader wave of litigation aimed at AI platforms, with courts treating RAG not as a technical curiosity but as a direct, provable conduit for infringement.

And lurking in the background is a bigger question: is Grokster going to be Judge Failla’s roundhouse kick? That irony is delicious.  By highlighting how Perplexity (and the others) deliberately designed its system to ingest and repackage copyrighted works, the opinion sets the stage for a finding of intentionality that could make RAG the twenty-first-century version of inducement liability.

TikTok After Xi’s Qiushi Article: Why China’s Security Laws Are the Whole Ballgame

Xi Jinping’s new article in Qiushi (the Chinese Communist Party Central Committee’s flagship theoretical public policy journal) repackages a familiar message: China will promote the “healthy and high-quality development” of the private economy, but under the leadership of the Chinese Communist Party. This is expressed in China’s statutory law as the “Private Economy Promotion Law.”  And of course we have to always remember that under the PRC “constitution,” statutes are primarily designed to safeguard the authority and interests of the Chinese Communist Party (CCP) rather than to protect the rights and privileges of individuals—because individuals don’t really have any protections against the CCP.  

For U.S. policymakers weighing what to do about TikTok, this is not reassuring rhetoric in my view. It is instead a reminder that, in China, private platforms ultimately operate within a legal-and-political framework that gives state-security organs binding powers over companies, the Chinese people, and their data.

According to the South China Morning Post:

In another show of support for China’s private sector, Beijing has released the details of a speech from President Xi Jinping which included vows the country would guarantee a level playing field for private firms, safeguard entrepreneurs’ lawful rights and interests, and step up efforts to solve their long-standing challenges, including overdue payments.

The full address, delivered in February to a group of China’s leading entrepreneurs, had not been made available to the public before Friday, when Qiushi – the ruling Communist Party’s theoretical journal – posted a transcript on its website.

“The policies and measures to promote the development of the private economy must be implemented in a solid and thorough manner,” Xi said in February. “Whatever the party Central Committee has decided must be resolutely carried out – without ambiguity, delay, or compromise.”

I will try to explain why the emphasis of Xi’s policy speech matters, and why the divest-or-ban logic for TikTok under US law (and it is a law) remains intact regardless of what may seem like “friendly” language about private enterprise. It’s also worth remembering that whatever the result of the TikTok divestment may be, it’s just another stop along the way in the Sino-American struggle—or something more kinetic. As Clausewitz wrote in his other famous quotation, the outcomes produced by war are never final. (See Book I, Chapter 1, aka the good stuff.) Even the most decisive battlefield victory may have no lasting political achievement. As we have seen time and again, the termination of one conflict often produces the necessary conditions for future conflict.

What Xi’s piece actually signals

Xi’s article combines pro-private-sector language (property-rights protection, market access, financing support) with an explicit call for Party leadership and ideological guidance in the private economy. In other words, the promise is growth within control, and not just any control but the control of the Party. There is no carve‑out from national-security statutes, no “TikTok exemption,” and no suggestion that private firms can decline cooperation when state-security laws apply, consistent with China’s “unrestricted warfare” doctrine.

Recall that the CCP has designated the TikTok algorithm as a strategic national asset, and “national” in this context and the context of Xi’s article means the Chinese Communist Party of which Xi is President-for-Life.  This brother is not playing.

The laws that define the TikTok divestment risk (not the press releases)

The core concern about TikTok is jurisdiction, or the CCP’s extra-territorial jurisdiction, a concept we don’t fully comprehend. Xi’s Qiushi article promises support for private firms under Party leadership. That means the National Intelligence Law, Cybersecurity Law, Counter‑Espionage Law, and China’s data‑export regime remain in force and are controlling authority over companies like TikTok. For U.S. reviewers like CFIUS, that means ByteDance‑controlled TikTok is, by design, subject to compelled, confidential cooperation with state‑security organs.

As long as the TikTok platform and algorithm are ultimately controlled by a company subject to the CCP’s security laws, U.S. reviewers correctly assume those laws can reach the service, even if operations are partly localized abroad. MTP readers will recall the four pillars of China’s statutory security regime that matter most in this context:

National Intelligence Law (2017). Requires all organizations and citizens to support, assist, and cooperate with state intelligence work, and to keep that cooperation secret. Corporate policies and NDAs do not trump statutory duties, especially in the PRC.

Cybersecurity Law (2017). Obligates “network operators” to provide technical support and assistance to public‑security and state‑security organs, and sets the baseline for security reviews and Multi‑Level Protection (MLPS) obligations.

Counter‑Espionage Law (2023 amendment). Broadens the scope of what counts as “espionage” to include data, documents, and materials related to national security or the “national interest,” increasing the zone where requests can be justified.

Data regime (Data Security Law (DSL), Personal Information Protection Law (PIPL), and Cyberspace Administration of China (CAC) regulatory measures). Controls cross‑border transfers through security assessments or standard contracts and allows denials on national‑security grounds. Practically, many datasets can’t leave China without approval—and keys/cryptography used onshore must follow onshore rules.

None of the above is changed by the Private Economy Promotion Law or by Xi’s supportive tone toward entrepreneurs. Those laws remain controlling in any conflict, including a conflict with the U.S. TikTok divest-or-ban law.

It is these laws that are at the bottom of U.S. concerns about TikTok’s data scraping–it is, after all, spyware with a soundtrack. There’s a strong case to be made that U.S. artists, songwriters, creators, and fans are all dupes of TikTok as a data collection tool in a country that requires its companies to hand over to the Ministry of State Security all it needs to support the intelligence mission (the MSS is like the FBI and CIA in one agency, with a heavy ration of FSB).

Zhang Yiming, founder of ByteDance and former public face of TikTok, stepped down as CEO in 2021 but remains Chairman and key shareholder. He controls more than half of the company’s voting rights and retains about a 21% stake. That also makes him China’s richest man. Though low-profile publicly, he is actively guiding ByteDance’s AI strategy and long-term direction. Mr. Zhang does not discuss this part.  It should come as no surprise–according to his Wikipedia page, Mr. Zhang understands what happens when you don’t toe the Party line:

ByteDance’s first app, Neihan Duanzi, was shut down in 2018 by the National Radio and Television Administration. In response, Zhang issued an apology stating that the app was “incommensurate with socialist core values”, that it had a “weak” implementation of Xi Jinping Thought, and promised that ByteDance would “further deepen cooperation” with the ruling Chinese Communist Party to better promote its policies.

ByteDance’s AI strategy is built on aggressive large-scale data scraping including from TikTok. Its proprietary crawler, ByteSpider, dominates global web-scraping traffic, collecting vast amounts of content at speeds far beyond rivals like OpenAI. This raw data fuels TikTok’s recommendation engine and broader generative AI development, giving ByteDance rapid adaptability and massive training inputs. Unlike OpenAI, which emphasizes curated datasets, ByteDance prioritizes scale, velocity, and real-time responsiveness, integrating insights from TikTok user behavior and the wider internet. This approach positions ByteDance as a formidable AI competitor, leveraging its enormous data advantage to strengthen consumer products, expand generative AI capabilities, and consolidate global influence.

I would find it very, very hard to believe that Mr. Zhang is not a member of the Chinese Communist Party, but in any event he understands very clearly what his role is under the National Intelligence Law and related statutes.  Do you think that standing up to the MSS to protect the data privacy of American teenagers is consistent with “Xi Jinping Thought”?

Why this makes TikTok’s case harder, not easier

For Washington, the TikTok problem is not market access or entrepreneurship. It’s the data governance chain. Xi’s article underscores that private firms are expected to align with the Party Center’s decisions and to embed Party structures. Combine that political expectation with the statutory duties described above, and you get a simple inference: if China’s security services want something—from data access to algorithmic levers—ByteDance and its affiliates are obliged to give it to them or at least help, and are often barred from disclosing that help.

That’s why divestiture has become the U.S. default: the only durable mitigation against TikTok is to place ownership and effective control outside PRC legal reach, with clean technical and organizational separation (code, data, keys, staffing, and change control). Anything short of that leaves the fundamental risk untouched.

Where the U.S. law and process fit

Congress’s divest‑or‑ban statute requires TikTok to be controlled by an entity not subject to PRC direction, on terms approved by U.S. authorities. Beijing’s export‑control rules on recommendation algorithms make a full transfer difficult if not impossible; that’s why proposals have floated a U.S. “fork” with separate code, ops, and data. But Xi’s article doesn’t move the ball: it simply reinforces that CCP jurisdiction over private platforms is a feature, not a bug, of the system.

Practical implications (policy and product)

For policymakers: Treat Xi’s article as confirmation that political control and security statutes are baked in. Negotiated “promises” won’t outweigh legal duties to assist intelligence work. Any compliance plan that assumes voluntary transparency or a “hands‑off” approach is fragile by design.

For platforms: If you operate in China, assume compelled and confidential cooperation is possible, and in this case almost a certainty if it hasn’t already happened. Architect China operations as least-privilege, least-data environments; segregate code and keys; plan for outbound data barriers as a normal business condition.

For users and advertisers: The risk discussion is about governance and jurisdiction, not whether a particular management team “would never do that.” They would. Corporate intent can’t override state legal authority, particularly when the Party’s Ministry of State Security is doing the asking.

Now What?

Xi’s article does not soften TikTok’s regulatory problem in the United States. If anything, it sharpens it by reiterating that the private economy advances under the Party’s direction, never apart from it. When you combine Mr. Zhang’s role at ByteDance with ByteDance’s place among China’s AI national champions, it’s pretty obvious whose side TikTok is on.

Wherever the divest-or-ban legislation ends up, it will inevitably set the stage for the next conflict. If I had to bet today, my bet is that Xi has no intention of making a deal with the US that involves giving up the TikTok algorithm, which the Party’s export-control rules cover, or giving up access to US user data for AI training.

From Fictional “Looking Backward” to Nonfiction Silicon Valley: Will Technologists Crown the New Philosopher‑Kings?

More than a century ago, writers like Edward Bellamy and Edward Mandell House asked a question that feels as urgent in 2025 as it did in their era: Should society be shaped by its people, or designed by its elites? Both grappled with this tension in fiction. Bellamy’s Looking Backward (1888) imagined a future society run by rational experts — technocrats and bureaucrats centralizing economic and social life for the greater good. House’s Philip Dru: Administrator (1912) went a step further, envisioning an American civil war where a visionary figure seizes control from corrupt institutions to impose a new era of equity and order.  Sound familiar?

Today, Silicon Valley’s titans are rehearsing their own versions of these stories. In an era dominated by artificial intelligence, climate crisis, and global instability, the tension between democratic legitimacy and technocratic efficiency is more pronounced than ever.

The Bellamy Model: Eric Schmidt and Biden’s AI Order

President Biden’s sweeping Executive Order on AI, issued in late 2023, feels like a chapter lifted from Looking Backward. Its core premise is unmistakable: trust our national champion “trusted” technologists to design and govern the rules for an era shaped by artificial intelligence. At the heart of this approach is Eric Schmidt, former CEO of Google and a key advisor in shaping the AI order, at least according to Eric Schmidt.

Schmidt has long advocated for centralizing AI policymaking within a circle of vetted, elite technologists — a belief reminiscent of Bellamy’s idealistic vision. According to Schmidt, AI and other disruptive technologies are too pivotal, too dangerous, and too impactful to be left to messy democratic debates. For people in Schmidt’s cabal, this approach is prudent: a bulwark against AI’s darker possibilities. But it doesn’t do much to protect against darker possibilities from AI platforms.  For skeptics like me, it raises a haunting question posed by Bellamy himself: Are we delegating too much authority to a technocratic elite?

The Philip Dru Model: Musk, Sacks, and Trump’s Disruption Politics

Meanwhile, across the aisle, another faction of Silicon Valley is aligning itself with Donald Trump and making a very different bet on the future. Here, the nonfiction playbook is closer to the fictional Philip Dru. In House’s novel, an idealistic and forceful figure emerges from a broken system to impose order and equity. Enter Elon Musk and David Sacks, both positioning themselves as champions of disruption, backed by immense platforms, resources, and their own venture funds.

Musk openly embraces a worldview in which technologists have both the tools and the mandate to save society by reshaping transportation, energy, space, and AI itself. Meanwhile, Sacks advocates for Silicon Valley as a de facto policymaker, disrupting traditional institutions and aligning with leaders like Trump to advance a new era of innovation-driven governance—with no Senate confirmation or even a security clearance. This competing cabal operates with the implicit belief that traditional democratic institutions, inevitably bogged down by process, gridlock, and special interests, can no longer solve society’s biggest problems. To Special Government Employees like Musk and Sacks, their disruption is not a threat to democracy, but its savior.

A New Gilded Age? Or a New Social Contract?

Both threads — Biden and Schmidt’s technocratic centralization and Musk, Sacks, and Trump’s disruption-driven politics — grapple with the legacy of Bellamy and House. In the Gilded Age that inspired those writers, industrial barons sought to justify their dominance with visions of rational, top-down progress. Today’s Silicon Valley billionaires carry a similar vision for the digital era, suggesting that elite technologists, like Plato’s “guardians” in The Republic, can govern more effectively than traditional democratic institutions.

But at what cost? Will AI policymaking and its implementation evolve as a public endeavor, shaped by citizen accountability? Or will it be molded by corporate elites making decisions in the background? Will future leaders consolidate their role as philosopher-kings and benevolent administrators — making themselves indispensable to the state?

The Stakes Are Clear

As the lines between Silicon Valley and Washington continue to blur, the questions posed by Bellamy and House have never been more relevant: Will technologist philosopher-kings write the rules for our collective future? Will democratic institutions evolve to balance AI and the climate crisis effectively? Will the White House of 2025 (and beyond) cede authority to the titans of Silicon Valley? In this pivotal moment, America must ask itself: What kind of future do we want — one that is chosen by its citizens, or one that is designed for its citizens? The answer will define the character of American democracy for the rest of the 21st century — and likely beyond.

Shilling Like It’s 1999: Ars, Anthropic, and the Internet of Other People’s Things

Ars Technica just ran a piece headlined “AI industry horrified to face largest copyright class action ever certified.”

It’s the usual breathless “innovation under siege” framing—complete with quotes from “public interest” groups that, if you check the Google Shill List submitted to Judge Alsup in the Oracle case and Public Citizen’s Mission Creep-y, have long been in the paid service of Big Tech. Judge Alsup…hmmm…isn’t he the judge in the very Anthropic case that Ars is going on about?

Here’s what Ars left out: most of these so-called advocacy outfits—EFF, Public Knowledge, CCIA, and their cousins—have been doing Google’s bidding for years, rebranding corporate priorities as public interest. It’s an old play: weaponize the credibility of “independent” voices to protect your bottom line.

The article parrots the industry’s favorite excuse: proving copyright ownership is too hard, so these lawsuits are bound to fail. That line would be laughable if it weren’t so tired; it’s like elder abuse. We live in the age of AI deduplication, manifest checking, and robust content hashing—technologies the AI companies themselves use daily to clean, track, and optimize their training datasets. If they can identify and strip duplicates to improve model efficiency, they can identify and track copyrighted works. What they mean is: “We’d rather not, because it would expose the scale of our free-riding.”
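To make the point concrete, here is a minimal sketch of the kind of content hashing and manifest checking described above. The directory name, function names, and file layout are hypothetical, and real training pipelines are far more elaborate, but the principle holds: if you can fingerprint a document well enough to strip duplicates, you can fingerprint it well enough to track whose work it is.

```python
import hashlib
import json
from collections import defaultdict
from pathlib import Path


def content_hash(text: str) -> str:
    """Return a stable fingerprint for a document's text.

    Whitespace is collapsed and case is folded so trivially
    reformatted copies still hash to the same value.
    """
    normalized = " ".join(text.split()).lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()


def build_manifest(corpus_dir: str) -> dict[str, list[str]]:
    """Walk a directory of .txt files and record every document's hash.

    The same fingerprinting that flags duplicates can just as easily
    record which source (and which rightsholder's work) each hash came from.
    """
    manifest: dict[str, list[str]] = defaultdict(list)
    for path in Path(corpus_dir).rglob("*.txt"):
        text = path.read_text(encoding="utf-8", errors="ignore")
        manifest[content_hash(text)].append(str(path))
    return manifest


if __name__ == "__main__":
    # "training_corpus" is a hypothetical local folder of plain-text documents.
    manifest = build_manifest("training_corpus")
    duplicates = {h: paths for h, paths in manifest.items() if len(paths) > 1}
    print(f"{len(manifest)} unique documents, {len(duplicates)} duplicated fingerprints")
    Path("manifest.json").write_text(json.dumps(manifest, indent=2))
```

The same manifest that speeds up training is, in effect, an audit trail: exactly the thing the industry claims is too hard to build.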

That’s the unspoken truth behind these lawsuits. They’re not about “stifling innovation.” They’re about holding accountable an industry that’s built its fortunes on what can only be called the Internet of Other People’s Things—a business model where your creative output, your data, and your identity are raw material for someone else’s product, without permission, payment, or even acknowledgment.

Instead of cross-examining these corporate talking points like, you know, journalists, Ars lets them pass unchallenged, turning what could have been a watershed moment for transparency into a PR assist. That’s not journalism—it’s message laundering.

The lawsuit doesn’t threaten the future of AI. It threatens the profitability of a handful of massive labs—many backed by the same investors and platforms that bankroll these “public interest” mouthpieces. If the case succeeds, it could force AI companies to abandon the Internet of Other People’s Things and start building the old-fashioned way: by paying for what they use.

Come on, Ars. Do we really have to go through this again? If you’re going to quote industry-adjacent lobbyists as if they were neutral experts, at least tell readers who’s paying the bills. Otherwise, it’s just shilling like it’s 1999.

AI’s Manhattan Project Rhetoric, Clearance-Free Reality

Every time a tech CEO compares frontier AI to the Manhattan Project, take a breath—and remember what that actually means. Master spycatcher James Jesus Angleton (a.k.a. Matt Damon in The Good Shepherd) is rolling in his grave. And like most elevator-pitch talking points, the analogy starts to fall apart on inspection.

The Manhattan Project wasn’t just a moonshot scientific collaboration. It was the most tightly controlled, security-obsessed R&D operation in American history. Every physicist, engineer, and janitor involved had a federal security clearance. Facilities were locked down under the military command of General Leslie Groves. Communications were monitored. Access was compartmentalized. And still—still—the Soviets penetrated it. See Klaus Fuchs. Let’s understand just how secret the Manhattan Project was—General Curtis LeMay had no idea it was happening until he was asked to set up facilities for the Enola Gay at his bomber base on Tinian a few months before the first bomb was dropped. If you want the details of any frontier lab, just pick up the newspaper. Not nearly the same thing. There were no chatbots involved, and there were no Special Government Employees with no security clearance.

Oppie Sacks

So when today’s AI executives name-drop Oppenheimer and invoke the gravity of dual-use technologies, what exactly are they suggesting? That we’re building world-altering capabilities without any of the safeguards that even the AI Whiz Kids, by putting the Manhattan Project talking point in the pitch deck, admit are historically necessary?

These frontier labs aren’t locked down. They’re open-plan. They’re not vetting personnel. They’re recruiting from Discord servers. They’re not subject to classified environments. They’re training military-civilian dual-use models on consumer cloud platforms. And when questioned, they invoke private-sector privilege and push back against any suggestion of state or federal regulation. And here’s a newsflash—requiring a security clearance for scientific work in the vital national interest is not regulation. (Neither is copyright, but that’s another story.)

Meanwhile, they’re angling for access to Department of Energy nuclear real estate, government compute subsidies, and preferred status in export policy—all under the justification of “national security” because, you know, China.  They want the symbolism of the Manhattan Project without the substance. They want to be seen as indispensable without being held accountable.

The truth is that AI is dual-use. It can power logistics and surveillance, language learning and warfare. That’s not theoretical—it’s already happening. China openly treats AI as part of its military-civil fusion strategy. Russia has targeted U.S. systems with information warfare bots. And our labs? They’re scraping the open internet and assuming the training data hasn’t been poisoned by the massive misinformation campaigns that are routine on Wikipedia, Reddit, and X.

If even the Manhattan Project—run under maximum secrecy—was infiltrated by Soviet spies, what are the chances that today’s AI labs, operating in the wide open, are immune? Wouldn’t a good spycatcher like Angleton assume these wunderkinds have already been penetrated?

We have no standard vetting for employees. No security clearances. No model release controls. No audit trail for pretraining data integrity. And no clear protocol for foreign access to model weights, inference APIs, or sensitive safety infrastructure. It’s not a matter of if. It’s a matter of when—or more likely, a matter of already.

Remember: nobody got rich working on the Manhattan Project. That’s another big difference. These guys are in it for the money, make no mistake.

So when you hear the Manhattan Project invoked again, ask the follow-up question: Where’s the security clearance?  Where’s the classification?  Where’s the real protection?  Who’s playing the role of Klaus Fuchs?

Because if AI is our new Manhattan Project, then running it without security is more than hypocrisy. It’s incompetence at scale.