Less Than Zero: The Significance of the Per Stream Rate and Why It Matters

Spotify’s insistence that it’s “misleading” to compare services based on a derived per-stream rate reveals exactly how out of touch the company has become with the very artists whose labor fuels its stock price. Artists experience streaming one play at a time, not as an abstract revenue pool or a complex pro-rata formula. Each stream represents a listener’s decision, a moment of engagement, and a microtransaction of trust. Dismissing the per-stream metric as irrelevant is a rhetorical dodge that shields Spotify from accountability for its own value proposition. (The same applies to all streamers, but Spotify is the only one that denies the reality of the per-stream rate.)

Spotify further claims that users don’t pay per stream but for access, as if that negates the relevance of the artist’s per-stream payments. It is fallacious to claim that because Spotify users pay a subscription fee for “access,” there is no connection between that payment and any one artist they stream. This argument treats music like a public utility rather than a marketplace of individual works. In reality, users subscribe because of the artists and songs they want to hear; the value of “access” is wholly derived from those choices and the fans that artists drive to the platform. Each stream represents a conscious act of consumption and engagement that justifies compensation.

Economically, the subscription fee is not paid into a vacuum — it forms a revenue pool that Spotify divides among rights holders according to streams. Thus, the distribution of user payments is directly tied to which artists are streamed, even if the payment mechanism is indirect. To say otherwise erases the causal relationship between fan behavior and artist earnings.
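
To make the mechanics concrete, here is a minimal sketch of the pro-rata math with invented round numbers; the real revenue share, pool size, and stream counts are confidential and vary by deal, so nothing below is Spotify’s actual formula.

```python
# Hypothetical pro-rata payout math with invented numbers -- not Spotify's
# actual rates or accounting, just the structure described above.

subscription_revenue = 1_000_000.00   # one month of subscriber revenue (USD), assumed
royalty_share = 0.70                  # assumed share of revenue paid to rights holders
total_streams = 250_000_000           # all streams on the service that month, assumed
artist_streams = 100_000              # streams of one artist's catalog, assumed

royalty_pool = subscription_revenue * royalty_share
per_stream_rate = royalty_pool / total_streams      # the "derived" per-stream rate
artist_payout = artist_streams * per_stream_rate    # before label/distributor splits

print(f"Derived per-stream rate: ${per_stream_rate:.6f}")            # $0.002800
print(f"Rights-holder payout for this artist: ${artist_payout:.2f}") # $280.00
```

Every stream a fan plays moves money inside that pool, which is exactly why the derived per-stream rate is a meaningful number to the artist even though no subscriber literally pays per play.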

The “access” framing serves only to obscure accountability. It allows Spotify to argue that artists are incidental to its product when, in truth, they are the product. Without individual songs, there is nothing to access. The subscription model may bundle listening into a single fee, but it does not sever the fundamental link between listener choice and the artist’s right to be paid fairly for that choice.

Less Than Zero Effect: AI, Infinite Supply, and Erasing the Artist

In fact, this “access” argument may undermine Spotify’s point entirely. If subscribers pay for access, not individual plays, then there’s an even greater obligation to ensure that subscription revenue is distributed fairly across the artists who generate the listening engagement that keeps fans paying each month. The opacity of this system—where listeners have no idea how their money is allocated—protects Spotify, not artists. If fans understood how little of their monthly fee reached the musicians they actually listen to, they might demand a user-centric payout model or direct licensing alternatives. Or they might be more inclined to use a site like Bandcamp. And Spotify really doesn’t want that.
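
For contrast, here is an equally rough sketch of what a user-centric payout would change, again with invented numbers and only two subscribers: instead of pooling every fee and dividing by all streams on the service, each subscriber’s fee follows only the artists that subscriber actually played.

```python
# Toy comparison of pro-rata vs. user-centric allocation for two subscribers.
# Fees, artists, and listening habits are all invented for illustration.

fee = 7.00  # assumed royalty portion of each subscriber's monthly fee (USD)

listening = {
    "superfan":   {"indie_artist": 30},                      # plays one artist only
    "heavy_user": {"major_artist": 960, "indie_artist": 10},
}

# Pro-rata: pool both fees, divide by ALL streams across the service.
pool = fee * len(listening)
total_streams = sum(sum(plays.values()) for plays in listening.values())
pro_rata = {}
for plays in listening.values():
    for artist, n in plays.items():
        pro_rata[artist] = pro_rata.get(artist, 0) + n * pool / total_streams

# User-centric: each subscriber's fee is split only among what they played.
user_centric = {}
for plays in listening.values():
    user_total = sum(plays.values())
    for artist, n in plays.items():
        user_centric[artist] = user_centric.get(artist, 0) + fee * n / user_total

print("pro-rata:    ", {a: round(v, 2) for a, v in pro_rata.items()})
print("user-centric:", {a: round(v, 2) for a, v in user_centric.items()})
# pro-rata:     {'indie_artist': 0.56, 'major_artist': 13.44}
# user-centric: {'indie_artist': 7.07, 'major_artist': 6.93}
```

Under pro-rata the superfan’s entire fee is effectively redirected to whoever racks up the most streams service-wide; under user-centric it follows the superfan’s own listening. That allocation question is exactly what subscribers are never shown.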

And to anticipate Spotify’s typical deflection—that low payments are the label’s fault—that’s not correct either. Spotify sets the revenue pool, defines the accounting model, and negotiates the rates. Labels may divide the scraps, but it’s Spotify that decides how small the pie is in the first place, either through its distribution deals or by exercising pricing power.

Three Proofs of Intention

Daniel Ek, the Spotify CEO and arms dealer, made a Dickensian statement that tells you everything you need to know about how Spotify perceives its role as the Streaming Scrooge—“Today, with the cost of creating content being close to zero, people can share an incredible amount of content”.

That statement perfectly illustrates how detached he has become from the lived reality of the people who actually make the music that powers his platform’s market capitalization (which allows him to invest in autonomous weapons). First, music is not generic “content.” It is art, labor, and identity. Reducing it to “content” flattens the creative act into background noise for an algorithmic feed. That’s not rhetoric; it’s a statement of his values. Of course, in his defense, “near zero cost” to a billionaire like Ek is not the same as “near zero cost” to any artist. This disharmonious statement shows that Daniel Ek mistakes the harmony of the people for the noise of the marketplace—arming algorithms instead of artists.

Second, the notion that the cost of creating recordings is “close to zero” is absurd. Real artists pay for instruments, studios, producers, engineers, session musicians, mixing, mastering, artwork, promotion, and often the cost of simply surviving long enough to make the next record or write the next song. Even the so-called “bedroom producer” incurs real expenses—gear, software, electricity, distribution, and years of unpaid labor learning the craft. None of that is zero. As I said in the UK Parliament’s Inquiry into the Economics of Streaming, when the day comes that a soloist’s highest aspiration is to have their music included on a Spotify “sleep” playlist, something has gone really wrong.

Ek’s comment reveals the Silicon Valley mindset that art is a frictionless input for data platforms, not an enterprise of human skill, sacrifice, and emotion. When the CEO of the world’s dominant streaming company trivializes the cost of creation, he’s not describing an economy—he’s erasing one.

While Spotify tries to distract from the “per-stream rate,” it conveniently ignores the reality that whatever it pays “the music industry” or “rights holders” for all the artists signed to one label still must be broken down into actual payments to the individual artists and songwriters who created the work. Labels divide their share among recording artists; publishers do the same for composers and lyricists. If Spotify refuses to engage on per-stream value, what it’s really saying is that it doesn’t want to address the people behind the music—the very creators whose livelihoods depend on those streams. In pretending the per-stream question doesn’t matter, Spotify admits the artist doesn’t matter either.

Less Than Zero or Zeroing Out: Where Do We Go from Here?

The collapse of artist revenue and the rise of AI aren’t coincidences; they’re two gears in the same machine. Streaming’s economics rewards infinite supply at near-zero unit cost, which is really the nugget of truth in Daniel Ek’s statements. This is evidenced by Spotify’s dalliances with Epidemic Sound and the like. But human-created music is finite and costly; AI music is effectively infinite and cheap. For a platform whose margins improve as payout obligations shrink, the logical endgame is obvious: keep the streams, remove the artists.

  • Two-sided market math. Platforms sell audience attention to advertisers and access to subscribers. Their largest variable cost is royalties. Every substitution of human tracks with synthetic “sound-alikes,” noise, functional audio, or AI mashups reduces royalty liability while keeping listening hours—and revenue—intact. You count the AI streams just long enough to dilute the royalty pool, then you remove them from the system, only to be replaced by more AI tracks. Spotify’s screening is just leaky enough to miss the AI tracks for at least one royalty accounting period. (A simple numerical sketch of this dilution follows this list.)
  • Perpetual content glut as cover. Executives say creation costs are “near zero,” justifying lower per-stream value. That narrative licenses a race to the bottom, then invites AI to flood the catalog so the floor can fall further.
  • Training to replace, not to pay. Models ingest human catalogs to learn style and voice, then output “good enough” music that competes with the very works that trained them—without the messy line item called “artist compensation.”
  • Playlist gatekeeping. When discovery is centralized in editorial and algorithmic playlists, platforms can steer demand toward low-or-no-royalty inventory (functional audio, public-domain, in-house/commissioned AI), starving human repertoire while claiming neutrality.
  • Investor alignment. The story that scales is not “fair pay”; it’s “gross margin expansion.” AI is the lever that turns culture into a fixed cost and artists into externalities.
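
Here is the numerical sketch of that dilution promised in the first bullet, with invented figures: hold the royalty pool and human listening constant, add synthetic tracks to the pro-rata denominator, and watch the per-stream value and the human share fall.

```python
# Invented numbers: what happens to human artists' payouts when synthetic
# tracks are counted in the same pro-rata pool. Purely illustrative.

royalty_pool = 700_000.00       # fixed monthly pool (USD), assumed
human_streams = 250_000_000     # human-created catalog, held constant

for ai_streams in (0, 25_000_000, 100_000_000):
    total = human_streams + ai_streams
    rate = royalty_pool / total
    human_payout = human_streams * rate
    print(f"AI streams {ai_streams:>11,}: per-stream ${rate:.6f}, "
          f"human share ${human_payout:,.0f} ({human_payout / royalty_pool:.0%})")
# AI streams           0: per-stream $0.002800, human share $700,000 (100%)
# AI streams  25,000,000: per-stream $0.002545, human share $636,364 (91%)
# AI streams 100,000,000: per-stream $0.002000, human share $500,000 (71%)
```

If the platform (or a supplier it controls) keeps the royalties attributed to the synthetic streams, the human share shrinks even though fans’ listening and the subscription price never changed.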

Where does that leave us? Both streaming and AI “work” best for Big Tech, financially, when the artist is cheap enough to ignore or easy enough to replace. AI doesn’t disrupt that model; it completes it. It also gives cover through a tortured misreading of “national security,” a lens so natural for a Lord of War investor like Mr. Ek, who will no doubt give fellow Swede and one of the great Lords of War, Alfred Nobel, a run for his money. (Perhaps Mr. Ek will reimagine the Peace Prize.) If we don’t hard-wire licensing, provenance, and payout floors, the platform’s optimal future is music without musicians.

Plato conceived justice as each part performing its proper function in harmony with the whole—a balance of reason, spirit, and appetite within the individual and of classes within the city. Applied to AI synthetic works like those generated by Sora 2, injustice arises when this order collapses: when technology imitates creation without acknowledging the creators whose intellect and labor made it possible. Such systems allow the “appetitive” side—profit and scale—to dominate reason and virtue. In Plato’s terms, an AI trained on human art yet denying its debt to artists enacts the very disorder that defines injustice.

Denmark’s Big Idea: Protect Personhood from the Blob With Consent First and Platform Duty Built In

Denmark has given the rest of us a simple, powerful starting point: protect the personhood of citizens from the blob—the borderless slurry of synthetic media that can clone your face, your voice, and your performance at scale. Crucially, Denmark isn’t trying to turn name‑image‑likeness into a mini‑copyright. It’s saying something more profound: your identity isn’t a “work”; it’s you. It’s what is sometimes called “personhood.” That framing changes everything. It’s not commerce, it’s a human right.

The Elements of Personhood

Personhood treats the human being as a subject of moral consideration, not a piece of content. For example, the European Court of Human Rights reads Article 8 ECHR (“private life”) to include personal identity (name, identity integrity, etc.), protecting individual identity against unjustified interference. This is, of course, anathema to Silicon Valley, but the world takes a different view.

In fact, Denmark’s proposal echoes the Universal Declaration of Human Rights. It starts with dignity (Art. 1) and recognition of each person before the law (Art. 6), and it squarely protects private life, honor, and reputation against synthetic impersonation (Art. 12). It balances freedom of expression (Art. 19) with narrow, clearly labeled carve-outs, and it respects creators’ moral and material interests (Art. 27(2)). Most importantly, it delivers an effective remedy (Art. 8): a consent-first rule backed by provenance and cross-platform stay-down, so individuals aren’t forced into DMCA-style learned helplessness.

Why does this matter? Because the moment we call identity or personhood a species of copyright, platforms will reach for a familiar toolbox—quotation, parody, transient copies, text‑and‑data‑mining (TDM)—and claim exceptions to protect them from “data holders”. That’s bleed‑through: the defenses built for expressive works ooze into an identity context where they don’t belong. The result is an unearned permission slip to scrape faces and voices “because the web is public.” Denmark points us in the opposite direction: consent or it’s unlawful. Not “fair use,” not “lawful access,” not “industry custom,” not “data profile.” Consent. Pretty easy concept. It’s one of the main reasons tech executives keep their kids away from cell phones and social media.

Not Replicating the Safe Harbor Disaster

Think about how we got here. The first generation of the internet scaled by pushing risk downstream with a portfolio of safe harbors like the God-awful DMCA and Section 230 in the US. Platforms insisted they were deserving of blanket liability shields because they were special. They were “neutral pipes,” which no one believed then and no one believes now. These massive safe harbors hardened into a business model that likely added billions to the FAANG bottom line. We taught millions of rightsholders and users to live with learned helplessness: file a notice, watch copies multiply, rinse and repeat. Many users did not know they could even do that much, and frankly still may not. That DMCA‑era whack‑a‑mole turned into a faux license, a kind of “catch me if you can” bargain where exhaustion replaces consent.

Denmark’s New Protection of Personhood for the AI Era

Denmark’s move is a chance to break that pattern—if we resist the gravitational pull back to copyright. A fresh right of identity (called a “sui generis” right among Latin fans) would not be subject to copyright or database exceptions, including fair use, the DMCA, and TDM. In plain English: “publicly available” is not permission to clone your face, train on your voice, or fabricate your performance. Or your children, either. If an AI platform wants to use identity, they ask first. If they don’t ask, they don’t get to do it, and they don’t get to keep the model they trained on it. And, as in many other areas of law, children can’t consent.

That legal foundation unlocks the practical fix creators and citizens actually need: stay‑down across platforms, not endless piecemeal takedowns. Imagine a teacher discovers a convincing deepfake circulating on two social networks and a messaging app. If we treat that deepfake as a copyright issue under the old model, she sends three notices, then five, then twelve. Week two, the video reappears with a slight change. Week three, it’s re‑encoded, mirrored, and captioned. The message she receives under a copyright regime is “you can never catch up.” So why not just give up? Which, of course, in the world of Silicon Valley monopoly rents, is called the plan. That’s the learned helplessness Denmark gives us permission to reject.

Enforcing Personhood

How would the new plan work? First, we treat realistic digital imitations of a person’s face, voice, or performance as illegal absent consent, with only narrow, clearly labeled carve‑outs for genuine public‑interest reporting (no children, no false endorsement, no biometric spoofing risk, provenance intact). That’s the rights architecture: bright lines and human‑centered. Hence, “personhood.”

Second, we wire enforcement to succeed at internet scale. The way out of whack‑a‑mole is a cross‑platform deepfake registry operated with real governance. A deepfake registry doesn’t store videos; it stores non‑reversible fingerprints—exact file hashes for byte‑for‑byte matches and robust, perceptual fingerprints for the variants (different encodes, crops, borders). For audio, we use acoustic fingerprints; for video, scene/frame signatures. These markers will evolve, and so should the deepfake registry. One confirmed case becomes a family of identifiers that platforms check at upload and on re‑share. The first takedown becomes the last.
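
To show why that architecture defeats whack‑a‑mole, here is a minimal sketch of a registry lookup. The class, field names, and threshold are hypothetical; the exact match uses Python’s standard hashlib, and the perceptual fingerprints are assumed to come from a separate audio/video fingerprinting step that is not shown here.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Exact fingerprint: identical files always produce identical digests."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def hamming(a: str, b: str) -> int:
    """Bit distance between two hex-encoded perceptual fingerprints."""
    return bin(int(a, 16) ^ int(b, 16)).count("1")

class DeepfakeRegistry:
    """Hypothetical registry: stores fingerprints of confirmed cases, never the media."""

    def __init__(self, perceptual_threshold: int = 10):
        self.exact = set()        # SHA-256 digests of confirmed files
        self.perceptual = []      # hex perceptual fingerprints of confirmed cases
        self.threshold = perceptual_threshold

    def register(self, exact_hash: str, perceptual_fp: str) -> None:
        self.exact.add(exact_hash)
        self.perceptual.append(perceptual_fp)

    def matches(self, exact_hash: str, perceptual_fp: str) -> bool:
        # Byte-for-byte reupload, or a re-encode/crop close to a confirmed case.
        if exact_hash in self.exact:
            return True
        return any(hamming(perceptual_fp, fp) <= self.threshold
                   for fp in self.perceptual)
```

At upload time a platform would compute both fingerprints for the incoming file and call matches(); one confirmed case then blocks the original and its near-variants, which is the point of “the first takedown becomes the last.”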

Third, we pair that with provenance by default. Provenance isn’t a license; it’s evidence. When credentials are present, content is easier to authenticate, so there is an incentive to use them. Provenance is the rebar that turns legal rules into reliable, automatable processes. However, the absence of credentials doesn’t mean a free‑for‑all.

Finally, we put the onus where it belongs—on platforms. Europe’s Digital Services Act at least theoretically already replaced “willful blindness” with “notice‑and‑action” duties and oversight for very large platforms. Denmark’s identity right gives citizens a clear, national‑law basis to say: “This is illegal content—remove it and keep it down.” The platform’s job isn’t to litigate fair use in the abstract or hide behind TDM. It’s to implement upload checks, preserve provenance, run repeat‑offender policies, and prevent recurrences. If a case was verified yesterday, it shouldn’t be back tomorrow with a 10‑pixel border or other trivial alteration to defeat the rules.

Some will ask: what about creativity and satire? The answer is what it has always been in responsible speech law—more speech, not fake speech. If you’re lampooning a politician with clearly labeled synthetic speech, no implied endorsement, provenance intact, and no risk of biometric spoofing or fraud, you have defenses. The point isn’t to smother satire; it’s to end the pretense that satire requires open season on the biometric identities of private citizens and working artists.

Others will ask: what about research and innovation? Good research runs on consent, especially human subject research (see 45 C.F.R. part 46). If a lab wants to study voice cloning, it recruits consenting participants, documents scope and duration, and keeps data and models in controlled settings. That’s science. What isn’t science is scraping the voices of a country’s population “because the web is public,” then shipping a model that anyone can use to spoof a bank’s call‑center checks. A no‑TDM‑bleed‑through clause draws that line clearly.

And yes, edge cases exist. There will be appeals, mistakes, and hard calls at the margins. That is why the registry must be governed—with identity verification, transparent logs, fast appeals, and independent oversight. Done right, it will look less like a black box and more like infrastructure: a quiet backbone that keeps people safe while allowing reporting and legitimate creative work to thrive.

If Denmark’s spark is to become a firebreak, the message needs to be crisp:

— This is not copyright. Identity is a personal right; copyright defenses don’t apply.

— Consent is the rule. A deepfake without consent is unlawful.

— No TDM bleed‑through. “Publicly available” does not equate to permission to clone or train.

— Provenance helps prove, not permit. Keep credentials intact; stripping them has consequences.

— Stay‑down, cross‑platform. One verified case should not become a thousand reuploads.

That’s how you protect personhood from the blob. By refusing to treat humans like “content,” by ending the faux‑license of whack‑a‑mole, and by making platforms responsible for prevention, not just belated reaction. Denmark has given us the right opening line. Now we should finish the paragraph: consent or block. Label it, prove it, or remove it.

Schrödinger’s Training Clause: How Platforms Like WeTransfer Say They’re Not Using Your Files for AI—Until They Are

Tech companies want your content. Not just to host it, but for their training pipeline—to train models, refine algorithms, and “improve services” in ways that just happen to lead to new commercial AI products. But as public awareness catches up, we’ve entered a new phase: deniable ingestion.

Welcome to the world of the Schrödinger’s training clause—a legal paradox where your data is simultaneously not being used to train AI and fully licensed in case they decide to do so.

The Door That’s Always Open

Let’s take the WeTransfer case. For a brief period this month (in July 2025), their Terms of Service included an unmistakable clause: users granted them rights to use uploaded content to “improve the performance of machine learning models.” That language was direct. It caused backlash. And it disappeared.

Many mea culpas later, their TOS has been scrubbed clean of AI references. I appreciate the sentiment, really I do. But—and there’s always a but—the core license hasn’t changed. It’s still:

– Perpetual

– Worldwide

– Royalty-free

– Transferable

– Sub-licensable

They’ve simply returned the problem clause to its quantum box. No machine learning references. But nothing that stops it either.

A Clause in Superposition

Platforms like WeTransfer—and others—have figured out the magic words: Don’t say you’re using data to train AI. Don’t say you’re not using it either. Instead, claim a sweeping license to do anything necessary to “develop or improve the service.”

That vague phrasing allows future pivots. It’s not a denial. It’s a delay. And to delay is to deny.

That’s what makes it Schrödinger’s training clause: Your content isn’t being used for AI. Unless it is. And you won’t know until someone leaks it, or a lawsuit makes discovery public.

The Scrape-Then-Scrub Scenario

Let’s reconstruct what could have happened—not saying it did happen, just could have—following the timeline in The Register:

1. Early July 2025: WeTransfer silently updates its Terms of Service to include AI training rights.

2. Users continue uploading sensitive or valuable content.

3. [Somebody’s] AI systems quickly ingest that data under the granted license.

4. Public backlash erupts mid-July.

5. WeTransfer removes the clause—but to my knowledge never revokes the license retroactively or promises to delete what was scraped. In fact, here’s their statement which includes this non-denial denial: “We don’t use machine learning or any form of AI to process content shared via WeTransfer.” OK, that’s nice but that wasn’t the question. And if their TOS was so clear, then why the amendment in the first place?

Here’s the Potential Legal Catch

Even if WeTransfer removed the clause later, any ingestion that occurred during the ‘AI clause window’ is arguably still valid under the terms then in force. As far as I know, they haven’t promised:

– To destroy any trained models

– To purge training data caches

– Or to prevent third-party partners from retaining data accessed lawfully at the time

What Would ‘Undoing’ Scraping Require?

– Audit logs to track what content was ingested and when

– Reversion of any models trained on user data

– Retroactive license revocation and sub-license termination

None of this has been offered that I have seen.

What ‘We Don’t Train on Your Data’ Actually Means

When companies say, “we don’t use your data to train AI,” ask:

– Do you have the technical means to prevent that?

– Is it contractually prohibited?

– Do you prohibit future sublicensing?

– Can I audit or opt out at the file level?

If the answers to those questions are “no,” then the denial is toothless.

How Creators Can Fight Back

1. Use platforms that require active opt-in for AI training.

2. Encrypt files before uploading (a minimal sketch follows this list).

3. Include counter-language in contracts or submission terms:

   “No content provided may be used, directly or indirectly, to train or fine-tune machine learning or artificial intelligence systems, unless separately and explicitly licensed for that purpose in writing” or something along those lines.

4. Call it out. If a platform uses Schrödinger’s language, name it. The only thing tech companies fear more than litigation is transparency.
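
On item 2, here is a minimal sketch, assuming the third-party Python `cryptography` package and placeholder file names, of encrypting locally before handing anything to a transfer service, so a sweeping “improve the service” license has nothing legible to ingest.

```python
# Client-side encryption before upload (a sketch, not a security review).
# Requires: pip install cryptography. File names below are placeholders.
from cryptography.fernet import Fernet

def encrypt_file(src: str, dst: str, key_path: str) -> None:
    key = Fernet.generate_key()           # keep this key OFF the platform
    with open(key_path, "wb") as kf:
        kf.write(key)
    with open(src, "rb") as f:
        ciphertext = Fernet(key).encrypt(f.read())
    with open(dst, "wb") as f:
        f.write(ciphertext)               # upload dst; share the key out of band

def decrypt_file(src: str, dst: str, key_path: str) -> None:
    with open(key_path, "rb") as kf:
        key = kf.read()
    with open(src, "rb") as f:
        plaintext = Fernet(key).decrypt(f.read())
    with open(dst, "wb") as f:
        f.write(plaintext)

# encrypt_file("master_take.wav", "master_take.wav.enc", "transfer.key")
```

The platform still gets its perpetual license to the bytes you upload, but encrypted bytes are useless as training data unless someone also has the key.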

What is to Be Done?

The most dangerous clauses aren’t the ones that scream “AI training.” They’re the ones that whisper, “We’re just improving the service.”

If you’re a creative, legal advisor, or rights advocate, remember: the future isn’t being stolen with force. It’s being licensed away in advance, one unchecked checkbox at a time.

And if a platform’s only defense is “we’re not doing that right now”—that’s not a commitment. That’s a pause.

That’s Schrödinger’s training clause.

AI Needs Ever More Electricity—And Google Wants Us to Pay for It

Uncle Sugar’s “National Emergency” Pitch to Congress

At a recent Congressional hearing, former Google CEO Eric “Uncle Sugar” Schmidt delivered a message that was as jingoistic as it was revealing: if America wants to win the AI arms race, it better start building power plants. Fast. But the subtext was even clearer—he expects the taxpayer to foot the bill because, you know, the Chinese Communist Party. Yes, when it comes to fighting the Red Menace, the all-American boys in Silicon Valley will stand ready to fight to the last Ukrainian, or Taiwanese, or even Texan.

Testifying before the House Energy & Commerce Committee on April 9, Schmidt warned that AI’s natural limit isn’t chips—it’s electricity. He projected that the U.S. would need 92 gigawatts of new generation capacity—the equivalent of nearly 100 nuclear reactors—to keep up with AI demand.

Schmidt didn’t propose that Google, OpenAI, Meta, or Microsoft pay for this themselves, just like they didn’t pay for broadband penetration. No, Uncle Sugar pushed for permitting reform, federal subsidies, and government-driven buildouts of new energy infrastructure. In plain English? He wants the public sector to do the hard and expensive work of generating the electricity that Big Tech will profit from.

Will this Improve the Grid?

And let’s not forget: the U.S. electric grid is already dangerously fragile. It’s aging, fragmented, and increasingly vulnerable to cyberattacks, electromagnetic pulse (EMP) weapons, and even extreme weather events. Pouring public money into ultra-centralized AI data infrastructure—without first securing the grid itself—is like building a mansion on a cracked foundation.

If we are going to incur public debt, we should prioritize resilience, distributed energy, grid security, and community-level reliability—not a gold-plated private infrastructure buildout for companies that already have trillion-dollar valuations.

Big Tech’s Growing Appetite—and Private Hoarding

This isn’t just a future problem. The data center buildout is already in full swing and your Uncle Sugar must be getting nervous about where he’s going to get the money from to run his AI and his autonomous drone weapons. In Oregon, where electricity is famously cheap thanks to the Bonneville Power Administration’s hydroelectric dams on the Columbia River, tech companies have quietly snapped up huge portions of the grid’s output. What was once a shared public benefit—affordable, renewable power—is now being monopolized by AI compute farms whose profits leave the region for bank accounts in Silicon Valley.

Meanwhile, Microsoft is investing in a nuclear-powered data center next to the defunct Three Mile Island reactor—but again, it’s not about public benefit. It’s about keeping Azure’s training workloads running 24/7. And don’t expect them to share any of that power capacity with the public—or even with neighboring hospitals, schools, or communities.

Letting the Public Build Private Fortresses

The real play here isn’t just to use public power—it’s to get the public to build the power infrastructure, and then seal it off for proprietary use. Moats work both ways.

That includes:
– Publicly funded transmission lines across hundreds of miles to deliver power to remote server farms;
– Publicly subsidized generation capacity (nuclear, gas, solar, hydro—you name it);
– And potentially, prioritized access to the grid that lets AI workloads run while the rest of us face rolling blackouts during heatwaves.

All while tech giants don’t share their models, don’t open their training data, and don’t make their outputs public goods. It’s a privatized extractive model, powered by your tax dollars.

Been Burning for Decades

Don’t forget: Google and YouTube have already been burning massive amounts of electricity for 20 years. It didn’t start with ChatGPT or Gemini. Serving billions of search queries, video streams, and cloud storage events every day requires a permanent baseload—yet somehow this sudden “AI emergency” is being treated like a surprise, as if nobody saw it coming.

If they knew this was coming (and they did), why didn’t they build the power? Why didn’t they plan for sustainability? Why is the public now being told it’s our job to fix their bottleneck?

The Cold War Analogy—Flipped on Its Head

Some industry advocates argue that breaking up Big Tech or slowing AI infrastructure would be like disarming during a new Cold War with China. But Gail Slater, the Assistant Attorney General leading the DOJ’s Antitrust Division, pushed back forcefully—not at a hearing, but on the War Room podcast.

In that interview, Slater recalled how AT&T tried to frame its 1980s breakup as a national security threat, arguing it would hurt America’s Cold War posture. But the DOJ did it anyway—and it led to an explosion of innovation in wireless technology.

“AT&T said, ‘You can’t do this. We are a national champion. We are critical to this country’s success. We will lose the Cold War if you break up AT&T,’ in so many words. … Even so, [the DOJ] moved forward … America didn’t lose the Cold War, and … from that breakup came a lot of competition and innovation.”

“I learned that in order to compete against China, we need to be in all these global races the American way. And what I mean by that is we’ll never beat China by becoming more like China. China has national champions, they have a controlled economy, et cetera, et cetera.

We win all these races and history has taught by our free market system, by letting the ball rip, by letting companies compete, by innovating one another. And the reason why antitrust matters to that picture, to the free market system is because we’re the cop on the beat at the end of the day. We step in when competition is not working and we ensure that markets remain competitive.”

Slater’s message was clear: regulation and competition enforcement are not threats to national strength—they’re prerequisites to it. So there’s no way that the richest corporations in commercial history should be subsidized by the American taxpayer.

Bottom Line: It’s Public Risk, Private Reward

Let’s be clear:

– They want the public to bear the cost of new electricity generation.
– They want the public to underwrite transmission lines.
– They want the public to streamline regulatory hurdles.
– And they plan to privatize the upside, lock down the infrastructure, keep their models secret and socialize the investment risk.

This isn’t a public-private partnership. It’s a one-way extraction scheme. America needs a serious conversation about energy—but it shouldn’t begin with asking taxpayers to bail out the richest companies in commercial history.

David Sacks Is Learning That the States Still Matter

For a moment, it looked like the tech world’s powerbrokers had pulled it off. Buried deep in a Republican infrastructure and tax package was a sleeper provision — the so-called AI moratorium — that would have blocked states from passing their own AI laws for up to a decade. It was an audacious move: centralize control over one of the most consequential technologies in history, bypass 50 state legislatures, and hand the reins to a small circle of federal agencies and especially to tech industry insiders.

But then it collapsed.

The Senate voted 99–1 to strike the moratorium. Governors rebelled. Attorneys general sounded the alarm. Artists, parents, workers, and privacy advocates from across the political spectrum said “no.” Even hardline conservatives like Ted Cruz eventually reversed course when it came down to the final vote. The message to Big Tech or the famous “Little Tech” was clear: the states still matter — and America’s tech elite ignore that at their peril. (“Little Tech” is the latest rhetorical deflection promoted by Big Tech, a.k.a. propaganda.)

The old Google crowd pushed the moratorium; their fingerprints were obvious, having gotten fabulously rich off their two favorite safe harbors: the DMCA farce and the Section 230 shakedown. But there’s increasing speculation that White House AI Czar and Silicon Valley Viceroy David Sacks, PayPal alum and vocal MAGA-world player, was calling the ball. If true, that makes this defeat even more revealing.

Sacks represents something of a new breed of power-hungry tech-right influencer — part of the emerging “Red Tech” movement that claims to reject woke capitalism and coastal elitism but still wants experts to shape national policy from Silicon Valley, a chapter straight out of Philip Dru: Administrator. Sacks is tied to figures like Peter Thiel, Elon Musk, and a growing network of Trump-aligned venture capitalists. But even that alignment couldn’t save the moratorium.

Why? Because the core problem wasn’t left vs. right. It was top vs. bottom.

In 1964, Ronald Reagan’s classic speech A Time for Choosing warned about “a little intellectual elite in a far-distant capitol” deciding what’s best for everyone else. That warning still rings true — except now the “capitol” might just be a server farm in Menlo Park or a podcast studio in LA.

The AI moratorium was an attempt to govern by preemption and fiat, not by consent. And the backlash wasn’t partisan. It came from red states and blue ones alike — places where elected leaders still think they have the right to protect their citizens from unregulated surveillance, deepfakes, data scraping, and economic disruption.

So yes, the defeat of the moratorium was a blow to Google’s strategy of soft-power dominance. But it was also a shot across the bow for David Sacks and the would-be masters of tech populism. You can’t have populism without the people.

If Sacks and his cohort want to play a long game in AI policy, they’ll have to do more than drop ideas into the policy laundry of think tank white papers and Beltway briefings. They’ll need to win public trust, respect state sovereignty, and remember that governing by sneaky safe harbors is no substitute for legitimacy.  

The moratorium failed because it presumed America could be governed like a tech startup — from the top, at speed, with no dissent. Turns out Americans are still under the impression that they have something to say about how they are governed, especially by Big Tech.

The Patchwork They Fear Is Accountability: Why Big AI Wants a Moratorium on State Laws

Why Big Tech’s Push for a Federal AI Moratorium Is Really About Avoiding State Investigations, Liability, and Transparency

As Congress debates the so-called “One Big Beautiful Bill Act,” one of its most explosive provisions has stayed largely below the radar: a 10-year or 5-year or any-year federal moratorium on state and local regulation of artificial intelligence. Supporters frame it as a common sense way to prevent a “patchwork” of conflicting state laws. But the real reason for the moratorium may be more self-serving—and more ominous.

The truth is, the patchwork they fear is not complexity. It’s accountability.

Liability Landmines Beneath the Surface

As has been well-documented by the New York Times and others, generative AI platforms have likely ingested and processed staggering volumes of data that implicate state-level consumer protections. This includes biometric data (like voiceprints and faces), personal communications, educational records, and sensitive metadata—all of which are protected under laws in states like Illinois (BIPA), California (CCPA/CPRA), and Texas.

If these platforms scraped and trained on such data without notice or consent, they are sitting on massive latent liability. Unlike federal laws, which are often narrow or toothless, many state statutes allow private lawsuits and statutory damages. Class action risk is not hypothetical—it is systemic. It is crucial for policymakers to have a clear understanding of where we are today with respect to the collision between AI and consumer rights, including copyright. The corrosion of consumer rights by the richest corporations in commercial history is not something that may happen in the future. Massive violations have already occurred, are occurring this minute, and will continue to occur into the future at an increasing rate.
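
To see why the risk is systemic rather than hypothetical, here is a back-of-the-envelope sketch using BIPA’s statutory damages ($1,000 per negligent violation, $5,000 per intentional or reckless violation) and an invented class size; this is an illustration of scale, not a damages model.

```python
# Back-of-the-envelope BIPA exposure with an invented class size.
# The statutory amounts come from the Illinois Biometric Information
# Privacy Act; the class size and one-violation-per-person assumption
# are invented for illustration.

class_members = 1_000_000      # hypothetical Illinois residents affected
negligent = 1_000              # liquidated damages per negligent violation (USD)
reckless = 5_000               # per intentional or reckless violation (USD)

print(f"Low end:  ${class_members * negligent:,}")   # $1,000,000,000
print(f"High end: ${class_members * reckless:,}")    # $5,000,000,000
```

And that is one statute, in one state, counting one violation per person.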

The Quiet Race to Avoid Discovery

State laws don’t just authorize penalties; they open the door to discovery. Once an investigation or civil case proceeds, AI platforms could be forced to disclose exactly what data they trained on, how it was retained, and whether any red flags were ignored.

This mirrors the arc of the social media addiction lawsuits now consolidated in multidistrict litigation. Platforms denied culpability for years—until internal documents showed what they knew and when. The same thing could happen here, but on a far larger scale.

Preemption as Shield and Sword

The proposed AI moratorium isn’t a regulatory timeout. It’s a firewall. By halting enforcement of state AI laws, the moratorium could prevent lawsuits, derail investigations, and shield past conduct from scrutiny.

Even worse, the Senate version conditions broadband infrastructure funding (BEAD) on states agreeing to the moratorium—an unconstitutional act of coercion that trades state police powers for federal dollars. The legal implications are staggering, especially under the anti-commandeering doctrine of Murphy v. NCAA and Printz v. United States.

This Isn’t About Clarity. It’s About Control.

Supporters of the moratorium, including senior federal officials and lobbying arms of Big Tech, claim that a single federal standard is needed to avoid chaos. But the evidence tells a different story.

States are acting precisely because Congress hasn’t. Illinois’ BIPA led to real enforcement. California’s privacy framework has teeth. Dozens of other states are pursuing legislation to respond to harms AI is already causing.

In this light, the moratorium is not a policy solution. It’s a preemptive strike.

Who Gets Hurt?
– Consumers, whose biometric data may have been ingested without consent
– Parents and students, whose educational data may now be part of generative models
– Artists, writers, and journalists, whose copyrighted work has been scraped and reused
– State AGs and legislatures, who lose the ability to investigate and enforce

Google Is an Example of Potential Exposure

Google’s former executive chairman Eric Schmidt has seemed very, very interested in writing the law for AI. For example, Schmidt worked behind the scenes for at least two years to establish US artificial intelligence policy under President Biden. Those efforts produced the “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” the longest executive order in history, which President Biden signed on October 30, 2023. In his own words during an Axios interview with Mike Allen, the Biden AI EO was signed just in time for Mr. Schmidt to present it as what he calls “bait” to the UK government, which convened a global AI safety summit at Bletchley Park hosted by His Excellency Rishi Sunak (the UK’s tech bro Prime Minister) that just happened to start on November 1, the day after President Biden signed the EO. And now look at the disaster that the UK AI proposal would be.

As Mr. Schmidt told Axios:

So far we are on a win, the taste of winning is there.  If you look at the UK event which I was part of, the UK government took the bait, took the ideas, decided to lead, they’re very good at this,  and they came out with very sensible guidelines.  Because the US and UK have worked really well together—there’s a group within the National Security Council here that is particularly good at this, and they got it right, and that produced this EO which is I think is the longest EO in history, that says all aspects of our government are to be organized around this.

Apparently, Mr. Schmidt hasn’t gotten tired of winning. Of course, President Trump rescinded the Biden AI EO, which may explain why we are now talking about a total moratorium on state enforcement, an idea that percolated at a very pro-Google shillery called the R Street Institute, apparently via one Adam Thierer. But why might Google be so interested in this idea?

Google may face particularly acute liability under state laws if it turns out that biometric or behavioral data from platforms like YouTube Kids or Google for Education were ingested into AI training sets.

These services, marketed to families and schools, collect sensitive information from minors—potentially implicating both federal protections like COPPA and more expansive state statutes. As far back as 2015, Senator Bill Nelson raised alarms about YouTube Kids, calling it “ridiculously porous” in terms of oversight and lack of safeguards. If any of that youth-targeted data has been harvested by generative AI tools, the resulting exposure is not just a regulatory lapse—it’s a landmine.

The moratorium could be seen as an attempt to preempt the very investigations that might uncover how far that exposure goes.

What is to Be Done?

Instead of smuggling this moratorium into a must-pass bill, Congress should strip it out and hold open hearings. If there’s merit to federal preemption, let it be debated on its own. But do not allow one of the most sweeping power grabs in modern tech policy to go unchallenged.

The public deserves better. Our children deserve better. And the states have every right to defend their people. Because the patchwork they fear isn’t legal confusion.

It’s accountability.

The OBBBA’s AI Moratorium Provision Has Existential Constitutional Concerns and Policy Implications

As we watch the drama of the One Big Beautiful Bill Act play out, there’s a plot twist waiting in the wings that could create a cliffhanger in the third act: the poorly thought out, unnecessary, and frankly offensive AI moratorium safe harbor that serves only the Biggest of Big Tech, gifted to us by Adam Thierer of the R Street Institute.

The latest version of the AI moratorium poison pill sits in the Senate version of OBBBA (aka HR1).

The AI moratorium provision within the One Big Beautiful Bill Act (OBBBA) reads like the fact pattern for a bar exam crossover question. The proposed legislation raises significant Constitutional and policy concerns. Before it even gets to the President’s desk, the provision likely violates the Senate’s Byrd Rule, which polices what can ride along in “reconciliation,” the procedure that allows the OBBBA to avoid the 60-vote threshold (and the filibuster) and pass on a simple majority. The President’s party has a narrow simple majority in the Senate, so if it were not for the moratorium, the OBBBA should pass.

There are lots of people who think that the moratorium should fail the “Byrd Bath” analysis because it is not “germane” to the budget and tax process required to qualify for reconciliation. This is important because if the Senate Parliamentarian does not hold the line on germaneness, everyone will get into the act on every bill simply by attaching a chunk of money to their favorite donor’s pet provision, and that will not go over well. According to Roll Call, Senator Cruz is already talking about introducing standalone regulatory legislation containing the moratorium, which would likely only happen if the OBBBA poison pill were cut out.

The AI moratorium has already picked up some serious opponents in the Senate who would likely have otherwise voted for the President’s signature legislation with the President’s tax and spending policies in place. The difference between the moratorium and spending cuts is that money is fungible and a moratorium banning states from acting under their police powers really, really, really is not fungible at all. The moratorium is likely going to fail or get close to failing, and if the art of the deal says getting 80% of something is better than 100% of nothing, that moratorium is going to go away in the context of a closing. Maybe.

And don’t forget, the bill has to go back to the House, which passed it by a single vote, and there are already Members of the House who are getting buyer’s remorse about the AI moratorium specifically. So when they get a chance to vote again…who knows.

Even if it passes, the 40 state Attorneys General who oppose it may be gearing up to launch a Constitutional challenge to the provision on a number of grounds starting with the Tenth Amendment, its implications for federalism, and other Constitutional issues that just drip out of this thing. And my bet is that Adam Thierer will be eyeball witness #1 in that litigation.

So to recap the vulnerabilities:

Byrd Rule Violation

The Byrd Rule prohibits non-budgetary provisions in reconciliation bills. The AI moratorium’s primary effect is regulatory, not fiscal, as it preempts state laws without directly impacting federal revenues or expenditures. Senators, including Ed Markey (D-MA), have indicated intentions to challenge the provision under the Byrd Rule, as reported by Roll Call and The Hill.

Federal Preemption, the Tenth Amendment and Anti-Commandeering Doctrine

The Tenth Amendment famously reserves powers not delegated to the federal government to the states and to the people (remember them?). The constitutional principle of “anticommandeering” prohibits the federal government from compelling states or state officials to enact, enforce, or administer federal regulatory programs.

Anticommandeering is grounded primarily in the Tenth Amendment. Under this principle, while the federal government can regulate individuals directly under its enumerated powers (such as the Commerce Clause), it cannot force state governments to govern according to federal instructions. Which is, of course, exactly what the moratorium does, although the latest version would have you believe that the feds aren’t really commandeering, they’re just tying behavior to money, which the feds do all the time. I doubt anyone believes it.

The AI moratorium infringes upon the good old Constitution by:

  • Overriding State Authority: It prohibits states from enacting or enforcing AI regulations, infringing upon their traditional police powers to legislate for the health, safety, and welfare of their citizens.
  • Lack of Federal Framework: Unlike permissible federal preemption, which operates within a comprehensive federal regulatory scheme, the AI moratorium lacks such a framework, making it more akin to unconstitutional commandeering.
  • Precedent in Murphy v. NCAA: The Supreme Court held that Congress cannot prohibit states from enacting laws, as that prohibition violates the anti-commandeering principle. The AI moratorium, by preventing states from regulating AI, mirrors the unconstitutional aspects identified in Murphy. So there’s that.

The New Problem: Coercive Federalism

By conditioning federal broadband funds (“BEAD money”) on states’ agreement to pause AI regulations, the provision exerts undue pressure on states, potentially violating principles established in cases like NFIB v. Sebelius. Plus, the Broadband Equity, Access, and Deployment (BEAD) Program is a $42.45 billion federal initiative established under the Infrastructure Investment and Jobs Act of 2021. Administered by the National Telecommunications and Information Administration (NTIA), BEAD aims to expand high-speed internet access across the United States by funding planning, infrastructure deployment, and adoption programs. In other words, BEAD has nothing to do with the AI moratorium. So there’s that.

Supremacy Clause Concerns

The moratorium may conflict with existing state laws, leading to legal ambiguities and challenges regarding federal preemption. That’s one reason why 40 state AGs are going to the mattresses for the fight.

Lawmakers Getting Cold Feet or In Opposition

Several lawmakers have voiced concerns or opposition to the AI moratorium:

  • Rep. Marjorie Taylor Greene (R-GA): Initially voted for the bill but later stated she was unaware of the AI provision and would have opposed it had she known. She has said that she will vote no on the OBBBA when it comes back to the House if Mr. T’s moratorium poison pill is still in there.
  • Sen. Josh Hawley (R-MO): Opposes the moratorium, emphasizing the need to protect individual rights over corporate interests.
  • Sen. Marsha Blackburn (R-TN): Expressed concerns that the moratorium undermines state protections, particularly referencing Tennessee’s AI-related laws.
  • Sen. Edward Markey (D-MA): Intends to challenge the provision under the Byrd Rule, citing its potential to harm vulnerable communities.

Recommendation: Allow Dissenting Voices

Full disclosure, I don’t think Trump gives a damn about the AI moratorium. I also think this is performative and is tied to giving the impression to people like Masa at Softbank that he tried. It must be said that Masa’s billions are not quite as important after Trump’s Middle East roadshow as they were before, speaking of leverage. While much has been made of the $1 million contributions that Zuckerberg, Tim Apple, & Co. made to attend the inaugural, there’s another way to look at that tableau—remember Titus Andronicus, when the general returned to Rome with Goth prisoners in chains following his chariot? That was Tamora, the Queen of the Goths, her three sons Alarbus, Chiron, and Demetrius, along with Aaron the Moor. Titus and the Goths still hated each other. Just sayin’.

Somehow I wouldn’t be surprised if this entire exercise was connected to the TikTok divestment in ways that aren’t entirely clear. So, given the constitutional concerns and growing opposition, it is advisable for President Trump to permit members of Congress to oppose the AI moratorium provision without facing political repercussions, particularly since Rep. Greene has already said she’s a no vote—and the House passed the bill by only 215-214 the first time around. This approach would:

  • Respect the principles of federalism and states’ rights.
  • Tell Masa he tried, but oh well.
  • Demonstrate responsiveness to legitimate legislative concerns on a bipartisan basis.
  • Ensure that the broader objectives of the OBBBA are not jeopardized by a contentious provision.

Let’s remember: The tax and spend parts of OBBBA are existential to the Trump agenda; the AI moratorium definitely is not, no matter what Mr. T wants you to believe. While the OBBBA encompasses significant policy initiatives which are highly offensive to a lot of people, the AI moratorium provision presents constitutional and procedural challenges and fundamental attacks on our Constitution that warrant its removal. Cutting it out will strengthen the bill’s likelihood of passing and uphold the foundational principles of American governance, at least for now.

Hopefully Trump looks at it that way, too.

How the AI Moratorium Threatens Local Educational Control

The proposed federal AI moratorium currently in the One Big Beautiful Bill Act states:

[N]o State or political subdivision thereof may enforce, during the 10-year period beginning on the date of the enactment of this Act, any law or regulation of that State or a political subdivision thereof limiting, restricting, or otherwise regulating artificial intelligence models, artificial intelligence systems, or automated decision systems entered into interstate commerce.

What is a “political subdivision”? According to a pretty standard definition offered by the Social Security Administration:

A political subdivision is a separate legal entity of a State which usually has specific governmental functions.  The term ordinarily includes a county, city, town, village, or school district, and, in many States, a sanitation, utility, reclamation, drainage, flood control, or similar district.

The proposed moratorium would prevent school districts—classified as political subdivisions—from adopting policies that regulate artificial intelligence. This includes rules restricting students’ use of AI tools such as ChatGPT, Gemini, or other platforms in school assignments, exams, and academic work. Districts may be unable to prohibit AI-generated content in essays, discipline AI-related cheating, or require disclosures about AI use unless they fall back on broad, generic rules against “unauthorized assistance” or something along those lines.

Without clear authority to restrict AI in educational contexts, school districts will likely struggle to maintain academic integrity or to update honor codes. The moratorium could even interfere with schools’ ability to assess or certify genuine student performance. 

Parallels with Google’s Track Record in Education

The dangers of preempting local educational control over AI echo prior controversies involving Google’s deployment of tools like Chromebooks, Google Classroom, and Workspace for Education in K–12 environments. Despite being marketed as free and privacy-safe, Google has repeatedly been accused of covertly tracking students, profiling minors, and failing to meet federal privacy standards. It’s entirely likely that Google has integrated its AI into all of its platforms, including those used in school districts, so Google could likely raise the AI moratorium as a safe harbor defense to claims by parents or schools that its products violate privacy or other rights.

A 2015 complaint by the Electronic Frontier Foundation (EFF) alleged that Google tracked student activity even with privacy settings enabled, although this was probably an EFF “big help, little bad mouth” situation. New Mexico sued Google in 2020 for collecting student data without parental consent. Most recently, lawsuits in California allege that Google continues to fingerprint students and gather metadata despite educational safeguards.

Although the EFF filed an FTC complaint against Google in 2015, it did not launch a broad campaign or litigation strategy afterward. Critics argue that EFF’s muted follow-up may reflect its financial ties to Google, which has funded the organization in the past. This creates a potential conflict: while EFF publicly supports student privacy, its response to Google’s misconduct has been comparatively restrained.

This has led to the suggestion that EFF operates in a ‘big help, little bad mouth’ mode—providing substantial policy support to Google on issues like net neutrality and platform immunity, while offering limited criticism on privacy violations that directly affect vulnerable groups like students.

AI Use in Schools vs. Google’s Educational Data Practices: A Dangerous Parallel

The proposed AI moratorium would prevent school districts from regulating how artificial intelligence tools are used in classrooms—including tools that generate student work or analyze student behavior. This prohibition becomes even more alarming when we consider the historical abuses tied to Google’s education technologies, which have long raised concerns about student profiling and surveillance.

Over the past decade, Google has aggressively expanded its presence in American classrooms through products like Google Classroom, Chromebooks with Google Workspace for Education, Google Docs and Gmail for student accounts.

Although marketed as free tools, these services have been criticized for tracking children’s browsing behavior and location, storing search histories, even when privacy settings were enabled, creating behavioral profiles for advertising or product development, and sharing metadata with third-party advertisers or internal analytics teams.

Google previously made public commitments to curb these practices, most notably by signing the Student Privacy Pledge—but watchdog groups and investigative journalists have continued to document covert tracking of minors, even in K–12 settings where children cannot legally consent to data collection.

AI Moratorium: Legalizing a New Generation of Surveillance Tools

The AI moratorium would take these concerns a step further by prohibiting school districts from regulating newer AI systems, even if those systems profile students using facial recognition, emotion detection, or predictive analytics; auto-grade essays and responses; build proprietary datasets on student writing patterns; offer “personalized learning” in exchange for access to sensitive performance and behavior data; or encourage use of generative tools (like ChatGPT) that may store and analyze student prompts and usage patterns.

If school districts cannot ban or regulate these tools, they are effectively stripped of their local authority to protect students from the next wave of educational surveillance.

Contrast in Power Dynamics

  • Privacy concerns. Google for Education: tracked students via Gmail, Docs, and Classroom without proper disclosures. AI moratorium impact: prevents districts from banning or regulating AI tools that collect behavioral or academic data.
  • Policy response. Google for Education: limited voluntary reforms, and Google maintains a dominant K–12 market share. AI moratorium impact: preempts all local regulation, even if communities demand stricter safeguards.
  • Legal remedies. Google for Education: few successful lawsuits due to weak enforcement of COPPA and FERPA. AI moratorium impact: would block even the potential for future local rules.
  • Educational impact. Google for Education: created asymmetries in access and data protection between schools. AI moratorium impact: risks deepening digital divides and eroding academic integrity.

Why It Matters

Allowing companies to introduce AI tools into classrooms—while simultaneously barring school districts from regulating them—opens the door to widespread, unchecked profiling of minors, with no meaningful local oversight. Just as Google was allowed to shape a generation’s education infrastructure behind closed doors, this moratorium would empower new AI actors to do the same, shielded from accountability.

Parents’ groups should let lawmakers know that the AI moratorium has to come out of the legislation.

Now What? Can the AI Moratorium Survive the Byrd Rule on “Germaneness”?

Yes, the Big Beautiful Bill Act has passed the House of Representatives and is on its way to the Senate, with the AI safe harbor moratorium and its $500,000,000 giveaway appropriation intact. Yes, right next to Medicaid cuts, etc.

So now what? The controversial AI regulation moratorium tucked inside the reconciliation package is still a major point of contention. Critics argue that the provision—which would block state and local governments from enforcing or adopting AI-related laws for a decade—is blatantly non-germane to a budget bill. But what if the AI moratorium, in the context of a broader $500 million appropriation for a federal AI modernization initiative, isn’t so clearly in violation of the Byrd Rule? Just remember: these guys are not babies. They’ve thought about this and they intend to win; that’s why the language survived the House.

Remember, the assumption is that President Trump can’t get the BBB through the Senate in regular order, which would require 60 votes, and instead is going to jam it through under “budget reconciliation” rules, which require only a simple majority vote in the Republican-held Senate. Reconciliation requires that there not be shenanigans (hah) and that the bill actually deal with the budget rather than some policy change that is getting sneaked under the tent. Well, what if it’s both?

Let’s consider what the Senate’s Byrd Rule actually requires.

To survive reconciliation, a provision must:
1. Affect federal outlays or revenues;
2. Have a budgetary impact that is not “merely incidental” to its policy effects;
3. Fall within the scope of the congressional instructions to the committees of jurisdiction;
4. Not increase the federal deficit outside the budget window;
5. Not make recommendations regarding Social Security;
6. Not violate Senate rules on germaneness or jurisdiction.

Critics rightly point out that a sweeping 10-year regulatory moratorium in Section 43201(c) smells more like federal policy overreach than fiscal fine-tuning, particularly since it is pretty clearly a 10th Amendment problem, intruding on state police powers. But the moratorium exists within a broader federal AI modernization framework in Section 43201(a) that does involve a substantial appropriation: $500 million allocated for updating federal AI infrastructure, developing national standards, and coordinating interagency protocols. That money is real, scoreable, and central to the bill’s stated purpose.

Here’s the crux of the argument: if the appropriation is deemed valid under the Byrd Rule, the guardrails that enable its effective execution may also be valid – especially if they condition the use of federal funds on a coherent national framework. The moratorium can then be interpreted not as an abstract policy preference, but as a necessary precondition for ensuring that the $500 million achieves its budgetary goals without fragmentation.

In other words, the moratorium could be cast as a budget safeguard. Allowing 50 different state AI rules to proliferate while the federal government invests in a national AI backbone could undercut the very purpose of the expenditure. If that fragmentation leads to duplicative spending, legal conflict, or wasted infrastructure, then the moratorium arguably serves a protective fiscal function.

Precedent matters here. Reconciliation has been used in the past to impose conditions on Medicaid, restrict use of federal education funds, and shape how states comply with federal energy and transportation programs. The Supreme Court has rejected some of these on 10th Amendment grounds (NFIB v. Sebelius), but the Byrd Rule test is about budgetary relevance, not constitutional viability.

And that’s where the moratorium finds its most plausible defense: it is incidental only if you believe the spending exists in a vacuum. In truth, the $500 million appropriation depends on consistent, scalable implementation. A federal moratorium ensures that states don’t undermine the utility of that spending. It may be unwise. It may be a budget buster. It may be unpopular. But if it’s tightly tied to the execution of a federal program with scoreable fiscal effects, it just might survive the Byrd test.

So while artists, civil liberties advocates and state officials rightly decry the moratorium on policy grounds, its procedural fate may ultimately rest on a more mundane calculus: Does this provision help protect federal funds from inefficiency? If the answer is yes—and the appropriation stays—then the moratorium may live on, not because it deserves to, but because it was drafted just cleverly enough to thread the eye of the Byrd Rule needle.

Like I said, these guys aren’t babies and they thought about this because they mean to win. Ideally, somebody should have stopped it from ever getting into the bill in the first place. But since they didn’t, our challenge is going to be stopping it from getting through attached to triple-whipped, too-big-to-fail, must-pass signature legislation that Trump campaigned on and was elected to deliver.

And even if we are successful in stopping the AI moratorium safe harbor in the Senate, do you think it’s just going to go away? Will the Tech Bros just say, you got me, now I’ll happily pay those wrongful death claims?

What Bell Labs and Xerox PARC Can Teach Us About the Future of Music

When we talk about the great innovation engines of the 20th century, two names stand out: Bell Labs and Xerox PARC. These legendary research institutions didn’t just push the boundaries of science and technology—they turned hard problems into breakthroughs. The transistor, the laser, the UNIX operating system, the graphical user interface, and Ethernet networking all trace their origins to these hubs of long-range, cross-disciplinary thinking.

These breakthroughs didn’t happen by accident. They were the product of institutions that were intentionally designed to explore what might be possible outside the pressures of quarterly earnings reports–which in practice means monthly, which means weekly. Bell Labs and Xerox PARC proved that bold ideas need space, time, and a mandate to explore—even if commercial applications aren’t immediately apparent. You cannot solve big problems with an eye on weekly revenues–and I know that because I worked at A&M Records.

Now imagine if music had something like Bell Labs and Xerox PARC.

What if there were a Bell Labs for Music—an independent research and development hub where songwriters, engineers, logisticians, rights experts, and economists could collaborate to solve deep-rooted industry challenges? Instead of letting dominant tech platforms dictate the future, the music industry could build its own innovation engine, tailored to the needs of creators. Let’s consider how similar institutions could empower the music industry to reclaim its creative and economic future, particularly as it confronts AI and its institutional takeover.

Big Tech’s Self-Dealing: A $500 Million Taxpayer-Funded Windfall

While creators are being told to “adapt” to the age of AI, Big Tech has quietly written itself a $500 million check—funded by taxpayers—for AI infrastructure. Buried within the sprawling “innovation and competitiveness” sections of legislation being promoted as part of Trump’s “big beautiful bill,” this provision would hand over half a billion dollars in public funding—more accurately, public debt—to cloud providers, chipmakers, and AI monopolists with little transparency and even fewer obligations to the public.

Don’t bother looking–it will come as no surprise that there are no offsetting provisions for musicians, authors, educators, or even news publishers whose work is routinely scraped to train these AI models. There are no earmarks for building fair licensing infrastructure or consent-based AI training databases. There is no “AI Bell Labs” for the creative economy.

Once again, we see that innovation policy is being written by and for the same old monopolists who already control the platforms and the Internet itself, while the people whose work fills those platforms are left unprotected, uncompensated, and uninformed. If we are willing to borrow hundreds of millions to accelerate private AI growth, we should be at least as willing to invest in creator-centered infrastructure that ensures innovation is equitable—not extractive.

Innovation Needs a Home—and a Conscience

Bell Labs and Xerox PARC were designed not just to build technology, but to think ahead. They solved many future challenges, often before the world even knew those challenges existed.

The music industry can—and must—do the same. Instead of waiting for another monopolist to exercise its political clout to grant itself new safe harbors to upend the rules–like AI platforms are doing right now–we can build a space where songwriters, developers, and rights holders collaborate to define a better future. That means metadata that respects rights and tracks payments to creators. That means fair discovery systems. That means artist-first economic models.
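
To make the metadata point a little more concrete, here is a minimal sketch in Python of what a rights-respecting track record, and a payment allocation against it, could look like. This is purely illustrative: the class names, fields, identifiers, and split logic are assumptions for the example, not any existing industry standard or schema.

```python
from __future__ import annotations

from dataclasses import dataclass


@dataclass
class RightsHolder:
    name: str     # credited songwriter, performer, producer, or publisher
    role: str     # e.g. "songwriter", "featured artist", "publisher"
    split: float  # agreed share of revenue for this track, between 0 and 1


@dataclass
class TrackMetadata:
    title: str
    isrc: str                           # identifier for the sound recording
    iswc: str                           # identifier for the underlying composition
    rights_holders: list[RightsHolder]


def allocate_payment(track: TrackMetadata, amount: float) -> dict[str, float]:
    """Split a payment across rights holders in proportion to their agreed splits."""
    total_split = sum(h.split for h in track.rights_holders)
    if abs(total_split - 1.0) > 1e-9:
        # Incomplete or conflicting splits are exactly the metadata problem a
        # "Bell Labs for music" would need to solve; flag them instead of guessing.
        raise ValueError(f"Splits for '{track.title}' sum to {total_split}, not 1.0")
    return {h.name: round(amount * h.split, 6) for h in track.rights_holders}


if __name__ == "__main__":
    track = TrackMetadata(
        title="Example Song",
        isrc="US-XXX-25-00001",  # placeholder identifiers, not real codes
        iswc="T-000.000.001-0",
        rights_holders=[
            RightsHolder("Writer A", "songwriter", 0.50),
            RightsHolder("Writer B", "songwriter", 0.25),
            RightsHolder("Publisher C", "publisher", 0.25),
        ],
    )
    print(allocate_payment(track, 100.00))
```

The point of the sketch is the design choice, not the code: when splits and identifiers travel with the track itself, paying the right people stops being an afterthought bolted on by whichever platform happens to hold the money.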

It’s time for a Bell Labs for music. And it’s time to fund it not through government dependency, but through creator-led coalitions, industry responsibility, and platform accountability.

Because the future of music shouldn’t be written in Silicon Valley boardrooms. It should be composed, engineered, and protected by the people who make it matter.