The Patchwork They Fear Is Accountability: Why Big AI Wants a Moratorium on State Laws

Why Big Tech’s Push for a Federal AI Moratorium Is Really About Avoiding State Investigations, Liability, and Transparency

As Congress debates the so-called “One Big Beautiful Bill Act,” one of its most explosive provisions has stayed largely below the radar: a 10-year (or 5-year, or any-year) federal moratorium on state and local regulation of artificial intelligence. Supporters frame it as a common-sense way to prevent a “patchwork” of conflicting state laws. But the real reason for the moratorium may be more self-serving—and more ominous.

The truth is, the patchwork they fear is not complexity. It’s accountability.

Liability Landmines Beneath the Surface

As has been well-documented by the New York Times and others, generative AI platforms have likely ingested and processed staggering volumes of data that implicate state-level consumer protections. This includes biometric data (like voiceprints and faces), personal communications, educational records, and sensitive metadata—all of which are protected under laws in states like Illinois (BIPA), California (CCPA/CPRA), and Texas.

If these platforms scraped and trained on such data without notice or consent, they are sitting on massive latent liability. Unlike federal laws, which are often narrow or toothless, many state statutes allow private lawsuits and statutory damages. Class action risk is not hypothetical—it is systemic. It is crucial for policymakers to have a clear understanding of where we are today in the collision between AI and consumer rights, including copyright. The corrosion of consumer rights by the richest corporations in commercial history is not something that may happen in the future. Massive violations have already occurred, are occurring this minute, and will continue to occur at an increasing rate.

The Quiet Race to Avoid Discovery

State laws don’t just authorize penalties; they open the door to discovery. Once an investigation or civil case proceeds, AI platforms could be forced to disclose exactly what data they trained on, how it was retained, and whether any red flags were ignored.

This mirrors the arc of the social media addiction lawsuits now consolidated in multidistrict litigation. Platforms denied culpability for years—until internal documents showed what they knew and when. The same thing could happen here, but on a far larger scale.

Preemption as Shield and Sword

The proposed AI moratorium isn’t a regulatory timeout. It’s a firewall. By halting enforcement of state AI laws, the moratorium could prevent lawsuits, derail investigations, and shield past conduct from scrutiny.

Even worse, the Senate version conditions broadband infrastructure funding (BEAD) on states agreeing to the moratorium—an unconstitutional act of coercion that trades state police powers for federal dollars. The legal implications are staggering, especially under the anti-commandeering doctrine of Murphy v. NCAA and Printz v. United States.

This Isn’t About Clarity. It’s About Control.

Supporters of the moratorium, including senior federal officials and lobbying arms of Big Tech, claim that a single federal standard is needed to avoid chaos. But the evidence tells a different story.

States are acting precisely because Congress hasn’t. Illinois’ BIPA led to real enforcement. California’s privacy framework has teeth. Dozens of other states are pursuing legislation to respond to harms AI is already causing.

In this light, the moratorium is not a policy solution. It’s a preemptive strike.

Who Gets Hurt?
– Consumers, whose biometric data may have been ingested without consent
– Parents and students, whose educational data may now be part of generative models
– Artists, writers, and journalists, whose copyrighted work has been scraped and reused
– State AGs and legislatures, who lose the ability to investigate and enforce

Google Is an Example of Potential Exposure

Google’s former executive chairman Eric Schmidt has seemed very, very interested in writing the law for AI. For example, Schmidt worked behind the scenes for at least two years to shape US artificial intelligence policy under President Biden. Those efforts produced the “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” the longest executive order in history, which President Biden signed on October 30, 2023. In his own words during an Axios interview with Mike Allen, the Biden AI EO was signed just in time for Mr. Schmidt to present it as what he calls “bait” to the UK government, which convened a global AI safety conference at Bletchley Park hosted by His Excellency Rishi Sunak (the UK’s tech bro Prime Minister) that just happened to start on November 1, two days after President Biden signed the EO. And now look at the disaster that the UK AI proposal would be.

As Mr. Schmidt told Axios:

So far we are on a win, the taste of winning is there.  If you look at the UK event which I was part of, the UK government took the bait, took the ideas, decided to lead, they’re very good at this,  and they came out with very sensible guidelines.  Because the US and UK have worked really well together—there’s a group within the National Security Council here that is particularly good at this, and they got it right, and that produced this EO which is I think is the longest EO in history, that says all aspects of our government are to be organized around this.

Apparently, Mr. Schmidt hasn’t gotten tired of winning. Of course, President Trump rescinded the Biden AI EO, which may explain why we are now talking about a total moratorium on state enforcement, an idea that percolated at a very pro-Google shillery called the R Street Institute, apparently courtesy of one Adam Thierer. But why might Google be so interested in this idea?

Google may face especially acute liability under state laws if it turns out that biometric or behavioral data from platforms like YouTube Kids or Google for Education were ingested into AI training sets.

These services, marketed to families and schools, collect sensitive information from minors—potentially implicating both federal protections like COPPA and more expansive state statutes. As far back as 2015, Senator Bill Nelson raised alarms about YouTube Kids, calling it “ridiculously porous” in terms of oversight and lack of safeguards. If any of that youth-targeted data has been harvested by generative AI tools, the resulting exposure is not just a regulatory lapse—it’s a landmine.

The moratorium could be seen as an attempt to preempt the very investigations that might uncover how far that exposure goes.

What is to be Done?

Instead of smuggling this moratorium into a must-pass bill, Congress should strip it out and hold open hearings. If there’s merit to federal preemption, let it be debated on its own. But do not allow one of the most sweeping power grabs in modern tech policy to go unchallenged.

The public deserves better. Our children deserve better.  And the states have every right to defend their people. Because the patchwork they fear isn’t legal confusion.

It’s accountability.

AI’s Legal Defense Team Looks Familiar — Because It Is

If you feel like you’ve seen this movie before, you have.

Back in the 2003-ish runup to the 2005 MGM Studios, Inc. v. Grokster, Ltd. Supreme Court case, I met with the founder of one of the major p2p platforms in an effort to get him to go legal. I reminded him that he knew there was all kinds of bad stuff that got uploaded to his platform. However much he denied it, he was filtering that content out, and he was able to do that because he had the control over the content that he (and all his cohorts) denied he had.

I reminded him that if this case ever went bad, someone was going to invade his space and find out exactly what he was up to. Even though the whole distributed p2p model (unlike Napster, by the way) was built both to avoid knowledge and to be a perpetual motion machine, there was going to come a day when none of that legal advice was going to matter. Within a few months the platform shut down, not because he didn’t want to go legal, but because he couldn’t, at least not without actually devoting himself to respecting other people’s rights.

Everything Old is New Again

Back in the early 2000s, peer-to-peer (P2P) piracy platforms claimed they weren’t responsible for the illegal music and videos flooding their networks. Today, AI companies claim they don’t know what’s in their training data. The defense is essentially the same: “We’re just the neutral platform. We don’t control the content.”  It’s that distorted view of the DMCA and Section 230 safe harbors that put many lawyers’ children through prep school, college and graduate school.

But just like with Morpheus, eDonkey, Grokster, and LimeWire, everyone knew that was BS because the evidence said otherwise — and here’s the kicker: many of the same lawyers are now running essentially the same playbook to defend AI giants.

The P2P Parallel: “We Don’t Control Uploads… Except We Clearly Do”

In the 2000s, platforms like Kazaa and LimeWire were like my little buddy: magically, they never had illegal pornography or extreme violence available to consumers, they prioritized popular music and movies, and they filtered out the worst of the web.

That selective filtering made it clear: they knew what was on their networks. It wasn’t even a question of “should have known.” They actually knew, and they did it anyway. Courts caught on.

In Grokster, the Supreme Court sidestepped the hosting issue and essentially said that if you design a platform with the intent to enable infringement, you’re liable.

The Same Playbook in the AI Era

Today’s AI platforms — OpenAI, Anthropic, Meta, Google, and others — essentially argue:
“Our model doesn’t remember where it learned [fill in the blank]. It’s just statistics.”

But behind the curtain, they:
– Run deduplication tools to avoid redundant copies of, for example, copyrighted books
– Filter out NSFW or toxic content
– Choose which datasets to include and exclude
– Fine-tune models to align with somebody’s social norms or optics

This level of control shows they’re not ignorant — they’re deflecting liability just like they did with p2p.
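To see why the “we don’t control the content” line is so hard to credit, consider what even a minimal curation pass looks like. The sketch below is purely illustrative: the function, the keyword blocklist, and the toy corpus are my own assumptions, not any platform’s actual pipeline, and real systems use trained classifiers and far more sophisticated deduplication. The point it makes stands anyway: every step requires knowing exactly what is in the corpus.

```python
import hashlib

# Illustrative blocklist; real pipelines use trained classifiers, not keywords.
BLOCKED_TERMS = {"example_nsfw_term", "example_toxic_term"}

def curate(documents):
    """Hypothetical curation pass: exact-duplicate removal plus content filtering."""
    seen_hashes = set()
    kept = []
    for doc in documents:
        # Deduplication: hash the normalized text so a second copy of the
        # same book or page is recognized and dropped.
        digest = hashlib.sha256(doc.strip().lower().encode("utf-8")).hexdigest()
        if digest in seen_hashes:
            continue
        seen_hashes.add(digest)
        # Filtering: drop documents that contain blocked terms.
        if any(term in doc.lower() for term in BLOCKED_TERMS):
            continue
        kept.append(doc)
    return kept

corpus = ["Call me Ishmael.", "Call me Ishmael.", "some example_toxic_term text"]
print(curate(corpus))  # -> ['Call me Ishmael.']
```

You cannot drop a duplicate or filter a document you have never examined; curation is knowledge.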

Déjà Vu — With Many of the Same Lawyers

Many of the same law firms that defended Grokster, Kazaa, and other P2P pirate defendants (as well as some of the ISPs) are now representing AI companies—and the AI companies are very often some, not all, but some of the same ones that have been screwing us on the DMCA and the like for the last 25 years. You’ll see familiar names, all of whom have done their best to destroy the creative community for big, big bucks in litigation and lobbying billable hours while filling their pockets to overflowing.

This legal cadre pioneered the “willful blindness” defense and is now polishing it up for AI, hoping courts haven’t learned the lesson. And judging…no pun intended…from some recent rulings, maybe they haven’t.

Why do they drive their clients into a position where they pose an existential threat to all creators? Do they not understand that they are creating a vast community of humans who really, truly hate their clients? I think they do understand, but there is a corresponding hatred from the super square Silicon Valley types, who hate “Hollywood” right back.

Because, you know, information wants to be free—unless they are selling it.  And your data is their new oil. They apply this “ethic” not just to data, but to everything: books, news, music, images, and voice. Copyright? A speed bump. Terms of service? A suggestion. Artist consent? Optional.  Writing a song is nothing compared to the complexities of Biggest Tech.

Why do they do this?  OCPD Much?

Because control over training data is strategic dominance and these people are the biggest control freaks that mankind has ever produced.  They exhibit persistent and inflexible patterns of behavior characterized by an excessive need to control people, environments, and outcomes, often associated with traits of obsessive-compulsive personality disorder.  

So empathy will get you nowhere with these people, although their narcissism allows them to believe that they are extremely empathetic.  Pathetic, yes, empathetic, not so much.  

Pay No Attention to that Pajama Boy Behind the Curtain

The driving force behind AI is very similar to the driving force behind the Internet.   If pajama boy can harvest the world’s intellectual property and use it to train his proprietary AI model, he now owns a simulation of the culture he is not otherwise part of, and not only can he monetize it without sharing profits or credit, he can deny profits and credit to the people who actually created it.

So just like the heyday of Pirate Bay, Grokster & Co. (and Daniel Ek’s pirate incarnation), the goal isn’t innovation. The goal is control over language, imagery, and the markets that used to rely on human creators. This should all sound familiar if you were around for the p2p era.

Why This Matters

Like the p2p platforms, it’s just not believable that the AI companies don’t know what’s in their models. They may build their chatbot interface so that the public can’t ask the chatbot to blow the whistle on the platform operator, but that doesn’t mean the company can’t tell what it is training on. These operators have to be able to know what’s in the training materials and manipulate that data daily.

They fingerprint, deduplicate, and sanitize their datasets. How else could they avoid having multiple copies of the same book, which would be a compute nightmare? They store “embeddings” in a way that lets them optimize their AI to use only the best copy of any particular book. They control the pipeline.
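For the same reason, here is an equally hypothetical sketch of the “best copy” selection just described, using embeddings and cosine similarity. The similarity threshold, the quality scores, and the toy vectors are all my own assumptions for illustration; the platforms’ actual parameters are not public.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def pick_best_copies(embeddings, quality_scores, threshold=0.95):
    """Group near-duplicates by embedding similarity and keep only the
    highest-quality copy in each group (hypothetical sketch)."""
    kept = []  # indices of the canonical copies found so far
    for i, emb in enumerate(embeddings):
        for slot, k in enumerate(kept):
            if cosine(emb, embeddings[k]) >= threshold:
                # Near-duplicate: keep whichever copy scores higher,
                # e.g., the cleaner scan with fewer OCR artifacts.
                if quality_scores[i] > quality_scores[k]:
                    kept[slot] = i
                break
        else:
            kept.append(i)  # nothing similar yet: a new canonical work
    return kept

# Two near-identical scans of one book plus one different book.
embs = [np.array([1.0, 0.0]), np.array([0.99, 0.05]), np.array([0.0, 1.0])]
quality = [0.6, 0.9, 0.8]
print(pick_best_copies(embs, quality))  # -> [1, 2]
```

An operator that runs anything like this has, by definition, looked at every work in the pile and ranked the copies.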

It’s not about the model’s memory. It’s about the platform’s intent and awareness.

If they’re smart enough to remove illegal content and prioritize clean data, they’re smart enough to be held accountable.

We’re not living through the first digital content crisis — just the most powerful one yet. The legal defenses haven’t changed much. But the stakes — for copyright, competition, and consumer protection — are much higher now.

Courts, Congress, and the public should recognize this for what it is: a recycled defense strategy in service of unchecked AI power. Eventually Grokster ran into Grokster— and all these lawyers are praying that there won’t be an AI version of the Grokster case. 

Creators Rally Behind Cyril Vetter’s Termination Rights Case in the Fifth Circuit

Songwriter and publisher Cyril Vetter is at the center of a high-stakes copyright case over his song “Double Shot of My Baby’s Love” with massive implications for authors’ termination rights under U.S. law. His challenge to Resnik Music Group has reached the Fifth Circuit Court of Appeals, and creators across the country are showing up in force—with a wave of amicus briefs filed in support including Artist Rights Institute.  Let’s consider the case on appeal.

At the heart of Vetter’s case is a crucial question: When a U.S. author signs a U.S. contract governed by U.S. law and then later the author (or the author’s heirs) invokes their 35-year termination right under Sections 203 and 304 of the U.S. Copyright Act, does that termination recover only U.S. rights (the conventional wisdom)—or the entire copyright, including worldwide rights?  Vetter argued for the worldwide rights at trial.  And the trial judge agreed over strenuous objections by the music publisher opposing Cyril.

Judge Shelly Dick of the U.S. District Court for the Middle District of Louisiana agreed. Her ruling made clear that a grant of worldwide rights under a U.S. contract is subject to U.S. termination. To hold otherwise would defeat the statute’s purpose, which seems obvious.

I’ve known Vetter’s counsel Tim Kappel since he was a law student and have followed this case closely. Tim built a strong record in the District Court and secured a win against tough odds. MTP readers may recall our interviews with him about the case, which attracted considerable attention. Tim’s work with Cyril has energized a creator community long skeptical of the industry’s ‘U.S. rights only’ narrative—a narrative more tradition than law, an artifact of smoke-filled rooms and backroom lawyers.

The Artist Rights Institute (David Lowery, Nikki Rowling, and Chris Castle), along with allies including Abby North (daughter-in-law of the late film composer Alex North), Blake Morgan (#IRespectMusic), and Angela Rose White (daughter of the late television composer and music director David Rose), filed a brief supporting Vetter. The message is simple: Congress did not grant a second bite at half the apple. Termination rights are meant to restore the full copyright—not just fragments.

As we explained in our brief, Vetter’s original grant of rights was typical: worldwide and perpetual, sometimes described as ‘throughout the universe.’ The idea that termination lets an author reclaim only U.S. rights—leaving the rest with the publisher—is both absurd and dangerous.

This case is a wake-up call. Artists shouldn’t belong to the  ‘torturable class’—doomed to accept one-sided deals as normal. Termination was Congress’s way of correcting those imbalances. Terminations are designed by Congress to give a second bite at the whole apple, not the half.

Stay tuned—we’ll spotlight more briefs soon. Until then, here’s ours for your review.

Steve’s Not Here–Why AI Platforms Are Still Acting Like Pirate Bay

In 2006, I wrote “Why Not Sell MP3s?” — a simple question pointing to an industry in denial. The dominant listening format was the MP3 file, yet labels were still trying to sell CDs or hide digital files behind brittle DRM. It seems kind of incredible in retrospect, but believe me it happened. Many cycles were burned on that conversation. Fans had moved on. The business hadn’t.

Then came Steve Jobs.

At the launch of the iTunes Store — and I say this as someone who sat in the third row — Jobs gave one of the most brilliant product presentations I’ve ever seen. He didn’t bulldoze the industry. He waited for permission, but only after crafting an offer so compelling it was as if the labels should be paying him to get in. He brought artists on board first. He made it cool, tactile, intuitive. He made it inevitable.

That’s not what’s happening in AI.

Incantor: DRM for the Input Layer

Incantor is trying to be the clean-data solution for AI — a system that wraps content in enforceable rights metadata, licenses its use for training and inference, and tracks compliance. It’s DRM, yes — but applied to training inputs instead of music downloads.

It may be imperfect, but at least it acknowledges that rights exist.
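Incantor’s actual architecture isn’t public in this piece, so treat the following as a hypothetical sketch of what “rights metadata at the input layer” could mean in practice: a manifest that travels with each work, and an ingestion gate that refuses anything without a live training license. Every field name and function here is my own illustration, not Incantor’s API.

```python
from dataclasses import dataclass

@dataclass
class RightsManifest:
    """Hypothetical rights metadata attached to a work before ingestion."""
    work_id: str
    owner: str
    licensed_uses: frozenset  # e.g., frozenset({"training", "inference"})
    license_expiry: str       # ISO date; empty string means perpetual

def may_train_on(manifest: RightsManifest, today: str) -> bool:
    """Ingestion gate: admit a work only if it carries a live training license."""
    if "training" not in manifest.licensed_uses:
        return False
    if manifest.license_expiry and manifest.license_expiry < today:
        return False  # lexicographic comparison works for ISO dates
    return True

song = RightsManifest("work-001", "Example Songwriter",
                      frozenset({"inference"}), "2030-01-01")
print(may_train_on(song, "2025-06-01"))  # -> False: no training license
```

However the real system works, the design choice is what matters: consent is checked before the data enters the pipeline, not litigated after the model ships.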

What’s more troubling is the contrast between Incantor’s attempt to create structure and the behavior of the major AI platforms, which have taken a very different route.

AI Platforms = Pirate Bay in a Suit

Today’s generative AI platforms — the big ones — aren’t behaving like Apple. They’re behaving like The Pirate Bay with a pitch deck.

– They ingest anything they can crawl.
– They claim “public availability” as a legal shield.
– They ignore licensing unless forced by litigation or regulation.
– They posture as infrastructure, while vacuuming up the cultural labor of others.

These aren’t scrappy hackers. They’re trillion-dollar companies acting like scraping is a birthright. Where Jobs sat down with artists and made the economics work, the platforms today are doing everything they can to avoid having that conversation.

This isn’t just indifference — it’s design. The entire business model depends on skipping the licensing step and then retrofitting legal justifications later. They’re not building an ecosystem. They’re strip-mining someone else’s.

What Incantor Is — and Isn’t

Incantor isn’t Steve Jobs. It doesn’t control the hardware, the model, the platform, or the user experience. It can’t walk into the room and command the majors to listen with elegance. But what it is trying to do is reintroduce some form of accountability — to build a path for data that isn’t scraped, stolen, or in legal limbo.

That’s not an iTunes power move. It’s a cleanup job. And it won’t work unless the AI companies stop pretending they’re search engines and start acting like publishers, licensees, and creative partners.

What the MP3 Era Actually Taught Us

The MP3 era didn’t end because DRM won. It ended because someone found a way to make the business model and the user experience better — not just legal, but elegant. Jobs didn’t force the industry to change. He gave them a deal they couldn’t refuse.

Today, there’s no Steve Jobs. No artists on stage at AI conferences. No tactile beauty. Just cold infrastructure, vague promises, and a scramble to monetize other people’s work before the lawsuits catch up. Let’s face it–when it comes to Elon, Sam, or Zuck, would you buy a used Mac from that man?

If artists and AI platforms were in one of those old “I’m a Mac / I’m a PC” commercials, you wouldn’t need to be told which is which. One side is creative, curious, collaborative. The other is corporate, defensive, and vaguely annoyed that you even asked the question.

Until that changes, platforms like Incantor will struggle to matter — and the AI industry will continue to look less like iTunes, and more like Pirate Bay with an enterprise sales team.

The OBBBA’s AI Moratorium Provision Has Existential Constitutional Concerns and Policy Implications

As we watch the drama of the One Big Beautiful Bill Act play out, there’s a plot twist waiting in the wings that could create a cliffhanger in the third act: the poorly thought-out, unnecessary, and frankly offensive AI moratorium safe harbor, serving only the Biggest of Big Tech, that we were gifted by Adam Thierer of the R Street Institute.

The latest version of the AI moratorium poison pill sits in the Senate version of OBBBA (aka HR1).

The AI moratorium provision within the One Big Beautiful Bill Act (OBBBA) reads like the fact pattern for a bar exam crossover question. The proposed legislation raises significant Constitutional and policy concerns. Before it even gets to the President’s desk, the provision likely violates the Senate’s Byrd Rule, the rule that allows the OBBBA to avoid the 60-vote threshold (and the filibuster) and get voted on in “reconciliation” on a simple majority. The President’s party has a narrow simple majority in the Senate, so if it were not for the moratorium, the OBBBA should pass.

There are lots of people who think that the moratorium should fail the “Byrd Bath” analysis because it is not “germane” to the budget and tax process required to qualify for reconciliation. This is important because if the Senate Parliamentarian does not hold the line on germaneness, everyone will get into the act on every bill simply by attaching a chunk of money for your favorite donor, and that will not go over well. According to Roll Call, Senator Cruz is already talking about introducing regulatory legislation containing the moratorium, which would likely only happen if the OBBBA poison pill were cut out.

The AI moratorium has already picked up some serious opponents in the Senate who would likely have otherwise voted for the President’s signature legislation with the President’s tax and spending policies in place. The difference between the moratorium and spending cuts is that money is fungible and a moratorium banning states from acting under their police powers really, really, really is not fungible at all. The moratorium is likely going to fail or get close to failing, and if the art of the deal says getting 80% of something is better than 100% of nothing, that moratorium is going to go away in the context of a closing. Maybe.

And don’t forget, the bill has to go back to the House, which passed it by a single vote, and there are already Members of the House who are getting buyer’s remorse about the AI moratorium specifically. So when they get a chance to vote again…who knows.

Even if it passes, the 40 state Attorneys General who oppose it may be gearing up to launch a Constitutional challenge to the provision on a number of grounds starting with the Tenth Amendment, its implications for federalism, and other Constitutional issues that just drip out of this thing. And my bet is that Adam Thierer will be eyeball witness #1 in that litigation.

So to recap the vulnerabilities:

Byrd Rule Violation

The Byrd Rule prohibits non-budgetary provisions in reconciliation bills. The AI moratorium’s primary effect is regulatory, not fiscal, as it preempts state laws without directly impacting federal revenues or expenditures. Senators, including Ed Markey (D-MA), have indicated intentions to challenge the provision under the Byrd Rule, per reporting in Roll Call and The Hill.

Federal Preemption, the Tenth Amendment and Anti-Commandeering Doctrine

The Tenth Amendment famously reserves powers not delegated to the federal government to the states and to the people (remember them?). The constitutional principle of “anticommandeering” is a doctrine under U.S. Constitutional law that prohibits the federal government from compelling states or state officials to enact, enforce, or administer federal regulatory programs.

Anticommandeering is grounded primarily in the Tenth Amendment. Under this principle, while the federal government can regulate individuals directly under its enumerated powers (such as the Commerce Clause), it cannot force state governments to govern according to federal instructions. Which is, of course, exactly what the moratorium does, although the latest version would have you believe that the feds aren’t really commandeering, they are just tying behavior to money which the feds do all the time. I doubt anyone believes it.

The AI moratorium infringes upon the good old Constitution by:

  • Overriding State Authority: It prohibits states from enacting or enforcing AI regulations, infringing upon their traditional police powers to legislate for the health, safety, and welfare of their citizens.
  • Lack of Federal Framework: Unlike permissible federal preemption, which operates within a comprehensive federal regulatory scheme, the AI moratorium lacks such a framework, making it more akin to unconstitutional commandeering.
  • Precedent in Murphy v. NCAA: The Supreme Court held that Congress cannot prohibit states from enacting laws, as that prohibition violates the anti-commandeering principle. The AI moratorium, by preventing states from regulating AI, mirrors the unconstitutional aspects identified in Murphy. So there’s that.

The New Problem: Coercive Federalism

By conditioning federal broadband funds (“BEAD money”) on states’ agreement to pause AI regulations, the provision exerts undue pressure on states, potentially violating principles established in cases like NFIB v. Sebelius. Plus, the Broadband Equity, Access, and Deployment (BEAD) Program is a $42.45 billion federal initiative established under the Infrastructure Investment and Jobs Act of 2021. Administered by the National Telecommunications and Information Administration (NTIA), BEAD aims to expand high-speed internet access across the United States by funding planning, infrastructure deployment, and adoption programs. In other words, BEAD has nothing to do with the AI moratorium. So there’s that.

Supremacy Clause Concerns

The moratorium may conflict with existing state laws, leading to legal ambiguities and challenges regarding federal preemption. That’s one reason why 40 state AGs are going to the mattresses for the fight.

Lawmakers Getting Cold Feet or In Opposition

Several lawmakers have voiced concerns or opposition to the AI moratorium:

  • Rep. Marjorie Taylor Greene (R-GA): Initially voted for the bill but later stated she was unaware of the AI provision and would have opposed it had she known. She has said that she will vote no on the OBBBA when it comes back to the House if Mr. T’s moratorium poison pill is still in there.
  • Sen. Josh Hawley (R-MO): Opposes the moratorium, emphasizing the need to protect individual rights over corporate interests.
  • Sen. Marsha Blackburn (R-TN): Expressed concerns that the moratorium undermines state protections, particularly referencing Tennessee’s AI-related laws.
  • Sen. Edward Markey (D-MA): Intends to challenge the provision under the Byrd Rule, citing its potential to harm vulnerable communities.

Recommendation: Allow Dissenting Voices

Full disclosure: I don’t think Trump gives a damn about the AI moratorium. I also think this is performative and tied to giving the impression to people like Masa at SoftBank that he tried. It must be said that Masa’s billions are not quite as important after Trump’s Middle East roadshow as they were before, speaking of leverage. While much has been made of the $1 million contributions that Zuckerberg, Tim Apple & Co. made to attend the inaugural, there’s another way to look at that tableau–remember Titus Andronicus, when the general returned to Rome with Goth prisoners in chains following his chariot? That was Tamora, the Queen of the Goths, her three sons Alarbus, Chiron, and Demetrius, along with Aaron the Moor. Titus and the Goths still hated each other. Just sayin’.

Somehow I wouldn’t be surprised if this entire exercise was connected to the TikTok divestment in ways that aren’t entirely clear. So, given the constitutional concerns and growing opposition, it is advisable for President Trump to permit members of Congress to oppose the AI moratorium provision without facing political repercussions, particularly since Rep. Greene has already said she’s a no vote–on the 215-214 vote the first time around. This approach would:

  • Respect the principles of federalism and states’ rights.
  • Tell Masa he tried, but oh well.
  • Demonstrate responsiveness to legitimate legislative concerns on a bi-partisan basis.
  • Ensure that the broader objectives of the OBBBA are not jeopardized by a contentious provision.

Let’s remember: The tax and spend parts of OBBBA are existential to the Trump agenda; the AI moratorium definitely is not, no matter what Mr. T wants you to believe. While the OBBBA encompasses significant policy initiatives which are highly offensive to a lot of people, the AI moratorium provision presents constitutional and procedural challenges and fundamental attacks on our Constitution that warrant its removal. Cutting it out will strengthen the bill’s likelihood of passing and uphold the foundational principles of American governance, at least for now.

Hopefully Trump looks at it that way, too.

How the AI Moratorium Threatens Local Educational Control

The proposed federal AI moratorium currently in the One Big Beautiful Bill Act states:

[N]o State or political subdivision thereof may enforce, during the 10-year period beginning on the date of the enactment of this Act, any law or regulation of that State or a political subdivision thereof limiting, restricting, or otherwise regulating artificial intelligence models, artificial intelligence systems, or automated decision systems entered into interstate commerce.

What is a “political subdivision”?  According to a pretty standard definition offered by the Social Security Administration:

A political subdivision is a separate legal entity of a State which usually has specific governmental functions.  The term ordinarily includes a county, city, town, village, or school district, and, in many States, a sanitation, utility, reclamation, drainage, flood control, or similar district.

The proposed moratorium would prevent school districts—classified as political subdivisions—from adopting policies that regulate artificial intelligence. This includes rules restricting students’ use of AI tools such as ChatGPT, Gemini, or other platforms in school assignments, exams, and academic work. Districts may be unable to prohibit AI-generated content in essays, discipline AI-related cheating, or require disclosures about AI use unless they write broad rules for ‘unauthorized assistance’ in general or something like that.

Without clear authority to restrict AI in educational contexts, school districts will likely struggle to maintain academic integrity or to update honor codes. The moratorium could even interfere with schools’ ability to assess or certify genuine student performance. 

Parallels with Google’s Track Record in Education

The dangers of preempting local educational control over AI echo prior controversies involving Google’s deployment of tools like Chromebooks, Google Classroom, and Workspace for Education in K–12 environments. Despite being marketed as free and privacy-safe, Google has repeatedly been accused of covertly tracking students, profiling minors, and failing to meet federal privacy standards. It’s entirely likely that Google has integrated its AI into all of its platforms, including those used in school districts, so Google could raise the AI moratorium as a safe harbor defense to claims by parents or schools that its products violate privacy or other rights.

A 2015 complaint by the Electronic Frontier Foundation (EFF) alleged that Google tracked student activity even with privacy settings enabled, although this was probably an EFF ‘big help, little bad mouth’ situation. New Mexico sued Google in 2020 for collecting student data without parental consent. Most recently, lawsuits in California allege that Google continues to fingerprint students and gather metadata despite educational safeguards.

Although the EFF filed an FTC complaint against Google in 2015, it did not launch a broad campaign or litigation strategy afterward. Critics argue that EFF’s muted follow-up may reflect its financial ties to Google, which has funded the organization in the past. This creates a potential conflict: while EFF publicly supports student privacy, its response to Google’s misconduct has been comparatively restrained.

This has led to the suggestion that EFF operates in a ‘big help, little bad mouth’ mode—providing substantial policy support to Google on issues like net neutrality and platform immunity, while offering limited criticism on privacy violations that directly affect vulnerable groups like students.

AI Use in Schools vs. Google’s Educational Data Practices: A Dangerous Parallel

The proposed AI moratorium would prevent school districts from regulating how artificial intelligence tools are used in classrooms—including tools that generate student work or analyze student behavior. This prohibition becomes even more alarming when we consider the historical abuses tied to Google’s education technologies, which have long raised concerns about student profiling and surveillance.

Over the past decade, Google has aggressively expanded its presence in American classrooms through products like Google Classroom, Chromebooks with Google Workspace for Education, Google Docs and Gmail for student accounts.

Although marketed as free tools, these services have been criticized for tracking children’s browsing behavior and location, storing search histories, even when privacy settings were enabled, creating behavioral profiles for advertising or product development, and sharing metadata with third-party advertisers or internal analytics teams.

Google signed the industry’s Student Privacy Pledge to curb these practices (EFF’s 2015 FTC complaint alleged Google violated it)—but watchdog groups and investigative journalists have continued to document covert tracking of minors, even in K–12 settings where children cannot legally consent to data collection.

AI Moratorium: Legalizing a New Generation of Surveillance Tools

The AI moratorium would take these concerns a step further by prohibiting school districts from regulating newer AI systems, even if those systems profile students using facial recognition, emotion detection, or predictive analytics; auto-grade essays and responses while building proprietary datasets of student writing patterns; offer “personalized learning” in exchange for access to sensitive performance and behavior data; or encourage use of generative tools (like ChatGPT) that may store and analyze student prompts and usage patterns.

If school districts cannot ban or regulate these tools, they are effectively stripped of their local authority to protect students from the next wave of educational surveillance.

Contrast in Power Dynamics

| Issue | Google for Education | AI Moratorium Impacts |
| --- | --- | --- |
| Privacy concerns | Tracked students via Gmail, Docs, and Classroom without proper disclosures. | Prevents districts from banning or regulating AI tools that collect behavioral or academic data. |
| Policy response | Limited voluntary reforms; Google maintains a dominant K–12 market share. | Preempts all local regulation, even if communities demand stricter safeguards. |
| Legal remedies | Few successful lawsuits due to weak enforcement of COPPA and FERPA. | Moratorium would block even the potential for future local rules. |
| Educational impact | Created asymmetries in access and data protection between schools. | Risks deepening digital divides and eroding academic integrity. |

Why It Matters

Allowing companies to introduce AI tools into classrooms—while simultaneously barring school districts from regulating them—opens the door to widespread, unchecked profiling of minors, with no meaningful local oversight. Just as Google was allowed to shape a generation’s education infrastructure behind closed doors, this moratorium would empower new AI actors to do the same, shielded from accountability.

Parents groups should let lawmakers know that the AI moratorium has to come out of the legislation.

Now What? Can the AI Moratorium Survive the Byrd Rule on “Germaneness”?

Yes, the Big Beautiful Bill Act has passed the House of Representatives and is on its way to the Senate–with the AI safe harbor moratorium and its $500,000,000 giveaway appropriation intact. Yes, right next to Medicaid cuts, etc.

So now what? The controversial AI regulation moratorium tucked inside the reconciliation package is still a major point of contention. Critics argue that the provision—which would block state and local governments from enforcing or adopting AI-related laws for a decade—is blatantly non-germane to a budget bill. But what if the AI moratorium, in the context of a broader $500 million appropriation for a federal AI modernization initiative, isn’t so clearly in violation of the Byrd Rule? Just remember–these guys are not babies. They’ve thought about this and they intend to win–that’s why the language survived the House.

Remember, the assumption is that President Trump can’t get the BBB through the Senate in regular order, which would require 60 votes, and instead is going to jam it through under “budget reconciliation” rules, which require a simple majority vote in the Republican-held Senate. Reconciliation requires that there not be shenanigans (hah) and that the budget reconciliation actually deals with the budget and not some policy change that is getting sneaked under the tent. Well, what if it’s both?

Let’s consider what the Senate’s Byrd Rule actually requires.

To survive reconciliation, a provision must:
1. Affect federal outlays or revenues;
2. Have a budgetary impact that is not “merely incidental” to its policy effects;
3. Fall within the scope of the congressional instructions to the committees of jurisdiction;
4. Not increase the federal deficit outside the budget window;
5. Not make recommendations regarding Social Security;
6. Not violate Senate rules on germaneness or jurisdiction.

Critics rightly point out that a sweeping 10-year regulatory moratorium in Section 43201(c) smells more like federal policy overreach than fiscal fine-tuning, particularly since it’s pretty clearly a 10th Amendment violation of state police powers. But the moratorium exists within a broader federal AI modernization framework in Section 43201(a) that does involve a substantial appropriation: $500 million allocated for updating federal AI infrastructure, developing national standards, and coordinating interagency protocols. That money is real, scoreable, and central to the bill’s stated purpose.

Here’s the crux of the argument: if the appropriation is deemed valid under the Byrd Rule, the guardrails that enable its effective execution may also be valid – especially if they condition the use of federal funds on a coherent national framework. The moratorium can then be interpreted not as an abstract policy preference, but as a necessary precondition for ensuring that the $500 million achieves its budgetary goals without fragmentation.

In other words, the moratorium could be cast as a budget safeguard. Allowing 50 different state AI rules to proliferate while the federal government invests in a national AI backbone could undercut the very purpose of the expenditure. If that fragmentation leads to duplicative spending, legal conflict, or wasted infrastructure, then the moratorium arguably serves a protective fiscal function.

Precedent matters here. Reconciliation has been used in the past to impose conditions on Medicaid, restrict use of federal education funds, and shape how states comply with federal energy and transportation programs. The Supreme Court has rejected some of these on 10th Amendment grounds (NFIB v. Sebelius), but the Byrd Rule test is about budgetary relevance, not constitutional viability.

And that’s where the moratorium finds its most plausible defense: it is incidental only if you believe the spending exists in a vacuum. In truth, the $500 million appropriation depends on consistent, scalable implementation. A federal moratorium ensures that states don’t undermine the utility of that spending. It may be unwise. It may be a budget buster. It may be unpopular. But if it’s tightly tied to the execution of a federal program with scoreable fiscal effects, it just might survive the Byrd test.

So while artists, civil liberties advocates and state officials rightly decry the moratorium on policy grounds, its procedural fate may ultimately rest on a more mundane calculus: Does this provision help protect federal funds from inefficiency? If the answer is yes—and the appropriation stays—then the moratorium may live on, not because it deserves to, but because it was drafted just cleverly enough to thread the eye of the Byrd Rule needle.

Like I said, these guys aren’t babies and they thought about this because they mean to win. Ideally, somebody should have stopped it from ever getting into the bill in the first place. But since they didn’t, our challenge is going to be stopping it from getting through attached to triple-whip, too-big-to-fail, must-pass signature legislation that Trump campaigned on and was elected on.

And even if we are successful in stopping the AI moratorium safe harbor in the Senate, do you think it’s just going to go away? Will the Tech Bros just say, you got me, now I’ll happily pay those wrongful death claims?

Winning without Fighting: Strategic Parallels between TikTok and China’s “Assassin’s Mace” Weapons

To fight and conquer in all your battles is not supreme excellence; supreme excellence consists in breaking the enemy’s resistance without fighting.
Sun Tzu, The Art of War (Giles trans.)

In his must-read book The Hundred-Year Marathon, Michael Pillsbury describes China’s “Assassin’s Mace” weapons strategy as strategic systems designed to neutralize superior adversaries, particularly the United States. Assassin’s Mace weapons are asymmetric, cost-effective, and intended to exploit specific vulnerabilities in order to deliver a knockout blow.

Key characteristics include:

  • Asymmetry: Undermines U.S. advantages without matching its power.
  • Concealment: Many programs are secretive and deceptive.
  • Psychological Disruption: Designed to shock and paralyze response.
  • Preemptive Advantage: Intended to disable key systems early in a conflict.

Examples Pillsbury cites include anti-satellite weapons, cyberwarfare tools, EMPs, anti-ship ballistic missiles, and hypersonic glide vehicles.

It must also be said that the PRC has long had a doctrine of “military-civil fusion.” Military-Civil Fusion (MCF) doctrine is a national strategy aimed at integrating civilian industries, research institutions, and private enterprises with military development to enhance the capabilities of the People’s Liberation Army (PLA). The policy seeks to eliminate barriers between China’s civilian and military sectors, ensuring that technological advancements in areas like artificial intelligence (ByteDance is one of the top five AI developers in China), quantum computing, aerospace, and biotechnology serve both economic and defense purposes.

Key aspects of MCF include:

  • Technology Acquisition – The Chinese government encourages the transfer of cutting-edge civilian technologies to military applications, often through state-backed research programs and corporate partnerships.
  • Institutional Integration – The Central Military-Civil Fusion Development Committee, chaired by Xi Jinping, oversees the strategy to ensure seamless coordination between civilian and military entities.
  • Global Concerns – The U.S. and other nations view MCF as a security risk, citing concerns over intellectual property theft and the potential for civilian technologies to be repurposed for military dominance.

MCF is a cornerstone of China’s long-term military modernization, with the goal of developing a world-class military by 2049. If you’re familiar with China’s National Intelligence Law mandating cooperation by the civilian sector with the Ministry of State Security, this should all sound pretty familiar vis-à-vis TikTok.

Comparison to TikTok’s Data Mining and AI Algorithms

While not traditional kinetic weapons, TikTok’s AI and data collection tactics mirror many elements of an Assassin’s Mace—particularly in the information and psychological warfare domains.

Comparison:

| Feature | Assassin’s Mace (Military) | TikTok Data/A.I. (Civil-Info) |
| --- | --- | --- |
| Asymmetric | Targets U.S. military dependence on tech | Targets U.S. cultural and cognitive weaknesses |
| Concealed capabilities | Hidden programs in cyberwarfare or space | Opaque algorithms and data harvesting |
| Psychological effect | Shock and morale disruption | Behavioral influence and identity shaping |
| Preemptive edge | Deployed early in conflict | Influences before conflict or overt tension |
| Cost/Attribution | Cheap and hard to detect | Social media disguise, plausible deniability |
| Dependency creation | Reduces U.S. tech autonomy | Entrenches digital reliance on foreign platform |

Strategic Parallels, MCF and National Security Implications

  • Informational Warfare: TikTok’s algorithmic controls may shape narratives aligned with CCP objectives.
  • Data as Weaponized Intel: TikTok collects biometric and behavioral data potentially usable for state profiling or surveillance.
  • AI as Force Multiplier: Data harvested fuels China’s military-linked AI development.
  • Cultural Erosion: Gradual influence can diminish U.S. civic cohesion and resilience.

Surrender Videos and CCP Use of Video as Psychological Operations (PsyOps)

The Chinese Communist Party (CCP) has increasingly leveraged video platforms—including domestic networks like WeChat and global platforms like TikTok—for strategic psychological operations aimed at foreign populations. These campaigns serve to erode morale, stir political divisions, and promote favorable perceptions of the Chinese regime.

A notable example includes the circulation of staged or coerced “surrender videos” purportedly featuring Taiwanese soldiers or civilians pledging allegiance to Beijing. Such footage is designed to sap resistance and cultivate an image of inevitable Chinese dominance over Taiwan, particularly in the event of an invasion or political crisis.

Another instance occurred on TikTok, where a Chinese user posted a video in fluent English urging Americans to support China and reject then-President Trump’s trade and tariff policies. I’m not a huge fan of the tariffs, but I found this video to be very suspicious.

The video called for solidarity with China and implied that U.S. opposition to Chinese economic expansion was both unjust and self-destructive. Though framed as personal opinion, such content aligns with Chinese state interests and is amplified by algorithms that may favor politically charged engagement. These efforts form part of a broader information warfare strategy wherein short-form video is used not only to manipulate algorithms and audience emotions but to subtly shift public opinion in democracies. By flooding feeds with curated messages, the CCP could exploit free speech protections in adversary nations to inject authoritarian narratives under the guise of popular expression.

TikTok Could be a Combination Punch to Win Without Fighting

TikTok’s AI algorithms and extensive data collection constitute a modern parallel to China’s Assassin’s Mace strategy. Instead of missiles or EMPs, Beijing may be relying on AI-powered cognitive and cultural influence to erode Western resilience over time. This information-first strategy aligns with Pillsbury’s warning that America’s adversaries may seek to win without fighting a conventional war by use of strategic weapons like the Assassin’s Mace. As Master Sun said, win without fighting.

What Bell Labs and Xerox PARC Can Teach Us About the Future of Music

When we talk about the great innovation engines of the 20th century, two names stand out: Bell Labs and Xerox PARC. These legendary research institutions didn’t just push the boundaries of science and technology—they delivered breakthrough solutions to hard challenges. The transistor, the laser, the UNIX operating system, the graphical user interface, and Ethernet networking all trace their origins to these hubs of long-range, cross-disciplinary thinking.

These breakthroughs didn’t happen by accident. They were the product of institutions that were intentionally designed to explore what might be possible outside the pressures of quarterly earnings reports–which means monthly which means weekly. Bell Labs and Xerox PARC proved that bold ideas need space, time, and a mandate to explore—even if commercial applications aren’t immediately apparent. You cannot solve big problems with an eye on weekly revenues–and I know that because I worked at A&M Records.

Now imagine if music had something like Bell Labs and Xerox PARC.

What if there were a Bell Labs for Music—an independent research and development hub where songwriters, engineers, logisticians, rights experts, and economists could collaborate to solve deep-rooted industry challenges? Instead of letting dominant tech platforms dictate the future, the music industry could build its own innovation engine, tailored to the needs of creators. Let’s consider how similar institutions could empower the music industry to reclaim its creative and economic future, particularly as it confronts AI and its institutional takeover.

Big Tech’s Self-Dealing: A $500 Million Taxpayer-Funded Windfall

While creators are being told to “adapt” to the age of AI, Big Tech has quietly written itself a $500 million check—funded by taxpayers—for AI infrastructure. Buried within the sprawling “innovation and competitiveness” sections of legislation being promoted as part of Trump’s “big beautiful bill,” this provision would hand over half a billion dollars in public funding—more accurately, public debt—to cloud providers, chipmakers, and AI monopolists with little transparency and even fewer obligations to the public.

Don’t bother looking–it will come as no surprise that there are no offsetting provisions for musicians, authors, educators, or even news publishers whose work is routinely scraped to train these AI models. There are no earmarks for building fair licensing infrastructure or consent-based AI training databases. There is no “AI Bell Labs” for the creative economy.

Once again, we see that innovation policy is being written by and for the same old monopolists who already control the platforms and the Internet itself, while the people whose work fills those platforms are left unprotected, uncompensated, and uninformed. If we are willing to borrow hundreds of millions to accelerate private AI growth, we should be at least as willing to invest in creator-centered infrastructure that ensures innovation is equitable—not extractive.

Innovation Needs a Home—and a Conscience

Bell Labs and Xerox PARC were designed not just to build technology, but to think ahead. They solved challenges often before the world even knew those challenges existed.

The music industry can—and must—do the same. Instead of waiting for another monopolist to exercise its political clout to grant itself new safe harbors to upend the rules–like AI platforms are doing right now–we can build a space where songwriters, developers, and rights holders collaborate to define a better future. That means metadata that respects rights and tracks payments to creators. That means fair discovery systems. That means artist-first economic models.

It’s time for a Bell Labs for music. And it’s time to fund it not through government dependency—but through creator-led coalitions, industry responsibility, and platform accountability.

Because the future of music shouldn’t be written in Silicon Valley boardrooms. It should be composed, engineered, and protected by the people who make it matter.