The OBBBA’s AI Moratorium Provision Has Existential Constitutional Concerns and Policy Implications

As we watch the drama of the One Big Beautiful Bill Act play out, there’s a plot twist waiting in the wings that could create a cliffhanger in the third act: the poorly thought-out, unnecessary and frankly offensive AI moratorium safe harbor that serves only the Biggest of Big Tech, gifted to us by Adam Thierer of the R Street Institute.

The latest version of the AI moratorium poison pill in the Senate version of OBBBA (aka HR1) is quoted in full later in this post; here’s where things stand:

The AI moratorium provision within the One Big Beautiful Bill Act (OBBBA) reads like the fact pattern for a bar exam crossover question, and it raises significant Constitutional and policy concerns. Before it even gets to the President’s desk, the provision likely violates the Senate’s Byrd Rule, which polices the reconciliation process that lets the OBBBA avoid the 60 vote threshold (and the filibuster) and get voted on in “reconciliation” on a simple majority. The President’s party has a narrow simple majority in the Senate, so if it were not for the moratorium the OBBBA should pass.

There are lots of people who think that the moratorium should fail the “Byrd Bath” analysis because it is not “germane” to the budget and tax process required to qualify for reconciliation. This matters because if the Senate Parliamentarian does not hold the line on germaneness, everyone will get into the act on every bill simply by attaching a chunk of money to their favorite donor’s pet policy, and that will not go over well. According to Roll Call, Senator Cruz is already talking about introducing standalone regulatory legislation containing the moratorium, which would likely only happen if the OBBBA poison pill were cut out.

The AI moratorium has already picked up some serious opponents in the Senate who would likely have otherwise voted for the President’s signature legislation with the President’s tax and spending policies in place. The difference between the moratorium and spending cuts is that money is fungible and a moratorium banning states from acting under their police powers really, really, really is not fungible at all. The moratorium is likely going to fail or get close to failing, and if the art of the deal says getting 80% of something is better than 100% of nothing, that moratorium is going to go away in the context of a closing. Maybe.

And don’t forget, the bill has to go back to the House, which passed it by a single vote, and there are already Members of the House getting buyer’s remorse about the AI moratorium specifically. So when they get a chance to vote again…who knows.

Even if it passes, the 40 state Attorneys General who oppose it may be gearing up to launch a Constitutional challenge to the provision on a number of grounds starting with the Tenth Amendment, its implications for federalism, and other Constitutional issues that just drip out of this thing. And my bet is that Adam Thierer will be eyeball witness #1 in that litigation.

So to recap the vulnerabilities:

Byrd Rule Violation

The Byrd Rule prohibits non-budgetary provisions in reconciliation bills. The AI moratorium’s primary effect is regulatory, not fiscal: it preempts state laws without directly impacting federal revenues or expenditures. Senators including Ed Markey (D-MA) have indicated intentions to challenge the provision under the Byrd Rule, as reported by Roll Call and The Hill.

Federal Preemption, the Tenth Amendment and Anti-Commandeering Doctrine

The Tenth Amendment famously reserves powers not delegated to the federal government to the states and to the people (remember them?). The constitutional principle of “anticommandeering” is a doctrine under U.S. Constitutional law that prohibits the federal government from compelling states or state officials to enact, enforce, or administer federal regulatory programs.

Anticommandeering is grounded primarily in the Tenth Amendment. Under this principle, while the federal government can regulate individuals directly under its enumerated powers (such as the Commerce Clause), it cannot force state governments to govern according to federal instructions. Which is, of course, exactly what the moratorium does, although the latest version would have you believe that the feds aren’t really commandeering, they are just tying behavior to money, which the feds do all the time. I doubt anyone believes it.

The AI moratorium infringes upon the good old Constitution by:

  • Overriding State Authority: It prohibits states from enacting or enforcing AI regulations, infringing upon their traditional police powers to legislate for the health, safety, and welfare of their citizens.
  • Lack of Federal Framework: Unlike permissible federal preemption, which operates within a comprehensive federal regulatory scheme, the AI moratorium lacks such a framework, making it more akin to unconstitutional commandeering.
  • Precedent in Murphy v. NCAA: In Murphy v. NCAA (2018), the Supreme Court held that Congress cannot issue direct orders to state legislatures, including orders prohibiting states from enacting laws, because such commands violate the anti-commandeering principle. The AI moratorium, by preventing states from regulating AI, mirrors the unconstitutional aspects identified in Murphy. So there’s that.

The New Problem: Coercive Federalism

By conditioning federal broadband funds (“BEAD money”) on states’ agreement to pause AI regulations, the provision exerts undue pressure on states, potentially violating the anti-coercion principles established in cases like NFIB v. Sebelius. Plus, the Broadband Equity, Access, and Deployment (BEAD) Program is a $42.45 billion federal initiative established under the Infrastructure Investment and Jobs Act of 2021. Administered by the National Telecommunications and Information Administration (NTIA), BEAD aims to expand high-speed internet access across the United States by funding planning, infrastructure deployment, and adoption programs. In other words, BEAD has nothing to do with the AI moratorium. So there’s that.

Supremacy Clause Concerns

The moratorium may conflict with existing state laws, leading to legal ambiguities and challenges regarding federal preemption. That’s one reason why 40 state AGs are going to the mattresses for the fight.

Lawmakers Getting Cold Feet or In Opposition

Several lawmakers have voiced concerns or opposition to the AI moratorium:

  • Rep. Marjorie Taylor Greene (R-GA): Initially voted for the bill but later stated she was unaware of the AI provision and would have opposed it had she known. She has said that she will vote no on the OBBBA when it comes back to the House if Mr. T’s moratorium poison pill is still in there.
  • Sen. Josh Hawley (R-MO): Opposes the moratorium, emphasizing the need to protect individual rights over corporate interests.
  • Sen. Marsha Blackburn (R-TN): Expressed concerns that the moratorium undermines state protections, particularly referencing Tennessee’s AI-related laws.
  • Sen. Edward Markey (D-MA): Intends to challenge the provision under the Byrd Rule, citing its potential to harm vulnerable communities.

Recommendation: Allow Dissenting Voices

Full disclosure, I don’t think Trump gives a damn about the AI moratorium. I also think this is performative and is tied to giving the impression to people like Masa at Softbank that he tried. It must be said that Masa’s billions are not quite as important after Trump’s Middle East roadshow as they were before, speaking of leverage. While much has been made of the $1 million contributions that Zuckerberg, Tim Apple, & Co. made to attend the inaugural, there’s another way to look at that tableau–remember in Titus Andronicus when the general returned to Rome with Goth prisoners in chains following his chariot? Those were Tamora, the Queen of the Goths, her three sons Alarbus, Chiron, and Demetrius, along with Aaron the Moor. Titus and the Goths still hated each other. Just sayin’.

Somehow I wouldn’t be surprised if this entire exercise was connected to the TikTok divestment in ways that aren’t entirely clear. So, given the constitutional concerns and growing opposition, it is advisable for President Trump to permit members of Congress to oppose the AI moratorium provision without facing political repercussions, particularly since Rep. Greene has already said she’s a no vote–on a bill that passed 215-214 the first time around. This approach would:

  • Respect the principles of federalism and states’ rights.
  • Tell Masa he tried, but oh well.
  • Demonstrate responsiveness to legitimate legislative concerns on a bi-partisan basis.
  • Ensure that the broader objectives of the OBBBA are not jeopardized by a contentious provision.

Let’s remember: The tax and spend parts of OBBBA are existential to the Trump agenda; the AI moratorium definitely is not, no matter what Mr. T wants you to believe. While the OBBBA encompasses significant policy initiatives that are highly offensive to a lot of people, the AI moratorium provision presents constitutional and procedural defects that warrant its removal. Cutting it out will strengthen the bill’s likelihood of passing and uphold the foundational principles of American governance, at least for now.

Hopefully Trump looks at it that way, too.

How the AI Moratorium Threatens Local Educational Control

The proposed federal AI moratorium currently in the One Big Beautiful Bill Act states:

[N]o State or political subdivision thereof may enforce, during the 10-year period beginning on the date of the enactment of this Act, any law or regulation of that State or a political subdivision thereof limiting, restricting, or otherwise regulating artificial intelligence models, artificial intelligence systems, or automated decision systems entered into interstate commerce.

What is a “political subdivision”?  According to a pretty standard definition offered by the Social Security Administration:

A political subdivision is a separate legal entity of a State which usually has specific governmental functions.  The term ordinarily includes a county, city, town, village, or school district, and, in many States, a sanitation, utility, reclamation, drainage, flood control, or similar district.

The proposed moratorium would prevent school districts—classified as political subdivisions—from adopting policies that regulate artificial intelligence. This includes rules restricting students’ use of AI tools such as ChatGPT, Gemini, or other platforms in school assignments, exams, and academic work. Districts may be unable to prohibit AI-generated content in essays, discipline AI-related cheating, or require disclosures about AI use unless they write broad rules for ‘unauthorized assistance’ in general or something like that.

Without clear authority to restrict AI in educational contexts, school districts will likely struggle to maintain academic integrity or to update honor codes. The moratorium could even interfere with schools’ ability to assess or certify genuine student performance. 

Parallels with Google’s Track Record in Education

The dangers of preempting local educational control over AI echo prior controversies involving Google’s deployment of tools like Chromebooks, Google Classroom, and Workspace for Education in K–12 environments. Despite being marketed as free and privacy-safe, Google has repeatedly been accused of covertly tracking students, profiling minors, and failing to meet federal privacy standards. It’s entirely likely that Google has integrated its AI into all of its platforms, including those used in school districts, so Google could raise the AI moratorium as a safe harbor defense to claims by parents or schools that its products violate privacy or other rights.

A 2015 complaint by the Electronic Frontier Foundation (EFF) alleged that Google tracked student activity even with privacy settings enabled, although this was probably an EFF ‘big help, little bad mouth’ situation. New Mexico sued Google in 2020 for collecting student data without parental consent. Most recently, lawsuits in California allege that Google continues to fingerprint students and gather metadata despite educational safeguards.

Although the EFF filed an FTC complaint against Google in 2015, it did not launch a broad campaign or litigation strategy afterward. Critics argue that EFF’s muted follow-up may reflect its financial ties to Google, which has funded the organization in the past. This creates a potential conflict: while EFF publicly supports student privacy, its response to Google’s misconduct has been comparatively restrained.

This has led to the suggestion that EFF operates in a ‘big help, little bad mouth’ mode—providing substantial policy support to Google on issues like net neutrality and platform immunity, while offering limited criticism on privacy violations that directly affect vulnerable groups like students.

AI Use in Schools vs. Google’s Educational Data Practices: A Dangerous Parallel

The proposed AI moratorium would prevent school districts from regulating how artificial intelligence tools are used in classrooms—including tools that generate student work or analyze student behavior. This prohibition becomes even more alarming when we consider the historical abuses tied to Google’s education technologies, which have long raised concerns about student profiling and surveillance.

Over the past decade, Google has aggressively expanded its presence in American classrooms through products like Google Classroom, Chromebooks with Google Workspace for Education, and Google Docs and Gmail for student accounts.

Although marketed as free tools, these services have been criticized for tracking children’s browsing behavior and location; storing search histories even when privacy settings were enabled; creating behavioral profiles for advertising or product development; and sharing metadata with third-party advertisers or internal analytics teams.

Google signed the ed tech industry’s Student Privacy Pledge (launched in 2014) committing to curb these practices—but watchdog groups and investigative journalists have continued to document covert tracking of minors, even in K–12 settings where children cannot legally consent to data collection.

AI Moratorium: Legalizing a New Generation of Surveillance Tools

The AI moratorium would take these concerns a step further by prohibiting school districts from regulating newer AI systems, even if those systems:

  • profile students using facial recognition, emotion detection, or predictive analytics;
  • auto-grade essays and responses;
  • build proprietary datasets on student writing patterns;
  • offer “personalized learning” in exchange for access to sensitive performance and behavior data; or
  • encourage use of generative tools (like ChatGPT) that may store and analyze student prompts and usage patterns.

If school districts cannot ban or regulate these tools, they are effectively stripped of their local authority to protect students from the next wave of educational surveillance.

Contrast in Power Dynamics

| Issue | Google for Education | AI Moratorium Impacts |
| --- | --- | --- |
| Privacy Concerns | Tracked students via Gmail, Docs, and Classroom without proper disclosures. | Prevents districts from banning or regulating AI tools that collect behavioral or academic data. |
| Policy Response | Limited voluntary reforms; Google maintains a dominant K–12 market share. | Preempts all local regulation, even if communities demand stricter safeguards. |
| Legal Remedies | Few successful lawsuits due to weak enforcement of COPPA and FERPA. | Moratorium would block even the potential for future local rules. |
| Educational Impact | Created asymmetries in access and data protection between schools. | Risks deepening digital divides and eroding academic integrity. |

Why It Matters

Allowing companies to introduce AI tools into classrooms—while simultaneously barring school districts from regulating them—opens the door to widespread, unchecked profiling of minors, with no meaningful local oversight. Just as Google was allowed to shape a generation’s education infrastructure behind closed doors, this moratorium would empower new AI actors to do the same, shielded from accountability.

Parents groups should let lawmakers know that the AI moratorium has to come out of the legislation.

Now What? Can the AI Moratorium Survive the Byrd Rule on “Germaneness”?

Yes, the Big Beautiful Bill Act has passed the House of Representatives and is on its way to the Senate–with the AI safe harbor moratorium and its $500,000,000 giveaway appropriation intact. Yes, right next to Medicaid cuts, etc.

So now what? The controversial AI regulation moratorium tucked inside the reconciliation package is still a major point of contention. Critics argue that the provision—which would block state and local governments from enforcing or adopting AI-related laws for a decade—is blatantly non-germane to a budget bill. But what if the AI moratorium, in the context of a broader $500 million appropriation for a federal AI modernization initiative, isn’t so clearly in violation of the Byrd Rule? Just remember–these guys are not babies. They’ve thought about this and they intend to win–that’s why the language survived the House.

Remember, the assumption is that President Trump can’t get the BBB through the Senate in regular order, which would require 60 votes, and instead is going to jam it through under “budget reconciliation” rules, which require only a simple majority vote in the Republican-held Senate. Reconciliation requires that there not be shenanigans (hah) and that the bill actually deal with the budget and not some policy change that is getting sneaked under the tent. Well, what if it’s both?

Let’s consider what the Senate’s Byrd Rule actually requires.

To survive reconciliation, a provision must:
1. Affect federal outlays or revenues;
2. Have a budgetary impact that is not “merely incidental” to its policy effects;
3. Fall within the scope of the congressional instructions to the committees of jurisdiction;
4. Not increase the federal deficit outside the budget window;
5. Not make recommendations regarding Social Security;
6. Not violate Senate rules on germaneness or jurisdiction.

Critics rightly point out that a sweeping 10-year regulatory moratorium in Section 43201(c) smells more like federal policy overreach than fiscal fine-tuning, particularly since it’s pretty clearly a 10th Amendment violation of state police powers. But the moratorium exists within a broader federal AI modernization framework in Section 43201(a) that does involve a substantial appropriation: $500 million allocated for updating federal AI infrastructure, developing national standards, and coordinating interagency protocols. That money is real, scoreable, and central to the bill’s stated purpose.

Here’s the crux of the argument: if the appropriation is deemed valid under the Byrd Rule, the guardrails that enable its effective execution may also be valid – especially if they condition the use of federal funds on a coherent national framework. The moratorium can then be interpreted not as an abstract policy preference, but as a necessary precondition for ensuring that the $500 million achieves its budgetary goals without fragmentation.

In other words, the moratorium could be cast as a budget safeguard. Allowing 50 different state AI rules to proliferate while the federal government invests in a national AI backbone could undercut the very purpose of the expenditure. If that fragmentation leads to duplicative spending, legal conflict, or wasted infrastructure, then the moratorium arguably serves a protective fiscal function.

Precedent matters here. Reconciliation has been used in the past to impose conditions on Medicaid, restrict use of federal education funds, and shape how states comply with federal energy and transportation programs. The Supreme Court has rejected some of these on 10th Amendment grounds (NFIB v. Sebelius), but the Byrd Rule test is about budgetary relevance, not constitutional viability.

And that’s where the moratorium finds its most plausible defense: it is incidental only if you believe the spending exists in a vacuum. In truth, the $500 million appropriation depends on consistent, scalable implementation. A federal moratorium ensures that states don’t undermine the utility of that spending. It may be unwise. It may be a budget buster. It may be unpopular. But if it’s tightly tied to the execution of a federal program with scoreable fiscal effects, it just might survive the Byrd test.

So while artists, civil liberties advocates and state officials rightly decry the moratorium on policy grounds, its procedural fate may ultimately rest on a more mundane calculus: Does this provision help protect federal funds from inefficiency? If the answer is yes—and the appropriation stays—then the moratorium may live on, not because it deserves to, but because it was drafted just cleverly enough to thread the eye of the Byrd Rule needle.

Like I said, these guys aren’t babies and they thought about this because they mean to win. Ideally, somebody should have stopped it from ever getting into the bill in the first place. But since they didn’t, our challenge is going to be stopping it from getting through attached to triple-whip, too-big-to-fail, must-pass signature legislation that Trump campaigned on and was elected on.

And even if we are successful in stopping the AI moratorium safe harbor in the Senate, do you think it’s just going to go away? Will the Tech Bros just say, you got me, now I’ll happily pay those wrongful death claims?

What Bell Labs and Xerox PARC Can Teach Us About the Future of Music

When we talk about the great innovation engines of the 20th century, two names stand out: Bell Labs and Xerox PARC. These legendary research institutions didn’t just push the boundaries of science and technology—they took on hard challenges and delivered breakthroughs. The transistor, the laser, the UNIX operating system, the graphical user interface, and Ethernet networking all trace their origins to these hubs of long-range, cross-disciplinary thinking.

These breakthroughs didn’t happen by accident. They were the product of institutions that were intentionally designed to explore what might be possible outside the pressures of quarterly earnings reports–which in practice means monthly, which means weekly. Bell Labs and Xerox PARC proved that bold ideas need space, time, and a mandate to explore—even if commercial applications aren’t immediately apparent. You cannot solve big problems with an eye on weekly revenues–and I know that because I worked at A&M Records.

Now imagine if music had something like Bell Labs and Xerox PARC.

What if there were a Bell Labs for Music—an independent research and development hub where songwriters, engineers, logisticians, rights experts, and economists could collaborate to solve deep-rooted industry challenges? Instead of letting dominant tech platforms dictate the future, the music industry could build its own innovation engine, tailored to the needs of creators. Let’s consider how similar institutions could empower the music industry to reclaim its creative and economic future, particularly as it confronts AI and its institutional takeover.

Big Tech’s Self-Dealing: A $500 Million Taxpayer-Funded Windfall

While creators are being told to “adapt” to the age of AI, Big Tech has quietly written itself a $500 million check—funded by taxpayers—for AI infrastructure. Buried within the sprawling “innovation and competitiveness” sections of legislation being promoted as part of Trump’s “big beautiful bill,” this provision would hand over half a billion dollars in public funding—more accurately, public debt—to cloud providers, chipmakers, and AI monopolists with little transparency and even fewer obligations to the public.

Don’t bother looking–it will come as no surprise that there are no offsetting provisions for musicians, authors, educators, or even news publishers whose work is routinely scraped to train these AI models. There are no earmarks for building fair licensing infrastructure or consent-based AI training databases. There is no “AI Bell Labs” for the creative economy.

Once again, we see that innovation policy is being written by and for the same old monopolists who already control the platforms and the Internet itself, while the people whose work fills those platforms are left unprotected, uncompensated, and uninformed. If we are willing to borrow hundreds of millions to accelerate private AI growth, we should be at least as willing to invest in creator-centered infrastructure that ensures innovation is equitable—not extractive.

Innovation Needs a Home—and a Conscience

Bell Labs and Xerox PARC were designed not just to build technology, but to think ahead. They solved future challenges, often before the world even knew those challenges existed.

The music industry can—and must—do the same. Instead of waiting for another monopolist to exercise its political clout to grant itself new safe harbors to upend the rules–like AI platforms are doing right now–we can build a space where songwriters, developers, and rights holders collaborate to define a better future. That means metadata that respects rights and tracks payments to creators. That means fair discovery systems. That means artist-first economic models.

It’s time for a Bell Labs for music. And it’s time to fund it not through government dependency—but through creator-led coalitions, industry responsibility, and platform accountability.

Because the future of music shouldn’t be written in Silicon Valley boardrooms. It should be composed, engineered, and protected by the people who make it matter.

How Google’s “AI Overviews” Product Exposes a New Frontier in Copyright Infringement and Monopoly Abuse: Lessons from the Chegg Lawsuit

In February 2025, Chegg, Inc.—a Santa Clara education technology company—filed what I think will be a groundbreaking antitrust lawsuit against Google and Alphabet over Google’s use of “retrieval augmented generation” or “RAG.” Chegg alleges that the search monopolist’s new AI-powered search product, AI Overviews, is the latest iteration of its longstanding abuse of monopoly power.

The Chegg case may be the first major legal test of how RAG tools, like those powering Google’s AI search features, can be weaponized to maintain dominance in a core market—while gutting adjacent industries.

What Is at Stake?

Chegg’s case is more than a business dispute over search traffic. It’s a critical turning point in how regulators, courts, and the public understand Google’s dual role as:
– The gatekeeper of the web, and
– The competitor to every content publisher, educator, journalist, or creator whose material feeds its systems.

According to Chegg, Google’s AI Overviews scrapes and repackages publisher content—including Chegg’s proprietary educational explanations—into neatly summarized answers, which are then featured prominently at the top of search results. These AI responses provide zero compensation and little visibility for the original source, effectively diverting traffic and revenue from publishers who are still needed to produce the underlying content. Very Googley.

Chegg alleges it has experienced a 49% drop in non-subscriber traffic from Google searches, directly attributing the collapse to the introduction of AI Overviews. Google, meanwhile, offers its usual “What, Me Worry?” defense and insists its AI summaries enhance the user experience and are simply the next evolution of search—not a monopoly violation. Yeah, right, that’s the ticket.

But the implications go far beyond Chegg’s case.

Monopoly Abuse, Evolved for AI

The Chegg lawsuit revives a familiar pattern from Google’s past:

– In the 2017 Google Shopping case, the EU fined Google €2.42 billion for self-preferencing—boosting its own comparison shopping service in search while demoting rivals.
– In the U.S. DOJ monopoly case (2020–2024), a federal court found that Google illegally maintained its monopoly by locking in default search placement on mobile browsers and devices.

Now with AI Overviews, Google is not just favoring its own product in the search interface—it is repurposing the product of others to power that offering. And unlike traditional links, AI Overviews can satisfy a query without any click-through, undermining both the economic incentive to create content and the infrastructure of the open web.

Critically, publishers who have opted out of AI training via robots.txt or Google’s own tools like Google-Extended find that this does not block RAG-based uses in AI Overviews—highlighting a regulatory gap that Google exploits. This should come as no surprise given Google’s long history of loophole-seeking arbitrage.

Implications Under EU Law

The European Union should take note. Article 102 of the Treaty on the Functioning of the European Union (TFEU) prohibits dominant firms from abusing their market position to distort competition. The same principles that justified the €2.42B Google Shopping fine and the 2018 €4.1B Android fine apply here:

– Leveraging dominance in general search to distort competition in education, journalism, and web publishing.
– Self-preferencing and vertical integration via AI systems that cannibalize independent businesses.
– Undermining effective consent mechanisms (like AI training opt-outs) to maintain data advantage.

Chegg’s case may be the canary in the coal mine for what’s to come globally as more AI systems become integrated into dominant platforms. Google’s strategy with AI Overviews represents not just feature innovation, but a structural shift in how monopolies operate: they no longer just exclude rivals—they absorb them.

A Revelatory Regulatory Moment

The Chegg v. Google case matters because it pushes antitrust law into the AI litigation arena. It challenges regulators to treat search-AI hybrids as more than novel tech. They are economic chokepoints that extend monopoly control through invisible algorithms and irresistible user interfaces.

Rights holders, US courts and the European Commission should watch closely: this is not just a copyright fight—it’s a competition law flashpoint.

How RAG Affects Different Media and Web Publishers

Note: RAG systems can use audiovisual content, but typically through textual intermediaries like transcripts, not by directly retrieving and analyzing raw audio/video files. But that could be next.

| Category | Examples of Rights Holders | How RAG Uses the Content |
| --- | --- | --- |
| Film Studios / Scriptwriters | Paramount, Amazon, Disney | Summarizes plots, reviews, and character arcs (e.g., ‘What happens in Oppenheimer?’) |
| Music Publishers / Songwriters | Universal, Concord, Peer / Taylor Swift / Bob Dylan / Kendrick Lamar | Displays lyrics, interpretations, and credits (e.g., ‘Meaning of Anti-Hero by Taylor Swift’) |
| News Organizations | CNN, Reuters, BBC | Generates summaries from live news feeds (e.g., ‘What’s happening in Gaza today?’) |
| Book Publishers / Authors | HarperCollins, Hachette, Macmillan | Synthesizes themes, summaries, and reviews (e.g., ‘Theme of Beloved by Toni Morrison’) |
| Gaming Studios / Reviewers | GameFAQs, IGN, Reddit | Explains gameplay strategies using fan walkthroughs (e.g., ‘How to defeat Fire Giant in Elden Ring’) |
| Visual Artists / Photojournalists | ArtNet, Museum Sites, Personal Portfolios | Explains style and methods from exhibition texts and bios (e.g., ‘How does Banksy create his art?’) |
| Podcasters / Transcription Services | Podcast transcripts, show notes | Pulls quotes and summaries from transcript databases (e.g., ‘What did Ezra Klein say about AI regulation?’) |
| Educational Publishers / EdTech | Khan Academy, Chegg, Pearson | Delivers step-by-step solutions and concept explanations (e.g., ‘Explain the Pythagorean Theorem’) |
| Science and Medical Publishers | Mayo Clinic, MedlinePlus, PubMed | Answers medical questions with clinical and scientific data (e.g., ‘Symptoms of lupus’) |
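The table above describes what RAG does with each category of work; here is a toy sketch in Python of how the mechanism works, assuming a hypothetical two-document publisher corpus, naive keyword retrieval, and a stand-in function where a production system would call a large language model. Every name and data value here is an illustrative assumption, not any vendor’s actual pipeline.

# Toy sketch of retrieval augmented generation (RAG).
# All corpus text and the fake "LLM" call are illustrative stand-ins.

CORPUS = [
    {"source": "publisher-a.example",
     "text": "The Pythagorean Theorem states that a^2 + b^2 = c^2 for right triangles."},
    {"source": "publisher-b.example",
     "text": "Officials announced new broadband funding under the BEAD program today."},
]

def retrieve(query, corpus, k=1):
    """Rank documents by naive keyword overlap with the query
    (real systems use web-scale indexes and neural retrieval)."""
    words = set(query.lower().split())
    scored = sorted(corpus,
                    key=lambda doc: len(words & set(doc["text"].lower().split())),
                    reverse=True)
    return scored[:k]

def generate(query, passages):
    """Stand-in for the model call: a real system sends the query
    plus the retrieved publisher text to an LLM for summarization."""
    context = " ".join(doc["text"] for doc in passages)
    return f"AI Overview-style answer synthesized from retrieved text: {context}"

query = "Explain the Pythagorean Theorem"
print(generate(query, retrieve(query, CORPUS)))

The point of the sketch is the economics: the reader gets the substance of the publisher’s page from the generated summary, and the publisher may get no click-through at all, which is precisely the injury Chegg alleges.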

Machines Don’t Let Machines Do Opt Outs: Why robots.txt won’t get it done for AI Opt Outs

[The following is based on an excerpt from the Artist Rights Institute’s submission to the UK Intellectual Property Office consultation on a UK AI legislative proposal]

The fundamental element of any rights reservation regime is knowing which work is being blocked by which rights owner.  This will require creating a metadata identification regime for all works of authorship, a regime that has never existed and must be created from whole cloth.  As the IPO is aware, metadata for songs is quite challenging as was demonstrated in the IPO’s UK Industry Agreement on Music Streaming Metadata Working Groups.

Using machine-readable formats for reservations sounds like it would be an easy fix, but it creates an enormous burden on the artist, i.e., the target of the data scraper, and is a major gift to the AI platform delivered by government.  We can look to the experience with robots.txt for guidance.

Using a robots.txt file or similar “do not index” file puts far too big a bet on machines getting it right in the silence of the Internet.  Big Tech has used this opt-out mantra for years in a somewhat successful attempt to fool lawmakers into believing that blocking is all so easy.  If only there were a database, even a machine could do it.  And yet there are still massive numbers of webpages copied, and those pages that were copied for search (or the Internet Archive) are now being used to train AI.

It also must be said that a “disallow” signal is designed to work with file types or folders, not millions of song titles or sound recordings (see GEMA’s lawsuits against AI platforms). For example, this robots.txt code will recognize and block a “private-directory” folder but would otherwise allow Google to freely index the site while blocking Bing from indexing images:

User-agent: *
Disallow: /private-directory/

User-agent: Googlebot
Allow: /

User-agent: Bingbot
Disallow: /images/

Theoretically, existing robots.txt files could be configured to block AI crawlers entirely by designating known crawlers as user-agents, such as OpenAI’s GPTBot (the crawler behind ChatGPT); a sketch of that approach follows the list below.  However, there are many known defects by which robots.txt can fail to block web crawlers or AI data scrapers, including:

  • Malicious or non-compliant crawlers might ignore the rules in a robots.txt file and continue to scrape a website despite the directives.
  • Incorrect syntax in a robots.txt file can lead to unintended results, such as not blocking the intended paths or blocking too many paths.
  • Issues with server configuration can prevent the robots.txt file from being correctly read or accessed by crawlers.
  • Content generated dynamically through JavaScript or AJAX requests might not be blocked if robots.txt is not properly configured to account for these resources.
  • Unlisted crawlers or scrapers not known to the user may not adhere to the intended rules.
  • Crawlers using cached versions of a site may bypass rules in a robots.txt file, particularly rules updated since the cache was created.
  • Subdomains and subdirectories outside the scope of the rules may not be blocked as intended.
  • Entire lists of songs, recordings, or audiovisual works may simply be missing from the file.
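For illustration, here is roughly what a “block the AI crawlers” robots.txt looks like, using user-agent tokens the major platforms have published (GPTBot for OpenAI, Google-Extended for Google’s AI training, CCBot for Common Crawl, ClaudeBot for Anthropic). Treat the token list as an assumption that goes stale: compliance is voluntary, new crawlers appear constantly, and as noted above, Google has taken the position that Google-Extended does not govern uses like AI Overviews.

User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: ClaudeBot
Disallow: /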

While robots.txt and similar techniques theoretically are useful tools for managing crawler access, they are not foolproof. Implementing additional security measures, such as IP blocking, CAPTCHA, rate limiting, and monitoring server logs, can help strengthen a site’s defenses against unwanted scraping.  However, like the other tools that were supposed to level the playing field for artists against Big Tech, none of these tools are free, all of them require more programming knowledge than can reasonably be expected, all require maintenance, and at scale, all of them can be gamed or will eventually fail. 
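To give a sense of the programming burden being shifted onto artists, here is a minimal sketch, in Python, of the kind of server-side defense described above: user-agent blocking plus naive per-IP rate limiting, written as a WSGI middleware. The user-agent strings and thresholds are illustrative assumptions, and a spoofed user agent sails right past the first check, which is exactly the “not foolproof” problem.

import time
from collections import defaultdict, deque

# Illustrative scraper user-agent substrings; real lists go stale
# and must be maintained by hand (an assumption for this sketch).
BLOCKED_AGENTS = ("GPTBot", "CCBot", "ClaudeBot")
MAX_REQUESTS = 10     # per-IP request budget...
WINDOW_SECONDS = 60   # ...within this rolling window

class ScraperDefense:
    """WSGI middleware: refuses known scraper user agents and rate limits IPs."""

    def __init__(self, app):
        self.app = app
        self.hits = defaultdict(deque)  # ip -> timestamps of recent requests

    def __call__(self, environ, start_response):
        agent = environ.get("HTTP_USER_AGENT", "")
        ip = environ.get("REMOTE_ADDR", "unknown")

        # User-agent blocking: trivially spoofed, hence not foolproof.
        if any(bot in agent for bot in BLOCKED_AGENTS):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Forbidden"]

        # Naive rolling-window rate limiting per IP address.
        now = time.time()
        window = self.hits[ip]
        while window and now - window[0] > WINDOW_SECONDS:
            window.popleft()
        window.append(now)
        if len(window) > MAX_REQUESTS:
            start_response("429 Too Many Requests", [("Content-Type", "text/plain")])
            return [b"Slow down"]

        return self.app(environ, start_response)

Even this little sketch has to be deployed, tuned, and maintained forever, which is the point: none of it is free, and all of it is the artist’s problem.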

It must be said that all of the headaches and expense of keeping Big Tech out exist because Big Tech so desperately wants to get in.

The difference between blocking a search engine crawler and an AI data scraper (which could each be operated by the same company in the case of Meta, Bing or Google) is that failing to block a search engine crawler is inconvenient for artists, but failing to block an AI data scraper is catastrophic for artists.

Even if the crawlers worked seamlessly, if any of these folders change names and the site admin forgets to update the robots.txt file, the blocking silently fails. That is asking a lot of every website on the Internet.

It must also be said that pages using machine-readable blocking tools may be downranked, particularly by AI platforms closely associated with search engines.  A robots.txt file itself doesn’t directly cause pages to be downranked in search results.  However, it can indirectly affect rankings by limiting search engine crawlers’ access to certain parts of a website.  Here’s how:

  • Restricted Crawling: If you block crawlers from accessing important pages using robots.txt, those pages won’t be indexed. Without indexing, they won’t appear in search results, let alone rank.
  • Crawl Budget Mismanagement: For large websites, search engines allocate a “crawl budget”—the number of pages they crawl in a given time. If robots.txt doesn’t guide crawlers efficiently, that may randomly leave pages unindexed.
  • No Content Evaluation: If a page is blocked by robots.txt but still linked elsewhere, search engines might index its URL without evaluating its content. This can result in poor rankings since the page’s relevance and quality can’t be assessed.
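Incidentally, the “indexed URL without content” problem is why search engines distinguish between a robots.txt Disallow (don’t crawl) and a noindex meta tag (crawl, but don’t index); a page must be crawlable for the noindex tag to be seen at all. A minimal illustration of the two signals:

# robots.txt: stops crawling, but the URL can still be indexed from links elsewhere
User-agent: *
Disallow: /catalog/

<!-- In the page's HTML head: stops indexing, but only works if crawling is allowed -->
<meta name="robots" content="noindex">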

The TDM safe harbor is too valuable and potentially too dangerous to leave to machines.

The Delay’s The Thing: Anthropic Leapfrogs Its Own November Valuation Despite Litigation from Authors and Songwriters in the Heart of Darkness

If you’ve read Joseph Conrad’s Heart of Darkness, you’ll be familiar with the Congo Free State, a private colony of Belgian King Leopold II that is today largely the Democratic Republic of the Congo. When I say “private” I mean literally privately owned by his Leopoldness. Why would old King Leo be so interested in owning a private colony in Africa? Why for the money, of course. Leo had to move some pieces around the board and get other countries to allow him to get away with essentially “buying” the place, if “buying” is the right description.

So Leo maneuvered the European powers into an international conference in Berlin to discuss the idea and get international buy-in, kind of like the World Economic Forum with worse food and no skiing. Rather than acknowledging his very for-profit intention to ravage the Congo for ivory (aka slaughtering elephants) and rubber (the grisly extraction of which was accomplished by uncompensated slave labor) with brutal treatment of all concerned, Leo convinced the assembled nations that his intentions were humanitarian and philanthropic. You know, don’t be evil. Just lie.

Of course, however much King Leopold may have foreshadowed our sociopathic overlords from Silicon Valley, it must be said that what Leo would really envy is not so much the money as what he could have done with AI himself had he only known. Oh well, he just had to make do with Kurtz.

Which brings us to AI in general and Anthropic in particular. Anthropic’s corporate slogan is equally humanitarian and philanthropic: “Anthropic is an AI research company that focuses on the safety and alignment of AI systems with human values.” Oh yes, all very jolly.

All very innocent and high-minded, until you get punched in the face (to coin a phrase). It turns out–quelle horreur–that Anthropic stands accused of massive copyright infringement rather than lauded for its humanitarianism. Even more shocking? The company’s valuation is going through the stratosphere! These innocents surely must be falsely accused! The VCs are voting with their bucks, and they wouldn’t put their shareholders’ money or their limited partners’ money on the line for a–RACKETEER INFLUENCED AND CORRUPT ORGANIZATION?!?

Not only have authors brought this class action against Anthropic, which is both Google’s stalking horse and cat’s paw, to mix a metaphor, but the songwriters and music publishers have sued them as well. Led by Concord and Universal, the publishers have sued for largely the same reasons as the authors but for their quite distinct copyrights.

So let’s understand the game that’s being played here–as the Artist Rights Institute submitted in a comment to the UK Intellectual Property Office in the IPO’s current consultation on AI and copyright, the delay is the thing. And thanks to Anthropic, we can now put a valuation on the delay: on top of the $4,000,000,000 the company raised in November 2024, it has since raised another $3,500,000,000. This one company is valued at $61.5 billion, roughly half the annual value of the entire creative industries in the UK and roughly equal to the entire U.S. music industry. No wonder delay is their business model.

However antithetical, copyright and AI must be discussed together for a very specific reason:  Artificial intelligence platforms operated by Google, Microsoft/OpenAI, Meta and the like have scraped and ingested works of authorship from baby pictures to Sir Paul McCartney as fast and as secretly as possible.  And the AI platforms know that the longer they can delay accountability, the more of the world’s culture they will have devoured—or as they might say, the more data they will have ingested.  Not to mention the billions in venture capital they will have raised, just like Anthropic. For the good of humanity, of course, just like old King Leo.

As the Hon. Alison Hume, MP recently told Parliament, this theft is massive and has already happened, another example of why any “opt out” scheme (as had been suggested by the UK government) has failed before it starts:

This week, I discovered that the subtitles from one of my episodes of New Tricks have been scraped and are being used to create learning materials for artificial intelligence.  Along with thousands of other films and television shows, my original work is being used by generative AI to write scripts which one day may replace versions produced by mere humans like me.

This is theft, and it’s happening on an industrial scale.  As the law stands, artificial intelligence companies don’t have to be transparent about what they are stealing.[1]

Any delay[2] in prosecuting AI platforms simply increases their de facto “text and data mining” safe harbor while they scrape ever more of world culture.  As Ms. Hume states, this massive “training” has transferred value to these data-hungry mechanical beasts on an industrial scale that confounds human understanding.  This theft dwarfs even the Internet piracy that drove broadband penetration, Internet advertising and search platforms in the 1999-2010 period.  It must be said that for Big Tech, commerce and copyright are once again inherently linked for even greater profit.

As the Right Honourable Baroness Kidron said in her successful opposition to the UK Government’s AI legislation in the House of Lords:

The Government are doing this not because the current law does not protect intellectual property rights, nor because they do not understand the devastation it will cause, but because they are hooked on the delusion that the UK’s best interests and economic future align with those of Silicon Valley.[3]  

Baroness Kidron identifies a question of central importance that mankind is forced to consider by the sheer political brute force of the AI lobbying steamroller:  What if AI is another bubble like the Dot Com bubble?  AI is, to a large extent, a black box utterly lacking in transparency much less recordkeeping or performance metrics.  As Baroness Kidron suggests, governments and the people who elect them are making a very big bet that AI is not pursuing an ephemeral bubble like the last time.

Indeed, the AI hype has the earmarks of a bubble, just as the Dot Com bubble did.  Baroness Kidron also reminds us of these fallacious economic arguments surrounding AI:

The Prime Minister cited an IMF report that claimed that, if fully realised, the gains from AI could be worth up to an average of £47 billion to the UK each year over a decade. He did not say that the very same report suggested that unemployment would increase by 5.5% over the same period. This is a big number—a lot of jobs and a very significant cost to the taxpayer. Nor does that £47 billion account for the transfer of funds from one sector to another. The creative industries contribute £126 billion per year to the economy. I do not understand the excitement about £47 billion when you are giving up £126 billion.[4]  

As the Hon. Chris Kane, MP said in Parliament, the Government runs the risk of enabling a wealth transfer that produces no new value but would make old King Leo feel right at home:

Copyright protections are not a barrier to AI innovation and competition, but they are a safeguard for the work of an industry worth £125 billion per year, employing over two million people.  We can enable a world where much of this value is transferred to a handful of big tech firms or we can enable a win-win situation for the creative industries and AI developers, one where they work together based on licensed relationships with remuneration and transparency at its heart.


[1] Paul Revoir, AI companies are committing ‘theft’ on an ‘industrial scale’, claims Labour MP – who has written for TV series including New Tricks, Daily Mail (Feb. 12, 2025) available at https://www.dailymail.co.uk/news/article-14391519/AI-companies-committing-theft-industrial-scale-claims-Labour-MP-wrote-TV-shows-including-New-Tricks.html

[2] See, e.g., Kerry Muzzey, [YouTube Delay Tactics with DMCA Notices], Twitter (Feb. 13, 2020) available at https://twitter.com/kerrymuzzey/status/1228128311181578240  (Film composer with Content ID account notes “I have a takedown pending against a heavily-monetized YouTube channel w/a music asset that’s been fine & in use for 7 yrs & 6 days. Suddenly today, in making this takedown, YT decides “there’s a problem w/my metadata on this piece.” There’s no problem w/my metadata tho. This is the exact same delay tactic they threw in my way every single time I applied takedowns against broadcast networks w/monetized YT channels….And I attached a copy of my copyright registration as proof that it’s just fine.”); Zoë Keating, [Content ID secret rules], Twitter (Feb. 6. 2020) available at https://twitter.com/zoecello/status/1225497449269284864  (Independent artist with Content ID account states “[YouTube’s Content ID] doesn’t find every video, or maybe it does but then it has selective, secret rules about what it ultimately claims for me.”).

[3] The Rt. Hon. Baroness Kidron, Speech regarding Data (Use and Access) Bill [HL] Amendment 44A, House of Lords (Jan. 28, 2025) available at https://hansard.parliament.uk/Lords%E2%80%8F/2025-01-28/debates/9BEB4E59-CAB1-4AD3-BF66-FE32173F971D/Data(UseAndAccess)Bill(HL)#contribution-9A4614F3-3860-4E8E-BA1E-53E932589CBF 

[4] Id. 

UK Government’s AI Legislation is Defeated in the House of Lords

The new-ish UK government led by Labour Prime Minister Sir Keir Starmer faced a defeat in the House of Lords regarding its AI bill. The defeat was specifically about measures to protect copyrighted material from being used to train AI models without permission or compensation. Members of the House of Lords (known as “Peers”) voted 145 to 126 in favor of amendments to the UK Government’s Data (Use and Access) Bill proposed by film director Beeban Tania Kidron, the Baroness Kidron (a crossbench peer), which aim to safeguard the intellectual property of creatives. Lady Kidron said:

There is a role in our economy for AI… and there is an opportunity for growth in the combination of AI and creative industries, but this forced marriage on slave terms is not it.

So there’s that. We need a film director in the Senate, don’t you think? Yes, let’s have one of those, please.

Bill Dies With Amendments

The amendments proposed by Baroness Kidron received cross-party support (what would be called “bi-partisan” in the US, but the UK has more than two political parties represented in Parliament). The amendments include provisions to ensure, among other things, that AI companies comply with UK copyright law, disclose the names and owners of web crawlers doing their dastardly deeds in the dark recesses of the Internet, and allow copyright owners to know when and how their work is used. It might even protect users of Microsoft or Google products from having their drafts crawled and scraped for AI training.

This defeat highlights the growing concerns within Parliament about the unregulated use of copyrighted material by major tech firms. Starmer’s Data (Use and Access) Bill was proposed by the UK government to excuse the use of copyrighted material by AI models. However, thanks in part to Lady Kidron, it faced significant opposition in the House of Lords, leading to the government’s defeat on the amendments.

Here’s a summary of why it failed:

  1. Cross-Party Support for Amendments: The amendments proposed by Baroness Kidron received strong support from both Labour and Conservative peers. They argued that the bill needed stronger measures to protect the intellectual property of creatives.
  2. Transparency and Redress: The amendments aimed to improve transparency by requiring AI companies to disclose the names and owners of web crawlers and allowing copyright owners to know when and how their work is used.
  3. Government’s Preferred Option: The government suggested an “opt-out” system for text and data mining, which would allow AI developers to scrape copyrighted content unless rights holders actively opted out. This approach was heavily criticized as it would lead to widespread unauthorized use of intellectual property, or as we might say in Texas, that’s bullshit for starters.
  4. Economic Impact: Supporters of the amendments argued that the bill, in its original form, would transfer wealth from individual creatives and small businesses to big tech companies, undermining the sustainability of the UK’s creative industries. Because just like Google’s products, it was a thinly disguised wealth transfer.

Several prominent artists have voiced their opposition to the UK government’s AI bill, with Sir Elton John and Sir Paul McCartney among the most prominent critics. They argued that the government’s proposed changes would allow AI companies to use copyrighted material without proper compensation, which could threaten the livelihoods of artists, especially emerging ones.

Elton John expressed concerns that the bill would enable big tech companies to “ride roughshod over traditional copyright laws,” potentially diluting and threatening young artists’ earnings. As a fellow former member of Long John Baldry’s backup band, I say well done, Reg. Paul McCartney echoed these sentiments, emphasizing that the new laws would allow AI to rip off creators and hinder younger artists who might not have the means to protect their work–and frankly, the older artists don’t have them either when going up against Google and Microsoft, backed by Softbank and freaking countries.

Their opposition highlights the broader concerns within the creative community, including organizations like the Ivors Academy and ECSA, about the potential impact of AI on artists’ rights and earnings.

Role of the House of Lords

The House of Lords is one of the two houses of the UK Parliament, the other being the House of Commons. It plays a crucial role in the legislative process and functions as a revising chamber. Here are some key aspects of the House of Lords:

Functions of the House of Lords

  1. Scrutiny and Revision of Legislation:
    • The House of Lords reviews and scrutinizes bills passed by the House of Commons.
    • It can suggest amendments and revisions to bills, although it cannot ultimately block legislation.
  2. Debate and Deliberation:
    • The Lords engage in detailed debates on a wide range of issues, contributing their expertise and experience.
    • These debates can influence public opinion and policy-making.
  3. Committees:
    • The House of Lords has several committees that investigate specific issues, scrutinize government policies, and produce detailed reports.
    • Committees play a vital role in examining the impact of proposed legislation and holding the government to account.
  4. Checks and Balances:
    • The House of Lords acts as a check on the power of the House of Commons and the executive branch of the government.
    • It ensures that legislation is thoroughly examined and that diverse perspectives are considered.

Composition of the House of Lords

  • Life Peers: Appointed by the King on the advice of the Prime Minister, these members serve for life but do not pass on their titles.
  • Bishops: A number of senior bishops from the Church of England have seats in the House of Lords.
  • Hereditary Peers: A limited number of hereditary peers remain, but most hereditary peerages no longer carry the right to sit in the House of Lords.
  • Law Lords: Senior judges who used to sit in the House of Lords as the highest court of appeal, a function now transferred to the Supreme Court of the United Kingdom.

Limitations

While the House of Lords can delay legislation and suggest amendments, it does not have the power to prevent the House of Commons from passing laws. Its role is more about providing expertise, revising, and advising rather than blocking legislation.

Now What?

Following the defeat in the House of Lords, the government’s Data (Use and Access) Bill will need to be reconsidered by the UK government. They will have to decide whether to accept the amendments proposed by the Lords or to push back and attempt to pass the bill in its original form.

It’s not entirely unusual for Labour peers to vote against a Labour government, especially on issues where they have strong differing opinions or concerns. The House of Lords operates with a degree of independence from the House of Commons, where I would say it would be highly unusual for the government to lose a vote on something as visible as the AI issue.

The AI bill would no doubt be a “triple whip vote” (what the British call a three-line whip), a strict instruction issued by a political party to its members, usually in the House of Commons (in this case the Labour Party), requiring them to attend a vote and vote according to the party’s official stance to support the Government. It’s the most serious form of voting instruction, indicating that the vote is crucial and that party discipline must be strictly enforced. Despite the sadomasochistic overtones of a “triple whip,” familiar as caning to British public school boys, peers in the Lords often vote based on their own judgment and expertise rather than strict party lines. This can lead to situations where Labour peers might oppose government proposals if they believe it is in the best interest of the public or aligns with their principles. Imagine that!

So, while it’s not the norm, it’s also not entirely unexpected for Labour peers to vote against a Labour government when significant issues are at stake like, oh say the destruction of the British creative industries.

Crucially, the government is currently consulting on the issue of text and data mining through the Intellectual Property Office. The IPO is accepting public comments on the AI proposals with a deadline of February 25, 2025. This feedback will likely influence their next steps. Did I say that the IPO is accepting public comments, even from Americans? Hint, hint. Read all about the IPO consultation here.

Big Tech’s Misapprehensions About the AI Appropriation Invasion: Artist Rights are Not “Regulation”

It was a rough morning. I ran across both reports from Davos where they are busy blowing AI bubbles yet again and also read about a leading Silicon Valley apologist discussing the current crop of AI litigation. That was nauseating. But once the bile settled down, I had a realization: This is all straight out of the Woodrow Wilson rule-by-technocrats playbook.

Wilson believed that experts and intellectuals, rather than the voting public, should guide the creation and implementation of public policy. The very model of a modern technocrat. The present day technocrats and their enablers in the legal profession are heirs to Wilsonian rule by experts. They view copyright and other human rights of artists as regulation impeding innovation. Innovation is the godhead to which all mankind must–emphasis on must–aspire, whether mankind likes it or not.

Not human rights–artist rights are human rights, so that proposition cannot be allowed. The technocrats want to normalize “innovation” as the superior value that need not be humanized or even explained. Artist rights must yield and even be shattered in the advance of “innovation”. The risible Lessig is already talking about “the right to train” for AI, a human rights exception you can drive a tank through, as is his wont in the coin-operated policy laundry. In Wilsonian tradition, we are asked to believe that public policy must be the handmaiden to appropriation by technology even if by doing so the experts destroy culture.

We went through this before with Internet piracy. There are many familiar faces in the legal profession showing up on AI cases who were just getting warmed up on the piracy cases of the 1999-2015 period that did their best to grind artist rights into bits. AI goes far beyond the massive theft and wealth transfer that put a generation of acolyte children through prep school and higher education. AI takes extracting profit from cultural appropriation to a whole new level–it’s shoplifting compared to carpet bombing.

“I got the shotgun, you got the brief case…”

And since the AI lawyers are fascinated by Nazi metaphors, let me give you one myself: Internet piracy is to Guernica what AI is to Warsaw. The Luftwaffe was essentially on a training run when they bombed Guernica during the Spanish Civil War. Guernica was a warm up act; the main event was carpet bombing a culture out of existence in Warsaw and after. It was all about the Luftwaffe testing and refining their aerial bombing tactics that opened the door to hell and allowed Satan to walk the Earth swishing his tail as he does to this day. But in the words of Stefan Starzyński, the Mayor of Warsaw who broadcast through the German attack, “We are fighting for our freedom, for our honor, and for our future. We will not surrender.”

This is what these crusader technocrats do not seem to understand no matter how they enrich themselves from the wealth transfer of cultural appropriation. AI litigation and policy confrontation is not about the money–there is no license fee big enough and nobody trusts Silicon Valley to give a straight count in any event.

Artists, songwriters, authors and other creators have nowhere to go. The battle of human rights against the AI appropriation invasion may well be humanity’s last stand.

@FTC: AI (and other) Companies: Quietly Changing Your Terms of Service Could Be Unfair or Deceptive

An important position paper from the Federal Trade Commission about AI (emphasis mine where indicated):

You may have heard that “data is the new oil”—in other words, data is the critical raw material that drives innovation in tech and business, and like oil, it must be collected at a massive scale and then refined in order to be useful. And there is perhaps no data refinery as large-capacity and as data-hungry as AI.

Companies developing AI products, as we have noted, possess a continuous appetite for more and newer data, and they may find that the readiest source of crude data are their own userbases. But many of these companies also have privacy and data security policies in place to protect users’ information. These companies now face a potential conflict of interest: they have powerful business incentives to turn the abundant flow of user data into more fuel for their AI products, but they also have existing commitments to protect their users’ privacy….

It may be unfair or deceptive for a company to adopt more permissive data practices—for example, to start sharing consumers’ data with third parties or using that data for AI training—and to only inform consumers of this change through a surreptitious, retroactive amendment to its terms of service or privacy policy. (emphasis in original)…

The FTC will continue to bring actions against companies that engage in unfair or deceptive practices—including those that try to switch up the “rules of the game” on consumers by surreptitiously re-writing their privacy policies or terms of service to allow themselves free rein to use consumer data for product development. Ultimately, there’s nothing intelligent about obtaining artificial consent.

Read the post on FTC