How the AI Moratorium Threatens Local Educational Control

The proposed federal AI moratorium currently in the One Big Beautiful Bill Act states:

[N]o State or political subdivision thereof may enforce, during the 10-year period beginning on the date of the enactment of this Act, any law or regulation of that State or a political subdivision thereof limiting, restricting, or otherwise regulating artificial intelligence models, artificial intelligence systems, or automated decision systems entered into interstate commerce.

What is a “political subdivision”?  According to a pretty standard definition offered by the Social Security Administration:

A political subdivision is a separate legal entity of a State which usually has specific governmental functions.  The term ordinarily includes a county, city, town, village, or school district, and, in many States, a sanitation, utility, reclamation, drainage, flood control, or similar district.

The proposed moratorium would prevent school districts—classified as political subdivisions—from adopting policies that regulate artificial intelligence. This includes rules restricting students’ use of AI tools such as ChatGPT, Gemini, or other platforms in school assignments, exams, and academic work. Districts may be unable to prohibit AI-generated content in essays, discipline AI-related cheating, or require disclosures about AI use unless they fall back on broadly drafted rules against ‘unauthorized assistance’ generally.

Without clear authority to restrict AI in educational contexts, school districts will likely struggle to maintain academic integrity or to update honor codes. The moratorium could even interfere with schools’ ability to assess or certify genuine student performance. 

Parallels with Google’s Track Record in Education

The dangers of preempting local educational control over AI echo prior controversies involving Google’s deployment of tools like Chromebooks, Google Classroom, and Workspace for Education in K–12 environments. Despite being marketed as free and privacy-safe, Google has repeatedly been accused of covertly tracking students, profiling minors, and failing to meet federal privacy standards. It’s entirely likely that Google has integrated its AI into all of its platforms, including those used in school districts, so Google could likely raise the AI moratorium as a safe harbor defense to claims by parents or schools that its products violate privacy or other rights.

A 2015 complaint by the Electronic Frontier Foundation (EFF) alleged that Google tracked student activity even with privacy settings enabled, although this was probably an EFF ‘big help, little bad mouth’ situation (more on that below). New Mexico sued Google in 2020 for collecting student data without parental consent. Most recently, lawsuits in California allege that Google continues to fingerprint students and gather metadata despite educational safeguards.

Although the EFF filed an FTC complaint against Google in 2015, it did not launch a broad campaign or litigation strategy afterward. Critics argue that EFF’s muted follow-up may reflect its financial ties to Google, which has funded the organization in the past. This creates a potential conflict: while EFF publicly supports student privacy, its response to Google’s misconduct has been comparatively restrained.

This has led to the suggestion that EFF operates in a ‘big help, little bad mouth’ mode—providing substantial policy support to Google on issues like net neutrality and platform immunity, while offering limited criticism on privacy violations that directly affect vulnerable groups like students.

AI Use in Schools vs. Google’s Educational Data Practices: A Dangerous Parallel

The proposed AI moratorium would prevent school districts from regulating how artificial intelligence tools are used in classrooms—including tools that generate student work or analyze student behavior. This prohibition becomes even more alarming when we consider the historical abuses tied to Google’s education technologies, which have long raised concerns about student profiling and surveillance.

Over the past decade, Google has aggressively expanded its presence in American classrooms through products like Google Classroom, Chromebooks with Google Workspace for Education, Google Docs, and Gmail for student accounts.

Although marketed as free tools, these services have been criticized for tracking children’s browsing behavior and location; storing search histories even when privacy settings were enabled; creating behavioral profiles for advertising or product development; and sharing metadata with third-party advertisers or internal analytics teams.

Google signed the industry Student Privacy Pledge in 2015 in a bid to curb these practices (the same pledge the EFF later accused it of violating)—but watchdog groups and investigative journalists have continued to document covert tracking of minors, even in K–12 settings where children cannot legally consent to data collection.

AI Moratorium: Legalizing a New Generation of Surveillance Tools

The AI moratorium would take these concerns a step further by prohibiting school districts from regulating newer AI systems, even if those systems:

  • Profile students using facial recognition, emotion detection, or predictive analytics;
  • Auto-grade essays and responses while building proprietary datasets on student writing patterns;
  • Offer “personalized learning” in exchange for access to sensitive performance and behavior data; or
  • Encourage use of generative tools (like ChatGPT) that may store and analyze student prompts and usage patterns.

If school districts cannot ban or regulate these tools, they are effectively stripped of their local authority to protect students from the next wave of educational surveillance.

Contrast in Power Dynamics

Issue | Google for Education | AI Moratorium Impacts
Privacy Concerns | Tracked students via Gmail, Docs, and Classroom without proper disclosures. | Prevents districts from banning or regulating AI tools that collect behavioral or academic data.
Policy Response | Limited voluntary reforms; Google maintains a dominant K–12 market share. | Preempts all local regulation, even if communities demand stricter safeguards.
Legal Remedies | Few successful lawsuits due to weak enforcement of COPPA and FERPA. | Moratorium would block even the potential for future local rules.
Educational Impact | Created asymmetries in access and data protection between schools. | Risks deepening digital divides and eroding academic integrity.

Why It Matters

Allowing companies to introduce AI tools into classrooms—while simultaneously barring school districts from regulating them—opens the door to widespread, unchecked profiling of minors, with no meaningful local oversight. Just as Google was allowed to shape a generation’s education infrastructure behind closed doors, this moratorium would empower new AI actors to do the same, shielded from accountability.

Parents’ groups should let lawmakers know that the AI moratorium has to come out of the legislation.

Now What? Can the AI Moratorium Survive the Byrd Rule on “Germaneness”?

Yes, the One Big Beautiful Bill Act has passed the House of Representatives and is on its way to the Senate–with the AI safe harbor moratorium and its $500,000,000 giveaway appropriation intact. Yes, right next to Medicaid cuts, etc.

So now what? The controversial AI regulation moratorium tucked inside the reconciliation package is still a major point of contention. Critics argue that the provision—which would block state and local governments from enforcing or adopting AI-related laws for a decade—is blatantly non-germane to a budget bill. But what if the AI moratorium, in the context of a broader $500 million appropriation for a federal AI modernization initiative, isn’t so clearly in violation of the Byrd Rule? Just remember–these guys are not babies. They’ve thought about this and they intend to win–that’s why the language survived the House.

Remember, the assumption is that President Trump can’t get the BBB through the Senate in regular order, which would require 60 votes, and instead is going to jam it through under “budget reconciliation” rules, which require only a simple majority in the Republican-held Senate. Reconciliation requires that there not be shenanigans (hah) and that the bill actually deal with the budget rather than sneak a policy change under the tent. Well, what if it’s both?

Let’s consider what the Senate’s Byrd Rule actually requires.

To survive reconciliation, a provision must:
1. Affect federal outlays or revenues;
2. Have a budgetary impact that is not “merely incidental” to its policy effects;
3. Fall within the scope of the congressional instructions to the committees of jurisdiction;
4. Not increase the federal deficit outside the budget window;
5. Not make recommendations regarding Social Security;
6. Not violate Senate rules on germaneness or jurisdiction.

Critics rightly point out that a sweeping 10-year regulatory moratorium in Section 43201(c) smells more like federal policy overreach than fiscal fine-tuning, particularly since it’s pretty clearly a 10th Amendment violation of state police powers. But the moratorium exists within a broader federal AI modernization framework in Section 43201(a) that does involve a substantial appropriation: $500 million allocated for updating federal AI infrastructure, developing national standards, and coordinating interagency protocols. That money is real, scoreable, and central to the bill’s stated purpose.

Here’s the crux of the argument: if the appropriation is deemed valid under the Byrd Rule, the guardrails that enable its effective execution may also be valid – especially if they condition the use of federal funds on a coherent national framework. The moratorium can then be interpreted not as an abstract policy preference, but as a necessary precondition for ensuring that the $500 million achieves its budgetary goals without fragmentation.

In other words, the moratorium could be cast as a budget safeguard. Allowing 50 different state AI rules to proliferate while the federal government invests in a national AI backbone could undercut the very purpose of the expenditure. If that fragmentation leads to duplicative spending, legal conflict, or wasted infrastructure, then the moratorium arguably serves a protective fiscal function.

Precedent matters here. Reconciliation has been used in the past to impose conditions on Medicaid, restrict use of federal education funds, and shape how states comply with federal energy and transportation programs. The Supreme Court has rejected some of these on 10th Amendment grounds (NFIB v. Sebelius), but the Byrd Rule test is about budgetary relevance, not constitutional viability.

And that’s where the moratorium finds its most plausible defense: it is incidental only if you believe the spending exists in a vacuum. In truth, the $500 million appropriation depends on consistent, scalable implementation. A federal moratorium ensures that states don’t undermine the utility of that spending. It may be unwise. It may be a budget buster. It may be unpopular. But if it’s tightly tied to the execution of a federal program with scoreable fiscal effects, it just might survive the Byrd test.

So while artists, civil liberties advocates and state officials rightly decry the moratorium on policy grounds, its procedural fate may ultimately rest on a more mundane calculus: Does this provision help protect federal funds from inefficiency? If the answer is yes—and the appropriation stays—then the moratorium may live on, not because it deserves to, but because it was drafted just cleverly enough to thread the eye of the Byrd Rule needle.

Like I said, these guys aren’t babies and they thought about this because they mean to win. Ideally, somebody should have stopped it from ever getting into the bill in the first place. But since they didn’t, our challenge is going to be stopping it from getting through attached to triple-whipped, too-big-to-fail, must-pass signature legislation that Trump campaigned on and was elected on.

And even if we are successful in stopping the AI moratorium safe harbor in the Senate, do you think it’s just going to go away? Will the Tech Bros just say, you got me, now I’ll happily pay those wrongful death claims?

Winning without Fighting: Strategic Parallels between TikTok and China’s “Assassin’s Mace” Weapons

To fight and conquer in all your battles is not supreme excellence; supreme excellence consists in breaking the enemy’s resistance without fighting.
Sun Tzu, The Art of War (Giles trans.)

In his must-read book The Hundred-Year Marathon, Michael Pillsbury describes China’s “Assassin’s Mace” weapons strategy as strategic systems designed to neutralize superior adversaries, particularly the United States. Assassin’s Mace weapons are asymmetric, cost-effective, and intended to exploit specific vulnerabilities in order to deliver a knockout blow.

Key characteristics include:

  • Asymmetry: Undermines U.S. advantages without matching its power.
  • Concealment: Many programs are secretive and deceptive.
  • Psychological Disruption: Designed to shock and paralyze response.
  • Preemptive Advantage: Intended to disable key systems early in a conflict.

Examples Pillsbury cites include anti-satellite weapons, cyberwarfare tools, EMPs, anti-ship ballistic missiles, and hypersonic glide vehicles.

It must also be said that the PRC has long had a doctrine of “military-civil fusion.” Military-Civil Fusion (MCF) doctrine is a national strategy aimed at integrating civilian industries, research institutions, and private enterprises with military development to enhance the capabilities of the People’s Liberation Army (PLA). The policy seeks to eliminate barriers between China’s civilian and military sectors, ensuring that technological advancements in areas like artificial intelligence (ByteDance is one of the top five AI developers in China), quantum computing, aerospace, and biotechnology serve both economic and defense purposes.

Key aspects of MCF include:

  • Technology Acquisition – The Chinese government encourages the transfer of cutting-edge civilian technologies to military applications, often through state-backed research programs and corporate partnerships.
  • Institutional Integration – The Central Military-Civil Fusion Development Committee, chaired by Xi Jinping, oversees the strategy to ensure seamless coordination between civilian and military entities.
  • Global Concerns – The U.S. and other nations view MCF as a security risk, citing concerns over intellectual property theft and the potential for civilian technologies to be repurposed for military dominance.

MCF is a cornerstone of China’s long-term military modernization, with the goal of developing a world-class military by 2049. If you’re familiar with China’s National Intelligence Law mandating cooperation by the civilian sector with the Ministry of State Security, this should all sound pretty familiar vis-à-vis TikTok.

Comparison to TikTok’s Data Mining and AI Algorithms

While not traditional kinetic weapons, TikTok’s AI and data collection tactics mirror many elements of an Assassin’s Mace—particularly in the information and psychological warfare domains.

Comparison:

Feature | Assassin’s Mace (Military) | TikTok Data/A.I. (Civil-Info)
Asymmetric | Targets U.S. military dependence on tech | Targets U.S. cultural and cognitive weaknesses
Concealed capabilities | Hidden programs in cyberwarfare or space | Opaque algorithms and data harvesting
Psychological effect | Shock and morale disruption | Behavioral influence and identity shaping
Preemptive edge | Deployed early in conflict | Influences before conflict or overt tension
Cost/Attribution | Cheap and hard to detect | Social media disguise, plausible deniability
Dependency creation | Reduces U.S. tech autonomy | Entrenches digital reliance on foreign platform

Strategic Parallels, MCF and National Security Implications

  • Informational Warfare: TikTok’s algorithmic controls may shape narratives aligned with CCP objectives.
  • Data as Weaponized Intel: TikTok collects biometric and behavioral data potentially usable for state profiling or surveillance.
  • AI as Force Multiplier: Data harvested fuels China’s military-linked AI development.
  • Cultural Erosion: Gradual influence can diminish U.S. civic cohesion and resilience.

Surrender Videos and CCP Use of Video as Psychological Operations (PsyOps)

The Chinese Communist Party (CCP) has increasingly leveraged video platforms—including domestic networks like WeChat and global platforms like TikTok—for strategic psychological operations aimed at foreign populations. These campaigns serve to erode morale, stir political divisions, and promote favorable perceptions of the Chinese regime.

A notable example includes the circulation of staged or coerced “surrender videos” purportedly featuring Taiwanese soldiers or civilians pledging allegiance to Beijing. Such footage is designed to sap resistance and cultivate an image of inevitable Chinese dominance over Taiwan, particularly in the event of an invasion or political crisis.

Another instance occurred on TikTok, where a Chinese user posted a video in fluent English urging Americans to support China and reject then-President Trump’s trade and tariff policies. I’m not a huge fan of the tariffs, but I found this video to be very suspicious.

The video called for solidarity with China and implied that U.S. opposition to Chinese economic expansion was both unjust and self-destructive. Though framed as personal opinion, such content aligns with Chinese state interests and is amplified by algorithms that may favor politically charged engagement. These efforts form part of a broader information warfare strategy wherein short-form video is used not only to manipulate algorithms and audience emotions but also to subtly shift public opinion in democracies. By flooding feeds with curated messages, the CCP could exploit free speech protections in adversary nations to inject authoritarian narratives under the guise of popular expression.

TikTok Could be a Combination Punch to Win Without Fighting

TikTok’s AI algorithms and extensive data collection constitute a modern parallel to China’s Assassin’s Mace strategy. Instead of missiles or EMPs, Beijing may be relying on AI-powered cognitive and cultural influence to erode Western resilience over time. This information-first strategy aligns with Pillsbury’s warning that America’s adversaries may seek to win without fighting a conventional war by use of strategic weapons like the Assassin’s Mace. As Master Sun said, win without fighting.

What Bell Labs and Xerox PARC Can Teach Us About the Future of Music

When we talk about the great innovation engines of the 20th century, two names stand out: Bell Labs and Xerox PARC. These legendary research institutions didn’t just push the boundaries of science and technology—they delivered breakthrough solutions to some of the century’s hardest challenges. The transistor, the laser, the UNIX operating system, the graphical user interface, and Ethernet networking all trace their origins to these hubs of long-range, cross-disciplinary thinking.

These breakthroughs didn’t happen by accident. They were the product of institutions that were intentionally designed to explore what might be possible outside the pressures of quarterly earnings reports–which means monthly, which means weekly. Bell Labs and Xerox PARC proved that bold ideas need space, time, and a mandate to explore—even if commercial applications aren’t immediately apparent. You cannot solve big problems with an eye on weekly revenues–and I know that because I worked at A&M Records.

Now imagine if music had something like Bell Labs and Xerox PARC.

What if there were a Bell Labs for Music—an independent research and development hub where songwriters, engineers, logisticians, rights experts, and economists could collaborate to solve deep-rooted industry challenges? Instead of letting dominant tech platforms dictate the future, the music industry could build its own innovation engine, tailored to the needs of creators. Let’s consider how similar institutions could empower the music industry to reclaim its creative and economic future, particularly as it confronts AI and its institutional takeover.

Big Tech’s Self-Dealing: A $500 Million Taxpayer-Funded Windfall

While creators are being told to “adapt” to the age of AI, Big Tech has quietly written itself a $500 million check—funded by taxpayers—for AI infrastructure. Buried within the sprawling “innovation and competitiveness” sections of legislation being promoted as part of Trump’s “big beautiful bill,” this provision would hand over half a billion dollars in public funding—more accurately, public debt—to cloud providers, chipmakers, and AI monopolists with little transparency and even fewer obligations to the public.

Don’t bother looking–it will come as no surprise that there are no offsetting provisions for musicians, authors, educators, or even news publishers whose work is routinely scraped to train these AI models. There are no earmarks for building fair licensing infrastructure or consent-based AI training databases. There is no “AI Bell Labs” for the creative economy.

Once again, we see that innovation policy is being written by and for the same old monopolists who already control the platforms and the Internet itself, while the people whose work fills those platforms are left unprotected, uncompensated, and uninformed. If we are willing to borrow hundreds of millions to accelerate private AI growth, we should be at least as willing to invest in creator-centered infrastructure that ensures innovation is equitable—not extractive.

Innovation Needs a Home—and a Conscience

Bell Labs and Xerox PARC were designed not just to build technology, but to think ahead. They solved future challenges, often before the world even knew those challenges existed.

The music industry can—and must—do the same. Instead of waiting for another monopolist to exercise its political clout to grant itself new safe harbors to upend the rules–like AI platforms are doing right now–we can build a space where songwriters, developers, and rights holders collaborate to define a better future. That means metadata that respects rights and tracks payments to creators. That means fair discovery systems. That means artist-first economic models.

It’s time for a Bell Labs for music. And it’s time to fund it not through government dependency—but through creator-led coalitions, industry responsibility, and platform accountability.

Because the future of music shouldn’t be written in Silicon Valley boardrooms. It should be composed, engineered, and protected by the people who make it matter.

Who’s Coming to Lunch? What Do Personnel Changes at Copyright Office Mean for MLC?

If you’ve been following the news lately, you’ll have heard that President Trump has made some personnel changes at the Library of Congress and at the U.S. Copyright Office, whose head is styled the “Register of Copyrights.” When the dust settles we’ll see if these changes stick, but my bet is they probably will. This is because the President was probably within his authority to replace the Librarian of Congress (a presidential appointee). Remember that the Librarian is a “principal officer of the United States” who ultimately reports to the President. We’ll come back to that point.

Because the Librarian appoints the head of the Copyright Office for an unspecified term and can terminate that person, there’s probably an argument for the President being able to terminate the “Register” directly if there’s a vacancy in the Librarian’s office especially if there’s urgent business before the Copyright Office. Alternatively, there’s definitely an argument for the replacement Librarian, “Acting” or otherwise, to be able to terminate the non-Senate confirmed Register. (See a similar argument from Professor Volokh.)

So whatever the sequence, the result is likely the same. Was it prudent? No. Was it well-handled? No. Is it enforceable? Quite probably. That doesn’t mean that those who are terminated can’t or shouldn’t pursue claims, but I think it does mean that their respective replacements are going to take over. The topic that is front and center in most discussions of these movements is Big Tech’s lobbying on AI and that is well to be concerned about because today is Wednesday and Big Tech is still trying to screw us. In that regard it is a day like any other.

But there is other pending business before the Copyright Office that will now be supervised by a Department of Justice lawyer with an entirely different background and set of relationships from those of all prior Registers. My bet is that the culture at the Copyright Office is about to change. I would say change radically, but I’d be skeptical that anything in Washington changes radically. For example, remember that the Library of Congress/Copyright Office public database apparently runs on an older Oracle database system with COBOL or PL/SQL for data processing. The user interface is HTML with embedded JavaScript and uses CGI or early Java-based web tools for query submission. That’s right–1998 technology. Helloooo DoGE.

One item of pending business is the 5-year redesignation oversight review of the MLC’s operations and a review of the MLC’s investment policy on the $1.2 billion black box (or more) that is gradually inching its way toward a market share distribution with little or no explanation.

For reasons known only to the lobbyists who wrote Title I of the Music Modernization Act, the Copyright Office was given oversight of the MLC and its hedge fund. As anyone who’d studied the culture of the Copyright Office for five minutes could have predicted, that oversight is effectively meaningless. The MLC has simply refused to allow any transparency over its hedge fund—over a billion dollars of other people’s money—and the Copyright Office so far has let that happen. As Guy Forsyth wrote, Americans are freedom-loving people, and nothing says freedom like getting away with it.

So there’s a deeper structural issue with the MLC’s oversight: the Copyright Office is required to review the MLC every five years, but it has no real enforcement powers other than refusing to redesignate the quango, a step that would create a huge disconnect between the sunny narrative of aspirations for the “historic” Title I of the MMA that created the MLC and the dark underbelly of that legislation’s utter failure that no one talks about at parties. Unlike executive agencies like the DOJ, FTC or SEC, the Copyright Office can’t subpoena records, issue fines, or force compliance. Its first five-year review—launched in January 2024—is now grinding on in its second year, with no public recommendations or reforms issued to date despite the requirements of the moment.

With an emphasis on regulatory accountability, the Trump administration might push for more rigorous oversight of the MLC’s operations, including its data practices and how it invests the black box OPM funds. Oversight could be enhanced through a combination of Copyright Office audits and a potential executive branch role—such as a streamlined agency focused on government efficiency. The goal: protect creators’ money and ensure the MLC’s compliance without increasing taxpayer burden. Costs for such oversight could, and arguably should, be charged back to the MLC which is funded by the richest corporations in commercial history.

In fact, beefing up the Copyright Office’s oversight role may actually be required. As Professor Volokh observes:

The answer appears to be that the Library of Congress is actually an Executive Branch department for legal purposes [and not in the Legislative Branch], though it also provides some services to Congress. Indeed, I think it has to be such a department in order to have the authority that it has over the implementation of copyright law (via the Register of Copyrights): As Buckley v. Valeo (1976) made clear, in a less famous part of its holding, Congress can’t appoint heads of agencies that exercise executive powers.

Of course, the Librarian has to be confirmed by the Senate, although under the vacancies rules an acting Librarian has pretty much the full authority of the office for 210 days without Senate confirmation. The Register is not Senate-confirmed, so there’s an odd juxtaposition: Trump’s Acting Librarian could be replaced, but the Register is not subject to the 210-day clock.

This is all down in the weeds in Appointments Clause land. But you get the idea. Paul Perkins, who was serving as an Associate Deputy Attorney General at the U.S. Department of Justice, will soon be looking at the MLC. My understanding is that Mr. Perkins is the deputy of Todd Blanche, who is now taking over as acting Librarian. (Blanche currently serves as the 40th United States Deputy Attorney General, having been confirmed by the Senate; he was formerly a partner at Cadwalader and a federal prosecutor in the SDNY.)

And just wait until DoGE gets a load of that COBOL programming and a billion-dollar hedge fund at a quasi-governmental agency. Remember, the Presidential Signing Statement for the Music Modernization Act–signed by Trump 45–specifically designates the MLC board members as inferior officers of the United States. That means, on a certain level, that they report to the Librarian, a new twist for music business executives. If it comes to a showdown between Trump and the MLC, my money is on Trump. So there’s that.

Time will tell. But one thing is certain: The DOJ lawyer coming in to supervise the entire situation is unlikely to care whether he’ll ever have lunch in that town again.

How Google’s “AI Overviews” Product Exposes a New Frontier in Copyright Infringement and Monopoly Abuse: Lessons from the Chegg Lawsuit

In February 2025, Chegg, Inc.—a Santa Clara education technology company—filed what I think will be a groundbreaking antitrust lawsuit against Google and Alphabet over Google’s use of “retrieval augmented generation” or “RAG.” Chegg alleges that the search monopolist’s new AI-powered search product, AI Overviews, is the latest iteration of its longstanding abuse of monopoly power.

The Chegg case may be the first major legal test of how RAG tools, like those powering Google’s AI search features, can be weaponized to maintain dominance in a core market—while gutting adjacent industries.
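For readers who haven’t met the term, “retrieval augmented generation” simply means the system first retrieves relevant passages from an index of (often third-party) content and then feeds them into a generative model’s prompt. Here is a deliberately toy sketch of that retrieve-then-generate pattern; the three-document corpus, the keyword-overlap scoring, and the prompt template are all invented for illustration and bear no relation to Google’s actual system:

```python
# Toy sketch of retrieval augmented generation (RAG): retrieve relevant
# third-party passages, then build the augmented prompt a generator would
# receive. Corpus and scoring are illustrative stand-ins only.

CORPUS = {
    "edu-explainer": "Photosynthesis converts light energy into chemical energy stored in glucose.",
    "news-item": "The new budget bill includes a ten-year moratorium on state AI regulation.",
    "recipe-post": "Whisk the eggs before folding them into the batter.",
}

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    q_terms = set(query.lower().split())
    ranked = sorted(
        CORPUS.values(),
        key=lambda text: len(q_terms & set(text.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def generate(query: str) -> str:
    """Stuff the retrieved passages into the prompt for a generative model."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

prompt = generate("How does photosynthesis store energy?")
```

The point of the sketch is the economics, not the code: the “context” is someone else’s work product, and the generated answer can satisfy the user without a single click back to the source.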

What Is at Stake?

Chegg’s case is more than a business dispute over search traffic. It’s a critical turning point in how regulators, courts, and the public understand Google’s dual role as:
– The gatekeeper of the web, and
– The competitor to every content publisher, educator, journalist, or creator whose material feeds its systems.

According to Chegg, Google’s AI Overviews scrapes and repackages publisher content—including Chegg’s proprietary educational explanations—into neatly summarized answers, which are then featured prominently at the top of search results. These AI responses provide zero compensation and little visibility for the original source, effectively diverting traffic and revenue from publishers who are still needed to produce the underlying content. Very Googley.

Chegg alleges it has experienced a 49% drop in non-subscriber traffic from Google searches, directly attributing the collapse to the introduction of AI Overviews. Google, meanwhile, offers its usual “What, Me Worry?” defense and insists its AI summaries enhance the user experience and are simply the next evolution of search—not a monopoly violation. Yeah, right, that’s the ticket.

But the implications go far beyond Chegg’s case.

Monopoly Abuse, Evolved for AI

The Chegg lawsuit revives a familiar pattern from Google’s past:

– In the 2017 Google Shopping case, the EU fined Google €2.42 billion for self-preferencing—boosting its own comparison shopping service in search while demoting rivals.
– In the U.S. DOJ monopoly case (2020–2024), a federal court found that Google illegally maintained its monopoly by locking in default search placement on mobile browsers and devices.

Now with AI Overviews, Google is not just favoring its own product in the search interface—it is repurposing the product of others to power that offering. And unlike traditional links, AI Overviews can satisfy a query without any click-through, undermining both the economic incentive to create content and the infrastructure of the open web.

Critically, publishers who have opted out of AI training via robots.txt or Google’s own tools like Google-Extended find that this does not block RAG-based uses in AI Overviews—highlighting a regulatory gap that Google exploits. This should come as no surprise given Google’s long history of loophole-seeking arbitrage.
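For context, the Google-Extended opt-out is expressed as an ordinary robots.txt group—something like the sketch below. (Google-Extended is Google’s published token for AI training opt-outs; as alleged here, blocking it does not keep content out of AI Overviews, which is fed by the regular search index.)

```text
User-agent: Google-Extended
Disallow: /
```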

Implications Under EU Law

The European Union should take note. Article 102 of the Treaty on the Functioning of the European Union (TFEU) prohibits dominant firms from abusing their market position to distort competition. The same principles that justified the €2.42B Google Shopping fine and the 2018 €4.1B Android fine apply here:

– Leveraging dominance in general search to distort competition in education, journalism, and web publishing.
– Self-preferencing and vertical integration via AI systems that cannibalize independent businesses.
– Undermining effective consent mechanisms (like AI training opt-outs) to maintain data advantage.

Chegg’s case may be the canary in the coal mine for what’s to come globally as more AI systems become integrated into dominant platforms. Google’s strategy with AI Overviews represents not just feature innovation, but a structural shift in how monopolies operate: they no longer just exclude rivals—they absorb them.

A Revelatory Regulatory Moment

The Chegg v. Google case matters because it pushes antitrust law into the AI litigation arena. It challenges regulators to treat search-AI hybrids as more than novel tech. They are economic chokepoints that extend monopoly control through invisible algorithms and irresistible user interfaces.

Rights holders, US courts and the European Commission should watch closely: this is not just a copyright fight—it’s a competition law flashpoint.

How RAG Affects Different Media and Web Publishers

Note: RAG systems can use audiovisual content, but typically through textual intermediaries like transcripts, not by directly retrieving and analyzing raw audio/video files. But that could be next.

| Category | Examples of Rights Holders | How RAG Uses the Content |
| --- | --- | --- |
| Film Studios / Scriptwriters | Paramount, Amazon, Disney | Summarizes plots, reviews, and character arcs (e.g., ‘What happens in Oppenheimer?’) |
| Music Publishers / Songwriters | Universal, Concord, Peer; Taylor Swift, Bob Dylan, Kendrick Lamar | Displays lyrics, interpretations, and credits (e.g., ‘Meaning of Anti-Hero by Taylor Swift’) |
| News Organizations | CNN, Reuters, BBC | Generates summaries from live news feeds (e.g., ‘What’s happening in Gaza today?’) |
| Book Publishers / Authors | HarperCollins, Hachette, Macmillan | Synthesizes themes, summaries, and reviews (e.g., ‘Theme of Beloved by Toni Morrison’) |
| Gaming Studios / Reviewers | GameFAQs, IGN, Reddit | Explains gameplay strategies using fan walkthroughs (e.g., ‘How to defeat the Fire Giant in Elden Ring’) |
| Visual Artists / Photojournalists | ArtNet, museum sites, personal portfolios | Explains style and methods from exhibition texts and bios (e.g., ‘How does Banksy create his art?’) |
| Podcasters / Transcription Services | Podcast transcripts, show notes | Pulls quotes and summaries from transcript databases (e.g., ‘What did Ezra Klein say about AI regulation?’) |
| Educational Publishers / EdTech | Khan Academy, Chegg, Pearson | Delivers step-by-step solutions and concept explanations (e.g., ‘Explain the Pythagorean Theorem’) |
| Science and Medical Publishers | Mayo Clinic, MedlinePlus, PubMed | Answers medical questions with clinical and scientific data (e.g., ‘Symptoms of lupus’) |

Jim Hood Was First and He Was Right: Japan Serves Google with Anti-Monopoly C&D

Does “Publicly Available” AI Scraping Mean They Take Everything or Just Anything That’s Not Nailed Down?

Let’s be clear: It is not artificial intelligence as a technology that’s the existential threat. It’s the people who make the decisions about how to train and use artificial intelligence who are the existential threat. Just like nuclear power is not an existential threat—it’s the Tsar Bomba that measured 50 megatons on the bangometer that’s the existential threat.

If you think that the tech bros can be trusted not to use your data scraped from their various consumer products for their own training purposes, please point to the five things they’ve done in the last 20 years that give you that confidence. Or point to even one thing.

Here’s an example. Back in the day when we were trying to build a library of audio fingerprints, we first had to rip millions of tracks in order to create the fingerprints. One employee who came to us from a company with a free email service said that there were millions of emails with audio file attachments just sitting there in users’ sent mail folders. Maybe we could just grab those audio files? Obviously that would be off limits for a host of reasons, but he didn’t see it. It’s not that he was an immoral person—immoral people recognize that there are rules and they just want to break them. He was amoral—he didn’t see the rules and didn’t think anything was wrong with his suggestion.

But the moral of the story–so to speak–is that I fully believe every consumer product is being scraped. That means that there’s a fairly good chance that Google, Microsoft, Meta/Facebook and probably other Big Tech players are using all of their consumer products to train AI. I would not bet against it.

If you think that’s crazy, I would suggest you think again. While these companies keep that kind of thing fairly quiet, it’s not the first time that the issue has come up–Big Tech telling you one thing, but using you to gain a benefit for something entirely different that you probably would never have agreed to had you known.

Take the Google Books saga. The whole point of Google’s effort at digitizing all the world’s books wasn’t because of some do-gooder desire to create the digital library of Alexandria or even the snippets that were the heart of the case. No–it was the “nondisplay uses” like training Google’s translation engine using “corpus machine translation”. The “corpus” of all the digitized books was the real value and of course was the main thing that Google wouldn’t share with the authors and didn’t want to discuss in the case.

Another random example would be “GOOG-411”. We can thank Marissa Mayer for spilling the beans on that one.

According to PC World back in 2010:

Google will close down 1-800-GOOG-411 next month, saying the free directory assistance service has served its purpose in helping the company develop other, more sophisticated voice-powered technologies.

GOOG-411, which will be unplugged on Nov. 12, was the search company’s first speech recognition service and led to the development of mobile services like Voice Search, Voice Input and Voice Actions.

Google, which recorded calls made to GOOG-411, has been candid all along about the motivations behind running the service, which provides phone numbers for businesses in the U.S. and Canada.

In 2007, Google Vice President of Search Products & User Experience Marissa Mayer said she was skeptical that free directory assistance could be viable business, but that she had no doubt that GOOG-411 was key to the company’s efforts to build speech-to-text services.

GOOG-411 is a prime example of how Big Tech plays the thimblerig, especially the “has been candid all along about the motivations behind running the service” part. Doesn’t that phrase just ooze corporate flak? That, as we say in the trade, is a freaking lie.

None of the GOOG-411 collateral ever said, “Hey idiot, come help us get even richer by using our dumbass ‘free’ directory assistance ‘service.’” Just like they’re not saying, “Hey idiot, use our ‘free’ products so we can train our AI to take your job.” That’s the thimblerig, but played at our expense.

This subterfuge has big consequences for people like lawyers. As I wrote in my 2014 piece in Texas Lawyer:

“A lawyer’s duty to maintain the confidentiality of privileged communications is axiomatic. Given Google’s scanning and data mining capabilities, can lawyers using Gmail comply with that duty without their clients’ informed consent? In addition to scanning the text, senders and recipients, Google’s patents for its Gmail applications claim very broad functionality to scan file attachments. (The main patent is available on Google’s site. A good discussion of these patents is in Jeff Gould’s article, “The Natural History of Gmail Data Mining”, available on Medium.)”

Google has made a science of enticing users into giving up free data for Google to evolve even more products that may or may not be useful beyond the “free” part. Does the world really need another free email program? Maybe not, but Google does need a way to snarf down data for its artificial intelligence platforms–deceptively.

Fast forward ten years or so and here we are with the same problem—except it’s entirely possible that all of the Big Tech AI platforms are using their consumer products to train AI. Nothing has changed for lawyers, and some version of these rules would be prudent for anyone with a duty of confidentiality—doctors, accountants, stock brokers, or any of the many licensed professions. Not to mention social workers, priests, and the list goes on. If you call Big Tech on the deception, they will all say that they operate within their privacy policies, “de-identify” user data, only use “public” information, or offer other excuses.

I think the point of all this is that the platforms have far too many opportunities to cross-collateralize our data for the law to permit any confusion about what data they scrape.

What We Think We Know

Microsoft’s AI Training Practices

Microsoft has publicly stated that it does not use data from its Microsoft 365 products (e.g., Word, Excel, Outlook) to train its AI models. The company wants us to believe it relies on “de-identified” data from sources such as Bing searches, Copilot interactions, and “publicly available” information, whatever that means. Microsoft emphasizes its commitment to responsible AI practices, including removing metadata and anonymizing data to protect user privacy. See what I mean? Microsoft takes these precautions, so that makes it all fine.

However, professionals using Microsoft’s tools must remain vigilant. While Microsoft claims not to use customer data from enterprise accounts for AI training, any inadvertent sharing of sensitive information through other Microsoft services (e.g., Bing or Copilot) could pose risks for users, particularly people with a duty of confidentiality like lawyers and doctors. And we haven’t even discussed child users yet.

Google’s AI Training Practices

For decades, Google has faced scrutiny for its data practices, particularly with products like Gmail, Google Docs, and Google Drive. Google’s updated privacy policy explicitly allows the use of “publicly available” information and user data for training its AI models, including Bard and Gemini. While Google claims to anonymize and de-identify data, concerns remain about the potential for sensitive information to be inadvertently included in training datasets.

For licensed professionals, these practices raise significant red flags. Google advises users not to input confidential or sensitive information into its AI-powered tools—very Googley. Human reviewers accessing “de-identified” data is a risk for anyone, but why in the world would you ever trust Google?

Does “Publicly Available” Mean Everything or Does it Mean Anything That’s Not Nailed Down?

These companies speak of “publicly available” data as if data that is publicly available is free to scrape and use for training. So what does that mean?

Based on the context and some poking around, it appears that there is no legally recognizable definition of what “publicly available” actually means. If you were going to draw a line between “publicly available” and the opposite, where would you draw it? You won’t be surprised to know that Big Tech will probably draw the line in an entirely different place than a normal person.

As far as I can tell, “publicly available” data would include data or content that is accessible by a data-scraping crawler or by the general public without a subscription, payment, or special access permissions. This likely includes web pages, posts on social media like baby pictures on Facebook or Instagram, and other platforms that do not restrict access to their content through paywalls, registration requirements, or other barriers like terms of service prohibiting data scraping, API restrictions, or a robots.txt file (which, like a lot of other people including Ed Newton-Rex, I’m skeptical even works).

While discussions of terms of service, notices prohibiting scraping, and automated directions to crawlers sound good, in reality there’s no way to stop a determined crawler. Big Tech’s vulpine lust for data and cold hard cash is not realistically going to be stopped at this point. That existential onslaught is why the world needs to escalate punishment for these violations to a level that may seem extreme, or at least unusually harsh, right now.

Yet the massive and intentional copyright infringement, privacy violations, and who knows what else are so vast that they are beyond civil penalties, particularly for a defendant that seemingly prints money.



Machines Don’t Let Machines Do Opt Outs: Why robots.txt won’t get it done for AI Opt Outs

[The following is based on an excerpt from the Artist Rights Institute’s submission to the UK Intellectual Property Office consultation on a UK AI legislative proposal]

The fundamental element of any rights reservation regime is knowing which work is being blocked by which rights owner.  This will require creating a metadata identification regime for all works of authorship, a regime that has never existed and must be created from whole cloth.  As the IPO is aware, metadata for songs is quite challenging as was demonstrated in the IPO’s UK Industry Agreement on Music Streaming Metadata Working Groups.

Using machine-readable formats for reservations sounds like it would be an easy fix, but it creates an enormous burden on the artist, i.e., the target of the data scraper, and is a major gift to the AI platform delivered by government. We can look to the experience with robots.txt for guidance.

Using a robots.txt file or similar “do not index” file puts far too big a bet on machines getting it right in the silence of the Internet. Big Tech has used this opt-out mantra for years in a somewhat successful attempt to fool lawmakers into believing that blocking is all so easy. If only there were a database, even a machine could do it. And yet massive numbers of webpages are still being copied, and the pages that were copied for search (or the Internet Archive) are now being used to train AI.

It also must be said that a “disallow” signal is designed to work with file types or folders, not millions of song titles or sound recordings (see GEMA’s lawsuits against AI platforms). For example, this robots.txt code will recognize and block a “private-directory” folder but would otherwise allow Google to freely index the site while blocking Bing from indexing images:

User-agent: *
Disallow: /private-directory/

User-agent: Googlebot
Allow: /

User-agent: Bingbot
Disallow: /images/
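As a sanity check, Python’s standard-library robots.txt parser confirms how those three groups interact (example.com is hypothetical, and the gotcha at the end is exactly the kind of unintended result that makes these files treacherous):

```python
from urllib.robotparser import RobotFileParser

# The robots.txt example from above, for a hypothetical example.com
ROBOTS_TXT = """\
User-agent: *
Disallow: /private-directory/

User-agent: Googlebot
Allow: /

User-agent: Bingbot
Disallow: /images/
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# The generic (*) group blocks unknown crawlers from /private-directory/
print(rp.can_fetch("SomeOtherBot", "https://example.com/private-directory/x.html"))  # False
# Googlebot's own group overrides everything with a blanket Allow
print(rp.can_fetch("Googlebot", "https://example.com/private-directory/x.html"))     # True
# Bingbot is blocked only from /images/ ...
print(rp.can_fetch("Bingbot", "https://example.com/images/pic.jpg"))                 # False
# ...and here is the gotcha: because Bingbot has its own group, the * group's
# /private-directory/ rule does NOT apply to it at all
print(rp.can_fetch("Bingbot", "https://example.com/private-directory/x.html"))       # True
```

In other words, groups are exclusive: once a crawler matches its own user-agent group, the generic rules simply stop applying to it—one more way a site owner can believe something is blocked when it isn’t.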

Theoretically, existing robots.txt files could be configured to block AI crawlers entirely by designating known crawlers as user-agents, such as OpenAI’s GPTBot.  However, there are many known ways robots.txt can fail to block web crawlers or AI data scrapers, including:
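A robots.txt aimed at the known AI scrapers would look something like the sketch below, using user-agent tokens the platforms themselves have published (GPTBot for OpenAI, Google-Extended for Google’s AI training, CCBot for Common Crawl)—remembering that compliant crawlers honor these rules only voluntarily:

```text
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /
```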

– Malicious or non-compliant crawlers may simply ignore the rules in a robots.txt file and continue to scrape a website despite the directives.

– Incorrect syntax in a robots.txt file can lead to unintended results, such as not blocking the intended paths or blocking too many paths.

– Server configuration issues can prevent the robots.txt file from being correctly read or accessed by crawlers.

– Content generated dynamically through JavaScript or AJAX requests might not be blocked if robots.txt is not properly configured to account for these resources.

– Unlisted crawlers or scrapers not known to the site owner may not adhere to the intended rules.

– Crawlers using cached versions of a site may bypass rules in a robots.txt file, particularly rules updated after the cache was created.

– Rules scoped to the wrong subdomains or subdirectories can leave intended subdomains or subdirectories unblocked.

– And robots.txt has no way to express entire lists of songs, recordings, or audiovisual works.

While robots.txt and similar techniques theoretically are useful tools for managing crawler access, they are not foolproof. Implementing additional security measures, such as IP blocking, CAPTCHA, rate limiting, and monitoring server logs, can help strengthen a site’s defenses against unwanted scraping.  However, like the other tools that were supposed to level the playing field for artists against Big Tech, none of these tools are free, all of them require more programming knowledge than can reasonably be expected, all require maintenance, and at scale, all of them can be gamed or will eventually fail. 
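Of those extra measures, rate limiting is at least cheap to sketch. Here is a minimal token-bucket limiter keyed by client IP—purely illustrative; the names, numbers, and the idea of doing this in application code rather than at a reverse proxy are my assumptions, not anyone’s actual implementation:

```python
import time

class TokenBucket:
    """Minimal token bucket: refills at `rate` tokens/second, up to `capacity`."""
    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# One bucket per client IP; a scraper that hammers the site runs dry fast
buckets = {}

def allow_request(ip: str, rate: float = 1.0, capacity: int = 10) -> bool:
    bucket = buckets.setdefault(ip, TokenBucket(rate, capacity))
    return bucket.allow()
```

In practice this logic usually lives in a CDN or reverse-proxy rule rather than in Python, but the principle is the same: a client requesting faster than the refill rate runs out of tokens and gets turned away—which, as noted, still costs money and maintenance and can be gamed at scale.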

 It must be said that all of the headaches and expense of keeping Big Tech out is because Big Tech so desperately wants to get in.

The difference between blocking a search engine crawler and an AI data scraper (which could each be operated by the same company in the case of Meta, Bing or Google) is that failing to block a search engine crawler is inconvenient for artists, but failing to block an AI data scraper is catastrophic for artists.

Even if the crawlers worked seamlessly, the scheme depends on every site admin remembering to update the robots.txt file whenever a folder changes names—and that is asking a lot of every website on the Internet.

It must also be said that using machine-readable blocking tools may result in pages being downranked, particularly by AI platforms closely associated with search engines.  Robots.txt blocking already has problems with crawlers and downranking for several reasons. A robots.txt file itself doesn’t directly cause pages to be downranked in search results; however, it can indirectly affect rankings by limiting search engine crawlers’ access to certain parts of a website. Here’s how:

Restricted Crawling: If you block crawlers from accessing important pages using robots.txt, those pages won’t be indexed. Without indexing, they won’t appear in search results, let alone rank.

Crawl Budget Mismanagement: For large websites, search engines allocate a “crawl budget”—the number of pages they crawl in a given time. If robots.txt doesn’t guide crawlers efficiently, that may randomly leave pages unindexed.

No Content Evaluation: If a page is blocked by robots.txt but still linked elsewhere, search engines might index its URL without evaluating its content. This can result in poor rankings since the page’s relevance and quality can’t be assessed.

The text and data mining (TDM) safe harbor is too valuable and potentially too dangerous to leave to machines.

TikTok Extended

Imagine if the original Napster had received TikTok-level attention from POTUS.  Forget I said that.  The ongoing divestment of TikTok from its parent company ByteDance has reached yet another critical point with yet another bandaid.  Congress originally set a January 19, 2025 deadline for ByteDance to either sell TikTok’s U.S. operations or face a potential ban in the United States under the Protecting Americans from Foreign Adversary Controlled Applications Act, or “PAFACA” (I guess “covfefe” was taken). The US Supreme Court upheld that law in TikTok v. Garland.

When January 20 came around, President Trump gave ByteDance an extension to April 5, 2025 by executive order. When that deadline came, President Trump extended the extension by yet another executive order, providing additional time for ByteDance to finalize a divestiture deal. The extended deadline now pushes the timeline for divestment negotiations to July 1, 2025.

This new extension is designed to allow for further negotiation time among ByteDance, potential buyers, and regulatory authorities, while addressing the ongoing trade issues and concerns raised by both the U.S. and Chinese governments. 

It’s getting mushy, but I’ll take a stab at the status of the divestment process. I might miss someone as they’re all getting into the act.

I would point out that all these bids anticipate a major overhaul in how TikTok operates, which—just sayin’—means it likely would no longer be TikTok as its hundreds of millions of users now know it.  I went down this path with Napster, and I would just say that it’s a very big deal to change a platform that has inherent legal issues into one that satisfies a standard that does not yet exist.  I always used the rule of thumb that changing old Napster to new Napster would result in an initial loss of 90% of the users (and neither of those had anything to do with the service that eventually launched under the “Napster” brand, which bore no resemblance to original Napster or its DNA). Just sayin’.

Offers and Terms

Multiple parties have expressed interest in acquiring TikTok’s U.S. operations, but the terms of these offers remain fluid due to ongoing negotiations and the complexity of the deal. Key bidders include:

ByteDance Investors: According to Reuters, the plan calls for “the biggest non-Chinese investors in parent company ByteDance to up their stakes and acquire the short video app’s U.S. operations.” This would involve Susquehanna International Group, General Atlantic, and KKR. ByteDance looks like it would retain a minority ownership position of less than 20%, which I would bet probably means 19.99999999% or something like that. Reuters describes this as the front-runner bid, and I tend to buy into that characterization. From a cap table point of view, this would be the cleanest with the least hocus pocus. However, the Reuters story is based on anonymous sources and doesn’t say how the deal would address the data privacy issues (other than that Oracle would continue to hold the data) or the algorithm. Remember, Oracle has been holding the data, and that evidently has been unsatisfactory to Congress, which is how we got here. Nothing against Oracle, but I suspect this significant wrinkle will have to get fleshed out.

Lawsuit by Bidder Company Led by Former Myspace Executive: In a lawsuit in Florida federal court by TikTok Global LLC filed April 3, TikTok Global accuses ByteDance, TikTok Inc., and founder Yiming Zhang of sabotaging a $33 billion U.S. acquisition deal by engaging in fraud, antitrust violations, and breach of contract. The complaint alleges ByteDance misled regulators, misappropriated the “TikTok Global” brand, and conspired to maintain control of TikTok in violation of U.S. government directives. The suit brings six causes of action, including tortious interference and unjust enrichment, underscoring a complex clash over corporate deception and national security compliance.

Oracle and Walmart: This proposal, which nearly closed in 2024 (I guess), involved a sale of TikTok’s U.S. business to a consortium of U.S.-based companies, with Oracle managing data security and infrastructure. ByteDance was to retain a minority stake in the new entity. However, this deal has not closed—who knows why, aside from competition, those trade tariffs, and the need for approval from both U.S. and Chinese regulators, who have to be just so chummy right at the moment.

AppLovin: A preliminary bid has been submitted by AppLovin, an adtech company, to acquire TikTok’s U.S. operations. It appears that AppLovin’s offer includes managing TikTok’s user base and revenue model, with a focus on ad-driven strategies, although further negotiations are still required.  According to Pitchbook, “AppLovin is a vertically integrated advertising technology company that acts as a demand-side platform for advertisers, a supply-side platform for publishers, and an exchange facilitating transactions between the two. About 80% of AppLovin’s revenue comes from the DSP, AppDiscovery, while the remainder comes from the SSP, Max, and gaming studios, which develop mobile games. AppLovin announced in February 2025 its plans to divest from the lower-margin gaming studios to focus exclusively on the ad tech platform.”  It’s a public company trading as APP and seems to be worth about $100 billion.   Call me crazy, but I’m a bit suspicious of a public company with “lovin” in its name.  A bit groovy for the complexity of this negotiation, but you watch, they’ll get the deal.

Amazon and Blackstone: Amazon and Blackstone have also expressed interest in acquiring TikTok or a stake in a TikTok spinoff in Blackstone’s case. These offers would likely involve ByteDance retaining a minority interest in TikTok’s U.S. operations, though specifics of the terms remain unclear.  Remember, Blackstone owns HFA through SESAC.  So there’s that.

Frank McCourt/Project Liberty:  The “People’s Bid” for TikTok is spearheaded by Project Liberty, founded by Frank McCourt. This initiative aims to acquire TikTok and change its platform to prioritize user privacy, data control, and digital empowerment. The consortium includes notable figures such as Tim Berners-Lee, Kevin O’Leary, and Jonathan Haidt, alongside technologists and academics like Lawrence Lessig.  This one gives me the creeps as readers can imagine; anything with Lessig in it is DOA for me.

The bid proposes migrating TikTok to a new open-source protocol to address concerns raised by Congress while preserving its creative essence. As of now, the consortium has raised approximately $20 billion to support this ambitious vision.  Again, these people act like you can just put hundreds of millions of users on hold while this changeover happens.  I don’t think so, but I’m not as smart as these city fellers.

PRC’s Reaction

The People’s Republic of China (PRC) has strongly opposed the forced sale of TikTok’s U.S. operations, so there’s that. PRC officials argue that such a divestment would set a dangerous precedent, potentially harming Chinese tech companies’ international expansion. And they’re not wrong about that—it’s kind of the idea. Furthermore, the PRC’s position seems to be that any divestment agreement that involves the transfer of TikTok’s algorithm to a foreign entity requires Chinese regulatory approval.  Which I suspect would be DOA.

They didn’t just make that up—the PRC, through the Cyberspace Administration of China (CAC), owns a “golden share” in ByteDance’s main Chinese subsidiary. This 1% stake, acquired in 2021, grants the PRC significant influence over ByteDance, including the ability to influence content and business strategies.

Unsurprisingly, ByteDance must ensure that the PRC government (i.e., the Chinese Communist Party) maintains control over TikTok’s core algorithm, a key asset for the company. PRC authorities have been clear that they will not approve any sale that results in ByteDance losing full control over TikTok’s proprietary technology, complicating the negotiations with prospective buyers.  

So a pressing question is whether TikTok without the algorithm is really TikTok from the user’s experience.  And then there’s that pesky issue of valuation—is TikTok with an unknown algo worth as much as TikTok with the proven, albeit awful, current algo?

Algorithm Lease Proposal

In an attempt to address both U.S. security concerns and the PRC’s objections, a novel solution has been proposed: leasing TikTok’s algorithm. Under this arrangement, ByteDance would retain ownership of the algorithm, while a U.S.-based company, most likely Oracle, would manage the operational side of TikTok’s U.S. business.

ByteDance would maintain control over its technology, while allowing a U.S. entity to oversee the platform’s operation within the U.S. The U.S. company would be responsible for ensuring compliance with U.S. data privacy laws and national security regulations, while ByteDance would continue to control its proprietary algorithm and intellectual property.

Under this leasing proposal, Oracle would be in charge of managing TikTok’s data security and ensuring that sensitive user data is handled according to U.S. regulations. This arrangement would allow ByteDance to retain its technological edge while addressing American security concerns regarding data privacy.

The primary concern is safeguarding user data rather than the algorithm itself. The proposal aims to address these concerns while avoiding the need for China’s approval of a full sale.

Now remember, the reason we are in this situation at all is that Chinese law requires TikTok to turn over on demand any data it gathers on TikTok users which I discussed on MTP back in 2020. The “National Intelligence Law” even requires TikTok to allow the PRC’s State Security police to take over the operation of TikTok for intelligence gathering purposes on any aspect of the users’ lives.  And if you wonder what that really means to the CCP, I have a name for you: Jimmy Lai. You could ask that Hong Konger, but he’s in prison.

This leasing proposal has sparked debate because it doesn’t seem to truly remove ByteDance’s influence over TikTok (and therefore the PRC’s influence). It’s being compared to “Project Texas 2.0,” a previous plan to secure TikTok’s data and operations.  I’m not sure how the leasing proposal solves this problem. Or said another way, if the idea is to get the PRC’s hands off of Americans’ user data, what the hell are we doing?

Next Steps

As the revised deadline approaches, I’d expect a few steps, each of which has its own steps within steps:

Finalization of a Deal: This is the biggest one–easy to say, nearly impossible to accomplish.  ByteDance will likely continue negotiating with interested parties while they snarf down user data, working to secure an agreement that satisfies both U.S. regulatory requirements and Chinese legal constraints. The latest extension provides runway for both sides to close key issues that are closable, particularly concerning the algorithm lease and ByteDance’s continued role in the business.

Operational Contingency:  I suppose at some point the buyer is going to be asked if whatever their proposal is will actually function and whether the fans will actually stick around to justify whatever the valuation is.  One of the problems with rich people getting ego involved in a fight over something they think is valuable is that they project all kinds of ideas on it that show how smart they are, only to find that once they get the thing they can’t actually do what they thought they would do.  By the time they figure out that it doesn’t work, they’ve moved on to the next episode in Short Attention Span Theater and it’s called Myspace.

China’s Approval: ByteDance will need to secure approval from PRC regulatory authorities for any deal involving the algorithm lease or a full divestment. So why introduce the complexity of the algo lease when you have to go through that step anyway?  Without PRC approval, any sale or lease of TikTok’s technology is likely dead, or at best could face significant legal and diplomatic hurdles.

Legal Action: If an agreement is not reached by the new deadline of July 1, 2025, further legal action could be pursued, either by ByteDance to contest the divestment order or by the U.S. government to enforce a ban on TikTok’s operations.  I doubt that President Trump is going to keep extending the deadline if there’s no significant progress.

If I were a betting man, I’d bet on the whole thing collapsing into a shut down and litigation, but watch this space.