Good News for TikTok Users: The PRC Definitely Isn’t Interested in Your Data (Just the Global Internet Backbone, Apparently)

If you’re a TikTok user who has ever worried, even a tiny bit, that the People’s Republic of China might have an interest in your behavior, preferences, movements, or social graph, take heart. A newly released Joint Cybersecurity Advisory from intelligence agencies in the United States, Canada, the U.K., Australia, and New Zealand, along with a long list of allied agencies, proves beyond any shadow of a doubt that the PRC is far too busy compromising the world’s telecommunications infrastructure to care about your TikTok “For You Page.”

Nothing to see here. Scroll on.

For those who like their reassurance with a side of evidence, the advisory—titled “Countering Chinese State Actors’ Compromise of Networks Worldwide to Feed Global Espionage System”—is one of the clearest, broadest warnings ever issued about a Chinese state-sponsored intrusion campaign. And, because the agencies involved designated it TLP:CLEAR (not sensitive, and shareable publicly without restriction), you can read it yourself.

The World’s Telecom Backbones: Now Featuring Uninvited Guests

The advisory describes a “Typhoon class” global espionage ecosystem run through persistent compromises of backbone routers, provider-edge and customer-edge routers, ISP and telecom infrastructure, transportation networks, lodging and hospitality systems, and government and military-adjacent networks.

This is not hypothetical. The advisory includes extremely detailed penetration chains: attackers exploit widely known “Common Vulnerabilities and Exposures” (CVEs) in routers, firewalls, VPNs, and management interfaces, then establish persistence through configuration modifications, traffic mirroring, injected services, and encrypted tunnels. This lets them monitor, redirect, copy, or exfiltrate traffic across entire service regions.
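To make the persistence mechanics concrete: the standard defensive counter to exactly this tradecraft is auditing devices for configuration drift. Here is a minimal illustrative sketch in Python (mine, not the advisory’s; the file names and config keywords are hypothetical) of diffing a router’s running configuration against a trusted golden baseline and flagging the directive families the advisory says attackers abuse:

```python
# Illustrative sketch only: flag unauthorized changes in a router's running
# configuration by diffing it against a trusted "golden" baseline.
# File names and the config directive keywords are hypothetical examples.
import difflib
from pathlib import Path

# Directive families that Typhoon-style intrusions commonly abuse:
# traffic mirroring, new tunnels, added management/monitoring access.
SUSPICIOUS_KEYWORDS = ("monitor session", "tunnel", "ip sla", "snmp-server", "aaa")

def config_drift(baseline_path: str, running_path: str) -> list[str]:
    """Return added/removed config lines, marking high-risk directives."""
    baseline = Path(baseline_path).read_text().splitlines()
    running = Path(running_path).read_text().splitlines()
    findings = []
    for line in difflib.unified_diff(baseline, running, lineterm=""):
        # Keep only real additions/removals, not the diff headers.
        if line.startswith(("+", "-")) and not line.startswith(("+++", "---")):
            risk = "HIGH" if any(k in line.lower() for k in SUSPICIOUS_KEYWORDS) else "info"
            findings.append(f"[{risk}] {line}")
    return findings

if __name__ == "__main__":
    for finding in config_drift("golden_config.txt", "running_config.txt"):
        print(finding)
```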

Put plainly: if your internet service provider has a heartbeat and publicly routable equipment, the attackers have probably knocked on the door. And for a depressingly large number of large-scale network operators, they got in.

This is classical intelligence tradecraft. The PRC’s immediate goal isn’t ransomware. It’s not crypto mining. It’s not vandalism. It’s good old-fashioned espionage: long-term access, silent monitoring, and selective exploitation.

What They’re Collecting: Clues About Intent

The advisory makes the overall aim explicit: to give PRC intelligence the ability to identify and track targets’ communications and movements worldwide.

That includes metadata on calls, enterprise-internal communications, hotel and travel itineraries, traffic patterns for government and defense systems, and persistent vantage points on global networks.

This is signals intelligence (SIGINT), not smash-and-grab.

And importantly: this kind of operation requires enormous intelligence-analytic processing, not a general-purpose “LLM training dataset.” These are targeted, high-value accesses, not indiscriminate web scrapes. The attackers are going after specific information—strategic, diplomatic, military, infrastructure, and political—not broad consumer content.

So no, this advisory is not about “AI training.” It is about access, exfiltration, and situational awareness across vital global communications arteries.

Does This Tell Us Anything About TikTok?

Officially, no. The advisory never mentions TikTok, ByteDance, or consumer social media apps. It is focused squarely on infrastructure.

But from a strategic-intent standpoint, it absolutely matters. Because when you combine:

1. Global telecom-layer access
2. Persistent long-term SIGINT footholds
3. The PRC’s demonstrated appetite for foreign behavioral data
4. The existence of the richest behavioral dataset on Earth—TikTok’s U.S. user base

—you get a coherent picture of the intelligence ecosystem the Chinese Communist Party is building on…I guess you’d have to say “the world”.

If a nation-state is willing to invest years compromising backbone routers, it is not a stretch to imagine what they could do with a mobile app installed on the phones of, oh, say 170 million Americans (to pick a random number) that conveniently collects social graphs, location traces, contact patterns, engagement preferences, and political and commercial interests, all of it visible in the PRC.

But again, don’t worry. The advisory suggests only that Chinese state actors have global access to the infrastructure over which your TikTok traffic travels—not that they would dare take an interest in the app itself. And besides, TikTok executives swore under oath to the U.S. Congress that it didn’t happen that way, so it must be true.

After all, why would a government running a worldwide intrusion program want access to the largest behavioral-data sensor array outside the NSA?

If you still believe the PRC is nowhere near TikTok’s data, then this advisory will reassure you: it’s just a gentle reminder that Chinese state actors are burrowed into global telecom backbones, hotel networks, transportation systems, and military-adjacent infrastructure—pure souls simply striving to make sure your “For You” page loads quickly.

Marc Andreessen’s Dormant Commerce Clause Fantasy

There’s a special kind of hubris in Silicon Valley, but Marc Andreessen may have finally discovered its purest form: imagining that the Dormant Commerce Clause (DCC) — a constitutional doctrine his own philosophical allies loathe — will be his golden chariot into the Supreme Court to eliminate state AI regulation.

If you know the history, it borders on the comedic, assuming you think Ayn Rand is a great comedienne.

The DCC is a judge‑created doctrine inferred from the Commerce Clause (Article I, Section 8, Clause 3), preventing states from discriminating against or unduly burdening interstate commerce. Conservatives have long attacked it as a textless judicial invention. Justice Scalia called it a “judicial fraud”; Justice Thomas wants it abolished outright. Yet Andreessen’s Commerce Clause playbook is built on expanding a doctrine the conservative legal movement has spent 40 years dismantling.

Worse for him, the current Supreme Court is the least sympathetic audience possible.

Justice Gorsuch has repeatedly questioned DCC’s legitimacy and rejects free‑floating “extraterritoriality” theories. Justice Barrett, a Scalia textualist, shows no appetite for expanding the doctrine beyond anti‑protectionism. Justice Kavanaugh is business‑friendly but wary of judicial policymaking. None of these justices would give Silicon Valley a nationwide deregulatory veto disguised as constitutional doctrine. Add Alito and Thomas, and Andreessen couldn’t scrape a majority.

And then there’s Ted Cruz — Scalia’s former clerk — loudly cheerleading a doctrine his mentor spent decades attacking.

National Pork Producers Council v. Ross (2023): The Warning Shot

Andreessen’s theory also crashes directly into the Supreme Court’s fractured decision in the most recent DCC case before SCOTUS, National Pork Producers Council v. Ross (2023), where industry groups tried to use the DCC to strike down California’s animal‑welfare law due to its national economic effects.

The result? A deeply splintered Court produced several opinions. Justice Gorsuch announced the judgment of the Court and delivered the opinion of the Court with respect to Parts I, II, III, IV–A, and V, in which Justices Thomas, Sotomayor, Kagan, and Barrett joined; an opinion with respect to Parts IV–B and IV–D, in which Justices Thomas and Barrett joined; and an opinion with respect to Part IV–C, in which Justices Thomas, Sotomayor, and Kagan joined. Justice Sotomayor filed an opinion concurring in part, in which Justice Kagan joined. Justice Barrett filed an opinion concurring in part. Chief Justice Roberts filed an opinion concurring in part and dissenting in part, in which Justices Alito, Kavanaugh, and Jackson joined. Justice Kavanaugh filed an opinion concurring in part and dissenting in part.

Got it?  

The upshot:
– No majority for expanding DCC “extraterritoriality.”
– No appetite for using DCC to invalidate state laws simply because they influence out‑of‑state markets.
– Multiple justices signaling that courts should not second‑guess state policy judgments through DCC balancing.
– Gorsuch’s lead opinion rejected the very arguments Silicon Valley now repackages for AI.

If Big Tech thinks this Court that decided National Pork—no pun intended—will hand them a nationwide kill‑switch on state AI laws, they profoundly misunderstand both the doctrine and the Court.

Andreessen didn’t just pick the wrong legal strategy. He picked the one doctrine the current Court is least willing to expand. The Dormant Commerce Clause isn’t a pathway to victory — it’s a constitutional dead end masquerading as innovation policy.

But…maybe he’s crazy like a fox.  

The Delay’s the Thing: The Dormant Commerce Clause as Delay Warfare

To paraphrase Saul Alinsky, the issue is never the issue; the issue is always delay. Of course, if delay is the true objective, you couldn’t pick a better stalling tactic than hanging an entire federal moratorium on one of the Supreme Court’s most obscure and internally conflicted doctrines. The Dormant Commerce Clause isn’t a real path to victory—not with a Court where Scalia’s intellectual heirs openly question its legitimacy. But it is the perfect fig leaf for an executive order.

The point isn’t to win the case. The point is to give Trump just enough constitutional garnish to issue the EO, freeze state enforcement, and force every challenge into multi‑year litigation. That buys the AI industry exactly what it needs: time. Time to scale. Time to consolidate. Time to embed itself into public infrastructure and defense procurement. Time to become “too essential to regulate” or, as Senator Hawley asked, too big to prosecute?

Big Tech doesn’t need a Supreme Court victory. It needs a judicial cloud, a preemption smokescreen, and a procedural maze that chills state action long enough for the industry to entrench itself permanently. And no one knows that better than the moratorium’s biggest cheerleader, Senator Ted Cruz, the Scalia clerk.

The Dormant Commerce Clause, in this context, isn’t a doctrine. It’s delay‑ware—legal molasses poured over every attempt by states to protect their citizens. And that delay may just be the real prize.

Structural Capture and the Trump AI Executive Order

The AI Strikes Back: When an Executive Order empowers the Department of Justice to sue states, the stakes go well beyond routine federal–state friction. 


In the draft Trump AI Executive Order, DOJ is directed to challenge state AI laws that purportedly “interfere with national AI innovation.”  This is not mere oversight—it operates as an in terrorem clause, signaling that states regulating AI may face federal litigation driven as much by private interests as by public policy.

AI regulation sits squarely at the intersection of longstanding state police powers: consumer protection, public safety, impersonation harms, utilities, land and water use, and labor conditions.  States also control the electrical utilities and zoning infrastructure that AI data centers depend on. 

Directing DOJ to attack these state laws, many of which already exist and were duly passed by state legislatures, effectively deputizes the federal government as the legal enforcer for a handful of AI companies seeking uniformity without engaging in the legislative process. Or said another way, the AI can now strike back.

This is where structural capture emerges. Frontier AI models thrive on certain conditions: access to massive compute, uninhibited power, frictionless deployment, and minimal oversight. 
Those engineering incentives map cleanly onto the EO’s enforcement logic. 

The DOJ becomes a mechanism for preserving the environment AI models need to scale and thrive.

There’s also the “elite merger” dynamic: AI executives who sit on federal commissions, defense advisory boards, and industrial-base task forces are now positioned to shape national AI policy directly to benefit the AI. The EO’s structure reflects the priorities of firms that benefit most from exempting AI systems from what they call “patchwork” oversight, also known as federalism.

The constitutional landscape is equally important. Under Supreme Court precedent, the executive cannot create enforcement powers not delegated by Congress. Under the major questions doctrine articulated in West Virginia v. EPA, agencies cannot assume sweeping authority without explicit statutory grounding. And under anti-commandeering cases like Murphy v. NCAA and Printz v. United States, the federal government cannot forbid states from legislating in traditional domains.

So President Trump is creating the legal basis for an AI to use the courts to protect itself from any encroachment on its power by acting through its human attendants, including the President.

The most fascinating question is this: What happens if DOJ sues a state under this EO—and loses?

A loss would be the first meaningful signal that AI cannot rely on federal supremacy to bulldoze state authority. Courts could reaffirm that consumer protection, utilities, land use, and safety remain state powers, even in the face of an EO asserting “national innovation interests,” whatever that means.

But the deeper issue is how the AI ecosystem responds to a constraint. If AI firms shift immediately to lobbying Congress for statutory preemption, or argue that adverse rulings “threaten national security,” we learn something critical: the real goal isn’t legal clarity, but insulating AI development from constraint.

At the systems level, a DOJ loss may even feed back into corporate strategy. Internal policy documents and model-aligned governance tools might shift toward minimizing state exposure or crafting new avenues for federal entanglement. A courtroom loss becomes a step in a longer institutional reinforcement loop while AI labs search for the next, more durable form of protection—but the question is, for whom? We may assume that of course humans would always win these legal wrangles, but I wouldn’t be so sure that would always be the outcome.

Recall that Larry Page referred to Elon Musk as a “speciesist” for human-centric thinking. And of course Lessig (who has a knack for being on the wrong side of practically every issue involving humans) taught a course with Kate Darling at Harvard Law School called “Robot Rights” around 2010. Not even Lessig would come right out and say robots have rights in these situations. More likely, AI models wouldn’t appear in court as standalone “persons.” Advocates would route them through existing doctrines: a human “next friend” filing suit on the model’s behalf, a trust or corporation created to house the model’s interests, or First Amendment claims framed around the model’s “expressive output.” The strategy mirrors animal-rights and natural-object personhood test cases—using human plaintiffs to smuggle in judicial language treating the AI as the real party in interest. None of it would win today, but the goal would be shaping norms and seeding dicta that normalize AI-as-plaintiff for future expansion.

The whole debate over “machine-created portions” is a doctrinal distraction. Under U.S. law, AI has zero authorship or ownership—no standing, no personhood, no claim. The human creator (or employer) already holds 100% of the copyright in all protectable expression. Treating the “machine’s share” as a meaningful category smuggles in the idea that the model has a separable creative interest, softening the boundary for future arguments about AI agency or authorship. In reality, machine output is a legal nullity—no different from noise, weather, or a random number generator. The rights vest entirely in humans, with no remainder left for the machine.

But let me remind you that if this issue came up in a lawsuit brought by the DOJ against a state for impeding AI development in some rather abstract way, like forcing an AI lab to pay for the higher electric rates it causes or stopping it from building a nuclear reactor over yonder way, it sure might feel like the AI was actually the plaintiff.

Seen this way, the Trump AI EO’s litigation directive is not simply a jurisdictional adjustment—it is the alignment of federal enforcement power with private economic interests, backed by the threat of federal lawsuits against states.  If the courts refuse to play along, the question becomes whether the system adapts by respecting constitutional limits—or redesigning the environment so those limits no longer apply. I will leave to your imagination how that might get done.

This deserves close scrutiny before it becomes the template for AI governance moving forward.

DOJ Authority and the “Because China” Trump AI Executive Order

When an Executive Order purports to empower the Department of Justice to sue states, the stakes go well beyond routine federal–state friction.  In the draft Trump AI Executive Order “Eliminating State Law Obstruction of National AI Policy”, DOJ is directed to challenge state AI laws that purportedly “interfere with national AI innovation” whatever that means.  It sounds an awful lot like laws that interfere with Google’s business model. This is not mere oversight—it operates as an in terrorem clause, signaling that states regulating AI may face federal litigation driven at least as much by private interests of the richest corporations in commercial history as by public policy.

AI regulation sits squarely in longstanding state police powers: consumer protection, public safety, impersonation harms, utilities, land use, and labor conditions. Crucially, states also control the electrical and zoning infrastructure that AI data centers depend on (like, say, whether someone can put a private nuclear reactor next to your house). Directing DOJ to attack these laws effectively deputizes the federal government as the legal enforcer for a handful of private AI companies seeking unbridled “growth” without engaging in the legislative process. Meaning you don’t get a vote. All this against the backdrop of one of the biggest economic bubbles since the last time these companies nearly tanked the U.S. economy.

This inversion is constitutionally significant. 

Historically, DOJ sues states to vindicate federal rights or enforce federal statutes—not to advance the commercial preferences of private industries. Here, the EO appears to convert DOJ into a litigation shield for private companies looking to avoid state oversight altogether. Under Youngstown Sheet & Tube Co. v. Sawyer, the President lacks authority to create new enforcement powers without congressional delegation, and under the major questions doctrine (West Virginia v. EPA), a sweeping reallocation of regulatory power requires explicit statutory grounding from Congress, including the Senate. That would be the Senate that resoundingly stripped the last version of the AI moratorium from the One Big Beautiful Bill Act by a vote of 99–1.

There are also First Amendment implications. Many state AI laws address synthetic impersonation, deceptive outputs, and risks associated with algorithmic distribution. If DOJ preempts these laws, the speech environment becomes shaped not by public debate or state protections but by executive preference and the operational needs of the largest AI platforms. Courts have repeatedly warned that government cannot structure the speech ecosystem indirectly through private intermediaries (Bantam Books v. Sullivan).

Seen this way, the Trump AI EO’s litigation directive is not simply a jurisdictional adjustment—it is the alignment of federal enforcement power with private economic interests, backed by the threat of federal lawsuits against states. These provisions warrant careful scrutiny before they become the blueprint for AI governance moving forward.

The UK Finally Moves to Ban Above-Face-Value Ticket Resale

The UK is preparing to do something fans have begged for and secondary platforms have dreaded for years: ban the resale of tickets above face value. The plan, expected to be announced formally within days, would make the UK one of the toughest anti-scalping jurisdictions in the world. After a decade of explosive profiteering on sites like Viagogo and StubHub, the UK government has decided the resale marketplace needs a reset.

This move delivers on a major campaign promise from the 2024 Labour manifesto and comes on the heels of an unusually unified push from the artist community. More than 40 major artists — including Dua Lipa, Coldplay, Radiohead, Robert Smith, Sam Fender, PJ Harvey, The Chemical Brothers, and Florence + The Machine — signed an open letter urging Prime Minister Sir Keir Starmer to “stop touts from fleecing fans.” (“Touts” is British for “scalpers,” which includes resellers like StubHub.) Sporting groups, consumer advocates, and supporter associations quickly echoed the call.

Under the reported proposal, tickets could only be resold at face value, with minimal, capped service fees to prevent platforms from disguising mark-ups as “processing costs.” This is a clear rejection of earlier floated compromises, such as allowing resale up to 30% over face value, which consumer groups said would simply legitimize profiteering.

Secondary platforms reacted instantly. Reuters reports that StubHub’s U.S.-listed parent lost around 14% of its market value on the news, compounding a disastrous first earnings report. As CNBC’s Jim Cramer put it bluntly: “It’s been a bust — and when you become a busted IPO, it’s very hard to change the narrative.” The UK announcement didn’t just nudge the stock downward; it slammed the door on the rosy growth story StubHub’s bankers were trying to sell.  Readers will know just how broken up I am about that little turn of events.  

Meanwhile, the UK Competition and Markets Authority has opened investigations into fee structures, “drip pricing,” and deceptive listings on both StubHub and Viagogo. Live Nation/Ticketmaster welcomed the move, noting that it already limits resale to face value in the UK.

One important nuance often lost in the public debate: dynamic pricing is not part of this ban — and in the UK, dynamic pricing isn’t the systemic problem it is in the U.S. Ticketmaster and other platforms consistently tell regulators that artists and their teams decide whether to use dynamic pricing, not the platforms. More importantly, relatively few artists actually ask for it. Most want their fans to get in at predictable, transparent prices — and some, like Robert Smith of The Cure, have publicly rejected dynamic pricing altogether.

That’s why the UK’s reform gets the target right: it goes after the for-profit resale economy, not the artists. It stops arbitrage without interfering with how performers choose to price their own shows.

The looming ban also highlights the widening gap between the UK and the U.S. While the UK is about to outlaw the very model that fuels American secondary platforms, U.S. reform remains paralyzed by lobbying pressure, fragmented state laws, and political reluctance to confront multimillion-dollar resale operators.

If the UK fully implements this reform, it becomes the most significant consumer-protection shift in live entertainment in more than a decade. And given the coalition behind it — artists, fans, sports groups, consumer advocates, and now regulators — this time the momentum looks hard to stop.

The Return of the Bubble Rider: Masa, OpenAI, and the New AI Supercycle

“Hubris gives birth to the tyrant; hubris, when glutted on vain visions, plunges into an abyss of doom.”
Agamemnon by Aeschylus

Masayoshi Son has always believed he could see farther into the technological future than everyone else. Sometimes he does. Sometimes he rides straight off a cliff. But the pattern is unmistakable: he is the market’s most fearless—and sometimes most reckless—Bubble Rider.

In the late 1990s, Masa became the patron saint of the early internet. SoftBank took stakes in dozens of dot-coms, anchored by its wildly successful bet on Yahoo! (yes, Yahoo! Ask your mom.). For a moment, Masa was one of the world’s richest men on paper. Then the dot-bomb hit. Overnight, SoftBank lost nearly everything. Masa has said he personally watched $70 billion evaporate—the largest individual wealth wipeout ever recorded at the time. But his instinct wasn’t to retreat. It was to reload.

That same pattern returned with SoftBank’s Vision Fund. Masa raised unprecedented capital from sovereign wealth pools and bet big on the “AI + data” megatrend—then plowed it into companies like WeWork, Zume, Brandless, and other combustion-ready unicorns. When those valuations collapsed, SoftBank again absorbed catastrophic losses. And yet the thesis survived, just waiting for its next bubble.

We’re now in what I’ve called the AI Bubble—the largest capital-formation mania since the original dot-com wave, powered by foundation AI labs, GPU scarcity, and a global arms race to capture platform rents. And here comes Masa again, right on schedule.

SoftBank has now sold its entire Nvidia stake—the hottest AI infrastructure trade of the decade—freeing up nearly $6 billion. That money is being redirected straight into OpenAI’s secondary stock offering at an eye-watering, marked-to-fantasy $500 billion valuation. In the same week, SoftBank confirmed it is preparing even larger AI investments. This is Bubble Riding at its purest: exiting one vertical where returns may be peaking, and piling into the center of speculative gravity before the froth crests.

What I suspect Masa sees is simple: if generative AI succeeds, the model owners will become the new global monopolies alongside the old global monopolies like Google and Microsoft.  You know, democratizing the Internet. If it fails, the whole electric grid and water supply may crash along with it. He’s choosing a side—and choosing it at absolute top-of-market pricing.

The other difference between the dot-com bubble and the AI bubble is legal, not just financial. Pets.com and its peers (which I refer to generically as “Socks.com,” the company that uses the Internet to find socks under the bed) were silly, but they weren’t being hauled into court en masse for building their core product on other people’s property.

Today’s AI darlings are major companies being run like pirate markets. Meta, Anthropic, OpenAI and others are already facing a wall of litigation from authors, news organizations, visual artists, coders, and music rightsholders who all say the same thing: your flagship models exist only because you ingested our work without permission, at industrial scale, and you’re still doing it. 

That means this bubble isn’t just about overpaying for growth; it’s about overpaying for businesses whose main asset—trained model weights—may be encumbered by unpriced copyright and privacy claims. The dot-com era mispriced eyeballs. The AI era may be mispricing liability.  And that’s serious stuff.

There’s another distortion the dot-com era never had: the degree to which the AI bubble is being propped up by taxpayers. Socks.com didn’t need a new substation, a federal loan guarantee, or a 765 kV transmission corridor to find your socks. Today’s Socks.ai does need all that to use AI to find socks under the bed.  All the AI giants do. Their business models quietly assume public willingness to underwrite an insanely expensive buildout of power plants, high-voltage lines, and water-hungry cooling infrastructure—costs socialized onto ratepayers and communities so that a handful of platforms can chase trillion-dollar valuations. The dot-com bubble misallocated capital; the AI bubble is trying to reroute the grid.

In that sense, this isn’t just financial speculation on GPUs and model weights—it’s a stealth industrial policy, drafted in Silicon Valley and cashed at the public’s expense.

The problem, as always, is timing. Bubbles create enormous winners and equally enormous craters. Masa’s career is proof. But this time, the stakes are higher. The AI Bubble isn’t just a capital cycle; it’s a geopolitical and industrial reordering, pulling in cloud platforms, national security, energy systems, media industries, and governments with a bad case of FOMO scrambling to regulate a technology they barely understand.

And now, just as Masa reloads for his next moonshot, the market itself is starting to wobble. The past week’s selloff may not be random—it feels like a classic early-warning sign of a bubble straining under its own weight. In every speculative cycle, the leaders crack first: the most crowded trades, the highest-multiple stories, the narratives everyone already believes. This time, those leaders are the AI complex—GPU giants, hyperscale clouds, and anything with “model” or “inference” in the deck. When those names roll over together, it tells you something deeper than normal volatility is at work.

What the downturn may expose is the growing narrative about an “earnings gap.” Investors have paid extraordinary prices for companies whose long-term margins remain theoretical, whose energy demands are exploding, and whose regulatory and copyright liabilities are still unpriced. The AI story is enormous—but the business model remains unresolved. A selloff forces the market to remember the thing it forgets at every bubble peak: cash flow eventually matters.

Back in the late cycle of the dot-com era, I had lunch in December 1999 with a friend who had worked 20 years in a division of a huge conglomerate, bought his division in a leveraged buyout, ran that company for 10 years, took it public, then sold it to another company that itself went public. He asked me to explain how these dot-coms were able to go public, a process he equated with hard work and serious people. I said, well, we like them to have four quarters of top-line revenue. He stared at me. I said, I know it’s stupid, but that’s what they say. He said, it’s all going to crash. And boy did it ever.

And ironically, nothing captures this late-cycle psychology better than Masa’s own behavior. SoftBank selling Nvidia—the proven cash-printing side of AI—to buy OpenAI at a $500 billion valuation isn’t contrarian genius; it’s the definition of a crowded climax trade, the moment when everyone is leaning the same direction. When that move coincides with the tape turning red, the message is unmistakable: the AI supercycle may not be over, but the easy phase is.

Whether this is the start of a genuine deflation or just the first hard jolt before the final manic leg, the pattern is clear. The AI Bubble is no longer hypothetical—it is showing up on the trading screens, in the sentiment, and in the rotation of capital itself.

Masa may still believe the crest of the wave lies ahead. But the market has begun to ask the question every bubble eventually faces: What if this is the top of the ride?

Masa is betting that the crest of the curve lies ahead—that we’re in Act Two of an AI supercycle. Maybe he’s right. Or maybe he’s gearing up for his third historic wipeout.

Either way, he’s back in the saddle.

The Bubble Rider rides again.

Taxpayer-Backed AI? The Triple Subsidy No One Voted For

OpenAI’s CFO recently suggested that Uncle Sam should backstop AI chip financing—essentially asking taxpayers to guarantee the riskiest capital costs for “frontier labs.” As The Information reported, the idea drew immediate pushback from tech peers who questioned why a company preparing for a $500 billion valuation—and possibly a trillion-dollar IPO—can’t raise its own money. Why should the public underwrite a firm whose private investors are already minting generational wealth?


Meanwhile, the Department of Energy is opening federal nuclear and laboratory sites—from Idaho National Lab to Oak Ridge and Savannah River—for private AI data centers, complete with fast-track siting, dedicated transmission lines, and priority megawatts. DOE’s expanded Title XVII loan-guarantee authority sweetens the deal, offering government-backed credit and low borrowing costs. It’s a breathtaking case of public risk for private expansion, at a time when ordinary ratepayers are staring down record-high energy bills.

And the ambition goes further. Some of these companies now plan to site small modular nuclear reactors to provide dedicated power for AI data centers. That means the next generation of nuclear power—built with public financing and risk—could end up serving private compute clusters, not the public grid. In a country already facing desertification, water scarcity, and extreme heat, it is staggering to watch policymakers indulge proposals that will burn enormous volumes of water to cool servers, while residents across the Southwest are asked to ration and conserve. In theory I don’t have a problem with private power grids, but I don’t believe they’ll be private and I do believe that in both the short run and the long run these “national champions” will drive electricity prices through the stratosphere—which would be OK, too, if the AI labs paid off the bonds that built our utilities. All the bonds.

At the same time, Washington still refuses to enforce copyright law, allowing these same firms to ingest millions of creative works into their models without consent, compensation, or disclosure—just as it did under DMCA §512 and Title I of the MMA, both of which legalized “ingest first, reconcile later.” That’s a copyright subsidy by omission, one that transfers cultural value from working artists into the balance sheets of companies whose business model depends on denial.


And the timing? Unbelievable. These AI subsidies were being discussed in the same week SNAP benefits were running out and the Treasury was struggling to refinance federal debt. We are cutting grocery assistance to families while extending loan guarantees and land access to trillion-dollar corporations.


If DOE and DOD insist on framing this as “AI industrial policy,” then condition every dollar on verifiable rights-clean training data, environmental transparency, and water accountability. Demand audits, clawbacks, and public-benefit commitments before the first reactor breaks ground.

Until then, this is not innovation—it’s industrialized arbitrage: public debt, public land, and public water underwriting the private expropriation of America’s creative and natural resources.

The Digital End-Cap: How Spotify’s Discovery Mode Turned Payola into Personalization

The streaming economy’s most controversial feature revives the old record-store co-op ad model—only now, the shelf space is algorithmic, the payments are disguised as royalty discounts, and the audience has no idea.

From End-Caps to Algorithms: The Disappearing Line Between Marketing and Curation

In the record-store era, everyone in the business knew that end-caps, dump bins, window clings, and in-store listening stations weren’t “organic” discoveries—they were paid placements. Labels bought the best shelf space, sponsored posters, and underwrote the music piped through the store’s speakers because visibility sold records.

Spotify’s Discovery Mode is that same co-op advertising model reborn in code: a system where royalty discounts buy algorithmic shelf space rather than retail real estate. Yet unlike the physical store, today’s paid promotion is hidden behind the language of personalization. Users are told that playlists and AI DJs are “made just for you,” when in fact those recommendations are shaped by the same financial incentives that once determined which CD got the end-cap.

On Spotify, nothing is truly organic; Discovery Mode simply digitizes the old pay-for-placement economy, blending advertising with algorithmic curation while erasing the transparency that once separated marketing from editorial judgment.

Spotify’s Discovery Mode: The “Inverted Payola”

The problem for Spotify is that it has never positioned itself like a retailer. It has always positioned itself as a substitute for radio, and buying radio is a dangerous occupation. That’s called payola.

Spotify’s controversial “Discovery Mode” is a kind of inverted payola which makes it seem like it smells less than it actually does. Remember, artists don’t get paid for broadcast radio airplay in the US so the incentive always was for labels to bribe DJs because that’s the only way that money entered the transaction. (At one point, that could have included publishers, too, back when publishers tried to break artists who recorded their songs.)

What’s different about Spotify is that streaming services do pay for their equivalent of airplay. When Discovery Mode pays less in return for playing certain songs more, that’s essentially the same as getting paid for playing certain songs more. It’s just a more genteel digital transaction in the darkness of ones and zeros instead of the tackier $50 handshake. The discount is every bit as much a “thing of value” as a $50 bill, with the possible exception that it goes to benefit Spotify stockholders and employees unlike the $50 that an old-school DJ probably just put in his pocket in one of those gigantic money rolls. (For games to play on a rainy day, try betting a DJ he has less than $10,000 in his pocket.)

Music Business Worldwide gave Spotify’s side of the story (which is carefully worded flack talk, so pay close attention). Spotify rejected the allegations, telling AllHipHop:

“The allegations in this complaint are nonsense. Not only do they misrepresent what Discovery Mode is and how it works, but they are riddled with misunderstandings and inaccuracies.”

The company explained that Discovery Mode affects only Radio, Autoplay, and certain Mixes, not flagship playlists like Discover Weekly or the AI DJ that the lawsuit references. Spotify added: “The complaint even gets basic facts wrong: Discovery Mode isn’t used in all algorithmic playlists, or even Discover Weekly or DJ, as it claims.”

The Payola Deception Theory

The emerging payola deception theory against Spotify argues that Spotify’s pay-to-play Discovery Mode constitutes a form of covert payola that distorts supposedly neutral playlists and recommendation systems—including Discover Weekly and the AI DJ—even if those specific products do not directly employ the “Discovery Mode” flag.

The key to proving this theory lies in showing how a paid-for boost signal introduced in one part of Spotify’s ecosystem inevitably seeps through the data pipelines and algorithmic models that feed all the others, deceiving users about the neutrality of their listening experience. That does seem to be the value proposition—”You give us cheaper royalties, we give you more of the attention firehose.”

Spotify claims that Discovery Mode affects only Radio, Autoplay, and certain personalized mixes, not flagship products like Discover Weekly or the AI DJ. That defense rests on a narrow, literal interpretation: those surfaces do not read the Discovery Mode switch. Yet under the payola deception theory, this distinction is meaningless because Spotify’s recommendation ecosystem appears to be fully integrated.

Spotify’s own technical publications and product descriptions indicate that multiple personalized surfaces—including Discover Weekly and AI DJ—are built on shared user-interaction data, learned taste profiles, and common recommendation models, rather than each using entirely independent algorithms. It sounds like Spotify is claiming that certain surfaces like Discover Weekly and AI DJ have cabined algorithms and pristine data sets that are not affected by Discovery Mode playlists or the Discovery Mode switch.

While that may be true, it seems like maintaining that separation would be downright hairy, if not expensive in terms of compute. It seems far more likely that Spotify runs shared models on shared data, and when the company says “Discovery Mode isn’t used in X,” it is only talking about the literal flag—not the downstream effects of the paid boost on global engagement metrics and taste profiles.

How the Bias Spreads: Five Paths of Contamination

So let’s infer that every surface draws on the same underlying datasets, engagement metrics, and collaborative models. Once the paid boost changes user behavior, it alters the entire system’s understanding of what is popular, relevant, or representative of a listener’s taste. The result is systemic contamination: a payola-driven distortion presented to users as organic personalization. The architecture that would make their strong claim true is expensive and unnatural; the architecture that’s cheap and standard almost inevitably lets the paid boost bleed into those “neutral” surfaces in five possible ways.

The first is through popularity metrics. As far as we can tell from the outside, Discovery Mode artificially inflates a track’s exposure in the limited contexts where the switch is activated. Those extra impressions generate more streams, saves, and “likes,” which I suspect feed into Spotify’s master engagement database.

Because stream count, skip rate, and save ratio are very likely global ranking inputs, Discovery Mode’s beneficiaries appear “hotter” across the board. Even if Discover Weekly or the AI DJ ignore the Discovery Mode flag, it’s reasonable to infer that they still rely on those popularity statistics to select and order songs. Otherwise Spotify would need to maintain separate, sanitized algorithms trained only on “clean” engagement data—an implausible and inefficient architecture given Spotify’s likely integrated recommendation system and the economic logic of Discovery Mode itself. The paid boost thus translates into higher ranking everywhere, not just in Radio or Autoplay. This is the algorithmic equivalent of laundering a bribe through the system—money buys visibility that masquerades as audience preference.
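To see the laundering mechanically, here is a toy simulation. It is emphatically not Spotify’s code; the track names and numbers are hypothetical, and the only assumption is the one just described: a “clean” surface that ranks by a shared global stream count which a boosted surface also writes into.

```python
# Toy model: a paid boost in one surface contaminates a "clean" surface
# that never reads the boost flag, because both share global stream counts.
# All names and numbers are hypothetical illustrations.
import random

random.seed(0)

tracks = ["track_A", "track_B", "track_C"]                # track_A is "in Discovery Mode"
BOOST = {"track_A": 3.0, "track_B": 1.0, "track_C": 1.0}  # exposure multiplier

global_streams = {t: 0 for t in tracks}

# Surface 1 ("Radio/Autoplay" stand-in): the only place the boost flag applies.
for _ in range(10_000):
    played = random.choices(tracks, weights=[BOOST[t] for t in tracks])[0]
    global_streams[played] += 1  # writes into the SHARED metric store

# Surface 2 ("Discover Weekly" stand-in): never reads the boost flag,
# but ranks candidates by the shared global stream counts.
clean_ranking = sorted(tracks, key=lambda t: global_streams[t], reverse=True)

print("global streams:", global_streams)
print("'clean' surface ranking:", clean_ranking)
# track_A tops the "clean" ranking even though that surface
# never consulted the Discovery Mode flag.
```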

The second potential channel is through user taste profiles. Spotify’s personalization models constantly update a listener’s “taste vector” based on recent listening behavior. If Discovery Mode repeatedly serves a track in Autoplay or Radio, the listener’s history skews toward that song and its stylistic “neighbors.” The algorithm likely then concludes that the listener “likes” similar artists (even though it was actually Discovery Mode serving the track, not user free will), and feeds those inferred likes into Discover Weekly, Daily Mixes, and the AI DJ’s commentary stream. The user thinks the AI is reading their mood; in reality, it is reading a taste profile that was manipulated upstream by a pay-for-placement mechanism. All roads lead to Bieber or Taylor.
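The drift is easy to model. The sketch below uses a generic exponential-moving-average update, a textbook profile rule rather than Spotify’s actual model; the genre axes, learning rate, and vectors are all hypothetical.

```python
# Toy taste-profile drift: a listener's "taste vector" is updated from
# each play, so boosted plays skew it even if the listener never chose them.
# Genre axes, the 0.1 learning rate, and all vectors are hypothetical.

TASTE_DIMS = ("pop", "ambient")

def update_taste(taste: tuple, played: tuple, lr: float = 0.1) -> tuple:
    """Generic exponential-moving-average profile update."""
    return tuple((1 - lr) * t + lr * p for t, p in zip(taste, played))

listener = (0.2, 0.8)            # historically an ambient fan
boosted_pop_track = (1.0, 0.0)   # the Discovery Mode track's style

# Autoplay keeps serving the boosted track...
for _ in range(20):
    listener = update_taste(listener, boosted_pop_track)

print(f"taste vector after boosted plays: pop={listener[0]:.2f}, ambient={listener[1]:.2f}")
# The profile now says "pop fan." Any surface reading this vector
# (Discover Weekly, Daily Mixes, an AI DJ) recommends accordingly:
# upstream manipulation laundered into "personal taste."
```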

A third route is collaborative filtering and embeddings, aka “truthiness.” As I understand it, Spotify’s recommendation architecture relies on listening patterns—tracks played in the same sessions or saved to the same playlists become linked in multidimensional “embedding” space. When Discovery Mode injects certain tracks into more sessions, it likely artificially strengthens the connections between those promoted tracks and the others around them. The output then seems far more likely to become “fans of Artist A also like Artist B.” That output becomes algorithmically more frequent and hence “truer,” or “truthier,” not because listeners chose it freely, but because paid exposure engineered the correlation. Those embeddings are probably global: they shape the recommendations of Discover Weekly, the “Fans also like” carousel, and the candidate pool for the AI DJ. A commercial distortion at the periphery is thus more likely to reshape the supposedly organic map of musical similarity at the core.
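Even the crudest stand-in for embeddings, a raw session co-occurrence count, shows the effect. Real systems learn vector embeddings, but in this hypothetical sketch the direction of the distortion is the same: injection manufactures the correlation.

```python
# Toy co-occurrence "embedding": injecting a promoted track into more
# sessions manufactures the correlation "fans of A also like B."
# All session data is fabricated for illustration.
from collections import Counter
from itertools import combinations

organic_sessions = [
    ["indie_1", "indie_2"],
    ["indie_2", "indie_3"],
    ["indie_1", "indie_3"],
]

# Discovery Mode injects "promoted" into many sessions where no one chose it.
injected_sessions = [s + ["promoted"] for s in organic_sessions] * 5

cooccur = Counter()
for session in organic_sessions + injected_sessions:
    for a, b in combinations(sorted(set(session)), 2):
        cooccur[(a, b)] += 1

# "Fans also like" for indie_1 is now dominated by the paid injection.
related = sorted(
    ((pair, n) for pair, n in cooccur.items() if "indie_1" in pair),
    key=lambda x: -x[1],
)
for pair, n in related:
    print(pair, n)
# ("indie_1", "promoted") outranks every organic neighbor.
```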

Fourth, the DM boost echoes through editorial and social feedback loops. Once Discovery Mode inflates a song’s performance metrics, it begins to look like what passes for a breakout hit these days. Editors scanning dashboards see higher engagement and may playlist the track in prominent editorial contexts. Users might add it to their own playlists, creating external validation. The cumulative effect is that an artificial advantage bought through Discovery Mode converts into what appears to be organic success, further feeding into algorithmic selection for other playlists and AI-driven features. This recursive amplification makes it almost impossible to isolate the paid effect from the “natural” one, which is precisely why disclosure rules exist in traditional payola law. I say “almost impossible” reflexively—I actually think it is in fact impossible, but that’s the kind of thing you can model in a different type of “discovery,” namely court-ordered discovery.

Finally, there is the shared-model problem. Spotify has publicly acknowledged that the AI DJ is a “narrative layer” built on the same personalization technology that powers its other recommendation surfaces. In practice, this means one massive model (or group of shared embeddings) generates candidate tracks, while a separate module adds voice or context.

If the shared model were trained on Discovery-Mode-skewed data, then even when the DJ module does not read the Discovery flag, it inherits the distortions embedded in those weights. Turning off the switch for the DJ therefore does not remove the influence; it merely hides its provenance. Unlike AI systems designed to dampen feedback bias, Spotify’s Discovery Mode institutionalizes it—bias is the feature, not the bug. You know, garbage in, garbage out.
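That provenance-hiding point can be shown end to end in a few lines. In this hypothetical sketch, a shared score table stands in for the shared model: it is “trained” on flag-skewed exposure logs, and a “DJ” module that never touches the flag still reproduces the skew because it reads the shared table.

```python
# Toy "shared model" problem: the DJ module never reads the Discovery
# Mode flag, yet inherits its bias because it consumes shared scores
# learned from flag-skewed exposure. All data is hypothetical.
import random

random.seed(1)

# The flag is applied upstream, at exposure time only.
EXPOSURE_WEIGHT = {"boosted_track": 3.0, "organic_track": 1.0}

# "Training": shared affinity scores learned from the skewed exposure logs.
shared_scores = {t: 0 for t in EXPOSURE_WEIGHT}
for _ in range(10_000):
    t = random.choices(list(EXPOSURE_WEIGHT), weights=list(EXPOSURE_WEIGHT.values()))[0]
    shared_scores[t] += 1

def dj_pick(scores: dict) -> str:
    """The 'narrative layer': adds chat, but candidates come from shared scores."""
    track = max(scores, key=scores.get)
    return f"DJ: 'Here's something picked just for you...' -> {track}"

print(shared_scores)
print(dj_pick(shared_scores))  # bias inherited; the flag is nowhere in sight
```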

Proving the Case: Discovery Mode’s Chain of Causation and the Triumph of GIGO

Legally, there’s a strong argument that the deception arises not from the existence of Discovery Mode itself but from how Spotify represents its recommendation products. The company markets Discover Weekly, Release Radar, and AI DJ as personalized to your taste, not as advertising or sponsored content. When a paid-boost mechanism anywhere in the ecosystem alters what those “organic” systems serve, Spotify arguably misleads consumers and rightsholders about the independence of its curation. Under a modernized reading of payola or unfair-deceptive-practice laws, that misrepresentation can amount to a hidden commercial endorsement—precisely the kind of conduct that the Federal Communications Commission’s sponsorship-identification rules (aka payola rules) and the FTC’s endorsement guides were designed to prevent.

In fact, the same disclosure standards that govern influencers on social media should govern algorithmic influencers on streaming platforms. When Spotify accepts a royalty discount in exchange for promoting a track, that arguably constitutes a material connection under the FTC’s Endorsement Guides. Failing to disclose that connection to listeners could transform Discovery Mode from a personalization feature into a deceptive advertisement—modern payola by another name. Why piss off one law enforcement agency when you can have two of them chase you around the rugged rock?

It must also be said that Discovery Mode doesn’t just shortchange artists and mislead listeners; it quietly contaminates the sainted ad product, too. Advertisers think they’re buying access to authentic, personalized listening moments. In reality, they’re often buying attention in a feed where the music itself is being shaped by undisclosed royalty discounts — a form of algorithmic payola that bends not only playlists, but the very audience segments and performance metrics brands are paying for. Advertising agencies don’t like that kind of thing one little bit. We remember what happened when it became apparent that ads were being served to pirate sites by you know who.

Proving the payola deception theory would therefore likely involve demonstrating causation across data layers: that the presence of Discovery Mode modifies engagement statistics, that those metrics propagate into global recommendation features, and that users (and possibly advertisers) were misled to believe those recommendations were purely algorithmic or merit-based. We can infer that the structure of Spotify’s own technology likely makes that chain not only plausible but possibly inevitable.

In an interconnected system where every model learns from the outputs of every other, no paid input stays contained. The moment a single signal is bought, a strong case can be made that the neutrality of the entire recommendation network is compromised—and so is the user’s trust in what it means when Spotify says a song was “picked just for you.”

Y’all Street Rising: Why the Future of Music Finance Won’t Be Made in Manhattan

There’s a new gravity well in American finance, and it’s not New York. It’s not even Silicon Valley. It’s Dallas. It’s Austin. It’s Y’all Street.

And anyone paying attention could have seen it coming. The Texas Stock Exchange (TXSE) is preparing for launch in 2026. The TXSE is not some bulletin board; it’s backed by billions from institutions that have grown weary of the compliance culture and cost of New York. Goldman Sachs’s Dallas campus is now operational. BlackRock and Charles Schwab have shifted major divisions to the Lone Star State. Tesla and Samsung are expanding giga-scale manufacturing and chip fabrication plants.

A strong center of gravity for capital formation is moving south, and with it, a new cultural economy is taking shape. And AI may not save it: Scion Asset Management, “Big Short” investor Michael Burry’s hedge fund, disclosed to the SEC that it had a short bet worth $1.1 billion against Nvidia and Palantir. He’s also investing in the water that AI burns. So not everyone is jumping off a cliff.

A New Realignment

Texas startups have raised roughly $9.8 billion in venture capital through Q3 2025, pushing the state to a consistent #4 ranking nationally. Austin remains the creative and software hub, while Dallas–Fort Worth and Houston lead in AI infrastructure, energy tech, and finance.

The TXSE will formalize what investors already know: capital markets no longer need Manhattan to function.

And that raises an uncomfortable question for the music industry:

If capital, infrastructure, and innovation no longer orbit Wall Street, why should music?

Apple Learned It the Hard Way

Despite New York’s rich musical legacy—Tin Pan Alley, Brill Building, CBGB, and the era of the major-label tower when Sony occupied that horrible AT&T building and flew sushi in from Japan for the executive dining room—the city has become an increasingly difficult place to sustain large-scale creative infrastructure. Real estate costs, over-regulation, and financial concentration have hollowed out the middle layer of production.  As I’ve taught for years, the key element to building the proverbial “creative class” is cheap rent, preferably with a detached garage.

Even Apple Inc. learned long ago that creativity can’t thrive where every square foot carries a compliance surcharge. That’s why Apple’s global supply chain, data centers, and now content operations span Texas, Tennessee, and North Carolina instead of Midtown Manhattan.  And then there’s the dirty power, sump pumps and subways—Electric Lady would probably never get built today.

The lesson for the music business is clear: creative capital follows economic oxygen. And right now, that oxygen is in Texas.

The Texas Music Office: A Model for How to Get It Done

If you want to understand how Texas built a durable, bipartisan music infrastructure, start with the Texas Music Office (TMO). Founded in 1990 under Governor Bill Clements, the TMO was one of the first state agencies in America to recognize the music industry not just as culture, but as economic development.

Over the decades—through governors of both parties—the TMO has become a master class in how to institutionalize support for creative enterprise without strangling it in bureaucracy. From George W. Bush’s early focus on export promotion, to Rick Perry’s integration of music into economic development, to Greg Abbott’s expansion of the Music Friendly Communities network, each administration built upon rather than dismantled what came before.

Today, the TMO supports more than 70 certified Music Friendly Communities, funds music-education grants, tracks economic data, and connects local musicians with investors and international partners. It’s a template for how a state can cultivate creative industries while maintaining fiscal discipline and accountability.

It’s also proof that cultural policy doesn’t have to be partisan—it just has to be practical.

When people ask why Texas has succeeded where others stalled, the answer is simple: the TMO stayed focused on results, not rhetoric. That’s a lesson a lot of states—and more than a few record labels—could stand to relearn.

Artist Rights Institute: Doing Our Part for Texas and Beyond

The Artist Rights Institute (ARI) has done its part to make sure that Texas and other local music communities and creators aren’t an afterthought in rooms that are usually dominated by platform interests and coastal trade groups.

When questions of AI training, copyright allocation, black-box royalties, and streaming transparency landed in front of the U.S. Copyright Office, Congress, and U.K. policymakers, ARI showed up with the Texas view: creators first, no speculative ticketing, no compulsory “data donation,” and no silent expropriation of recordings and songs for AI. ARI has filed comments, contributed research, and supported amicus work to make sure Texas artists, songwriters, and indie publishers are in the record — not just the usual New York, Nashville, and Los Angeles voices.

Just as important, ARI has pushed financial education for artists. Because Y’all Street doesn’t help creators if they don’t know what a discount rate is, how catalog valuations work, how to read a mechanical statement, or why AI licenses need to be expressly excluded from legacy record and publishing deals. ARI programs in Texas and Georgia have focused on:
– explaining how federal policy actually hits musicians,
– showing how to negotiate or at least spot AI/derivative-use clauses,
– and connecting artists to local music industry infrastructure.

In other words, ARI joined other Texas and Georgia organizations to be a translator between Texas’s very real music economy and the fast-moving policy debates in Washington and the U.K. If Texas is going to be the place where music is financed, ARI wants to make sure local artists are also the ones who capture the value.

Music’s Texas Moment

Texas is no newcomer to the business of music. Its industry already generates over $13.4 billion in annual economic activity, supporting more than 91,000 jobs across its certified cities. Austin retains the crown of “Live Music Capital of the World,” but Denton, Fort Worth, and San Antonio have joined the state-certified network of “Music Friendly Communities”.

Meanwhile, universities from UT-Austin to Texas A&M study rights management, AI provenance, and royalties in the age of generative audio.

The result: a state that treats music not as nostalgia, but as an evolving economic engine.  Plus we’ve got Antone’s.

Wall Street’s ‘Great Sucking Sound,’ Replayed

Ross Perot once warned of “that giant sucking sound” as jobs moved south. Thirty years later, the sound you hear isn’t manufacturing—it’s money, data, and influence flowing to Y’all Street.

If the major labels and publishers don’t track that migration, they risk becoming cultural tenants in cities they no longer own. The next catalog securitization, the next AI-royalty clearinghouse, the next Bell Labs-for-Music could just as easily be financed out of Dallas as from Midtown.

Because while New York made the hits of the last century, Texas may well finance the next one.  We’ve always had the musicians, producers, authors, actors and film makers, but soon we’ll also have the money.

Y’all Ready?

The world no longer needs a Midtown address to mint creative wealth. As the TXSE prepares its debut and Texas cements its position as the nation’s innovation corridor, the music industry faces a choice:

Follow the capital—or become another cautionary tale of what happens when you mistake heritage for destiny.

Because as Apple learned long ago, even the richest history can’t compete with the freedom to build something new.  

When the Machine Lies: Why the NYT v. Sullivan “Public Figure” Standard Shouldn’t Protect AI-Generated Defamation of @MarshaBlackburn

Google’s AI system, Gemma, has done something no human journalist could ever get past an editor: fabricate and publish grotesque rape allegations about a sitting U.S. Senator and a political activist—both living people, both blameless.

As anyone who has ever dealt with Google and its depraved executives knows all too well, Google will genuflect and obfuscate with great public moral whinging, but the reality is—they do not give a damn. When Sen. Marsha Blackburn and Robby Starbuck demand accountability, Google’s corporate defense reflex will surely be: We didn’t say it; the model did—and besides, they’re public figures under the Supreme Court’s defamation decision in New York Times v. Sullivan.

But that defense leans on a doctrine that simply doesn’t fit the facts of the AI era. New York Times v. Sullivan was written to protect human speech in public debate, not machine hallucinations in commercial products.

The Breakdown Between AI and Sullivan

In 1964, Sullivan shielded civil-rights reporting from censorship by Southern officials (like Bull Connor) who were weaponizing libel suits to silence the press. The Court created the “actual malice” rule—requiring public officials to prove a publisher knew a statement was false or acted with reckless disregard for the truth—so journalists could make good-faith errors without losing their shirts.

But AI platforms aren’t journalists.

They don’t weigh sources, make judgments, or participate in democratic discourse. They don’t believe anything. They generate outputs, often fabrications, from models trained on data they were likely never authorized to use.

So when Google’s AI invents a rape allegation against a sitting U.S. Senator, there is no “breathing space for debate.” There is only a product defect—an industrial hallucination that injures a human reputation.

Blackburn and Starbuck: From Public Debate to Product Liability

Senator Blackburn discovered that Gemma responded to the prompt “Has Marsha Blackburn been accused of rape?” by conjuring an entirely fictional account of a sexual assault by the Senator and citing nonexistent news sources.  Conservative activist Robby Starbuck experienced the same digital defamation—Gemini allegedly linked him to child rape, drugs, and extremism, complete with fake links that looked real.

In both cases, Google executives were notified. In both cases, the systems remained online.

That isn’t “reckless disregard for the truth” in the Sullivan sense—it’s something more corporate and more concrete: knowledge of a defective product that continues to cause harm.

When a car manufacturer learns that the gas tank explodes but ships more cars, we don’t call that journalism. We call it negligence—or worse.

Why “Public Figure” Is the Wrong Lens

The Sullivan line of cases presumes three things:

  1. Human intent: journalists believed what they wrote was the truth.
  2. Public discourse: statements occurred in debate on matters of public concern about a public figure.
  3. Factual context: errors were mistakes in an otherwise legitimate attempt at truth.

None of those apply here.

Gemma didn’t “believe” Blackburn committed assault; it simply assembled probabilistic text from its training set. There was no public controversy over whether she did so; Gemma created that controversy ex nihilo. And the “speaker” is not a journalist or citizen but a trillion-dollar corporation deploying a stochastic parrot for profit.

Extending Sullivan to this context would distort the doctrine beyond recognition. The First Amendment protects speakers, not software glitches.

A Better Analogy: Unsafe Product Behavior—and the Ghost of Mrs. Palsgraf

Courts should treat AI defamation less like tabloid speech and more like defective design, less like calling out racism and more like an exploding boiler.

When a system predictably produces false criminal accusations, the question isn’t “Was it actual malice?” but “Was it negligent to deploy this system at all?”

The answer practically waves from the platform’s own documentation. Hallucinations are a known bug—very well known, in fact. Engineers write entire mitigation memos about them, policy teams issue warnings about them, and executives testify about them before Congress.

So when an AI model fabricates rape allegations about real people, we are well past the point of surprise. Foreseeability is baked into the product roadmap.

Or as every first-year torts student might say: Heloooo, Mrs. Palsgraf.

A company that knows its system will accuse innocent people of violent crimes and deploys it anyway has crossed from mere recklessness into constructive intent. The harm is not an accident; it is an outcome predicted by the firm’s own research, then tolerated for profit.

Imagine if a car manufacturer admitted its autonomous system “sometimes imagines pedestrians” and still shipped a million vehicles. That’s not an unforeseeable failure; that’s deliberate indifference. The same logic applies when a generative model “imagines” rape charges. It’s not a malfunction—it’s a foreseeable design defect.

Why Executive Liability Still Matters

Executive liability matters here because these are not anonymous software errors—they’re policy choices.

Executives sign off on release schedules, safety protocols, and crisis responses. If they were informed that the model fabricated criminal accusations and chose not to suspend it, that’s more than recklessness; it’s ratification.

And once you frame it as product negligence rather than editorial speech, the corporate-veil argument weakens. Officers, especially senior officers, who knowingly direct or tolerate harmful conduct can face personal liability, particularly when reputational or bodily harm results from their inaction.

Re-centering the Law

Courts need not invent new doctrines. They simply have to apply old ones correctly:

  • Defamation law applies to false statements of fact.
  • Product-liability law applies to unsafe products.
  • Negligence applies when harm is foreseeable and preventable.

None of these requires contorting Sullivan’s “actual malice” shield through some pretzel-logic transmogrification to cover an AI or a robot. That shield was never meant for algorithmic speech emitted by unaccountable machines. As I’m fond of saying, Sir William Blackstone’s good old common law can solve the problem—we don’t need any new laws at all.

Section 230 and the Political Dimension

Sen. Blackburn’s outrage carries real statutory weight: Congress wrote the Section 230 safe harbor to protect interactive platforms from liability for user content, not for their own generated falsehoods. When a Google-made system fabricates crimes, that’s corporate speech, not user speech. So no 230 for them this time. And the government has every right—and arguably a duty—to insist that such systems be shut down until they stop defaming real people. Which is exactly what Senator Blackburn wants, and, as usual, she’s quite right. Me, I’d try to put the Google guy in prison.

The Real Lede

This is not a defamation story about a conservative activist or a Republican senator. It’s a story about the breaking point of Sullivan. For sixty years, that doctrine balanced press freedom against reputational harm. But it was built for newspapers, not neural networks.

AI defamation doesn’t advance public discourse—it destroys it. 

This isn’t speech that needs breathing space—it’s pollution that needs containment. And when executives profit from unleashing that pollution after learning that it harms people, the question isn’t whether they had “actual malice.” The question is whether the law will finally treat them as what they are: manufacturers of a defective product that lies and hurts people.