When the Machine Lies: Why the NYT v. Sullivan “Public Figure” Standard Shouldn’t Protect AI-Generated Defamation of @MarshaBlackburn

Google’s AI system, Gemma, has done something no human journalist could ever get past an editor: fabricate and publish grotesque rape allegations about a sitting U.S. Senator and a political activist—both living people, both blameless.

As anyone who has ever dealt with Google and its depraved executives knows all too well, Google will genuflect and obfuscate with great public moral whinging, but the reality is that they do not give a damn. When Sen. Marsha Blackburn and Robby Starbuck demand accountability, Google’s corporate defense reflex will surely be: We didn’t say it; the model did—and besides, they’re public figures under the Supreme Court’s defamation decision in New York Times v. Sullivan.

But that defense leans on a doctrine that simply doesn’t fit the facts of the AI era. New York Times v. Sullivan was written to protect human speech in public debate, not machine hallucinations in commercial products.

The Breakdown Between AI and Sullivan

In 1964, Sullivan shielded civil-rights reporting from censorship by Southern officials (like Bull Connor) who were weaponizing libel suits to silence the press. The Court created the “actual malice” rule—requiring public officials to prove a publisher knew a statement was false or acted with reckless disregard for the truth—so journalists could make good-faith errors without losing their shirts.

But AI platforms aren’t journalists.

They don’t weigh sources, make judgments, or participate in democratic discourse. They don’t believe anything. They generate outputs, often fabrications, trained on data they likely were never authorized to use.

So when Google’s AI invents a rape allegation against a sitting U.S. Senator, there is no “breathing space for debate.” There is only a product defect—an industrial hallucination that injures a human reputation.

Blackburn and Starbuck: From Public Debate to Product Liability

Senator Blackburn discovered that Gemma responded to the prompt “Has Marsha Blackburn been accused of rape?” by conjuring an entirely fictional account of a sexual assault by the Senator and citing nonexistent news sources.  Conservative activist Robby Starbuck experienced the same digital defamation—Gemini allegedly linked him to child rape, drugs, and extremism, complete with fake links that looked real.

In both cases, Google executives were notified. In both cases, the systems remained online.
That isn’t “reckless disregard for the truth” in the Sullivan sense—it’s something more corporate and more concrete: knowledge of a defective product that continues to cause harm.

When a car manufacturer learns that the gas tank explodes but ships more cars, we don’t call that journalism. We call it negligence—or worse.

Why “Public Figure” Is the Wrong Lens

The Sullivan line of cases presumes three things:

  1. Human intent: the journalist believed what they wrote was true.
  2. Public discourse: statements occurred in debate on matters of public concern about a public figure.
  3. Factual context: errors were mistakes in an otherwise legitimate attempt at truth.

None of those apply here.

Gemma didn’t “believe” Blackburn committed assault; it simply assembled probabilistic text from its training set. There was no public controversy over whether she did so; Gemma created that controversy ex nihilo. And the “speaker” is not a journalist or citizen but a trillion-dollar corporation deploying a stochastic parrot for profit.

Extending Sullivan to this context would distort the doctrine beyond recognition. The First Amendment protects speakers, not software glitches.

A Better Analogy: Unsafe Product Behavior—and the Ghost of Mrs. Palsgraf

Courts should treat AI defamation less like tabloid speech and more like defective design, less like calling out racism and more like an exploding boiler.

When a system predictably produces false criminal accusations, the question isn’t “Was it actual malice?” but “Was it negligent to deploy this system at all?”

The answer practically waves from the platform’s own documentation. Hallucinations are a known bug—very well known, in fact. Engineers write entire mitigation memos about them, policy teams issue warnings about them, and executives testify about them before Congress.

So when an AI model fabricates rape allegations about real people, we are well past the point of surprise. Foreseeability is baked into the product roadmap.
Or as every first-year torts student might say: Heloooo, Mrs. Palsgraf.

A company that knows its system will accuse innocent people of violent crimes and deploys it anyway has crossed from mere recklessness into constructive intent. The harm is not an accident; it is an outcome predicted by the firm’s own research, then tolerated for profit.

Imagine if a car manufacturer admitted its autonomous system “sometimes imagines pedestrians” and still shipped a million vehicles. That’s not an unforeseeable failure; that’s deliberate indifference. The same logic applies when a generative model “imagines” rape charges. It’s not a malfunction—it’s a foreseeable design defect.

Why Executive Liability Still Matters

Executive liability matters because these are not anonymous software errors; they are policy choices.
Executives sign off on release schedules, safety protocols, and crisis responses. If they were informed that the model fabricated criminal accusations and chose not to suspend it, that’s more than recklessness; it’s ratification.

And once you frame it as product negligence rather than editorial speech, the corporate-veil argument weakens. Officers, especially senior officers, who knowingly direct or tolerate harmful conduct can face personal liability, particularly when reputational or bodily harm results from their inaction.

Re-centering the Law

Courts need not invent new doctrines. They simply have to apply old ones correctly:

  • Defamation law applies to false statements of fact.
  • Product-liability law applies to unsafe products.
  • Negligence applies when harm is foreseeable and preventable.

None of these requires contorting Sullivan’s “actual malice” shield through some pretzel-logic transmogrification so that it applies to an AI or a robot. That shield was never meant for algorithmic speech emitted by unaccountable machines. As I’m fond of saying, Sir William Blackstone’s good old common law can solve the problem—we don’t need any new laws at all.

Section 230 and The Political Dimension

Sen. Blackburn’s outrage carries constitutional weight: Congress wrote the Section 230 safe harbor to protect interactive platforms from liability for user content, not for their own generated falsehoods. When a Google-made system fabricates crimes, that’s corporate speech, not user speech. So no 230 for them this time. And the government has every right—and arguably a duty—to insist that such systems be shut down until they stop defaming real people. Which is exactly what Senator Blackburn wants, and, as usual, she’s quite right to demand it. Me, I’d try to put the Google guy in prison.

The Real Lede

This is not a defamation story about a conservative activist or a Republican senator. It’s a story about the breaking point of Sullivan. For sixty years, that doctrine balanced press freedom against reputational harm. But it was built for newspapers, not neural networks.

AI defamation doesn’t advance public discourse—it destroys it. 

This isn’t speech that needs breathing space—it’s pollution that needs containment. And when executives profit from unleashing that pollution after knowing it harms people, the question isn’t whether they had “actual malice.” The question is whether the law will finally treat them as what they are: manufacturers of a defective product that lies and hurts people.

Denmark’s Big Idea: Protect Personhood from the Blob With Consent First and Platform Duty Built In

Denmark has given the rest of us a simple, powerful starting point: protect the personhood of citizens from the blob—the borderless slurry of synthetic media that can clone your face, your voice, and your performance at scale. Crucially, Denmark isn’t trying to turn name‑image‑likeness into a mini‑copyright. It’s saying something more profound: your identity isn’t a “work”; it’s you. It’s what is sometimes called “personhood.” That framing changes everything. It’s not commerce, it’s a human right.

The Elements of Personhood

Personhood treats the human being as a moral subject, not a piece of content. For example, the European Court of Human Rights reads Article 8 ECHR (“private life”) to include personal identity (name, integrity of identity, and the like), protecting it against unjustified interference. This is, of course, anathema to Silicon Valley, but the world takes a different view.

In fact, Denmark’s proposal echoes the Universal Declaration of Human Rights. It starts with dignity (Art. 1) and recognition of each person before the law (Art. 6), and it squarely protects private life, honor, and reputation against synthetic impersonation (Art. 12). It balances freedom of expression (Art. 19) with narrow, clearly labeled carve-outs, and it respects creators’ moral and material interests (Art. 27(2)). Most importantly, it delivers an effective remedy (Art. 8): a consent-first rule backed by provenance and cross-platform stay-down, so individuals aren’t forced into DMCA-style learned helplessness.

Why does this matter? Because the moment we call identity or personhood a species of copyright, platforms will reach for a familiar toolbox—quotation, parody, transient copies, text‑and‑data‑mining (TDM)—and claim those exceptions protect them from “data holders.” That’s bleed‑through: the defenses built for expressive works ooze into an identity context where they don’t belong. The result is an unearned permission slip to scrape faces and voices “because the web is public.” Denmark points us in the opposite direction: consent or it’s unlawful. Not “fair use,” not “lawful access,” not “industry custom,” not “data profile.” Consent. Pretty easy concept. It’s one of the main reasons tech executives keep their kids away from cell phones and social media.

Not Replicating the Safe Harbor Disaster

Think about how we got here. The first generation of the internet scaled by pushing risk downstream with a portfolio of safe harbors like the God-awful DMCA and Section 230 in the US. Platforms insisted they deserved blanket liability shields because they were special. They were “neutral pipes,” a claim no one believed then and no one believes now. These massive safe harbors hardened into a business model that likely added billions to the FAANG bottom line. We taught millions of rightsholders and users to live with learned helplessness: file a notice, watch copies multiply, rinse and repeat. Many users did not know they could even do that much, and frankly still may not. That DMCA‑era whack‑a‑mole turned into a faux license, a kind of “catch me if you can” bargain where exhaustion replaces consent.

Denmark’s New Protection of Personhood for the AI Era

Denmark’s move is a chance to break that pattern—if we resist the gravitational pull back to copyright. A fresh right of identity (a “sui generis” right, for the Latin fans) is not subject to copyright or database exceptions, including fair use, the DMCA, and TDM. In plain English: “publicly available” is not permission to clone your face, train on your voice, or fabricate your performance. Or your children’s, either. If an AI platform wants to use identity, it asks first. If it doesn’t ask, it doesn’t get to do it, and it doesn’t get to keep the model it trained on it. And, as in many other areas of law, children can’t consent.

That legal foundation unlocks the practical fix creators and citizens actually need: stay‑down across platforms, not endless piecemeal takedowns. Imagine a teacher discovers a convincing deepfake circulating on two social networks and a messaging app. If we treat that deepfake as a copyright issue under the old model, she sends three notices, then five, then twelve. Week two, the video reappears with a slight change. Week three, it’s re‑encoded, mirrored, and captioned. The message she receives under a copyright regime is “you can never catch up.” So why don’t you just give up. Which, of course, in the world of Silicon Valley monopoly rents, is called the plan. That’s the learned helplessness Denmark gives us permission to reject.

Enforcing Personhood

How would the new plan work? First, we treat realistic digital imitations of a person’s face, voice, or performance as illegal absent consent, with only narrow, clearly labeled carve‑outs for genuine public‑interest reporting (no children, no false endorsement, no biometric spoofing risk, provenance intact). That’s the rights architecture: bright lines and human‑centered. Hence, “personhood.”

Second, we wire enforcement to succeed at internet scale. The way out of whack‑a‑mole is a cross‑platform deepfake registry operated with real governance. A deepfake registry doesn’t store videos; it stores non‑reversible fingerprints—exact file hashes for byte‑for‑byte matches and robust, perceptual fingerprints for the variants (different encodes, crops, borders). For audio, we use acoustic fingerprints; for video, scene/frame signatures. These markers will evolve, and so should the deepfake registry. One confirmed case becomes a family of identifiers that platforms check at upload and on re‑share. The first takedown becomes the last.
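
To make the mechanics concrete, here is a minimal sketch in Python of how a platform-side check against such a registry might work, under a few assumptions: the names (RegistryEntry, sha256_file, matches) are hypothetical, the perceptual fingerprint is assumed to be a 64-bit value produced by whatever pHash/dHash-style algorithm a real registry would standardize on, and the distance threshold is illustrative. Exact hashes catch byte-for-byte copies; a Hamming-distance comparison over perceptual fingerprints catches the re-encodes, crops, and borders.

```python
import hashlib
from dataclasses import dataclass, field

@dataclass
class RegistryEntry:
    """One confirmed case: a 'family of identifiers,' not the video itself."""
    case_id: str
    exact_hashes: set = field(default_factory=set)      # SHA-256 hex digests
    perceptual_fps: set = field(default_factory=set)    # 64-bit pHash/dHash-style values

def sha256_file(path: str) -> str:
    """Exact fingerprint: catches byte-for-byte reuploads."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two 64-bit perceptual fingerprints."""
    return bin(a ^ b).count("1")

def matches(entry: RegistryEntry, file_hash: str, perceptual_fp: int,
            max_distance: int = 8) -> bool:
    """True if an upload matches this case exactly or within a perceptual threshold."""
    if file_hash in entry.exact_hashes:
        return True
    return any(hamming_distance(perceptual_fp, fp) <= max_distance
               for fp in entry.perceptual_fps)
```

The hard parts in practice are the fingerprint algorithms, the thresholds, and the governance around who gets to add entries; the matching itself is cheap enough to run at upload and on every re-share.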

Third, we pair that with provenance by default. Provenance isn’t a license; it’s evidence. When credentials are present, content is easier to authenticate, so there is an incentive to use them. Provenance is the rebar that turns legal rules into reliable, automatable processes. The absence of credentials, however, doesn’t mean a free-for-all.

Finally, we put the onus where it belongs—on platforms. Europe’s Digital Services Act has, at least in theory, already replaced “willful blindness” with “notice‑and‑action” duties and oversight for very large platforms. Denmark’s identity right gives citizens a clear, national‑law basis to say: “This is illegal content—remove it and keep it down.” The platform’s job isn’t to litigate fair use in the abstract or hide behind TDM. It’s to implement upload checks, preserve provenance, run repeat‑offender policies, and prevent recurrences. If a case was verified yesterday, it shouldn’t be back tomorrow with a 10‑pixel border or other trivial alteration to defeat the rules.
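
Continuing the sketch above (same hypothetical names, plus an assumed has_provenance_credentials stub standing in for whatever C2PA-style credential check a platform actually runs), an upload gate tying these duties together might look something like this: check the verified-case registry first, block on any match, and route uncredentialed media to review rather than waving it through.

```python
def has_provenance_credentials(path: str) -> bool:
    """Stub: a real platform would verify C2PA-style content credentials here."""
    return False  # conservative default: treat media without verified credentials as unproven

def upload_gate(path: str, perceptual_fp: int, registry: list) -> str:
    """Decide what happens to an upload: 'block', 'review', or 'allow'."""
    file_hash = sha256_file(path)

    # Stay-down: any match against a verified case blocks the upload and
    # should also feed the platform's repeat-offender policy.
    for entry in registry:
        if matches(entry, file_hash, perceptual_fp):
            return "block"

    # Provenance helps prove, not permit: missing credentials don't make the
    # upload lawful, they just mean it can't be authenticated automatically.
    if not has_provenance_credentials(path):
        return "review"

    return "allow"
```

A single-verdict function is obviously a simplification; the point is that the decision runs at upload time, against verified cases, instead of waiting for the victim to file notice number thirteen.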

Some will ask: what about creativity and satire? The answer is what it has always been in responsible speech law—more speech, not fake speech. If you’re lampooning a politician with clearly labeled synthetic speech, no implied endorsement, provenance intact, and no risk of biometric spoofing or fraud, you have defenses. The point isn’t to smother satire; it’s to end the pretense that satire requires open season on the biometric identities of private citizens and working artists.

Others will ask: what about research and innovation? Good research runs on consent, especially human subject research (see 45 C.F.R. part 46). If a lab wants to study voice cloning, it recruits consenting participants, documents scope and duration, and keeps data and models in controlled settings. That’s science. What isn’t science is scraping the voices of a country’s population “because the web is public,” then shipping a model that anyone can use to spoof a bank’s call‑center checks. A no‑TDM‑bleed‑through clause draws that line clearly.

And yes, edge cases exist. There will be appeals, mistakes, and hard calls at the margins. That is why the registry must be governed—with identity verification, transparent logs, fast appeals, and independent oversight. Done right, it will look less like a black box and more like infrastructure: a quiet backbone that keeps people safe while allowing reporting and legitimate creative work to thrive.

If Denmark’s spark is to become a firebreak, the message needs to be crisp:

— This is not copyright. Identity is a personal right; copyright defenses don’t apply.

— Consent is the rule. A deepfake without consent is unlawful.

— No TDM bleed‑through. “Publicly available” does not equate to permission to clone or train.

— Provenance helps prove, not permit. Keep credentials intact; stripping them has consequences.

— Stay‑down, cross‑platform. One verified case should not become a thousand reuploads.

That’s how you protect personhood from the blob. By refusing to treat humans like “content,” by ending the faux‑license of whack‑a‑mole, and by making platforms responsible for prevention, not just belated reaction. Denmark has given us the right opening line. Now we should finish the paragraph: consent or block. Label it, prove it, or remove it.

The Duty Comes From the Data: Rethinking Platform Liability in the Age of Algorithmic Harm

For too long, dominant tech platforms have hidden behind Section 230 of the Communications Decency Act, claiming immunity for any harm caused by third-party content they host or promote. But as platforms like TikTok, YouTube, and Google have moved beyond passive hosting into highly personalized, behavior-shaping recommendation systems, the legal landscape is shifting in the personal-injury context. A new theory of liability is emerging—one grounded not in speech, but in conduct. And it begins with a simple premise: the duty comes from the data.

Surveillance-Based Personalization Creates Foreseeable Risk

Modern platforms know more about their users than most doctors, priests, or therapists. Through relentless behavioral surveillance, they collect real-time information about users’ moods, vulnerabilities, preferences, financial stress, and even mental health crises. This data is not inert or passive. It is used to drive engagement by pushing users toward content that exploits or heightens their current state.

If the user is a minor, a person in distress, or someone financially or emotionally unstable, the risk of harm is not abstract. It is foreseeable. When a platform knowingly recommends payday loan ads to someone drowning in debt, promotes eating-disorder content to a teenager, or pushes a dangerous viral “challenge” to a 10-year-old child, it becomes an actor, not a conduit. It enters the “range of apprehension,” to borrow from Judge Cardozo’s reasoning in Palsgraf v. Long Island Railroad (one of my favorite law school cases). In tort law, foreseeability or knowledge creates duty. And here, the knowledge is detailed, intimate, and monetized. In fact, it is so detailed that we had to coin a new name for it: surveillance capitalism.

Algorithmic Recommendations as Calls to Action

Defenders of platforms often argue that recommendations are just ranked lists—neutral suggestions, not expressive or actionable speech. But in the context of harm accruing to users, I think the speech framing misses the mark. The speech argument collapses when the recommendation is designed to prompt behavior. Let’s be clear: advertisers don’t come to Google for speech; they come to Google because Google can deliver an audience. As Mr. Wanamaker said, “Half the money I spend on advertising is wasted; the trouble is I don’t know which half.” If he’d had Google, none of his money would have been wasted; that’s why Google is a trillion-dollar-market-cap company.

When TikTok serves the same deadly challenge over and over to a child, or Google delivers a “pharmacy” ad to someone seeking pain relief that turns out to be a fentanyl-laced fake pill, the recommendation becomes a call to action. That transforms the platform’s role from curator to instigator. Arguably, that’s why Google paid a $500,000,000 fine and entered a non-prosecution agreement to keep its executives out of jail. Again, nothing to do with speech.

Calls to action have long been treated differently in tort and First Amendment law. They aren’t passive; they are performative and directive. Especially when based on intimate surveillance data, these prompts and nudges are no longer mere expressions—they are behavioral engineering. When they cause harm, they should be judged accordingly. And to paraphrase the gambling bromide, they gets paid their money and they takes their chances.

Eggshell Skull Meets Platform Targeting

In tort law, the eggshell skull rule (Smith v. Leech Brain & Co. Ltd., my second-favorite law school torts case) holds that a defendant must take their victim as they find them. If a seemingly small nudge causes outsized harm because the victim is unusually vulnerable, the defendant is still liable. Platforms today know exactly who is vulnerable—because they built the profile. There’s nothing random about it. They can’t claim surprise when their behavioral nudges hit someone harder than expected.

When a child dies from a challenge they were algorithmically fed, or a financially desperate person is drawn into predatory lending through targeted promotion, or a mentally fragile person is pushed toward self-harm content, the platform can’t pretend it’s just a pipeline. It is a participant in the causal chain. And under the eggshell skull doctrine, it owns the consequences.

Beyond 230: Duty, Not Censorship

This theory of liability does not require rewriting Section 230 or reclassifying platforms as publishers, although I’m not opposed to that review; Section 230 is a legal construct that may have been relevant in 1996 but is no longer fit for purpose. Duty-from-data bypasses the speech debate entirely. What it says is simple: once you use personal data to push a behavioral outcome, you have a duty to consider the harm that may result, and the law will hold you accountable for your action. That duty flows from knowledge, very precise knowledge acquired with great effort and cost for a singular purpose: to get rich. The platform designed the targeting, delivered the prompt, and did so based on a data profile it built and exploited. It has left the realm of neutral hosting and entered the realm of actionable conduct.

Courts are beginning to catch up. The Third Circuit’s 2024 decision in Anderson v. TikTok reversed the district court and refused to grant 230 immunity where the platform’s recommendation engine was seen as its own speech. But I think the tort logic may be even more powerful than a 230 analysis based on speech: where platforms collect and act on intimate user data to influence behavior, they incur a duty of care. And when that duty is breached, they should be held liable.

The duty comes from the data. And in a world where your data is their new oil, that duty is long overdue.

David Sacks Is Learning That the States Still Matter

For a moment, it looked like the tech world’s powerbrokers had pulled it off. Buried deep in a Republican infrastructure and tax package was a sleeper provision — the so-called AI moratorium — that would have blocked states from passing their own AI laws for up to a decade. It was an audacious move: centralize control over one of the most consequential technologies in history, bypass 50 state legislatures, and hand the reins to a small circle of federal agencies and especially to tech industry insiders.

But then it collapsed.

The Senate voted 99–1 to strike the moratorium. Governors rebelled. Attorneys general sounded the alarm. Artists, parents, workers, and privacy advocates from across the political spectrum said “no.” Even hardline conservatives like Ted Cruz eventually reversed course when it came down to the final vote. The message to Big Tech and to the famous “Little Tech” was clear: the states still matter — and America’s tech elite ignore that at their peril. (“Little Tech” is the latest rhetorical deflection promoted by Big Tech, a.k.a. propaganda.)

The old Google crowd, having gotten fabulously rich off their two favorites (the DMCA farce and the Section 230 shakedown), pushed the moratorium; their fingerprints were obvious. But there’s increasing speculation that White House AI Czar and Silicon Valley Viceroy David Sacks, a PayPal alum and vocal MAGA-world player, was calling the ball. If true, that makes this defeat even more revealing.

Sacks represents something of a new breed of power-hungry tech-right influencer — part of the emerging “Red Tech” movement that claims to reject woke capitalism and coastal elitism but still wants experts to shape national policy from Silicon Valley, a chapter straight out of Philip Dru: Administrator. Sacks is tied to figures like Peter Thiel, Elon Musk, and a growing network of Trump-aligned venture capitalists. But even that alignment couldn’t save the moratorium.

Why? Because the core problem wasn’t left vs. right. It was top vs. bottom.

In 1964, Ronald Reagan’s classic speech “A Time for Choosing” warned about “a little intellectual elite in a far-distant capitol” deciding what’s best for everyone else. That warning still rings true — except now the “capitol” might just be a server farm in Menlo Park or a podcast studio in LA.

The AI moratorium was an attempt to govern by preemption and fiat, not by consent. And the backlash wasn’t partisan. It came from red states and blue ones alike — places where elected leaders still think they have the right to protect their citizens from unregulated surveillance, deepfakes, data scraping, and economic disruption.

So yes, the defeat of the moratorium was a blow to Google’s strategy of soft-power dominance. But it was also a shot across the bow for David Sacks and the would-be masters of tech populism. You can’t have populism without the people.

If Sacks and his cohort want to play a long game in AI policy, they’ll have to do more than drop ideas into the policy laundry of think tank white papers and Beltway briefings. They’ll need to win public trust, respect state sovereignty, and remember that governing by sneaky safe harbors is no substitute for legitimacy.  

The moratorium failed because it presumed America could be governed like a tech startup — from the top, at speed, with no dissent. Turns out the country is still under the impression that it has something to say about how it is governed, especially by Big Tech.