The Duty Comes From the Data: Rethinking Platform Liability in the Age of Algorithmic Harm

For too long, dominant tech platforms have hidden behind Section 230 of the Communications Decency Act, claiming immunity for any harm caused by third-party content they host or promote. But as platforms like TikTok, YouTube, and Google have long since moved beyond passive hosting into highly personalized, behavior-shaping recommendation systems, the legal landscape is shifting in the personal injury context. A new theory of liability is emerging—one grounded not in speech, but in conduct. And it begins with a simple premise: the duty comes from the data.

Surveillance-Based Personalization Creates Foreseeable Risk

Modern platforms know more about their users than most doctors, priests, or therapists. Through relentless behavioral surveillance, they collect real-time information about users’ moods, vulnerabilities, preferences, financial stress, and even mental health crises. This data is not inert or passive. It is used to drive engagement by pushing users toward content that exploits or heightens their current state.

If the user is a minor, a person in distress, or someone financially or emotionally unstable, the risk of harm is not abstract. It is foreseeable. When a platform knowingly recommends payday loan ads to someone drowning in debt, promotes eating disorder content to a teenager, or pushes a dangerous viral “challenge” to a 10-year-old child, it becomes an actor, not a conduit. It enters the “range of apprehension,” to borrow from Judge Cardozo’s reasoning in Palsgraf v. Long Island Railroad (one of my favorite law school cases). In tort law, foreseeability or knowledge creates duty. And here, the knowledge is detailed, intimate, and monetized. In fact, it is so detailed that we had to coin a new name for it: surveillance capitalism.

Algorithmic Recommendations as Calls to Action

Defenders of platforms often argue that recommendations are just ranked lists—neutral suggestions, not expressive or actionable speech. But in the context of harm accruing to users, I think the speech framing misses the mark. The speech argument collapses when the recommendation is designed to prompt behavior. Let’s be clear: advertisers don’t come to Google because of speech; they come to Google because Google can deliver an audience. As John Wanamaker said, “Half the money I spend on advertising is wasted; the trouble is I don’t know which half.” If he’d had Google, none of his money would have been wasted. That’s why Google is a trillion-dollar-market-cap company.

When TikTok serves the same deadly challenge over and over to a child, or Google delivers a “pharmacy” ad to someone seeking pain relief that turns out to be a fentanyl-laced fake pill, the recommendation becomes a call to action. That transforms the platform’s role from curator to instigator. Arguably, that’s why Google paid a $500,000,000 fine and entered a non-prosecution agreement to keep its executives out of jail. Again, nothing to do with speech.

Calls to action have long been treated differently in tort and First Amendment law. Calls to action aren’t passive; they are performative and directive. Especially when based on intimate surveillance data, these prompts and nudges are no longer mere expressions—they are behavioral engineering. When they cause harm, they should be judged accordingly. And to paraphrase the gambling bromide, they get paid their money and they takes their chances.

Eggshell Skull Meets Platform Targeting

In tort law, the eggshell skull rule (Smith v. Leech Brain & Co. Ltd., my second-favorite law school torts case) holds that a defendant must take their victim as they find them. If a seemingly small nudge causes outsized harm because the victim is unusually vulnerable, the defendant is still liable. Platforms today know exactly who is vulnerable—because they built the profile. There’s nothing random about it. They can’t claim surprise when their behavioral nudges hit someone harder than expected.

When a child dies from a challenge they were algorithmically fed, or a financially desperate person is drawn into predatory lending through targeted promotion, or a mentally fragile person is pushed toward self-harm content, the platform can’t pretend it’s just a pipeline. It is a participant in the causal chain. And under the eggshell skull doctrine, it owns the consequences.

Beyond 230: Duty, Not Censorship

This theory of liability does not require rewriting Section 230 or reclassifying platforms as publishers, although I’m not opposed to that review. Section 230 is a legal construct that may have been relevant in 1996 but is no longer fit for purpose. Duty-from-data bypasses the speech debate entirely. What it says is simple: once you use personal data to push a behavioral outcome, you have a duty to consider the harm that may result, and the law will hold you accountable for your action. That duty flows from knowledge, very precise knowledge acquired at great effort and cost for a singular purpose: to get rich. The platform designed the targeting, delivered the prompt, and did so based on a data profile it built and exploited. It has left the realm of neutral hosting and entered the realm of actionable conduct.

Courts are beginning to catch up. The Third Circuit’s 2024 decision in Anderson v. TikTok reversed the district court and refused Section 230 immunity where the platform’s recommendation engine was treated as the platform’s own speech. But I think the tort logic may be even more powerful than a Section 230 analysis based on speech: where platforms collect and act on intimate user data to influence behavior, they incur a duty of care. And when that duty is breached, they should be held liable.

The duty comes from the data. And in a world where your data is their new oil, that duty is long overdue.

The Patchwork They Fear Is Accountability: Why Big AI Wants a Moratorium on State Laws

Why Big Tech’s Push for a Federal AI Moratorium Is Really About Avoiding State Investigations, Liability, and Transparency

As Congress debates the so-called “One Big Beautiful Bill Act,” one of its most explosive provisions has stayed largely below the radar: a 10-year (or 5-year, or any-year) federal moratorium on state and local regulation of artificial intelligence. Supporters frame it as a common-sense way to prevent a “patchwork” of conflicting state laws. But the real reason for the moratorium may be more self-serving—and more ominous.

The truth is, the patchwork they fear is not complexity. It’s accountability.

Liability Landmines Beneath the Surface

As has been well-documented by the New York Times and others, generative AI platforms have likely ingested and processed staggering volumes of data that implicate state-level consumer protections. This includes biometric data (like voiceprints and faces), personal communications, educational records, and sensitive metadata—all of which are protected under laws in states like Illinois (BIPA), California (CCPA/CPRA), and Texas.

If these platforms scraped and trained on such data without notice or consent, they are sitting on massive latent liability. Unlike federal laws, which are often narrow or toothless, many state statutes allow private lawsuits and statutory damages. Class action risk is not hypothetical—it is systemic. It is crucial for policymakers to have a clear understanding of where we are today with respect to the collision between AI and consumer rights, including copyright. The corrosion of consumer rights by the richest corporations in commercial history is not something that may happen in the future. Massive violations have already occurred, are occurring this minute, and will continue to occur at an increasing rate.

The Quiet Race to Avoid Discovery

State laws don’t just authorize penalties; they open the door to discovery. Once an investigation or civil case proceeds, AI platforms could be forced to disclose exactly what data they trained on, how it was retained, and whether any red flags were ignored.

This mirrors the arc of the social media addiction lawsuits now consolidated in multidistrict litigation. Platforms denied culpability for years—until internal documents showed what they knew and when. The same thing could happen here, but on a far larger scale.

Preemption as Shield and Sword

The proposed AI moratorium isn’t a regulatory timeout. It’s a firewall. By halting enforcement of state AI laws, the moratorium could prevent lawsuits, derail investigations, and shield past conduct from scrutiny.

Even worse, the Senate version conditions broadband infrastructure funding (BEAD) on states agreeing to the moratorium—an unconstitutional act of coercion that trades state police powers for federal dollars. The legal implications are staggering, especially under the anti-commandeering doctrine of Murphy v. NCAA and Printz v. United States.

This Isn’t About Clarity. It’s About Control.

Supporters of the moratorium, including senior federal officials and lobbying arms of Big Tech, claim that a single federal standard is needed to avoid chaos. But the evidence tells a different story.

States are acting precisely because Congress hasn’t. Illinois’ BIPA led to real enforcement. California’s privacy framework has teeth. Dozens of other states are pursuing legislation to respond to harms AI is already causing.

In this light, the moratorium is not a policy solution. It’s a preemptive strike.

Who Gets Hurt?
– Consumers, whose biometric data may have been ingested without consent
– Parents and students, whose educational data may now be part of generative models
– Artists, writers, and journalists, whose copyrighted work has been scraped and reused
– State AGs and legislatures, who lose the ability to investigate and enforce

Google Is an Example of Potential Exposure

Google’s former executive chairman Eric Schmidt has seemed very, very interested in writing the law for AI. For example, Schmidt worked behind the scenes for at least two years to establish US artificial intelligence policy under President Biden. Those efforts produced the “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” the longest executive order in history, which President Biden signed on October 30, 2023. In his own words during an Axios interview with Mike Allen, the Biden AI EO was signed just in time for Mr. Schmidt to present it as what he calls “bait” to the UK government—which hosted a global AI safety summit at Bletchley Park, convened by Rishi Sunak (the UK’s tech bro Prime Minister), that just happened to start on November 1, the day after President Biden signed the EO. And now look at the disaster that the UK AI proposal would be.

As Mr. Schmidt told Axios:

So far we are on a win, the taste of winning is there.  If you look at the UK event which I was part of, the UK government took the bait, took the ideas, decided to lead, they’re very good at this,  and they came out with very sensible guidelines.  Because the US and UK have worked really well together—there’s a group within the National Security Council here that is particularly good at this, and they got it right, and that produced this EO which is I think is the longest EO in history, that says all aspects of our government are to be organized around this.

Apparently, Mr. Schmidt hasn’t gotten tired of winning. Of course, President Trump rescinded the Biden AI EO, which may explain why we are now talking about a total moratorium on state enforcement—an idea that percolated at a very pro-Google shillery called the R Street Institute, apparently courtesy of one Adam Thierer. But why might Google be so interested in this idea?

Google may face particularly acute liability under state laws if it turns out that biometric or behavioral data from platforms like YouTube Kids or Google for Education were ingested into AI training sets.

These services, marketed to families and schools, collect sensitive information from minors—potentially implicating both federal protections like COPPA and more expansive state statutes. As far back as 2015, Senator Bill Nelson raised alarms about YouTube Kids, calling it “ridiculously porous” in terms of oversight and lack of safeguards. If any of that youth-targeted data has been harvested by generative AI tools, the resulting exposure is not just a regulatory lapse—it’s a landmine.

The moratorium could be seen as an attempt to preempt the very investigations that might uncover how far that exposure goes.

What Is to Be Done?

Instead of smuggling this moratorium into a must-pass bill, Congress should strip it out and hold open hearings. If there’s merit to federal preemption, let it be debated on its own. But do not allow one of the most sweeping power grabs in modern tech policy to go unchallenged.

The public deserves better. Our children deserve better.  And the states have every right to defend their people. Because the patchwork they fear isn’t legal confusion.

It’s accountability.