The Duty Comes From the Data: Rethinking Platform Liability in the Age of Algorithmic Harm

For too long, dominant tech platforms have hidden behind Section 230 of the Communications Decency Act, claiming immunity for any harm caused by third-party content they host or promote. But as platforms like TikTok, YouTube, and Google long ago moved beyond passive hosting into highly personalized, behavior-shaping recommendation systems, the legal landscape is shifting in the personal injury context. A new theory of liability is emerging—one grounded not in speech, but in conduct. And it begins with a simple premise: the duty comes from the data.

Surveillance-Based Personalization Creates Foreseeable Risk

Modern platforms know more about their users than most doctors, priests, or therapists. Through relentless behavioral surveillance, they collect real-time information about users’ moods, vulnerabilities, preferences, financial stress, and even mental health crises. This data is not inert or passive. It is used to drive engagement by pushing users toward content that exploits or heightens their current state.

If the user is a minor, a person in distress, or someone financially or emotionally unstable, the risk of harm is not abstract. It is foreseeable. When a platform knowingly recommends payday loan ads to someone drowning in debt, promotes eating disorder content to a teenager, or pushes a dangerous viral “challenge” to a 10-year-old child, it becomes an actor, not a conduit. It enters the “range of apprehension,” to borrow from Judge Cardozo’s reasoning in Palsgraf v. Long Island Railroad (one of my favorite law school cases). In tort law, foreseeability or knowledge creates duty. And here, the knowledge is detailed, intimate, and monetized. In fact, it is so detailed that we had to coin a new name for it: surveillance capitalism.

Algorithmic Recommendations as Calls to Action

Defenders of platforms often argue that recommendations are just ranked lists—neutral suggestions, not expressive or actionable speech. But when harm accrues to users, for whatever reason, I think the speech framing misses the mark. The speech argument collapses when the recommendation is designed to prompt behavior. Let’s be clear: advertisers don’t come to Google because of speech, they come to Google because Google can deliver an audience. As Mr. Wanamaker said, “Half the money I spend on advertising is wasted; the trouble is I don’t know which half.” If he’d had Google, none of his money would have been wasted–that’s why Google is a trillion-dollar-market-cap company.

When TikTok serves the same deadly challenge over and over to a child, or Google delivers a “pharmacy” ad to someone seeking pain relief that turns out to be a fentanyl-laced fake pill, the recommendation becomes a call to action. That transforms the platform’s role from curator to instigator. Arguably, that’s why Google paid a $500,000,000 fine and entered a non-prosecution agreement to keep its executives out of jail. Again, nothing to do with speech.

Calls to action have long been treated differently in tort and First Amendment law. Calls to action aren’t passive; they are performative and directive. Especially when based on intimate surveillance data, these prompts and nudges are no longer mere expressions—they are behavioral engineering. When they cause harm, they should be judged accordingly. And to paraphrase the gambling bromide, they gets paid their money and they takes their chances.

Eggshell Skull Meets Platform Targeting

In tort law, the eggshell skull rule (Smith v. Leech Brain & Co. Ltd., my second favorite law school tort case) holds that a defendant must take their victim as they find them. If a seemingly small nudge causes outsized harm because the victim is unusually vulnerable, the defendant is still liable. Platforms today know exactly who is vulnerable—because they built the profile. There’s nothing random about it. They can’t claim surprise when their behavioral nudges hit someone harder than expected.

When a child dies from a challenge they were algorithmically fed, or a financially desperate person is drawn into predatory lending through targeted promotion, or a mentally fragile person is pushed toward self-harm content, the platform can’t pretend it’s just a pipeline. It is a participant in the causal chain. And under the eggshell skull doctrine, it owns the consequences.

Beyond 230: Duty, Not Censorship

This theory of liability does not require rewriting Section 230 or reclassifying platforms as publishers, although I’m not opposed to that review. Section 230 is a legal construct that may have been relevant in 1996 but is no longer fit for purpose. The duty-from-data theory bypasses the speech debate entirely. What it says is simple: once you use personal data to push a behavioral outcome, you have a duty to consider the harm that may result, and the law will hold you accountable for your action. That duty flows from knowledge—very precise knowledge that is acquired with great effort and cost for a singular purpose: to get rich. The platform designed the targeting, delivered the prompt, and did so based on a data profile it built and exploited. It has left the realm of neutral hosting and entered the realm of actionable conduct.

Courts are beginning to catch up. The Third Circuit’s 2024 decision in Anderson v. TikTok reversed the district court and refused to grant Section 230 immunity where the platform’s recommendation engine was seen as its own speech. But I think the tort logic may be even more powerful than a Section 230 analysis based on speech: where platforms collect and act on intimate user data to influence behavior, they incur a duty of care. And when that duty is breached, they should be held liable.

The duty comes from the data. And in a world where your data is their new oil, that duty is long overdue.

What the Algocrats Want You to Believe

There are five key assumptions that support the streamer narrative, and we will look at each in turn. Today we’ll assess assumption #1–streamers are not in the music business, but they want you to believe the opposite.

Assumption 1:  Streamers Are In the Music Business

Streamers like Spotify, TikTok and YouTube are not in the music business.  They are in the data business.  Why?  So they can monetize the fans that you drive to them.

These companies make extensive use of algorithms and artificial intelligence in their business, especially to sell targeted advertising.  This has a direct impact on your ability to compete with enterprise playlists and fake tracks–or what you might call “decoy footprints”–as identified by Liz Pelly’s exceptional journalism in her new book (did I say it’s on sale now?).

Notably, while Spotify artificially capped its subscription rates for over ten years in order to convince Wall Street of its growth story, the company definitely did not cap its advertising rates, which are based on an auction model like YouTube’s.  Like YouTube, Spotify collects emotional data (analyzing a user’s social media posts), demographics (age, gender, location, geofencing), behavioral data (listening habits, interests), and contextual data (serving ads in relevant moments like breakfast, lunch, dinner).  They also use geofencing to target users by region, city, postal code, and even Designated Market Area (DMA).  My bet is that they can tell if you’re looking at men’s suits in ML Liddy’s (San Angelo or Ft. Worth).

Why the snooping?  They do it to monetize your fans.  Sometimes they break the law, as when Swedish authorities fined Spotify $5.5 million for violating Europe’s data protection laws.

They’ll also tell you that streamers are all about introducing fans to new music, or what they call “discovery.”  The truth is that they could just as easily be introducing you to a new brand of Spam.  “Discovery” is just a data application for the thousands of employees of these companies who form the algocracy and who make far more money on average than any songwriter or musician does.  As Maria Schneider anointed the algocracy on her Pulitzer Prize finalist album of the same name, these are the Data Lords.  And I gather from Liz Pelly’s book that it’s starting to look like “discovery” is just another form of payola behind the scenes.

It also must be said that these algocrats tend to run together, which makes any bright line between the companies harder to define.  For example, Spotify has phased out owning data centers and migrated its extensive data operations to the Google Cloud Platform, which means Spotify is arguably entirely dependent on Google for a significant part of its data business.  Yes, the dominant music streaming platform Spotify collaborates with the adjudicated monopolist Google for its data monetization operations.  Not to mention the Meta pixel class action controversy: “It’s believed that Spotify may have installed a tracking tool on its website called the Meta pixel that can be used to gather data about website visitors and share it with Meta. Specifically, [attorneys] suspect that Spotify may have used the Meta pixel to track which videos its users have watched on Spotify.com and send that information to Meta along with each person’s Facebook ID.”

And remember, Spotify doesn’t allow AI training on the music and metadata on its platform.  

Right. That’s the good news.

@SenThomTillis and Other Members of Congress Question ContentID–Again

Content ID has a lot of potential and in many respects is similar to the SNOCAP audio fingerprinting application from 2005–very similar.  Although if the SNOCAP team got back together using current technology, that tool could be much more broadly applied, including to search.  Which makes me ask why Google isn’t doing the same with its endless resources.

Here’s an excerpt from the Members’ letter about Content ID:

Tillis Letter

Read the letter here

Facebook Outage Reveals People Still Read the News Other Ways–Would a YouTube Outage Reveal People Still Listen to Music?

I have often said that if I were able to persuade the entire entertainment industry to devote say 10% of their marketing spend to aardvark.com, then aardvark.com could be as big as YouTube.  This, of course, is an aspirational statement that doesn’t take into account how Google would react or how Google games search results, but you get the idea.

Somehow YouTube has managed to convince our marketing folk that they just can’t get along without the views and likes.  But is that really true?  Would people listen to music somewhere besides YouTube if YouTube weren’t there?

Josh Schwartz, writing at Nieman Lab, gives us some insight into a somewhat analogous situation with Facebook and news sites:

At Chartbeat, we got a glimpse into that on August 3, 2018, when Facebook went down for 45 minutes and traffic patterns across the web changed in an instant. What did people do? According to our data, they went directly to publishers’ mobile apps and sites (as well as to search engines) to get their information fix. This window into consumer behavior reflects broader changes we see taking hold this year around content discovery, particularly on mobile.

So when YouTube tries to tell us that we can’t get along without them, which is definitely the implication of Google’s most recent charm offensive in the European Parliament, it may not even be a close call.  Particularly when you consider the downside from low royalties, unchecked stream ripping and YouTube’s corrosive safe harbor practices.

Fans found music they loved before YouTube and they will after YouTube, just like they did after Tower Records–and Tower Records didn’t spy on them.  And that’s what the Chartbeat research showed about news sites after the Facebook outage:

Key data points show that when Facebook went down, referrals to news sites fell, as expected — but other activity more than made up for it.

  • Direct traffic to publishers’ websites increased 11 percent, while traffic to publishers’ mobile apps soared 22 percent.

  • Search referral traffic to publishers was also up 8 percent.

  • Surprisingly, there was a net total traffic increase of 2.3 percent — meaning that the number of pages consumed across the web spiked upward in this timeframe.

What if it turned out that YouTube needed us more than we need YouTube?