Since It’s 1999: What MGM v. Grokster Teaches Us About Perplexity’s Bizarre Infringement Defense

Nate Garhart, writing in Reuters, analyzes Perplexity AI’s novel (some might say bizarre) legal defense in copyright suits filed by the New York Times and the Chicago Tribune in December 2025. Rather than relying primarily on fair use, the typical defense in AI infringement cases, Perplexity instead argues that it lacked the “volitional conduct” required for direct copyright infringement, contending that it did not “make” the infringing copies in a legally relevant sense. The defense draws on the Second Circuit’s 2008 Cartoon Network v. CSC Holdings decision, where a DVR service was held not directly liable because the user, not the service, initiated the recording of each specific work. Sound familiar? That’s one straight outta 1999. You know, the technology made me do it.

Why Generative AI Is Not a Passive Conduit

Mr. Garhart makes clear that Perplexity’s attempt to cast itself as a mere automated tool triggered by user prompts is fundamentally at odds with how generative AI systems actually work. There are several reasons why the “passive conduit” framing fails.

Deliberate System Architecture Embodies Volition

The Grokster Inducement Framework Reinforces This Analysis

The Court identified three particularly notable features of intent evidence:

  1. Aiming to satisfy a known demand for infringement: The defendants targeted former Napster users, a market known for infringing use.

  2. Failure to implement filtering or safeguards: Neither defendant developed tools to diminish infringing activity, which, while not independently sufficient, was probative of intent alongside other evidence.

  3. A business model dependent on high-volume use: The defendants sold advertising space, so their revenue turned on high-volume use of the software, the vast majority of it infringing.

Moreover, at each stage of Perplexity’s training pipeline, human decision-making is deeply embedded: engineers and researchers decide what content to tokenize, how to structure training data, and which model behaviors to reinforce or suppress through “reinforcement learning from human feedback” (RLHF) and other fine-tuning methods. The resulting system is curated by humans at multiple points in the typical workflow, from dataset selection and preprocessing to model alignment and quality control. The outputs are therefore not the product of a purely autonomous process, but of layered, intentional design choices made by people, or, more precisely, by Perplexity.

Tokenization itself is a telling example of design choice: by selecting a tokenization scheme and deciding which corpora to process (and to spend scarce compute resources on), the system’s developers are making both editorial and commercial judgments about what material the model will learn from and be capable of reproducing. These upstream human choices further undercut the notion that the system is a passive conduit simply responding to downstream user prompts.
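For readers unfamiliar with the mechanics, the point can be made concrete with a toy sketch. This is an illustration of the general principle only, not Perplexity’s actual pipeline: in the simplified word-level tokenizer below, the vocabulary (and hence what the system can later represent) is entirely determined by which corpora its developers chose to process. All names here are hypothetical.

```python
def build_vocab(corpora):
    """Build a word-to-id vocabulary.

    The vocabulary is fixed entirely by upstream corpus selection --
    a human, editorial choice made long before any user prompt.
    """
    vocab = {}
    for doc in corpora:
        for word in doc.lower().split():
            if word not in vocab:
                vocab[word] = len(vocab)
    return vocab


def tokenize(text, vocab, unk=-1):
    """Map text to token ids; words outside the chosen corpora become unknowns."""
    return [vocab.get(w, unk) for w in text.lower().split()]


# The developers, not the end user, decided which text the system learned from.
chosen_corpora = ["the court held that intent matters"]
vocab = build_vocab(chosen_corpora)

print(tokenize("the court held", vocab))      # in-vocabulary: [0, 1, 2]
print(tokenize("unrelated material", vocab))  # unknown tokens: [-1, -1]
```

Real systems use far more sophisticated subword schemes, but the design logic is the same: what the model can represent, and later reproduce, is downstream of human corpus-selection choices.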

Importantly, these tokenization decisions are not made in a vacuum or for altruistic reasons. They are driven by the commercial imperative of delivering a product useful enough that consumers will pay Perplexity for it, rather than paying the New York Times or other original publishers for their journalism. The economic logic is plain: the more effectively the system can ingest and repackage high-quality copyrighted content, the more valuable the product becomes to subscribers, and the more revenue flows to Perplexity instead of to the creators whose work fuels the system. Sound familiar?

Applying Grokster’s Logic to Generative AI

Several design features of a generative AI answer engine map onto the Grokster framework, even without identical facts:

  1. Targeting a known source of demand: Perplexity scraped leading newspapers precisely because their high-value journalism is what makes the product worth paying for.

  2. Purposeful system architecture: Decisions about what to crawl, what to tokenize, and which model behaviors to reinforce are deliberate design choices, not neutral automation.

  3. A revenue model built on the copied content: Subscription dollars flow to Perplexity rather than to the publishers whose work the system repackages.

The Causal Chain Is Not Broken by a User Prompt

I think Mr. Garhart’s most compelling point is that a user’s query is not the kind of discrete, volitional act that broke the causal chain in Cartoon Network. A user who types “What does the New York Times say about X?” is asking a question, not selecting a specific copyrighted work and pressing “copy” as with a DVR. The Perplexity system then selects, processes, and generates expressive content drawn from copyrighted sources because that’s how it was trained. The Grokster Court rejected the notion that intermediaries like Perplexity could hide behind user-initiated actions when those intermediaries had built systems designed to facilitate infringement and had taken affirmative steps to encourage it.

Critically, the generative AI system’s response to a prompt is shaped by decisions made long before the user ever typed a query. Humans selected the training corpora, decided how text would be tokenized and encoded, fine-tuned the model’s outputs through iterative RLHF and other quality-control processes, and designed the retrieval and generation architecture. Each of these steps reflects purposeful human conduct—not the behavior of a neutral pipe. A system in which humans curate the inputs, architect the processing, and refine the outputs at multiple stages is, by any reasonable measure, an active participant in producing the allegedly infringing content.

In sum, generative AI systems are not passive conduits. They are purpose-built products whose design choices—what to crawl, what to tokenize, how to store it, when to reproduce it, and how to monetize it—reflect exactly the kind of upstream volition and deliberate architecture that both the Cartoon Network volitional conduct doctrine and the Grokster inducement framework are designed to capture. The fact that a user prompt triggers the final output does not absolve a company that engineered every step in the chain leading to that output.

Why did Perplexity scrape leading newspapers for content to feed its AI? Because it was high-value, well-written, well-edited journalism, and it was valuable to them. In short, they did it for the money.

They robbed the authors for the same famous reason Willie Sutton robbed banks: because that’s where the money is.

And going back to 1999 won’t save them.