Nate Garhart, writing in Reuters, analyzes Perplexity AI’s novel—some might say bizarre—legal defense in copyright suits filed by the New York Times and the Chicago Tribune in December 2025. Rather than relying primarily on fair use, the typical defense in AI infringement cases, Perplexity instead argues it lacked “volitional conduct” sufficient for direct copyright infringement, contending that it did not “make” the infringing copies in a legally relevant sense. The defense draws on the Second Circuit’s 2008 Cartoon Network v. CSC Holdings decision, where a DVR service was not held directly liable because the user, not the service, initiated the recording of each specific work. Sound familiar? That’s one straight outta 1999. You know, the technology made me do it.
The article explains the strategic logic: eliminating direct infringement (a strict liability standard requiring no proof of intent or knowledge) would be meaningful, even if secondary liability theories survive. However, Mr. Garhart is correctly skeptical the defense will succeed, at least at the motion-to-dismiss stage. A key difficulty for Perplexity is that its system involves a far more complicated causal chain than a mere DVR: it crawls, scrapes, and copies paywalled articles, indexes them, stores them, and generates output that may track the original expression—each step reflecting deliberate system design. Hold that thought; we’ll come back to it shortly. The newspapers’ attorney, Steven Lieberman, publicly emphasized that Perplexity “does not dispute copying The Times’s journalism from behind a paywall to deliver responses to their customers in real time.” Rut roh.
The article also flags that the Second Circuit may be reluctant to extend Cartoon Network to generative systems that actively synthesize and reproduce expression, rather than merely storing or transmitting content like a DVR. Ya think? Ultimately, Mr. Garhart suggests the case may hinge more on fair use—and particularly the fourth factor of market harm that was so attractive to Judge Chhabria in the Kadrey case—because Perplexity’s output product arguably substitutes directly for paywalled news content and floods the market with infringing copies.
Why Generative AI Is Not a Passive Conduit
Mr. Garhart makes clear that Perplexity’s attempt to cast itself as a mere automated tool triggered by user prompts is fundamentally at odds with how generative AI systems actually work. There are several reasons why the “passive conduit” framing fails.
Deliberate System Architecture Embodies Volition
Unlike the DVR in Cartoon Network, where a user pressed “record” to copy a specific program they selected, Perplexity’s system was engineered to crawl restricted content, scrape it and retain it in usable form, and generate responses that substitute for the original. Mr. Garhart argues that “a company that builds a machine capable of extracting and reproducing paywalled journalism has made many volitional choices, even if those decisions occurred at the design stage rather than at the moment of each query.” The system’s architecture—its crawling, scraping, indexing, storage, and generation pipeline—is itself the volitional act, not a neutral pass-through.
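To make the design-stage point concrete, here is a deliberately toy sketch in Python. Every name and behavior in it is hypothetical—this is not Perplexity’s actual code—but it shows how the choices baked into a crawl-index-answer pipeline, long before any user query arrives, determine what gets copied and what gets served back:

```python
# Toy illustration only; all names are hypothetical, and nothing here
# reflects any real system's implementation. The point: each stage is
# a choice fixed at design time, not at the moment of a user's query.

index: dict[str, str] = {}  # design choice: retain verbatim article text

def ingest(url: str, text: str) -> None:
    # design choice: which sources to crawl, and to keep full copies
    index[url] = text

def answer(query: str) -> str:
    # design choice: retrieve stored expression and serve it back
    hits = [t for t in index.values() if query.lower() in t.lower()]
    return hits[0] if hits else "No stored coverage found."

ingest("https://example.com/story", "The Times reports that widget sales rose.")
print(answer("widget"))  # prints the stored (copied) sentence verbatim
```

Every seemingly “passive” response here is dictated by the ingestion and retention choices made upstream; the user’s query merely triggers machinery that was built to copy.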
The Grokster Inducement Framework Reinforces This Analysis
In MGM Studios, Inc. v. Grokster, Ltd. (2005), the Supreme Court held that one who distributes a device with the object of promoting its use to infringe copyright, as shown by clear expression or other affirmative steps taken to foster infringement, is liable for the resulting acts of infringement by third parties. The Court identified three particularly notable features of intent evidence:
- Targeting a known demand for infringement: Grokster and StreamCast aimed to capture former Napster users, a market defined by its appetite for pirated content.
- Failure to implement filtering or safeguards: Neither defendant developed tools to diminish infringing activity, which—while not independently sufficient—was probative of intent alongside other evidence.
- Revenue model dependent on infringement: The defendants’ advertising-based business model relied on high-volume use, which was overwhelmingly infringing.
Moreover, at each stage of Perplexity’s training pipeline, human decision-making is deeply embedded: engineers and researchers decide what content to tokenize, how to structure training data, and which model behaviors to reinforce or suppress through “reinforcement learning from human feedback” (RLHF) and other fine-tuning methods. The resulting system is curated by humans at multiple points in the typical workflow, from dataset selection and preprocessing to model alignment and quality control, meaning the outputs are not the product of a purely autonomous process but rather of layered, intentional design choices made by people, or more precisely by Perplexity.
Tokenization itself is a telling example of design choice: by selecting a tokenization scheme and deciding which corpora to process (and to spend scarce compute resources on), the system’s developers are making both editorial and commercial judgments about what material the model will learn from and be capable of reproducing. These upstream human choices further undercut the notion that the system is a passive conduit simply responding to downstream user prompts.
Importantly, these tokenization decisions are not made in a vacuum or for altruistic reasons—they are driven by the commercial imperative of delivering a product sufficiently useful that consumers will pay Perplexity for it, rather than paying the New York Times or other original publishers for their journalism. The economic logic is plain: the more effectively the system can ingest and repackage high-quality copyrighted content, the more valuable the product becomes to subscribers, and the more revenue flows to Perplexity instead of to the creators whose work fuels the system. Sound familiar?
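The tokenization point can be illustrated with a toy example (hypothetical code, not any real system’s tokenizer): the vocabulary the developers choose bounds what the model can represent, so even this lowest-level preprocessing step encodes an editorial judgment about which material matters.

```python
# Hypothetical sketch: even a toy tokenizer embodies developer choices
# about what the model can represent, learn from, and later reproduce.

def tokenize(text: str, vocab: set[str]) -> list[str]:
    # design choice: out-of-vocabulary words collapse to "<unk>",
    # so the chosen vocabulary bounds what the model can express
    return [w if w in vocab else "<unk>" for w in text.lower().split()]

vocab = {"the", "times", "reports"}  # design choice: which corpora built this vocabulary
print(tokenize("The Times reports exclusively", vocab))
# ['the', 'times', 'reports', '<unk>']
```

Real systems use far more sophisticated subword schemes, but the structural point is the same: someone decided what goes in the vocabulary, and that decision was made before any user ever typed a prompt.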
Applying Grokster’s Logic to Generative AI
Several design features of a generative AI answer engine map onto the Grokster framework, even without identical facts:
- System design that encourages or enables infringement: Perplexity was engineered to circumvent paywalls, ingest copyrighted journalism, and deliver it as synthesized answers to paying users. These are not incidental byproducts—they are Perplexity’s core value proposition. Mr. Garhart emphasizes that the system’s “architecture may embody the relevant volitional act” even if no employee selects a particular New York Times article at query time.
- Absence of meaningful safeguards: As in Grokster, the failure to prevent the reproduction of paywalled content is relevant circumstantial evidence when combined with a system designed to harvest and repackage that content. A truly passive system would not be built to bypass access restrictions; it would simply go with the flow.
- Commercial substitution model: The article highlights that when an answer engine provides a detailed summary or near-verbatim account of a paywalled article in response to a user query, it competes directly in the market for news consumption. Surely that’s obvious. The Grokster Court found it significant that the defendants’ revenue depended on infringing volume; analogously, a generative AI product’s commercial viability may depend substantially on its ability to deliver the substance of others’ copyrighted work.
The Causal Chain Is Not Broken by a User Prompt
I think Mr. Garhart’s most compelling point is that a user’s query is not the kind of discrete, volitional act that broke the causal chain in Cartoon Network. A user who types “What does the New York Times say about X?” is asking a question—not selecting a specific copyrighted work and pressing “copy” as with a DVR. The Perplexity system then selects, processes, and generates expressive content drawn from copyrighted sources because that’s how it was trained. The Grokster Court rejected the notion that an intermediary could hide behind user-initiated actions when it had built a system designed to facilitate infringement and had taken affirmative steps to encourage it.
Critically, the generative AI system’s response to a prompt is shaped by decisions made long before the user ever typed a query. Humans selected the training corpora, decided how text would be tokenized and encoded, fine-tuned the model’s outputs through iterative RLHF and other quality-control processes, and designed the retrieval and generation architecture. Each of these steps reflects purposeful human conduct—not the behavior of a neutral pipe. A system in which humans curate the inputs, architect the processing, and refine the outputs at multiple stages is, by any reasonable measure, an active participant in producing the allegedly infringing content.
In sum, generative AI systems are not passive conduits. They are purpose-built products whose design choices—what to crawl, what to tokenize, how to store it, when to reproduce it, and how to monetize it—reflect exactly the kind of upstream volition and deliberate architecture that both the Cartoon Network volitional conduct doctrine and the Grokster inducement framework are designed to capture. The fact that a user prompt triggers the final output does not absolve a company that engineered every step in the chain leading to that output.
Why did Perplexity scrape leading newspapers for content to feed its AI? Because the journalism was high-value, well-written, well-edited work, and that made it valuable to them. In short, they did it for the money.
They robbed the authors for the same famous reason Willie Sutton robbed the banks. Because that’s where the money is.
And going back to 1999 won’t save them.