Google’s “AI Overviews” Draws a Formal Complaint in Germany under the EU Digital Services Act

A coalition of NGOs, media associations, and publishers in Germany has filed a formal Digital Services Act (DSA) complaint against Google’s AI Overviews, arguing the feature diverts traffic and revenue from independent media, increases misinformation risks via opaque systems, and threatens media plurality. Under the DSA, violations can carry fines up to 6% of global revenue—a potentially multibillion-dollar exposure.

The complaint claims that AI Overviews answer users’ queries inside Google, short-circuiting click-throughs to the original sources and starving publishers of ad and subscription revenues. Because users can’t see how answers are generated or verified, the coalition warns of heightened misinformation risk and erosion of democratic discourse.

Why the Digital Services Act Matters

As I understand the DSA, news publishers have four routes: (1) lodge a complaint with their national Digital Services Coordinator alleging a platform’s DSA breach, which triggers regulatory scrutiny; (2) use the platform’s dispute tools, starting with the internal complaint-handling system and then certified out-of-court dispute settlement for moderation and search-display disputes, often the fastest practical relief; (3) sue for damages in national courts for losses caused by a provider’s DSA infringement (Art. 54); or (4) act collectively, by mandating a qualified entity or through the EU Representative Actions Directive, to seek injunctions or redress (somewhat like U.S. class actions but narrower in scope).

Under the DSA, Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) are services with more than 45 million EU users (approximately 10% of the population). Once formally designated by the European Commission, they face stricter obligations than smaller platforms: conducting annual systemic risk assessments, implementing mitigation measures, submitting to independent audits, providing data access to researchers, and ensuring transparency in recommender systems and advertising. Enforcement is centralized at the Commission, with penalties up to 6% of global revenue. This matters because VLOPs like Google, Meta, and TikTok must alter core design choices that directly affect media visibility and revenue. In parallel, the European Commission and national Digital Services Coordinators retain powerful public-enforcement tools against these platforms.

As a designated Very Large Online Platform, Google faces strict duties to mitigate systemic risks, provide algorithmic transparency, and avoid conduct that undermines media pluralism. The complaint contends AI Overviews violate these requirements by replacing outbound links with Google’s own synthesized answers.

The U.S. Angle: Penske lawsuit

A Major Publisher Has Sued Google in Federal Court Over AI Overviews

On Sept. 14, 2025, Penske Media (Rolling Stone, Billboard, Variety) sued Google in D.C. federal court, alleging AI Overviews repurpose its journalism, depress clicks, and damage revenue, marking the first lawsuit by a major U.S. publisher aimed squarely at AI Overviews. The claims include a training-use allegation: that Google enriched itself by using PMC’s works to train and ground the models powering Gemini and AI Overviews, for which Penske seeks restitution and disgorgement. Penske also argues that Google abuses its search monopoly to coerce publishers: indexing is effectively tied to letting Google (a) republish and summarize their material in AI Overviews, Featured Snippets, and AI Mode, and (b) use their works to train Google’s LLMs, reducing click-through and revenues while letting Google expand its monopoly into online publishing.

Trade Groups Urged FTC/DOJ Action

The News/Media Alliance had previously asked the FTC and DOJ to investigate AI Overviews for diverting traffic and ‘misappropriating’ publishers’ investments, calling for enforcement under FTC Act §5 and Sherman Act §2.

Data Showing Traffic Harm

Industry analyses indicate material referral declines tied to AI Overviews. Digital Content Next reports Google Search referrals down 1%–25% for most member publishers over recent weeks; Digiday pegs impacts at as much as 25%. The trend feeds a broader ‘Google Zero’ concern—zero-click results displacing publisher visits.

Why Europe vs. U.S. Paths Differ

The EU’s DSA offers a procedural path to assess systemic risk and platform design choices like AI Overviews, and to levy platform-wide remedies and fines. In the U.S., the fight currently runs through private litigation (Penske) and competition/consumer-protection advocacy at the FTC and DOJ, where enforcement tools differ and take longer to mobilize.

RAG vs. Training Data Issues

AI Overviews are best understood as a Retrieval-Augmented Generation (RAG) issue. Readers will recall that RAG is probably the most direct example of verbatim copying in AI outputs. The harms arise because Google, as middleman, retrieves live publisher content and synthesizes it into an answer inside the Search Engine Results Page (SERP), reducing traffic to the sources. This is distinct from the training-data lawsuits (Kadrey, Bartz) that allege unlawful ingestion of works during model pretraining.

Kadrey: Indirect Market Harm

A RAG case like Penske’s could also be characterized as indirect market harm. Judge Chhabria’s ruling in Kadrey highlights that, for fair-use purposes under U.S. law, market harm isn’t limited to direct substitution. Factor 4 of the fair-use analysis includes foreclosure of licensing and derivative markets. For AI/search, that means reduced referrals depress ad and subscription revenue, while widespread zero-click synthesis may foreclose an emerging licensing market for summaries and excerpts. Evidence of harm includes before/after referral data, revenue deltas, and qualitative harms like brand erasure and loss of attribution. Remedies could include more prominent linking, revenue-sharing, compliance with robots/opt-outs, and provenance disclosures.

I like them RAG cases.

The Essential Issue is Similar in EU and US

Whether in Brussels or Washington, the core dispute is very similar: Who captures the value of journalism in an AI-mediated search world? Germany’s DSA complaint and Penske’s U.S. lawsuit frame twin fronts of a larger conflict—one about control of distribution, payment for content, and the future of a pluralistic press. Not to mention the usual free-riding and competition issues swirling around Google as it extracts rents by inserting itself into places it’s not wanted.

How an AI Moratorium Would Preclude Penske’s Lawsuit

Many “AI moratorium” proposals function as broad safe harbors with preemption. A moratorium to benefit AI and pick national champions was the subject of an IP Subcommittee hearing on September 18. If Congress enacted a moratorium that (1) expressly immunizes core AI practices (training, grounding, and SERP-level summaries), (2) preempts overlapping state claims, and (3) channels disputes into agency processes with exclusive public enforcement, it would effectively close the courthouse door to private suits like Penske and make the US more like Europe without the enforcement apparatus. Here’s how:

Express immunity for covered conduct. If the statute declares that using publicly available content for training and for retrieval-augmented summaries in search is lawful during the moratorium, Penske’s core theory (RAG substitution plus training use) loses its predicate.
No private right of action / exclusive public enforcement. Limiting enforcement to the FTC/DOJ (or a designated tech regulator) would bar private plaintiffs from seeking damages or injunctions over covered AI conduct.
Antitrust carve-out or agency preclearance. Congress could provide that covered AI practices (AI Overviews, featured snippets powered by generative models, training/grounding on public web content) cannot form the basis for Sherman/Clayton liability during the moratorium, or must first be reviewed by the agency—undercutting Penske’s §1/§2 counts.
Primary-jurisdiction plus statutory stay. Requiring first resort to the agency with a mandatory stay of court actions would pause (or dismiss) Penske until the regulator acts.
Preemption of state-law theories. A preemption clause would sweep in state unjust-enrichment and consumer-protection claims that parallel the covered AI practices.
Limits on injunctive relief. Barring courts from enjoining covered AI features (e.g., SERP-level summaries) and reserving design changes to the agency would eliminate the centerpiece remedy Penske seeks.
Potential retroactive shield. If drafted to apply to past conduct, a moratorium could moot pending suits by deeming prior training/RAG uses compliant for the moratorium period.

A moratorium with safe harbors, preemption, and agency-first review would either stay, gut, or bar Penske’s antitrust and unjust-enrichment claims—reframing the dispute as a regulatory matter rather than a private lawsuit. Want to bet that White House AI Viceroy David Sacks will be sitting in judgment?

Denmark’s Big Idea: Protect Personhood from the Blob With Consent First and Platform Duty Built In

Denmark has given the rest of us a simple, powerful starting point: protect the personhood of citizens from the blob—the borderless slurry of synthetic media that can clone your face, your voice, and your performance at scale. Crucially, Denmark isn’t trying to turn name‑image‑likeness into a mini‑copyright. It’s saying something more profound: your identity isn’t a “work”; it’s you. It’s what is sometimes called “personhood.” That framing changes everything. It’s not commerce, it’s a human right.

The Elements of Personhood

Personhood treats a human being as a moral subject, not a piece of content. For example, the European Court of Human Rights reads Article 8 ECHR (“private life”) to include personal identity (name, integrity of identity, and the like), protecting individual identity against unjustified interference. This is, of course, anathema to Silicon Valley, but the world takes a different view.

In fact, Denmark’s proposal echoes the Universal Declaration of Human Rights. It starts with dignity (Art. 1) and recognition of each person before the law (Art. 6), and it squarely protects private life, honor, and reputation against synthetic impersonation (Art. 12). It balances freedom of expression (Art. 19) with narrow, clearly labeled carve-outs, and it respects creators’ moral and material interests (Art. 27(2)). Most importantly, it delivers an effective remedy (Art. 8): a consent-first rule backed by provenance and cross-platform stay-down, so individuals aren’t forced into DMCA-style learned helplessness.

Why does this matter? Because the moment we call identity or personhood a species of copyright, platforms will reach for a familiar toolbox—quotation, parody, transient copies, text‑and‑data‑mining (TDM)—and claim exceptions to protect them from “data holders.” That’s bleed‑through: the defenses built for expressive works ooze into an identity context where they don’t belong. The result is an unearned permission slip to scrape faces and voices “because the web is public.” Denmark points us in the opposite direction: consent or it’s unlawful. Not “fair use,” not “lawful access,” not “industry custom,” not “data profile.” Consent. Pretty easy concept. It’s one of the main reasons tech executives keep their kids away from cell phones and social media.

Not Replicating the Safe Harbor Disaster

Think about how we got here. The first generation of the internet scaled by pushing risk downstream with a portfolio of safe harbors like the God-awful DMCA and Section 230 in the US. Platforms insisted they deserved blanket liability shields because they were special, mere “neutral pipes,” a claim no one believed then and no one believes now. These massive safe harbors hardened into a business model that likely added billions to the FAANG bottom line. We taught millions of rightsholders and users to live with learned helplessness: file a notice, watch copies multiply, rinse and repeat. Many users did not know they could even do that much, and frankly still may not. That DMCA‑era whack‑a‑mole turned into a faux license, a kind of “catch me if you can” bargain where exhaustion replaces consent.

Denmark’s New Protection of Personhood for the AI Era

Denmark’s move is a chance to break that pattern, if we resist the gravitational pull back to copyright. A fresh right of identity (a “sui generis” right, for the Latin fans) would not be subject to copyright or database exceptions: no fair use, no DMCA, no TDM carve-outs. In plain English: “publicly available” is not permission to clone your face, train on your voice, or fabricate your performance. Or your children’s, either. If an AI platform wants to use identity, it asks first. If it doesn’t ask, it doesn’t get to do it, and it doesn’t get to keep the model it trained on it. And as in many other areas, children can’t consent.

That legal foundation unlocks the practical fix creators and citizens actually need: stay‑down across platforms, not endless piecemeal takedowns. Imagine a teacher discovers a convincing deepfake circulating on two social networks and a messaging app. If we treat that deepfake as a copyright issue under the old model, she sends three notices, then five, then twelve. Week two, the video reappears with a slight change. Week three, it’s re‑encoded, mirrored, and captioned. The message she receives under a copyright regime is “you can never catch up, so why not just give up?” Which, of course, in the world of Silicon Valley monopoly rents, is the plan. That’s the learned helplessness Denmark gives us permission to reject.

Enforcing Personhood

How would the new plan work? First, we treat realistic digital imitations of a person’s face, voice, or performance as illegal absent consent, with only narrow, clearly labeled carve‑outs for genuine public‑interest reporting (no children, no false endorsement, no biometric spoofing risk, provenance intact). That’s the rights architecture: bright lines and human‑centered. Hence, “personhood.”

Second, we wire enforcement to succeed at internet scale. The way out of whack‑a‑mole is a cross‑platform deepfake registry operated with real governance. A deepfake registry doesn’t store videos; it stores non‑reversible fingerprints: exact file hashes for byte‑for‑byte matches and robust, perceptual fingerprints for the variants (different encodes, crops, borders). For audio, we use acoustic fingerprints; for video, scene and frame signatures. These markers will evolve, and so should the registry. One confirmed case becomes a family of identifiers that platforms check at upload and on re‑share. The first takedown becomes the last.
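To make the two-fingerprint idea concrete, here is a minimal sketch in Python. It is illustrative only: the `Registry` class, the toy `average_hash`, and the Hamming-distance threshold are my own hypothetical constructions, not any real registry’s design. A production system would compute perceptual fingerprints with a purpose-built algorithm (e.g., PDQ or pHash) over decoded frames, and would run as governed, audited infrastructure rather than an in-memory set.

```python
import hashlib

def exact_fingerprint(data: bytes) -> str:
    """Byte-for-byte identifier: SHA-256 of the raw file."""
    return hashlib.sha256(data).hexdigest()

def average_hash(pixels) -> int:
    """Toy perceptual fingerprint (average hash) over a small grayscale grid.
    Real systems decode and downscale the media first; here the grid is given.
    One bit per pixel: 1 if the pixel is brighter than the grid's average."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    return sum((1 << i) for i, p in enumerate(flat) if p > avg)

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

class Registry:
    """Stores fingerprints only, never the media itself."""
    def __init__(self, threshold: int = 5):
        self.exact = set()        # file hashes: byte-for-byte matches
        self.perceptual = []      # robust fingerprints: re-encodes, crops, borders
        self.threshold = threshold

    def register(self, data: bytes, pixels) -> None:
        """One confirmed case becomes a family of identifiers."""
        self.exact.add(exact_fingerprint(data))
        self.perceptual.append(average_hash(pixels))

    def matches(self, data: bytes, pixels) -> bool:
        """Check an upload: exact hash first, then near-duplicate fingerprints."""
        if exact_fingerprint(data) in self.exact:
            return True
        h = average_hash(pixels)
        return any(hamming(h, known) <= self.threshold for known in self.perceptual)
```

The design choice the sketch illustrates is why two fingerprint types are needed: the exact hash catches identical re-uploads cheaply, while the perceptual fingerprint survives the trivial alterations (re-encoding, small edits) that defeat hash-only matching.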

Third, we pair that with provenance by default. Provenance isn’t a license; it’s evidence. When credentials are present, content is easier to authenticate, so there is an incentive to use them. Provenance is the rebar that turns legal rules into reliable, automatable processes. But the absence of credentials doesn’t mean a free‑for‑all.

Finally, we put the onus where it belongs—on platforms. Europe’s Digital Services Act has, at least in theory, already replaced “willful blindness” with “notice‑and‑action” duties and oversight for very large platforms. Denmark’s identity right gives citizens a clear, national‑law basis to say: “This is illegal content—remove it and keep it down.” The platform’s job isn’t to litigate fair use in the abstract or hide behind TDM. It’s to implement upload checks, preserve provenance, run repeat‑offender policies, and prevent recurrences. If a case was verified yesterday, it shouldn’t be back tomorrow with a 10‑pixel border or other trivial alteration to defeat the rules.
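As a sketch of what an upload-time stay-down duty could look like in practice, the toy gate below checks every upload against fingerprints of verified cases and applies a repeat-offender policy. Everything here is hypothetical: the function name, the strike limit, and the in-memory stores are mine for illustration; a real platform would query a governed registry service (including perceptual matching, not just exact hashes) and keep a persistent, appealable enforcement record.

```python
import hashlib
from collections import defaultdict

# Fingerprints of verified cases, fed from a shared registry (illustrative only).
BLOCKED_HASHES = set()

# Repeat-offender counter per uploader; a real system would persist and audit this.
strikes = defaultdict(int)
STRIKE_LIMIT = 3  # hypothetical policy: third strike suspends the account

def on_upload(uploader: str, media: bytes) -> str:
    """Gate an upload against verified-case fingerprints before publishing.
    Returns the action taken: 'published', 'blocked', or 'account-suspended'."""
    digest = hashlib.sha256(media).hexdigest()
    if digest in BLOCKED_HASHES:
        strikes[uploader] += 1
        if strikes[uploader] >= STRIKE_LIMIT:
            return "account-suspended"
        return "blocked"
    return "published"
```

The point of the sketch is the ordering: the check happens before publication, so a verified case never reappears and triggers a new round of takedown notices, and the repeat-offender count turns individual blocks into an escalating policy.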

Some will ask: what about creativity and satire? The answer is what it has always been in responsible speech law: more speech, not fake speech. If you’re lampooning a politician with clearly labeled synthetic speech, no implied endorsement, provenance intact, and no risk of biometric spoofing or fraud, you have defenses. The point isn’t to smother satire; it’s to end the pretense that satire requires open season on the biometric identities of private citizens and working artists.

Others will ask: what about research and innovation? Good research runs on consent, especially human subject research (see 45 C.F.R. part 46). If a lab wants to study voice cloning, it recruits consenting participants, documents scope and duration, and keeps data and models in controlled settings. That’s science. What isn’t science is scraping the voices of a country’s population “because the web is public,” then shipping a model that anyone can use to spoof a bank’s call‑center checks. A no‑TDM‑bleed‑through clause draws that line clearly.

And yes, edge cases exist. There will be appeals, mistakes, and hard calls at the margins. That is why the registry must be governed—with identity verification, transparent logs, fast appeals, and independent oversight. Done right, it will look less like a black box and more like infrastructure: a quiet backbone that keeps people safe while allowing reporting and legitimate creative work to thrive.

If Denmark’s spark is to become a firebreak, the message needs to be crisp:

— This is not copyright. Identity is a personal right; copyright defenses don’t apply.

— Consent is the rule. A deepfake without consent is unlawful.

— No TDM bleed‑through. “Publicly available” does not equate to permission to clone or train.

— Provenance helps prove, not permit. Keep credentials intact; stripping them has consequences.

— Stay‑down, cross‑platform. One verified case should not become a thousand reuploads.

That’s how you protect personhood from the blob. By refusing to treat humans like “content,” by ending the faux‑license of whack‑a‑mole, and by making platforms responsible for prevention, not just belated reaction. Denmark has given us the right opening line. Now we should finish the paragraph: consent or block. Label it, prove it, or remove it.