AI’s Legal Defense Team Looks Familiar — Because It Is

If you feel like you’ve seen this movie before, you have.

Back in the 2003-ish runup to the 2005 MGM Studios, Inc. v. Grokster, Ltd. Supreme Court case, I met with the founder of one of the major p2p platforms in an effort to get him to go legal. I reminded him that he knew there was all kinds of bad stuff that got uploaded to his platform. However much he denied it, he was filtering it out, and he was able to do that because he had control over the content that he (and all his cohorts) denied he had.

I reminded him that if this case ever went bad, someone was going to invade his space and find out exactly what he was up to. Even though the whole distributed p2p model (unlike Napster, by the way) was built both to avoid knowledge and to run like a perpetual motion machine, there was going to come a day when none of that legal advice would matter. Within a few months the platform shut down, not because he didn’t want to go legal, but because he couldn’t, at least not without actually devoting himself to respecting other people’s rights.

Everything Old is New Again

Back in the early 2000s, peer-to-peer (P2P) piracy platforms claimed they weren’t responsible for the illegal music and videos flooding their networks. Today, AI companies claim they don’t know what’s in their training data. The defense is essentially the same: “We’re just the neutral platform. We don’t control the content.”  It’s that distorted view of the DMCA and Section 230 safe harbors that put many lawyers’ children through prep school, college and graduate school.

But just like with Morpheus, eDonkey, Grokster, and LimeWire, everyone knew that was BS because the evidence said otherwise — and here’s the kicker: many of the same lawyers are now running essentially the same playbook to defend AI giants.

The P2P Parallel: “We Don’t Control Uploads… Except We Clearly Do”

In the 2000s, platforms like Kazaa and LimeWire were like my little buddy: magically, they never had illegal pornography or extreme violence available to consumers, they prioritized popular music and movies, and they filtered out the worst of the web.

That selective filtering made it clear: they knew what was on their networks. It wasn’t even a question of “should have known”; they actually knew, and they did it anyway. Courts caught on.

In Grokster, the Supreme Court sidestepped the hosting issue and essentially said that if you design a platform with the intent to enable infringement, you’re liable.

The Same Playbook in the AI Era

Today’s AI platforms — OpenAI, Anthropic, Meta, Google, and others — essentially argue:
“Our model doesn’t remember where it learned [fill in the blank]. It’s just statistics.”

But behind the curtain, they:
– Run deduplication tools to avoid redundant copies of, for example, copyrighted books
– Filter out NSFW or toxic content
– Choose which datasets to include and exclude
– Fine-tune models to align with somebody’s social norms or optics

This level of control shows they’re not ignorant — they’re deflecting liability just like they did with p2p.

Déjà Vu — With Many of the Same Lawyers

Many of the same law firms that defended Grokster, Kazaa, and other P2P pirate defendants, as well as some of the ISPs, are now representing AI companies. And the AI companies are very often some (not all, but some) of the same ones that have been screwing us on the DMCA and the like for the last 25 years. You’ll see familiar names, all of whom have done their best to destroy the creative community for big, big bucks in litigation and lobbying billable hours while filling their pockets to overflowing.

This legal cadre pioneered the “willful blindness” defense and is now polishing it up for AI, hoping courts haven’t learned the lesson. And judging…no pun intended…from some recent rulings, maybe they haven’t.

Why do they drive their clients into a position where they pose an existential threat to all creators?  Do they not understand that they are creating a vast community of humans that really, truly, hate their clients?  I think they do understand, but there is a corresponding hatred of the super square Silicon Valley types who hate “Hollywood” right back.

Because, you know, information wants to be free—unless they are selling it.  And your data is their new oil. They apply this “ethic” not just to data, but to everything: books, news, music, images, and voice. Copyright? A speed bump. Terms of service? A suggestion. Artist consent? Optional.  Writing a song is nothing compared to the complexities of Biggest Tech.

Why do they do this?  OCPD Much?

Because control over training data is strategic dominance and these people are the biggest control freaks that mankind has ever produced.  They exhibit persistent and inflexible patterns of behavior characterized by an excessive need to control people, environments, and outcomes, often associated with traits of obsessive-compulsive personality disorder.  

So empathy will get you nowhere with these people, although their narcissism allows them to believe that they are extremely empathetic.  Pathetic, yes, empathetic, not so much.  

Pay No Attention to that Pajama Boy Behind the Curtain

The driving force behind AI is very similar to the driving force behind the Internet.   If pajama boy can harvest the world’s intellectual property and use it to train his proprietary AI model, he now owns a simulation of the culture he is not otherwise part of, and not only can he monetize it without sharing profits or credit, he can deny profits and credit to the people who actually created it.

So just like the heyday of Pirate Bay, Grokster & Co.  (and Daniel Ek’s pirate incarnation) the goal isn’t innovation. The goal is control over language, imagery, and the markets that used to rely on human creators.  This should all sound familiar if you were around for the p2p era.

Why This Matters

Like the p2p platforms, it’s just not believable that the AI companies don’t know what’s in their models. They may build their chatbot interface so that the public can’t ask the chatbot to blow the whistle on the platform operator, but that doesn’t mean the company can’t tell what it is training on. These operators have to be able to know what’s in the training materials and manipulate that data daily.

They fingerprint, deduplicate, and sanitize their datasets. How else could they avoid having multiple copies of the same book, which would be a compute nightmare? They store “embeddings” in a way that lets them optimize their AI to use only the best copy of any particular book. They control the pipeline.
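And the fingerprint-and-dedupe step is not exotic. As a toy sketch only (the file names and normalization rule here are invented for illustration, and production pipelines use far more elaborate fuzzy-matching tools like MinHash), exact-duplicate removal can be as simple as hashing normalized text and keeping the first copy:

```python
import hashlib

def fingerprint(text: str) -> str:
    """Return a stable fingerprint of a document's normalized text."""
    # Collapse whitespace and case so trivially different copies match.
    normalized = " ".join(text.lower().split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def deduplicate(corpus: dict[str, str]) -> dict[str, str]:
    """Keep one copy of each distinct document, keyed by source name."""
    seen: set[str] = set()
    kept: dict[str, str] = {}
    for source, text in corpus.items():
        fp = fingerprint(text)
        if fp not in seen:  # first copy wins; later duplicates are dropped
            seen.add(fp)
            kept[source] = text
    return kept

# Two scans of the same book, differing only in whitespace and case,
# collapse to a single retained copy.
corpus = {
    "book_scan_a.txt": "Call me  Ishmael.",
    "book_scan_b.txt": "call me ishmael.",
    "news_article.txt": "AI firms face new lawsuits.",
}
print(sorted(deduplicate(corpus)))
```

The point isn’t the particular hash; it’s that a platform running any pass like this necessarily reads, identifies, and makes choices about every document in its corpus, which is the opposite of not knowing what’s in there.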

It’s not about the model’s memory. It’s about the platform’s intent and awareness.

If they’re smart enough to remove illegal content and prioritize clean data, they’re smart enough to be held accountable.

We’re not living through the first digital content crisis — just the most powerful one yet. The legal defenses haven’t changed much. But the stakes — for copyright, competition, and consumer protection — are much higher now.

Courts, Congress, and the public should recognize this for what it is: a recycled defense strategy in service of unchecked AI power. Eventually Grokster the company ran into Grokster the case, and all these lawyers are praying that there won’t be an AI version of the Grokster case.

Creators Rally Behind Cyril Vetter’s Termination Rights Case in the Fifth Circuit

Songwriter and publisher Cyril Vetter is at the center of a high-stakes copyright case over his song “Double Shot of My Baby’s Love” with massive implications for authors’ termination rights under U.S. law. His challenge to Resnik Music Group has reached the Fifth Circuit Court of Appeals, and creators across the country are showing up in force, with a wave of amicus briefs filed in support, including one from the Artist Rights Institute. Let’s consider the case on appeal.

At the heart of Vetter’s case is a crucial question: When a U.S. author signs a U.S. contract governed by U.S. law, and the author (or the author’s heirs) later invokes the 35-year termination right under Sections 203 and 304 of the U.S. Copyright Act, does that termination recover only U.S. rights (the conventional wisdom) or the entire copyright, including worldwide rights? Vetter argued for worldwide rights at trial, and the trial judge agreed over strenuous objections by the music publisher opposing Cyril.

Judge Shelly Dick of the U.S. District Court for the Middle District of Louisiana agreed. Her ruling made clear that a grant of worldwide rights under a U.S. contract is subject to U.S. termination. To hold otherwise would defeat the statute’s purpose, which seems obvious.

I’ve known Vetter’s counsel Tim Kappel since he was a law student and have followed this case closely. Tim built a strong record in the District Court and secured a win against tough odds. MTP readers may recall our interviews with him about the case, which attracted considerable attention. Tim’s work with Cyril has energized a creator community long skeptical of the industry’s “U.S. rights only” narrative, a narrative more tradition than law, an artifact of smoke-filled rooms and backroom lawyers.

The Artist Rights Institute (David Lowery, Nikki Rowling, and Chris Castle), along with allies including Abby North (daughter-in-law of the late film composer Alex North), Blake Morgan (#IRespectMusic), and Angela Rose White (daughter of the late television composer and music director David Rose), filed a brief supporting Vetter. The message is simple: Congress did not grant a second bite at half the apple. Termination rights are meant to restore the full copyright—not just fragments.

As we explained in our brief, Vetter’s original grant of rights was typical: worldwide and perpetual, sometimes described as ‘throughout the universe.’ The idea that termination lets an author reclaim only U.S. rights—leaving the rest with the publisher—is both absurd and dangerous.

This case is a wake-up call. Artists shouldn’t belong to the “torturable class,” doomed to accept one-sided deals as normal. Termination was Congress’s way of correcting those imbalances. Terminations are designed by Congress to give a second bite at the whole apple, not half of it.

Stay tuned—we’ll spotlight more briefs soon. Until then, here’s ours for your review.

Steve’s Not Here–Why AI Platforms Are Still Acting Like Pirate Bay

In 2006, I wrote “Why Not Sell MP3s?” — a simple question pointing to an industry in denial. The dominant listening format was the MP3 file, yet labels were still trying to sell CDs or hide digital files behind brittle DRM. It seems kind of incredible in retrospect, but believe me it happened. Many cycles were burned on that conversation. Fans had moved on. The business hadn’t.

Then came Steve Jobs.

At the launch of the iTunes Store — and I say this as someone who sat in the third row — Jobs gave one of the most brilliant product presentations I’ve ever seen. He didn’t bulldoze the industry. He waited for permission, but only after crafting an offer so compelling it was as if the labels should be paying him to get in. He brought artists on board first. He made it cool, tactile, intuitive. He made it inevitable.

That’s not what’s happening in AI.

Incantor: DRM for the Input Layer

Incantor is trying to be the clean-data solution for AI — a system that wraps content in enforceable rights metadata, licenses its use for training and inference, and tracks compliance. It’s DRM, yes — but applied to training inputs instead of music downloads.

It may be imperfect, but at least it acknowledges that rights exist.

What’s more troubling is the contrast between Incantor’s attempt to create structure and the behavior of the major AI platforms, which have taken a very different route.

AI Platforms = Pirate Bay in a Suit

Today’s generative AI platforms — the big ones — aren’t behaving like Apple. They’re behaving like The Pirate Bay with a pitch deck.

– They ingest anything they can crawl.
– They claim “public availability” as a legal shield.
– They ignore licensing unless forced by litigation or regulation.
– They posture as infrastructure, while vacuuming up the cultural labor of others.

These aren’t scrappy hackers. They’re trillion-dollar companies acting like scraping is a birthright. Where Jobs sat down with artists and made the economics work, the platforms today are doing everything they can to avoid having that conversation.

This isn’t just indifference — it’s design. The entire business model depends on skipping the licensing step and then retrofitting legal justifications later. They’re not building an ecosystem. They’re strip-mining someone else’s.

What Incantor Is — and Isn’t

Incantor isn’t Steve Jobs. It doesn’t control the hardware, the model, the platform, or the user experience. It can’t walk into the room and command the majors to listen with elegance. But what it is trying to do is reintroduce some form of accountability — to build a path for data that isn’t scraped, stolen, or in legal limbo.

That’s not an iTunes power move. It’s a cleanup job. And it won’t work unless the AI companies stop pretending they’re search engines and start acting like publishers, licensees, and creative partners.

What the MP3 Era Actually Taught Us

The MP3 era didn’t end because DRM won. It ended because someone found a way to make the business model and the user experience better — not just legal, but elegant. Jobs didn’t force the industry to change. He gave them a deal they couldn’t refuse.

Today, there’s no Steve Jobs. No artists on stage at AI conferences. No tactile beauty. Just cold infrastructure, vague promises, and a scramble to monetize other people’s work before the lawsuits catch up. Let’s face it–when it comes to Elon, Sam, or Zuck, would you buy a used Mac from that man?

If artists and AI platforms were in one of those old “I’m a Mac / I’m a PC” commercials, you wouldn’t need to be told which is which. One side is creative, curious, collaborative. The other is corporate, defensive, and vaguely annoyed that you even asked the question.

Until that changes, platforms like Incantor will struggle to matter — and the AI industry will continue to look less like iTunes, and more like Pirate Bay with an enterprise sales team.

The OBBBA’s AI Moratorium Provision Has Existential Constitutional Concerns and Policy Implications

As we watch the drama of the One Big Beautiful Bill Act play out, there’s a plot twist waiting in the wings that could create a cliffhanger in the third act: the poorly thought out, unnecessary, and frankly offensive AI moratorium safe harbor, serving only the Biggest of Big Tech, that we were gifted by Adam Thierer of the R Street Institute.

The latest version of the AI moratorium poison pill appears in the Senate version of the OBBBA (aka HR1).

The AI moratorium provision within the One Big Beautiful Bill Act (OBBBA) reads like the fact pattern for a bar exam crossover question. The proposed legislation raises significant Constitutional and policy concerns. Before it even gets to the President’s desk, the legislation likely violates the Senate’s Byrd Rule, which polices what may ride in a reconciliation bill, the process that allows the OBBBA to avoid the 60-vote threshold (and the filibuster) and get voted on with a simple majority. The President’s party has a narrow simple majority in the Senate, so if it were not for the moratorium, the OBBBA should pass.

There are lots of people who think the moratorium should fail the “Byrd Bath” analysis because it is not “germane” to the budget and tax process required to qualify for reconciliation. This is important because if the Senate Parliamentarian does not hold the line on germaneness, everyone will get into the act on every bill simply by attaching a chunk of money to your favorite donor’s pet provision, and that will not go over well. According to Roll Call, Senator Cruz is already talking about introducing regulatory legislation with the moratorium, which would likely only happen if the OBBBA poison pill were cut out.

The AI moratorium has already picked up some serious opponents in the Senate who would likely have otherwise voted for the President’s signature legislation with the President’s tax and spending policies in place. The difference between the moratorium and spending cuts is that money is fungible and a moratorium banning states from acting under their police powers really, really, really is not fungible at all. The moratorium is likely going to fail or get close to failing, and if the art of the deal says getting 80% of something is better than 100% of nothing, that moratorium is going to go away in the context of a closing. Maybe.

And don’t forget, the bill has to go back to the House, which passed it by a single vote, and there are already Members of the House who are getting buyer’s remorse about the AI moratorium specifically. So when they get a chance to vote again…who knows.

Even if it passes, the 40 state Attorneys General who oppose it may be gearing up to launch a Constitutional challenge to the provision on a number of grounds starting with the Tenth Amendment, its implications for federalism, and other Constitutional issues that just drip out of this thing. And my bet is that Adam Thierer will be eyeball witness #1 in that litigation.

So to recap the vulnerabilities:

Byrd Rule Violation

The Byrd Rule prohibits non-budgetary provisions in reconciliation bills. The AI moratorium’s primary effect is regulatory, not fiscal, as it preempts state laws without directly impacting federal revenues or expenditures. Senators, including Ed Markey (D-MA), have indicated intentions to challenge the provision under the Byrd Rule, as reported by Roll Call and The Hill.

Federal Preemption, the Tenth Amendment and Anti-Commandeering Doctrine

The Tenth Amendment famously reserves powers not delegated to the federal government to the states and to the people (remember them?). The constitutional principle of “anticommandeering” is a doctrine under U.S. Constitutional law that prohibits the federal government from compelling states or state officials to enact, enforce, or administer federal regulatory programs.

Anticommandeering is grounded primarily in the Tenth Amendment. Under this principle, while the federal government can regulate individuals directly under its enumerated powers (such as the Commerce Clause), it cannot force state governments to govern according to federal instructions. Which is, of course, exactly what the moratorium does, although the latest version would have you believe that the feds aren’t really commandeering, they are just tying behavior to money which the feds do all the time. I doubt anyone believes it.

The AI moratorium infringes upon the good old Constitution by:

  • Overriding State Authority: It prohibits states from enacting or enforcing AI regulations, infringing upon their traditional police powers to legislate for the health, safety, and welfare of their citizens.
  • Lack of Federal Framework: Unlike permissible federal preemption, which operates within a comprehensive federal regulatory scheme, the AI moratorium lacks such a framework, making it more akin to unconstitutional commandeering.
  • Precedent in Murphy v. NCAA: The Supreme Court held that Congress cannot commandeer the states by prohibiting them from enacting laws of their own choosing, because such a prohibition violates the anti-commandeering principle. The AI moratorium, by preventing states from regulating AI, mirrors the unconstitutional aspects identified in Murphy. So there’s that.

The New Problem: Coercive Federalism

By conditioning federal broadband funds (“BEAD money”) on states’ agreement to pause AI regulations, the provision exerts undue pressure on states, potentially violating principles established in cases like NFIB v. Sebelius. Plus, the Broadband Equity, Access, and Deployment (BEAD) Program is a $42.45 billion federal initiative established under the Infrastructure Investment and Jobs Act of 2021. Administered by the National Telecommunications and Information Administration (NTIA), BEAD aims to expand high-speed internet access across the United States by funding planning, infrastructure deployment, and adoption programs. In other words, BEAD has nothing to do with the AI moratorium. So there’s that.

Supremacy Clause Concerns

The moratorium may conflict with existing state laws, leading to legal ambiguities and challenges regarding federal preemption. That’s one reason why 40 state AGs are going to the mattresses for the fight.

Lawmakers Getting Cold Feet or In Opposition

Several lawmakers have voiced concerns or opposition to the AI moratorium:

  • Rep. Marjorie Taylor Greene (R-GA): Initially voted for the bill but later stated she was unaware of the AI provision and would have opposed it had she known. She has said that she will vote no on the OBBBA when it comes back to the House if Mr. T’s moratorium poison pill is still in there.
  • Sen. Josh Hawley (R-MO): Opposes the moratorium, emphasizing the need to protect individual rights over corporate interests.
  • Sen. Marsha Blackburn (R-TN): Expressed concerns that the moratorium undermines state protections, particularly referencing Tennessee’s AI-related laws.
  • Sen. Edward Markey (D-MA): Intends to challenge the provision under the Byrd Rule, citing its potential to harm vulnerable communities.

Recommendation: Allow Dissenting Voices

Full disclosure, I don’t think Trump gives a damn about the AI moratorium. I also think this is performative and is tied to giving the impression to people like Masa at Softbank that he tried. It must be said that Masa’s billions are not quite as important after Trump’s Middle East roadshow as they were before, speaking of leverage. While much has been made of the $1 million contributions that Zuckerberg, Tim Apple, & Co. made to attend the inaugural, there’s another way to look at that tableau: remember Titus Andronicus, when the general returned to Rome with Goth prisoners in chains following his chariot? That was Tamora, the Queen of the Goths, and her three sons Alarbus, Chiron, and Demetrius, along with Aaron the Moor. Titus and the Goths still hated each other. Just sayin’.

Somehow I wouldn’t be surprised if this entire exercise was connected to the TikTok divestment in ways that aren’t entirely clear. So, given the constitutional concerns and growing opposition, it is advisable for President Trump to permit members of Congress to oppose the AI moratorium provision without facing political repercussions, particularly since Rep. Greene has already said she’s a no vote on a bill that squeaked through the House by one vote the first time around. This approach would:

  • Respect the principles of federalism and states’ rights.
  • Tell Masa he tried, but oh well.
  • Demonstrate responsiveness to legitimate legislative concerns on a bipartisan basis.
  • Ensure that the broader objectives of the OBBBA are not jeopardized by a contentious provision.

Let’s remember: the tax and spending parts of the OBBBA are existential to the Trump agenda; the AI moratorium definitely is not, no matter what Mr. T wants you to believe. While the OBBBA encompasses significant policy initiatives that are highly offensive to a lot of people, the AI moratorium provision presents constitutional and procedural challenges, and fundamental attacks on our Constitution, that warrant its removal. Cutting it out will strengthen the bill’s likelihood of passing and uphold the foundational principles of American governance, at least for now.

Hopefully Trump looks at it that way, too.