The UK Finally Moves to Ban Above-Face-Value Ticket Resale

The UK is preparing to do something fans have begged for and secondary platforms have dreaded for years: ban the resale of tickets above face value. The plan, expected to be announced formally within days, would make the UK one of the toughest anti-scalping jurisdictions in the world. After a decade of explosive profiteering on sites like Viagogo and StubHub, the UK government has decided the resale marketplace needs a reset.

This move delivers on a major campaign promise from the 2024 Labour manifesto and comes on the heels of an unusually unified push from the artist community. More than 40 major artists — including Dua Lipa, Coldplay, Radiohead, Robert Smith, Sam Fender, PJ Harvey, The Chemical Brothers, and Florence + The Machine — signed an open letter urging Prime Minister Sir Keir Starmer to “stop touts from fleecing fans.” (“Touts” is British for “scalpers,” a category that includes resale platforms like StubHub.) Sporting groups, consumer advocates, and supporter associations quickly echoed the call.

Under the reported proposal, tickets could only be resold at face value, with minimal, capped service fees to prevent platforms from disguising mark-ups as “processing costs.” This is a clear rejection of earlier-floated compromises, such as allowing resale at up to 30% over face value, which consumer groups said would simply legitimize profiteering.

Secondary platforms reacted instantly. Reuters reports that StubHub’s U.S.-listed parent lost around 14% of its market value on the news, compounding a disastrous first earnings report. As CNBC’s Jim Cramer put it bluntly: “It’s been a bust — and when you become a busted IPO, it’s very hard to change the narrative.” The UK announcement didn’t just nudge the stock downward; it slammed the door on the rosy growth story StubHub’s bankers were trying to sell.  Readers will know just how broken up I am about that little turn of events.  

Meanwhile, the UK Competition and Markets Authority has opened investigations into fee structures, “drip pricing,” and deceptive listings on both StubHub and Viagogo. Live Nation/Ticketmaster welcomed the move, noting that it already limits resale to face value in the UK.

One important nuance often lost in the public debate: dynamic pricing is not part of this ban — and in the UK, dynamic pricing isn’t the systemic problem it is in the U.S. Ticketmaster and other platforms consistently tell regulators that artists and their teams decide whether to use dynamic pricing, not the platforms. More importantly, relatively few artists actually ask for it. Most want their fans to get in at predictable, transparent prices — and some, like Robert Smith of The Cure, have publicly rejected dynamic pricing altogether.

That’s why the UK’s reform gets the target right: it goes after the for-profit resale economy, not the artists. It stops arbitrage without interfering with how performers choose to price their own shows.

The looming ban also highlights the widening gap between the UK and the U.S. While the UK is about to outlaw the very model that fuels American secondary platforms, U.S. reform remains paralyzed by lobbying pressure, fragmented state laws, and political reluctance to confront multimillion-dollar resale operators.

If the UK fully implements this reform, it becomes the most significant consumer-protection shift in live entertainment in more than a decade. And given the coalition behind it — artists, fans, sports groups, consumer advocates, and now regulators — this time the momentum looks hard to stop.

The Patchwork They Fear Is Accountability: Why Big AI Wants a Moratorium on State Laws

Why Big Tech’s Push for a Federal AI Moratorium Is Really About Avoiding State Investigations, Liability, and Transparency

As Congress debates the so-called “One Big Beautiful Bill Act,” one of its most explosive provisions has stayed largely below the radar: a 10-year or 5-year or any-year federal moratorium on state and local regulation of artificial intelligence. Supporters frame it as a common-sense way to prevent a “patchwork” of conflicting state laws. But the real reason for the moratorium may be more self-serving—and more ominous.

The truth is, the patchwork they fear is not complexity. It’s accountability.

Liability Landmines Beneath the Surface

As has been well-documented by the New York Times and others, generative AI platforms have likely ingested and processed staggering volumes of data that implicate state-level consumer protections. That data includes biometric identifiers (like voiceprints and faces), personal communications, educational records, and sensitive metadata—all of which are protected under laws in states like Illinois (BIPA), California (CCPA/CPRA), and Texas (CUBI).

If these platforms scraped and trained on such data without notice or consent, they are sitting on massive latent liability. Unlike federal laws, which are often narrow or toothless, many state statutes allow private lawsuits and statutory damages. Class action risk is not hypothetical—it is systemic. It is crucial for policymakers to have a clear understanding of where we are today with respect to the collision between AI and consumer rights, including copyright. The corrosion of consumer rights by the richest corporations in commercial history is not something that may happen in the future. Massive violations have already occurred, are occurring this minute, and will continue at an increasing rate.

The Quiet Race to Avoid Discovery

State laws don’t just authorize penalties; they open the door to discovery. Once an investigation or civil case proceeds, AI platforms could be forced to disclose exactly what data they trained on, how it was retained, and whether any red flags were ignored.

This mirrors the arc of the social media addiction lawsuits now consolidated in multidistrict litigation. Platforms denied culpability for years—until internal documents showed what they knew and when. The same thing could happen here, but on a far larger scale.

Preemption as Shield and Sword

The proposed AI moratorium isn’t a regulatory timeout. It’s a firewall. By halting enforcement of state AI laws, the moratorium could prevent lawsuits, derail investigations, and shield past conduct from scrutiny.

Even worse, the Senate version conditions broadband infrastructure funding under the Broadband Equity, Access, and Deployment (BEAD) program on states agreeing to the moratorium—an unconstitutional act of coercion that trades state police powers for federal dollars. The legal implications are staggering, especially under the anti-commandeering doctrine of Murphy v. NCAA and Printz v. United States.

This Isn’t About Clarity. It’s About Control.

Supporters of the moratorium, including senior federal officials and lobbying arms of Big Tech, claim that a single federal standard is needed to avoid chaos. But the evidence tells a different story.

States are acting precisely because Congress hasn’t. Illinois’ BIPA led to real enforcement. California’s privacy framework has teeth. Dozens of other states are pursuing legislation to respond to harms AI is already causing.

In this light, the moratorium is not a policy solution. It’s a preemptive strike.

Who Gets Hurt?
– Consumers, whose biometric data may have been ingested without consent
– Parents and students, whose educational data may now be part of generative models
– Artists, writers, and journalists, whose copyrighted work has been scraped and reused
– State AGs and legislatures, who lose the ability to investigate and enforce

Google Is an Example of Potential Exposure

Google’s former executive chairman Eric Schmidt has seemed very, very interested in writing the law for AI. For example, Schmidt worked behind the scenes for at least two years to shape US artificial intelligence policy under President Biden. Those efforts produced the “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” the longest executive order in history, which President Biden signed on October 30, 2023. As Mr. Schmidt explained in his own words during an Axios interview with Mike Allen, the EO was signed just in time for him to present it as “bait” to the UK government, which convened a global AI safety conference at Bletchley Park under His Excellency Rishi Sunak (the UK’s tech bro Prime Minister), a conference that just happened to start on November 1, the day after President Biden signed the EO. And now look at the disaster that the UK’s AI proposal would be.

As Mr. Schmidt told Axios:

So far we are on a win, the taste of winning is there. If you look at the UK event which I was part of, the UK government took the bait, took the ideas, decided to lead, they’re very good at this, and they came out with very sensible guidelines. Because the US and UK have worked really well together—there’s a group within the National Security Council here that is particularly good at this, and they got it right, and that produced this EO which I think is the longest EO in history, that says all aspects of our government are to be organized around this.

Apparently, Mr. Schmidt hasn’t gotten tired of winning. Of course, President Trump rescinded the Biden AI EO, which may explain why we are now talking about a total moratorium on state enforcement, an idea that percolated at a very pro-Google shillery called the R Street Institute, apparently courtesy of one Adam Thierer. But why might Google be so interested in this idea?

Google may face especially acute liability under state laws if it turns out that biometric or behavioral data from platforms like YouTube Kids or Google for Education were ingested into AI training sets.

These services, marketed to families and schools, collect sensitive information from minors—potentially implicating both federal protections like COPPA and more expansive state statutes. As far back as 2015, Senator Bill Nelson raised alarms about YouTube Kids, calling it “ridiculously porous” in terms of oversight and lack of safeguards. If any of that youth-targeted data has been harvested by generative AI tools, the resulting exposure is not just a regulatory lapse—it’s a landmine.

The moratorium could be seen as an attempt to preempt the very investigations that might uncover how far that exposure goes.

What Is to Be Done?

Instead of smuggling this moratorium into a must-pass bill, Congress should strip it out and hold open hearings. If there’s merit to federal preemption, let it be debated on its own. But do not allow one of the most sweeping power grabs in modern tech policy to go unchallenged.

The public deserves better. Our children deserve better.  And the states have every right to defend their people. Because the patchwork they fear isn’t legal confusion.

It’s accountability.