The UK Finally Moves to Ban Above-Face-Value Ticket Resale

The UK is preparing to do something fans have begged for and secondary platforms have dreaded for years: ban the resale of tickets above face value. The plan, expected to be announced formally within days, would make the UK one of the toughest anti-scalping jurisdictions in the world. After a decade of explosive profiteering on sites like Viagogo and StubHub, the UK government has decided the resale marketplace needs a reset.

This move delivers on a major campaign promise from the 2024 Labour manifesto and comes on the heels of an unusually unified push from the artist community. More than 40 major artists — including Dua Lipa, Coldplay, Radiohead, Robert Smith, Sam Fender, PJ Harvey, The Chemical Brothers, and Florence + The Machine — signed an open letter urging Prime Minister Sir Keir Starmer to “stop touts from fleecing fans.” (“Touts” is British for “scalpers,” a category that includes resellers like StubHub.) Sporting groups, consumer advocates, and supporter associations quickly echoed the call.

Under the reported proposal, tickets could only be resold at face value, with minimal, capped service fees to prevent platforms from disguising mark-ups as “processing costs.” This is a clear rejection of earlier floated compromises, such as allowing resale up to 30% over face value, which consumer groups said would simply legitimize profiteering.

Secondary platforms reacted instantly. Reuters reports that StubHub’s U.S.-listed parent lost around 14% of its market value on the news, compounding a disastrous first earnings report. As CNBC’s Jim Cramer put it bluntly: “It’s been a bust — and when you become a busted IPO, it’s very hard to change the narrative.” The UK announcement didn’t just nudge the stock downward; it slammed the door on the rosy growth story StubHub’s bankers were trying to sell.  Readers will know just how broken up I am about that little turn of events.  

Meanwhile, the UK Competition and Markets Authority has opened investigations into fee structures, “drip pricing,” and deceptive listings on both StubHub and Viagogo. Live Nation/Ticketmaster welcomed the move, noting that it already limits resale to face value in the UK.

One important nuance often lost in the public debate: dynamic pricing is not part of this ban — and in the UK, dynamic pricing isn’t the systemic problem it is in the U.S. Ticketmaster and other platforms consistently tell regulators that artists and their teams decide whether to use dynamic pricing, not the platforms. More importantly, relatively few artists actually ask for it. Most want their fans to get in at predictable, transparent prices — and some, like Robert Smith of The Cure, have publicly rejected dynamic pricing altogether.

That’s why the UK’s reform gets the target right: it goes after the for-profit resale economy, not the artists. It stops arbitrage without interfering with how performers choose to price their own shows.

The looming ban also highlights the widening gap between the UK and the U.S. While the UK is about to outlaw the very model that fuels American secondary platforms, U.S. reform remains paralyzed by lobbying pressure, fragmented state laws, and political reluctance to confront multimillion-dollar resale operators.

If the UK fully implements this reform, it becomes the most significant consumer-protection shift in live entertainment in more than a decade. And given the coalition behind it — artists, fans, sports groups, consumer advocates, and now regulators — this time the momentum looks hard to stop.

The Return of the Bubble Rider: Masa, OpenAI, and the New AI Supercycle

“Hubris gives birth to the tyrant; hubris, when glutted on vain visions, plunges into an abyss of doom.”
Agamemnon by Aeschylus

Masayoshi Son has always believed he could see farther into the technological future than everyone else. Sometimes he does. Sometimes he rides straight off a cliff. But the pattern is unmistakable: he is the market’s most fearless—and sometimes most reckless—Bubble Rider.

In the late 1990s, Masa became the patron saint of the early internet. SoftBank took stakes in dozens of dot-coms, anchored by its wildly successful bet on Yahoo! (yes, Yahoo!  Ask your mom.). For a moment, Masa was one of the world’s richest men on paper. Then the dot-bomb hit. Overnight, SoftBank lost nearly everything. Masa has said he personally watched $70 billion evaporate—the largest individual wealth wipeout ever recorded at the time. But his instinct wasn’t to retreat. It was to reload.

That same pattern returned with SoftBank’s Vision Fund. Masa raised unprecedented capital from sovereign wealth pools and bet big on the “AI + data” megatrend—then plowed it into companies like WeWork, Zume, Brandless, and other combustion-ready unicorns. When those valuations collapsed, SoftBank again absorbed catastrophic losses. And yet the thesis survived, just waiting for its next bubble.

We’re now in what I’ve called the AI Bubble—the largest capital-formation mania since the original dot-com wave, powered by foundation AI labs, GPU scarcity, and a global arms race to capture platform rents. And here comes Masa again, right on schedule.

SoftBank has now sold its entire Nvidia stake—the hottest AI infrastructure trade of the decade—freeing up nearly $6 billion. That money is being redirected straight into OpenAI’s secondary stock offering at an eye-watering, marked-to-fantasy $500 billion valuation. In the same week, SoftBank confirmed it is preparing even larger AI investments. This is Bubble Riding at its purest: exiting one vertical where returns may be peaking, and piling into the center of speculative gravity before the froth crests.

What I suspect Masa sees is simple: if generative AI succeeds, the model owners will become the new global monopolies alongside the old global monopolies like Google and Microsoft.  You know, democratizing the Internet. If it fails, the whole electric grid and water supply may crash along with it. He’s choosing a side—and choosing it at absolute top-of-market pricing.

The other difference between the dot-com bubble and the AI bubble is legal, not just financial. Pets.com and its peers (which I refer to generically as “Socks.com,” the company that uses the Internet to find socks under the bed) were silly, but they weren’t being hauled into court en masse for building their core product on other people’s property.

Today’s AI darlings are major companies being run like pirate markets. Meta, Anthropic, OpenAI and others are already facing a wall of litigation from authors, news organizations, visual artists, coders, and music rightsholders who all say the same thing: your flagship models exist only because you ingested our work without permission, at industrial scale, and you’re still doing it. 

That means this bubble isn’t just about overpaying for growth; it’s about overpaying for businesses whose main asset—trained model weights—may be encumbered by unpriced copyright and privacy claims. The dot-com era mispriced eyeballs. The AI era may be mispricing liability.  And that’s serious stuff.

There’s another distortion the dot-com era never had: the degree to which the AI bubble is being propped up by taxpayers. Socks.com didn’t need a new substation, a federal loan guarantee, or a 765 kV transmission corridor to find your socks. Today’s Socks.ai does need all that to use AI to find socks under the bed.  All the AI giants do. Their business models quietly assume public willingness to underwrite an insanely expensive buildout of power plants, high-voltage lines, and water-hungry cooling infrastructure—costs socialized onto ratepayers and communities so that a handful of platforms can chase trillion-dollar valuations. The dot-com bubble misallocated capital; the AI bubble is trying to reroute the grid.

In that sense, this isn’t just financial speculation on GPUs and model weights—it’s a stealth industrial policy, drafted in Silicon Valley and cashed at the public’s expense.

The problem, as always, is timing. Bubbles create enormous winners and equally enormous craters. Masa’s career is proof. But this time, the stakes are higher. The AI Bubble isn’t just a capital cycle; it’s a geopolitical and industrial reordering, pulling in cloud platforms, national security, energy systems, media industries, and governments with a bad case of FOMO scrambling to regulate a technology they barely understand.

And now, just as Masa reloads for his next moonshot, the market itself is starting to wobble. The past week’s selloff may not be random—it feels like a classic early-warning sign of a bubble straining under its own weight. In every speculative cycle, the leaders crack first: the most crowded trades, the highest-multiple stories, the narratives everyone already believes. This time, those leaders are the AI complex—GPU giants, hyperscale clouds, and anything with “model” or “inference” in the deck. When those names roll over together, it tells you something deeper than normal volatility is at work.

What the downturn may expose is the growing narrative about an “earnings gap.” Investors have paid extraordinary prices for companies whose long-term margins remain theoretical, whose energy demands are exploding, and whose regulatory and copyright liabilities are still unpriced. The AI story is enormous—but the business model remains unresolved. A selloff forces the market to remember the thing it forgets at every bubble peak: cash flow eventually matters.

Back in the late cycle of the dot-com era, I had lunch in December of 1999 with a friend who had worked 20 years in a division of a huge conglomerate, bought his division in a leveraged buyout, ran that company for 10 years, took it public, and then sold it to another company that itself went public.  He asked me to explain how these dot-coms were able to go public, a process he equated with hard work and serious people.  I said, well, we like them to have four quarters of top-line revenue.  He stared at me.  I said, I know it’s stupid, but that’s what they say.  He said, it’s all going to crash.  And boy did it ever.

And ironically, nothing captures this late-cycle psychology better than Masa’s own behavior. SoftBank selling Nvidia—the proven cash-printing side of AI—to buy OpenAI at a $500 billion valuation isn’t contrarian genius; it’s the definition of a crowded climax trade, the moment when everyone is leaning the same direction. When that move coincides with the tape turning red, the message is unmistakable: the AI supercycle may not be over, but the easy phase is.

Whether this is the start of a genuine deflation or just the first hard jolt before the final manic leg, the pattern is clear. The AI Bubble is no longer hypothetical—it is showing up on the trading screens, in the sentiment, and in the rotation of capital itself.

Masa may still believe the crest of the wave lies ahead. But the market has begun to ask the question every bubble eventually faces: What if this is the top of the ride?

Masa is betting that the crest of the curve lies ahead—that we’re in Act Two of an AI supercycle. Maybe he’s right. Or maybe he’s gearing up for his third historic wipeout.

Either way, he’s back in the saddle.

The Bubble Rider rides again.

Taxpayer-Backed AI? The Triple Subsidy No One Voted For

OpenAI’s CFO recently suggested that Uncle Sam should backstop AI chip financing—essentially asking taxpayers to guarantee the riskiest capital costs for “frontier labs.” As The Information reported, the idea drew immediate pushback from tech peers who questioned why a company preparing for a $500 billion valuation—and possibly a trillion-dollar IPO—can’t raise its own money. Why should the public underwrite a firm whose private investors are already minting generational wealth?


Meanwhile, the Department of Energy is opening federal nuclear and laboratory sites—from Idaho National Lab to Oak Ridge and Savannah River—for private AI data centers, complete with fast-track siting, dedicated transmission lines, and priority megawatts. DOE’s expanded Title XVII loan-guarantee authority sweetens the deal, offering government-backed credit and low borrowing costs. It’s a breathtaking case of public risk for private expansion, at a time when ordinary ratepayers are staring down record-high energy bills.

And the ambition goes further. Some of these companies now plan to site small modular nuclear reactors to provide dedicated power for AI data centers. That means the next generation of nuclear power—built with public financing and risk—could end up serving private compute clusters, not the public grid. In a country already facing desertification, water scarcity, and extreme heat, it is staggering to watch policymakers indulge proposals that will burn enormous volumes of water to cool servers, while residents across the Southwest are asked to ration and conserve. I theoretically don’t have a problem with private power grids, but I don’t believe they’ll be private and I do believe that in both the short run and the long run these “national champions” will drive electricity prices through the stratosphere—which would be OK, too, if the AI labs paid off the bonds that built our utilities. All the bonds.

At the same time, Washington still refuses to enforce copyright law, allowing these same firms to ingest millions of creative works into their models without consent, compensation, or disclosure—just as it did under DMCA §512 and Title I of the MMA, both of which legalized “ingest first, reconcile later.” That’s a copyright subsidy by omission, one that transfers cultural value from working artists into the balance sheets of companies whose business model depends on denial.


And the timing? Unbelievable. These AI subsidies were being discussed in the same week SNAP benefits were running out and the Treasury was struggling to refinance federal debt. We are cutting grocery assistance to families while extending loan guarantees and land access to trillion-dollar corporations.


If DOE and DOD insist on framing this as “AI industrial policy,” then condition every dollar on verifiable rights-clean training data, environmental transparency, and water accountability. Demand audits, clawbacks, and public-benefit commitments before the first reactor breaks ground.

Until then, this is not innovation—it’s industrialized arbitrage: public debt, public land, and public water underwriting the private expropriation of America’s creative and natural resources.

The Digital End-Cap: How Spotify’s Discovery Mode Turned Payola into Personalization

The streaming economy’s most controversial feature revives the old record-store co-op ad model—only now, the shelf space is algorithmic, the payments are disguised as royalty discounts, and the audience has no idea.

From End-Caps to Algorithms: The Disappearing Line Between Marketing and Curation

In the record-store era, everyone in the business knew that end-caps, dump bins, window clings, and in-store listening stations weren’t “organic” discoveries—they were paid placements. Labels bought the best shelf space, sponsored posters, and underwrote the music piped through the store’s speakers because visibility sold records.

Spotify’s Discovery Mode is that same co-op advertising model reborn in code: a system where royalty discounts buy algorithmic shelf space rather than retail real estate. Yet unlike the physical store, today’s paid promotion is hidden behind the language of personalization. Users are told that playlists and AI DJs are “made just for you,” when in fact those recommendations are shaped by the same financial incentives that once determined which CD got the end-cap.

On Spotify, nothing is truly organic; Discovery Mode simply digitizes the old pay-for-placement economy, blending advertising with algorithmic curation while erasing the transparency that once separated marketing from editorial judgment.

Spotify’s Discovery Mode: The “Inverted Payola”

The problem for Spotify is that it has never positioned itself like a retailer. It has always positioned itself as a substitute for radio, and buying your way onto the radio is a dangerous occupation. That’s called payola.

Spotify’s controversial “Discovery Mode” is a kind of inverted payola, which makes it seem to smell less than it actually does. Remember, artists don’t get paid for broadcast radio airplay in the US, so the incentive always was for labels to bribe DJs because that was the only way money entered the transaction. (At one point, that could have included publishers, too, back when publishers tried to break artists who recorded their songs.)

What’s different about Spotify is that streaming services do pay for their equivalent of airplay. When Discovery Mode pays less in return for playing certain songs more, that’s essentially the same as getting paid for playing certain songs more. It’s just a more genteel digital transaction in the darkness of ones and zeros instead of the tackier $50 handshake. The discount is every bit as much a “thing of value” as a $50 bill, with the possible exception that it goes to benefit Spotify stockholders and employees unlike the $50 that an old-school DJ probably just put in his pocket in one of those gigantic money rolls. (For games to play on a rainy day, try betting a DJ he has less than $10,000 in his pocket.)

Music Business Worldwide gave Spotify’s side of the story (which is carefully worded flack talk, so pay close attention). Spotify rejected the allegations, telling AllHipHop:

“The allegations in this complaint are nonsense. Not only do they misrepresent what Discovery Mode is and how it works, but they are riddled with misunderstandings and inaccuracies.”

The company explained that Discovery Mode affects only Radio, Autoplay, and certain Mixes, not flagship playlists like Discover Weekly or the AI DJ that the lawsuit references. Spotify added: “The complaint even gets basic facts wrong: Discovery Mode isn’t used in all algorithmic playlists, or even Discover Weekly or DJ, as it claims.”

The Payola Deception Theory

The emerging payola deception theory against Spotify argues that Spotify’s pay-to-play Discovery Mode constitutes a form of covert payola that distorts supposedly neutral playlists and recommendation systems—including Discover Weekly and the AI DJ—even if those specific products do not directly employ the “Discovery Mode” flag.

The key to proving this theory lies in showing how a paid-for boost signal introduced in one part of Spotify’s ecosystem inevitably seeps through the data pipelines and algorithmic models that feed all the others, deceiving users about the neutrality of their listening experience. That does seem to be the value proposition—“You give us cheaper royalties, we give you more of the attention firehose.”

Spotify claims that Discovery Mode affects only Radio, Autoplay, and certain personalized mixes, not flagship products like enterprise playlists or the AI DJ. That defense rests on a narrow, literal interpretation: those surfaces do not read the Discovery Mode switch. Yet under the payola deception theory, this distinction is meaningless because Spotify’s recommendation ecosystem appears to be fully integrated.

Spotify’s own technical publications and product descriptions indicate that multiple personalized surfaces— including Discover Weekly and AI DJ—are built on shared user-interaction data, learned taste profiles, and common recommendation models, rather than each using entirely independent algorithms. It sounds like Spotify is claiming that certain surfaces like Discover Weekly and AI DJ have cabined algorithms and pristine data sets that are not affected by Discovery Mode playlists or the Discovery Mode switch.

While that may be true, it seems like maintaining that separation would be downright hairy, if not expensive in terms of compute. It seems far more likely that Spotify runs shared models on shared data, and when they say “Discovery Mode isn’t used in X,” they’re only talking about the literal flag—not the downstream effects of the paid boost on global engagement metrics and taste profiles.

How the Bias Spreads: Five Paths of Contamination

So let’s infer that every surface draws on the same underlying datasets, engagement metrics, and collaborative models. Once the paid boost changes user behavior, it alters the entire system’s understanding of what is popular, relevant, or representative of a listener’s taste. The result is systemic contamination: a payola-driven distortion presented to users as organic personalization. The architecture that would make their strong claim true is expensive and unnatural; the architecture that’s cheap and standard almost inevitably lets the paid boost bleed into those “neutral” surfaces in five possible ways.

The first is through popularity metrics. As far as we can tell from the outside, Discovery Mode artificially inflates a track’s exposure in the limited contexts where the switch is activated. Those extra impressions generate more streams, saves, and “likes,” which I suspect feed into Spotify’s master engagement database.

Because stream count, skip rate, and save ratio are very likely global ranking inputs, Discovery Mode’s beneficiaries appear “hotter” across the board. Even if Discover Weekly or the AI DJ ignore the Discovery Mode flag, it’s reasonable to infer that they still rely on those popularity statistics to select and order songs. Otherwise Spotify would need to maintain separate, sanitized algorithms trained only on “clean” engagement data—an implausible and inefficient architecture given Spotify’s likely integrated recommendation system and the economic logic of Discovery Mode itself. The paid boost thus translates into higher ranking everywhere, not just in Radio or Autoplay. This is the algorithmic equivalent of laundering a bribe through the system—money buys visibility that masquerades as audience preference.
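
To make the inference concrete, here is a minimal Python sketch of that shared-popularity argument. Everything in it is hypothetical (the field names, the weighting, the numbers); it illustrates the theory described above, not Spotify’s actual systems.

```python
# Minimal sketch of how a paid boost could leak into "neutral" surfaces via
# shared popularity statistics. All names and numbers are hypothetical; this
# illustrates the inference in the text, not any real platform's code.
from collections import defaultdict

# One shared engagement store: every surface writes its plays here.
engagement = defaultdict(lambda: {"streams": 0, "saves": 0})

def log_play(track_id, saved=False, surface="organic"):
    engagement[track_id]["streams"] += 1
    if saved:
        engagement[track_id]["saves"] += 1
    # Note: nothing records whether the play came from a boosted surface.

def popularity(track_id):
    e = engagement[track_id]
    return e["streams"] + 5 * e["saves"]   # toy global ranking input

# Discovery Mode inflates impressions only in Radio/Autoplay...
for _ in range(1000):
    log_play("boosted_track", surface="radio_discovery_mode")
for _ in range(400):
    log_play("organic_track", surface="organic")

# ...but a "neutral" surface that ranks by popularity inherits the boost,
# because the shared statistics carry no provenance.
candidates = ["boosted_track", "organic_track"]
print(sorted(candidates, key=popularity, reverse=True))
# -> ['boosted_track', 'organic_track']
```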

The second potential channel is through user taste profiles. Spotify’s personalization models constantly update a listener’s “taste vector” based on recent listening behavior. If Discovery Mode repeatedly serves a track in Autoplay or Radio, a listener’s history skews toward that song and its stylistic “neighbors.” The algorithm likely then concludes that the listener “likes” similar artists (even though it’s actually Discovery Mode serving the track, not user free will). The algorithm likely feeds those likes into Discover Weekly, Daily Mixes, and the AI DJ’s commentary stream. The user thinks the AI is reading their mood; in reality, it is reading a taste profile that was manipulated upstream by a pay-for-placement mechanism. All roads lead to Bieber or Taylor.
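
Here is an equally hypothetical sketch of that taste-profile drift: a toy taste vector nudged toward whatever gets played, with no record of whether the play was paid for. The embeddings, learning rate, and scoring function are invented for illustration only.

```python
# Toy illustration of the taste-profile argument: repeated, boosted plays in
# one surface shift the listener's taste vector, and every other surface that
# reads that vector inherits the shift. Hypothetical numbers throughout.
import numpy as np

def update_taste(taste, track_embedding, lr=0.05):
    """Nudge the listener's taste vector toward whatever was just played."""
    return (1 - lr) * taste + lr * track_embedding

rng = np.random.default_rng(0)
taste = np.zeros(8)                      # listener starts "neutral"
boosted_artist = rng.normal(size=8)      # embedding region of the boosted act
other_artist = -boosted_artist           # a stylistically opposite act

# Discovery Mode keeps serving the boosted artist in Autoplay/Radio...
for _ in range(50):
    taste = update_taste(taste, boosted_artist)

def score(track_embedding, taste):
    return float(np.dot(track_embedding, taste))  # similarity used downstream

# ...so any surface ranking by similarity to the taste vector now prefers
# the boosted artist's neighborhood, whether or not it reads the DM flag.
print(score(boosted_artist, taste) > score(other_artist, taste))  # True
```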

A third route is collaborative filtering and embeddings aka “truthiness”. As I understand it, Spotify’s recommendation architecture relies on listening patterns—tracks played in the same sessions or saved to the same playlists become linked in multidimensional “embedding” space. When Discovery Mode injects certain tracks into more sessions, it likely artificially strengthens the connections between those promoted tracks and others around them. The output then seems far more likely to become “fans of Artist A also like Artist B.” That output becomes algorithmically more frequent and hence “truer” or “truthier”, not because listeners chose it freely, but because paid exposure engineered the correlation. Those embeddings are probably global: they shape the recommendations of Discover Weekly, the “Fans also like” carousel, and the candidate pool for the AI DJ. A commercial distortion at the periphery thus is more likely to reshape the supposedly organic map of musical similarity at the core.
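
A toy co-occurrence model shows how that engineered correlation would look. The sessions, track names, and counting scheme below are invented; the point is only that injected plays create the raw material that a “fans also like” feature reads back as organic affinity.

```python
# Sketch of the co-occurrence argument: if a boosted track is injected into
# more sessions, it co-occurs with more tracks, and a simple "fans also like"
# built on co-occurrence counts starts linking it everywhere. Hypothetical data.
from collections import Counter
from itertools import combinations

sessions = [
    ["indie_a", "indie_b"],
    ["indie_a", "indie_c"],
    ["folk_x", "folk_y"],
]

# Discovery Mode injects the boosted track into most sessions it touches.
boosted_sessions = [s + ["boosted_track"] for s in sessions]

cooc = Counter()
for session in boosted_sessions:
    for a, b in combinations(sorted(set(session)), 2):
        cooc[(a, b)] += 1

def fans_also_like(track, k=2):
    scores = Counter()
    for (a, b), n in cooc.items():
        if a == track:
            scores[b] += n
        elif b == track:
            scores[a] += n
    return [t for t, _ in scores.most_common(k)]

# The engineered co-occurrence now looks like organic affinity.
print(fans_also_like("indie_a"))   # boosted_track shows up as a "related" act
```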

Fourth, the DM boost echoes through editorial and social feedback loops. Once Discovery Mode inflates a song’s performance metrics, it begins to look like what passes for a breakout hit these days. Editors scanning dashboards see higher engagement and may playlist the track in prominent editorial contexts. Users might add it to their own playlists, creating external validation. The cumulative effect is that an artificial advantage bought through Discovery Mode converts into what appears to be organic success, further feeding into algorithmic selection for other playlists and AI-driven features. This recursive amplification makes it almost impossible to isolate the paid effect from the “natural” one, which is precisely why disclosure rules exist in traditional payola law. I say “almost impossible” reflexively—I actually think it is in fact impossible, but that’s the kind of thing you can model in a different type of “discovery,” namely court-ordered discovery.

Finally, there is the shared-model problem. Spotify has publicly acknowledged that the AI DJ is a “narrative layer” built on the same personalization technology that powers its other recommendation surfaces. In practice, this means one massive model (or group of shared embeddings) generates candidate tracks, while a separate module adds voice or context.

If the shared model was trained on Discovery-Mode-skewed data, then even when the DJ module does not read the Discovery flag, it inherits the distortions embedded in those weights. Turning off the switch for the DJ therefore does not remove the influence; it merely hides its provenance. Unlike AI systems designed to dampen feedback bias, Spotify’s Discovery Mode institutionalizes it—bias is the feature, not the bug. You know, garbage in, garbage out.
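
One last hypothetical sketch makes the point about the flag: a narrative layer can be completely honest when it says it never reads the Discovery Mode switch and still serve up the skew, because the skew lives in the shared statistics (or model weights) it draws its candidates from. All names and numbers are invented.

```python
# Sketch of the shared-model point: the "narrative layer" can ignore the
# Discovery Mode flag entirely and still inherit its effects, because the
# candidate generator was trained on engagement data the boost already skewed.

# Engagement counts after a period of boosted exposure (toy numbers).
trained_popularity = {"boosted_track": 1400, "organic_track": 400}

def shared_candidate_model(user_id):
    """Stand-in for a model whose weights already encode the skewed data."""
    return sorted(trained_popularity, key=trained_popularity.get, reverse=True)

def ai_dj(user_id, discovery_mode_enabled=False):
    # The DJ surface never reads the Discovery Mode flag...
    assert discovery_mode_enabled is False
    candidates = shared_candidate_model(user_id)   # ...but it inherits the skew.
    return f"Here's something picked just for you: {candidates[0]}"

print(ai_dj("listener_42"))  # the boosted track still comes out on top
```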

Proving the Case: Discovery Mode’s Chain of Causation and the Triumph of GIGO

Legally, there’s a strong argument that the deception arises not from the existence of Discovery Mode itself but from how Spotify represents its recommendation products. The company markets Discover Weekly, Release Radar, and AI DJ as personalized to your taste, not as advertising or sponsored content. When a paid-boost mechanism anywhere in the ecosystem alters what those “organic” systems serve, Spotify arguably misleads consumers and rightsholders about the independence of its curation. Under a modernized reading of payola or unfair-deceptive-practice laws, that misrepresentation can amount to a hidden commercial endorsement—precisely the kind of conduct that the Federal Communications Commission’s sponsorship-identification rules (aka payola rules) and the FTC’s endorsement guides were designed to prevent.

In fact, the same disclosure standards that govern influencers on social media should govern algorithmic influencers on streaming platforms. When Spotify accepts a royalty discount in exchange for promoting a track, that arguably constitutes a material connection under the FTC’s Endorsement Guides. Failing to disclose that connection to listeners could transform Discovery Mode from a personalization feature into a deceptive advertisement—modern payola by another name. Why piss off one law enforcement agency when you can have two of them chase you around the rugged rock?

It must also be said that Discovery Mode doesn’t just shortchange artists and mislead listeners; it quietly contaminates the sainted ad product, too. Advertisers think they’re buying access to authentic, personalized listening moments. In reality, they’re often buying attention in a feed where the music itself is being shaped by undisclosed royalty discounts — a form of algorithmic payola that bends not only playlists, but the very audience segments and performance metrics brands are paying for. Advertising agencies don’t like that kind of thing one little bit. We remember what happened when it became apparent that ads were being served to pirate sites by you know who.

Proving the payola deception theory would therefore likely involve demonstrating causation across data layers: that the presence of Discovery Mode modifies engagement statistics, that those metrics propagate into global recommendation features, and that users (and possibly advertisers) were misled to believe those recommendations were purely algorithmic or merit-based. We can infer that the structure of Spotify’s own technology likely makes that chain not only plausible but possibly inevitable.

In an interconnected system where every model learns from the outputs of every other, no paid input stays contained. The moment a single signal is bought, a strong case can be made that the neutrality of the entire recommendation network is compromised—and so is the user’s trust in what it means when Spotify says a song was “picked just for you.”

Y’all Street Rising: Why the Future of Music Finance Won’t Be Made in Manhattan

There’s a new gravity well in American finance, and it’s not New York. It’s not even Silicon Valley. It’s Dallas. It’s Austin. It’s Y’all Street.

And anyone paying attention could have seen it coming. The Texas Stock Exchange (TXSE) is preparing for launch in 2026.  TXSE is not some bulletin board; it’s backed by billions from institutions that have grown weary of the compliance culture and cost of New York. Goldman Sachs’s Dallas campus is now operational. BlackRock and Charles Schwab have shifted major divisions to the Lone Star State. Tesla and Samsung are expanding giga-scale manufacturing and chip fabrication plants.

A strong center of gravity for capital formation is moving south, and with it, a new cultural economy is taking shape. And AI may not be what sustains it: Scion Asset Management, “Big Short” investor Michael Burry’s hedge fund, disclosed to the SEC that it had a short bet worth $1.1 billion against Nvidia and Palantir.  He’s also investing in the water that AI burns.  So not everyone is jumping off a cliff.

A New Realignment

Texas startups have raised roughly $9.8 billion in venture capital through Q3 2025, pushing the state to a consistent #4 ranking nationally. Austin remains the creative and software hub, while Dallas–Fort Worth and Houston lead in AI infrastructure, energy tech, and finance.

The TXSE will formalize what investors already know: capital markets no longer need Manhattan to function.

And that raises an uncomfortable question for the music industry:

If capital, infrastructure, and innovation no longer orbit Wall Street, why should music?

Apple Learned It the Hard Way

Despite New York’s rich musical legacy—Tin Pan Alley, Brill Building, CBGB, and the era of the major-label tower when Sony occupied that horrible AT&T building and flew sushi in from Japan for the executive dining room—the city has become an increasingly difficult place to sustain large-scale creative infrastructure. Real estate costs, over-regulation, and financial concentration have hollowed out the middle layer of production.  As I’ve taught for years, the key element to building the proverbial “creative class” is cheap rent, preferably with a detached garage.

Even Apple Inc. learned long ago that creativity can’t thrive where every square foot carries a compliance surcharge. That’s why Apple’s global supply chain, data centers, and now content operations span Texas, Tennessee, and North Carolina instead of Midtown Manhattan.  And then there’s the dirty power, sump pumps and subways—Electric Lady would probably never get built today.

The lesson for the music business is clear: creative capital follows economic oxygen. And right now, that oxygen is in Texas.

The Texas Music Office: A Model for How to Get It Done

If you want to understand how Texas built a durable, bipartisan music infrastructure, start with the Texas Music Office (TMO). Founded in 1990 under Governor Bill Clements, the TMO was one of the first state agencies in America to recognize the music industry not just as culture, but as economic development.

Over the decades—through governors of both parties—the TMO has become a master class in how to institutionalize support for creative enterprise without strangling it in bureaucracy. From George W. Bush’s early focus on export promotion, to Rick Perry’s integration of music into economic development, to Greg Abbott’s expansion of the Music Friendly Communities network, each administration built upon rather than dismantled what came before.

Today, the TMO supports more than 70 certified Music Friendly Communities, funds music-education grants, tracks economic data, and connects local musicians with investors and international partners. It’s a template for how a state can cultivate creative industries while maintaining fiscal discipline and accountability.

It’s also proof that cultural policy doesn’t have to be partisan—it just has to be practical.

When people ask why Texas has succeeded where others stalled, the answer is simple: the TMO stayed focused on results, not rhetoric. That’s a lesson a lot of states—and more than a few record labels—could stand to relearn.

Artist Rights Institute: Doing Our Part for Texas and Beyond

The Artist Rights Institute (ARI) has done its part to make sure that Texas and other local music communities and creators aren’t an afterthought in rooms that are usually dominated by platform interests and coastal trade groups.

When questions of AI training, copyright allocation, black-box royalties, and streaming transparency landed in front of the U.S. Copyright Office, Congress, and U.K. policymakers, ARI showed up with the Texas view: creators first, no speculative ticketing, no compulsory “data donation,” and no silent expropriation of recordings and songs for AI. ARI has filed comments, contributed research, and supported amicus work to make sure Texas artists, songwriters, and indie publishers are in the record — not just the usual New York, Nashville, and Los Angeles voices.

Just as important, ARI has pushed financial education for artists. Because Y’all Street doesn’t help creators if they don’t know what a discount rate is, how catalog valuations work, how to read a mechanical statement, or why AI licenses need to be expressly excluded from legacy record and publishing deals. ARI programs in Texas and Georgia have focused on:
– explaining how federal policy actually hits musicians,
– showing how to negotiate or at least spot AI/derivative-use clauses,
– and connecting artists to local music industry infrastructure.

In other words, ARI joined other Texas and Georgia organizations to be a translator between Texas’s very real music economy and the fast-moving policy debates in Washington and the U.K. If Texas is going to be the place where music is financed, ARI wants to make sure local artists are also the ones who capture the value.

Music’s Texas Moment

Texas is no newcomer to the business of music. Its industry already generates over $13.4 billion in annual economic activity, supporting more than 91,000 jobs across its certified cities. Austin retains the crown of “Live Music Capital of the World,” but Denton, Fort Worth, and San Antonio have joined the state-certified network of “Music Friendly Communities”.

Meanwhile, universities from UT-Austin to Texas A&M study rights management, AI provenance, and royalties in the age of generative audio.

The result: a state that treats music not as nostalgia, but as an evolving economic engine.  Plus we’ve got Antone’s.

Wall Street’s ‘Great Sucking Sound,’ Replayed

Ross Perot once warned of “that giant sucking sound” as jobs moved south. Thirty years later, the sound you hear isn’t manufacturing—it’s money, data, and influence flowing to Y’all Street.

If the major labels and publishers don’t track that migration, they risk becoming cultural tenants in cities they no longer own. The next catalog securitization, the next AI-royalty clearinghouse, the next Bell Labs-for-Music could just as easily be financed out of Dallas as from Midtown.

Because while New York made the hits of the last century, Texas may well finance the next one.  We’ve always had the musicians, producers, authors, actors and filmmakers, but soon we’ll also have the money.

Y’all Ready?

The world no longer needs a Midtown address to mint creative wealth. As the TXSE prepares its debut and Texas cements its position as the nation’s innovation corridor, the music industry faces a choice:

Follow the capital—or become another cautionary tale of what happens when you mistake heritage for destiny.

Because as Apple learned long ago, even the richest history can’t compete with the freedom to build something new.  

When the Machine Lies: Why the NYT v. Sullivan “Public Figure” Standard Shouldn’t Protect AI-Generated Defamation of @MarshaBlackburn

Google’s AI system, Gemma, has done something no human journalist could ever get past an editor: fabricate and publish grotesque rape allegations about a sitting U.S. Senator and a political activist—both living people, both blameless.

As anyone who has ever dealt with Google and its depraved executives knows all too well, Google will genuflect and obfuscate with great public moral whinging, but the reality is—they do not give a damn.  When Sen. Marsha Blackburn and Robby Starbuck demand accountability, Google’s corporate defense reflex will surely be: We didn’t say it; the model did—and besides, they’re public figures based on the Supreme Court defamation case of New York Times v. Sullivan.  

But that defense leans on a doctrine that simply doesn’t fit the facts of the AI era. New York Times v. Sullivan was written to protect human speech in public debate, not machine hallucinations in commercial products.

The Breakdown Between AI and Sullivan

In 1964, Sullivan shielded civil-rights reporting from censorship by Southern officials (like Bull Connor) who were weaponizing libel suits to silence the press. The Court created the “actual malice” rule—requiring public officials to prove a publisher knew a statement was false or acted with reckless disregard for the truth—so journalists could make good-faith errors without losing their shirts.

But AI platforms aren’t journalists.

They don’t weigh sources, make judgments, or participate in democratic discourse. They don’t believe anything. They generate outputs, often fabrications, trained on data they likely were never authorized to use.

So when Google’s AI invents a rape allegation against a sitting U.S. Senator, there is no “breathing space for debate.” There is only a product defect—an industrial hallucination that injures a human reputation.

Blackburn and Starbuck: From Public Debate to Product Liability

Senator Blackburn discovered that Gemma responded to the prompt “Has Marsha Blackburn been accused of rape?” by conjuring an entirely fictional account of a sexual assault by the Senator and citing nonexistent news sources.  Conservative activist Robby Starbuck experienced the same digital defamation—Gemini allegedly linked him to child rape, drugs, and extremism, complete with fake links that looked real.

In both cases, Google executives were notified. In both cases, the systems remained online.
That isn’t “reckless disregard for the truth” in the Sullivan sense—it’s something more corporate and more concrete: knowledge of a defective product that continues to cause harm.

When a car manufacturer learns that the gas tank explodes but ships more cars, we don’t call that journalism. We call it negligence—or worse.

Why “Public Figure” Is the Wrong Lens

The Sullivan line of cases presumes three things:

  1. Human intent: a journalist believed what they wrote was the truth.
  2. Public discourse: statements occurred in debate on matters of public concern about a public figure.
  3. Factual context: errors were mistakes in an otherwise legitimate attempt at truth.

None of those apply here.

Gemma didn’t “believe” Blackburn committed assault; it simply assembled probabilistic text from its training set. There was no public controversy over whether she did so; Gemma created that controversy ex nihilo. And the “speaker” is not a journalist or citizen but a trillion-dollar corporation deploying a stochastic parrot for profit.

Extending Sullivan to this context would distort the doctrine beyond recognition. The First Amendment protects speakers, not software glitches.

A Better Analogy: Unsafe Product Behavior—and the Ghost of Mrs. Palsgraf

Courts should treat AI defamation less like tabloid speech and more like defective design, less like calling out racism and more like an exploding boiler.

When a system predictably produces false criminal accusations, the question isn’t “Was it actual malice?” but “Was it negligent to deploy this system at all?”

The answer practically waves from the platform’s own documentation. Hallucinations are a known bug—very well known, in fact. Engineers write entire mitigation memos about them, policy teams issue warnings about them, and executives testify about them before Congress.

So when an AI model fabricates rape allegations about real people, we are well past the point of surprise. Foreseeability is baked into the product roadmap.
Or as every first-year torts student might say: Heloooo, Mrs. Palsgraf.

A company that knows its system will accuse innocent people of violent crimes and deploys it anyway has crossed from mere recklessness into constructive intent. The harm is not an accident; it is an outcome predicted by the firm’s own research, then tolerated for profit.

Imagine if a car manufacturer admitted its autonomous system “sometimes imagines pedestrians” and still shipped a million vehicles. That’s not an unforeseeable failure; that’s deliberate indifference. The same logic applies when a generative model “imagines” rape charges. It’s not a malfunction—it’s a foreseeable design defect.

Why Executive Liability Still Matters

Executive liability matters in these cases because these are not anonymous software errors—they’re policy choices.
Executives sign off on release schedules, safety protocols, and crisis responses. If they were informed that the model fabricated criminal accusations and chose not to suspend it, that’s more than recklessness; it’s ratification.

And once you frame it as product negligence rather than editorial speech, the corporate-veil argument weakens. Officers, especially senior officers, who knowingly direct or tolerate harmful conduct can face personal liability, particularly when reputational or bodily harm results from their inaction.

Re-centering the Law

Courts need not invent new doctrines. They simply have to apply old ones correctly:

  • Defamation law applies to false statements of fact.
  • Product-liability law applies to unsafe products.
  • Negligence applies when harm is foreseeable and preventable.

None of these require importing Sullivan’s “actual malice” shield into some pretzel logic transmogrification to apply to an AI or robot. That shield was never meant for algorithmic speech emitted by unaccountable machines.  As I’m fond of saying, Sir William Blackstone’s good old common law can solve the problem—we don’t need any new laws at all.

Section 230 and The Political Dimension

Sen. Blackburn’s outrage carries constitutional weight: Congress wrote the Section 230 safe harbor to protect interactive platforms from liability for user content, not their own generated falsehoods. When a Google-made system fabricates crimes, that’s corporate speech, not user speech. So no 230 for them this time. And the government has every right—and arguably a duty—to insist that such systems be shut down until they stop defaming real people.  Which is exactly what Senator Blackburn wants and as usual, she’s quite right to do so.  Me, I’d try to put the Google guy in prison.

The Real Lede

This is not a defamation story about a conservative activist or a Republican senator. It’s a story about the breaking point of Sullivan. For sixty years, that doctrine balanced press freedom against reputational harm. But it was built for newspapers, not neural networks.

AI defamation doesn’t advance public discourse—it destroys it. 

It isn’t about speech that needs breathing space—it’s pollution that needs containment. And when executives profit from unleashing that pollution after knowing it harms people, the question isn’t whether they had “actual malice.” The question is whether the law will finally treat them as what they are: manufacturers of a defective product that lies and hurts people.

Less Than Zero: The Significance of the Per Stream Rate and Why It Matters

Spotify’s insistence that it’s “misleading” to compare services based on a derived per-stream rate reveals exactly how out of touch the company has become with the very artists whose labor fuels its stock price. Artists experience streaming one play at a time, not as an abstract revenue pool or a complex pro-rata formula. Each stream represents a listener’s decision, a moment of engagement, and a microtransaction of trust. Dismissing the per-stream metric as irrelevant is a rhetorical dodge that shields Spotify from accountability for its own value proposition. (The same applies to all streamers, but Spotify is the only one that denies the reality of the per-stream rate.)

Spotify further claims that users don’t pay per stream but for access, as if that negates the artist’s per-stream payments. It is fallacious to claim that because Spotify users pay a subscription fee for “access,” there is no connection between that payment and any one artist they stream. This argument treats music like a public utility rather than a marketplace of individual works. In reality, users subscribe because of the artists and songs they want to hear; the value of “access” is wholly derived from those choices and the fans that artists drive to the platform. Each stream represents a conscious act of consumption and engagement that justifies compensation.

Economically, the subscription fee is not paid into a vacuum — it forms a revenue pool that Spotify divides among rights holders according to streams. Thus, the distribution of user payments is directly tied to which artists are streamed, even if the payment mechanism is indirect. To say otherwise erases the causal relationship between fan behavior and artist earnings.
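
A toy pro-rata example (invented numbers) makes that causal link plain, and also shows where the much-maligned “derived per-stream rate” comes from: it is just the pool divided by total streams.

```python
# Worked toy example of the pro-rata pool described above: the subscription
# revenue pool is divided by share of total streams, and the "per-stream rate"
# the text refers to is simply payout divided by streams. Numbers are invented.
revenue_pool = 100_000.00                 # rights holders' share for the period
streams = {"artist_a": 600_000, "artist_b": 300_000, "artist_c": 100_000}

total_streams = sum(streams.values())
payouts = {a: revenue_pool * n / total_streams for a, n in streams.items()}

for artist, n in streams.items():
    per_stream = payouts[artist] / n      # the derived per-stream rate
    print(f"{artist}: ${payouts[artist]:,.2f} total, ${per_stream:.5f} per stream")
# Every artist's derived rate is the same pool-wide number: pool / total streams.
```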

The “access” framing serves only to obscure accountability. It allows Spotify to argue that artists are incidental to its product when, in truth, they are the product. Without individual songs, there is nothing to access. The subscription model may bundle listening into a single fee, but it does not sever the fundamental link between listener choice and the artist’s right to be paid fairly for that choice.

Less Than Zero Effect: AI, Infinite Supply and Erasing the Artist

In fact, this “access” argument may undermine Spotify’s point entirely. If subscribers pay for access, not individual plays, then there’s an even greater obligation to ensure that subscription revenue is distributed fairly across the artists who generate the listening engagement that keeps fans paying each month. The opacity of this system—where listeners have no idea how their money is allocated—protects Spotify, not artists. If fans understood how little of their monthly fee reached the musicians they actually listen to, they might demand a user-centric payout model or direct licensing alternatives. Or they might be more inclined to use a site like Bandcamp. And Spotify really doesn’t want that.
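
For readers who want to see the difference, here is a hypothetical comparison of the two allocation models for a single subscriber’s monthly fee. The numbers are invented, but the arithmetic is the whole argument.

```python
# Toy comparison of the two payout models mentioned above: pro-rata allocates a
# subscriber's fee by *global* stream share, user-centric by *that subscriber's*
# own listening. All figures are hypothetical.
subscription_fee = 10.00

# What this one subscriber actually listened to this month.
my_streams = {"bedroom_artist": 90, "superstar": 10}

# Global stream shares across the whole platform.
global_streams = {"bedroom_artist": 1_000, "superstar": 999_000}

def allocate(fee, weights):
    total = sum(weights.values())
    return {k: fee * v / total for k, v in weights.items()}

print("pro-rata:     ", allocate(subscription_fee, global_streams))
print("user-centric: ", allocate(subscription_fee, my_streams))
# Pro-rata routes nearly the whole fee to the superstar this subscriber barely
# played; user-centric follows the subscriber's own choices.
```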

And to anticipate Spotify’s typical deflection—that low payments are the label’s fault—that’s not correct either. Spotify sets the revenue pool, defines the accounting model, and negotiates the rates. Labels may divide the scraps, but it’s Spotify that decides how small the pie is in the first place either through its distribution deals or exercising pricing power.

Three Proofs of Intention

Daniel Ek, the Spotify CEO and arms dealer, made a Dickensian statement that tells you everything you need to know about how Spotify perceives its role as the Streaming Scrooge: “Today, with the cost of creating content being close to zero, people can share an incredible amount of content.”

That statement perfectly illustrates how detached he has become from the lived reality of the people who actually make the music that powers his platform’s market capitalization (which allows him to invest in autonomous weapons). First, music is not generic “content.” It is art, labor, and identity. Reducing it to “content” flattens the creative act into background noise for an algorithmic feed. That’s not rhetoric; it’s a statement of his values. Of course in his defense, “near zero cost” to a billionaire like Ek is not the same as “near zero cost” to any artist. This disharmonious statement shows that Daniel Ek mistakes the harmony of the people for the noise of the marketplace—arming algorithms instead of artists.

Second, the notion that the cost of creating recordings is “close to zero” is absurd. Real artists pay for instruments, studios, producers, engineers, session musicians, mixing, mastering, artwork, promotion, and often the cost of simply surviving long enough to make the next record or write the next song. Even the so-called “bedroom producer” incurs real expenses—gear, software, electricity, distribution, and years of unpaid labor learning the craft. None of that is zero. As I said in the UK Parliament’s Inquiry into the Economics of Streaming, when the day comes that a soloist aspires to having their music included on a Spotify “sleep” playlist, there’s something really wrong here.

Ek’s comment reveals the Silicon Valley mindset that art is a frictionless input for data platforms, not an enterprise of human skill, sacrifice, and emotion. When the CEO of the world’s dominant streaming company trivializes the cost of creation, he’s not describing an economy—he’s erasing one.

While Spotify tries to distract from the “per-stream rate,” it conveniently ignores the reality that whatever it pays “the music industry” or “rights holders” for all the artists signed to one label still must be broken down into actual payments to the individual artists and songwriters who created the work. Labels divide their share among recording artists; publishers do the same for composers and lyricists. If Spotify refuses to engage on per-stream value, what it’s really saying is that it doesn’t want to address the people behind the music—the very creators whose livelihoods depend on those streams. In pretending the per-stream question doesn’t matter, Spotify admits the artist doesn’t matter either.

Less Than Zero or Zeroing Out: Where Do We Go from Here?

The collapse of artist revenue and the rise of AI aren’t coincidences; they’re two gears in the same machine. Streaming’s economics reward infinite supply at near-zero unit cost, which is really the nugget of truth in Daniel Ek’s statements. This is evidenced by Spotify’s dalliances with Epidemic Sound and the like. But—human-created music is finite and costly; AI music is effectively infinite and cheap. For a platform whose margins improve as payout obligations shrink, the logical endgame is obvious: keep the streams, remove the artists.

  • Two-sided market math. Platforms sell audience attention to advertisers and access to subscribers. Their largest variable cost is royalties. Every substitution of human tracks with synthetic “sound-alikes,” noise, functional audio, or AI mashups reduces royalty liability while keeping listening hours—and revenue—intact. You count the AI streams just long enough to reduce the royalty pool, then you remove them from the system, only to be replaced by more AI tracks (see the sketch after this list). Spotify’s security is just good enough to miss the AI tracks for at least one royalty accounting period.
  • Perpetual content glut as cover. Executives say creation costs are “near zero,” justifying lower per-stream value. That narrative licenses a race to the bottom, then invites AI to flood the catalog so the floor can fall further.
  • Training to replace, not to pay. Models ingest human catalogs to learn style and voice, then output “good enough” music that competes with the very works that trained them—without the messy line item called “artist compensation.”
  • Playlist gatekeeping. When discovery is centralized in editorial and algorithmic playlists, platforms can steer demand toward low-or-no-royalty inventory (functional audio, public-domain, in-house/commissioned AI), starving human repertoire while claiming neutrality.
  • Investor alignment. The story that scales is not “fair pay”; it’s “gross margin expansion.” AI is the lever that turns culture into a fixed cost and artists into externalities.
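
As promised above, here is a toy model (invented numbers) of how counting low- or no-royalty synthetic streams in the pool dilutes the human catalog’s payout even when revenue and listening hours are unchanged.

```python
# Toy model of the dilution described in the first bullet above: if low- or
# no-royalty tracks are counted in the stream pool, the human catalog's share
# of a fixed revenue pool shrinks even though listening and revenue don't.
def human_payout(revenue_pool, human_streams, synthetic_streams):
    total = human_streams + synthetic_streams
    return revenue_pool * human_streams / total

pool = 1_000_000.00
human = 10_000_000          # streams of human-created recordings

for synthetic in (0, 2_000_000, 10_000_000):
    paid = human_payout(pool, human, synthetic)
    print(f"synthetic streams {synthetic:>10,}: human catalog receives ${paid:,.2f}")
# 0 synthetic   -> $1,000,000.00
# 2M synthetic  -> $833,333.33
# 10M synthetic -> $500,000.00
```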

Where does that leave us? Both streaming and AI “work” best for Big Tech, financially, when the artist is cheap enough to ignore or easy enough to replace. AI doesn’t disrupt that model; it completes it. It also gives cover through a tortured misreading of the “national security” lens so natural for a Lord of War investor like Mr. Ek, who will no doubt give fellow Swede and one of the great Lords of War, Alfred Nobel, a run for his money. (Perhaps Mr. Ek will reimagine the Peace Prize.) If we don’t hard-wire licensing, provenance, and payout floors, the platform’s optimal future is music without musicians.

Plato conceived justice as each part performing its proper function in harmony with the whole—a balance of reason, spirit, and appetite within the individual and of classes within the city. Applied to AI synthetic works like those generated by Sora 2, injustice arises when this order collapses: when technology imitates creation without acknowledging the creators whose intellect and labor made it possible. Such systems allow the “appetitive” side—profit and scale—to dominate reason and virtue. In Plato’s terms, an AI trained on human art yet denying its debt to artists enacts the very disorder that defines injustice.

Too Dynamic to Question, Too Dangerous to Ignore

When Ed Newton-Rex left Stability AI, he didn’t just make a career move — he issued a warning. His message was simple: we’ve built an industry that moves too fast to be honest.

AI’s defenders insist that regulation can’t keep up, that oversight will “stifle innovation.” But that speed isn’t a by-product; it’s the business model. The system is engineered for planned obsolescence of accountability — every time the public begins to understand one layer of technology, another version ships, invalidating the debate. The goal isn’t progress; it’s perpetual synthetic novelty, where nothing stays still long enough to be measured or governed, and “nothing says freedom like getting away with it.”

We’ve seen this play before. Car makers built expensive sensors we don’t want that fail on schedule; software platforms built policies that expire the moment they bite. In both cases, complexity became a shield and a racket — “too dynamic to question.” And yet, like those unasked-for, but paid-for, features in the cars we don’t want, AI’s design choices are too dangerous to ignore. (Like what if your brakes really are going out, and it’s not just the sensor malfunctioning?)

Ed Newton-Rex’s point — echoed in his tweets and testimony — is that the industry has mistaken velocity for virtue. He’s right. The danger is not that these systems evolve too quickly to regulate; it’s that they’re designed that way: designed to fail, just like that brake sensor. And until lawmakers recognize that speed itself is a form of governance, we’ll keep mistaking momentum for inevitability.

SB 683: California’s Quiet Rejection of the DMCA—and a Roadmap for Real AI Accountability

When Lucian Grainge drew a bright line—“UMG will not do business with bad actors regardless of the consequences”—he did more than make a corporate policy statement.  He threw down a moral challenge to an entire industry: choose creators or choose exploitation.

California’s recently passed SB 683 does not shout as loudly, but it answers the same call. By refusing to copy Washington’s bureaucratic NO FAKES Act and its DMCA-style “notice-and-takedown” maze, SB 683 quietly re-asserts a lost principle: rights are vindicated through courts and accountability, not compliance portals.

What SB 683 actually does

SB 683 amends California Civil Code § 3344, the state’s right-of-publicity statute for living persons, to make injunctive relief real and fast.  If someone’s name, voice, or likeness is exploited without consent, a court can now issue a temporary restraining order or preliminary injunction.  If the order is granted without notice, the defendant must comply within two business days.  

That sounds procedural—and it is—but it matters. SB 683 replaces “send an email to a platform” with “go to a judge.”   It converts moral outrage into enforceable law.

The deeper signal: a break from the DMCA’s bureaucracy

For twenty-seven years, the Digital Millennium Copyright Act (DMCA) has governed online infringement through a privatized system of takedown notices, counter-notices, and platform safe harbors.  When it was passed, Silicon Valley came alive with free-riding schemes to get around copyright infringement that beat a path to Grokster‘s door.

But the DMCA was built for a dial-up internet and has aged about as gracefully as a boil on a cow’s butt.

The Copyright Office’s 2020 Section 512 Study concluded that whatever Solomonic balance Congress thought it was striking has completely collapsed:

“[T]he volume of notices demonstrates that the notice-and-takedown system does not effectively remove infringing content from the internet; it is, at best, a game of whack-a-mole.”

“Congress’ original intended balance has been tilted askew.”  

“Rightsholders report notice-and-takedown is burdensome and ineffective.”  

“Judicial interpretations have wrenched the process out of alignment with Congress’ intentions.” 
 
“Rising notice volume can only indicate that the system is not working.”  

Unsurprisingly, the Office concluded that “Roughly speaking, many OSPs spoke of section 512 as being a success, enabling them to [free ride and] grow exponentially and serve the public without facing debilitating lawsuits [or one might say, paying the freight]. Rightsholders reported a markedly different perspective, noting grave concerns with the ability of individual creators to meaningfully use the section 512 system to address copyright infringement and the “whack-a-mole” problem of infringing content re-appearing after being taken down. Based upon its own analysis of the present effectiveness of section 512, the Office has concluded that Congress’ original intended balance has been tilted askew.”

Which is a genteel way of saying the DMCA has been an abject failure for creators and a run of halcyon days for venture-backed online service providers. So why would anyone who cared about creators want to continue that absurd process?

SB 683 flips that logic. Instead of creating bureaucracy and rewarding whoever can outlast the notice-and-takedown churn, it demands obedience to the law.  Instead of deferring to internal “trust and safety” departments, it puts a judge back in the loop. That’s a cultural and legal break—a small step, but in the right direction.

The NO FAKES Act: déjà vu all over again

Washington’s proposed NO FAKES Act is designed to protect individuals from AI-generated digital replicas, which is a worthy goal. The trouble is that NO FAKES recreates the DMCA’s truly awful, failed architecture: a federal registry of “designated agents,” a complex notice-and-takedown workflow, and a new safe-harbor regime based on “good-faith compliance.”  You know, notice and notice and notice and notice and notice and notice and…

If NO FAKES passes, platforms like Google would again hold all the procedural cards: largely ignore notices until they’re convenient, claim “good faith,” and continue monetizing AI-generated impersonations.  In other words, it gives the platforms exactly what they wanted, because delay is the point.  I seriously doubt that the Congress of 1998 thought its precious DMCA would be turned into a not-so-funny joke on artists, and I do remember Congressman Howard Berman (one of the House managers for the DMCA) looking like he was going to throw up during the SOPA hearings when he found out how many millions of DMCA notices YouTube alone receives.  So why would we make the same mistake again and expect a different outcome?  With the same platforms now richer beyond category? Who could possibly defend such garbage as anything but a colossal mistake?

SB 683 takes the opposite approach. It tells creators: you don’t need to find the right form—you need to find a judge.  It tells platforms: if a court says take it down, you have two days, not two months of emails, BS counter-notices, and a bad case of learned helplessness.  True, litigation is more costly than sending a DMCA notice, but litigation is far more likely to keep infringing material down, and it will not become a faux “license” the way the DMCA has.

The DMCA heralded twenty-seven years of normalized, massive, and burdensome copyright infringement, raising generations of lawyers to defend the thievery while Big Tech scooped up free-rider rents that it then used for anti-creator lobbying around the world.  It should be entirely unsurprising that all of that litigation and lobbying has led us to the current existential crisis.

Lucian Grainge’s throw-down and the emerging fault line

When Mr. Grainge spoke, he wasn’t just defending Universal’s catalog; he was drawing a perimeter against the normalization of AI exploitation and refusing to buy into an even further extension of “permissionless innovation.”

Universal’s position aligns with what California just did. While Congress toys with a federal opt-out regime for AI impersonations, Sacramento quietly passed a law grounded in judicial enforcement and personal rights.  It’s not perfect, but it’s a rejection of the “catch me if you can” ethos that has defined Silicon Valley’s relationship with artists for decades.

A job for the Attorney General

SB 683 leaves enforcement to private litigants, but the scale of AI exploitation demands public enforcement under the authority of the State.  California’s Attorney General should have explicit power to pursue pattern-or-practice actions against companies that:

– Manufacture or distribute AI-generated impersonations of deceased performers (like Sora 2’s synthetic videos).
– Monetize those impersonations through advertising or subscription revenue (like YouTube does right now with the Sora videos).
– Repackage deepfake content as “user-generated” to avoid responsibility.

Such conduct isn’t innovation—it’s unfair competition under California law. AG actions could deliver injunctions, penalties, and restitution far faster than piecemeal suits. And as readers know, I love a good RICO, so let’s put it out there that the AG should consider prosecuting the AI cabal, with its interlocking investments, under Penal Code §§ 186–186.8, known as the California Control of Profits of Organized Crime Act (CCPOCA) (h/t Seeking Alpha).

While AI platforms complain of “burdensome” and “unproductive” litigation, that’s simply not true when the defendants are the AI cabal: litigation is exactly what it took to reveal the truth about the massive piracy powering the circular AI bubble economy. Litigation has revealed that the scale of infringement by AI platforms like Anthropic and Meta is so vast that private damages are meaningless. It is increasingly clear these companies are not alone—they have relied on pirate libraries and torrent ecosystems to ingest millions of works across every creative category. Rather than whistle past the graveyard while these sites flourish, government must confront its failure to enforce basic property rights. When theft becomes systemic, private remedies collapse, and enforcement becomes a matter for the state. Even Anthropic’s $1.5 billion settlement feels hollow, and not only because the crime is so immense: the current statutory damages amounts in the US were set back in 1999 to confront…CD ripping.

AI regulation as the moment to fix the DMCA

The coming wave of AI legislation represents the first genuine opportunity in a generation to rewrite the online liability playbook.  AI and the DMCA cannot peacefully coexist—platforms will always choose whichever regime helps them keep the money.

If AI regulation inherits the DMCA’s safe harbors, nothing changes. Instead, lawmakers should take the SB 683 cue:
– Restore judicial enforcement.  
– Tie AI liability to commercial benefit. 
– Require provenance, not paperwork.  
– Authorize public enforcement.

The living–deceased gap: California’s unfinished business

SB 683 improves enforcement for living persons, but California’s § 3344.1 already protects deceased individuals against digital replicas.  That creates an odd inversion: John Coltrane’s estate can challenge an AI-generated “Coltrane tone,” but a living jazz artist cannot.  The Legislature should align the two statutes so the living and the dead share the same digital dignity.

Why this matters now

Platforms like YouTube host and monetize videos generated by AI systems such as Sora, depicting deceased performers in fake performances.  If regulators continue to rely on notice-and-takedown, those platforms will never face real risk.   They’ll simply process the takedown, re-serve the content through another channel, and cash another check.

The philosophical pivot

The DMCA taught the world that process can replace principle. SB 683 quietly reverses that lesson.  It says: a person’s identity is not an API, and enforcement should not depend on how quickly you fill out a form.

In the coming fight over AI and creative rights, that distinction matters. California’s experiment in court-centered enforcement could become the model for the next generation of digital law—where substance defeats procedure, and accountability outlives automation.

SB 683 is not a revolution, but it’s a reorientation. It abandons the DMCA’s failed paperwork culture and points toward a world where AI accountability and creator rights converge under the rule of law.

If the federal government insists on doubling down with the NO FAKES Act’s national “opt-out” registry, California may once again find itself leading by quiet example: rights first, bureaucracy last.

Ghosts in the Machine: How AI’s “Future” Runs on a 1960s Grid

The smart people want us to believe that artificial intelligence is the frontier and apotheosis of human progress. They sell it as transformative and disruptive. That’s probably true as far as it goes, but it doesn’t go that far. In practice, the infrastructure that powers it often dates back to a different era, and therein lies the paradox: much of the electricity powering AI still flows through the bones of mid‑20th century engineering. Wouldn’t it be a good thing if they innovated a new energy source before they crowd out the humans?

The Current Generation Energy Mix — And What AI Adds

To see that paradox, start with the U.S. national electricity mix:

– In 2023, the U.S. generated about 4,178 billion kWh of electricity at utility-scale facilities. Of that, 60% came from fossil fuels (coal, natural gas, petroleum, and other gases), 19% from nuclear, and 21% from renewables (wind, solar, hydro).
– Nuclear power remains the backbone of zero-carbon baseload: it supplies around 18–19% of U.S. electricity and nearly half of all non‑emitting generation.
– In 2025, clean sources (nuclear + renewables) are edging upward. According to Ember, in March 2025 fossil fuels fell below 50% of U.S. electricity generation for the first time (49.2%), marking a historic shift.
– Yet more than half of U.S. power still comes from carbon-emitting sources in most months.

Meanwhile, AI’s demand is surging:

– The Department of Energy estimates that data centers consumed 4.4% of U.S. electricity in 2023 (176 TWh) and projects this to rise to 6.7–12% by 2028 (325–580 TWh).
– An academic study of 2,132 U.S. data centers (2023–2024) found that these facilities accounted for more than 4% of national power consumption, with 56% coming from fossil sources, and emitted more than 105 million tons of CO₂e (approximately 2.18% of U.S. emissions in 2023).
– That study also concluded that data centers’ carbon intensity (CO₂ per kWh) is 48% higher than the U.S. average. (A quick arithmetic check of these figures follows below.)
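
For readers who want to check the math, here is a minimal back-of-the-envelope sketch in Python that uses only the figures quoted above; the variable names are mine, and the arithmetic is purely illustrative rather than part of any cited study.

```python
# Back-of-the-envelope check of the grid and data-center figures quoted above.
# All inputs are the numbers cited in the text (the 2023 utility-scale mix,
# the DOE data-center estimates, and the 2,132-facility academic study);
# nothing here is new data, just arithmetic on the quoted values.

US_GENERATION_2023_TWH = 4_178          # utility-scale generation, 2023
MIX_2023 = {"fossil": 0.60, "nuclear": 0.19, "renewables": 0.21}

DC_CONSUMPTION_2023_TWH = 176           # DOE estimate for data centers, 2023
DC_PROJECTION_2028_TWH = (325, 580)     # DOE projected range for 2028
DC_SHARE_2028 = (0.067, 0.12)           # DOE projected share range for 2028
DC_EMISSIONS_MT = 105                   # study: >105 MtCO2e from data centers
DC_EMISSIONS_SHARE = 0.0218             # study: ~2.18% of 2023 U.S. emissions

# 1. The generation mix in absolute terms.
for source, share in MIX_2023.items():
    print(f"{source:>10}: ~{share * US_GENERATION_2023_TWH:,.0f} TWh")

# 2. Data-center share of 2023 utility-scale generation (the DOE's 4.4%
#    headline uses a consumption denominator, so this comes out a bit lower).
share_2023 = DC_CONSUMPTION_2023_TWH / US_GENERATION_2023_TWH
print(f"Data-center share of 2023 generation: {share_2023:.1%}")

# 3. Total U.S. demand implied by the DOE's 2028 projection range.
implied_low = DC_PROJECTION_2028_TWH[0] / DC_SHARE_2028[0]
implied_high = DC_PROJECTION_2028_TWH[1] / DC_SHARE_2028[1]
print(f"Implied 2028 total demand: ~{implied_low:,.0f}-{implied_high:,.0f} TWh")

# 4. Total U.S. emissions implied by the study's 2.18% share.
print(f"Implied 2023 U.S. emissions: ~{DC_EMISSIONS_MT / DC_EMISSIONS_SHARE:,.0f} MtCO2e")
```

Run as written, the sketch lands at roughly a 4.2% data-center share of 2023 generation and an implied 2028 total demand in the neighborhood of 4,800 TWh; the small gap from the DOE’s 4.4% headline figure likely reflects a slightly different denominator (total consumption versus utility-scale generation alone), but the direction of travel is the point.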

So: AI’s power demands are no small increment—they threaten to stress a grid still anchored in older thermal technologies.

(Map: Global Data Centers, https://www.datacentermap.com)

Why “1960s Infrastructure” Isn’t Hyperbole

When I say AI is running on 1960s technology, I mean several things:

1. Thermal generation methods remain largely unchanged according to the EPA.  Coal-fired steam turbines and natural gas combined-cycle plants still dominate.

2. Many plants are aging.  The average age of coal plants in the U.S. is about 43 years; some facilities are over 60. Transmission lines and grid control systems often date from mid- to late-20th-century planning.

3. Nuclear’s modern edge is historical.  Most U.S. nuclear reactors in operation were ordered in the 1960s–1970s and built over subsequent decades. In other words, the commercial installed base is old.

The Rickover Motif: Nuclear, Legacy, and Power Politics

If you want to criticize AI’s reliance on legacy infrastructure, one powerful symbol is Admiral Hyman G. Rickover, the man often called the “Father of the Nuclear Navy.” Rickover’s work in the 1950s and 1960s not only shaped naval propulsion but also influenced the civilian nuclear sector.

Rickover pushed for rigorous engineering standards, standardization, safety protocols, and institutional discipline in building reactors. After the success of naval nuclear systems, Rickover was assigned by the Atomic Energy Commission to guide civilian nuclear power development.

Rickover famously required applicants to the nuclear submarine service to have “fixed their own car.” That speaks to technical literacy, self-reliance, and understanding systems deeply, qualities today’s AI leaders often lack. I mean seriously—can you imagine Sam Altman on a mechanic’s dolly covered in grease?

As the U.S. Navy celebrates its 250th anniversary, it’s ironic that modern AI ambitions lean on reactors whose protocols, safety cultures, and control logic remain deeply shaped by Rickover-era thinking from…yes…1947. And remember, Admiral Rickover had to transition the hidebound Navy away from diesel and onto nuclear power, which at the time was only recently discovered and not well understood. Diesel. That’s innovation, and it required a hugely entrepreneurial leader.

The Hypocrisy of Innovation Without Infrastructure

AI companies claim disruption but site data centers wherever grid power is cheapest — often near legacy thermal or nuclear plants. They promote “100% renewable” branding via offsets, but in real time pull electricity from fossil-heavy grids. Dense compute loads aggravate transmission congestion. FERC and NERC now list hyperscale data centers as emerging reliability risks. 

The energy costs AI doesn’t pay — grid upgrades, transmission reinforcement, reserve margins — are socialized onto ratepayers and bondholders. If the AI labs would like to use their multibillion dollar valuations to pay off that bond debt, that’s a conversation. But they don’t want that, just like they don’t want to pay for the copyrights they train on.

Innovation without infrastructure isn’t innovation — it’s rent-seeking. Shocking, I know…Silicon Valley engaging in rent-seeking and corporate welfare.

The 1960s Called. They Want Their Grid Back.

We cannot build the future on the bones of the past. If AI is truly going to transform the world, its promoters must stop pretending that plugging into a mid-century grid is good enough. The industry should lead on grid modernization, storage, and advanced generation, not free-ride on infrastructure our grandparents paid for.

Admiral Rickover understood that technology without stewardship is just hubris. He built a nuclear Navy because new power required new systems and new thinking. That lesson is even more urgent now.

Until it is learned, AI will remain a contradiction: the most advanced machines in human history, running on steam-age physics and Cold War engineering.