Y’all Street Rising: Why the Future of Music Finance Won’t Be Made in Manhattan

There’s a new gravity well in American finance, and it’s not New York. It’s not even Silicon Valley. It’s Dallas. It’s Austin. It’s Y’all Street.

And anyone paying attention could have seen it coming. The Texas Stock Exchange (TXSE) is preparing for launch in 2026. The TXSE is not some bulletin board; it’s backed by billions from institutions that have grown weary of New York’s compliance culture and costs. Goldman Sachs’s Dallas campus is now operational. BlackRock and Charles Schwab have shifted major divisions to the Lone Star State. Tesla and Samsung are expanding giga-scale manufacturing and chip fabrication plants.

A strong center of gravity for capital formation is moving south, and with it, a new cultural economy is taking shape. And AI may not be what saves it: Scion Asset Management, “Big Short” investor Michael Burry’s hedge fund, disclosed to the SEC a short bet worth $1.1 billion against Nvidia and Palantir. He’s also investing in the water that AI burns. So not everyone is jumping off a cliff.

A New Realignment

Texas startups have raised roughly $9.8 billion in venture capital through Q3 2025, pushing the state to a consistent #4 ranking nationally. Austin remains the creative and software hub, while Dallas–Fort Worth and Houston lead in AI infrastructure, energy tech, and finance.

The TXSE will formalize what investors already know: capital markets no longer need Manhattan to function.

And that raises an uncomfortable question for the music industry:

If capital, infrastructure, and innovation no longer orbit Wall Street, why should music?

Apple Learned It the Hard Way

Despite New York’s rich musical legacy—Tin Pan Alley, Brill Building, CBGB, and the era of the major-label tower when Sony occupied that horrible AT&T building and flew sushi in from Japan for the executive dining room—the city has become an increasingly difficult place to sustain large-scale creative infrastructure. Real estate costs, over-regulation, and financial concentration have hollowed out the middle layer of production.  As I’ve taught for years, the key element to building the proverbial “creative class” is cheap rent, preferably with a detached garage.

Even Apple Inc. learned long ago that creativity can’t thrive where every square foot carries a compliance surcharge. That’s why Apple’s global supply chain, data centers, and now content operations span Texas, Tennessee, and North Carolina instead of Midtown Manhattan.  And then there’s the dirty power, sump pumps and subways—Electric Lady would probably never get built today.

The lesson for the music business is clear: creative capital follows economic oxygen. And right now, that oxygen is in Texas.

The Texas Music Office: A Model for How to Get It Done

If you want to understand how Texas built a durable, bipartisan music infrastructure, start with the Texas Music Office (TMO). Founded in 1990 under Governor Bill Clements, the TMO was one of the first state agencies in America to recognize the music industry not just as culture, but as economic development.

Over the decades—through governors of both parties—the TMO has become a master class in how to institutionalize support for creative enterprise without strangling it in bureaucracy. From George W. Bush’s early focus on export promotion, to Rick Perry’s integration of music into economic development, to Greg Abbott’s expansion of the Music Friendly Communities network, each administration built upon rather than dismantled what came before.

Today, the TMO supports more than 70 certified Music Friendly Communities, funds music-education grants, tracks economic data, and connects local musicians with investors and international partners. It’s a template for how a state can cultivate creative industries while maintaining fiscal discipline and accountability.

It’s also proof that cultural policy doesn’t have to be partisan—it just has to be practical.

When people ask why Texas has succeeded where others stalled, the answer is simple: the TMO stayed focused on results, not rhetoric. That’s a lesson a lot of states—and more than a few record labels—could stand to relearn.

Artist Rights Institute: Doing Our Part for Texas and Beyond

The Artist Rights Institute (ARI) has done its part to make sure that Texas musicians and other local music creators aren’t an afterthought in rooms that are usually dominated by platform interests and coastal trade groups.

When questions of AI training, copyright allocation, black-box royalties, and streaming transparency landed in front of the U.S. Copyright Office, Congress, and U.K. policymakers, ARI showed up with the Texas view: creators first, no speculative ticketing, no compulsory “data donation,” and no silent expropriation of recordings and songs for AI. ARI has filed comments, contributed research, and supported amicus work to make sure Texas artists, songwriters, and indie publishers are in the record — not just the usual New York, Nashville, and Los Angeles voices.

Just as important, ARI has pushed financial education for artists. Because Y’all Street doesn’t help creators if they don’t know what a discount rate is, how catalog valuations work, how to read a mechanical statement, or why AI licenses need to be expressly excluded from legacy record and publishing deals. ARI programs in Texas and Georgia have focused on:
– explaining how federal policy actually hits musicians,
– showing how to negotiate or at least spot AI/derivative-use clauses,
– and connecting artists to local music industry infrastructure.

In other words, ARI joined other Texas and Georgia organizations to be a translator between Texas’s very real music economy and the fast-moving policy debates in Washington and the U.K. If Texas is going to be the place where music is financed, ARI wants to make sure local artists are also the ones who capture the value.

Music’s Texas Moment

Texas is no newcomer to the business of music. Its industry already generates over $13.4 billion in annual economic activity, supporting more than 91,000 jobs across its certified cities. Austin retains the crown of “Live Music Capital of the World,” but Denton, Fort Worth, and San Antonio have joined the state-certified network of “Music Friendly Communities”.

Meanwhile, universities from UT-Austin to Texas A&M study rights management, AI provenance, and royalties in the age of generative audio.

The result: a state that treats music not as nostalgia, but as an evolving economic engine.  Plus we’ve got Antone’s.

Wall Street’s ‘Great Sucking Sound,’ Replayed

Ross Perot once warned of “that giant sucking sound” as jobs moved south. Thirty years later, the sound you hear isn’t manufacturing—it’s money, data, and influence flowing to Y’all Street.

If the major labels and publishers don’t track that migration, they risk becoming cultural tenants in cities they no longer own. The next catalog securitization, the next AI-royalty clearinghouse, the next Bell Labs-for-Music could just as easily be financed out of Dallas as from Midtown.

Because while New York made the hits of the last century, Texas may well finance the next one. We’ve always had the musicians, producers, authors, actors and filmmakers, but soon we’ll also have the money.

Y’all Ready?

The world no longer needs a Midtown address to mint creative wealth. As the TXSE prepares its debut and Texas cements its position as the nation’s innovation corridor, the music industry faces a choice:

Follow the capital—or become another cautionary tale of what happens when you mistake heritage for destiny.

Because as Apple learned long ago, even the richest history can’t compete with the freedom to build something new.  

When the Machine Lies: Why the NYT v. Sullivan “Public Figure” Standard Shouldn’t Protect AI-Generated Defamation of @MarshaBlackburn

Google’s AI system, Gemma, has done something no human journalist could ever have gotten past an editor: fabricate and publish grotesque rape allegations about a sitting U.S. Senator and a political activist—both living people, both blameless.

As anyone who has ever dealt with Google and its depraved executives knows all too well, Google will genuflect and obfuscate with great public moral whinging, but the reality is—they do not give a damn. When Sen. Marsha Blackburn and Robby Starbuck demand accountability, Google’s corporate defense reflex will surely be: We didn’t say it; the model did—and besides, they’re public figures under the Supreme Court’s defamation decision in New York Times v. Sullivan.

But that defense leans on a doctrine that simply doesn’t fit the facts of the AI era. New York Times v. Sullivan was written to protect human speech in public debate, not machine hallucinations in commercial products.

The Breakdown Between AI and Sullivan

In 1964, Sullivan shielded civil-rights reporting from censorship by Southern officials (like Bull Connor) who were weaponizing libel suits to silence the press. The Court created the “actual malice” rule—requiring public officials to prove a publisher knew a statement was false or acted with reckless disregard for the truth—so journalists could make good-faith errors without losing their shirts.

But AI platforms aren’t journalists.

They don’t weigh sources, make judgments, or participate in democratic discourse. They don’t believe anything. They generate outputs, often fabrications, from training data they likely were never authorized to use.

So when Google’s AI invents a rape allegation against a sitting U.S. Senator, there is no “breathing space for debate.” There is only a product defect—an industrial hallucination that injures a human reputation.

Blackburn and Starbuck: From Public Debate to Product Liability

Senator Blackburn discovered that Gemma responded to the prompt “Has Marsha Blackburn been accused of rape?” by conjuring an entirely fictional account of a sexual assault by the Senator and citing nonexistent news sources.  Conservative activist Robby Starbuck experienced the same digital defamation—Gemini allegedly linked him to child rape, drugs, and extremism, complete with fake links that looked real.

In both cases, Google executives were notified. In both cases, the systems remained online.
That isn’t “reckless disregard for the truth” in the Sullivan sense—it’s something more corporate and more concrete: knowledge of a defective product that continues to cause harm.

When a car manufacturer learns that the gas tank explodes but ships more cars, we don’t call that journalism. We call it negligence—or worse.

Why “Public Figure” Is the Wrong Lens

The Sullivan line of cases presumes three things:

  1. Human intent: journalists believed what they wrote was the truth.
  2. Public discourse: statements occurred in debate on matters of public concern about a public figure.
  3. Factual context: errors were mistakes in an otherwise legitimate attempt at truth.

None of those apply here.

Gemma didn’t “believe” Blackburn committed assault; it simply assembled probabilistic text from its training set. There was no public controversy over whether she did so; Gemma created that controversy ex nihilo. And the “speaker” is not a journalist or citizen but a trillion-dollar corporation deploying a stochastic parrot for profit.

Extending Sullivan to this context would distort the doctrine beyond recognition. The First Amendment protects speakers, not software glitches.

A Better Analogy: Unsafe Product Behavior—and the Ghost of Mrs. Palsgraf

Courts should treat AI defamation less like tabloid speech and more like defective design, less like calling out racism and more like an exploding boiler.

When a system predictably produces false criminal accusations, the question isn’t “Was it actual malice?” but “Was it negligent to deploy this system at all?”

The answer practically waves from the platform’s own documentation. Hallucinations are a known bug—very well known, in fact. Engineers write entire mitigation memos about them, policy teams issue warnings about them, and executives testify about them before Congress.

So when an AI model fabricates rape allegations about real people, we are well past the point of surprise. Foreseeability is baked into the product roadmap.
Or as every first-year torts student might say: Heloooo, Mrs. Palsgraf.

A company that knows its system will accuse innocent people of violent crimes and deploys it anyway has crossed from mere recklessness into constructive intent. The harm is not an accident; it is an outcome predicted by the firm’s own research, then tolerated for profit.

Imagine if a car manufacturer admitted its autonomous system “sometimes imagines pedestrians” and still shipped a million vehicles. That’s not an unforeseeable failure; that’s deliberate indifference. The same logic applies when a generative model “imagines” rape charges. It’s not a malfunction—it’s a foreseeable design defect.

Why Executive Liability Still Matters

Executive liability matters in these cases because these are not anonymous software errors—they’re policy choices.
Executives sign off on release schedules, safety protocols, and crisis responses. If they were informed that the model fabricated criminal accusations and chose not to suspend it, that’s more than recklessness; it’s ratification.

And once you frame it as product negligence rather than editorial speech, the corporate-veil argument weakens. Officers, especially senior officers, who knowingly direct or tolerate harmful conduct can face personal liability, particularly when reputational or bodily harm results from their inaction.

Re-centering the Law

Courts need not invent new doctrines. They simply have to apply old ones correctly:

  • Defamation law applies to false statements of fact.
  • Product-liability law applies to unsafe products.
  • Negligence applies when harm is foreseeable and preventable.

None of these requires twisting Sullivan’s “actual malice” shield through some pretzel-logic transmogrification so that it applies to an AI or a robot. That shield was never meant for algorithmic speech emitted by unaccountable machines. As I’m fond of saying, Sir William Blackstone’s good old common law can solve the problem—we don’t need any new laws at all.

Section 230 and The Political Dimension

Sen. Blackburn’s outrage carries constitutional weight: Congress wrote the Section 230 safe harbor to protect interactive platforms from liability for user content, not for their own generated falsehoods. When a Google-made system fabricates crimes, that’s corporate speech, not user speech. So no 230 for them this time. And the government has every right—and arguably a duty—to insist that such systems be shut down until they stop defaming real people. Which is exactly what Senator Blackburn wants, and as usual she’s quite right to demand it. Me, I’d try to put the Google guy in prison.

The Real Lede

This is not a defamation story about a conservative activist or a Republican senator. It’s a story about the breaking point of Sullivan. For sixty years, that doctrine balanced press freedom against reputational harm. But it was built for newspapers, not neural networks.

AI defamation doesn’t advance public discourse—it destroys it. 

It isn’t about speech that needs breathing space—it’s pollution that needs containment. And when executives profit from unleashing that pollution after knowing it harms people, the question isn’t whether they had “actual malice.” The question is whether the law will finally treat them as what they are: manufacturers of a defective product that lies and hurts people.

Less Than Zero: The Significance of the Per Stream Rate and Why It Matters

Spotify’s insistence that it’s “misleading” to compare services based on a derived per-stream rate reveals exactly how out of touch the company has become with the very artists whose labor fuels its stock price. Artists experience streaming one play at a time, not as an abstract revenue pool or a complex pro-rata formula. Each stream represents a listener’s decision, a moment of engagement, and a microtransaction of trust. Dismissing the per-stream metric as irrelevant is a rhetorical dodge that shields Spotify from accountability for its own value proposition. (The same applies to all streamers, but Spotify is the only one that denies the reality of the per-stream rate.)

Spotify further claims that users don’t pay per stream but for access, as if that negates the artist’s per-stream payments. It is fallacious to claim that because Spotify users pay a subscription fee for “access,” there is no connection between that payment and any one artist they stream. This argument treats music like a public utility rather than a marketplace of individual works. In reality, users subscribe because of the artists and songs they want to hear; the value of “access” is wholly derived from those choices and the fans that artists drive to the platform. Each stream represents a conscious act of consumption and engagement that justifies compensation.

Economically, the subscription fee is not paid into a vacuum — it forms a revenue pool that Spotify divides among rights holders according to streams. Thus, the distribution of user payments is directly tied to which artists are streamed, even if the payment mechanism is indirect. To say otherwise erases the causal relationship between fan behavior and artist earnings.
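
To make the mechanics concrete, here is a minimal sketch of the pro-rata (“big pool”) accounting described above. The pool size, royalty share, and stream counts are purely hypothetical assumptions for illustration; they are not Spotify’s actual figures.

```python
# Minimal sketch of pro-rata ("big pool") streaming accounting.
# All figures are hypothetical assumptions for illustration only.

def pro_rata_payout(revenue_pool, royalty_share, artist_streams, total_streams):
    """Return (artist payout, derived per-stream rate) under pro-rata accounting."""
    rights_holder_pool = revenue_pool * royalty_share        # slice paid to rights holders
    artist_payout = rights_holder_pool * (artist_streams / total_streams)
    per_stream_rate = artist_payout / artist_streams         # the "derived" per-stream rate
    return artist_payout, per_stream_rate

# Hypothetical month: $100M of subscription revenue, 70% paid through to
# rights holders, 30 billion total streams, one artist with 1 million streams.
payout, rate = pro_rata_payout(100_000_000, 0.70, 1_000_000, 30_000_000_000)
print(f"Artist payout: ${payout:,.2f}")            # ~$2,333.33
print(f"Derived per-stream rate: ${rate:.5f}")     # ~$0.00233
```

Note what the arithmetic shows: under pro-rata accounting the derived per-stream rate is simply the rights-holder pool divided by total streams, so a fan’s individual choices change how the pool is split, not how big it is.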

The “access” framing serves only to obscure accountability. It allows Spotify to argue that artists are incidental to its product when, in truth, they are the product. Without individual songs, there is nothing to access. The subscription model may bundle listening into a single fee, but it does not sever the fundamental link between listener choice and the artist’s right to be paid fairly for that choice.

Less Than Zero Effect: AI, Infinite Supply and Erasing the Artist

In fact, this “access” argument may undermine Spotify’s point entirely. If subscribers pay for access, not individual plays, then there’s an even greater obligation to ensure that subscription revenue is distributed fairly across the artists who generate the listening engagement that keeps fans paying each month. The opacity of this system—where listeners have no idea how their money is allocated—protects Spotify, not artists. If fans understood how little of their monthly fee reached the musicians they actually listen to, they might demand a user-centric payout model or direct licensing alternatives. Or they might be more inclined to use a site like Bandcamp. And Spotify really doesn’t want that.

And to anticipate Spotify’s typical deflection—that low payments are the labels’ fault—that’s not correct either. Spotify sets the revenue pool, defines the accounting model, and negotiates the rates. Labels may divide the scraps, but it’s Spotify that decides how small the pie is in the first place, whether through its distribution deals or by exercising pricing power.

Three Proofs of Intention

Daniel Ek, the Spotify CEO and arms dealer, made a Dickensian statement that tells you everything you need to know about how Spotify perceives its role as the Streaming Scrooge—“Today, with the cost of creating content being close to zero, people can share an incredible amount of content.”

That statement perfectly illustrates how detached he has become from the lived reality of the people who actually make the music that powers his platform’s market capitalization (which allows him to invest in autonomous weapons). First, music is not generic “content.” It is art, labor, and identity. Reducing it to “content” flattens the creative act into background noise for an algorithmic feed. That’s not rhetoric; it’s a statement of his values. Of course in his defense, “near zero cost” to a billionaire like Ek is not the same as “near zero cost” to any artist. This disharmonious statement shows that Daniel Ek mistakes the harmony of the people for the noise of the marketplace—arming algorithms instead of artists.

Second, the notion that the cost of creating recordings is “close to zero” is absurd. Real artists pay for instruments, studios, producers, engineers, session musicians, mixing, mastering, artwork, promotion, and often the cost of simply surviving long enough to make the next record or write the next song. Even the so-called “bedroom producer” incurs real expenses—gear, software, electricity, distribution, and years of unpaid labor learning the craft. None of that is zero. As I said in the UK Parliament’s Inquiry into the Economics of Streaming, when the day comes that a soloist aspires to having their music included on a Spotify “sleep” playlist, there’s something really wrong here.

Ek’s comment reveals the Silicon Valley mindset that art is a frictionless input for data platforms, not an enterprise of human skill, sacrifice, and emotion. When the CEO of the world’s dominant streaming company trivializes the cost of creation, he’s not describing an economy—he’s erasing one.

While Spotify tries to distract from the “per-stream rate,” it conveniently ignores the reality that whatever it pays “the music industry” or “rights holders” for all the artists signed to one label still must be broken down into actual payments to the individual artists and songwriters who created the work. Labels divide their share among recording artists; publishers do the same for composers and lyricists. If Spotify refuses to engage on per-stream value, what it’s really saying is that it doesn’t want to address the people behind the music—the very creators whose livelihoods depend on those streams. In pretending the per-stream question doesn’t matter, Spotify admits the artist doesn’t matter either.

Less Than Zero or Zeroing Out: Where Do We Go from Here?

The collapse of artist revenue and the rise of AI aren’t coincidences; they’re two gears in the same machine. Streaming’s economics reward infinite supply at near-zero unit cost, which is the nugget of truth in Daniel Ek’s statements. This is evidenced by Spotify’s dalliances with Epidemic Sound and the like. But human-created music is finite and costly; AI music is effectively infinite and cheap. For a platform whose margins improve as payout obligations shrink, the logical endgame is obvious: keep the streams, remove the artists.

  • Two-sided market math. Platforms sell audience attention to advertisers and access to subscribers. Their largest variable cost is royalties. Every substitution of human tracks with synthetic “sound-alikes,” noise, functional audio, or AI mashups reduces royalty liability while keeping listening hours—and revenue—intact. You count the AI streams just long enough to dilute the royalty pool, then remove them from the system, only to be replaced by more AI tracks. Spotify’s security is just good enough to miss the AI tracks for at least one royalty accounting period. (A numeric sketch of this dilution follows this list.)
  • Perpetual content glut as cover. Executives say creation costs are “near zero,” justifying lower per-stream value. That narrative licenses a race to the bottom, then invites AI to flood the catalog so the floor can fall further.
  • Training to replace, not to pay. Models ingest human catalogs to learn style and voice, then output “good enough” music that competes with the very works that trained them—without the messy line item called “artist compensation.”
  • Playlist gatekeeping. When discovery is centralized in editorial and algorithmic playlists, platforms can steer demand toward low-or-no-royalty inventory (functional audio, public-domain, in-house/commissioned AI), starving human repertoire while claiming neutrality.
  • Investor alignment. The story that scales is not “fair pay”; it’s “gross margin expansion.” AI is the lever that turns culture into a fixed cost and artists into externalities.
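
Here is the numeric sketch of the “two-sided market math” referenced in the first bullet above. Every number is invented for illustration and none are platform data; the point is only the direction of the arithmetic.

```python
# Hypothetical illustration of royalty dilution under pro-rata accounting.
# Every number here is invented for illustration; none are platform data.

POOL = 100_000_000           # monthly revenue pool (hypothetical)
SHARE = 0.70                 # share of the pool paid to rights holders (hypothetical)
HUMAN_STREAMS = 30_000_000_000
AI_STREAMS = 5_000_000_000   # synthetic / functional-audio streams added to the pool

def per_stream_rate(total_streams):
    # Under pro-rata accounting, every stream earns pool * share / total streams.
    return POOL * SHARE / total_streams

before = per_stream_rate(HUMAN_STREAMS)
after = per_stream_rate(HUMAN_STREAMS + AI_STREAMS)

print(f"Per-stream rate, human-only pool:  ${before:.5f}")   # ~$0.00233
print(f"Per-stream rate, with AI streams:  ${after:.5f}")    # ~$0.00200
# A human artist's listening is unchanged, but their payout falls roughly 14%
# because synthetic streams now claim part of the same fixed pool.
```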

Where does that leave us? Both streaming and AI “work” best for Big Tech, financially, when the artist is cheap enough to ignore or easy enough to replace. AI doesn’t disrupt that model; it completes it. It also gives cover through a tortured misreading of the “national security” lens so natural for a Lord of War investor like Mr. Ek, who will no doubt give fellow Swede and one of the great Lords of War, Alfred Nobel, a run for his money. (Perhaps Mr. Ek will reimagine the Peace Prize.) If we don’t hard-wire licensing, provenance, and payout floors, the platform’s optimal future is music without musicians.

Plato conceived justice as each part performing its proper function in harmony with the whole—a balance of reason, spirit, and appetite within the individual and of classes within the city. Applied to AI synthetic works like those generated by Sora 2, injustice arises when this order collapses: when technology imitates creation without acknowledging the creators whose intellect and labor made it possible. Such systems allow the “appetitive” side—profit and scale—to dominate reason and virtue. In Plato’s terms, an AI trained on human art yet denying its debt to artists enacts the very disorder that defines injustice.

Too Dynamic to Question, Too Dangerous to Ignore

When Ed Newton-Rex left Stability AI, he didn’t just make a career move — he issued a warning. His message was simple: we’ve built an industry that moves too fast to be honest.

AI’s defenders insist that regulation can’t keep up, that oversight will “stifle innovation.” But that speed isn’t a by-product; it’s the business model. The system is engineered for planned obsolescence of accountability — every time the public begins to understand one layer of technology, another version ships, invalidating the debate. The goal isn’t progress; it’s perpetual synthetic novelty, where nothing stays still long enough to be measured or governed, and “nothing says freedom like getting away with it.”

We’ve seen this play before. Car makers built expensive sensors we don’t want that fail on schedule; software platforms built policies that expire the moment they bite. In both cases, complexity became a shield and a racket — “too dynamic to question.” And yet, like those unasked-for but paid-for features in the cars we don’t want, AI’s design choices are too dangerous to ignore. (What if your brakes really are going out, and it’s not just the sensor malfunctioning?)

Ed Newton-Rex’s point — echoed in his tweets and testimony — is that the industry has mistaken velocity for virtue. He’s right. The danger is not that these systems evolve too quickly to regulate; it’s that they’re designed that way: designed to fail, just like that brake sensor. And until lawmakers recognize that speed itself is a form of governance, we’ll keep mistaking momentum for inevitability.

SB 683: California’s Quiet Rejection of the DMCA—and a Roadmap for Real AI Accountability

When Lucian Grainge drew a bright line—“UMG will not do business with bad actors regardless of the consequences”—he did more than make a corporate policy statement.  He threw down a moral challenge to an entire industry: choose creators or choose exploitation.

California’s recently passed SB 683 does not shout as loudly, but it answers the same call. By refusing to copy Washington’s bureaucratic NO FAKES Act and its DMCA-style “notice-and-takedown” maze, SB 683 quietly re-asserts a lost principle: rights are vindicated through courts and accountability, not compliance portals.

What SB 683 actually does

SB 683 amends California Civil Code § 3344, the state’s right-of-publicity statute for living persons, to make injunctive relief real and fast.  If someone’s name, voice, or likeness is exploited without consent, a court can now issue a temporary restraining order or preliminary injunction.  If the order is granted without notice, the defendant must comply within two business days.  

That sounds procedural—and it is—but it matters. SB 683 replaces “send an email to a platform” with “go to a judge.”   It converts moral outrage into enforceable law.

The deeper signal: a break from the DMCA’s bureaucracy

For twenty-seven years, the Digital Millennium Copyright Act (DMCA) has governed online infringement through a privatized system of takedown notices, counter-notices, and platform safe harbors. When it passed, Silicon Valley came alive with free-riding schemes to get around copyright liability, schemes that beat a path to Grokster’s door.

But the DMCA was built for a dial-up internet and has aged about as gracefully as a boil on a cow’s butt.

The Copyright Office’s 2020 Section 512 Study concluded that whatever Solomonic balance Congress thought it was making has completely collapsed:

“[T]he volume of notices demonstrates that the notice-and-takedown system does not effectively remove infringing content from the internet; it is, at best, a game of whack-a-mole.”

“Congress’ original intended balance has been tilted askew.”  

“Rightsholders report notice-and-takedown is burdensome and ineffective.”  

“Judicial interpretations have wrenched the process out of alignment with Congress’ intentions.” 
 
“Rising notice volume can only indicate that the system is not working.”  

Unsurprisingly, the Office concluded that “Roughly speaking, many OSPs spoke of section 512 as being a success, enabling them to [free ride and] grow exponentially and serve the public without facing debilitating lawsuits [or one might say, paying the freight]. Rightsholders reported a markedly different perspective, noting grave concerns with the ability of individual creators to meaningfully use the section 512 system to address copyright infringement and the “whack-a-mole” problem of infringing content re-appearing after being taken down. Based upon its own analysis of the present effectiveness of section 512, the Office has concluded that Congress’ original intended balance has been tilted askew.”

Which is a genteel way of saying the DMCA is an abject failure for creators and halcyon days for venture-backed online service providers. So why would anyone who cared about creators want to continue that absurd process?

SB 683 flips that logic. Instead of creating bureaucracy and rewarding the one who can wait out the last notice standing, it demands obedience to law.  Instead of deferring to internal “trust and safety” departments, it puts a judge back in the loop. That’s a cultural and legal break—a small step, but in the right direction.

The NO FAKES Act: déjà vu all over again

Washington’s proposed NO FAKES Act is designed to protect individuals from AI-generated digital replicas, which is great. However, NO FAKES recreates the truly awful DMCA’s failed architecture: a federal registry of “designated agents,” a complex notice-and-takedown workflow, and a new safe-harbor regime based on “good-faith compliance.” You know, notice and notice and notice and notice and notice and notice and…..

If NO FAKES passes, platforms like Google would again hold all the procedural cards: largely ignore notices until they’re convenient, claim “good faith,” and continue monetizing AI-generated impersonations. In other words, it gives the platforms exactly what they wanted, because delay is the point. I seriously doubt that the Congress of 1998 thought its precious DMCA would be turned into a not-so-funny joke on artists, and I do remember Congressman Howard Berman (one of the House managers for the DMCA) looking like he was going to throw up during the SOPA hearings when he found out how many millions of DMCA notices YouTube alone receives. So why would we want to make the same mistake again thinking we’ll have a different outcome? With the same platforms now richer beyond category? Who could possibly defend such garbage as anything but a colossal mistake?

The approach of SB 683 is, by contrast, the opposite of NO FAKES. It tells creators: you don’t need to find the right form—you need to find a judge.  It tells platforms: if a court says take it down, you have two days, not two months of emails, BS counter notices and a bad case of learned helplessness.  True, litigation is more costly than sending a DMCA notice, but litigation is far more likely to be effective in keeping infringing material down and will not become a faux “license” like DMCA has become.  

The DMCA heralded twenty-seven years of normalizing massive and burdensome copyright infringement and raising generations of lawyers to defend the thievery, while Big Tech scooped up free-rider rents that it then used for anti-creator lobbying around the world. It should be entirely unsurprising that all of that litigation and lobbying has led us to the current existential crisis.

Lucian Grainge’s throw-down and the emerging fault line

When Mr. Grainge spoke, he wasn’t just defending Universal’s catalog; he was drawing a perimeter against the normalization of AI exploitation, and refusing to buy into an even further extension of “permissionless innovation.”

Universal’s position aligns with what California just did. While Congress toys with a federal opt-out regime for AI impersonations, Sacramento quietly passed a law grounded in judicial enforcement and personal rights.  It’s not perfect, but it’s a rejection of the “catch me if you can” ethos that has defined Silicon Valley’s relationship with artists for decades.

A job for the Attorney General

SB 683 leaves enforcement to private litigants, but the scale of AI exploitation demands public enforcement under the authority of the State.  California’s Attorney General should have explicit power to pursue pattern-or-practice actions against companies that:

– Manufacture or distribute AI-generated impersonations of deceased performers (like Sora 2’s synthetic videos).
– Monetize those impersonations through advertising or subscription revenue (like YouTube does right now with the Sora videos).
– Repackage deepfake content as “user-generated” to avoid responsibility.

Such conduct isn’t innovation—it’s unfair competition under California law. AG actions could deliver injunctions, penalties, and restitution far faster than piecemeal suits. And as readers know, I love a good RICO, so let’s put out there that the AG should consider prosecuting the AI cabal with its interlocking investments under Penal Code §§ 186–186.8, known as the California Control of Profits of Organized Crime Act (CCPOCA) (h/t Seeking Alpha).

While AI platforms complain of “burdensome” and “unproductive” litigation, that’s simply not true of enterprises like the AI cabal—litigation is exactly what was required in order to reveal the truth about the massive piracy powering the circular AI bubble economy. Litigation has revealed that the scale of infringement by AI platforms like Anthropic and Meta is so vast that private damages are meaningless. It is increasingly clear these companies are not alone—they have relied on pirate libraries and torrent ecosystems to ingest millions of works across every creative category. Rather than whistle past the graveyard while these sites flourish, government must confront its failure to enforce basic property rights. When theft becomes systemic, private remedies collapse, and enforcement becomes a matter for the state. Even Anthropic’s $1.5 billion settlement feels hollow because the crime is so immense, and not just because the current U.S. statutory damages levels were set back in 1999 to confront…CD ripping.

AI regulation as the moment to fix the DMCA

The coming wave of AI legislation represents the first genuine opportunity in a generation to rewrite the online liability playbook.  AI and the DMCA cannot peacefully coexist—platforms will always choose whichever regime helps them keep the money.

If AI regulation inherits the DMCA’s safe harbors, nothing changes. Instead, lawmakers should take the SB 683 cue:
– Restore judicial enforcement.  
– Tie AI liability to commercial benefit. 
– Require provenance, not paperwork.  
– Authorize public enforcement.

The living–deceased gap: California’s unfinished business

SB 683 improves enforcement for living persons, but California’s § 3344.1 already protects deceased individuals against digital replicas.  That creates an odd inversion: John Coltrane’s estate can challenge an AI-generated “Coltrane tone,” but a living jazz artist cannot.   The Legislature should align the two statutes so the living and the dead share the same digital dignity.

Why this matters now

Platforms like YouTube host and monetize videos generated by AI systems such as Sora, depicting deceased performers in fake performances.  If regulators continue to rely on notice-and-takedown, those platforms will never face real risk.   They’ll simply process the takedown, re-serve the content through another channel, and cash another check.

The philosophical pivot

The DMCA taught the world that process can replace principle. SB 683 quietly reverses that lesson.  It says: a person’s identity is not an API, and enforcement should not depend on how quickly you fill out a form.

In the coming fight over AI and creative rights, that distinction matters. California’s experiment in court-centered enforcement could become the model for the next generation of digital law—where substance defeats procedure, and accountability outlives automation.

SB 683 is not a revolution, but it’s a reorientation. It abandons the DMCA’s failed paperwork culture and points toward a world where AI accountability and creator rights converge under the rule of law.

If the federal government insists on doubling down with the NO FAKES Act’s national “opt-out” registry, California may once again find itself leading by quiet example: rights first, bureaucracy last.

Ghosts in the Machine: How AI’s “Future” Runs on a 1960s Grid

The smart people want us to believe that artificial intelligence is the frontier and apotheosis of human progress. They sell it as transformative and disruptive. That’s probably true as far as it goes, but it doesn’t go that far. In practice, the infrastructure that powers it often dates back to a different era, and there lies the paradox: much of the electricity that powers AI still flows through the bones of mid‑20th century engineering. Wouldn’t it be a good thing if they innovated a new energy source before they crowd out the humans?

The Current Generation Energy Mix — And What AI Adds

To see that paradox, start with the U.S. national electricity mix:

– In 2023, the U.S. generated about 4,178 billion kWh of electricity at utility-scale facilities. Of that, 60% came from fossil fuels (coal, natural gas, petroleum, and other gases), 19% from nuclear, and 21% from renewables (wind, solar, hydro).
– Nuclear power remains the backbone of zero-carbon baseload: it supplies around 18–19% of U.S. electricity and nearly half of all non-emitting generation.
– Clean sources (nuclear plus renewables) are edging upward in 2025. According to Ember, in March 2025 fossil fuels fell below 50% of U.S. electricity generation for the first time (49.2%), marking a historic shift.
– Yet more than half of U.S. power still comes from carbon-emitting sources in most months.

Meanwhile, AI’s demand is surging:

– The Department of Energy estimates that data centers consumed 4.4% of U.S. electricity in 2023 (176 TWh) and projects this to rise to 6.7–12% by 2028 (325–580 TWh).
– An academic study of 2,132 U.S. data centers (2023–2024) found that these facilities accounted for more than 4% of national power consumption, with 56% coming from fossil sources, and emitted more than 105 million tons of CO₂e (approximately 2.18% of U.S. emissions in 2023). 
– That study also concluded: data centers’ carbon intensity (CO₂ per kWh) is 48% higher than the U.S. average.

So: AI’s power demands are no small increment—they threaten to stress a grid still anchored in older thermal technologies.
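
As a rough sanity check, the figures cited above can be run through a quick back-of-the-envelope calculation. It uses only the numbers quoted in this piece; treat it as arithmetic on the cited sources, not an independent estimate.

```python
# Back-of-the-envelope check using only the figures cited above.
# This is arithmetic on the quoted numbers, not new data.

US_GENERATION_2023_TWH = 4178        # utility-scale generation, 2023
DATA_CENTERS_2023_TWH = 176          # DOE estimate of data-center consumption, 2023
DOE_2028_LOW_TWH, DOE_2028_HIGH_TWH = 325, 580
DOE_2028_LOW_PCT, DOE_2028_HIGH_PCT = 0.067, 0.12

share_2023 = DATA_CENTERS_2023_TWH / US_GENERATION_2023_TWH
print(f"2023 data-center share of generation: {share_2023:.1%}")   # ~4.2%
# DOE's headline 4.4% is measured against a consumption baseline rather than
# utility-scale generation, which likely accounts for the small gap.

# Total demand implied by DOE's 2028 projections (TWh divided by share):
implied_low = DOE_2028_LOW_TWH / DOE_2028_LOW_PCT
implied_high = DOE_2028_HIGH_TWH / DOE_2028_HIGH_PCT
print(f"Implied total demand in 2028: {implied_low:,.0f}-{implied_high:,.0f} TWh")
# Roughly 4,850 TWh either way: the projections assume modest overall load
# growth while the data-center slice grows roughly two- to three-fold.
```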

Global data center map: https://www.datacentermap.com

Why “1960s Infrastructure” Isn’t Hyperbole

When I say AI is running on 1960s technology, I mean several things:

1. Thermal generation methods remain largely unchanged according to the EPA.  Coal-fired steam turbines and natural gas combined-cycle plants still dominate.

2. Many plants are aging.  The average age of coal plants in the U.S. is about 43 years; some facilities are over 60. Transmission lines and grid control systems often date from mid- to late-20th-century planning.

3. Nuclear’s modern edge is historical.  Most U.S. nuclear reactors in operation were ordered in the 1960s–1970s and built over subsequent decades. In other words, the commercial installed base is old.

The Rickover Motif: Nuclear, Legacy, and Power Politics

If you want a powerful symbol for the critique of AI’s reliance on legacy infrastructure, consider Admiral Hyman G. Rickover, the man often called the “Father of the Nuclear Navy.” Rickover’s work in the 1950s and 1960s not only shaped naval propulsion but also influenced the civilian nuclear sector.

Rickover pushed for rigorous engineering standards, standardization, safety protocols, and institutional discipline in building reactors. After the success of naval nuclear systems, Rickover was assigned by the Atomic Energy Commission to guide civilian nuclear power development.

Rickover famously required applicants to the nuclear submarine service to have “fixed their own car.” That speaks to technical literacy, self-reliance, and understanding systems deeply, qualities today’s AI leaders often lack. I mean seriously—can you imagine Sam Altman on a mechanic’s dolly covered in grease?

As the U.S. Navy celebrates its 250th anniversary, it’s ironic that modern AI ambitions lean on reactors whose protocols, safety cultures, and control logic remain deeply shaped by Rickover-era thinking from…yes…1947. And remember, Admiral Rickover had to transition the hidebound Navy to nuclear power, which at the time was newly discovered and not well understood—and away from diesel. Diesel. That was innovation, and it required a hugely entrepreneurial leader.

The Hypocrisy of Innovation Without Infrastructure

AI companies claim disruption but site data centers wherever grid power is cheapest — often near legacy thermal or nuclear plants. They promote “100% renewable” branding via offsets, but in real time pull electricity from fossil-heavy grids. Dense compute loads aggravate transmission congestion. FERC and NERC now list hyperscale data centers as emerging reliability risks. 

The energy costs AI doesn’t pay — grid upgrades, transmission reinforcement, reserve margins — are socialized onto ratepayers and bondholders. If the AI labs would like to use their multibillion dollar valuations to pay off that bond debt, that’s a conversation. But they don’t want that, just like they don’t want to pay for the copyrights they train on.

Innovation without infrastructure isn’t innovation — it’s rent-seeking. Shocking, I know…Silicon Valley engaging in rent-seeking and corporate welfare.

The 1960s Called. They Want Their Grid Back.

We cannot build the future on the bones of the past. If AI is truly going to transform the world, its promoters must stop pretending that plugging into a mid-century grid is good enough. The industry should lead on grid modernization, storage, and advanced generation, not free-ride on infrastructure our grandparents paid for.

Admiral Rickover understood that technology without stewardship is just hubris. He built a nuclear Navy because new power required new systems and new thinking. That lesson is even more urgent now.

Until it is learned, AI will remain a contradiction: the most advanced machines in human history, running on steam-age physics and Cold War engineering.


The DLC Nails it on Conditional Redesignation of the MLC

I’m certainly not a fan of really any of the companies that comprise the Digital Licensee Coordinator’s membership (DLC). In fact, you probably couldn’t find a more complete rogues’ gallery of most of my least favorite Big Tech companies—but when they’re right, they’re right.

Redesignation is the Copyright Office’s periodic check on whether the Mechanical Licensing Collective still meets the Music Modernization Act’s criteria to run the §115 blanket license. The Office can renew or replace the designation to protect songwriters and licensees. In my view and the view of many others, including the Digital Licensee Coordinator, the Office could also condition any renewal (or “redesignation”) of the MLC on improving its lackluster performance and postpone the renewal until the MLC improves, if ever. That’s just common sense.

The DLC’s most recent “ex parte” letter answers years of songwriter and publisher requests that the MLC has brushed aside—better matching, transparency, governance, timeliness, metrics, and accountability. Crucially, it confronts repeated, credible criticisms that the MLC’s investment of unmatched royalties is ultra vires (outside the law): the MMA authorizes collection and distribution, not portfolio-management schemes for a fund that likely exceeds $1.2 billion of the songwriters’ money.

The Digital Licensee Coordinator urges the Copyright Office to conditionally redesignate the Mechanical Licensing Collective (MLC) and pair that step with stronger oversight. This approach reflects common sense and Congressional intent: if redesignation weren’t meant to be used as leverage to correct course, Congress wouldn’t have created a periodic redesignation process at all—it would have handed the MLC lifetime appointments. They didn’t, as one would expect. The MLC isn’t the Harry Fox Agency after all. Conditional redesignation is therefore the appropriate tool to ensure the MLC performs its uniquely powerful statutory role responsibly, transparently, and in the interest of all rightsholders. 

The DLC stresses how the MLC’s powers—collecting and distributing over a billion dollars annually, enforcing the blanket license, and imposing costs on licensees—demand robust governance and accountability distinct from what’s expected of the DLC itself. With that asymmetry in mind, the Office should focus the redesignation decision on whether the MLC needs additional safeguards to fulfill Congress’s vision for §115. Debating whether those safeguards arrive as explicit conditions on redesignation or as stand-alone regulations is a matter of form, not substance; either pathway legitimately implements the MMA and squarely fits within the Office’s authority. 

To “tee up” the record, the DLC attaches a helpful and representative Exhibit cataloging songwriter, independent publisher, and creator-group critiques across six themes: unmatched “black box” royalties; data/matching problems; governance and conflicts; transparency and accountability gaps; operational and technical delays; and the investment of unclaimed royalties. That comment supports conditional redesignation backed by measurable performance metrics (e.g., black-box reduction targets, matching accuracy, timeliness, dispute resolution KPIs) or by new, targeted regulations—and, if needed, both.

Finally, immediate triage should begin with abandoning the contested investment policy for unclaimed royalties—criticized by many stakeholders as ultra vires (which by the way, eliminates any indemnity protection in the MMA)—and liquidating the portfolio so cash flows to the people Congress intended to benefit: songwriters. Conditional redesignation gives the Office the oversight handle to make those corrections now, align incentives going forward, and ensure the MLC’s stewardship is limited to the scale of its statutory power. 

It also must be said that if the MLC doesn’t clean up its act, what comes next may not be so genteel. Conditional redesignation may look awfully good in the rear view mirror.

Google’s “AI Overviews” Draws a Formal Complaint in Germany under the EU Digital Services Act

A coalition of NGOs, media associations, and publishers in Germany has filed a formal Digital Services Act (DSA) complaint against Google’s AI Overviews, arguing the feature diverts traffic and revenue from independent media, increases misinformation risks via opaque systems, and threatens media plurality. Under the DSA, violations can carry fines up to 6% of global revenue—a potentially multibillion-dollar exposure.

The complaint claims that AI Overviews answer users’ queries inside Google, short-circuiting click-throughs to the original sources and starving publishers of ad and subscription revenues. Because users can’t see how answers are generated or verified, the coalition warns of heightened misinformation risk and erosion of democratic discourse.

Why the Digital Services Act Matters

As I understand the DSA, news publishers can (1) lodge a complaint with their national Digital Services Coordinator alleging a platform’s DSA breach (which triggers regulatory scrutiny); (2) use the platform dispute tools: first the internal complaint-handling system, then certified out-of-court dispute settlement for moderation/search-display disputes—often the faster practical relief; (3) sue for damages in national courts for losses caused by a provider’s DSA infringement (Art. 54); or (4) act collectively by mandating a qualified entity or through the EU Representative Actions Directive to seek injunctions/redress (kind of like class actions in the US but more limited in scope).

Under the DSA, Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) are services with more than 45 million EU users (approximately 10% of the population). Once formally designated by the European Commission, they face stricter obligations than smaller platforms: conducting annual systemic risk assessments, implementing mitigation measures, submitting to independent audits, providing data access to researchers, and ensuring transparency in recommender systems and advertising. Enforcement is centralized at the Commission, with penalties up to 6% of global revenue. This matters because VLOPs like Google, Meta, and TikTok must alter core design choices that directly affect media visibility and revenue. In parallel, the European Commission and the Digital Services Coordinators retain powerful public-enforcement tools against Very Large Online Platforms.

As a designated Very Large Online Platform, Google faces strict duties to mitigate systemic risks, provide algorithmic transparency, and avoid conduct that undermines media pluralism. The complaint contends AI Overviews violate these requirements by replacing outbound links with Google’s own synthesized answers.

The U.S. Angle: Penske lawsuit

A Major Publisher Has Sued Google in Federal Court Over AI Overviews

On Sept. 14, 2025, Penske Media (Rolling Stone, Billboard, Variety) sued Google in D.C. federal court, alleging AI Overviews repurpose its journalism, depress clicks, and damage revenue—marking the first lawsuit by a major U.S. publisher aimed squarely at AI Overviews. The claims include a training-use allegation: that Google enriched itself by using PMC’s works to train and ground the models powering Gemini and AI Overviews, for which Penske seeks restitution and disgorgement. Penske also argues that Google abuses its search monopoly to coerce publishers: indexing is effectively tied to letting Google (a) republish and summarize their material in AI Overviews, Featured Snippets, and AI Mode, and (b) use their works to train Google’s LLMs—reducing click-through and revenues while letting Google expand its monopoly into online publishing.

Trade Groups Urged FTC/DOJ Action

The News/Media Alliance had previously asked the FTC and DOJ to investigate AI Overviews for diverting traffic and ‘misappropriating’ publishers’ investments, calling for enforcement under FTC Act §5 and Sherman Act §2.

Data Showing Traffic Harm

Industry analyses indicate material referral declines tied to AI Overviews. Digital Content Next reports Google Search referrals down 1%–25% for most member publishers over recent weeks; Digiday pegs the impact at as much as 25%. The trend feeds a broader ‘Google Zero’ concern—zero-click results displacing publisher visits.

Why Europe vs. U.S. Paths Differ

The EU/DSA offers a procedural path to assess systemic risk and platform design choices like AI Overviews and levy platform-wide remedies and fines. In the U.S., the fight currently runs through private litigation (Penske) and competition/consumer-protection advocacy at FTC/DOJ, where enforcement tools differ and take longer to mobilize.

RAG vs. Training Data Issues

AI Overviews are best understood as a Retrieval-Augmented Generation (RAG) issue. Readers will recall that RAG is probably the most direct example of verbatim copying in AI outputs. The harms arise because Google as middleman retrieves live publisher content and synthesizes it into an answer inside the Search Engine Results Page (SERP), reducing traffic to the sources. This is distinct from the training-data lawsuits (Kadrey, Bartz) that allege unlawful ingestion of works during model pretraining.
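
For readers who want the mechanics, here is a deliberately simplified sketch of the retrieval-augmented generation pattern described above. The function names, the toy relevance score, and the prompt are placeholders of my own, not Google’s actual AI Overviews pipeline; the sketch only shows why the middleman’s answer is assembled from the publishers’ own live pages at query time.

```python
# Simplified sketch of retrieval-augmented generation (RAG).
# Function names, index format, and prompt are hypothetical placeholders,
# not Google's actual AI Overviews pipeline.

def relevance(query, doc):
    """Toy relevance score: word overlap between the query and a document."""
    q = set(query.lower().split())
    d = set(doc["text"].lower().split())
    return len(q & d)

def retrieve(query, index, k=3):
    """Pull the k most relevant publisher documents for the query."""
    ranked = sorted(index, key=lambda doc: relevance(query, doc), reverse=True)
    return ranked[:k]

def answer_with_rag(query, index, llm):
    sources = retrieve(query, index)
    # The publishers' live text is stuffed into the prompt verbatim...
    context = "\n\n".join(doc["text"] for doc in sources)
    prompt = f"Answer the question using only these sources:\n{context}\n\nQ: {query}"
    # ...and the model's synthesis is what appears on the results page,
    # so the reader gets the substance without clicking through to the source.
    return llm(prompt), [doc["url"] for doc in sources]
```

The order of operations is the whole point: retrieval happens against live publisher content at query time, which is why the harm shows up as lost referrals on top of, and distinct from, any training-time copying.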

Kadrey: Indirect Market Harm

A RAG case like Penske’s could also be characterized as indirect market harm. Judge Chhabria’s ruling in Kadrey under U.S. law highlights that market harm isn’t limited to direct substitution for fair use purposes. Factor 4 in fair use analysis includes foreclosure of licensing and derivative markets. For AI/search, that means reduced referrals depress ad and subscription revenue, while widespread zero-click synthesis may foreclose an emerging licensing market for summaries and excerpts. Evidence of harm includes before/after referral data, revenue deltas, and qualitative harms like brand erasure and loss of attribution. Remedies could include more prominent linking, revenue-sharing, compliance with robots/opt-outs, and provenance disclosures.

I like them RAG cases.

The Essential Issue is Similar in EU and US

Whether in Brussels or Washington, the core dispute is very similar: Who captures the value of journalism in an AI-mediated search world? Germany’s DSA complaint and Penske’s U.S. lawsuit frame twin fronts of a larger conflict—one about control of distribution, payment for content, and the future of a pluralistic press. Not to mention the usual free-riding and competition issues swirling around Google as it extracts rents by inserting itself into places it’s not wanted.

How an AI Moratorium Would Preclude Penske’s Lawsuit

Many “AI moratorium” proposals function as broad safe harbors with preemption. A moratorium to benefit AI and pick national champions was the subject of an IP Subcommittee hearing on September 18. If Congress enacted a moratorium that (1) expressly immunizes core AI practices (training, grounding, and SERP-level summaries), (2) preempts overlapping state claims, and (3) channels disputes into agency processes with exclusive public enforcement, it would effectively close the courthouse door to private suits like Penske and make the US more like Europe without the enforcement apparatus. Here’s how:

– Express immunity for covered conduct. If the statute declares that using publicly available content for training and for retrieval-augmented summaries in search is lawful during the moratorium, Penske’s core theory (RAG substitution plus training use) loses its predicate.
– No private right of action / exclusive public enforcement. Limiting enforcement to the FTC/DOJ (or a designated tech regulator) would bar private plaintiffs from seeking damages or injunctions over covered AI conduct.
– Antitrust carve-out or agency preclearance. Congress could provide that covered AI practices (AI Overviews, featured snippets powered by generative models, training/grounding on public web content) cannot form the basis for Sherman/Clayton liability during the moratorium, or must first be reviewed by the agency—undercutting Penske’s §1/§2 counts.
– Primary-jurisdiction plus statutory stay. Requiring first resort to the agency with a mandatory stay of court actions would pause (or dismiss) Penske until the regulator acts.
– Preemption of state-law theories. A preemption clause would sweep in state unjust-enrichment and consumer-protection claims that parallel the covered AI practices.
– Limits on injunctive relief. Barring courts from enjoining covered AI features (e.g., SERP-level summaries) and reserving design changes to the agency would eliminate the centerpiece remedy Penske seeks.
– Potential retroactive shield. If drafted to apply to past conduct, a moratorium could moot pending suits by deeming prior training/RAG uses compliant for the moratorium period.

A moratorium with safe harbors, preemption, and agency-first review would either stay, gut, or bar Penske’s antitrust and unjust-enrichment claims—reframing the dispute as a regulatory matter rather than a private lawsuit. Want to bet that White House AI Viceroy David Sacks will be sitting in judgment?

Missile Gap, Again: Big Tech’s Private Power vs. the Public Grid

If we let a hyped “AI gap” dictate land and energy policy, we’ll privatize essential infrastructure and socialize the fallout.

Every now and then, it’s important to focus on what our alleged partners in music distribution are up to, because the reality is they’re not record people—their real goal is getting their hands on the investment we’ve all made in helping compelling artists find and keep an audience. And when those same CEOs use the profits from our work to pivot to “defense tech” or “dual use” AI (civilian and military), we should hear what that euphemism really means: killing machines.

Daniel Ek is backing battlefield-AI ventures; Eric Schmidt has spent years bankrolling and lobbying for the militarization of AI while shaping the policies that green-light it. This is what happens when we get in business with people who don’t share our values: the capital, data, and social license harvested from culture gets recycled into systems built to find, fix, and finish human beings. As Bob Dylan put it in Masters of War, “You fasten the triggers for the others to fire.” These deals aren’t value-neutral—they launder credibility from art into combat. If that’s the future on offer, our first duty is to say so plainly—and refuse to be complicit.

The same AI outfits that for decades have refused to license or begrudgingly licensed the culture they ingest are now muscling into the hard stuff—power grids, water systems, and aquifers—wherever governments are desperate to win their investment. Think bespoke substations, “islanded” microgrids dedicated to single corporate users, priority interconnects, and high-volume water draws baked into “innovation” deals. It’s happening globally, but nowhere more aggressively than in the U.S., where policy and permitting are being bent toward AI-first infrastructure—thanks in no small part to Silicon Valley’s White House “AI viceroy,” David Sacks. If we don’t demand accountability at the point of data and at the point of energy and water, we’ll wake up to AI that not only steals our work but also commandeers our utilities. Just like Senator Wyden accomplished for Oregon.

These aren't pop-up server farms; they're decades-long fixtures. Substations and transmission are built on 30–50-year horizons, generation assets run 20–60 years, and multi-decade PPAs, water rights, and recorded easements outlive elections. Once steel's in the ground, rate designs and priority interconnects get contractually sticky. Unlike the Internet fights of the last 25 years—where you could force a license for what travels through the pipe—this AI footprint is essentially forever. We will be stuck for generations with the decisions we make today.

Because China–The New Missile Gap

There’s a familiar ring to the way America is now talking about AI, energy, and federal land use (and likely expropriation). In the 1950s Cold War era, politicians sold the country on a “missile gap” that later proved largely mythical, yet it hardened budgets, doctrine, and concrete in ways that lasted decades.

Today's version is the "AI gap"—a story that says China is sprinting on AI, so we must pave faster, permit faster, and relax old guardrails to keep up. Of course, this diverts attention from China's advances in directed-energy weapons and hypersonic missiles, which are here today, will wreak havoc on an actual battlefield, and to which the West has no counter. But let's not talk about those (at least not until we lose a carrier in the South China Sea); instead, let's worry about AI, because that will make Silicon Valley even richer.

Watch any interview with executives from the frontier AI labs and within minutes they will hit their "because China" talking point. National security and competitiveness are real concerns, but they don't justify blank checks and Constitution-level safe harbors. The missile-gap analogy is useful because it reminds us how compelling threat-narrative propaganda can swamp due diligence. We can support strategic compute and energy without letting an AI-gap story permanently bulldoze open space and saddle communities with the bill.

Energy Haves (Them) and Have-Nots (Everyone Else)

The result is a two‑track energy state AKA hell on earth. On Track A, the frontier AI lab hyperscalers like Google, Meta, Microsoft, OpenAI & Co. build company‑town infrastructure for AI—on‑site electricity generation by microgrids outside of everyone else’s electric grid, dedicated interties and other interconnections between electric operators—often on or near federal land. On Track B, the public grid carries everyone else: homes, hospitals, small manufacturers, water districts. As President Trump said at the White House AI dinner this week, Track A promises to “self‑supply,” but even self‑supplied campuses still lean on the public grid for backup and monetization, and they compete for scarce interconnection headroom.

President Trump is allowing the hyperscalers to get permanent rights to build on massive parcels of government land, including private utilities to power the enormous electricity and water-cooling needs of AI data centers. Strangely enough, this continues a Biden policy set by an executive order issued late in the Biden presidency that Trump now takes credit for, and it is a 180 from America First according to people who ought to know, like Steve Bannon. And yet it is happening.

White House Dinners are Old News in Silicon Valley

If someone says "AI labs will build their own utilities on federal land," that land comes in two flavors: Department of Defense (now War Department) or Department of Energy sites, and land owned by the Bureau of Land Management (BLM). These are vastly different categories. DoD/DOE sites such as Idaho National Laboratory, the Oak Ridge Reservation, the Paducah GDP, and the Savannah River Site imply behind-the-fence, mission-tied microgrids with limited public friction; BLM land implies public-land rights-of-way and multi-use trade-offs (grazing, wildlife, cultural), longer timelines, and grid-export dynamics with potential "curtailment," which means prioritizing electricity for the hyperscalers. Take Idaho National Laboratory (INL), one of the four AI/data-center sites: INL's own environmental reports state that about 60% of the INL site is open to livestock grazing, with monitoring of grazing impacts on habitat. That's likely over.

This is about how we power anything not controlled by a handful of firms. And it's about the land footprint: fenced solar yards, switchyards, substations, massive transmission lines, wider roads, laydown areas. On BLM range and other open spaces, those facilities translate into real, local losses—grazable acres inside fences, stock trails detoured, range improvements relocated.

What the two tracks really do

Track A solves a business problem: compute growth outpacing the public grid’s construction cycle. By putting electrons next to servers (literally), operators avoid waiting years for a substation or a 230‑kV line. Microgrids provide islanding during emergencies and participation in wholesale markets when connected. It’s nimble, and it works—for the operator.

Track B inherits the volatility: planners must consider a surge of large loads that may or may not appear, while maintaining reliability for everyone else. Capacity margins tighten; transmission projects get reprioritized; retail rates absorb the externalities. When utilities plan for speculative loads and those projects cancel or slide, the region can be left with stranded costs or deferred maintenance elsewhere.

The land squeeze we’re not counting

Public agencies tout gigawatts permitted. They rarely publish the acreage fenced, the AUMs (animal unit months of grazing) affected, or the water commitments. Utility-scale solar commonly pencils out on the order of 5–7 acres per megawatt of capacity, depending on layout and topography. At that ratio, a single gigawatt occupies thousands of acres—acres that, unlike wind, often can't be grazed once panels and security fences go in. Even where grazing is technically possible, access roads, laydown yards, and vegetation control impose real costs on neighboring users.
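As a back-of-the-envelope check on that claim, here is a short sketch using only the 5–7 acres-per-megawatt rule of thumb cited above; the ratio itself varies with layout and topography, so treat the output as a range, not a survey.

```python
# Back-of-the-envelope land footprint for utility-scale solar,
# using the rule-of-thumb range cited above (5-7 acres per MW).

ACRES_PER_MW_LOW, ACRES_PER_MW_HIGH = 5, 7

def solar_footprint_acres(megawatts: float) -> tuple[float, float]:
    """Return the (low, high) fenced-acreage estimate for a given capacity."""
    return megawatts * ACRES_PER_MW_LOW, megawatts * ACRES_PER_MW_HIGH

low, high = solar_footprint_acres(1_000)  # one gigawatt
print(f"1 GW of solar: roughly {low:,.0f} to {high:,.0f} acres inside the fence")
# Roughly 5,000-7,000 acres, i.e. about 8-11 square miles (640 acres per square mile).
```

At one gigawatt, that's on the order of 8 to 11 square miles behind a security fence, before counting access roads, laydown yards, and buffers.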

Wind is more compatible with grazing, but it isn't footprint-free. Pads, roads, and safety buffers fragment pasture. Transmission to move that energy still needs corridors—and those corridors cross someone's water lines and gates. Multiple use is a principle; on the ground it's a schedule, a map, and a cost. Just for reference, the solar rule of thumb works out to approximately 5–7 acres per megawatt of direct current ("MWdc"), and access roads, laydown, and buffers extend beyond the fence line.

We are going through this right now in my part of the world. Central Texas is bracing for a wave of new high-voltage transmission. These are 345-kV corridors cutting (literally) across the Hill Country to serve load growth for chip fabricators and data centers and to tie in distant generation (so big lines are a must once you commit to the usage). Ranchers and small towns are pushing back hard: eminent-domain threats, devalued land, scarred vistas, live-oak and wildlife impacts, and routes that ignore existing roads and utility corridors. Packed hearings and county resolutions demand co-location, undergrounding studies, and real alternatives—not "pick a line on a map" after the deal is done. The fight isn't against reliability; it's against a planning process that externalizes costs onto farmers, ranchers, other landowners, and working landscapes.

Texas’s latest SB 6 is the case study. After a wave of ultra-large AI/data-center loads, frontier labs and their allies pushed lawmakers to rewrite reliability rules so the grid would accommodate them. SB 6 empowers the Texas grid operator ERCOT to police new mega-loads—through emergency curtailment and/or firm-backup requirements—effectively reshaping interconnection priorities and shifting reliability risk and costs onto everyone else. “Everyone else” means you and me, kind of like the “full faith and credit of the US”. Texas SB 6 was signed into law in June 2025 by Gov. Greg Abbott. It’s now in effect and directs PUCT/ERCOT to set new rules for very large loads (e.g., data centers), including curtailment during emergencies and added interconnection/backup-power requirements. So the devil will be in the details and someone needs to put on the whole armor of God, so to speak.

The phantom problem

Another quiet driver of bad outcomes is phantom demand: developers filing duplicative load or interconnection requests to keep options open. On paper, it looks like a tidal wave; in practice, only a slice gets built. If every inquiry triggers a utility study, a route survey, or a placeholder in a capital plan, neighborhoods can end up paying for capacity that never comes online to serve them.

A better deal for the public and the range

Prioritize already‑disturbed lands—industrial parks, mines, reservoirs, existing corridors—before greenfield BLM range land. Where greenfield is unavoidable, set a no‑net‑loss goal for AUMs and require real compensation and repair SLAs for affected range improvements.

Milestone gating for large loads: require non‑refundable deposits, binding site control, and equipment milestones before a project can hold scarce interconnection capacity or trigger grid upgrades. Count only contracted loads in official forecasts; publish scenario bands so rate cases aren’t built on hype.

Common‑corridor rules: make developers prove they can’t use existing roads or rights‑of‑way before claiming new footprints. Where fencing is required, use wildlife‑friendly designs and commit to seasonal gates that preserve stock movement.

Public equity for public land: if a campus wins accelerated federal siting and long‑term locational advantage, tie that to a public revenue share or capacity rights that directly benefit local ratepayers and counties. Public land should deliver public returns, not just private moats.

Grid‑help obligations: if a private microgrid islands to protect its own uptime, it should also help the grid when connected. Enroll batteries for frequency and reserve services; commit to emergency export; and pay a fair share of fixed transmission costs instead of shifting them onto households.

Or you could do what the Dutch and Irish governments proposed under the guise of climate change regulations—kill all the cattle. I can tell you right now that that ain’t gonna happen in Texas.

Will We Get Fooled Again?

If we let a hyped, latter-day "missile gap" set the terms, we'll lock in a two-track energy state: private power for those who can afford to build it, a more fragile and more expensive public grid for everyone else, and open spaces converted into permanent infrastructure at a discount. The alternative is straightforward: price land and grid externalities honestly, gate speculative demand, require public returns on public siting, and design corridor rules that protect working landscapes. That's not anti-AI; it's pro-public. Everything not controlled by Big Tech will be better for it.

Let's be clear: the data-center onslaught will be financed by the taxpayer one way or another—either as direct public outlays or through sweetheart "leases" of federal land to build private utilities behind the fence for the richest corporations in commercial history. After all the goodies that Trump is handing to the AI platforms, let's not have any loose talk of "selling" excess electricity to the public; that price should be zero. Even so, the sales pitch about "excess" electricity they'll generously sell back to the grid is a fantasy; when margins tighten, they'll throttle output and cut costs, not volunteer philanthropy. Picture it: do you really think these firms won't optimize for themselves first and last? We'll be left with the bills, the land impacts, and a grid redesigned around their needs. Ask yourself—what in the last 25 years of Big Tech behavior says "trustworthy" to you?

Denmark’s Big Idea: Protect Personhood from the Blob With Consent First and Platform Duty Built In

Denmark has given the rest of us a simple, powerful starting point: protect the personhood of citizens from the blob—the borderless slurry of synthetic media that can clone your face, your voice, and your performance at scale. Crucially, Denmark isn’t trying to turn name‑image‑likeness into a mini‑copyright. It’s saying something more profound: your identity isn’t a “work”; it’s you. It’s what is sometimes called “personhood.” That framing changes everything. It’s not commerce, it’s a human right.

The Elements of Personhood

Personhood treats a human being as a subject of moral consideration, not a piece of content. For example, the European Court of Human Rights reads Article 8 ECHR ("private life") to include personal identity (name, identity integrity, etc.), protecting individual identity against unjustified interference. This is, of course, anathema to Silicon Valley, but the rest of the world takes a different view.

In fact, Denmark’s proposal echoes the Universal Declaration of Human Rights. It starts with dignity (Art. 1) and recognition of each person before the law (Art. 6), and it squarely protects private life, honor, and reputation against synthetic impersonation (Art. 12). It balances freedom of expression (Art. 19) with narrow, clearly labeled carve-outs, and it respects creators’ moral and material interests (Art. 27(2)). Most importantly, it delivers an effective remedy (Art. 8): a consent-first rule backed by provenance and cross-platform stay-down, so individuals aren’t forced into DMCA-style learned helplessness.

Why does this matter? Because the moment we call identity or personhood a species of copyright, platforms will reach for a familiar toolbox—quotation, parody, transient copies, text-and-data-mining (TDM)—and claim exceptions to protect them from "data holders." That's bleed-through: the defenses built for expressive works ooze into an identity context where they don't belong. The result is an unearned permission slip to scrape faces and voices "because the web is public." Denmark points us in the opposite direction: consent or it's unlawful. Not "fair use," not "lawful access," not "industry custom," not "data profile." Consent. Pretty easy concept. It's one of the main reasons tech executives keep their kids away from cell phones and social media.

Not Replicating the Safe Harbor Disaster

Think about how we got here. The first generation of the internet scaled by pushing risk downstream with a portfolio of safe harbors like the God-awful DMCA and Section 230 in the US. Platforms insisted they deserved blanket liability shields because they were special. They were "neutral pipes," a claim no one believed then and no one believes now. These massive safe harbors hardened into a business model that likely added billions to the FAANG bottom line. We taught millions of rightsholders and users to live with learned helplessness: file a notice, watch copies multiply, rinse and repeat. Many users did not know they could even do that much, and frankly still may not. That DMCA-era whack-a-mole turned into a faux license, a kind of "catch me if you can" bargain where exhaustion replaces consent.

Denmark’s New Protection of Personhood for the AI Era

Denmark’s move is a chance to break that pattern—if we resist the gravitational pull back to copyright. A fresh right of identity (called a “sui generis” right among Latin fans) is not subject to copyright or database exceptions, especially fair use, DMCA, and TDM. In plain English: “publicly available” is not permission to clone your face, train on your voice, or fabricate your performance. Or your children, either. If an AI platform wants to use identity, they ask first. If they don’t ask, they don’t get to do it, and they don’t get to keep the model they trained on it. And like many other areas, children can’t consent.

That legal foundation unlocks the practical fix creators and citizens actually need: stay‑down across platforms, not endless piecemeal takedowns. Imagine a teacher discovers a convincing deepfake circulating on two social networks and a messaging app. If we treat that deepfake as a copyright issue under the old model, she sends three notices, then five, then twelve. Week two, the video reappears with a slight change. Week three, it’s re‑encoded, mirrored, and captioned. The message she receives under a copyright regime is “you can never catch up.” So why don’t you just give up. Which, of course, in the world of Silicon Valley monopoly rents, is called the plan. That’s the learned helplessness Denmark gives us permission to reject.

Enforcing Personhood

How would the new plan work? First, we treat realistic digital imitations of a person’s face, voice, or performance as illegal absent consent, with only narrow, clearly labeled carve‑outs for genuine public‑interest reporting (no children, no false endorsement, no biometric spoofing risk, provenance intact). That’s the rights architecture: bright lines and human‑centered. Hence, “personhood.”

Second, we wire enforcement to succeed at internet scale. The way out of whack‑a‑mole is a cross‑platform deepfake registry operated with real governance. A deepfake registry doesn’t store videos; it stores non‑reversible fingerprints—exact file hashes for byte‑for‑byte matches and robust, perceptual fingerprints for the variants (different encodes, crops, borders). For audio, we use acoustic fingerprints; for video, scene/frame signatures. These markers will evolve and so should the deepfakes registry. One confirmed case becomes a family of identifiers that platforms check at upload and on re‑share. The first takedown becomes the last.
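To make the mechanics concrete, here is a minimal sketch of what a registry entry and upload check might look like, assuming SHA-256 for exact matches and a simple 8x8 average-hash perceptual fingerprint computed with Pillow. A production registry would use far more robust audio, frame, and scene fingerprints, but the matching logic has the same shape: one verified case, a family of identifiers, and a check at upload.

```python
# Minimal sketch of a deepfake-registry entry and upload check.
# Assumptions: SHA-256 for byte-for-byte matches, an 8x8 average-hash (aHash)
# via Pillow for near-duplicate frames, and a Hamming-distance threshold for variants.
import hashlib
from PIL import Image

def exact_hash(path: str) -> str:
    """Byte-for-byte fingerprint: catches identical re-uploads."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def average_hash(path: str) -> int:
    """64-bit perceptual fingerprint: survives re-encodes, borders, small crops."""
    img = Image.open(path).convert("L").resize((8, 8))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Count of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

class Registry:
    """Stores fingerprints of verified deepfakes, never the media itself."""
    def __init__(self):
        self.exact = set()        # SHA-256 hex digests
        self.perceptual = []      # 64-bit aHash values

    def add_case(self, path: str):
        self.exact.add(exact_hash(path))
        self.perceptual.append(average_hash(path))

    def check_upload(self, path: str, threshold: int = 8) -> bool:
        """True if the upload matches a verified case exactly or as a near variant."""
        if exact_hash(path) in self.exact:
            return True
        h = average_hash(path)
        return any(hamming(h, known) <= threshold for known in self.perceptual)
```

Nothing in the registry is reversible back to the original media, which is the point: platforms query fingerprints at upload and on re-share, and the first verified takedown propagates to the variants.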

Third, we pair that with provenance by default. Provenance isn't a license; it's evidence. When credentials are present, content is easier to authenticate, so there is an incentive to use them. Provenance is the rebar that turns legal rules into reliable, automatable processes. However, the absence of credentials doesn't mean a free-for-all.

Finally, we put the onus where it belongs—on platforms. Europe’s Digital Services Act at least theoretically already replaced “willful blindness” with “notice‑and‑action” duties and oversight for very large platforms. Denmark’s identity right gives citizens a clear, national‑law basis to say: “This is illegal content—remove it and keep it down.” The platform’s job isn’t to litigate fair use in the abstract or hide behind TDM. It’s to implement upload checks, preserve provenance, run repeat‑offender policies, and prevent recurrences. If a case was verified yesterday, it shouldn’t be back tomorrow with a 10‑pixel border or other trivial alteration to defeat the rules.

Some will ask: what about creativity and satire? The answer is what it has always been in responsible speech law—more speech, not fake speech. If you're lampooning a politician with clearly labeled synthetic speech, no implied endorsement, provenance intact, and no risk of biometric spoofing or fraud, you have defenses. The point isn't to smother satire; it's to end the pretense that satire requires open season on the biometric identities of private citizens and working artists.

Others will ask: what about research and innovation? Good research runs on consent, especially human subject research (see 45 C.F.R. part 46). If a lab wants to study voice cloning, it recruits consenting participants, documents scope and duration, and keeps data and models in controlled settings. That’s science. What isn’t science is scraping the voices of a country’s population “because the web is public,” then shipping a model that anyone can use to spoof a bank’s call‑center checks. A no‑TDM‑bleed‑through clause draws that line clearly.

And yes, edge cases exist. There will be appeals, mistakes, and hard calls at the margins. That is why the registry must be governed—with identity verification, transparent logs, fast appeals, and independent oversight. Done right, it will look less like a black box and more like infrastructure: a quiet backbone that keeps people safe while allowing reporting and legitimate creative work to thrive.

If Denmark’s spark is to become a firebreak, the message needs to be crisp:

— This is not copyright. Identity is a personal right; copyright defenses don’t apply.

— Consent is the rule. A deepfake without consent is unlawful.

— No TDM bleed‑through. “Publicly available” does not equate to permission to clone or train.

— Provenance helps prove, not permit. Keep credentials intact; stripping them has consequences.

— Stay‑down, cross‑platform. One verified case should not become a thousand reuploads.

That’s how you protect personhood from the blob. By refusing to treat humans like “content,” by ending the faux‑license of whack‑a‑mole, and by making platforms responsible for prevention, not just belated reaction. Denmark has given us the right opening line. Now we should finish the paragraph: consent or block. Label it, prove it, or remove it.