You Can’t Prosecute Smuggling NVIDIA Chips to the CCP and Authorize Sales to the CCP at the Same Time

The Trump administration is attempting an impossible contradiction: selling advanced NVIDIA AI chips to China while the Department of Justice prosecutes criminal cases for smuggling the exact same chips into China.

According to the DOJ:

“Operation Gatekeeper has exposed a sophisticated smuggling network that threatens our Nation’s security by funneling cutting-edge AI technology to those who would use it against American interests,” said Ganjei. “These chips are the building blocks of AI superiority and are integral to modern military applications. The country that controls these chips will control AI technology; the country that controls AI technology will control the future. The Southern District of Texas will aggressively prosecute anyone who attempts to compromise America’s technological edge.”

That divergence between the White House and its own prosecutors is not industrial policy. That is incoherence. But mostly it’s just bad advice, likely coming from White House AI Czar David Sacks, Mr. Trump’s South African AI policy advisor who may have a hard time getting a security clearance in the first place.

On one hand, DOJ is rightly bringing cases over the illegal diversion of restricted AI chips—recognizing that these processors are strategic technologies with direct national-security implications. On the other hand, the White House is signaling that access to those same chips is negotiable, subject to licensing workarounds, regulatory carve-outs, or political discretion.

You cannot treat a technology as contraband in federal court and as a commercial export in the West Wing.

Pick one.

AI Chips Are Not Consumer Electronics

The United States does not sell China F-35 fighter jets. We do not sell Patriot missile systems. We do not sell advanced avionics platforms and then act surprised when they show up embedded in military infrastructure. High-end AI accelerators are in the same category.

NVIDIA’s most advanced chips are not merely commercial products. They are general-purpose intelligence infrastructure, or what China calls “military-civil fusion.” They train surveillance systems, military logistics platforms, cyber-offensive tools, and models capable of operating autonomous weapons and battlefield decision-making pipelines with no human in the loop.

If DOJ treats the smuggling of these chips into China as a serious federal crime—and it should—there is no coherent justification for authorizing their sale through executive discretion. Except, of course, money, or in Mr. Sacks’s case, more money.

Fully Autonomous Weapons—and Selling the Rope

China does not need U.S. chips to build consumer AI. It wants them for military acceleration. Advanced NVIDIA AI chips are not just about chatbots or recommendation engines. They are the backbone of fully autonomous weapons systems—autonomous targeting, swarm coordination, battlefield logistics, and decision-support models that compress the kill chain beyond meaningful human control.

There is an old warning attributed to Vladimir Lenin—that capitalists would sell the rope by which they would later be hanged. Apocryphal or not, it captures this moment with uncomfortable precision.

If NVIDIA chips are powerful enough to underpin autonomous weapons systems for allied militaries, they are powerful enough to underpin autonomous weapons systems for adversaries like China. Trump’s own National Security Strategy clearly says previous U.S. elites made “mistaken” assumptions about China, such as the famous one that letting China into the WTO would integrate Beijing into the rules-based international order. Trump tells us that instead China “got rich and powerful” and used that wealth and power against us, and he goes on to describe the CCP’s well-known predatory subsidies, unfair trade, IP theft, industrial espionage, supply-chain leverage, and fentanyl precursor exports as threats the U.S. must “end.” By selling them the most advanced AI chips?

Western governments and investors simultaneously back domestic autonomous-weapons firms—such as Europe-based Helsing, supported by Spotify CEO Daniel Ek—explicitly building AI-enabled munitions for allied defense. That makes exporting equivalent enabling infrastructure to a strategic competitor indefensible.

The AI Moratorium Makes This Worse, Not Better

This contradiction unfolds alongside a proposed federal AI moratorium executive order originating with Mr. Sacks and Adam Thierer of Google’s R Street Institute that would preempt state-level AI protections.
States are told AI is too consequential for local regulation, yet the federal government is prepared to license exports of AI’s core infrastructure abroad.

If AI is too dangerous for states to regulate, it is too dangerous to export. Preemption at home combined with permissiveness abroad is not leadership. It is capture.

This Is What Policy Capture Looks Like

The common thread is not national security. It is Silicon Valley access. David Sacks and others in the AI–VC orbit argue that AI regulation threatens U.S. competitiveness while remaining silent on where the chips go and how they are used.

When DOJ prosecutes smugglers while the White House authorizes exports, the public is entitled to ask whose interests are actually being served. Advisory roles that blur public power and private investment cannot coexist with credible national-security policymaking, particularly when the advisor may not even be able to get a U.S. national security clearance unless the President blesses it.

A Line Has to Be Drawn

If a technology is so sensitive that its unauthorized transfer justifies prosecution, its authorized transfer should be prohibited absent extraordinary national interest. AI accelerators meet that test.

Until the administration can articulate a coherent justification for exporting these capabilities to China, the answer should be no. Not licensed. Not delayed. Not cosmetically restricted.

And if that position conflicts with Silicon Valley advisers who view this as a growth opportunity, they should return to where they belong. The fact that the U.S. is getting 25% of the deal (which I bet never finds its way into America’s general account) means nothing except to confirm Lenin’s joke about selling the rope to hang ourselves, you know, kind of like TikTok.

David Sacks should go back to Silicon Valley.

This is not venture capital. This is our national security and he’s selling it like rope.

Good News for TikTok Users: The PRC Definitely Isn’t Interested in Your Data (Just the Global Internet Backbone, Apparently)

If you’re a TikTok user who has ever worried, even a tiny bit, that the People’s Republic of China might have an interest in your behavior, preferences, movements, or social graph, take heart. A newly released Joint Cybersecurity Advisory from intelligence agencies in the United States, Canada, the U.K., Australia, New Zealand, and a long list of other allied nations proves beyond any shadow of a doubt that the PRC is far too busy compromising the world’s telecommunications infrastructure to care about your TikTok “For You Page.”

Nothing to see here. Scroll on.

For those who like their reassurance with a side of evidence, the advisory—titled “Countering Chinese State Actors’ Compromise of Networks Worldwide to Feed Global Espionage System”—is one of the clearest, broadest warnings ever issued about a Chinese state-sponsored intrusion campaign. And, because the agencies involved designated it TLP:CLEAR, meaning it is not sensitive and may be shared publicly without restriction, you can read it yourself.

The World’s Telecom Backbones: Now Featuring Uninvited Guests

The intel agency advisory describes a “Typhoon class” global espionage ecosystem run through persistent compromises of backbone routers, provider-edge and customer-edge routers, ISP and telecom infrastructure, transportation networks, lodging and hospitality systems, and government and military-adjacent networks.

This is not hypothetical. The advisory includes extremely detailed penetration chains: attackers exploit widely known “Common Vulnerabilities and Exposures” (CVEs) in routers, firewalls, VPNs, and management interfaces, then establish persistence through configuration modifications, traffic mirroring, injected services, and encrypted tunnels. This lets them monitor, redirect, copy, or exfiltrate traffic across entire service regions.

Put plainly: if your internet service provider has a heartbeat and publicly routable equipment, the attackers have probably knocked on the door. And for a depressingly large number of large-scale network operators, they got in.

This is classical intelligence tradecraft. The PRC’s immediate goal isn’t ransomware. It’s not crypto mining. It’s not vandalism. It’s good old-fashioned espionage: long-term access, silent monitoring, and selective exploitation.

What They’re Collecting: Clues About Intent

The advisory makes the overall aim explicit: to give PRC intelligence the ability to identify and track targets’ communications and movements worldwide.

That includes metadata on calls, enterprise-internal communications, hotel and travel itineraries, traffic patterns for government and defense systems, and persistent vantage points on global networks.

This is signals intelligence (SIGINT), not smash-and-grab.

And importantly: this kind of operation requires enormous intelligence-analytic processing, not a general-purpose “LLM training dataset.” These are targeted, high-value accesses, not indiscriminate web scrapes. The attackers are going after specific information—strategic, diplomatic, military, infrastructure, and political—not broad consumer content.

So no, this advisory is not about “AI training.” It is about access, exfiltration, and situational awareness across vital global communications arteries.

Does This Tell Us Anything About TikTok?

Officially, no. The advisory never mentions TikTok, ByteDance, or consumer social media apps. It is focused squarely on infrastructure.

But from a strategic-intent standpoint, it absolutely matters. Because when you combine:

1. Global telecom-layer access
2. Persistent long-term SIGINT footholds
3. The PRC’s demonstrated appetite for foreign behavioral data
4. The existence of the richest behavioral dataset on Earth—TikTok’s U.S. user base

—you get a coherent picture of the intelligence ecosystem the Chinese Communist Party is building on…I guess you’d have to say “the world”.

If a nation-state is willing to invest years compromising backbone routers, it is not a stretch to imagine what it could do with a mobile app installed on the phones of, oh, say 170 million Americans (to pick a random number) that conveniently collects social graphs, location traces, contact patterns, engagement preferences, and political and commercial interests, all visible in the PRC.

But again, don’t worry. The advisory suggests only that Chinese state actors have global access to the infrastructure over which your TikTok traffic travels—not that they would dare take an interest in the app itself. And besides, the TikTok executives swore under oath to the U.S. Congress that it didn’t happen that way, so it must be true.

After all, why would a government running a worldwide intrusion program want access to the largest behavioral-data sensor array outside the NSA?

If you still believe the PRC is nowhere near TikTok’s data, then this advisory will reassure you: it’s just a gentle reminder that Chinese state actors are burrowed into global telecom backbones, hotel networks, transportation systems, and military-adjacent infrastructure—pure souls simply striving to make sure your “For You” page loads quickly.


Y’all Street Rising: Why the Future of Music Finance Won’t Be Made in Manhattan

There’s a new gravity well in American finance, and it’s not New York. It’s not even Silicon Valley. It’s Dallas. It’s Austin. It’s Y’all Street.

And anyone paying attention could have seen it coming. The Texas Stock Exchange (TXSE) is preparing for launch in 2026. TXSE is not some bulletin board; it’s backed by billions from institutions that have grown weary of the compliance culture and cost of New York. Goldman Sachs’s Dallas campus is now operational. BlackRock and Charles Schwab have shifted major divisions to the Lone Star State. Tesla and Samsung are expanding giga-scale manufacturing and chip fabrication plants.

A strong center of gravity for capital formation is moving south, and with it, a new cultural economy is taking shape. And AI may not save it: Scion Asset Management, “Big Short” investor Michael Burry’s hedge fund, disclosed to the SEC that it had a short bet worth $1.1 billion against Nvidia and Palantir. He’s also investing in the water that AI burns through. So not everyone is jumping off a cliff.

A New Realignment

Texas startups have raised roughly $9.8 billion in venture capital through Q3 2025, pushing the state to a consistent #4 ranking nationally. Austin remains the creative and software hub, while Dallas–Fort Worth and Houston lead in AI infrastructure, energy tech, and finance.

The TXSE will formalize what investors already know: capital markets no longer need Manhattan to function.

And that raises an uncomfortable question for the music industry:

If capital, infrastructure, and innovation no longer orbit Wall Street, why should music?

Apple Learned It the Hard Way

Despite New York’s rich musical legacy—Tin Pan Alley, Brill Building, CBGB, and the era of the major-label tower when Sony occupied that horrible AT&T building and flew sushi in from Japan for the executive dining room—the city has become an increasingly difficult place to sustain large-scale creative infrastructure. Real estate costs, over-regulation, and financial concentration have hollowed out the middle layer of production.  As I’ve taught for years, the key element to building the proverbial “creative class” is cheap rent, preferably with a detached garage.

Even Apple Inc. learned long ago that creativity can’t thrive where every square foot carries a compliance surcharge. That’s why Apple’s global supply chain, data centers, and now content operations span Texas, Tennessee, and North Carolina instead of Midtown Manhattan.  And then there’s the dirty power, sump pumps and subways—Electric Lady would probably never get built today.

The lesson for the music business is clear: creative capital follows economic oxygen. And right now, that oxygen is in Texas.

The Texas Music Office: A Model for How to Get It Done

If you want to understand how Texas built a durable, bipartisan music infrastructure, start with the Texas Music Office (TMO). Founded in 1990 under Governor Bill Clements, the TMO was one of the first state agencies in America to recognize the music industry not just as culture, but as economic development.

Over the decades—through governors of both parties—the TMO has become a master class in how to institutionalize support for creative enterprise without strangling it in bureaucracy. From George W. Bush’s early focus on export promotion, to Rick Perry’s integration of music into economic development, to Greg Abbott’s expansion of the Music Friendly Communities network, each administration built upon rather than dismantled what came before.

Today, the TMO supports more than 70 certified Music Friendly Communities, funds music-education grants, tracks economic data, and connects local musicians with investors and international partners. It’s a template for how a state can cultivate creative industries while maintaining fiscal discipline and accountability.

It’s also proof that cultural policy doesn’t have to be partisan—it just has to be practical.

When people ask why Texas has succeeded where others stalled, the answer is simple: the TMO stayed focused on results, not rhetoric. That’s a lesson a lot of states—and more than a few record labels—could stand to relearn.

Artist Rights Institute: Doing Our Part for Texas and Beyond

The Artist Rights Institute (ARI) has done its part to make sure that Texas and other local music communities and creators aren’t an afterthought in rooms that are usually dominated by platform interests and coastal trade groups.

When questions of AI training, copyright allocation, black-box royalties, and streaming transparency landed in front of the U.S. Copyright Office, Congress, and U.K. policymakers, ARI showed up with the Texas view: creators first, no speculative ticketing, no compulsory “data donation,” and no silent expropriation of recordings and songs for AI. ARI has filed comments, contributed research, and supported amicus work to make sure Texas artists, songwriters, and indie publishers are in the record — not just the usual New York, Nashville, and Los Angeles voices.

Just as important, ARI has pushed financial education for artists. Because Y’all Street doesn’t help creators if they don’t know what a discount rate is, how catalog valuations work, how to read a mechanical statement, or why AI licenses need to be expressly excluded from legacy record and publishing deals. ARI programs in Texas and Georgia have focused on:
– explaining how federal policy actually hits musicians,
– showing how to negotiate or at least spot AI/derivative-use clauses,
– and connecting artists to local music industry infrastructure.
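
Since the article flags discount rates and catalog valuations as basic financial literacy for artists, here is a minimal sketch of the arithmetic behind a discounted-cash-flow catalog valuation. This is my own illustration, not ARI curriculum; the function name and the royalty figures are hypothetical, and real valuations layer on decay curves, terminal values, and deal-specific multiples.

```python
# Hypothetical illustration: how a discount rate turns a stream of
# projected future catalog royalties into a present value today.

def catalog_present_value(annual_royalties, discount_rate):
    """Discount each year's projected royalty back to today's dollars."""
    return sum(
        royalty / (1 + discount_rate) ** year
        for year, royalty in enumerate(annual_royalties, start=1)
    )

# A catalog projected to earn $100,000 a year for 10 years is NOT
# "worth a million dollars": at a 10% discount rate, those future
# payments are worth far less in today's money.
projection = [100_000] * 10
print(round(catalog_present_value(projection, 0.10)))  # roughly 614,457
```

The point of the exercise is the one the article makes: an artist who cannot run this arithmetic cannot tell whether a catalog offer is generous or predatory.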

In other words, ARI joined other Texas and Georgia organizations to be a translator between Texas’s very real music economy and the fast-moving policy debates in Washington and the U.K. If Texas is going to be the place where music is financed, ARI wants to make sure local artists are also the ones who capture the value.

Music’s Texas Moment

Texas is no newcomer to the business of music. Its industry already generates over $13.4 billion in annual economic activity, supporting more than 91,000 jobs across its certified cities. Austin retains the crown of “Live Music Capital of the World,” but Denton, Fort Worth, and San Antonio have joined the state-certified network of “Music Friendly Communities”.

Meanwhile, universities from UT-Austin to Texas A&M study rights management, AI provenance, and royalties in the age of generative audio.

The result: a state that treats music not as nostalgia, but as an evolving economic engine.  Plus we’ve got Antone’s.

Wall Street’s ‘Great Sucking Sound,’ Replayed

Ross Perot once warned of “that giant sucking sound” as jobs moved south. Thirty years later, the sound you hear isn’t manufacturing—it’s money, data, and influence flowing to Y’all Street.

If the major labels and publishers don’t track that migration, they risk becoming cultural tenants in cities they no longer own. The next catalog securitization, the next AI-royalty clearinghouse, the next Bell Labs-for-Music could just as easily be financed out of Dallas as out of Midtown.

Because while New York made the hits of the last century, Texas may well finance the next one. We’ve always had the musicians, producers, authors, actors, and filmmakers, but soon we’ll also have the money.

Y’all Ready?

The world no longer needs a Midtown address to mint creative wealth. As the TXSE prepares its debut and Texas cements its position as the nation’s innovation corridor, the music industry faces a choice:

Follow the capital—or become another cautionary tale of what happens when you mistake heritage for destiny.

Because as Apple learned long ago, even the richest history can’t compete with the freedom to build something new.  

SB 683: California’s Quiet Rejection of the DMCA—and a Roadmap for Real AI Accountability

When Lucian Grainge drew a bright line—“UMG will not do business with bad actors regardless of the consequences”—he did more than make a corporate policy statement.  He threw down a moral challenge to an entire industry: choose creators or choose exploitation.

California’s recently passed SB 683 does not shout as loudly, but it answers the same call. By refusing to copy Washington’s bureaucratic NO FAKES Act and its DMCA-style “notice-and-takedown” maze, SB 683 quietly re-asserts a lost principle: rights are vindicated through courts and accountability, not compliance portals.

What SB 683 actually does

SB 683 amends California Civil Code § 3344, the state’s right-of-publicity statute for living persons, to make injunctive relief real and fast.  If someone’s name, voice, or likeness is exploited without consent, a court can now issue a temporary restraining order or preliminary injunction.  If the order is granted without notice, the defendant must comply within two business days.  

That sounds procedural—and it is—but it matters. SB 683 replaces “send an email to a platform” with “go to a judge.”   It converts moral outrage into enforceable law.

The deeper signal: a break from the DMCA’s bureaucracy

For twenty-seven years, the Digital Millennium Copyright Act (DMCA) has governed online infringement through a privatized system of takedown notices, counter-notices, and platform safe harbors. When it was passed, Silicon Valley came alive with free-riding schemes to get around copyright that beat a path to Grokster‘s door.

But the DMCA was built for a dial-up internet and has aged about as gracefully as a boil on a cow’s butt.

The Copyright Office’s 2020 Section 512 Study concluded that whatever Solomonic balance Congress thought it was striking has completely collapsed:

“[T]he volume of notices demonstrates that the notice-and-takedown system does not effectively remove infringing content from the internet; it is, at best, a game of whack-a-mole.”

“Congress’ original intended balance has been tilted askew.”  

“Rightsholders report notice-and-takedown is burdensome and ineffective.”  

“Judicial interpretations have wrenched the process out of alignment with Congress’ intentions.” 
 
“Rising notice volume can only indicate that the system is not working.”  

Unsurprisingly, the Office concluded that “Roughly speaking, many OSPs spoke of section 512 as being a success, enabling them to [free ride and] grow exponentially and serve the public without facing debilitating lawsuits [or one might say, paying the freight]. Rightsholders reported a markedly different perspective, noting grave concerns with the ability of individual creators to meaningfully use the section 512 system to address copyright infringement and the “whack-a-mole” problem of infringing content re-appearing after being taken down. Based upon its own analysis of the present effectiveness of section 512, the Office has concluded that Congress’ original intended balance has been tilted askew.”

Which is a genteel way of saying the DMCA is an abject failure for creators and halcyon days for venture-backed online service providers. So why would anyone who cared about creators want to continue that absurd process?

SB 683 flips that logic. Instead of creating bureaucracy and rewarding the one who can wait out the last notice standing, it demands obedience to law.  Instead of deferring to internal “trust and safety” departments, it puts a judge back in the loop. That’s a cultural and legal break—a small step, but in the right direction.

The NO FAKES Act: déjà vu all over again

Washington’s proposed NO FAKES Act is designed to protect individuals from AI-generated digital replicas, which is great. However, NO FAKES recreates the truly awful DMCA’s failed architecture: a federal registry of “designated agents,” a complex notice-and-takedown workflow, and a new safe-harbor regime based on “good-faith compliance.” You know, notice and notice and notice and notice and notice and notice and…

If NO FAKES passes, platforms like Google would again hold all the procedural cards: largely ignore notices until they’re convenient, claim “good faith,” and continue monetizing AI-generated impersonations. In other words, it gives the platforms exactly what they wanted, because delay is the point. I seriously doubt that the Congress of 1998 thought its precious DMCA would be turned into a not-so-funny joke on artists, and I do remember Congressman Howard Berman (one of the House managers for the DMCA) looking like he was going to throw up during the SOPA hearings when he found out how many millions of DMCA notices YouTube alone receives. So why would we make the same mistake again expecting a different outcome? With the same platforms now richer beyond category? Who could possibly defend such garbage as anything but a colossal mistake?

The approach of SB 683 is, by contrast, the opposite of NO FAKES. It tells creators: you don’t need to find the right form; you need to find a judge. It tells platforms: if a court says take it down, you have two days, not two months of emails, BS counter-notices, and a bad case of learned helplessness. True, litigation is more costly than sending a DMCA notice, but litigation is far more likely to be effective in keeping infringing material down and will not become a faux “license” the way the DMCA has.

The DMCA heralded twenty-seven years of normalizing massive and burdensome copyright infringement and raised generations of lawyers to defend the thievery while Big Tech scooped up free-rider rents that it then used for anti-creator lobbying around the world. It should be entirely unsurprising that all of that litigation and lobbying has led us to the current existential crisis.

Lucian Grainge’s throw-down and the emerging fault line

When Mr. Grainge spoke, he wasn’t just defending Universal’s catalog; he was drawing a perimeter around normalizing AI exploitation, and not buying into an even further extension of “permissionless innovation.”

Universal’s position aligns with what California just did. While Congress toys with a federal opt-out regime for AI impersonations, Sacramento quietly passed a law grounded in judicial enforcement and personal rights.  It’s not perfect, but it’s a rejection of the “catch me if you can” ethos that has defined Silicon Valley’s relationship with artists for decades.

A job for the Attorney General

SB 683 leaves enforcement to private litigants, but the scale of AI exploitation demands public enforcement under the authority of the State.  California’s Attorney General should have explicit power to pursue pattern-or-practice actions against companies that:

– Manufacture or distribute AI-generated impersonations of deceased performers (like Sora 2’s synthetic videos).
– Monetize those impersonations through advertising or subscription revenue (like YouTube does right now with the Sora videos).
– Repackage deepfake content as “user-generated” to avoid responsibility.

Such conduct isn’t innovation—it’s unfair competition under California law. AG actions could deliver injunctions, penalties, and restitution far faster than piecemeal suits. And as readers know, I love a good RICO, so let’s put it out there that the AG should consider prosecuting the AI cabal with its interlocking investments under Penal Code §§ 186–186.8, known as the California Control of Profits of Organized Crime Act (CCPOCA) (h/t Seeking Alpha).

While AI platforms complain of “burdensome” and “unproductive” litigation, that’s simply not true of enterprises like the AI cabal—litigation is exactly what was required in order to reveal the truth about massive piracy powering the circular AI bubble economy. Litigation has revealed that the scale of infringement by AI platforms like Anthropic and Meta is so vast that private damages are meaningless. It is increasingly clear these companies are not alone—they have relied on pirate libraries and torrent ecosystems to ingest millions of works across every creative category. Rather than whistle past the graveyard while these sites flourish, government must confront its failure to enforce basic property rights. When theft becomes systemic, private remedies collapse, and enforcement becomes a matter for the state. Even Anthropic’s $1.5 billion settlement feels hollow because the crime is so immense, and not just because U.S. statutory damages were last set in 1999 to confront…CD ripping.

AI regulation as the moment to fix the DMCA

The coming wave of AI legislation represents the first genuine opportunity in a generation to rewrite the online liability playbook.  AI and the DMCA cannot peacefully coexist—platforms will always choose whichever regime helps them keep the money.

If AI regulation inherits the DMCA’s safe harbors, nothing changes. Instead, lawmakers should take the SB 683 cue:
– Restore judicial enforcement.  
– Tie AI liability to commercial benefit. 
– Require provenance, not paperwork.  
– Authorize public enforcement.

The living–deceased gap: California’s unfinished business

SB 683 improves enforcement for living persons, but California’s § 3344.1 already protects deceased individuals against digital replicas.  That creates an odd inversion: John Coltrane’s estate can challenge an AI-generated “Coltrane tone,” but a living jazz artist cannot.   The Legislature should align the two statutes so the living and the dead share the same digital dignity.

Why this matters now

Platforms like YouTube host and monetize videos generated by AI systems such as Sora, depicting deceased performers in fake performances.  If regulators continue to rely on notice-and-takedown, those platforms will never face real risk.   They’ll simply process the takedown, re-serve the content through another channel, and cash another check.

The philosophical pivot

The DMCA taught the world that process can replace principle. SB 683 quietly reverses that lesson.  It says: a person’s identity is not an API, and enforcement should not depend on how quickly you fill out a form.

In the coming fight over AI and creative rights, that distinction matters. California’s experiment in court-centered enforcement could become the model for the next generation of digital law—where substance defeats procedure, and accountability outlives automation.

SB 683 is not a revolution, but it’s a reorientation. It abandons the DMCA’s failed paperwork culture and points toward a world where AI accountability and creator rights converge under the rule of law.

If the federal government insists on doubling down with the NO FAKES Act’s national “opt-out” registry, California may once again find itself leading by quiet example: rights first, bureaucracy last.

Denmark’s Big Idea: Protect Personhood from the Blob With Consent First and Platform Duty Built In

Denmark has given the rest of us a simple, powerful starting point: protect the personhood of citizens from the blob—the borderless slurry of synthetic media that can clone your face, your voice, and your performance at scale. Crucially, Denmark isn’t trying to turn name‑image‑likeness into a mini‑copyright. It’s saying something more profound: your identity isn’t a “work”; it’s you. It’s what is sometimes called “personhood.” That framing changes everything. It’s not commerce, it’s a human right.

The Elements of Personhood

Personhood treats human identity as a moral consideration, not a piece of content. For example, the European Court of Human Rights reads Article 8 ECHR (“private life”) to include personal identity (name, identity integrity, etc.), protecting individual identity against unjustified interference. This is, of course, anathema to Silicon Valley, but the world takes a different view.

In fact, Denmark’s proposal echoes the Universal Declaration of Human Rights. It starts with dignity (Art. 1) and recognition of each person before the law (Art. 6), and it squarely protects private life, honor, and reputation against synthetic impersonation (Art. 12). It balances freedom of expression (Art. 19) with narrow, clearly labeled carve-outs, and it respects creators’ moral and material interests (Art. 27(2)). Most importantly, it delivers an effective remedy (Art. 8): a consent-first rule backed by provenance and cross-platform stay-down, so individuals aren’t forced into DMCA-style learned helplessness.

Why does this matter? Because the moment we call identity or personhood a species of copyright, platforms will reach for a familiar toolbox—quotation, parody, transient copies, text‑and‑data‑mining (TDM)—and claim exceptions to protect them from “data holders.” That’s bleed‑through: the defenses built for expressive works ooze into an identity context where they don’t belong. The result is an unearned permission slip to scrape faces and voices “because the web is public.” Denmark points us in the opposite direction: consent or it’s unlawful. Not “fair use,” not “lawful access,” not “industry custom,” not “data profile.” Consent. Pretty easy concept. It’s one of the main reasons tech executives keep their kids away from cell phones and social media.

Not Replicating the Safe Harbor Disaster

Think about how we got here. The first generation of the internet scaled by pushing risk downstream with a portfolio of safe harbors like the God-awful DMCA and Section 230 in the US. Platforms insisted they deserved blanket liability shields because they were special, “neutral pipes,” a claim no one believed then and no one believes now. These massive safe harbors hardened into a business model that likely added billions to the FAANG bottom line. We taught millions of rightsholders and users to live with learned helplessness: file a notice, watch copies multiply, rinse and repeat. Many users did not know they could even do that much, and frankly still may not. That DMCA‑era whack‑a‑mole turned into a faux license, a kind of “catch me if you can” bargain where exhaustion replaces consent.

Denmark’s New Protection of Personhood for the AI Era

Denmark’s move is a chance to break that pattern—if we resist the gravitational pull back to copyright. A fresh right of identity (called a “sui generis” right among Latin fans) is not subject to copyright or database exceptions, especially fair use, DMCA, and TDM. In plain English: “publicly available” is not permission to clone your face, train on your voice, or fabricate your performance. Or your children’s, either. If an AI platform wants to use identity, it asks first. If it doesn’t ask, it doesn’t get to do it, and it doesn’t get to keep the model it trained on it. And as in many other areas of law, children can’t consent.

That legal foundation unlocks the practical fix creators and citizens actually need: stay‑down across platforms, not endless piecemeal takedowns. Imagine a teacher discovers a convincing deepfake circulating on two social networks and a messaging app. If we treat that deepfake as a copyright issue under the old model, she sends three notices, then five, then twelve. Week two, the video reappears with a slight change. Week three, it’s re‑encoded, mirrored, and captioned. The message she receives under a copyright regime is “you can never catch up.” So why don’t you just give up. Which, of course, in the world of Silicon Valley monopoly rents, is called the plan. That’s the learned helplessness Denmark gives us permission to reject.

Enforcing Personhood

How would the new plan work? First, we treat realistic digital imitations of a person’s face, voice, or performance as illegal absent consent, with only narrow, clearly labeled carve‑outs for genuine public‑interest reporting (no children, no false endorsement, no biometric spoofing risk, provenance intact). That’s the rights architecture: bright lines and human‑centered. Hence, “personhood.”

Second, we wire enforcement to succeed at internet scale. The way out of whack‑a‑mole is a cross‑platform deepfake registry operated with real governance. A deepfake registry doesn’t store videos; it stores non‑reversible fingerprints—exact file hashes for byte‑for‑byte matches and robust, perceptual fingerprints for the variants (different encodes, crops, borders). For audio, we use acoustic fingerprints; for video, scene/frame signatures. These markers will evolve, and so should the registry. One confirmed case becomes a family of identifiers that platforms check at upload and on re‑share. The first takedown becomes the last.
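To make the fingerprint idea concrete, here is a minimal, purely illustrative sketch: an exact hash catches byte‑for‑byte copies, while a toy “average hash” perceptual fingerprint over an 8×8 grayscale grid, compared by Hamming distance, survives small alterations like re‑encodes and brightness tweaks. A real registry would use production perceptual‑hash and acoustic‑fingerprint algorithms; the function names and the threshold here are hypothetical.

```python
import hashlib

def exact_fingerprint(data: bytes) -> str:
    """Byte-for-byte identifier: any re-encode or edit changes this hash."""
    return hashlib.sha256(data).hexdigest()

def average_hash(pixels):
    """Toy perceptual fingerprint over an 8x8 grayscale grid (values 0-255).
    Each bit records whether a pixel is at or above the image's mean
    brightness, so small edits leave most bits unchanged."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p >= mean else 0 for p in flat]

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return sum(x != y for x, y in zip(a, b))

def matches(a, b, threshold=5):
    """Registry check: treat near-identical fingerprints as the same case."""
    return hamming(a, b) <= threshold
```

The point of the two-tier design: a confirmed deepfake contributes both kinds of identifiers to the registry, so the trivially altered reupload (a 10‑pixel border, a slight re-encode) still matches at upload time instead of starting the whack‑a‑mole over.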

Third, we pair that with provenance by default. Provenance isn’t a license; it’s evidence. When credentials are present, content is easier to authenticate, so there is an incentive to use them. Provenance is the rebar that turns legal rules into reliable, automatable processes. However, the absence of credentials doesn’t mean a free‑for‑all.
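A toy sketch of “credentials as evidence”: a provenance credential binds a cryptographic commitment to the media bytes, so a verifier can confirm the bytes haven’t changed since issuance. Real provenance standards such as C2PA use signed manifests and certificate chains, not a shared key; the shared-key HMAC below is only a stand-in to show the verification step.

```python
import hashlib
import hmac

# Stand-in for an issuer's signing key. Real systems use asymmetric
# signatures with certificate chains, not a shared secret.
ISSUER_KEY = b"issuer-secret"

def issue_credential(media: bytes) -> str:
    """Issue a provenance credential: a keyed digest over the media bytes."""
    return hmac.new(ISSUER_KEY, media, hashlib.sha256).hexdigest()

def verify(media: bytes, credential: str) -> bool:
    """Check that the media bytes still match the credential.
    Uses constant-time comparison to avoid timing leaks."""
    expected = hmac.new(ISSUER_KEY, media, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential)
```

Note what verification does and does not prove: a valid credential shows the bytes are what the issuer saw, which is why credentials make authentication cheap and automatable. A missing or failed check proves nothing by itself, which is exactly why absence of credentials can’t be a free pass.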

Finally, we put the onus where it belongs—on platforms. Europe’s Digital Services Act at least theoretically already replaced “willful blindness” with “notice‑and‑action” duties and oversight for very large platforms. Denmark’s identity right gives citizens a clear, national‑law basis to say: “This is illegal content—remove it and keep it down.” The platform’s job isn’t to litigate fair use in the abstract or hide behind TDM. It’s to implement upload checks, preserve provenance, run repeat‑offender policies, and prevent recurrences. If a case was verified yesterday, it shouldn’t be back tomorrow with a 10‑pixel border or other trivial alteration to defeat the rules.

Some will ask: what about creativity and satire? The answer is what it has always been in responsible speech law—more speech, not fake speech. If you’re lampooning a politician with clearly labeled synthetic speech, no implied endorsement, provenance intact, and no risk of biometric spoofing or fraud, you have defenses. The point isn’t to smother satire; it’s to end the pretense that satire requires open season on the biometric identities of private citizens and working artists.

Others will ask: what about research and innovation? Good research runs on consent, especially human subject research (see 45 C.F.R. part 46). If a lab wants to study voice cloning, it recruits consenting participants, documents scope and duration, and keeps data and models in controlled settings. That’s science. What isn’t science is scraping the voices of a country’s population “because the web is public,” then shipping a model that anyone can use to spoof a bank’s call‑center checks. A no‑TDM‑bleed‑through clause draws that line clearly.

And yes, edge cases exist. There will be appeals, mistakes, and hard calls at the margins. That is why the registry must be governed—with identity verification, transparent logs, fast appeals, and independent oversight. Done right, it will look less like a black box and more like infrastructure: a quiet backbone that keeps people safe while allowing reporting and legitimate creative work to thrive.

If Denmark’s spark is to become a firebreak, the message needs to be crisp:

— This is not copyright. Identity is a personal right; copyright defenses don’t apply.

— Consent is the rule. A deepfake without consent is unlawful.

— No TDM bleed‑through. “Publicly available” does not equate to permission to clone or train.

— Provenance helps prove, not permit. Keep credentials intact; stripping them has consequences.

— Stay‑down, cross‑platform. One verified case should not become a thousand reuploads.

That’s how you protect personhood from the blob. By refusing to treat humans like “content,” by ending the faux‑license of whack‑a‑mole, and by making platforms responsible for prevention, not just belated reaction. Denmark has given us the right opening line. Now we should finish the paragraph: consent or block. Label it, prove it, or remove it.

Shilling Like It’s 1999: Ars, Anthropic, and the Internet of Other People’s Things

Ars Technica just ran a piece headlined “AI industry horrified to face largest copyright class action ever certified.”

It’s the usual breathless “innovation under siege” framing—complete with quotes from “public interest” groups that have long been in the paid service of Big Tech, as you can see from the Google Shill List submitted to Judge Alsup in the Oracle case and Public Citizen’s Mission Creep-y report. Judge Alsup…hmmm…isn’t he the judge in the very Anthropic case that Ars is going on about?

Here’s what Ars left out: most of these so-called advocacy outfits—EFF, Public Knowledge, CCIA, and their cousins—have been doing Google’s bidding for years, rebranding corporate priorities as public interest. It’s an old play: weaponize the credibility of “independent” voices to protect your bottom line.

The article parrots the industry’s favorite excuse: proving copyright ownership is too hard, so these lawsuits are bound to fail. That line would be laughable if it weren’t so tired; it’s like elder abuse. We live in the age of AI deduplication, manifest checking, and robust content hashing—technologies the AI companies themselves use daily to clean, track, and optimize their training datasets. If they can identify and strip duplicates to improve model efficiency, they can identify and track copyrighted works. What they mean is: “We’d rather not, because it would expose the scale of our free-riding.”
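The deduplication point is easy to demonstrate. The same normalize-and-hash step labs use to strip duplicate documents from a training corpus produces a stable content identifier that can be checked against a rightsholder manifest. A minimal sketch, with a hypothetical manifest and helper names:

```python
import hashlib

def content_id(text: str) -> str:
    """Normalize, then hash. The canonicalization used to deduplicate a
    training corpus (lowercase, collapsed whitespace) also yields a
    stable lookup key for the same text in different encodings."""
    normalized = " ".join(text.lower().split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# Hypothetical rightsholder manifest: content ID -> registered work.
MANIFEST = {
    content_id("The quick brown fox jumps over the lazy dog."): "Registered Work #1",
}

def check_training_doc(document: str):
    """Return the registered work this document duplicates, or None."""
    return MANIFEST.get(content_id(document))
```

If a pipeline can compute identifiers like these at corpus scale to improve model efficiency, it can run the manifest lookup in the same pass; the obstacle is incentive, not engineering.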

That’s the unspoken truth behind these lawsuits. They’re not about “stifling innovation.” They’re about holding accountable an industry that’s built its fortunes on what can only be called the Internet of Other People’s Things—a business model where your creative output, your data, and your identity are raw material for someone else’s product, without permission, payment, or even acknowledgment.

Instead of cross-examining these corporate talking points like you know…journalists…Ars lets them pass unchallenged, turning what could have been a watershed moment for transparency into a PR assist. That’s not journalism—it’s message laundering.

The lawsuit doesn’t threaten the future of AI. It threatens the profitability of a handful of massive labs—many backed by the same investors and platforms that bankroll these “public interest” mouthpieces. If the case succeeds, it could force AI companies to abandon the Internet of Other People’s Things and start building the old-fashioned way: by paying for what they use.

Come on, Ars. Do we really have to go through this again? If you’re going to quote industry-adjacent lobbyists as if they were neutral experts, at least tell readers who’s paying the bills. Otherwise, it’s just shilling like it’s 1999.

When Viceroy David Sacks Writes the Tariffs: How One VC Could Weaponize U.S. Trade Against the EU

David Sacks is a “Special Government Employee,” a Silicon Valley insider, and a PayPal mafioso who has become one of the most influential “unofficial” architects of AI policy under the Trump administration. No confirmation hearings, no formal role—but direct access to power.

He:
– Hosts influential political podcasts with Musk and Thiel-aligned narratives.
– Coordinates behind closed doors with elite AI companies who are now PRC-style “national champions” (OpenAI, Anthropic, Palantir).
– Has reportedly played a central role in shaping the AI Executive Orders and industrial strategy driving billions in public infrastructure to favored firms.

Under 18 U.S.C. § 202(a), a Special Government Employee is:

  • Temporarily retained to perform limited government functions,
  • Appointed for no more than 130 days per year (which for Sacks ends either April 14 or May 30, 2025), unless reappointed in a different role,
  • Typically serving in an advisory or consultative role, and
  • Not holding actual decision-making or operational authority over federal programs or agencies.

SGEs are used to avoid conflict-of-interest entanglements for outside experts while still tapping their expertise for advisory purposes. They are not supposed to wield sweeping executive power or effectively run a government program. Yeah, right.

And like a good little Silicon Valley weasel, Sacks supposedly is alternating between his DC side hustle and his VC office to stay under 130 days. This is a dumbass reading of the statute, which says “‘Special Government employee’ means… any officer or employee…retained, designated, appointed, or employed…to perform…temporary duties… for not more than 130 days during any period of 365 consecutive days.” That’s not the same as “working” 130 days on the time card punch. But oh well.

David Sacks has already exceeded the legal boundaries of his appointment as a Special Government Employee (SGE), both in time served and in scope. He has directed the implementation of a sweeping, whole-of-government AI policy, including authoring executive orders, issuing binding directives to federal agencies, and coordinating interagency enforcement strategies—actions that plainly constitute executive authority reserved for duly appointed officers under the Appointments Clause. As an SGE, Sacks is authorized only to provide temporary, nonbinding advice, not to exercise operational control or policy-setting discretion across the federal government. Accordingly, any executive actions taken at his direction or based on his advisement are constitutionally infirm as the unlawful product of an individual acting without valid authority, and must be deemed void as “fruit of the poisonous tree.”

Of course, one of the states the Trump AI Executive Orders will collide with almost immediately is the European Union and its EU AI Act. Were they the 51st? No, that’s Canada. The 52nd? Ah, right, that’s Greenland. Must be the 53rd.

How Could David Sacks Weaponize Trade Policy to Help His Constituents in Silicon Valley?

Here’s the playbook:

Engineer Executive Orders

Through his demonstrated access to Trump and senior White House officials, Sacks could promote executive orders under the International Emergency Economic Powers Act (IEEPA) or Section 301 of the Trade Act, aimed at punishing countries (like EU members) for “unfair restrictions” on U.S. AI exports or operations.

Something like this: “The European Union’s AI Act constitutes a discriminatory and protectionist measure targeting American AI innovation, and materially threatens U.S. national security and technological leadership.” I got your moratorium right here.

Leverage the USTR as a Blunt Instrument

The Office of the U.S. Trade Representative (USTR) can initiate investigations under Section 301 without needing new laws. All it takes is political will—and a nudge from someone like Viceroy Sacks—to argue that the EU’s AI Act discriminates against U.S. firms. See Canada’s “Tech Tax”. Gee, I wonder if Viceroy Sacks had anything to do with that one.

Redefine “National Security”

Sacks and his allies can exploit the Trump administration’s loose definition of “national security,” claiming that restricting U.S. AI firms in Europe endangers critical defense and intelligence capabilities.

Smear Campaigns and Influence Operations

Sacks could launch more public campaigns against the EU like his attacks on the AI diffusion rule. According to the BBC, “Mr. Sacks cited the alienation of allies as one of his key arguments against the AI diffusion plan”. That’s a nice ally you got there, be a shame if something happened to it.

After all, the EU AI Act does everything Sacks despises: it protects artists and consumers, restricts deployment of high-risk AI systems (like facial recognition and social scoring), requires documentation of training data (which exposes copyright violations), and applies extraterritorially (meaning U.S. firms must comply even at home).

And don’t forget, Viceroy Sacks actually was given a portfolio that at least indirectly includes the National Security Council, so he can use the NATO connection to put a fine edge on his “industrial patriotism” just as war looms over Europe.

When Policy Becomes Personal

In a healthy democracy, trade retaliation should be guided by evidence, public interest, and formal process.

But under the current setup, someone like David Sacks can short-circuit the system—turning a private grievance into a national trade war. He’s already done it to consumers, wrongful death claims, and copyright; why not join warlords like Eric Schmidt and really jack with people? Like give deduplication a whole new meaning.

When one man’s ideology becomes national policy, it’s not just bad governance.

It’s a broligarchy in real time.

Beyond Standard Oil: How the AI Action Plan Made America a Command Economy for Big Tech That You Will Pay For

When the White House requested public comments earlier this year on how the federal government should approach artificial intelligence, thousands of Americans—ranging from scientists to artists, labor leaders to civil liberties advocates—responded with detailed recommendations. Yet when America’s AI Action Plan was released today, it became immediately clear that those voices were largely ignored. The plan reads less like a response to public input and more like a pre-written blueprint drafted in collaboration with the very corporations it benefits. The priorities, language, and deregulatory thrust suggest that the real consultations happened behind closed doors—with Big Tech executives, not the American people.

In other words, business as usual.

By any historical measure—Standard Oil, AT&T, or even the Cold War military-industrial complex—the Trump Administration’s “America’s AI Action Plan” represents a radical leap toward a command economy built for and by Big Tech. Only this time, there are no rate regulations, no antitrust checks, and no public obligations—just streamlined subsidies, deregulation, and federally orchestrated dominance by a handful of private AI firms.

“Frontier Labs” as National Champions

The plan doesn’t pretend to be neutral. It picks winners—loudly. Companies like OpenAI, Anthropic, Meta, Microsoft, and Google are effectively crowned as “national champions,” entrusted with developing the frontier of artificial intelligence on behalf of the American state.

– The National AI Research Resource (NAIRR) and National Science Foundation partnerships funnel taxpayer-funded compute and talent into these firms.
– Federal procurement standards now require models that align with “American values,” but only as interpreted by government-aligned vendors.
– These companies will receive priority access to compute in a national emergency, hard-wiring them into the national security apparatus.
– Meanwhile, so-called “open” models will be encouraged in name only—no requirement for training data transparency, licensing, or reproducibility.

This is not a free market. This is national champion industrial policy—without the regulation or public equity ownership that historically came with it.

Infrastructure for Them, Not Us

The Action Plan reads like a wishlist from Silicon Valley’s executive suites:

– Federal lands are being opened up for AI data centers and energy infrastructure.
– Environmental and permitting laws are gutted to accelerate construction of facilities for private use.
– A national electrical grid expansion is proposed—not to serve homes and public transportation, but to power hyperscaler GPUs for model training.
– There’s no mention of public access, community benefit, or rural deployment. This is infrastructure built at public expense for private use.

Even during the era of Ma Bell, the public got universal service and price caps. Here? The public is asked to subsidize the buildout and then stand aside.

Deregulation for the Few, Discipline for the Rest

The Plan explicitly orders:
– Rescission of Biden-era safety and equity requirements.
– Reviews of FTC investigations to shield AI firms from liability.
– Withholding of federal AI funding from states that attempt to regulate the technology for safety, labor, or civil rights purposes.

Meanwhile, these same companies are expected to supply the military, detect cyberattacks, run cloud services for federal agencies, and set speech norms in government systems.

The result? An unregulated cartel tasked with executing state functions.

More Extreme Than Standard Oil or AT&T

Let’s be clear: Standard Oil was broken up. AT&T had to offer regulated universal service. Lockheed, Raytheon, and the Cold War defense contractors were overseen by procurement auditors and GAO enforcement.

This new AI economy is more privatized than any prior American industrial model—yet more dependent on the federal government than ever before. It’s an inversion of free market principles wrapped in American flags and GPU clusters.

Welcome to the Command Economy—For Tech Oligarchs

There’s a word for this: command economy. But instead of bureaucrats in Soviet ministries, we now have a handful of unelected CEOs directing infrastructure, energy, science, education, national security, and labor policy—all through cozy relationships with federal agencies.

If we’re going to nationalize AI, let’s do it honestly—with public governance, democratic accountability, and shared benefit. But this halfway privatized, fully subsidized, and wholly unaccountable structure isn’t capitalism. It’s capture.

David Sacks Is Learning That the States Still Matter

For a moment, it looked like the tech world’s powerbrokers had pulled it off. Buried deep in a Republican infrastructure and tax package was a sleeper provision — the so-called AI moratorium — that would have blocked states from passing their own AI laws for up to a decade. It was an audacious move: centralize control over one of the most consequential technologies in history, bypass 50 state legislatures, and hand the reins to a small circle of federal agencies and especially to tech industry insiders.

But then it collapsed.

The Senate voted 99–1 to strike the moratorium. Governors rebelled. Attorneys general sounded the alarm. Artists, parents, workers, and privacy advocates from across the political spectrum said “no.” Even hardline conservatives like Ted Cruz eventually reversed course when it came down to the final vote. The message to Big Tech, and to the famous “Little Tech,” was clear: the states still matter, and America’s tech elite ignore that at their peril. (“Little Tech” is the latest rhetorical deflection promoted by Big Tech, aka propaganda.)

The old Google crowd pushed the moratorium; their fingerprints were obvious, having gotten fabulously rich off their two favorites: the DMCA farce and the Section 230 shakedown. But there’s increasing speculation that White House AI Czar and Silicon Valley Viceroy David Sacks, PayPal alum and vocal MAGA-world player, was calling the ball. If true, that makes this defeat even more revealing.

Sacks represents something of a new breed of power-hungry tech-right influencer — part of the emerging “Red Tech” movement that claims to reject woke capitalism and coastal elitism but still wants experts to shape national policy from Silicon Valley, a chapter straight out of Philip Dru: Administrator. Sacks is tied to figures like Peter Thiel, Elon Musk, and a growing network of Trump-aligned venture capitalists. But even that alignment couldn’t save the moratorium.

Why? Because the core problem wasn’t left vs. right. It was top vs. bottom.

In 1964, Ronald Reagan’s classic speech called A Time for Choosing warned about “a little intellectual elite in a far-distant capitol” deciding what’s best for everyone else. That warning still rings true — except now the “capitol” might just be a server farm in Menlo Park or a podcast studio in LA.

The AI moratorium was an attempt to govern by preemption and fiat, not by consent. And the backlash wasn’t partisan. It came from red states and blue ones alike — places where elected leaders still think they have the right to protect their citizens from unregulated surveillance, deepfakes, data scraping, and economic disruption.

So yes, the defeat of the moratorium was a blow to Google’s strategy of soft-power dominance. But it was also a shot across the bow for David Sacks and the would-be masters of tech populism. You can’t have populism without the people.

If Sacks and his cohort want to play a long game in AI policy, they’ll have to do more than drop ideas into the policy laundry of think tank white papers and Beltway briefings. They’ll need to win public trust, respect state sovereignty, and remember that governing by sneaky safe harbors is no substitute for legitimacy.  

The moratorium failed because it presumed America could be governed like a tech startup — from the top, at speed, with no dissent. Turns out the country is still under the impression they have something to say about how they are governed, especially by Big Tech.

Steve’s Not Here: Why AI Platforms Are Still Acting Like Pirate Bay

In 2006, I wrote “Why Not Sell MP3s?” — a simple question pointing to an industry in denial. The dominant listening format was the MP3 file, yet labels were still trying to sell CDs or hide digital files behind brittle DRM. It seems kind of incredible in retrospect, but believe me it happened. Many cycles were burned on that conversation. Fans had moved on. The business hadn’t.

Then came Steve Jobs.

At the launch of the iTunes Store — and I say this as someone who sat in the third row — Jobs gave one of the most brilliant product presentations I’ve ever seen. He didn’t bulldoze the industry. He waited for permission, but only after crafting an offer so compelling it was as if the labels should be paying him to get in. He brought artists on board first. He made it cool, tactile, intuitive. He made it inevitable.

That’s not what’s happening in AI.

Incantor: DRM for the Input Layer

Incantor is trying to be the clean-data solution for AI — a system that wraps content in enforceable rights metadata, licenses its use for training and inference, and tracks compliance. It’s DRM, yes — but applied to training inputs instead of music downloads.

It may be imperfect, but at least it acknowledges that rights exist.

What’s more troubling is the contrast between Incantor’s attempt to create structure and the behavior of the major AI platforms, which have taken a very different route.

AI Platforms = Pirate Bay in a Suit

Today’s generative AI platforms — the big ones — aren’t behaving like Apple. They’re behaving like The Pirate Bay with a pitch deck.

– They ingest anything they can crawl.
– They claim “public availability” as a legal shield.
– They ignore licensing unless forced by litigation or regulation.
– They posture as infrastructure, while vacuuming up the cultural labor of others.

These aren’t scrappy hackers. They’re trillion-dollar companies acting like scraping is a birthright. Where Jobs sat down with artists and made the economics work, the platforms today are doing everything they can to avoid having that conversation.

This isn’t just indifference — it’s design. The entire business model depends on skipping the licensing step and then retrofitting legal justifications later. They’re not building an ecosystem. They’re strip-mining someone else’s.

What Incantor Is — and Isn’t

Incantor isn’t Steve Jobs. It doesn’t control the hardware, the model, the platform, or the user experience. It can’t walk into the room and command the majors to listen with elegance. But what it is trying to do is reintroduce some form of accountability — to build a path for data that isn’t scraped, stolen, or in legal limbo.

That’s not an iTunes power move. It’s a cleanup job. And it won’t work unless the AI companies stop pretending they’re search engines and start acting like publishers, licensees, and creative partners.

What the MP3 Era Actually Taught Us

The MP3 era didn’t end because DRM won. It ended because someone found a way to make the business model and the user experience better — not just legal, but elegant. Jobs didn’t force the industry to change. He gave them a deal they couldn’t refuse.

Today, there’s no Steve Jobs. No artists on stage at AI conferences. No tactile beauty. Just cold infrastructure, vague promises, and a scramble to monetize other people’s work before the lawsuits catch up. Let’s face it–when it comes to Elon, Sam, or Zuck, would you buy a used Mac from that man?

If artists and AI platforms were in one of those old “I’m a Mac / I’m a PC” commercials, you wouldn’t need to be told which is which. One side is creative, curious, collaborative. The other is corporate, defensive, and vaguely annoyed that you even asked the question.

Until that changes, platforms like Incantor will struggle to matter — and the AI industry will continue to look less like iTunes, and more like Pirate Bay with an enterprise sales team.