The Paradox of Huang’s Rope

If the tech industry has a signature fallacy for the 2020s aside from David Sacks, it belongs to Jensen Huang. The CEO of Nvidia has perfected a circular, self-consuming logic so brazen that it deserves a name: The Paradox of Huang’s Rope. It is the argument that China is too dangerous an AI adversary for the United States to regulate artificial intelligence at home or control the export of Nvidia chips abroad—while insisting in the very next breath that the U.S. must allow him to keep selling China the advanced Nvidia chips that make China’s advanced AI capabilities possible. The justification destroys its own premise, like handing an adversary the rope to hang you and then pointing to the length of that rope as evidence that you must keep selling more, perhaps to ensure a more “humane” hanging. I didn’t think it was possible to beat “sharing is caring” for utter fallacious bollocks.

The Paradox of Huang’s Rope works like this: First, hype China as an existential AI competitor. Second, declare that any regulatory guardrails—whether they concern training data, safety, export controls, or energy consumption—will cause America to “fall behind.” Third, invoke national security to insist that the U.S. government must not interfere with the breakneck deployment of AI systems across the economy. And finally, quietly lobby for carveouts that allow Nvidia to continue selling ever more powerful chips to the same Chinese entities supposedly creating the danger that justifies deregulation.

It is a master class in circularity: “China is dangerous because of AI → therefore we can’t regulate AI → therefore we must sell China more AI chips → therefore China is even more dangerous → therefore we must regulate even less and export even more to China.” At no point does the loop allow for the possibility that reducing the United States’ role as China’s primary AI hardware supplier might actually reduce the underlying threat. Instead, the logic insists that the only unacceptable risk is the prospect of Nvidia making slightly less money.

This is not hypothetical. While Washington debates export controls, Huang has publicly argued that restrictions on chip sales to China could “damage American technology leadership”—a claim that conflates Nvidia’s quarterly earnings with the national interest. Meanwhile, U.S. intelligence assessments warn that China is building fully autonomous weapons systems, and European analysts caution that Western-supplied chips are appearing in PLA research laboratories. Yet the policy prescription from Nvidia’s corner remains the same: no constraints on the technology, no accountability for the supply chain, and no acknowledgment that the market incentives involved have nothing to do with keeping Americans safe. And anyone who criticizes the authoritarian state run by the Chinese Communist Party is a “China Hawk,” a label Huang calls a “badge of shame” and “unpatriotic,” because protecting America from China by cutting off chip exports “destroys the American Dream.” Say what?

The Paradox of Huang’s Rope mirrors other Cold War–style fallacies, in which companies invoke a foreign threat to justify deregulation while quietly accelerating that threat through their own commercial activity. But in the AI context, the stakes are higher. AI is not just another consumer technology; its deployment shapes military posture, labor markets, information ecosystems, and national infrastructure. A strategic environment in which U.S. corporations both enable and monetize an adversary’s technological capabilities is one that demands more regulation, not less.

Naming the fallacy matters because it exposes the intellectual sleight of hand. Once the circularity is visible, the argument collapses. The United States does not strengthen its position by feeding the very capabilities it claims to fear. And it certainly does not safeguard national security by allowing one company’s commercial ambitions to dictate the boundaries of public policy. The Paradox of Huang’s Rope should not guide American AI strategy. It should serve as a warning of how quickly national priorities can be twisted into a justification for private profit.

Back to Commandeering Again: David Sacks, the AI Moratorium, and the Executive Order Courts Will Hate

Why Silicon Valley’s in-network defenses can’t paper over federalism limits.

The old line attributed to music lawyer Allen Grubman is, “No conflict, no interest.” Conflicts are part of the music business. But the AI moratorium that David Sacks is pushing onto President Trump (the idea that Washington should freeze or preempt state AI protections in the absence of federal AI policy) takes that logic to a different altitude. It asks the public to accept not just conflicts of interest, but centralized control of AI governance built around the financial interests of a small advisory circle, including Mr. Sacks himself.

When the New York Times published its reporting on Sacks’s hundreds of AI investments and his role in shaping federal AI and chip policy, the reaction from Silicon Valley was immediate and predictable. What’s most notable is who didn’t show up. No broad political coalition. No bipartisan defense. Just a tight cluster of VC and AI-industry figures from the AI-crypto-tech nexus, praising their friend Mr. Sacks and attacking the story.

And the pattern was unmistakable: a series of non-denial denials from people who, it is fair to say, are massively conflicted themselves.

No one said the Times lied.

No one refuted the documented conflicts.

Instead, Sacks’s tech-bro defenders attacked the story’s tone, implied bias, and suggested the article merely arranged “negative truths” into an unflattering narrative (although the Times did not even bring up Mr. Sacks’s moratorium scheme).

And you know who has yet to defend Mr. Sacks? Donald J. Trump. Which tells you all you need to know.

The Rumored AI Executive Order and Federal Lawsuits Against States

Behind the spectacle sits the most consequential part of the story: a rumored executive order that would direct the U.S. Department of Justice to sue states whose laws “interfere with AI development.” Reuters reports that “U.S. President Donald Trump is considering an executive order that would seek to preempt state laws on artificial intelligence through lawsuits and by withholding federal funding, according to a draft of the order seen by Reuters….”

That is not standard economic policy. That is not innovation strategy. That is commandeering — the same old unconstitutional move in shiny AI packaging that we’ve discussed many times starting with the One Big Beautiful Bill Act catastrophe.

The Supreme Court has been clear on this, as in Printz v. United States, 521 U.S. 898, 925 (1997): “[O]pinions of ours have made clear that the Federal Government may not compel the States to implement, by legislation or executive action, federal regulatory programs.”

Crucially, the Printz Court teaches us what I think is the key fact: federal policy for all the United States is to be made by the legislative process in regular order, subject to a vote of the people’s representatives, or by executive branch agencies that are led by Senate-confirmed officers of the United States appointed by the President and subject to public scrutiny under the Administrative Procedure Act. Period.

The federal government then implements its own policies directly. It cannot order states to implement federal policy, including in the negative by prohibiting states from exercising their Constitutional powers in the absence of federal policy. The Supreme Court crystallized this issue in the recent congressional commandeering case Murphy v. NCAA, 138 S. Ct. 1461 (2018), where the Court held that “[t]he distinction between compelling a State to enact legislation and prohibiting a State from enacting new laws is an empty one. The basic principle—that Congress cannot issue direct orders to state legislatures—applies in either event.” Read together, Printz and Murphy extend this core principle of federalism to executive orders.

The “presumption against preemption” is a canon of statutory interpretation that the Supreme Court has repeatedly held to be a foundational principle of American federalism. It also has the benefit of common sense. The canon reflects the deep Constitutional understanding that, unless Congress clearly says otherwise—which implies Congress has spoken—states retain their traditional police powers over matters such as the health, safety, land use, consumer protection, labor, and property rights of their citizens. Courts begin with the assumption that federal law does not displace state law, especially in areas the states have regulated for generations, all of which are implicated in the AI “moratorium.”

The Supreme Court has repeatedly affirmed this principle. When Congress legislates in fields historically occupied by the states, courts require a clear and manifest purpose to preempt state authority. Ambiguous statutory language is interpreted against preemption. This is not a policy preference—it is a rule of interpretation rooted in constitutional structure and respect for state sovereignty that goes back to the Founders.

The presumption is strongest where federal action would displace general state laws rather than conflict with a specific federal command. Consumer protection statutes, zoning and land-use controls, tort law, data privacy, and child-safety laws fall squarely within this protected zone. Federal silence is not enough; nor is agency guidance or executive preference.

In practice, the presumption against preemption forces Congress to own the consequences of preemption. If lawmakers intend to strip states of enforcement authority, they must do so plainly and take political responsibility for that choice. This doctrine serves as a crucial brake on back-door federalization, preventing hidden preemption in technical provisions and preserving the ability of states to respond to emerging harms when federal action lags or stalls. Like in A.I.

Applied to an A.I. moratorium, the presumption against preemption cuts sharply against federal action. A moratorium that blocks states from legislating even where Congress has chosen not to act flips federalism on its head—turning federal inaction into total regulatory paralysis, precisely what the presumption against preemption forbids.

As the Congressional Research Service primer on preemption concludes:

The Constitution’s Supremacy Clause provides that federal law is “the supreme Law of the Land” notwithstanding any state law to the contrary. This language is the foundation for the doctrine of federal preemption, according to which federal law supersedes conflicting state laws. The Supreme Court has identified two general ways in which federal law can preempt state law. First, federal law can expressly preempt state law when a federal statute or regulation contains explicit preemptive language. Second, federal law can impliedly preempt state law when Congress’s preemptive intent is implicit in the relevant federal law’s structure and purpose.

In both express and implied preemption cases, the Supreme Court has made clear that Congress’s purpose is the “ultimate touchstone” of its statutory analysis. In analyzing congressional purpose, the Court has at times applied a canon of statutory construction known as the “presumption against preemption,” which instructs that federal law should not be read as superseding states’ historic police powers “unless that was the clear and manifest purpose of Congress.”

If there is no federal statute, no one has any idea what that purpose is, certainly no justiciable idea. Therefore, my bet is that the Court would hold that the Executive Branch cannot unilaterally create preemption, and neither can the DOJ sue states simply because the White House dislikes their AI, privacy, or biometric laws, much less their zoning laws applied to data centers.

Why David Sacks’s Involvement Raises the Political Temperature

As F. Scott Fitzgerald famously wrote, the very rich are different. But here’s what’s not different—David Sacks has something he’s not used to having. A boss. And that boss has polls. And those polls are not great at the moment. It’s pretty simple, really. When you work for a politician, your job is to make sure his polls go up, not down.

David Sacks is making his boss look bad. Presidents do not relish waking up to front-page stories that suggest their “A.I. czar” holds hundreds of investments directly affected by federal A.I. strategy, that major policy proposals track industry wish lists more closely than public safeguards, or that rumored executive orders could ignite fifty-state constitutional litigation led by his own supporters like Mike Davis and egged on by people like Steve Bannon.

Those stories don’t just land on the advisor; they land on the President’s desk, framed as questions of his judgment, control, and competence. And in politics, loyalty has a shelf life. The moment an advisor stops being an asset and starts becoming a daily distraction, much less a liability, the calculus changes fast. What matters then is not mansions, brilliance, ideology, or past service, but whether keeping that advisor costs more than cutting them loose. I give you Elon Musk.

AI Policy Cannot Be Built on Preemption-by-Advisor

At bottom, this is a bet. The question isn’t whether David Sacks is smart, well-connected, or persuasive inside the room. The real question is whether Donald Trump wants to stake his presidency on David Sacks being right—right about constitutional preemption, right about executive authority, right about federal power to block the states, and right about how courts will react.

Because if Sacks is wrong, the fallout doesn’t land on him. It lands on the President. A collapsed A.I. moratorium, fifty-state litigation, injunctions halting executive action, and judges citing basic federalism principles would all be framed as defeats for Trump, not for an advisor operating at arm’s length.

Betting the presidency on an untested legal theory pushed by a politically exposed “no conflict, no interest” tech investor isn’t bold leadership. It’s unnecessary risk. When Trump’s second term is over in a few years, he will be in the history books for all time. No one will remember who David Sacks was.

Marc Andreessen’s Dormant Commerce Clause Fantasy

There’s a special kind of hubris in Silicon Valley, but Marc Andreessen may have finally discovered its purest form: imagining that the Dormant Commerce Clause (DCC) — a Constitutional doctrine his own philosophical allies loathe — will be his golden chariot into the Supreme Court to eliminate state AI regulation.

If you know the history, it borders on comedic, if you think that Ayn Rand is a great comedienne.

The DCC is a judge‑created doctrine inferred from the Commerce Clause (Article I, Section 8, Clause 3), preventing states from discriminating against or unduly burdening interstate commerce. Conservatives have long attacked it as a textless judicial invention. Justice Scalia called it a “judicial fraud”; Justice Thomas wants it abolished outright. Yet Andreessen’s Commerce Clause playbook is built on expanding a doctrine the conservative legal movement has spent 40 years dismantling.

Worse for him, the current Supreme Court is the least sympathetic audience possible.

Justice Gorsuch has repeatedly questioned DCC’s legitimacy and rejects free‑floating “extraterritoriality” theories. Justice Barrett, a Scalia textualist, shows no appetite for expanding the doctrine beyond anti‑protectionism. Justice Kavanaugh is business‑friendly but wary of judicial policymaking. None of these justices would give Silicon Valley a nationwide deregulatory veto disguised as constitutional doctrine. Add Alito and Thomas, and Andreessen couldn’t scrape a majority.

And then there’s Ted Cruz — a former Rehnquist clerk and avowed Scalia devotee — loudly cheerleading a doctrine Scalia spent decades attacking.

National Pork Producers Council v. Ross (2023): The Warning Shot

Andreessen’s theory also crashes directly into the Supreme Court’s fractured decision in its most recent DCC case, National Pork Producers Council v. Ross (2023), where industry groups tried to use the DCC to strike down California’s animal-welfare law because of its national economic effects.

The result? A deeply splintered Court produced several opinions. Justice Gorsuch announced the judgment of the Court and delivered the opinion of the Court with respect to Parts I, II, III, IV–A, and V, in which Justices Thomas, Sotomayor, Kagan, and Barrett joined; an opinion with respect to Parts IV–B and IV–D, in which Justices Thomas and Barrett joined; and an opinion with respect to Part IV–C, in which Justices Thomas, Sotomayor, and Kagan joined. Justice Sotomayor filed an opinion concurring in part, in which Justice Kagan joined. Justice Barrett filed an opinion concurring in part. Chief Justice Roberts filed an opinion concurring in part and dissenting in part, in which Justices Alito, Kavanaugh, and Jackson joined. Justice Kavanaugh filed an opinion concurring in part and dissenting in part.

Got it?  

The upshot:
– No majority for expanding DCC “extraterritoriality.”
– No appetite for using DCC to invalidate state laws simply because they influence out‑of‑state markets.
– Multiple justices signaling that courts should not second‑guess state policy judgments through DCC balancing.
– Gorsuch’s lead opinion rejected the very arguments Silicon Valley now repackages for AI.

If Big Tech thinks this Court that decided National Pork (no pun intended) will hand them a nationwide kill‑switch on state AI laws, they profoundly misunderstand both the doctrine and the Court.

Andreessen didn’t just pick the wrong legal strategy. He picked the one doctrine the current Court is least willing to expand. The Dormant Commerce Clause isn’t a pathway to victory — it’s a constitutional dead end masquerading as innovation policy.

But…maybe he’s crazy like a fox.  

The Delay’s the Thing: The Dormant Commerce Clause as Delay Warfare

To paraphrase Saul Alinsky: the issue is never the issue; the issue is always delay. Of course, if delay is the true objective, you couldn’t pick a better stalling tactic than hanging an entire federal moratorium on one of the Supreme Court’s most obscure and internally conflicted doctrines. The Dormant Commerce Clause isn’t a real path to victory—not with a Court where Scalia’s intellectual heirs openly question its legitimacy. But it is the perfect fig leaf for an executive order.

The point isn’t to win the case. The point is to give Trump just enough constitutional garnish to issue the EO, freeze state enforcement, and force every challenge into multi‑year litigation. That buys the AI industry exactly what it needs: time. Time to scale. Time to consolidate. Time to embed itself into public infrastructure and defense procurement. Time to become “too essential to regulate” or, as Senator Hawley asked, too big to prosecute?

Big Tech doesn’t need a Supreme Court victory. It needs a judicial cloud, a preemption smokescreen, and a procedural maze that chills state action long enough for the industry to entrench itself permanently. And no one knows that better than the moratorium’s biggest cheerleader, Senator Ted Cruz, the former Rehnquist clerk.

The Dormant Commerce Clause, in this context, isn’t a doctrine. It’s delay‑ware—legal molasses poured over every attempt by states to protect their citizens. And that delay may just be the real prize.

Structural Capture and the Trump AI Executive Order

The AI Strikes Back: When an Executive Order empowers the Department of Justice to sue states, the stakes go well beyond routine federal–state friction. 

In the draft Trump AI Executive Order, DOJ is directed to challenge state AI laws that purportedly “interfere with national AI innovation.” This is not mere oversight—it operates as an in terrorem clause, signaling that states regulating AI may face federal litigation driven as much by private interests as by public policy.

AI regulation sits squarely at the intersection of longstanding state police powers: consumer protection, public safety, impersonation harms, utilities, land and water use, and labor conditions.  States also control the electrical utilities and zoning infrastructure that AI data centers depend on. 

Directing DOJ to attack these state laws, many of which already exist and were duly passed by state legislatures, effectively deputizes the federal government as the legal enforcer for a handful of AI companies seeking uniformity without engaging in the legislative process. Or said another way, the AI can now strike back.

This is where structural capture emerges. Frontier AI models thrive on certain conditions: access to massive compute, uninhibited power, frictionless deployment, and minimal oversight. Those engineering incentives map cleanly onto the EO’s enforcement logic. The DOJ becomes a mechanism for preserving the environment AI models need to scale and thrive.

There’s also the “elite merger” dynamic: AI executives who sit on federal commissions, defense advisory boards, and industrial-base task forces are now positioned to shape national AI policy directly to benefit the AI. The EO’s structure reflects the priorities of firms that benefit most from exempting AI systems from what they call “patchwork” oversight, also known as federalism.

The constitutional landscape is equally important. Under Supreme Court precedent, the executive cannot create enforcement powers not delegated by Congress. Under the major questions doctrine of West Virginia v. EPA, agencies cannot assume sweeping authority without explicit statutory grounding. And under cases like Murphy and Printz, the federal government cannot forbid states from legislating in traditional domains.

So President Trump is creating the legal basis for an AI to use the courts to protect itself from any encroachment on its power by acting through its human attendants, including the President.

The most fascinating question is this: What happens if DOJ sues a state under this EO—and loses?

A loss would be the first meaningful signal that AI cannot rely on federal supremacy to bulldoze state authority. Courts could reaffirm that consumer protection, utilities, land use, and safety remain state powers, even in the face of an EO asserting “national innovation interests,” whatever that means.

But the deeper issue is how the AI ecosystem responds to a constraint. If AI firms shift immediately to lobbying Congress for statutory preemption, or argue that adverse rulings “threaten national security,” we learn something critical: the real goal isn’t legal clarity, but insulating AI development from constraint.

At the systems level, a DOJ loss may even feed back into corporate strategy. Internal policy documents and model-aligned governance tools might shift toward minimizing state exposure or crafting new avenues for federal entanglement. A courtroom loss becomes a step in a longer institutional reinforcement loop while AI labs search for the next, more durable form of protection—but the question is, for whom? We may assume that of course humans would always win these legal wrangles, but I wouldn’t be so sure that would always be the outcome.

Recall that Larry Page referred to Elon Musk as a “speciesist” for human-centric thinking. And of course Lessig (who has a knack for being on the wrong side of practically every issue involving humans) taught a course with Kate Darling at Harvard Law School called “Robot Rights” around 2010. Not even Lessig would come right out and say robots have rights in these situations. More likely, AI models wouldn’t appear in court as standalone “persons.” Advocates would route them through existing doctrines: a human “next friend” filing suit on the model’s behalf, a trust or corporation created to house the model’s interests, or First Amendment claims framed around the model’s “expressive output.” The strategy mirrors animal-rights and natural-object personhood test cases—using human plaintiffs to smuggle in judicial language treating the AI as the real party in interest. None of it would win today, but the goal would be shaping norms and seeding dicta that normalize AI-as-plaintiff for future expansion.

The whole debate over “machine-created portions” is a doctrinal distraction. Under U.S. law, AI has zero authorship or ownership—no standing, no personhood, no claim. The human creator (or employer) already holds 100% of the copyright in all protectable expression. Treating the “machine’s share” as a meaningful category smuggles in the idea that the model has a separable creative interest, softening the boundary for future arguments about AI agency or authorship. In reality, machine output is a legal nullity—no different from noise, weather, or a random number generator. The rights vest entirely in humans, with no remainder left for the machine.

But let me remind you that if this issue came up in a lawsuit brought by the DOJ against a state for impeding AI development in some rather abstract way, like forcing an AI lab to pay the higher electric rates it causes or stopping it from building a nuclear reactor over yonder way, it sure might feel like the AI was actually the plaintiff.

Seen this way, the Trump AI EO’s litigation directive is not simply a jurisdictional adjustment—it is the alignment of federal enforcement power with private economic interests, backed by the threat of federal lawsuits against states.  If the courts refuse to play along, the question becomes whether the system adapts by respecting constitutional limits—or redesigning the environment so those limits no longer apply. I will leave to your imagination how that might get done.

This deserves close scrutiny before it becomes the template for AI governance moving forward.

DOJ Authority and the “Because China” Trump AI Executive Order

When an Executive Order purports to empower the Department of Justice to sue states, the stakes go well beyond routine federal–state friction. In the draft Trump AI Executive Order, “Eliminating State Law Obstruction of National AI Policy,” DOJ is directed to challenge state AI laws that purportedly “interfere with national AI innovation,” whatever that means. It sounds an awful lot like laws that interfere with Google’s business model. This is not mere oversight—it operates as an in terrorem clause, signaling that states regulating AI may face federal litigation driven at least as much by the private interests of the richest corporations in commercial history as by public policy.

AI regulation sits squarely in longstanding state police powers: consumer protection, public safety, impersonation harms, utilities, land use, and labor conditions. Crucially, states also control the electrical and zoning infrastructure that AI data centers depend on—like, say, putting a private nuclear reactor next to your house. Directing DOJ to attack these laws effectively deputizes the federal government as the legal enforcer for a handful of private AI companies seeking unbridled “growth” without engaging in the legislative process. Meaning you don’t get a vote. All this against the backdrop of one of the biggest economic bubbles since the last time these companies nearly tanked the U.S. economy.

This inversion is constitutionally significant. 

Historically, DOJ sues states to vindicate federal rights or enforce federal statutes—not to advance the commercial preferences of private industries. Here, the EO appears to convert DOJ into a litigation shield for private companies looking to avoid state oversight altogether. Under Youngstown Sheet & Tube Co. v. Sawyer, 343 U.S. 579 (1952), the President lacks authority to create new enforcement powers without congressional delegation, and under the major questions doctrine (West Virginia v. EPA), a sweeping reallocation of regulatory power requires explicit statutory grounding from Congress, including the Senate. That would be the Senate that resoundingly stripped the last version of the AI moratorium from the One Big Beautiful Bill Act by a vote of 99-1 against.

There are also First Amendment implications. Many state AI laws address synthetic impersonation, deceptive outputs, and risks associated with algorithmic distribution. If DOJ preempts these laws, the speech environment becomes shaped not by public debate or state protections but by executive preference and the operational needs of the largest AI platforms. Courts have repeatedly warned that government cannot structure the speech ecosystem indirectly through private intermediaries (Bantam Books v. Sullivan).

Seen this way, the Trump AI EO’s litigation directive is not simply a jurisdictional adjustment—it is the alignment of federal enforcement power with private economic interests, backed by the threat of federal lawsuits against states. These provisions warrant careful scrutiny before they become the blueprint for AI governance moving forward.

Too Dynamic to Question, Too Dangerous to Ignore

When Ed Newton-Rex left Stability AI, he didn’t just make a career move — he issued a warning. His message was simple: we’ve built an industry that moves too fast to be honest.

AI’s defenders insist that regulation can’t keep up, that oversight will “stifle innovation.” But that speed isn’t a by-product; it’s the business model. The system is engineered for planned obsolescence of accountability — every time the public begins to understand one layer of technology, another version ships, invalidating the debate. The goal isn’t progress; it’s perpetual synthetic novelty, where nothing stays still long enough to be measured or governed, and “nothing says freedom like getting away with it.”

We’ve seen this play before. Car makers built expensive sensors we don’t want that fail on schedule; software platforms built policies that expire the moment they bite. In both cases, complexity became a shield and a racket—“too dynamic to question.” And yet, like those unasked-for but paid-for features in the cars we don’t want, AI’s design choices are too dangerous to ignore. (What if your brakes really are going out, and it’s not just the sensor malfunctioning?)

Ed Newton-Rex’s point — echoed in his tweets and testimony — is that the industry has mistaken velocity for virtue. He’s right. The danger is not that these systems evolve too quickly to regulate; it’s that they’re designed that way, designed to fail just like that brake sensor. And until lawmakers recognize that speed itself is a form of governance, we’ll keep mistaking momentum for inevitability.

SB 683: California’s Quiet Rejection of the DMCA—and a Roadmap for Real AI Accountability

When Lucian Grainge drew a bright line—“UMG will not do business with bad actors regardless of the consequences”—he did more than make a corporate policy statement.  He threw down a moral challenge to an entire industry: choose creators or choose exploitation.

California’s recently passed SB 683 does not shout as loudly, but it answers the same call. By refusing to copy Washington’s bureaucratic NO FAKES Act and its DMCA-style “notice-and-takedown” maze, SB 683 quietly re-asserts a lost principle: rights are vindicated through courts and accountability, not compliance portals.

What SB 683 actually does

SB 683 amends California Civil Code § 3344, the state’s right-of-publicity statute for living persons, to make injunctive relief real and fast.  If someone’s name, voice, or likeness is exploited without consent, a court can now issue a temporary restraining order or preliminary injunction.  If the order is granted without notice, the defendant must comply within two business days.  

That sounds procedural—and it is—but it matters. SB 683 replaces “send an email to a platform” with “go to a judge.” It converts moral outrage into enforceable law.

The deeper signal: a break from the DMCA’s bureaucracy

For twenty-seven years, the Digital Millennium Copyright Act (DMCA) has governed online infringement through a privatized system of takedown notices, counter-notices, and platform safe harbors. When it was passed, Silicon Valley came alive with free-riding schemes to get around copyright infringement, schemes that beat a path to Grokster’s door.

But the DMCA was built for a dial-up internet and has aged about as gracefully as a boil on a cow’s butt.

The Copyright Office’s 2020 Section 512 Study concluded that whatever Solomonic balance Congress thought it was striking has completely collapsed:

“[T]he volume of notices demonstrates that the notice-and-takedown system does not effectively remove infringing content from the internet; it is, at best, a game of whack-a-mole.”

“Congress’ original intended balance has been tilted askew.”  

“Rightsholders report notice-and-takedown is burdensome and ineffective.”  

“Judicial interpretations have wrenched the process out of alignment with Congress’ intentions.” 
 
“Rising notice volume can only indicate that the system is not working.”  

Unsurprisingly, the Office concluded that “Roughly speaking, many OSPs spoke of section 512 as being a success, enabling them to [free ride and] grow exponentially and serve the public without facing debilitating lawsuits [or one might say, paying the freight]. Rightsholders reported a markedly different perspective, noting grave concerns with the ability of individual creators to meaningfully use the section 512 system to address copyright infringement and the “whack-a-mole” problem of infringing content re-appearing after being taken down. Based upon its own analysis of the present effectiveness of section 512, the Office has concluded that Congress’ original intended balance has been tilted askew.”

Which is a genteel way of saying the DMCA is an abject failure for creators and halcyon days for venture-backed online service providers. So why would anyone who cared about creators want to continue that absurd process?

SB 683 flips that logic. Instead of creating bureaucracy and rewarding the one who can wait out the last notice standing, it demands obedience to law.  Instead of deferring to internal “trust and safety” departments, it puts a judge back in the loop. That’s a cultural and legal break—a small step, but in the right direction.

The NO FAKES Act: déjà vu all over again

Washington’s proposed NO FAKES Act is designed to protect individuals from AI-generated digital replicas, which is great. However, NO FAKES recreates the truly awful DMCA’s failed architecture: a federal registry of “designated agents,” a complex notice-and-takedown workflow, and a new safe-harbor regime based on “good-faith compliance.” You know, notice and notice and notice and notice and notice and notice and…

If NO FAKES passes, platforms like Google would again hold all the procedural cards: largely ignore notices until they’re convenient, claim “good faith,” and continue monetizing AI-generated impersonations. In other words, it gives the platforms exactly what they wanted, because delay is the point. I seriously doubt that the Congress of 1998 thought its precious DMCA would be turned into a not-so-funny joke on artists, and I do remember Congressman Howard Berman (one of the House managers for the DMCA) looking like he was going to throw up during the SOPA hearings when he found out how many millions of DMCA notices YouTube alone receives. So why would we want to make the same mistake again, thinking we’ll have a different outcome? With the same platforms now richer beyond category? Who could possibly defend such garbage as anything but a colossal mistake?

The approach of SB 683 is, by contrast, the opposite of NO FAKES. It tells creators: you don’t need to find the right form—you need to find a judge. It tells platforms: if a court says take it down, you have two days, not two months of emails, BS counter-notices, and a bad case of learned helplessness. True, litigation is more costly than sending a DMCA notice, but litigation is far more likely to be effective in keeping infringing material down and will not become a faux “license” like the DMCA has become.

The DMCA heralded twenty-seven years of normalizing massive and burdensome copyright infringement and raising generations of lawyers to defend the thievery while Big Tech scooped up free-rider rents that they then used for anti-creator lobbying around the world. It should be entirely unsurprising that all of that litigation and lobbying has led us to the current existential crisis.

Lucian Grainge’s throw-down and the emerging fault line

When Mr. Grainge spoke, he wasn’t just defending Universal’s catalog; he was drawing a perimeter against the normalization of AI exploitation, and refusing to buy into an even further extension of “permissionless innovation.”

Universal’s position aligns with what California just did. While Congress toys with a federal opt-out regime for AI impersonations, Sacramento quietly passed a law grounded in judicial enforcement and personal rights.  It’s not perfect, but it’s a rejection of the “catch me if you can” ethos that has defined Silicon Valley’s relationship with artists for decades.

A job for the Attorney General

SB 683 leaves enforcement to private litigants, but the scale of AI exploitation demands public enforcement under the authority of the State.  California’s Attorney General should have explicit power to pursue pattern-or-practice actions against companies that:

– Manufacture or distribute AI-generated impersonations of deceased performers (like Sora 2’s synthetic videos).
– Monetize those impersonations through advertising or subscription revenue (like YouTube does right now with the Sora videos).
– Repackage deepfake content as “user-generated” to avoid responsibility.

Such conduct isn’t innovation—it’s unfair competition under California law. AG actions could deliver injunctions, penalties, and restitution far faster than piecemeal suits. And as readers know, I love a good RICO, so let’s put out there that the AG should consider prosecuting the AI cabal with its interlocking investments under Penal Code §§ 186–186.8, known as the California Control of Profits of Organized Crime Act (CCPOCA) (h/t Seeking Alpha).

While AI platforms complain of “burdensome” and “unproductive” litigation, that’s simply not true of enterprises like the AI cabal—litigation is exactly what was required in order to reveal the truth about massive piracy powering the circular AI bubble economy. Litigation has revealed that the scale of infringement by AI platforms like Anthropic and Meta is so vast that private damages are meaningless. It is increasingly clear these companies are not alone—they have relied on pirate libraries and torrent ecosystems to ingest millions of works across every creative category. Rather than whistle past the graveyard while these sites flourish, government must confront its failure to enforce basic property rights. When theft becomes systemic, private remedies collapse, and enforcement becomes a matter for the state. Even Anthropic’s $1.5 billion settlement feels hollow because the crime is so immense, and not just because US statutory damages were last calibrated in 1999 to confront…CD ripping.

AI regulation as the moment to fix the DMCA

The coming wave of AI legislation represents the first genuine opportunity in a generation to rewrite the online liability playbook.  AI and the DMCA cannot peacefully coexist—platforms will always choose whichever regime helps them keep the money.

If AI regulation inherits the DMCA’s safe harbors, nothing changes. Instead, lawmakers should take the SB 683 cue:
– Restore judicial enforcement.  
– Tie AI liability to commercial benefit. 
– Require provenance, not paperwork.  
– Authorize public enforcement.

The living–deceased gap: California’s unfinished business

SB 683 improves enforcement for living persons, but California’s § 3344.1 already protects deceased individuals against digital replicas. That creates an odd inversion: John Coltrane’s estate can challenge an AI-generated “Coltrane tone,” but a living jazz artist cannot. The Legislature should align the two statutes so the living and the dead share the same digital dignity.

Why this matters now

Platforms like YouTube host and monetize videos generated by AI systems such as Sora, depicting deceased performers in fake performances. If regulators continue to rely on notice-and-takedown, those platforms will never face real risk. They’ll simply process the takedown, re-serve the content through another channel, and cash another check.

The philosophical pivot

The DMCA taught the world that process can replace principle. SB 683 quietly reverses that lesson.  It says: a person’s identity is not an API, and enforcement should not depend on how quickly you fill out a form.

In the coming fight over AI and creative rights, that distinction matters. California’s experiment in court-centered enforcement could become the model for the next generation of digital law—where substance defeats procedure, and accountability outlives automation.

SB 683 is not a revolution, but it’s a reorientation. It abandons the DMCA’s failed paperwork culture and points toward a world where AI accountability and creator rights converge under the rule of law.

If the federal government insists on doubling down with the NO FAKES Act’s national “opt-out” registry, California may once again find itself leading by quiet example: rights first, bureaucracy last.

Shilling Like It’s 1999: Ars, Anthropic, and the Internet of Other People’s Things

Ars Technica just ran a piece headlined “AI industry horrified to face largest copyright class action ever certified.”

It’s the usual breathless “innovation under siege” framing—complete with quotes from “public interest” groups that, if you check the Google Shill List submitted to Judge Alsup in the Oracle case and Public Citizen’s Mission Creep-y, have long been in the paid service of Big Tech. Judge Alsup…hmmm…isn’t he the judge in the very Anthropic case that Ars is going on about?

Here’s what Ars left out: most of these so-called advocacy outfits—EFF, Public Knowledge, CCIA, and their cousins—have been doing Google’s bidding for years, rebranding corporate priorities as public interest. It’s an old play: weaponize the credibility of “independent” voices to protect your bottom line.

The article parrots the industry’s favorite excuse: proving copyright ownership is too hard, so these lawsuits are bound to fail. That line would be laughable if it weren’t so tired; it’s like elder abuse. We live in the age of AI deduplication, manifest checking, and robust content hashing—technologies the AI companies themselves use daily to clean, track, and optimize their training datasets. If they can identify and strip duplicates to improve model efficiency, they can identify and track copyrighted works. What they mean is: “We’d rather not, because it would expose the scale of our free-riding.”
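To see how thin the “too hard” excuse is, here is a minimal sketch of content-hash deduplication—the same primitive the labs already run over training corpora. Everything in it (the directory name, the choice of SHA-256) is an illustrative assumption on my part, not a description of any particular company’s pipeline.

```python
# Minimal sketch of content-hash deduplication over a training corpus.
# Illustrative assumptions throughout: the directory layout and SHA-256
# are my choices, not any lab's documented pipeline.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so huge corpora never need to fit in RAM."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(corpus_dir: str) -> dict[str, list[str]]:
    """Map each content hash to every file in the corpus that carries it."""
    manifest: dict[str, list[str]] = {}
    for path in Path(corpus_dir).rglob("*"):
        if path.is_file():
            manifest.setdefault(sha256_of(path), []).append(str(path))
    return manifest

if __name__ == "__main__":
    manifest = build_manifest("training_corpus")  # hypothetical directory name
    dupes = {h: files for h, files in manifest.items() if len(files) > 1}
    print(f"{len(manifest)} unique items, {len(dupes)} duplicated hashes")
```

The point of the sketch: the exact same manifest that strips duplicates for model efficiency can be joined against a registry of hashes of known copyrighted works. If you can build one index, you can build the other.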

That’s the unspoken truth behind these lawsuits. They’re not about “stifling innovation.” They’re about holding accountable an industry that’s built its fortunes on what can only be called the Internet of Other People’s Things—a business model where your creative output, your data, and your identity are raw material for someone else’s product, without permission, payment, or even acknowledgment.

Instead of cross-examining these corporate talking points like you know…journalists…Ars lets them pass unchallenged, turning what could have been a watershed moment for transparency into a PR assist. That’s not journalism—it’s message laundering.

The lawsuit doesn’t threaten the future of AI. It threatens the profitability of a handful of massive labs—many backed by the same investors and platforms that bankroll these “public interest” mouthpieces. If the case succeeds, it could force AI companies to abandon the Internet of Other People’s Things and start building the old-fashioned way: by paying for what they use.

Come on, Ars. Do we really have to go through this again? If you’re going to quote industry-adjacent lobbyists as if they were neutral experts, at least tell readers who’s paying the bills. Otherwise, it’s just shilling like it’s 1999.

AI’s Manhattan Project Rhetoric, Clearance-Free Reality

Every time a tech CEO compares frontier AI to the Manhattan Project, take a breath—and remember what that actually means. Master spycatcher James Jesus Angleton is rolling in his grave (aka Matt Damon in The Good Shepherd). And like most elevator pitch talking points, that analogy starts to fall apart on inspection.

The Manhattan Project wasn’t just a moonshot scientific collaboration. It was the most tightly controlled, security-obsessed R&D operation in American history. Every physicist, engineer, and janitor involved had a federal security clearance. Facilities were locked down under the military command of General Leslie Groves. Communications were monitored. Access was compartmentalized. And still—still—the Soviets penetrated it. See Klaus Fuchs. Let’s understand just how secret the Manhattan Project was: General Curtis LeMay had no idea it was happening until he was asked to set up facilities for the Enola Gay on his bomber base on Tinian a few months before the first atomic bomb was dropped. You want to find out the details of any frontier lab? Just pick up the newspaper. Not nearly the same thing. There were no chatbots involved, and there were no Special Government Employees with no security clearance.

Oppie Sacks

So when today’s AI executives name-drop Oppenheimer and invoke the gravity of dual-use technologies, what exactly are they suggesting? That we’re building world-altering capabilities without any of the safeguards that even the AI Whiz Kids implicitly admit are historically necessary every time the Manhattan Project talking point shows up in the pitch deck?

These frontier labs aren’t locked down. They’re open-plan. They’re not vetting personnel. They’re recruiting from Discord servers. They’re not working in classified environments. They’re training military-civilian dual-use models on consumer cloud platforms. And when questioned, they invoke private sector privilege and push back against any suggestion of state or federal regulation. And here’s a newsflash—requiring a security clearance for scientific work in the vital national interest is not regulation. (Neither is copyright, but that’s another story.)

Meanwhile, they’re angling for access to Department of Energy nuclear real estate, government compute subsidies, and preferred status in export policy—all under the justification of “national security” because, you know, China.  They want the symbolism of the Manhattan Project without the substance. They want to be seen as indispensable without being held accountable.

The truth is that AI is dual-use. It can power logistics and surveillance, language learning and warfare. That’s not theoretical—it’s already happening. China openly treats AI as part of its military-civil fusion strategy. Russia has targeted U.S. systems with information warfare bots. And our labs? They’re scraping from the open internet and assuming the training data hasn’t been poisoned with the massive misinformation campaigns on Wikipedia, Reddit and X that are routine.

If even the Manhattan Project—run under maximum secrecy—was infiltrated by Soviet spies, what are the chances that today’s AI labs, operating in the wide open, are immune? Wouldn’t a good spycatcher like Angleton assume these wunderkinds have already been penetrated?

We have no standard vetting for employees. No security clearances. No model release controls. No audit trail for pretraining data integrity. And no clear protocol for foreign access to model weights, inference APIs, or sensitive safety infrastructure. It’s not a matter of if. It’s a matter of when—or more likely, a matter of already.
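As a thought experiment, here is what even a bare-bones “audit trail for pretraining data integrity” could look like: a hash-chained manifest where every dataset snapshot commits to the record before it. The field names and structure are purely my illustrative assumptions—per the paragraph above, nothing like this is standard practice today.

```python
# Sketch of a tamper-evident audit trail for training data: each record
# chains the hash of the previous record, so retroactive edits are
# detectable. Purely illustrative; not any lab's actual system.
import hashlib
import json
import time

def _record_hash(record: dict) -> str:
    # Deterministic serialization so the same record always hashes the same.
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def append_record(log: list[dict], dataset_sha256: str, source: str) -> list[dict]:
    """Append one dataset snapshot to the chained log and return the new log."""
    prev = log[-1]["record_hash"] if log else "0" * 64
    record = {
        "timestamp": time.time(),
        "dataset_sha256": dataset_sha256,  # hash of the data snapshot itself
        "source": source,                  # e.g. "licensed feed" vs. "web crawl"
        "prev_hash": prev,                 # commitment to the prior record
    }
    record["record_hash"] = _record_hash(record)
    return log + [record]

def verify(log: list[dict]) -> bool:
    """Recompute the whole chain; any after-the-fact tampering breaks it."""
    prev = "0" * 64
    for rec in log:
        body = {k: v for k, v in rec.items() if k != "record_hash"}
        if rec["prev_hash"] != prev or _record_hash(body) != rec["record_hash"]:
            return False
        prev = rec["record_hash"]
    return True
```

The chaining is the design choice that matters: because every record commits to its predecessor, an auditor (or a court) can detect after-the-fact edits to the training-data record. That this is a weekend project, not a moonshot, is rather the point.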

Remember: nobody got rich working on the Manhattan Project. That’s another big difference. These guys are in it for the money, make no mistake.

So when you hear the Manhattan Project invoked again, ask the follow-up question: Where’s the security clearance?  Where’s the classification?  Where’s the real protection?  Who’s playing the role of Klaus Fuchs?

Because if AI is our new Manhattan Project, then running it without security is more than hypocrisy. It’s incompetence at scale.

AI Frontier Labs and the Singularity as a Modern Prophetic Cult

It gets rid of your gambling debts 
It quits smoking 
It’s a friend, it’s a companion 
It’s the only product you will ever need
From Step Right Up, written by Tom Waits

The AI “frontier labs” — OpenAI, Anthropic, DeepMind, xAI, and their constellation of evangelists — often present themselves as the high priests of a coming digital transcendence. This is sometimes called “the singularity” which refers to a hypothetical future point when artificial intelligence surpasses human intelligence, triggering rapid, unpredictable technological growth. Often associated with self-improving AI, it implies a transformation of society, consciousness, and control, where human decision-making may be outpaced or rendered obsolete by machines operating beyond our comprehension. 

But viewed through the lens of social psychology, the AI evangelists increasingly resemble a cognitive dissonance cult, as famously documented in Dr. Leon Festinger and his team’s important study of a UFO cult (à la Heaven’s Gate), When Prophecy Fails. (See also The Great Disappointment.)

In that foundational social psychology study, a group of believers centered around a woman named “Marian Keech” predicted the world would end in a cataclysmic flood, only to be rescued by alien beings—but when the prophecy failed, they doubled down. Rather than abandoning their beliefs, the group rationalized the outcome (“We were spared because of our faith”) and became even more committed. They get this self-hypnotized look, kind of like this guy (and remember, this is what the Meta marketing people thought was the flagship spot for Meta’s entire superintelligence hustle).

This same psychosis permeates Singularity narratives and the AI doom/alignment discourse:
– The world is about to end — not by water, but by unaligned superintelligence.
– A chosen few (frontier labs) hold the secret knowledge to prevent this.
– The public must trust them to build, contain, and govern the very thing they fear.
– And if the predicted catastrophe doesn’t come, they’ll say it was their vigilance that saved us.

Like cultic prophecy, the Singularity promises transformation:
– Total liberation or annihilation (including liberation from annihilation by the Red Menace, i.e., the Chinese Communist Party).
– A timeline (“AGI by 2027”, “everything will change in 18 months”).
– An elite in-group with special knowledge and “Don’t be evil” moral responsibility.
– A strict hierarchy of belief and loyalty — criticism is heresy, delay is betrayal.

This serves multiple purposes:
1. Maintains funding and prestige by positioning the labs as indispensable moral actors.
2. Deflects criticism of copyright infringement, resource consumption, or labor abuse with existential urgency (because China, don’t you know).
3. Converts external threats (like regulation) into internal persecution, reinforcing group solidarity.

The rhetoric of “you don’t understand how serious this is” mirrors cult defenses exactly.

Here’s the rub: the timeline keeps slipping. Every six months, we’re told the leap to “godlike AI” is imminent. GPT‑4 was supposed to upend everything. That didn’t happen, so GPT‑5 will do it for real. Gemini flopped, but Claude 3 might still be the one.

When prophecy fails, they don’t admit error — they revise the story:
– “AI keeps accelerating”
– “It’s a slow takeoff, not a fast one.”
– “We stopped the bad outcomes by acting early.”
– “The doom is still coming — just not yet.”

Leon Festinger’s theories from When Prophecy Fails, especially cognitive dissonance and social comparison, influence AI by shaping how systems model human behavior, resolve conflicting inputs, and simulate decision-making. His work guides developers of interactive agents, recommender systems, and behavioral algorithms that aim to mimic or respond to human inconsistencies, biases, and belief formation. So this isn’t a casual connection.

As with Festinger’s study, the failure of predictions intensifies belief rather than weakening it. And the deeper the believer’s personal investment, the harder it is to turn back. For many AI cultists, this includes financial incentives, status, and identity.

Unlike spiritual cults, AI frontier labs have material outcomes tied to their prophecy:
– Federal land allocations, as we’ve seen with DOE site handovers.
– Regulatory exemptions, by presenting themselves as saviors.
– Massive capital investment, driven by the promise of world-changing returns.

In the case of AI, this is not just belief — it’s belief weaponized to secure public assets, shape global policy, and monopolize technological futures. And when the same people build the bomb, sell the bunker, and write the evacuation plan, it’s not spiritual salvation — it’s capture.

The pressure to sustain the AI prophecy—that artificial intelligence will revolutionize everything—is unprecedented because the financial stakes are enormous. Trillions of dollars in market valuation, venture capital, and government subsidies now hinge on belief in AI’s inevitable dominance. Unlike past tech booms, today’s AI narrative is not just speculative; it is embedded in infrastructure planning, defense strategy, and global trade. This creates systemic incentives to ignore risks, downplay limitations, and dismiss ethical concerns. To question the prophecy is to threaten entire business models and geopolitical agendas. As with any ideology backed by capital, maintaining belief becomes more important than truth.

The Singularity, as sold by the frontier labs, is not just a future hypothesis — it’s a living ideology. And like the apocalyptic cults before them, these institutions demand public faith, offer no accountability, and position themselves as both priesthood and god.

If we want a secular, democratic future for AI, we must stop treating these frontier labs as prophets — and start treating them as power centers subject to scrutiny, not salvation.