Back to Commandeering Again: David Sacks, the AI Moratorium, and the Executive Order Courts Will Hate

Why Silicon Valley’s in-network defenses can’t paper over federalism limits.

The old line attributed to music lawyer Allen Grubman is, “No conflict, no interest.” Conflicts are part of the music business. But the AI moratorium that David Sacks is pushing onto President Trump (the idea that Washington should freeze or preempt state AI protections in the absence of federal AI policy) takes that logic to a different altitude. It asks the public to accept not just conflicts of interest, but centralized control of AI governance built around the financial interests of a small advisory circle, including Mr. Sacks himself.

When the New York Times published its reporting on Sacks’s hundreds of AI investments and his role in shaping federal AI and chip policy, the reaction from Silicon Valley was immediate and predictable. What’s most notable is who didn’t show up. No broad political coalition. No bipartisan defense. Just a tight cluster of VC and AI-industry figures from the AI–crypto–tech nexus, praising their friend Mr. Sacks and attacking the story.

And the pattern was unmistakable: a series of non-denial denials from people who, it is fair to say, are massively conflicted themselves.

No one said the Times lied.

No one refuted the documented conflicts.

Instead, Sacks’s tech-bro defenders attacked the story’s tone, implied bias, and suggested the article merely arranged “negative truths” into an unflattering narrative (even though the Times did not so much as mention Mr. Sacks’s moratorium scheme).

And you know who has yet to defend Mr. Sacks? Donald J. Trump. Which tells you all you need to know.

The Rumored AI Executive Order and Federal Lawsuits Against States

Behind the spectacle sits the most consequential part of the story: a rumored executive order that would direct the U.S. Department of Justice to sue states whose laws “interfere with AI development.” Reuters reports that “U.S. President Donald Trump is considering an executive order that would seek to preempt state laws on artificial intelligence through lawsuits and by withholding federal funding, according to a draft of the order seen by Reuters….”

That is not standard economic policy. That is not innovation strategy. That is commandeering — the same old unconstitutional move in shiny AI packaging that we’ve discussed many times starting with the One Big Beautiful Bill Act catastrophe.

The Supreme Court has been clear on this, as in Printz v. United States (521 U.S. 898, 925 (1997)): “[O]pinions of ours have made clear that the Federal Government may not compel the States to implement, by legislation or executive action, federal regulatory programs.”

Crucially, the Printz Court teaches what I think is the key point: federal policy for the whole United States is to be made through the legislative process in regular order, subject to a vote of the people’s representatives, or by executive branch agencies led by Senate-confirmed officers of the United States appointed by the President and subject to public scrutiny under the Administrative Procedure Act. Period.

The federal government then implements its own policies directly. It cannot order states to implement federal policy, including in the negative by prohibiting states from exercising their Constitutional powers in the absence of federal policy. The Supreme Court crystallized this issue in the recent congressional commandeering case Murphy v. NCAA (138 S. Ct. 1461 (2018)), where the Court held that “[t]he distinction between compelling a State to enact legislation and prohibiting a State from enacting new laws is an empty one. The basic principle—that Congress cannot issue direct orders to state legislatures—applies in either event.” Read together, Printz and Murphy extend this core principle of federalism to executive orders.

The “presumption against preemption” is a canon of statutory interpretation that the Supreme Court has repeatedly held to be a foundational principle of American federalism. It also has the benefit of common sense. The canon reflects the deep Constitutional understanding that, unless Congress clearly says otherwise—which implies Congress has spoken—states retain their traditional police powers over matters such as the health, safety, land use, consumer protection, labor, and property rights of their citizens. Courts begin with the assumption that federal law does not displace state law, especially in areas the states have regulated for generations, all of which are implicated in the AI “moratorium”.

The Supreme Court has repeatedly affirmed this principle. When Congress legislates in fields historically occupied by the states, courts require a clear and manifest purpose to preempt state authority. Ambiguous statutory language is interpreted against preemption. This is not a policy preference—it is a rule of interpretation rooted in constitutional structure and respect for state sovereignty that goes back to the Founders.

The presumption is strongest where federal action would displace general state laws rather than conflict with a specific federal command. Consumer protection statutes, zoning and land-use controls, tort law, data privacy, and child-safety laws fall squarely within this protected zone. Federal silence is not enough; nor is agency guidance or executive preference.

In practice, the presumption against preemption forces Congress to own the consequences of preemption. If lawmakers intend to strip states of enforcement authority, they must do so plainly and take political responsibility for that choice. This doctrine serves as a crucial brake on back-door federalization, preventing hidden preemption in technical provisions and preserving the ability of states to respond to emerging harms when federal action lags or stalls. Like in A.I.

Applied to an A.I. moratorium, the presumption against preemption cuts sharply against federal action. A moratorium that blocks states from legislating even where Congress has chosen not to act flips federalism on its head—turning federal inaction into total regulatory paralysis, precisely what the presumption against preemption forbids.

As the Congressional Research Service primer on preemption concludes:

The Constitution’s Supremacy Clause provides that federal law is “the supreme Law of the Land” notwithstanding any state law to the contrary. This language is the foundation for the doctrine of federal preemption, according to which federal law supersedes conflicting state laws. The Supreme Court has identified two general ways in which federal law can preempt state law. First, federal law can expressly preempt state law when a federal statute or regulation contains explicit preemptive language. Second, federal law can impliedly preempt state law when Congress’s preemptive intent is implicit in the relevant federal law’s structure and purpose.

In both express and implied preemption cases, the Supreme Court has made clear that Congress’s purpose is the “ultimate touchstone” of its statutory analysis. In analyzing congressional purpose, the Court has at times applied a canon of statutory construction known as the “presumption against preemption,” which instructs that federal law should not be read as superseding states’ historic police powers “unless that was the clear and manifest purpose of Congress.”

If there is no federal statute, no one has any idea what that purpose is, certainly no justiciable idea. Therefore, my bet is that the Court would hold that the Executive Branch cannot unilaterally create preemption, and neither can the DOJ sue states simply because the White House dislikes their AI, privacy, or biometric laws, much less their zoning laws applied to data centers.

Why David Sacks’s Involvement Raises the Political Temperature

As F. Scott Fitzgerald famously wrote, the very rich are different. But here’s what’s not different—David Sacks has something he’s not used to having. A boss. And that boss has polls. And those polls are not great at the moment. It’s pretty simple, really. When you work for a politician, your job is to make sure his polls go up, not down.

David Sacks is making his boss look bad. Presidents do not relish waking up to front-page stories that suggest their “A.I. czar” holds hundreds of investments directly affected by federal A.I. strategy, that major policy proposals track industry wish lists more closely than public safeguards, or that rumored executive orders could ignite fifty-state constitutional litigation led by the President’s own supporters like Mike Davis and egged on by people like Steve Bannon.

Those stories don’t just land on the advisor; they land on the President’s desk, framed as questions of his judgment, control, and competence. And in politics, loyalty has a shelf life. The moment an advisor stops being an asset and starts becoming a daily distraction, much less a liability, the calculus changes fast. What matters then is not mansions, brilliance, ideology, or past service, but whether keeping that adviser costs more than cutting them loose. I give you Elon Musk.

AI Policy Cannot Be Built on Preemption-by-Advisor

At bottom, this is a bet. The question isn’t whether David Sacks is smart, well-connected, or persuasive inside the room. The real question is whether Donald Trump wants to stake his presidency on David Sacks being right—right about constitutional preemption, right about executive authority, right about federal power to block the states, and right about how courts will react.

Because if Sacks is wrong, the fallout doesn’t land on him. It lands on the President. A collapsed A.I. moratorium, fifty-state litigation, injunctions halting executive action, and judges citing basic federalism principles would all be framed as defeats for Trump, not for an advisor operating at arm’s length.

Betting the presidency on an untested legal theory pushed by a politically exposed “no conflict, no interest” tech investor isn’t bold leadership. It’s unnecessary risk. When Trump’s second term is over in a few years, Trump will be in the history books for all time. No one will remember who David Sacks was.

Too Dynamic to Question, Too Dangerous to Ignore

When Ed Newton-Rex left Stability AI, he didn’t just make a career move — he issued a warning. His message was simple: we’ve built an industry that moves too fast to be honest.

AI’s defenders insist that regulation can’t keep up, that oversight will “stifle innovation.” But that speed isn’t a by-product; it’s the business model. The system is engineered for planned obsolescence of accountability — every time the public begins to understand one layer of technology, another version ships, invalidating the debate. The goal isn’t progress; it’s perpetual synthetic novelty, where nothing stays still long enough to be measured or governed, and “nothing says freedom like getting away with it.”

We’ve seen this play before. Car makers built expensive sensors we don’t want that fail on schedule; software platforms built policies that expire the moment they bite. In both cases, complexity became a shield and a racket—“too dynamic to question.” And yet, like those unasked-for, but paid-for, features in the cars we don’t want, AI’s design choices are too dangerous to ignore. (What if your brakes really are going out, and it’s not just the sensor malfunctioning?)

Ed Newton-Rex’s point—echoed in his tweets and testimony—is that the industry has mistaken velocity for virtue. He’s right. The danger is not that these systems evolve too quickly to regulate; it’s that they’re designed that way, built to fail just like that brake sensor. And until lawmakers recognize that speed itself is a form of governance, we’ll keep mistaking momentum for inevitability.

AI Frontier Labs and the Singularity as a Modern Prophetic Cult

It gets rid of your gambling debts 
It quits smoking 
It’s a friend, it’s a companion 
It’s the only product you will ever need
From Step Right Up, written by Tom Waits

The AI “frontier labs” — OpenAI, Anthropic, DeepMind, xAI, and their constellation of evangelists — often present themselves as the high priests of a coming digital transcendence. This is sometimes called “the singularity” which refers to a hypothetical future point when artificial intelligence surpasses human intelligence, triggering rapid, unpredictable technological growth. Often associated with self-improving AI, it implies a transformation of society, consciousness, and control, where human decision-making may be outpaced or rendered obsolete by machines operating beyond our comprehension. 

But viewed through the lens of social psychology, the AI evangelists increasingly resemble the cognitive dissonance cults famously documented in Dr. Leon Festinger and his team’s landmark study of a UFO cult (à la Heaven’s Gate), When Prophecy Fails. (See also The Great Disappointment.)

In that foundational social psychology study, a group of believers centered around a woman named “Marian Keech” predicted the world would end in a cataclysmic flood, only to be rescued by alien beings—but when the prophecy failed, they doubled down. Rather than abandoning their beliefs, the group rationalized the outcome (“We were spared because of our faith”) and became even more committed. They get this self-hypnotized look, kind of like this guy (and remember, this is what the Meta marketing people thought was the flagship spot for Meta’s entire superintelligence hustle):


This same psychosis permeates Singularity narratives and the AI doom/alignment discourse:
– The world is about to end — not by water, but by unaligned superintelligence.
– A chosen few (frontier labs) hold the secret knowledge to prevent this.
– The public must trust them to build, contain, and govern the very thing they fear.
– And if the predicted catastrophe doesn’t come, they’ll say it was their vigilance that saved us.

Like cultic prophecy, the Singularity promises transformation:
– Total liberation or annihilation (including liberation from annihilation by the Red Menace, i.e., the Chinese Communist Party).
– A timeline (“AGI by 2027”, “everything will change in 18 months”).
– An elite in-group with special knowledge and “Don’t be evil” moral responsibility.
– A strict hierarchy of belief and loyalty — criticism is heresy, delay is betrayal.

This serves multiple purposes:
1. Maintains funding and prestige by positioning the labs as indispensable moral actors.
2. Deflects criticism of copyright infringement, resource consumption, or labor abuse with existential urgency (because China, don’t you know).
3. Converts external threats (like regulation) into internal persecution, reinforcing group solidarity.

The rhetoric of “you don’t understand how serious this is” mirrors cult defenses exactly.

Here’s the rub: the timeline keeps slipping. Every six months, we’re told the leap to “godlike AI” is imminent. GPT‑4 was supposed to upend everything. That didn’t happen, so GPT‑5 will do it for real. Gemini flopped, but Claude 3 might still be the one.

When prophecy fails, they don’t admit error — they revise the story:
– “AI keeps accelerating”
– “It’s a slow takeoff, not a fast one.”
– “We stopped the bad outcomes by acting early.”
– “The doom is still coming — just not yet.”

Leon Festinger’s theories, seen in When Prophecy Fails—especially cognitive dissonance and social comparison—influence AI by shaping how systems model human behavior, resolve conflicting inputs, and simulate decision-making. His work guides developers of interactive agents, recommender systems, and behavioral algorithms that aim to mimic or respond to human inconsistencies, biases, and belief formation. So this isn’t a casual connection.

As with Festinger’s study, the failure of predictions intensifies belief rather than weakening it. And the deeper the believer’s personal investment, the harder it is to turn back. For many AI cultists, this includes financial incentives, status, and identity.

Unlike spiritual cults, AI frontier labs have material outcomes tied to their prophecy:
– Federal land allocations, as we’ve seen with DOE site handovers.
– Regulatory exemptions, by presenting themselves as saviors.
– Massive capital investment, driven by the promise of world-changing returns.

In the case of AI, this is not just belief — it’s belief weaponized to secure public assets, shape global policy, and monopolize technological futures. And when the same people build the bomb, sell the bunker, and write the evacuation plan, it’s not spiritual salvation — it’s capture.

The pressure to sustain the AI prophecy—that artificial intelligence will revolutionize everything—is unprecedented because the financial stakes are enormous. Trillions of dollars in market valuation, venture capital, and government subsidies now hinge on belief in AI’s inevitable dominance. Unlike past tech booms, today’s AI narrative is not just speculative; it is embedded in infrastructure planning, defense strategy, and global trade. This creates systemic incentives to ignore risks, downplay limitations, and dismiss ethical concerns. To question the prophecy is to threaten entire business models and geopolitical agendas. As with any ideology backed by capital, maintaining belief becomes more important than truth.

The Singularity, as sold by the frontier labs, is not just a future hypothesis — it’s a living ideology. And like the apocalyptic cults before them, these institutions demand public faith, offer no accountability, and position themselves as both priesthood and god.

If we want a secular, democratic future for AI, we must stop treating these frontier labs as prophets — and start treating them as power centers subject to scrutiny, not salvation.

AI Needs Ever More Electricity—And Google Wants Us to Pay for It

Uncle Sugar’s “National Emergency” Pitch to Congress

At a recent Congressional hearing, former Google CEO Eric “Uncle Sugar” Schmidt delivered a message that was as jingoistic as it was revealing: if America wants to win the AI arms race, it better start building power plants. Fast. But the subtext was even clearer—he expects the taxpayer to foot the bill because, you know, the Chinese Communist Party. Yes, when it comes to fighting the Red Menace, the all-American boys in Silicon Valley will stand ready to fight to the last Ukrainian, or Taiwanese, or even Texan.

Testifying before the House Energy & Commerce Committee on April 9, Schmidt warned that AI’s natural limit isn’t chips—it’s electricity. He projected that the U.S. would need 92 gigawatts of new generation capacity—the equivalent of nearly 100 nuclear reactors—to keep up with AI demand.

Schmidt didn’t propose that Google, OpenAI, Meta, or Microsoft pay for this themselves, just like they didn’t pay for broadband penetration. No, Uncle Sugar pushed for permitting reform, federal subsidies, and government-driven buildouts of new energy infrastructure. In plain English? He wants the public sector to do the hard and expensive work of generating the electricity that Big Tech will profit from.

Will this Improve the Grid?

And let’s not forget: the U.S. electric grid is already dangerously fragile. It’s aging, fragmented, and increasingly vulnerable to cyberattacks, electromagnetic pulse (EMP) weapons, and even extreme weather events. Pouring public money into ultra-centralized AI data infrastructure—without first securing the grid itself—is like building a mansion on a cracked foundation.

If we are going to incur public debt, we should prioritize resilience, distributed energy, grid security, and community-level reliability—not a gold-plated private infrastructure buildout for companies that already have trillion-dollar valuations.

Big Tech’s Growing Appetite—and Private Hoarding

This isn’t just a future problem. The data center buildout is already in full swing, and your Uncle Sugar must be getting nervous about where he’s going to get the money to run his AI and his autonomous drone weapons. In Oregon, where electricity is famously cheap thanks to the Bonneville Power Administration’s hydroelectric dams on the Columbia River, tech companies have quietly snapped up huge portions of the grid’s output. What was once a shared public benefit—affordable, renewable power—is now being monopolized by AI compute farms whose profits leave the region for bank accounts in Silicon Valley.

Meanwhile, Microsoft is investing in a nuclear-powered data center next to the defunct Three Mile Island reactor—but again, it’s not about public benefit. It’s about keeping Azure’s training workloads running 24/7. And don’t expect them to share any of that power capacity with the public—or even with neighboring hospitals, schools, or communities.

Letting the Public Build Private Fortresses

The real play here isn’t just to use public power—it’s to get the public to build the power infrastructure, and then seal it off for proprietary use. Moats work both ways.

That includes:
– Publicly funded transmission lines across hundreds of miles to deliver power to remote server farms;
– Publicly subsidized generation capacity (nuclear, gas, solar, hydro—you name it);
– And potentially, prioritized access to the grid that lets AI workloads run while the rest of us face rolling blackouts during heatwaves.

All while tech giants don’t share their models, don’t open their training data, and don’t make their outputs public goods. It’s a privatized extractive model, powered by your tax dollars.

Been Burning for Decades

Don’t forget: Google and YouTube have already been burning massive amounts of electricity for 20 years. It didn’t start with ChatGPT or Gemini. Serving billions of search queries, video streams, and cloud storage events every day requires a permanent baseload—yet somehow this sudden “AI emergency” is being treated like a surprise, as if nobody saw it coming.

If they knew this was coming (and they did), why didn’t they build the power? Why didn’t they plan for sustainability? Why is the public now being told it’s our job to fix their bottleneck?

The Cold War Analogy—Flipped on Its Head

Some industry advocates argue that breaking up Big Tech or slowing AI infrastructure would be like disarming during a new Cold War with China. But Gail Slater, the Assistant Attorney General leading the DOJ’s Antitrust Division, pushed back forcefully—not at a hearing, but on the War Room podcast.

In that interview, Slater recalled how AT&T tried to frame its 1980s breakup as a national security threat, arguing it would hurt America’s Cold War posture. But the DOJ did it anyway—and it led to an explosion of innovation in wireless technology.

“AT&T said, ‘You can’t do this. We are a national champion. We are critical to this country’s success. We will lose the Cold War if you break up AT&T,’ in so many words. … Even so, [the DOJ] moved forward … America didn’t lose the Cold War, and … from that breakup came a lot of competition and innovation.”

“I learned that in order to compete against China, we need to be in all these global races the American way. And what I mean by that is we’ll never beat China by becoming more like China. China has national champions, they have a controlled economy, et cetera, et cetera.

We win all these races and history has taught by our free market system, by letting the ball rip, by letting companies compete, by innovating one another. And the reason why antitrust matters to that picture, to the free market system is because we’re the cop on the beat at the end of the day. We step in when competition is not working and we ensure that markets remain competitive.”

Slater’s message was clear: regulation and competition enforcement are not threats to national strength—they’re prerequisites to it. So there’s no way that the richest corporations in commercial history should be subsidized by the American taxpayer.

Bottom Line: It’s Public Risk, Private Reward

Let’s be clear:

– They want the public to bear the cost of new electricity generation.
– They want the public to underwrite transmission lines.
– They want the public to streamline regulatory hurdles.
– And they plan to privatize the upside, lock down the infrastructure, keep their models secret and socialize the investment risk.

This isn’t a public-private partnership. It’s a one-way extraction scheme. America needs a serious conversation about energy—but it shouldn’t begin with asking taxpayers to bail out the richest companies in commercial history.

Deduplication and Discovery: The Smoking Gun in the Machine

WINSTON

“Wipe up all those little pieces of brains and skull”

From Pulp Fiction, screenplay by Quentin Tarantino and Roger Avary

Deduplication—the process of removing identical or near-identical content from AI training data—is a critical yet often overlooked indicator that AI platforms actively monitor and curate their training sets. This is the kind of process one would expect given the “scrape, ready, aim” business practices that seem precisely the approach of AI platforms with ready access to large amounts of fairly high-quality data from users of other products placed into commerce by business affiliates or confederates of the AI platforms.

For example, Google Gemini could have access to Gmail, YouTube, at least “publicly available” Google Docs, Google Translate, or Google for Education, and then of course one of the great scams of all time, Google Books. Microsoft uses Bing searches, MSN browsing, the consumer Copilot experience, and ad interactions. Amazon uses Alexa prompts, Facebook uses “public” posts, and so on.

This kind of hoovering up of indiscriminate amounts of “data” (your baby pictures posted on Facebook, your user-generated content on YouTube) is bound to produce duplicates. After all, how many users have posted their favorite Billie Eilish or Taylor Swift music video? An AI platform doesn’t need 10,000 copies of “Shake It Off”; it probably just needs the official video. Enter deduplication, which by definition means the platform knows what it has scraped and also knows what it wants to get rid of.

“Get rid of” is a relative concept. In many systems—particularly in storage environments like backup servers or object stores—deduplication means keeping only one physical copy of a file. Any other instances of that data don’t get stored again; instead, they’re represented by pointers to the original copy. This approach, known as inline deduplication, happens in real time and minimizes storage waste without actually deleting anything of functional value. It requires knowing what you have, knowing you have more than one version of the same thing, and being able to tell the system where to look to find the “original” copy without disturbing the process and burning compute inefficiently.
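The store-once-and-point pattern described above can be sketched in a few lines. This is a toy illustration, not any vendor’s actual system; all class and variable names here are hypothetical:

```python
import hashlib


class DedupStore:
    """Toy inline-deduplicating store: each unique blob of bytes is kept
    exactly once; later writes of identical bytes just record a pointer
    back to the single stored copy."""

    def __init__(self):
        self.blobs = {}     # content hash -> the one physical copy
        self.pointers = {}  # logical name -> content hash

    def put(self, name: str, data: bytes) -> bool:
        """Store data under name; return True only if new bytes were stored."""
        key = hashlib.sha256(data).hexdigest()
        is_new = key not in self.blobs
        if is_new:
            self.blobs[key] = data   # the "original" is stored once
        self.pointers[name] = key    # every name is just a pointer
        return is_new

    def get(self, name: str) -> bytes:
        return self.blobs[self.pointers[name]]


store = DedupStore()
store.put("official_video.mp4", b"<video bytes>")
store.put("fan_upload_0423.mp4", b"<video bytes>")  # identical bytes: no new copy
assert len(store.blobs) == 1 and len(store.pointers) == 2
```

Note what the sketch makes concrete: nothing is deleted, and the system necessarily knows that two names resolve to the same underlying content—which is exactly the knowledge at issue in discovery.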

In other cases, such as post-process deduplication, the system stores data initially, then later scans for and eliminates redundancies. Again, the AI platform knows there are two or more versions of the same thing, say the book Being and Nothingness, knows where to find the copies and has been trained to keep only one version. Even here, the duplicates may not be permanently erased—they might be archived, versioned, or logged for auditing, compliance, or reconstruction purposes.

In AI training contexts, deduplication usually means removing redundant examples from the training set to avoid wasted compute and verbatim memorization (and the copyright exposure that memorization creates). The duplicate content may be discarded from the training pipeline but often isn’t destroyed. Instead, AI companies may retain it in a separate filtered corpus or keep hashed fingerprints to ensure future models don’t retrain on the same material unknowingly.

So they know what they have, and likely know where it came from. They just don’t want to tell any plaintiffs.

Ultimately, deduplication is less about destruction and more about optimization. It’s a way to reduce noise, save resources, and improve performance—while still allowing systems to track, reference, or even rehydrate the original data if needed.

Its existence directly undermines claims that companies are unaware of which copyrighted works were ingested. Indeed, it only makes sense that one of the hidden consequences of the indiscriminate scraping that underpins large-scale AI training is the proliferation of duplicated data. Web crawlers ingest everything they can access—news articles republished across syndicates, forum posts echoed in aggregation sites, Wikipedia mirrors, boilerplate license terms, spammy SEO farms repeating the same language over and over. Without any filtering, this avalanche of redundant content floods the training pipeline.

This is where deduplication becomes not just useful, but essential. It’s the cleanup crew after a massive data land grab. The more messy and indiscriminate the scraping, the more aggressively the model must filter for quality, relevance, and uniqueness to avoid training inefficiencies or—worse—model behaviors that are skewed by repetition. If a model sees the same phrase or opinion thousands of times, it might assume it’s authoritative or universally accepted, even if it’s just a meme bouncing around low-quality content farms.

Deduplication is sort of the Winston Wolf of AI. And if the cleaner shows up, somebody had to order the cleanup. It is a direct response to the excesses of indiscriminate scraping. It’s both a technical fix and a quiet admission that the underlying data collection strategy is, by design, uncontrolled. But while the scraping may be uncontrolled to get copies of as much of your data as they can lay hands on, even by cleverly changing their terms-of-use boilerplate so they can do all this under the effluvia of legality, they send in the cleaner to take care of the crime scene.

So to summarize: To deduplicate, platforms must identify content-level matches (e.g., multiple copies of Being and Nothingness by Jean-Paul Sartre). This process requires tools that compare, fingerprint, or embed full documents—meaning the content is readable and classifiable, and, oh yes, discoverable.

Platforms may choose the “cleanest” copy to keep, showing knowledge and active decision-making about which version of a copyrighted work is retained. And, big finish, removing duplicates only makes sense if operators know which datasets they scraped and what those datasets contain.

Drilling down on a platform’s deduplication tools and practices may prove up knowledge and intent to a precise degree—contradicting arguments of plausible deniability in litigation. “Johnny ate the cookies” isn’t going to fly. There’s a market-clearing level of record-keeping necessary for deduping to work at all, so it’s likely that there are internal deduplication logs or tooling pipelines that are discoverable.

When AI platforms object to discovery about deduplication, plaintiffs can often overcome those objections by narrowing their focus. Rather than requesting broad details about how a model deduplicates its entire training set, plaintiffs should ask a simple, specific question: Were any of these known works—identified by title or author—deduplicated or excluded from training?

This approach avoids objections about overbreadth or burden. It reframes discovery as a factual inquiry, not a technical deep dive. If the platform claims the data was not retained, plaintiffs can ask for existing artifacts—like hash filters, logs, or manifests—or seek a sworn statement explaining the loss and when it occurred. That, in turn, opens the door to potential spoliation arguments.

If trade secrets are cited, plaintiffs can propose a protective order, limiting access to outside counsel or experts like we’ve done 100,000 times before in other cases. And if the defendant claims “duplicate” is too vague, plaintiffs can define it functionally—as content that’s identical or substantially similar, by hash, tokens, or vectors.
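That functional definition of “duplicate” can itself be made concrete. The sketch below shows two of the three notions mentioned above—byte-identical content (by hash) and token overlap above a threshold; the threshold value and function names are illustrative assumptions, not a proposed legal standard:

```python
import hashlib


def exact_duplicate(a: str, b: str) -> bool:
    """Byte-identical content: same cryptographic hash."""
    return hashlib.sha256(a.encode()).digest() == hashlib.sha256(b.encode()).digest()


def token_jaccard(a: str, b: str) -> float:
    """Overlap of the two texts' token sets (Jaccard similarity, 0.0-1.0)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if (ta | tb) else 1.0


def is_duplicate(a: str, b: str, threshold: float = 0.8) -> bool:
    """'Duplicate' defined functionally: identical bytes, or
    substantially similar by token overlap above a chosen threshold."""
    return exact_duplicate(a, b) or token_jaccard(a, b) >= threshold


assert is_duplicate("shake it off official video", "shake it off official video")
assert is_duplicate("shake it off official video lyrics",
                    "shake it off official video")     # ~0.83 overlap
assert not is_duplicate("shake it off", "completely different song title")
```

A definition like this is easy to put in an interrogatory: it is objective, testable against the defendant’s own artifacts, and hard to dismiss as vague.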

Most importantly, deduplication is relevant. If a platform identified a plaintiff’s work and trained on it anyway, that speaks to volitional use, copying, and lack of care—key issues in copyright and fair use analysis. And if they lied about it, particularly to the court—Helloooooo Harper & Row. Discovery requests that are focused, tailored, and anchored in specific works stand a far better chance of surviving objections and yielding meaningful evidence.

David Sacks Is Learning That the States Still Matter

For a moment, it looked like the tech world’s powerbrokers had pulled it off. Buried deep in a Republican infrastructure and tax package was a sleeper provision — the so-called AI moratorium — that would have blocked states from passing their own AI laws for up to a decade. It was an audacious move: centralize control over one of the most consequential technologies in history, bypass 50 state legislatures, and hand the reins to a small circle of federal agencies and especially to tech industry insiders.

But then it collapsed.

The Senate voted 99–1 to strike the moratorium. Governors rebelled. Attorneys general sounded the alarm. Artists, parents, workers, and privacy advocates from across the political spectrum said “no.” Even hardline conservatives like Ted Cruz eventually reversed course when it came down to the final vote. The message to Big Tech, or to the famous “Little Tech,” was clear: the states still matter — and America’s tech elite ignore that at their peril. (“Little Tech” is the latest rhetorical deflection promoted by Big Tech; in other words, propaganda.)

The old Google crowd pushed the moratorium; their fingerprints were obvious, having gotten fabulously rich off their two favorite safe harbors: the DMCA farce and the Section 230 shakedown. But there is increasing speculation that White House AI Czar and Silicon Valley Viceroy David Sacks, a PayPal alum and vocal MAGA-world player, was calling the ball. If true, that makes this defeat even more revealing.

Sacks represents something of a new breed of power-hungry tech-right influencer — part of the emerging “Red Tech” movement that claims to reject woke capitalism and coastal elitism but still wants experts to shape national policy from Silicon Valley, a chapter straight out of Philip Dru: Administrator. Sacks is tied to figures like Peter Thiel, Elon Musk, and a growing network of Trump-aligned venture capitalists. But even that alignment couldn’t save the moratorium.

Why? Because the core problem wasn’t left vs. right. It was top vs. bottom.

In 1964, Ronald Reagan’s classic speech A Time for Choosing warned about “a little intellectual elite in a far-distant capitol” deciding what’s best for everyone else. That warning still rings true — except now the “capitol” might just be a server farm in Menlo Park or a podcast studio in LA.

The AI moratorium was an attempt to govern by preemption and fiat, not by consent. And the backlash wasn’t partisan. It came from red states and blue ones alike — places where elected leaders still think they have the right to protect their citizens from unregulated surveillance, deepfakes, data scraping, and economic disruption.

So yes, the defeat of the moratorium was a blow to Google’s strategy of soft-power dominance. But it was also a shot across the bow for David Sacks and the would-be masters of tech populism. You can’t have populism without the people.

If Sacks and his cohort want to play a long game in AI policy, they’ll have to do more than drop ideas into the policy laundry of think tank white papers and Beltway briefings. They’ll need to win public trust, respect state sovereignty, and remember that governing by sneaky safe harbors is no substitute for legitimacy.  

The moratorium failed because it presumed America could be governed like a tech startup — from the top, at speed, with no dissent. Turns out the country is still under the impression it has something to say about how it is governed, especially by Big Tech.

The Patchwork They Fear Is Accountability: Why Big AI Wants a Moratorium on State Laws

Why Big Tech’s Push for a Federal AI Moratorium Is Really About Avoiding State Investigations, Liability, and Transparency

As Congress debates the so-called “One Big Beautiful Bill Act,” one of its most explosive provisions has stayed largely below the radar: a 10-year (or 5-year, or any-year) federal moratorium on state and local regulation of artificial intelligence. Supporters frame it as a common-sense way to prevent a “patchwork” of conflicting state laws. But the real reason for the moratorium may be more self-serving—and more ominous.

The truth is, the patchwork they fear is not complexity. It’s accountability.

Liability Landmines Beneath the Surface

As has been well-documented by the New York Times and others, generative AI platforms have likely ingested and processed staggering volumes of data that implicate state-level consumer protections. This includes biometric data (like voiceprints and faces), personal communications, educational records, and sensitive metadata—all of which are protected under laws in states like Illinois (BIPA), California (CCPA/CPRA), and Texas.

If these platforms scraped and trained on such data without notice or consent, they are sitting on massive latent liability. Unlike federal laws, which are often narrow or toothless, many state statutes allow private lawsuits and statutory damages. Class action risk is not hypothetical—it is systemic. It is crucial for policymakers to have a clear understanding of where we are today with respect to the collision between AI and consumer rights, including copyright. The corrosion of consumer rights by the richest corporations in commercial history is not something that may happen in the future. Massive violations have already occurred, are occurring this minute, and will continue into the future at an increasing rate.

The Quiet Race to Avoid Discovery

State laws don’t just authorize penalties; they open the door to discovery. Once an investigation or civil case proceeds, AI platforms could be forced to disclose exactly what data they trained on, how it was retained, and whether any red flags were ignored.

This mirrors the arc of the social media addiction lawsuits now consolidated in multidistrict litigation. Platforms denied culpability for years—until internal documents showed what they knew and when. The same thing could happen here, but on a far larger scale.

Preemption as Shield and Sword

The proposed AI moratorium isn’t a regulatory timeout. It’s a firewall. By halting enforcement of state AI laws, the moratorium could prevent lawsuits, derail investigations, and shield past conduct from scrutiny.

Even worse, the Senate version conditions broadband infrastructure funding (BEAD) on states agreeing to the moratorium—an unconstitutional act of coercion that trades state police powers for federal dollars. The legal implications are staggering, especially under the anti-commandeering doctrine of Murphy v. NCAA and Printz v. United States.

This Isn’t About Clarity. It’s About Control.

Supporters of the moratorium, including senior federal officials and lobbying arms of Big Tech, claim that a single federal standard is needed to avoid chaos. But the evidence tells a different story.

States are acting precisely because Congress hasn’t. Illinois’ BIPA led to real enforcement. California’s privacy framework has teeth. Dozens of other states are pursuing legislation to respond to harms AI is already causing.

In this light, the moratorium is not a policy solution. It’s a preemptive strike.

Who Gets Hurt?
– Consumers, whose biometric data may have been ingested without consent
– Parents and students, whose educational data may now be part of generative models
– Artists, writers, and journalists, whose copyrighted work has been scraped and reused
– State AGs and legislatures, who lose the ability to investigate and enforce

Google Is an Example of Potential Exposure

Google’s former executive chairman Eric Schmidt has seemed very, very interested in writing the law for AI. For example, Schmidt worked behind the scenes for at least two years to establish US artificial intelligence policy under President Biden. Those efforts produced the “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” the longest executive order in history, which President Biden signed on October 30, 2023. In his own words during an Axios interview with Mike Allen, the Biden AI EO was signed just in time for Mr. Schmidt to present it as what he calls “bait” to the UK government, which convened a global AI safety conference at Bletchley Park under His Excellency Rishi Sunak (the UK’s tech-bro Prime Minister) that just happened to start on November 1, the day after President Biden signed the EO. And now look at the disaster that the UK AI proposal would be.

As Mr. Schmidt told Axios:

So far we are on a win, the taste of winning is there.  If you look at the UK event which I was part of, the UK government took the bait, took the ideas, decided to lead, they’re very good at this,  and they came out with very sensible guidelines.  Because the US and UK have worked really well together—there’s a group within the National Security Council here that is particularly good at this, and they got it right, and that produced this EO which is I think is the longest EO in history, that says all aspects of our government are to be organized around this.

Apparently, Mr. Schmidt hasn’t gotten tired of winning. Of course, President Trump rescinded the Biden AI EO, which may explain why we are now talking about a total moratorium on state enforcement, an idea that percolated at a very pro-Google shillery called the R Street Institute, apparently via one Adam Thierer. But why might Google be so interested in this idea?

Google may face especially acute liability under state laws if it turns out that biometric or behavioral data from platforms like YouTube Kids or Google for Education were ingested into AI training sets.

These services, marketed to families and schools, collect sensitive information from minors—potentially implicating both federal protections like COPPA and more expansive state statutes. As far back as 2015, Senator Bill Nelson raised alarms about YouTube Kids, calling it “ridiculously porous” in terms of oversight and lack of safeguards. If any of that youth-targeted data has been harvested by generative AI tools, the resulting exposure is not just a regulatory lapse—it’s a landmine.

The moratorium could be seen as an attempt to preempt the very investigations that might uncover how far that exposure goes.

What is to be Done?

Instead of smuggling this moratorium into a must-pass bill, Congress should strip it out and hold open hearings. If there’s merit to federal preemption, let it be debated on its own. But do not allow one of the most sweeping power grabs in modern tech policy to go unchallenged.

The public deserves better. Our children deserve better. And the states have every right to defend their people. Because the patchwork they fear isn’t legal confusion.

It’s accountability.

Steve’s Not Here–Why AI Platforms Are Still Acting Like Pirate Bay

In 2006, I wrote “Why Not Sell MP3s?” — a simple question pointing to an industry in denial. The dominant listening format was the MP3 file, yet labels were still trying to sell CDs or hide digital files behind brittle DRM. It seems kind of incredible in retrospect, but believe me it happened. Many cycles were burned on that conversation. Fans had moved on. The business hadn’t.

Then came Steve Jobs.

At the launch of the iTunes Store — and I say this as someone who sat in the third row — Jobs gave one of the most brilliant product presentations I’ve ever seen. He didn’t bulldoze the industry. He waited for permission, but only after crafting an offer so compelling it was as if the labels should be paying him to get in. He brought artists on board first. He made it cool, tactile, intuitive. He made it inevitable.

That’s not what’s happening in AI.

Incantor: DRM for the Input Layer

Incantor is trying to be the clean-data solution for AI — a system that wraps content in enforceable rights metadata, licenses its use for training and inference, and tracks compliance. It’s DRM, yes — but applied to training inputs instead of music downloads.

It may be imperfect, but at least it acknowledges that rights exist.
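As a purely illustrative sketch of the idea — not Incantor’s actual format, which isn’t public — a rights-metadata record might gate training and inference use on explicit, machine-readable license terms. Every field name and value below is a hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class RightsMetadata:
    """Hypothetical rights wrapper attached to a work before it enters a training pipeline."""
    work_id: str            # illustrative identifier, not a real registry scheme
    rights_holder: str
    training_allowed: bool
    inference_allowed: bool
    license_terms: dict = field(default_factory=dict)

def may_use_for_training(meta: RightsMetadata) -> bool:
    """A licensed pipeline would gate ingestion on the metadata — consent plus
    an agreed rate — rather than on mere crawlability."""
    return meta.training_allowed and "rate" in meta.license_terms

song = RightsMetadata(
    work_id="example-work-001",
    rights_holder="Example Music Publishing",
    training_allowed=True,
    inference_allowed=False,
    license_terms={"rate": 0.002, "currency": "USD"},
)

print(may_use_for_training(song))  # True
```

The design point is the inversion of defaults: in this model, absence of metadata means “no,” which is the opposite of the scrape-first posture described below.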

What’s more troubling is the contrast between Incantor’s attempt to create structure and the behavior of the major AI platforms, which have taken a very different route.

AI Platforms = Pirate Bay in a Suit

Today’s generative AI platforms — the big ones — aren’t behaving like Apple. They’re behaving like The Pirate Bay with a pitch deck.

– They ingest anything they can crawl.
– They claim “public availability” as a legal shield.
– They ignore licensing unless forced by litigation or regulation.
– They posture as infrastructure, while vacuuming up the cultural labor of others.

These aren’t scrappy hackers. They’re trillion-dollar companies acting like scraping is a birthright. Where Jobs sat down with artists and made the economics work, the platforms today are doing everything they can to avoid having that conversation.

This isn’t just indifference — it’s design. The entire business model depends on skipping the licensing step and then retrofitting legal justifications later. They’re not building an ecosystem. They’re strip-mining someone else’s.

What Incantor Is — and Isn’t

Incantor isn’t Steve Jobs. It doesn’t control the hardware, the model, the platform, or the user experience. It can’t walk into the room and command the majors to listen with elegance. But what it is trying to do is reintroduce some form of accountability — to build a path for data that isn’t scraped, stolen, or in legal limbo.

That’s not an iTunes power move. It’s a cleanup job. And it won’t work unless the AI companies stop pretending they’re search engines and start acting like publishers, licensees, and creative partners.

What the MP3 Era Actually Taught Us

The MP3 era didn’t end because DRM won. It ended because someone found a way to make the business model and the user experience better — not just legal, but elegant. Jobs didn’t force the industry to change. He gave them a deal they couldn’t refuse.

Today, there’s no Steve Jobs. No artists on stage at AI conferences. No tactile beauty. Just cold infrastructure, vague promises, and a scramble to monetize other people’s work before the lawsuits catch up. Let’s face it–when it comes to Elon, Sam, or Zuck, would you buy a used Mac from that man?

If artists and AI platforms were in one of those old “I’m a Mac / I’m a PC” commercials, you wouldn’t need to be told which is which. One side is creative, curious, collaborative. The other is corporate, defensive, and vaguely annoyed that you even asked the question.

Until that changes, platforms like Incantor will struggle to matter — and the AI industry will continue to look less like iTunes, and more like Pirate Bay with an enterprise sales team.

The OBBBA’s AI Moratorium Provision Has Existential Constitutional Concerns and Policy Implications

As we watch the drama of the One Big Beautiful Bill Act play out, there’s a plot twist waiting in the wings that could create a cliffhanger in the third act: the poorly thought-out, unnecessary, and frankly offensive AI moratorium safe harbor, serving only the Biggest of Big Tech, that we were gifted by Adam Thierer of the R Street Institute.

The latest version of the AI moratorium poison pill in the Senate version of OBBBA (aka HR1) reads like the fact pattern for a bar exam crossover question, and it raises significant constitutional and policy concerns. Before it even gets to the President’s desk, the provision likely violates the Senate’s Byrd Rule, which allows the OBBBA to avoid the 60-vote threshold (and the filibuster) and get voted on in “reconciliation” by a simple majority. The President’s party has a narrow simple majority in the Senate, so if it were not for the moratorium the OBBBA should pass.

There are lots of people who think the moratorium should fail the “Byrd Bath” analysis because it is not germane to the budget and tax process required to qualify for reconciliation. This matters because if the Senate Parliamentarian does not hold the line on germaneness, everyone will get into the act on every bill simply by attaching a chunk of money for their favorite donor, and that will not go over well. According to Roll Call, Senator Cruz is already talking about introducing regulatory legislation containing the moratorium, which would likely only happen if the OBBBA poison pill were cut out.

The AI moratorium has already picked up some serious opponents in the Senate who would likely have otherwise voted for the President’s signature legislation with the President’s tax and spending policies in place. The difference between the moratorium and spending cuts is that money is fungible and a moratorium banning states from acting under their police powers really, really, really is not fungible at all. The moratorium is likely going to fail or get close to failing, and if the art of the deal says getting 80% of something is better than 100% of nothing, that moratorium is going to go away in the context of a closing. Maybe.

And don’t forget, the bill has to go back to the House, which passed it by a single vote, and there are already Members of the House getting buyer’s remorse about the AI moratorium specifically. So when they get a chance to vote again…who knows.

Even if it passes, the 40 state Attorneys General who oppose it may be gearing up to launch a constitutional challenge to the provision on a number of grounds, starting with the Tenth Amendment, its implications for federalism, and the other constitutional issues that just drip out of this thing. And my bet is that Adam Thierer will be eyeball witness #1 in that litigation.

So to recap the vulnerabilities:

Byrd Rule Violation

The Byrd Rule prohibits non-budgetary provisions in reconciliation bills. The AI moratorium’s primary effect is regulatory, not fiscal: it preempts state laws without directly affecting federal revenues or expenditures. Senators including Ed Markey (D-MA) have indicated intentions to challenge the provision under the Byrd Rule, as reported by Roll Call and The Hill.

Federal Preemption, the Tenth Amendment and Anti-Commandeering Doctrine

The Tenth Amendment famously reserves powers not delegated to the federal government to the states and to the people (remember them?). The constitutional principle of “anticommandeering” is a doctrine under U.S. Constitutional law that prohibits the federal government from compelling states or state officials to enact, enforce, or administer federal regulatory programs.

Anticommandeering is grounded primarily in the Tenth Amendment. Under this principle, while the federal government can regulate individuals directly under its enumerated powers (such as the Commerce Clause), it cannot force state governments to govern according to federal instructions. Which is, of course, exactly what the moratorium does, although the latest version would have you believe that the feds aren’t really commandeering, they are just tying behavior to money which the feds do all the time. I doubt anyone believes it.

The AI moratorium infringes upon the good old Constitution by:

  • Overriding State Authority: It prohibits states from enacting or enforcing AI regulations, infringing upon their traditional police powers to legislate for the health, safety, and welfare of their citizens.
  • Lack of Federal Framework: Unlike permissible federal preemption, which operates within a comprehensive federal regulatory scheme, the AI moratorium lacks such a framework, making it more akin to unconstitutional commandeering.
  • Precedent in Murphy v. NCAA: The Supreme Court held that Congress cannot prohibit states from enacting laws, as that prohibition violates the anti-commandeering principle. The AI moratorium, by preventing states from regulating AI, mirrors the unconstitutional aspects identified in Murphy. So there’s that.

The New Problem: Coercive Federalism

By conditioning federal broadband funds (“BEAD money”) on states’ agreement to pause AI regulations, the provision exerts undue pressure on states, potentially violating principles established in cases like NFIB v. Sebelius. Plus, the Broadband Equity, Access, and Deployment (BEAD) Program is a $42.45 billion federal initiative established under the Infrastructure Investment and Jobs Act of 2021. Administered by the National Telecommunications and Information Administration (NTIA), BEAD aims to expand high-speed internet access across the United States by funding planning, infrastructure deployment, and adoption programs. In other words, BEAD has nothing to do with the AI moratorium. So there’s that.

Supremacy Clause Concerns

The moratorium may conflict with existing state laws, leading to legal ambiguities and challenges regarding federal preemption. That’s one reason why 40 state AGs are going to the mattresses for the fight.

Lawmakers Getting Cold Feet or In Opposition

Several lawmakers have voiced concerns or opposition to the AI moratorium:

  • Rep. Marjorie Taylor Greene (R-GA): Initially voted for the bill but later stated she was unaware of the AI provision and would have opposed it had she known. She has said she will vote no on the OBBBA when it comes back to the House if Mr. T’s moratorium poison pill is still in there.
  • Sen. Josh Hawley (R-MO): Opposes the moratorium, emphasizing the need to protect individual rights over corporate interests.
  • Sen. Marsha Blackburn (R-TN): Expressed concerns that the moratorium undermines state protections, particularly referencing Tennessee’s AI-related laws.
  • Sen. Edward Markey (D-MA): Intends to challenge the provision under the Byrd Rule, citing its potential to harm vulnerable communities.

Recommendation: Allow Dissenting Voices

Full disclosure, I don’t think Trump gives a damn about the AI moratorium. I also think this is performative and tied to giving the impression to people like Masa at SoftBank that he tried. It must be said that Masa’s billions are not quite as important after Trump’s Middle East roadshow as they were before, speaking of leverage. While much has been made of the $1 million contributions that Zuckerberg, Tim Apple, & Co. made to attend the inaugural, there’s another way to look at that tableau: remember Titus Andronicus, when the general returned to Rome with Goth prisoners in chains following his chariot? That was Tamora, Queen of the Goths, her three sons Alarbus, Chiron, and Demetrius, along with Aaron the Moor. Titus and the Goths still hated each other. Just sayin’.

Somehow I wouldn’t be surprised if this entire exercise was connected to the TikTok divestment in ways that aren’t entirely clear. So, given the constitutional concerns and growing opposition, it is advisable for President Trump to permit members of Congress to oppose the AI moratorium provision without facing political repercussions, particularly since Rep. Greene has already said she’s a no vote–on the 214-213 vote the first time around. This approach would:

  • Respect the principles of federalism and states’ rights.
  • Tell Masa he tried, but oh well.
  • Demonstrate responsiveness to legitimate legislative concerns on a bi-partisan basis.
  • Ensure that the broader objectives of the OBBBA are not jeopardized by a contentious provision.

Let’s remember: the tax-and-spend parts of OBBBA are existential to the Trump agenda; the AI moratorium definitely is not, no matter what Mr. T wants you to believe. While the OBBBA encompasses significant policy initiatives that are highly offensive to a lot of people, the AI moratorium provision presents constitutional and procedural problems, and fundamental attacks on our Constitution, that warrant its removal. Cutting it out will strengthen the bill’s likelihood of passing and uphold the foundational principles of American governance, at least for now.

Hopefully Trump looks at it that way, too.

What Bell Labs and Xerox PARC Can Teach Us About the Future of Music

When we talk about the great innovation engines of the 20th century, two names stand out: Bell Labs and Xerox PARC. These legendary research institutions didn’t just push the boundaries of science and technology; they delivered breakthrough solutions to hard problems. The transistor, the laser, the UNIX operating system, the graphical user interface, and Ethernet networking all trace their origins to these hubs of long-range, cross-disciplinary thinking.

These breakthroughs didn’t happen by accident. They were the product of institutions intentionally designed to explore what might be possible outside the pressures of quarterly earnings reports (which means monthly, which means weekly). Bell Labs and Xerox PARC proved that bold ideas need space, time, and a mandate to explore—even if commercial applications aren’t immediately apparent. You cannot solve big problems with an eye on weekly revenues—and I know that because I worked at A&M Records.

Now imagine if music had something like Bell Labs and Xerox PARC.

What if there were a Bell Labs for Music—an independent research and development hub where songwriters, engineers, logisticians, rights experts, and economists could collaborate to solve deep-rooted industry challenges? Instead of letting dominant tech platforms dictate the future, the music industry could build its own innovation engine, tailored to the needs of creators. Let’s consider how similar institutions could empower the music industry to reclaim its creative and economic future, particularly as it confronts AI and its institutional takeover.

Big Tech’s Self-Dealing: A $500 Million Taxpayer-Funded Windfall

While creators are being told to “adapt” to the age of AI, Big Tech has quietly written itself a $500 million check—funded by taxpayers—for AI infrastructure. Buried within the sprawling “innovation and competitiveness” sections of legislation being promoted as part of Trump’s “big beautiful bill,” this provision would hand over half a billion dollars in public funding—more accurately, public debt—to cloud providers, chipmakers, and AI monopolists with little transparency and even fewer obligations to the public.

Don’t bother looking–it will come as no surprise that there are no offsetting provisions for musicians, authors, educators, or even news publishers whose work is routinely scraped to train these AI models. There are no earmarks for building fair licensing infrastructure or consent-based AI training databases. There is no “AI Bell Labs” for the creative economy.

Once again, we see that innovation policy is being written by and for the same old monopolists who already control the platforms and the Internet itself, while the people whose work fills those platforms are left unprotected, uncompensated, and uninformed. If we are willing to borrow hundreds of millions to accelerate private AI growth, we should be at least as willing to invest in creator-centered infrastructure that ensures innovation is equitable—not extractive.

Innovation Needs a Home—and a Conscience

Bell Labs and Xerox PARC were designed not just to build technology, but to think ahead. They often solved future challenges before the world even knew they existed.

The music industry can—and must—do the same. Instead of waiting for another monopolist to exercise its political clout to grant itself new safe harbors to upend the rules–like AI platforms are doing right now–we can build a space where songwriters, developers, and rights holders collaborate to define a better future. That means metadata that respects rights and tracks payments to creators. That means fair discovery systems. That means artist-first economic models.

It’s time for a Bell Labs for music. And it’s time to fund it not through government dependency—but through creator-led coalitions, industry responsibility, and platform accountability.

Because the future of music shouldn’t be written in Silicon Valley boardrooms. It should be composed, engineered, and protected by the people who make it matter.