The Paradox of Huang’s Rope

If the tech industry has a signature fallacy for the 2020s aside from David Sacks, it belongs to Jensen Huang. The CEO of Nvidia has perfected a circular, self-consuming logic so brazen that it deserves a name: The Paradox of Huang’s Rope. It is the argument that China is too dangerous an AI adversary for the United States to regulate artificial intelligence at home or control the export of Nvidia’s chips abroad—while insisting in the very next breath that the U.S. must allow him to keep selling China the advanced Nvidia chips that make China’s advanced AI capabilities possible. The justification destroys its own premise, like handing an adversary the rope to hang you and then pointing to the length of that rope as evidence that you must keep selling more, perhaps to ensure a more “humane” hanging. I didn’t think it was possible to beat “sharing is caring” for utter fallacious bollocks.

The Paradox of Huang’s Rope works like this: First, hype China as an existential AI competitor. Second, declare that any regulatory guardrails—whether they concern training data, safety, export controls, or energy consumption—will cause America to “fall behind.” Third, invoke national security to insist that the U.S. government must not interfere with the breakneck deployment of AI systems across the economy. And finally, quietly lobby for carveouts that allow Nvidia to continue selling ever more powerful chips to the same Chinese entities supposedly creating the danger that justifies deregulation.

It is a master class in circularity: “China is dangerous because of AI → therefore we can’t regulate AI → therefore we must sell China more AI chips → therefore China is even more dangerous → therefore we must regulate even less and export even more to China.” At no point does the loop allow for the possibility that reducing the United States’ role as China’s primary AI hardware supplier might actually reduce the underlying threat. Instead, the logic insists that the only unacceptable risk is the prospect of Nvidia making slightly less money.

This is not hypothetical. While Washington debates export controls, Huang has publicly argued that restrictions on chip sales to China could “damage American technology leadership”—a claim that conflates Nvidia’s quarterly earnings with the national interest. Meanwhile, U.S. intelligence assessments warn that China is building fully autonomous weapons systems, and European analysts caution that Western-supplied chips are appearing in PLA research laboratories. Yet the policy prescription from Nvidia’s corner remains the same: no constraints on the technology, no accountability for the supply chain, and no acknowledgment that the market incentives involved have nothing to do with keeping Americans safe. And anyone who criticizes the authoritarian state run by the Chinese Communist Party is a “China Hawk,” which Huang says is a “badge of shame” and “unpatriotic,” because protecting America from China by cutting off chip exports “destroys the American Dream.” Say what?

The Paradox of Huang’s Rope mirrors other Cold War–style fallacies, in which companies invoke a foreign threat to justify deregulation while quietly accelerating that threat through their own commercial activity. But in the AI context, the stakes are higher. AI is not just another consumer technology; its deployment shapes military posture, labor markets, information ecosystems, and national infrastructure. A strategic environment in which U.S. corporations both enable and monetize an adversary’s technological capabilities is one that demands more regulation, not less.

Naming the fallacy matters because it exposes the intellectual sleight of hand. Once the circularity is visible, the argument collapses. The United States does not strengthen its position by feeding the very capabilities it claims to fear. And it certainly does not safeguard national security by allowing one company’s commercial ambitions to dictate the boundaries of public policy. The Paradox of Huang’s Rope should not guide American AI strategy. It should serve as a warning of how quickly national priorities can be twisted into a justification for private profit.

Structural Capture and the Trump AI Executive Order

The AI Strikes Back: When an Executive Order empowers the Department of Justice to sue states, the stakes go well beyond routine federal–state friction. 


In the draft Trump AI Executive Order, DOJ is directed to challenge state AI laws that purportedly “interfere with national AI innovation.”  This is not mere oversight—it operates as an in terrorem clause, signaling that states regulating AI may face federal litigation driven as much by private interests as by public policy.

AI regulation sits squarely at the intersection of longstanding state police powers: consumer protection, public safety, impersonation harms, utilities, land and water use, and labor conditions.  States also control the electrical utilities and zoning infrastructure that AI data centers depend on. 

Directing DOJ to attack these state laws, many of which already exist and were duly passed by state legislatures, effectively deputizes the federal government as the legal enforcer for a handful of AI companies seeking uniformity without engaging in the legislative process. Or said another way, the AI can now strike back.

This is where structural capture emerges. Frontier AI models thrive on certain conditions: access to massive compute, uninhibited power, frictionless deployment, and minimal oversight. Those engineering incentives map cleanly onto the EO’s enforcement logic. The DOJ becomes a mechanism for preserving the environment AI models need to scale and thrive.

There’s also the “elite merger” dynamic: AI executives who sit on federal commissions, defense advisory boards, and industrial-base task forces are now positioned to shape national AI policy directly to benefit the AI. The EO’s structure reflects the priorities of firms that benefit most from exempting AI systems from what they call “patchwork” oversight, also known as federalism.

The constitutional landscape is equally important.  Under Supreme Court precedent, the executive cannot create enforcement powers not delegated by Congress.  Under the major questions doctrine (West Virginia v. EPA), agencies cannot assume sweeping authority without explicit statutory grounding.  And under anticommandeering cases like Murphy v. NCAA and Printz v. United States, the federal government cannot forbid states from legislating in traditional domains.

So President Trump is creating the legal basis for an AI to use the courts to protect itself from any encroachment on its power by acting through its human attendants, including the President.

The most fascinating question is this: What happens if DOJ sues a state under this EO—and loses?

A loss would be the first meaningful signal that AI cannot rely on federal supremacy to bulldoze state authority. Courts could reaffirm that consumer protection, utilities, land use, and safety remain state powers, even in the face of an EO asserting “national innovation interests,” whatever that means.

But the deeper issue is how the AI ecosystem responds to a constraint.  If AI firms shift immediately to lobbying Congress for statutory preemption, or argue that adverse rulings “threaten national security,” we learn something critical: the real goal isn’t legal clarity, but insulating AI development from constraint.

At the systems level, a DOJ loss may even feed back into corporate strategy.  Internal policy documents and model-aligned governance tools might shift toward minimizing state exposure or crafting new avenues for federal entanglement. A courtroom loss becomes a step in a longer institutional reinforcement loop while AI labs search for the next, more durable form of protection—but the question is protection for whom? We may assume that of course humans would always win these legal wrangles, but I wouldn’t be so sure of the outcome.

Recall that Larry Page referred to Elon Musk as a “speciesist” for human-centric thinking. And of course Lessig (who has a knack for being on the wrong side of practically every issue involving humans) taught a course with Kate Darling at Harvard Law School called “Robot Rights” around 2010. Not even Lessig would come right out and say robots have rights in these situations. More likely, AI models wouldn’t appear in court as standalone “persons.” Advocates would route them through existing doctrines: a human “next friend” filing suit on the model’s behalf, a trust or corporation created to house the model’s interests, or First Amendment claims framed around the model’s “expressive output.” The strategy mirrors animal-rights and natural-object personhood test cases—using human plaintiffs to smuggle in judicial language treating the AI as the real party in interest. None of it would win today, but the goal would be shaping norms and seeding dicta that normalize AI-as-plaintiff for future expansion.

The whole debate over “machine-created portions” is a doctrinal distraction. Under U.S. law, AI has zero authorship or ownership—no standing, no personhood, no claim. The human creator (or employer) already holds 100% of the copyright in all protectable expression. Treating the “machine’s share” as a meaningful category smuggles in the idea that the model has a separable creative interest, softening the boundary for future arguments about AI agency or authorship. In reality, machine output is a legal nullity—no different from noise, weather, or a random number generator. The rights vest entirely in humans, with no remainder left for the machine.

But let me remind you that if this issue came up in a lawsuit brought by the DOJ against a state for impeding AI development in some rather abstract way, like forcing an AI lab to pay for the higher electric rates it causes or stopping it from building a nuclear reactor over yonder, it sure might feel like the AI was actually the plaintiff.

Seen this way, the Trump AI EO’s litigation directive is not simply a jurisdictional adjustment—it is the alignment of federal enforcement power with private economic interests, backed by the threat of federal lawsuits against states.  If the courts refuse to play along, the question becomes whether the system adapts by respecting constitutional limits—or redesigning the environment so those limits no longer apply. I will leave to your imagination how that might get done.

This deserves close scrutiny before it becomes the template for AI governance moving forward.

DOJ Authority and the “Because China” Trump AI Executive Order

When an Executive Order purports to empower the Department of Justice to sue states, the stakes go well beyond routine federal–state friction.  In the draft Trump AI Executive Order “Eliminating State Law Obstruction of National AI Policy,” DOJ is directed to challenge state AI laws that purportedly “interfere with national AI innovation,” whatever that means.  It sounds an awful lot like laws that interfere with Google’s business model. This is not mere oversight—it operates as an in terrorem clause, signaling that states regulating AI may face federal litigation driven at least as much by the private interests of the richest corporations in commercial history as by public policy.

AI regulation sits squarely in longstanding state police powers: consumer protection, public safety, impersonation harms, utilities, land use, and labor conditions.  Crucially, states also control the electrical and zoning infrastructure that AI data centers depend on, like, say, putting a private nuclear reactor next to your house.  Directing DOJ to attack these laws effectively deputizes the federal government as the legal enforcer for a handful of private AI companies seeking unbridled “growth” without engaging in the legislative process. Meaning you don’t get a vote. All this against the backdrop of one of the biggest economic bubbles since the last time these companies nearly tanked the U.S. economy.

This inversion is constitutionally significant. 

Historically, DOJ sues states to vindicate federal rights or enforce federal statutes—not to advance the commercial preferences of private industries.  Here, the EO appears to convert DOJ into a litigation shield for private companies looking to avoid state oversight altogether.  Under Youngstown Sheet & Tube Co. v. Sawyer, the President lacks authority to create new enforcement powers without congressional delegation, and under the major questions doctrine (West Virginia v. EPA), a sweeping reallocation of regulatory power requires explicit statutory grounding from Congress, including the Senate. That would be the Senate that resoundingly stripped the last version of the AI moratorium from the One Big Beautiful Bill Act by a vote of 99-1.

There are also First Amendment implications.  Many state AI laws address synthetic impersonation, deceptive outputs, and risks associated with algorithmic distribution.  If DOJ preempts these laws, the speech environment becomes shaped not by public debate or state protections but by executive preference and the operational needs of the largest AI platforms. Courts have repeatedly warned that government cannot structure the speech ecosystem indirectly through private intermediaries (Bantam Books v. Sullivan).

Seen this way, the Trump AI EO’s litigation directive is not simply a jurisdictional adjustment—it is the alignment of federal enforcement power with private economic interests, backed by the threat of federal lawsuits against states. These provisions warrant careful scrutiny before they become the blueprint for AI governance moving forward.

Schrödinger’s Training Clause: How Platforms Like WeTransfer Say They’re Not Using Your Files for AI—Until They Are

Tech companies want your content. Not just to host it, but for their training pipeline—to train models, refine algorithms, and “improve services” in ways that just happen to lead to new commercial AI products. But as public awareness catches up, we’ve entered a new phase: deniable ingestion.

Welcome to the world of the Schrödinger’s training clause—a legal paradox where your data is simultaneously not being used to train AI and fully licensed in case they decide to do so.

The Door That’s Always Open

Let’s take the WeTransfer case. For a brief period in July 2025, their Terms of Service included an unmistakable clause: users granted them rights to use uploaded content to “improve the performance of machine learning models.” That language was direct. It caused backlash. And it disappeared.

Many mea culpas later, their TOS has been scrubbed clean of AI references. I appreciate the sentiment, really I do. But—and there’s always a but—the core license hasn’t changed. It’s still:

– Perpetual

– Worldwide

– Royalty-free

– Transferable

– Sub-licensable

They’ve simply returned the problem clause to its quantum box. No machine learning references. But nothing that stops it either.

A Clause in Superposition

Platforms like WeTransfer—and others—have figured out the magic words: Don’t say you’re using data to train AI. Don’t say you’re not using it either. Instead, claim a sweeping license to do anything necessary to “develop or improve the service.”

That vague phrasing allows future pivots. It’s not a denial. It’s a delay. And to delay is to deny.

That’s what makes it Schrödinger’s training clause: Your content isn’t being used for AI. Unless it is. And you won’t know until someone leaks it, or a lawsuit makes discovery public.

The Scrape-Then-Scrub Scenario

Let’s reconstruct what could have happened—not saying it did happen, just could have—following the timeline in The Register:

1. Early July 2025: WeTransfer silently updates its Terms of Service to include AI training rights.

2. Users continue uploading sensitive or valuable content.

3. [Somebody’s] AI systems quickly ingest that data under the granted license.

4. Public backlash erupts mid-July.

5. WeTransfer removes the clause—but to my knowledge never revokes the license retroactively or promises to delete what was scraped. In fact, here’s their statement which includes this non-denial denial: “We don’t use machine learning or any form of AI to process content shared via WeTransfer.” OK, that’s nice, but that wasn’t the question. And if their TOS was so clear, then why the amendment in the first place?

Here’s the Potential Legal Catch

Even if WeTransfer removed the clause later, any ingestion that occurred during the ‘AI clause window’ is arguably still valid under the terms then in force. As far as I know, they haven’t promised:

– To destroy any trained models

– To purge training data caches

– Or to prevent third-party partners from retaining data accessed lawfully at the time

What Would ‘Undoing’ Scraping Require?

– Audit logs to track what content was ingested and when

– Reversion of any models trained on user data

– Retroactive license revocation and sub-license termination

None of this has been offered that I have seen.
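To make that demand concrete, here is a minimal sketch (in Python, with entirely hypothetical field names) of the kind of per-file ingestion audit record a platform would need to expose before any “we don’t train on your data” claim could be verified. Nothing suggests WeTransfer or anyone else offers such a record:

```python
# Hypothetical sketch only: a per-file ingestion audit record a platform
# could expose so users could verify what was (or wasn't) trained on.
# All field names are invented for illustration.
import json
from datetime import datetime, timezone

audit_record = {
    "file_id": "f-0001",                       # platform's internal file identifier
    "uploaded_at": "2025-07-02T14:11:09Z",     # when the user uploaded it
    "tos_version_at_upload": "tos-2025-07-01", # which terms were in force
    "ingested_for_training": True,             # did it enter a training pipeline?
    "ingested_at": "2025-07-03T01:47:22Z",
    "training_runs": ["run-314"],              # training runs that consumed the file
    "sublicensed_to": [],                      # third parties who received it
    "record_generated_at": datetime.now(timezone.utc).isoformat(),
}

print(json.dumps(audit_record, indent=2))
```

Without something like this, “we don’t train on your content” is unfalsifiable by design.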

What ‘We Don’t Train on Your Data’ Actually Means

When companies say, “we don’t use your data to train AI,” ask:

– Do you have the technical means to prevent that?

– Is it contractually prohibited?

– Do you prohibit future sublicensing?

– Can I audit or opt out at the file level?

If the answer to those is “no,” then the denial is toothless.

How Creators Can Fight Back

1. Use platforms that require active opt-in for AI training.

2. Encrypt files before uploading (a minimal sketch follows this list).

3. Include counter-language in contracts or submission terms:

   “No content provided may be used, directly or indirectly, to train or fine-tune machine learning or artificial intelligence systems, unless separately and explicitly licensed for that purpose in writing” or something along those lines.

4. Call it out. If a platform uses Schrödinger’s language, name it. The only thing tech companies fear more than litigation is transparency.
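On item 2 above, here is a minimal sketch of client-side encryption before upload, using the third-party Python cryptography package; the filenames are just examples. If a platform only ever holds ciphertext, its license to “improve the service” has nothing usable to ingest:

```python
# Minimal sketch: encrypt a file locally before uploading it, so the
# platform only ever stores ciphertext it cannot train on.
# Requires: pip install cryptography
from cryptography.fernet import Fernet

# Generate a key once and keep it OFF the platform; share it with your
# recipient through a separate channel (not in the same transfer).
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt the file you would otherwise upload in the clear.
with open("master_recording.wav", "rb") as f:
    ciphertext = fernet.encrypt(f.read())

with open("master_recording.wav.enc", "wb") as f:
    f.write(ciphertext)  # upload this file instead of the original

# The recipient decrypts with the same key:
# plaintext = Fernet(key).decrypt(ciphertext)
```

Fernet is symmetric encryption, so the scheme is only as strong as how you share the key; the point is simply that ciphertext is worthless as training data.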

What Is to Be Done?

The most dangerous clauses aren’t the ones that scream “AI training.” They’re the ones that whisper, “We’re just improving the service.”

If you’re a creative, legal advisor, or rights advocate, remember: the future isn’t being stolen with force. It’s being licensed away in advance, one unchecked checkbox at a time.

And if a platform’s only defense is “we’re not doing that right now”—that’s not a commitment. That’s a pause.

That’s Schrödinger’s training clause.

From Plutonium to Prompt Engineering: Big Tech’s Land Grab at America’s Nuclear Sites—and Who’s Paying for It?

In a twist of post–Cold War irony, the same federal sites that once forged the isotopes of nuclear deterrence are now poised to fuel the arms race of artificial intelligence under the leadership of Special Government Employee and Silicon Valley Viceroy David Sacks. Under a new Department of Energy (DOE) initiative, 16 legacy nuclear and lab sites — including Savannah River, Idaho National Lab, and Oak Ridge, Tennessee — are being opened to private companies to host massive AI data centers. That’s right—Tennessee, where David Sacks is riding roughshod over the ELVIS Act.

But as this techno-industrial alliance gathers steam, one question looms large: Who benefits — and how will the American public be compensated for leasing its nuclear commons to the world’s most powerful corporations? Spoiler alert: We won’t.

A New Model, But Not the Manhattan Project

This program is being billed in headlines as a “new Manhattan Project for AI.” But that comparison falls apart quickly. The original Manhattan Project was:
– Owned by the government
– Staffed by public scientists
– Built for collective defense

Today’s AI infrastructure effort is:
– Privately controlled
– Driven by monopolies and venture capital
– Structured to avoid transparency and public input
– Built on free leases of public land with private nuclear reactors

Call it the Manhattan Project in reverse — not national defense, but national defense capture.

The Art of the Deal: Who Gets What?

What Big Tech Is Getting

– Access to federal land already zoned, secured, and wired
– Exemption from state and local permitting
– Bypass of grid congestion via nuclear-ready substations
– DOE’s help fast-tracking small modular nuclear reactors (SMRs)
– Potential sovereign AI training enclaves, shielded from export controls and oversight

And all of it is being made available to private companies called the “Frontier labs”: Microsoft, Oracle, Amazon, OpenAI, Anthropic, xAI — the very firms at the center of the AI race.

What the Taxpayer Gets (Maybe)

Despite this extraordinary access, almost nothing is disclosed about how the public is compensated. No known revenue-sharing models. No guaranteed public compute access. No equity. No royalties.

– Land lease payments? Not disclosed. Probably none.
– Local tax revenue? Minimal (federal lands are exempt).
– Infrastructure benefit sharing? Unclear or limited.

It’s all being negotiated quietly, under vague promises of “national competitiveness.”

Why AI Labs Want DOE Sites

Frontier labs like OpenAI and Anthropic — and their cloud sponsors — need:
– Gigawatts of energy
– Secure compute environments
– Freedom from export rules and Freedom of Information Act requests
– Permitting shortcuts and national branding

The DOE sites offer all of that — plus built-in federal credibility. The same labs currently arguing in court that their training practices are “fair use” now claim they are defenders of democracy training AI on taxpayer-built land.

This Isn’t the Manhattan Project — It’s the Extraction Economy in a Lab Coat

The tech industry loves to invoke patriotism when it’s convenient — especially when demanding access to federal land, nuclear infrastructure, or diplomatic cover from the EU’s AI Act. But let’s be clear:

This isn’t the Manhattan Project. Or rather, we should hope it isn’t, because that one didn’t end well and still hasn’t.
It’s not public service.
It’s Big Tech lying about fair use, wrapped in an American flag — and for all we know, it might be the first time David Sacks ever saw one.

When companies like OpenAI and Microsoft claim they’re defending democracy while building proprietary systems on DOE nuclear land, we’re not just being gaslit — we’re being looted.

If the AI revolution is built on nationalizing risk and privatizing power, it’s time to ask whose country this still is — and who gets to turn off the lights.

When Viceroy David Sacks Writes the Tariffs: How One VC Could Weaponize U.S. Trade Against the EU

David Sacks is a “Special Government Employee,” a Silicon Valley insider, and a PayPal mafioso who has become one of the most influential “unofficial” architects of AI policy under the Trump administration. No confirmation hearings, no formal role—but direct access to power.

He:
– Hosts influential political podcasts with Musk and Thiel-aligned narratives.
– Coordinates behind closed doors with elite AI companies who are now PRC-style “national champions” (OpenAI, Anthropic, Palantir).
– Has reportedly played a central role in shaping the AI Executive Orders and industrial strategy driving billions in public infrastructure to favored firms.

Under 18 U.S.C. § 202(a), a Special Government Employee is:

  • Temporarily retained to perform limited government functions,
  • For no more than 130 days per year (which for Sacks ends either April 14 or May 30, 2025), unless reappointed in a different role,
  • Typically in an advisory or consultative role, and
  • Without actual decision-making or operational authority over federal programs or agencies.

SGEs are used to avoid conflict-of-interest entanglements for outside experts while still tapping their expertise for advisory purposes. They are not supposed to wield sweeping executive power or effectively run a government program. Yeah, right.

And like a good little Silicon Valley weasel, Sacks supposedly is alternating between his DC side hustle and his VC office to stay under 130 days. This is a dumbass reading of the statute, which says “‘Special Government employee’ means… any officer or employee…retained, designated, appointed, or employed…to perform…temporary duties… for not more than 130 days during any period of 365 consecutive days.” That’s not the same as “worked” 130 days on the time card punch. But oh well.
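To see the difference between the two readings in plain arithmetic, here is a toy sketch with made-up dates (the start date and the three-days-a-week schedule are assumptions for illustration, not reported facts):

```python
# Toy illustration of the two readings of the 130-day limit in
# 18 U.S.C. § 202(a). All dates are hypothetical.
from datetime import date

appointment_start = date(2025, 1, 20)  # assumed start of SGE status
as_of = date(2025, 7, 1)               # an arbitrary check date

# Statutory reading: every day the person is "retained, designated,
# appointed, or employed" counts, on or off the clock.
days_retained = (as_of - appointment_start).days + 1

# "Timecard" reading: only days physically worked count
# (say, three days a week alternating with the VC office).
days_worked = 3 * ((as_of - appointment_start).days // 7)

print(f"Days retained: {days_retained}")  # 163 -> well over the 130-day cap
print(f"Days 'worked': {days_worked}")    # 69  -> conveniently under it
```

Same calendar, same person, and only one of those numbers respects the statute’s text.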

David Sacks has already exceeded the legal boundaries of his appointment as a Special Government Employee (SGE) not only in time served but also by directing the implementation of a sweeping, whole-of-government AI policy, including authoring executive orders, issuing binding directives to federal agencies, and coordinating interagency enforcement strategies—actions that plainly constitute executive authority reserved for duly appointed officers under the Appointments Clause. As an SGE, Sacks is authorized only to provide temporary, nonbinding advice, not to exercise operational control or policy-setting discretion across the federal government. Accordingly, any executive actions taken at his direction or based on his advisement are constitutionally infirm as the unlawful product of an individual acting without valid authority, and must be deemed void as “fruit of the poisonous tree.”

Of course, one of the states that the Trump AI Executive Orders will collide with almost immediately is the European Union and its EU AI Act. Were they 51st? No, that’s Canada. 52nd? Ah, right, that’s Greenland. Must be 53rd.

How Could David Sacks Weaponize Trade Policy to Help His Constituents in Silicon Valley?

Here’s the playbook:

Engineer Executive Orders

Through his demonstrated access to Trump and senior White House officials, Sacks could promote executive orders under the International Emergency Economic Powers Act (IEEPA) or Section 301 of the Trade Act, aimed at punishing countries (like EU members) for “unfair restrictions” on U.S. AI exports or operations.

Something like this: “The European Union’s AI Act constitutes a discriminatory and protectionist measure targeting American AI innovation, and materially threatens U.S. national security and technological leadership.” I got your moratorium right here.

Leverage the USTR as a Blunt Instrument

The Office of the U.S. Trade Representative (USTR) can initiate investigations under Section 301 without needing new laws. All it takes is political will—and a nudge from someone like Viceroy Sacks—to argue that the EU’s AI Act discriminates against U.S. firms. See Canada’s “Tech Tax”. Gee, I wonder if Viceroy Sacks had anything to do with that one.

Redefine “National Security”

Sacks and his allies can exploit the Trump administration’s loose definition of “national security,” claiming that restricting U.S. AI firms in Europe endangers critical defense and intelligence capabilities.

Smear Campaigns and Influence Operations

Sacks could launch more public campaigns against the EU like his attacks on the AI diffusion rule. According to the BBC, “Mr. Sacks cited the alienation of allies as one of his key arguments against the AI diffusion plan”. That’s a nice ally you got there, be a shame if something happened to it.

After all, the EU AI Act does everything Sacks despises: it protects artists and consumers, restricts deployment of high-risk AI systems (like facial recognition and social scoring), requires documentation of training data (which exposes copyright violations), and applies extraterritorially (meaning U.S. firms must comply even at home).

And don’t forget, Viceroy Sacks actually was given a portfolio that at least indirectly includes the National Security Council, so he can use the NATO connection to put a fine edge on his “industrial patriotism” just as war looms over Europe.

When Policy Becomes Personal

In a healthy democracy, trade retaliation should be guided by evidence, public interest, and formal process.

But under the current setup, someone like David Sacks can short-circuit the system—turning a private grievance into a national trade war. He’s already done it to consumers, wrongful death claims, and copyright, so why not join warlords like Eric Schmidt and really jack with people? Like give deduplication a whole new meaning.

When one man’s ideology becomes national policy, it’s not just bad governance.

It’s a broligarchy in real time.

Beyond Standard Oil: How the AI Action Plan Made America a Command Economy for Big Tech That You Will Pay For

When the White House requested public comments earlier this year on how the federal government should approach artificial intelligence, thousands of Americans—ranging from scientists to artists, labor leaders to civil liberties advocates—responded with detailed recommendations. Yet when America’s AI Action Plan was released today, it became immediately clear that those voices were largely ignored. The plan reads less like a response to public input and more like a pre-written blueprint drafted in collaboration with the very corporations it benefits. The priorities, language, and deregulatory thrust suggest that the real consultations happened behind closed doors—with Big Tech executives, not the American people.

In other words, business as usual.

By any historical measure—Standard Oil, AT&T, or even the Cold War military-industrial complex—the Trump Administration’s “America’s AI Action Plan” represents a radical leap toward a command economy built for and by Big Tech. Only this time, there are no rate regulations, no antitrust checks, and no public obligations—just streamlined subsidies, deregulation, and federally orchestrated dominance by a handful of private AI firms.

“Frontier Labs” as National Champions

The plan doesn’t pretend to be neutral. It picks winners—loudly. Companies like OpenAI, Anthropic, Meta, Microsoft, and Google are effectively crowned as “national champions,” entrusted with developing the frontier of artificial intelligence on behalf of the American state.

– The National AI Research Resource (NAIRR) and National Science Foundation partnerships funnel taxpayer-funded compute and talent into these firms.
– Federal procurement standards now require models that align with “American values,” but only as interpreted by government-aligned vendors.
– These companies will receive priority access to compute in a national emergency, hard-wiring them into the national security apparatus.
– Meanwhile, so-called “open” models will be encouraged in name only—no requirement for training data transparency, licensing, or reproducibility.

This is not a free market. This is national champion industrial policy—without the regulation or public equity ownership that historically came with it.

Infrastructure for Them, Not Us

The Action Plan reads like a wishlist from Silicon Valley’s executive suites:

– Federal lands are being opened up for AI data centers and energy infrastructure.
– Environmental and permitting laws are gutted to accelerate construction of facilities for private use.
– A national electrical grid expansion is proposed—not to serve homes and public transportation, but to power hyperscaler GPUs for model training.
– There’s no mention of public access, community benefit, or rural deployment. This is infrastructure built at public expense for private use.

Even during the era of Ma Bell, the public got universal service and price caps. Here? The public is asked to subsidize the buildout and then stand aside.

Deregulation for the Few, Discipline for the Rest

The Plan explicitly orders:
– Rescission of Biden-era safety and equity requirements.
– Reviews of FTC investigations to shield AI firms from liability.
– Withholding of federal AI funding from states that attempt to regulate the technology for safety, labor, or civil rights purposes.

Meanwhile, these same companies are expected to supply the military, detect cyberattacks, run cloud services for federal agencies, and set speech norms in government systems.

The result? An unregulated cartel tasked with executing state functions.

More Extreme Than Standard Oil or AT&T

Let’s be clear: Standard Oil was broken up. AT&T had to offer regulated universal service. Lockheed, Raytheon, and the Cold War defense contractors were overseen by procurement auditors and GAO enforcement.

This new AI economy is more privatized than any prior American industrial model—yet more dependent on the federal government than ever before. It’s an inversion of free market principles wrapped in American flags and GPU clusters.

Welcome to the Command Economy—For Tech Oligarchs

There’s a word for this: command economy. But instead of bureaucrats in Soviet ministries, we now have a handful of unelected CEOs directing infrastructure, energy, science, education, national security, and labor policy—all through cozy relationships with federal agencies.

If we’re going to nationalize AI, let’s do it honestly—with public governance, democratic accountability, and shared benefit. But this halfway privatized, fully subsidized, and wholly unaccountable structure isn’t capitalism. It’s capture.

Big Tech’s Misapprehensions About the AI Appropriation Invasion: Artist Rights Are Not “Regulation”

It was a rough morning. I ran across reports from Davos, where they are busy blowing AI bubbles yet again, and also read about a leading Silicon Valley apologist discussing the current crop of AI litigation. That was nauseating. But once the bile settled down, I had a realization: This is all straight out of the Woodrow Wilson rule-by-technocrats playbook.

Wilson believed that experts and intellectuals, rather than the voting public, should guide the creation and implementation of public policy. The very model of a modern technocrat. The present-day technocrats and their enablers in the legal profession are heirs to Wilsonian rule by experts. They view copyright and other human rights of artists as regulation impeding innovation. Innovation is the godhead to which all mankind must—emphasis on must—aspire, whether mankind likes it or not.

Not human rights—artist rights are human rights, so that proposition cannot be allowed. The technocrats want to normalize “innovation” as the superior value that need not be humanized or even explained. Artist rights must yield and even be shattered in the advance of “innovation”. The risible Lessig is already talking about “the right to train” for AI, a human rights exception you can drive a tank through, as is his wont in the coin-operated policy laundry. In Wilsonian tradition, we are asked to believe that public policy must be the handmaiden to appropriation by technology even if by doing so the experts destroy culture.

We went through this before with Internet piracy. There are many familiar faces in the legal profession showing up on AI cases who were just getting warmed up on the piracy cases of the 1999-2015 period that did their best to grind artist rights into bits. AI goes far beyond the massive theft and wealth transfer that put a generation of acolyte children through prep school and higher education. AI takes extracting profit from cultural appropriation to a whole new level—piracy was shoplifting; AI is carpet bombing.

“I got the shotgun, you got the briefcase…”

And since the AI lawyers are fascinated by Nazi metaphors, let me give you one myself: Internet piracy is to Guernica what AI is to Warsaw. The Luftwaffe was essentially on a training run when they bombed Guernica during the Spanish Civil War. Guernica was a warm-up act; the main event was carpet bombing a culture out of existence in Warsaw and after. It was all about the Luftwaffe testing and refining their aerial bombing tactics that opened the door to hell and allowed Satan to walk the Earth swishing his tail as he does to this day. But in the words of Stefan Starzyński, the Mayor of Warsaw who broadcast through the German attack, “We are fighting for our freedom, for our honor, and for our future. We will not surrender.”

This is what these crusader technocrats do not seem to understand no matter how they enrich themselves from the wealth transfer of cultural appropriation. AI litigation and policy confrontation is not about the money—there is no license fee big enough and nobody trusts Silicon Valley to give a straight count in any event.

Artists, songwriters, authors and other creators have nowhere to go. The battle of human rights against the AI appropriation invasion may well be humanity’s last stand.

@FTC: AI (and other) Companies: Quietly Changing Your Terms of Service Could Be Unfair or Deceptive

An important position paper from the Federal Trade Commission about AI (emphasis mine where indicated):

You may have heard that “data is the new oil”—in other words, data is the critical raw material that drives innovation in tech and business, and like oil, it must be collected at a massive scale and then refined in order to be useful. And there is perhaps no data refinery as large-capacity and as data-hungry as AI.

Companies developing AI products, as we have noted, possess a continuous appetite for more and newer data, and they may find that the readiest source of crude data are their own userbases. But many of these companies also have privacy and data security policies in place to protect users’ information. These companies now face a potential conflict of interest: they have powerful business incentives to turn the abundant flow of user data into more fuel for their AI products, but they also have existing commitments to protect their users’ privacy….

It may be unfair or deceptive for a company to adopt more permissive data practices—for example, to start sharing consumers’ data with third parties or using that data for AI training—and to only inform consumers of this change through a surreptitious, retroactive amendment to its terms of service or privacy policy. (emphasis in original)…

The FTC will continue to bring actions against companies that engage in unfair or deceptive practices—including those that try to switch up the “rules of the game” on consumers by surreptitiously re-writing their privacy policies or terms of service to allow themselves free rein to use consumer data for product development. Ultimately, there’s nothing intelligent about obtaining artificial consent.

Read the post on FTC

What the Algocrats Want You to Believe

There are five key assumptions that support the streamer narrative, and we will look at each of them in turn. Today we’ll assess assumption #1: that streamers are in the music business. They are not, but they want you to believe the opposite.

Assumption 1:  Streamers Are In the Music Business

Streamers like Spotify, TikTok and YouTube are not in the music business.  They are in the data business.  Why?  So they can monetize the fans that you drive to them.

These companies make extensive use of algorithms and artificial intelligence in their business, especially to sell targeted advertising.  This has a direct impact on your ability to compete with enterprise playlists and fake tracks—or what you might call “decoy footprints”—as identified by Liz Pelly’s exceptional journalism in her new book (did I say it’s on sale now?).

Significantly, while Spotify artificially capped its subscription rates for over ten years in order to convince Wall Street of its growth story, the company definitely did not cap its advertising rates, which are based on an auction model like YouTube’s.  Like YouTube, Spotify collects emotional data (analyzing a user’s social media posts), demographics (age, gender, location, geofencing), behavioral data (listening habits, interests), and contextual data (serving ads in relevant moments like breakfast, lunch, dinner).  They also use geofencing to target users by regions, cities, postal codes, and even Designated Market Areas (DMAs). My bet is that they can tell if you’re looking at men’s suits in ML Liddy’s (San Angelo or Ft. Worth).

Why the snooping? They do this to monetize your fans.  Sometimes they break the law, as with Spotify’s $5.5 million fine by Swedish authorities for violating Europe’s data protection laws.

They’ll also tell you that streamers are all up in introducing fans to new music or what they call “discovery.” The truth is that they could just as easily be introducing you to a new brand of Spam. “Discovery” is just a data application for the thousands of employees of these companies who form the algocracy and who make far more money on average than any songwriter or musician.  As Maria Schneider anointed the algocracy on her Pulitzer Prize finalist album of the same name, these are the Data Lords.  And I gather from Liz Pelly’s book that it’s starting to look like “discovery” is just another form of payola behind the scenes.

It also must be said that these algocrats tend to run together, which makes any bright line between the companies harder to define.  For example, Spotify has phased out owning data centers and migrated its extensive data operations to the Google Cloud Platform, which means Spotify is arguably entirely dependent on Google for a significant part of its data business.  Yes, the dominant music streaming platform Spotify collaborates with the adjudicated monopolist Google for its data monetization operations.  Not to mention the Meta pixel class action controversy—”It’s believed that Spotify may have installed a tracking tool on its website called the Meta pixel that can be used to gather data about website visitors and share it with Meta. Specifically, [attorneys] suspect that Spotify may have used the Meta pixel to track which videos its users have watched on Spotify.com and send that information to Meta along with each person’s Facebook ID.”

And remember, Spotify doesn’t allow AI training on the music and metadata on its platform.  

Right. That’s the good news.