TikTok’s Divestment Ouroboros: How the “Sale” Changed the Optics but Not the Leverage

When the TikTok USDS deal was announced under the Protecting Americans from Foreign Adversary Controlled Applications Act, it was framed as a clean resolution to years of national-security concerns expressed by many in the US. TikTok was to be reborn as a U.S. company, with U.S. control, and foreign influence neutralized. But if you look past the press language and focus on incentives, ownership, and law, a different picture emerges.

TikTok’s “forced sale” under PAFACA (not to be confused with COVFEFE) traces back to years of U.S. national-security concern that TikTok’s owner ByteDance—one of the People’s Republic of China’s biggest tech companies, founded by Zhang Yiming, one of China’s richest men and a self-described member of the ruling Chinese Communist Party—could be compelled under PRC law to share data or to allow the CCP to influence the platform’s operations. TikTok and its lobbyists repeatedly tried to deflect regulators with measures like U.S. data localization and third-party oversight (e.g., “Project Texas”). Lawmakers concluded, however, that aggressive structural separation—not promises nobody was buying—was needed. Congress then passed, and President Biden signed, legislation requiring divestiture of “foreign adversary controlled” apps like TikTok or, failing that, a total U.S. ban. Facing the risk of app-store and infrastructure cutoffs, TikTok and ByteDance pursued a restructuring to keep U.S. operations alive and maintain an exit into U.S. financial markets.

Lawmakers’ concerns were real and obvious. By trading on social media addiction, TikTok can compile rich behavioral profiles—especially on minors—by combining what users watch, like, share, search, linger on, and who they interact with, along with device identifiers, network data, and (where permitted) location signals. At scale, that kind of telemetry can be used to infer vulnerabilities and target susceptibility. For the military, the concern is not only that TikTok tracks troop movements, but that social media posts and aggregated location and social-graph signals across hundreds of millions of users could reveal patterns around bases, deployments, routines, or sensitive communities—hence warnings that harvested information could “possibly even reveal troop movements,” and hence the longstanding bans on TikTok on government-issued devices.

These concerns shot through government circles while the Tok became ubiquitous and carefully engineered social media addiction gripped the US, and indeed the West. (TikTok just this week settled its way out of the biggest social media litigation in history.) Congress was very concerned and with good reason—Rep. Mike Gallagher demanded that TikTok “Break up with the Chinese Communist Party (CCP) or lose access to your American users.” Rep. Cathy McMorris Rodgers said the bill would “prevent foreign adversaries, such as China, from surveilling and manipulating the American people.” Sen. Pete Ricketts warned “If the Chinese Communist Party is refusing to let ByteDance sell TikTok… they don’t want [control of] those algorithms coming to America.”

And of course, who can forget a classic Marsha line from Senator Marsha Blackburn: “I don’t know how to say ‘Bless your heart’ in Mandarin, but in English it’s ‘we heard you were opening a TikTok headquarters in Nashville and what you’re probably going to find is that the welcome mat isn’t going to be rolled out for you in Nashville.’”

So there’s that.

It bears repeating: these behavioral profiles have real strategic value. Given the CCP’s interest in undermining U.S. interests and especially in blunting the military, the concern is not necessarily that the CCP tracks troop movements directly (although who really knows), but that aggregated location and social-graph signals could reveal patterns around bases, deployments, routines, or sensitive communities. You know, kind of like flying a balloon across the CONUS over military bases.
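
To make the aggregation point concrete, here is a minimal, invented sketch of how even coarse, “anonymous” location pings can light up a sensitive site once they are pooled, which is the same dynamic behind the Strava heatmap episode. The data, grid size, and threshold below are hypothetical placeholders, not anything specific to TikTok’s actual telemetry:

    # Illustrative sketch only: hypothetical pings, hypothetical thresholds.
    from collections import defaultdict

    # (user_id, lat, lon) location pings
    pings = [
        ("u1", 31.1341, -97.7775), ("u1", 31.1342, -97.7771),
        ("u2", 31.1339, -97.7778), ("u2", 31.1340, -97.7776),
        ("u3", 31.1343, -97.7774), ("u4", 40.7128, -74.0060),
    ]

    def grid_cell(lat, lon, size=0.01):
        # Snap coordinates to an index on a roughly 1 km grid
        return (round(lat / size), round(lon / size))

    cell_users = defaultdict(set)
    for user, lat, lon in pings:
        cell_users[grid_cell(lat, lon)].add(user)

    # Cells where many distinct users cluster start to look like a base,
    # a barracks, or a shift change once you have enough of them.
    for cell, users in sorted(cell_users.items()):
        if len(users) >= 3:
            print(f"hotspot {cell}: {len(users)} distinct users")

The point is not that any one ping matters; it is that the pattern does, and the pattern only emerges at the aggregator’s end.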

It must also be said that when you watch TikTok’s poor performance at the Congressional hearings, it really came down to a simple question of trust. I don’t think anybody believed a word they said, and the TikTok witnesses exuded a kind of arrogance that simply does not work when Congress has the bit in its teeth. Full disclosure: I have never believed a word they said and have always been troubled that artists were unwittingly leading their fans to the social media abattoir.

I’ve been writing about TikTok for years, and not because it was fashionable or politically easy. After a classic MTP-style presentation at the MusicBiz conference in 2020 where I laid out all the issues with TikTok and the CCP, somehow I never got invited back. Back in 2020, I warned that “you don’t need proof of misuse to have a national security problem—you only need legal leverage and opacity.” I also argued that “data localization doesn’t solve a governance problem when the parent company [ByteDance] remains subject to foreign national security law,” and that focusing on the location of data storage missed “the more important question of who controls the system that decides what people see.” The forced sale didn’t vindicate any one prediction so much as confirm the basic point: structure matters more than assurances, and control matters more than rhetoric. I still have that concern after all the sound and fury.

There is also a legitimate constitutional concern with PAFACA: a government-mandated divestiture risks resembling a Fifth Amendment taking if structured to coerce a sale without just compensation. PAFACA deserved serious scrutiny even given the legitimate national security concerns. Had the dust settled with the CCP suing the U.S. government under a takings theory, it would have been both too cute by half and entirely on-brand—an example of the CCP’s “unrestricted warfare” approach to lawfare, exploiting Western legal norms strategically. (The CCP’s leading military strategy doctrine, Unrestricted Warfare, frames terrorism—and “terror-like” economic and information attacks such as TikTok’s potential use—as part of a spectrum of asymmetric methods that can weaken a technologically superior power like the US.)

Indeed, TikTok did challenge the divest-or-ban statute in the Supreme Court and mounted a SOPA-style campaign that largely failed. TikTok argued that a government-mandated forced sale violated the First Amendment rights of its users and exceeded Congress’s national-security authority. The Supreme Court unanimously upheld PAFACA, concluding that Congress permissibly targeted foreign-adversary control for national-security reasons rather than suppressing speech, and that the resulting burden on expression did not violate the First Amendment. The case ultimately underscored how far national-security rationales can narrow judicial appetite to second-guess the political branches in foreign-adversary disputes, no matter how many high-priced lawyers, lobbyists and spin doctors line up at your table. And, boy, did they have them. I think at one point close to half the shilleries in DC were on the PRC payroll.

In that sense, the TikTok deal itself may prove to be another illustration of Master Sun’s maxim about winning without fighting, i.e., achieving strategic advantage not through open confrontation, but by shaping the terrain, the rules, and the opponent’s choices in advance—and perhaps most importantly in this case…deception.

But the deal we got is the deal we have so let’s see what we actually have achieved (or how bad we got hosed this time). As I often say, it’s a damn good thing we never let another MTV build a business on our backs.

The Three Pillars of TikTok

TikTok USDS is the U.S.-domiciled parent holding company for TikTok’s American operations, created to comply with the divest-or-ban law. It is majority owned by U.S. investors, with ByteDance retaining a non-controlling minority stake (reported around 19.9%) and licensing core recommendation technology to the U.S. business. (Under U.S. GAAP, ownership of 20% or more carries a common rebuttable presumption of “significant influence,” which can trigger less favorable accounting and more scrutiny of the relationship. Staying below 20% helps keep the stake looking purely passive, which is kind of a joke considering ByteDance still owns the key asset. And we still have to ask whether ByteDance (or the CCP) has any special voting rights (a “golden share”), board control, dual-class stock, etc.)

The deal appears to rest on three pillars—and taken together, they point to something closer to an ouroboros than a divestment: the structure consumes itself, leaving ByteDance, and by extension the PRC, in a position that is materially different on paper but strikingly similar in practice.

Pillar One: ByteDance Keeps the Crown Jewel

The first and most important point is the simplest: ByteDance retains ownership of TikTok’s recommendation algorithm.

That algorithm is not an ancillary asset. It is TikTok’s product. Engagement, ad pricing, cultural reach, and political concern all flow from it. Selling TikTok without selling the algorithm is like selling a car without the engine and calling it a divestiture because the buyer controls the steering wheel.

Public reporting strongly suggests the solution was not a sale of the algorithm, but a license or controlled use arrangement. TikTok USDS may own U.S.-specific “tweaks”—content moderation parameters, weighting adjustments, compliance filters—but those sit on top of a core system ByteDance still owns and controls.

That distinction matters, because ownership determines who ultimately controls:

  • architectural changes,
  • major updates,
  • retraining methodology,
  • and long-term evolution of the system.

In other words, the cap table changed, but the switch did not necessarily move.

Pillar Two: IPO Optionality Without Immediate Disclosure

The second pillar is liquidity. ByteDance did not fight this battle simply to keep operating TikTok in the U.S.; it fought to preserve access to an exit in US financial markets.

The TikTok USDS structure clearly keeps open a path to an eventual IPO. Waiting a year or two is not a downside. There is a crowded IPO pipeline already—AI platforms, infrastructure plays, defense-adjacent tech—and time helps normalize the structure politically and operationally.

But here’s the catch: an IPO collapses ambiguity.

A public S-1 would have to disclose, in plain English:

  • who owns the algorithm,
  • whether TikTok USDS owns it or licenses it,
  • the material terms of any license,
  • and the risks associated with dependence on a foreign related party.

This is where old Obama-era China-listing tricks no longer work. Based on what I’ve read, TikTok USDS would likely be a U.S. issuer with a U.S.-inspectable auditor. ByteDance can’t lean on the old HFCAA/PCAOB opacity playbook, because HFCAA is about audit access—not about shielding a related-party licensor from scrutiny.

ByteDance surely knows this. Which is why the structure buys time, not relief from transparency. The IPO is possible—but only when the market is ready to price the risk that the politics are currently papering over.

Pillar Three: PRC Law as the Ultimate Escape Hatch

The third pillar is the quiet one, but it may be the most consequential: PRC law as an external constraint. As long as ByteDance owns the algorithm, PRC law is always waiting in the wings. Those laws include:

  • Export-control rules on recommendation algorithms.
  • Data security and cross-border transfer regimes.
  • National security and intelligence laws that impose duties on PRC companies and citizens.

Together, they form a universal answer to every hard question:

  • Why can’t the algorithm be sold? PRC export controls.
  • Why can’t certain technical details be disclosed? PRC data laws.
  • Why can’t ByteDance fully disengage? PRC legal obligations.

This is not hypothetical. It’s the same concern that animated the original TikTok controversy, just reframed through contracts instead of ownership.

So while TikTok USDS may be auditable, governed by a U.S. board, and compliant with U.S. operational rules, the moment oversight turns upstream—toward the algorithm, updates, or technical dependencies—PRC law reenters the picture.

The result is a U.S. company that is transparent at the edges and opaque at the core. My hunch is that this sovereign control risk is clearly spelled out in any license document and will get disclosed in an IPO.

Putting It Together: Divestment of Optics, Not Control

Taken together, the three pillars tell a consistent story:

  • ByteDance keeps the algorithm.
  • ByteDance gets paid and retains an exit.
  • PRC law remains available to constrain transfer, disclosure, or cooperation.
  • U.S. regulators oversee the wrapper, not the engine.

That does not mean ByteDance is in exactly the same legal position as before. Governance and ownership optics have changed. Some forms of U.S. oversight are real. But in terms of practical control leverage, ByteDance—and by extension Beijing—may be uncomfortably close to where they started.

The foreign control problem that launched the TikTok saga was never just about equity. It was about who controls the system that shapes attention, culture, and information flow. If that system remains owned upstream, the rest is scaffolding.

The Ouroboros Moment

This is why Congress is likely to be furious once the implications sink in.

The story began with concerns about PRC control.
It moved through years of negotiation and political theater.
It ends with an “approved structure” that may leave PRC leverage intact—just expressed through licenses, contracts, and sovereign law rather than a majority stake.

The divestment eats its own tail.

Or put more bluntly: the sale may have changed the paperwork, but it did not necessarily change who can say no when it matters most. And that’s control.

As we watch the People’s Liberation Army practicing its invasion of Taiwan, it’s not rocket science to ask how all this will look if the PRC invades Taiwan tomorrow and America comes to Taiwan’s defense. In a U.S.–PRC shooting war, TikTok USDS would likely face either a rapid U.S. distribution ban on national-security grounds (already blessed by SCOTUS), a forced clean-room severance from ByteDance’s algorithm and services, or an operational breakdown if PRC law or wartime measures disrupt the licensed technology the platform depends on.

The TikTok “sale” looks less like a divestiture of control than a divestiture of optics. ByteDance may have reduced its equity stake and ceded governance formalities, but if it retained ownership of the recommendation algorithm and the U.S. company remains dependent on ByteDance by license, then ByteDance—and by extension the CCP, through its legal leverage over ByteDance—can remain in a largely similar control position in practice.

TikTok USDS may change the cap table, but it doesn’t necessarily change the sovereign. As long as ByteDance owns the algorithm and PRC law can be invoked to restrict transfer, disclosure, or cooperation without CCP approval, the end state risks looking eerily familiar: a U.S.-branded wrapper around a system Beijing can still influence at the critical junctions. The whole saga starts with bitter complaints in Congress about “foreign control,” ends with “approved structure,” but largely lands right back where it began—an ouroboros of governance optics swallowing itself.

Surely I’m missing something.

The Paradox of Huang’s Rope

If the tech industry has a signature fallacy for the 2020s aside from David Sacks, it belongs to Jensen Huang. The CEO of Nvidia has perfected a circular, self-consuming logic so brazen that it deserves a name: The Paradox of Huang’s Rope. It is the argument that China is too dangerous an AI adversary for the United States to regulate artificial intelligence at home or control export of his Nvidia chips abroad—while insisting in the very next breath that the U.S. must allow him to keep selling China the advanced Nvidia chips that make China’s advanced AI capabilities possible. The justification destroys its own premise, like handing an adversary the rope to hang you and then pointing to the length of that rope as evidence that you must keep selling more, perhaps to ensure a more “humane” hanging. I didn’t think it was possible to beat “sharing is caring” for utter fallacious bollocks.

The Paradox of Huang’s Rope works like this: First, hype China as an existential AI competitor. Second, declare that any regulatory guardrails—whether they concern training data, safety, export controls, or energy consumption—will cause America to “fall behind.” Third, invoke national security to insist that the U.S. government must not interfere with the breakneck deployment of AI systems across the economy. And finally, quietly lobby for carveouts that allow Nvidia to continue selling ever more powerful chips to the same Chinese entities supposedly creating the danger that justifies deregulation.

It is a master class in circularity: “China is dangerous because of AI → therefore we can’t regulate AI → therefore we must sell China more AI chips → therefore China is even more dangerous → therefore we must regulate even less and export even more to China.” At no point does the loop allow for the possibility that reducing the United States’ role as China’s primary AI hardware supplier might actually reduce the underlying threat. Instead, the logic insists that the only unacceptable risk is the prospect of Nvidia making slightly less money.

This is not hypothetical. While Washington debates export controls, Huang has publicly argued that restrictions on chip sales to China could “damage American technology leadership”—a claim that conflates Nvidia’s quarterly earnings with the national interest. Meanwhile, U.S. intelligence assessments warn that China is building fully autonomous weapons systems, and European analysts caution that Western-supplied chips are appearing in PLA research laboratories. Yet the policy prescription from Nvidia’s corner remains the same: no constraints on the technology, no accountability for the supply chain, and no acknowledgment that the market incentives involved have nothing to do with keeping Americans safe. And anyone who criticizes the authoritarian state run by the Chinese Communist Party is a “China Hawk,” which Huang says is a “badge of shame” and “unpatriotic” because protecting America from China by cutting off chip exports “destroys the American Dream.” Say what?

The Paradox of Huang’s Rope mirrors other Cold War–style fallacies, in which companies invoke a foreign threat to justify deregulation while quietly accelerating that threat through their own commercial activity. But in the AI context, the stakes are higher. AI is not just another consumer technology; its deployment shapes military posture, labor markets, information ecosystems, and national infrastructure. A strategic environment in which U.S. corporations both enable and monetize an adversary’s technological capabilities is one that demands more regulation, not less.

Naming the fallacy matters because it exposes the intellectual sleight of hand. Once the circularity is visible, the argument collapses. The United States does not strengthen its position by feeding the very capabilities it claims to fear. And it certainly does not safeguard national security by allowing one company’s commercial ambitions to dictate the boundaries of public policy. The Paradox of Huang’s Rope should not guide American AI strategy. It should serve as a warning of how quickly national priorities can be twisted into a justification for private profit.

You Can’t Prosecute Smuggling NVIDIA Chips to the CCP and Authorize Sales to the CCP at the Same Time

The Trump administration is attempting an impossible contradiction: selling advanced NVIDIA AI chips to China while the Department of Justice prosecutes criminal cases for smuggling the exact same chips into China.

According to the DOJ:

“Operation Gatekeeper has exposed a sophisticated smuggling network that threatens our Nation’s security by funneling cutting-edge AI technology to those who would use it against American interests,” said Ganjei. “These chips are the building blocks of AI superiority and are integral to modern military applications. The country that controls these chips will control AI technology; the country that controls AI technology will control the future. The Southern District of Texas will aggressively prosecute anyone who attempts to compromise America’s technological edge.”

That divergence from the prosecutors is not industrial policy. It is incoherence. But mostly it’s just bad advice, likely coming from White House AI Czar David Sacks, Mr. Trump’s South African AI policy advisor who may have a hard time getting a security clearance in the first place.

On one hand, DOJ is rightly bringing cases over the illegal diversion of restricted AI chips—recognizing that these processors are strategic technologies with direct national-security implications. On the other hand, the White House is signaling that access to those same chips is negotiable, subject to licensing workarounds, regulatory carve-outs, or political discretion.

You cannot treat a technology as contraband in federal court and as a commercial export in the West Wing.

Pick one.

AI Chips Are Not Consumer Electronics

The United States does not sell China F-35 fighter jets. We do not sell Patriot missile systems. We do not sell advanced avionics platforms and then act surprised when they show up embedded in military infrastructure. High-end AI accelerators are in the same category.

NVIDIA’s most advanced chips are not merely commercial products. They are general-purpose intelligence infrastructure, the raw material of what China calls military-civil fusion. They train surveillance systems, military logistics platforms, cyber-offensive tools, and models capable of operating autonomous weapons and battlefield decision-making pipelines with no human in the loop.

If DOJ treats the smuggling of these chips into China as a serious federal crime—and it should—there is no coherent justification for authorizing their sale through executive discretion. Except, of course, money, or in Mr. Sacks’s case, more money.

Fully Autonomous Weapons—and Selling the Rope

China does not need U.S. chips to build consumer AI. It wants them for military acceleration. Advanced NVIDIA AI chips are not just about chatbots or recommendation engines. They are the backbone of fully autonomous weapons systems—autonomous targeting, swarm coordination, battlefield logistics, and decision-support models that compress the kill chain beyond meaningful human control.

There is an old warning attributed to Vladimir Lenin—that capitalists would sell the rope by which they would later be hanged. Apocryphal or not, it captures this moment with uncomfortable precision.

If NVIDIA chips are powerful enough to underpin autonomous weapons systems for allied militaries, they are powerful enough to underpin autonomous weapons systems for adversaries like China. Trump’s own National Security Strategy statement clearly says previous U.S. elites made “mistaken” assumptions about China, such as the famous one that letting China into the WTO would integrate Beijing into the rules-based international order. Trump tells us that instead China “got rich and powerful” and used this against us, and goes on to describe the CCP’s well-known predatory subsidies, unfair trade, IP theft, industrial espionage, supply-chain leverage, and fentanyl precursor exports as threats the U.S. must “end.” By selling them the most advanced AI chips?

Western governments and investors simultaneously back domestic autonomous-weapons firms—such as Europe-based Helsing, supported by Spotify CEO Daniel Ek—explicitly building AI-enabled munitions for allied defense. That makes exporting equivalent enabling infrastructure to a strategic competitor indefensible.

The AI Moratorium Makes This Worse, Not Better

This contradiction unfolds alongside a proposed federal AI moratorium executive order originating with Mr. Sacks and Adam Thierer of Google’s R Street Institute that would preempt state-level AI protections.
States are told AI is too consequential for local regulation, yet the federal government is prepared to license exports of AI’s core infrastructure abroad.

If AI is too dangerous for states to regulate, it is too dangerous to export. Preemption at home combined with permissiveness abroad is not leadership. It is capture.

This Is What Policy Capture Looks Like

The common thread is not national security. It is Silicon Valley access. David Sacks and others in the AI–VC orbit argue that AI regulation threatens U.S. competitiveness while remaining silent on where the chips go and how they are used.

When DOJ prosecutes smugglers while the White House authorizes exports, the public is entitled to ask whose interests are actually being served. Advisory roles that blur public power and private investment cannot coexist with credible national-security policymaking, particularly when the advisor may not even be able to get a US national security clearance unless the President blesses it.

A Line Has to Be Drawn

If a technology is so sensitive that its unauthorized transfer justifies prosecution, its authorized transfer should be prohibited absent extraordinary national interest. AI accelerators meet that test.

Until the administration can articulate a coherent justification for exporting these capabilities to China, the answer should be no. Not licensed. Not delayed. Not cosmetically restricted.

And if that position conflicts with Silicon Valley advisers who view this as a growth opportunity, they should return to where they belong. The fact that the US is getting 25% of the deal (which I bet never finds its way into America’s general account) means nothing except confirming Lenin’s joke about selling the rope to hang ourselves, you know, kind of like TikTok.

David Sacks should go back to Silicon Valley.

This is not venture capital. This is our national security and he’s selling it like rope.

Good News for TikTok Users: The PRC Definitely Isn’t Interested in Your Data (Just the Global Internet Backbone, Apparently)

If you’re a TikTok user who has ever worried, even a tiny bit, that the People’s Republic of China might have an interest in your behavior, preferences, movements, or social graph, take heart. A newly released Joint Cybersecurity Advisory from the United States, Canada, the U.K., Australia, New Zealand, and a long list of allied intelligence agencies proves beyond any shadow of a doubt that the PRC is far too busy compromising the world’s telecommunications infrastructure to care about your TikTok “For You Page.”

Nothing to see here. Scroll on.

For those who like their reassurance with a side of evidence, the advisory—titled “Countering Chinese State Actors’ Compromise of Networks Worldwide to Feed Global Espionage System”—is one of the clearest, broadest warnings ever issued about a Chinese state-sponsored intrusion campaign. And, because the agencies involved designated it TLP:CLEAR—not sensitive, and shareable publicly without restriction—you can read it yourself.

The World’s Telecom Backbones: Now Featuring Uninvited Guests

The intel agency advisory describes a “Typhoon class” global espionage ecosystem run through persistent compromises of backbone routers, provider-edge and customer-edge routers, ISP and telecom infrastructure, transportation networks, lodging and hospitality systems, and government and military-adjacent networks.

This is not hypothetical. The advisory includes extremely detailed penetration chains: attackers exploit widely known “Common Vulnerabilities and Exposures” (CVEs) in routers, firewalls, VPNs, and management interfaces, then establish persistence through configuration modifications, traffic mirroring, injected services, and encrypted tunnels. This lets them monitor, redirect, copy, or exfiltrate traffic across entire service regions.
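
To give a flavor of what network defenders are told to hunt for, here is a minimal, illustrative sketch (not taken from the advisory itself) that scans an exported IOS-style router configuration for the kinds of persistence artifacts described above: traffic-mirroring sessions, unexpected tunnel interfaces, and newly added privileged accounts. The patterns and labels are assumptions for illustration; real hunting should follow the advisory’s own indicators and vendor guidance:

    # Illustrative sketch: flag config lines that resemble the persistence
    # techniques described in the advisory. Patterns are simplified examples.
    import re
    import sys

    SUSPECT_PATTERNS = {
        "traffic mirroring":  re.compile(r"^monitor session \d+", re.M),
        "tunnel interface":   re.compile(r"^interface Tunnel\d+", re.M),
        "privileged account": re.compile(r"^username \S+ privilege 15", re.M),
    }

    def scan_config(text):
        findings = []
        for label, pattern in SUSPECT_PATTERNS.items():
            for match in pattern.finditer(text):
                findings.append(f"{label}: {match.group(0)}")
        return findings

    if __name__ == "__main__":
        config = open(sys.argv[1]).read() if len(sys.argv) > 1 else ""
        for finding in scan_config(config):
            print(finding)

Anything flagged this way still has to be compared against a known-good baseline, since legitimate configurations use tunnels and monitor sessions too; the advisory’s deeper point is that the adversary’s changes are designed to blend into exactly that noise.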

Put plainly: if your internet service provider has a heartbeat and publicly routable equipment, the attackers have probably knocked on the door. And for a depressingly large number of large-scale network operators, they got in.

This is classical intelligence tradecraft. The PRC’s immediate goal isn’t ransomware. It’s not crypto mining. It’s not vandalism. It’s good old-fashioned espionage: long-term access, silent monitoring, and selective exploitation.

What They’re Collecting: Clues About Intent

The advisory makes the overall aim explicit: to give PRC intelligence the ability to identify and track targets’ communications and movements worldwide.

That includes metadata on calls, enterprise-internal communications, hotel and travel itineraries, traffic patterns for government and defense systems, and persistent vantage points on global networks.

This is signals intelligence (SIGINT), not smash-and-grab.

And importantly: this kind of operation requires enormous intelligence-analytic processing, not a general-purpose “LLM training dataset.” These are targeted, high-value accesses, not indiscriminate web scrapes. The attackers are going after specific information—strategic, diplomatic, military, infrastructure, and political—not broad consumer content.

So no, this advisory is not about “AI training.” It is about access, exfiltration, and situational awareness across vital global communications arteries.

Does This Tell Us Anything About TikTok?

Officially, no. The advisory never mentions TikTok, ByteDance, or consumer social media apps. It is focused squarely on infrastructure.

But from a strategic-intent standpoint, it absolutely matters. Because when you combine:

1. Global telecom-layer access
2. Persistent long-term SIGINT footholds
3. The PRC’s demonstrated appetite for foreign behavioral data
4. The existence of the richest behavioral dataset on Earth—TikTok’s U.S. user base

—you get a coherent picture of the intelligence ecosystem the Chinese Communist Party is building on…I guess you’d have to say “the world”.

If a nation-state is willing to invest years compromising backbone routers, it is not a stretch to imagine what they could do with a mobile app installed on the phones of, oh, say, 170 million Americans (to pick a random number) that conveniently collects social graphs, location traces, contact patterns, engagement preferences, and political and commercial interests, all visible in the PRC.

But again, don’t worry. The advisory suggests only that Chinese state actors have global access to the infrastructure over which your TikTok traffic travels—not that they would dare take an interest in the app itself. And besides, the TikTok executives swore under oath to the U.S. Congress that it didn’t happen that way so it must be true.

After all, why would a government running a worldwide intrusion program want access to the largest behavioral-data sensor array outside the NSA?

If you still believe the PRC is nowhere near TikTok’s data, then this advisory will reassure you: it’s just a gentle reminder that Chinese state actors are burrowed into global telecom backbones, hotel networks, transportation systems, and military-adjacent infrastructure—pure souls simply striving to make sure your “For You” page loads quickly.

AI’s Manhattan Project Rhetoric, Clearance-Free Reality

Every time a tech CEO compares frontier AI to the Manhattan Project, take a breath—and remember what that analogy actually means. Master spycatcher James Jesus Angleton (a.k.a. Matt Damon in The Good Shepherd) is rolling in his grave. And like most elevator-pitch talking points, the analogy starts to fall apart on inspection.

The Manhattan Project wasn’t just a moonshot scientific collaboration. It was the most tightly controlled, security-obsessed R&D operation in American history. Every physicist, engineer, and janitor involved had a federal security clearance. Facilities were locked down under the military command of General Leslie Groves. Communications were monitored. Access was compartmentalized. And still—still—the Soviets penetrated it. See Klaus Fuchs. Let’s understand just how secret the Manhattan Project was: General Curtis LeMay had no idea it was happening until he was asked to set up facilities for the Enola Gay at his bomber base on Tinian a few months before the first atomic bomb was dropped. If you want the details of any frontier lab, just pick up the newspaper. Not nearly the same thing. There were no chatbots involved, and there were no Special Government Employees without security clearances.

Oppie Sacks

So when today’s AI executives name-drop Oppenheimer and invoke the gravity of dual-use technologies, what exactly are they suggesting? That we’re building world-altering capabilities without any of the safeguards that even the AI Whiz Kids implicitly admit are historically necessary every time they put the Manhattan Project talking point in the pitch deck?

These frontier labs aren’t locked down. They’re open-plan. They’re not vetting personnel. They’re recruiting from Discord servers. They’re not subject to classified environments. They’re training military-civilian dual-use models on consumer cloud platforms. And when questioned, they invoke private sector privilege and push back against any suggestion of state or federal regulation.  And here’s a newsflash—requiring a security clearance for scientific work in the vital national interest is not regulation.  (Neither is copyright but that’s another story.)

Meanwhile, they’re angling for access to Department of Energy nuclear real estate, government compute subsidies, and preferred status in export policy—all under the justification of “national security” because, you know, China.  They want the symbolism of the Manhattan Project without the substance. They want to be seen as indispensable without being held accountable.

The truth is that AI is dual-use. It can power logistics and surveillance, language learning and warfare. That’s not theoretical—it’s already happening. China openly treats AI as part of its military-civil fusion strategy. Russia has targeted U.S. systems with information-warfare bots. And our labs? They’re scraping the open internet and assuming the training data hasn’t been poisoned by the massive misinformation campaigns that are routine on Wikipedia, Reddit and X.

If even the Manhattan Project—run under maximum secrecy—was infiltrated by Soviet spies, what are the chances that today’s AI labs, operating in the wide open, are immune? Wouldn’t a good spycatcher like Angleton assume these wunderkinds have already been penetrated?

We have no standard vetting for employees. No security clearances. No model release controls. No audit trail for pretraining data integrity. And no clear protocol for foreign access to model weights, inference APIs, or sensitive safety infrastructure. It’s not a matter of if. It’s a matter of when—or more likely, a matter of already.

Remember: nobody got rich working on the Manhattan Project. That’s another big difference. These guys are in it for the money, make no mistake.

So when you hear the Manhattan Project invoked again, ask the follow-up question: Where’s the security clearance?  Where’s the classification?  Where’s the real protection?  Who’s playing the role of Klaus Fuchs?

Because if AI is our new Manhattan Project, then running it without security is more than hypocrisy. It’s incompetence at scale.