AI, Soft Power, and the New Thucydides Trap

The White House’s latest AI framework reads like a familiar story dressed in new clothes: we must move fast, avoid “overregulation,” and ensure that the United States “wins” the AI race—because China.

That framing is not new. It is, in fact, a modern version of the Thucydides Trap: the idea that when a rising power threatens to displace an established one, conflict—economic, political, or otherwise—becomes more likely. But what is striking here is not the invocation of competition. It’s how narrowly that competition is defined.

The framework implicitly treats AI dominance as a function of compute, capital, and model scale. Build bigger models faster, feed them more data, and ensure that domestic firms face as few constraints as possible. In that telling, creators, rights, and consent become secondary considerations—at best friction, at worst obstacles.

But that is a profound misread of where U.S. advantage actually lies.

American leadership has never been just about scale. It has been about legitimacy—the ability to build systems that other countries, companies, and individuals trust enough to adopt. That is the essence of soft power. And soft power is not generated by extraction; it is generated by rules that are perceived as fair.

When U.S. policy signals that training on creative works without meaningful consent is acceptable—or even necessary to “win”—it risks trading long-term legitimacy for short-term acceleration. That is a dangerous bargain. It tells the world that American AI leadership is built not on innovation alone, but on the uncompensated appropriation of global cultural and informational resources.

Other jurisdictions are already responding. The EU is experimenting with transparency mandates. Rights holders globally are pushing for enforceable consent regimes. Even countries that want to encourage AI development are increasingly wary of frameworks that look like data extraction at scale without accountability.

This is where the Thucydides analogy breaks down—or at least becomes more complicated. The real risk is not simply that China catches up technologically. It is that the United States, in trying to outrun that possibility, undermines the normative foundations of its own leadership.

Soft power erosion is not dramatic. It doesn’t announce itself with a headline. It accumulates quietly: in trade negotiations, in regulatory divergence, in the willingness of other countries to align—or not align—with U.S. standards. Over time, that erosion can matter more than any benchmark score or model release.

There is another path. The United States could lead by insisting that AI development is compatible with consent, compensation, and provenance. It could treat creators not as inputs to be harvested, but as stakeholders in a system that depends on their work. It could build infrastructure—technical and legal—that makes those principles operational, not aspirational.

That approach may look slower in the short term. It may impose costs that competitors are willing to ignore. But it is also how durable leadership is built.

Because in the long run, the question is not just who builds the most powerful models. It is who builds systems that the rest of the world is willing to trust.

And that is a competition the United States cannot afford to lose.
