From Plutonium to Prompt Engineering: Big Tech’s Land Grab at America’s Nuclear Sites–and Who’s Paying for It?

In a twist of post–Cold War irony, the same federal sites that once forged the isotopes of nuclear deterrence are now poised to fuel the arms race of artificial intelligence under the leadership of Special Government Employee and Silicon Valley Viceroy David Sacks. Under a new Department of Energy (DOE) initiative, 16 legacy nuclear and lab sites — including Savannah River, Idaho National Lab, and Oak Ridge, Tennessee — are being opened to private companies to host massive AI data centers. That’s right: Tennessee, where David Sacks is riding roughshod over the ELVIS Act.

But as this techno-industrial alliance gathers steam, one question looms large: Who benefits — and how will the American public be compensated for leasing its nuclear commons to the world’s most powerful corporations? Spoiler alert: We won’t.

A New Model, But Not the Manhattan Project

This program is being billed in headlines as a “new Manhattan Project for AI.” But that comparison falls apart quickly. The original Manhattan Project was:
– Owned by the government
– Staffed by public scientists
– Built for collective defense

Today’s AI infrastructure effort is:
– Privately controlled
– Driven by monopolies and venture capital
– Structured to avoid transparency and public input
– Built on free leases of public land, with private nuclear reactors

Call it the Manhattan Project in reverse — not national defense, but national defense capture.

The Art of the Deal: Who Gets What?

What Big Tech Is Getting

– Access to federal land already zoned, secured, and wired
– Exemption from state and local permitting
– Bypass of grid congestion via nuclear-ready substations
– DOE’s help fast-tracking small modular reactors (SMRs)
– Potential sovereign AI training enclaves, shielded from export controls and oversight

And all of it is being made available to the private companies known as the “frontier labs”: Microsoft, Oracle, Amazon, OpenAI, Anthropic, xAI — the very firms at the center of the AI race.

What the Taxpayer Gets (Maybe)

Despite this extraordinary access, almost nothing is disclosed about how the public is compensated. No known revenue-sharing models. No guaranteed public compute access. No equity. No royalties.

– Land lease payments? Not disclosed. Probably none.
– Local tax revenue? Minimal (federal lands are exempt).
– Infrastructure benefit sharing? Unclear or limited.

It’s all being negotiated quietly, under vague promises of “national competitiveness.”

Why AI Labs Want DOE Sites

Frontier labs like OpenAI and Anthropic — and their cloud sponsors — need:
– Gigawatts of energy
– Secure compute environments
– Freedom from export rules and Freedom of Information Act requests
– Permitting shortcuts and national branding

The DOE sites offer all of that — plus built-in federal credibility. The same labs currently arguing in court that their training practices are “fair use” now claim they are defenders of democracy training AI on taxpayer-built land.

This Isn’t the Manhattan Project — It’s the Extraction Economy in a Lab Coat

The tech industry loves to invoke patriotism when it’s convenient — especially when demanding access to federal land, nuclear infrastructure, or diplomatic cover from the EU’s AI Act. But let’s be clear:

This isn’t the Manhattan Project. Or rather we should hope it isn’t because that one didn’t end well and still hasn’t.
It’s not public service.
It’s Big Tech lying about fair use, wrapped in an American flag — and for all we know, it might be the first time David Sacks ever saw one.

When companies like OpenAI and Microsoft claim they’re defending democracy while building proprietary systems on DOE nuclear land, we’re not just being gaslit — we’re being looted.

If the AI revolution is built on nationalizing risk and privatizing power, it’s time to ask whose country this still is — and who gets to turn off the lights.

Beyond Standard Oil: How the AI Action Plan Made America a Command Economy for Big Tech That You Will Pay For

When the White House requested public comments earlier this year on how the federal government should approach artificial intelligence, thousands of Americans—ranging from scientists to artists, labor leaders to civil liberties advocates—responded with detailed recommendations. Yet when America’s AI Action Plan was released today, it became immediately clear that those voices were largely ignored. The plan reads less like a response to public input and more like a pre-written blueprint drafted in collaboration with the very corporations it benefits. The priorities, language, and deregulatory thrust suggest that the real consultations happened behind closed doors—with Big Tech executives, not the American people.

In other words, business as usual.

By any historical measure—Standard Oil, AT&T, or even the Cold War military-industrial complex—the Trump Administration’s “America’s AI Action Plan” represents a radical leap toward a command economy built for and by Big Tech. Only this time, there are no rate regulations, no antitrust checks, and no public obligations—just streamlined subsidies, deregulation, and federally orchestrated dominance by a handful of private AI firms.

“Frontier Labs” as National Champions

The plan doesn’t pretend to be neutral. It picks winners—loudly. Companies like OpenAI, Anthropic, Meta, Microsoft, and Google are effectively crowned as “national champions,” entrusted with developing the frontier of artificial intelligence on behalf of the American state.

– The National AI Research Resource (NAIRR) and National Science Foundation partnerships funnel taxpayer-funded compute and talent into these firms.
– Federal procurement standards now require models that align with “American values,” but only as interpreted by government-aligned vendors.
– These companies will receive priority access to compute in a national emergency, hard-wiring them into the national security apparatus.
– Meanwhile, so-called “open” models will be encouraged in name only—no requirement for training data transparency, licensing, or reproducibility.

This is not a free market. This is national champion industrial policy—without the regulation or public equity ownership that historically came with it.

Infrastructure for Them, Not Us

The Action Plan reads like a wishlist from Silicon Valley’s executive suites:

– Federal lands are being opened up for AI data centers and energy infrastructure.
– Environmental and permitting laws are gutted to accelerate construction of facilities for private use.
– A national electrical grid expansion is proposed—not to serve homes and public transportation, but to power hyperscaler GPUs for model training.
– There’s no mention of public access, community benefit, or rural deployment. This is infrastructure built at public expense for private use.

Even during the era of Ma Bell, the public got universal service and price caps. Here? The public is asked to subsidize the buildout and then stand aside.

Deregulation for the Few, Discipline for the Rest

The Plan explicitly orders:
– Rescission of Biden-era safety and equity requirements.
– Reviews of FTC investigations to shield AI firms from liability.
– Withholding of federal AI funding from states that attempt to regulate the technology for safety, labor, or civil rights purposes.

Meanwhile, these same companies are expected to supply the military, detect cyberattacks, run cloud services for federal agencies, and set speech norms in government systems.

The result? An unregulated cartel tasked with executing state functions.

More Extreme Than Standard Oil or AT&T

Let’s be clear: Standard Oil was broken up. AT&T had to offer regulated universal service. Lockheed, Raytheon, and the Cold War defense contractors were overseen by procurement auditors and GAO enforcement.

This new AI economy is more privatized than any prior American industrial model—yet more dependent on the federal government than ever before. It’s an inversion of free market principles wrapped in American flags and GPU clusters.

Welcome to the Command Economy—For Tech Oligarchs

There’s a word for this: command economy. But instead of bureaucrats in Soviet ministries, we now have a handful of unelected CEOs directing infrastructure, energy, science, education, national security, and labor policy—all through cozy relationships with federal agencies.

If we’re going to nationalize AI, let’s do it honestly—with public governance, democratic accountability, and shared benefit. But this halfway privatized, fully subsidized, and wholly unaccountable structure isn’t capitalism. It’s capture.

Deduplication and Discovery: The Smoking Gun in the Machine

WINSTON

“Wipe up all those little pieces of brains and skull”

From Pulp Fiction, screenplay by Quentin Tarantino and Roger Avary

Deduplication—the process of removing identical or near-identical content from AI training data—is a critical yet often overlooked indicator that AI platforms actively monitor and curate their training sets. It is exactly the kind of process one would expect given the “scrape, ready, aim” business practices of AI platforms that have ready access to large amounts of fairly high-quality data from users of other products placed into commerce by business affiliates or confederates of those platforms.

For example, Google Gemini could have access to Gmail, YouTube, at least “publicly available” Google Docs, Google Translate, or Google for Education—and then, of course, one of the great scams of all time, Google Books. Microsoft uses Bing searches, MSN browsing, the consumer Copilot experience, and ad interactions. Amazon uses Alexa prompts, Facebook uses “public” posts, and so on.

This kind of hoovering up of indiscriminate amounts of “data” in the form of your baby pictures posted on Facebook and your user-generated content on YouTube is bound to produce duplicates. After all, how many users have posted their favorite Billie Eilish or Taylor Swift music video? AI doesn’t need 10,000 copies of “Shake It Off”; the platforms probably just need the official video. Enter deduplication—which by definition means the platform knows what it has scraped and also knows what it wants to get rid of.

“Get rid of” is a relative concept. In many systems—particularly in storage environments like backup servers or object stores—deduplication means keeping only one physical copy of a file. Any other instances of that data don’t get stored again; instead, they’re represented by pointers to the original copy. This approach, known as inline deduplication, happens in real time and minimizes storage waste without actually deleting anything of functional value. It requires knowing what you have, knowing you have more than one version of the same thing, and being able to tell the system where to look to find the “original” copy without disturbing the process and burning compute inefficiently.
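The pointer mechanics described above can be sketched in a few lines. This is a toy illustration of inline deduplication, not any platform’s actual storage code; the class and method names are invented for the example:

```python
import hashlib

class DedupStore:
    """Toy inline-deduplicating store: one physical copy per unique
    piece of content, with every logical name pointing at that copy."""

    def __init__(self):
        self._blobs = {}     # content hash -> the single stored copy
        self._pointers = {}  # logical name -> content hash

    def put(self, name: str, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        # Store the bytes only if this content has never been seen before.
        if digest not in self._blobs:
            self._blobs[digest] = data
        # Duplicate uploads become mere pointers to the original copy.
        self._pointers[name] = digest
        return digest

    def get(self, name: str) -> bytes:
        return self._blobs[self._pointers[name]]

    def physical_copies(self) -> int:
        return len(self._blobs)
```

Note what even this toy version requires: a content fingerprint for everything ingested, and a lookup table saying where the “original” lives — which is precisely the recordkeeping at issue.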

In other cases, such as post-process deduplication, the system stores data initially, then later scans for and eliminates redundancies. Again, the AI platform knows there are two or more versions of the same thing, say the book Being and Nothingness, knows where to find the copies and has been trained to keep only one version. Even here, the duplicates may not be permanently erased—they might be archived, versioned, or logged for auditing, compliance, or reconstruction purposes.

In AI training contexts, deduplication usually means removing redundant examples from the training set to avoid copyright risk. The duplicate content may be discarded from the training pipeline but often isn’t destroyed. Instead, AI companies may retain it in a separate filtered corpus or keep hashed fingerprints to ensure future models don’t retrain on the same material unknowingly.
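That fingerprint-and-retain pattern is simple enough to sketch. This is hypothetical code, not any lab’s actual pipeline; the function name and return values are invented for illustration:

```python
import hashlib

def dedup_training_set(documents, seen_hashes=None):
    """Exact-match deduplication for a training pipeline.

    Returns (kept, filtered, seen_hashes): the unique documents, the
    duplicates set aside rather than destroyed, and the fingerprint set
    that lets a future run skip the same material unknowingly."""
    seen_hashes = set(seen_hashes or ())
    kept, filtered = [], []
    for doc in documents:
        fingerprint = hashlib.sha256(doc.encode("utf-8")).hexdigest()
        if fingerprint in seen_hashes:
            # Removed from the pipeline, but retained in a side corpus.
            filtered.append(doc)
        else:
            seen_hashes.add(fingerprint)
            kept.append(doc)
    return kept, filtered, seen_hashes
```

The point is the third return value: the fingerprint set survives the run, which is exactly how a platform ensures future models don’t retrain on the same material — and exactly the kind of artifact a plaintiff might ask about.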

So they know what they have, and likely know where it came from. They just don’t want to tell any plaintiffs.

Ultimately, deduplication is less about destruction and more about optimization. It’s a way to reduce noise, save resources, and improve performance—while still allowing systems to track, reference, or even rehydrate the original data if needed.

Its existence directly undermines claims that companies are unaware of which copyrighted works were ingested. Indeed, it only makes sense that one of the hidden consequences of the indiscriminate scraping that underpins large-scale AI training is the proliferation of duplicated data. Web crawlers ingest everything they can access—news articles republished across syndicates, forum posts echoed in aggregation sites, Wikipedia mirrors, boilerplate license terms, spammy SEO farms repeating the same language over and over. Without any filtering, this avalanche of redundant content floods the training pipeline.

This is where deduplication becomes not just useful, but essential. It’s the cleanup crew after a massive data land grab. The more messy and indiscriminate the scraping, the more aggressively the model must filter for quality, relevance, and uniqueness to avoid training inefficiencies or—worse—model behaviors that are skewed by repetition. If a model sees the same phrase or opinion thousands of times, it might assume it’s authoritative or universally accepted, even if it’s just a meme bouncing around low-quality content farms.

Deduplication is sort of the Winston Wolf of AI. And if the cleaner shows up, somebody had to order the cleanup. It is a direct response to the excesses of indiscriminate scraping. It’s both a technical fix and a quiet admission that the underlying data collection strategy is, by design, uncontrolled. But while the scraping may be uncontrolled—designed to grab copies of as much of your data as they can lay hands on, even by cleverly changing their terms-of-use boilerplate so they can do all this under the effluvia of legality—they send in the cleaner to take care of the crime scene.

So to summarize: To deduplicate, platforms must identify content-level matches (e.g., multiple copies of Being and Nothingness by Jean-Paul Sartre). This process requires tools that compare, fingerprint, or embed full documents—meaning the content is readable and classifiable–and, oh, yes, discoverable.

Platforms may choose the ‘cleanest’ copy to keep, showing knowledge and active decision-making about which version of a copyrighted work is retained. And–big finish–removing duplicates only makes sense if operators know which datasets they scraped and what those datasets contain.

Drilling down on a platform’s deduplication tools and practices may prove up knowledge and intent to a precise degree—contradicting arguments of plausible deniability in litigation. “Johnny ate the cookies” isn’t going to fly. There’s a market-clearing level of recordkeeping necessary for deduping to work at all, so it’s likely that there are internal deduplication logs or tooling pipelines that are discoverable.

When AI platforms object to discovery about deduplication, plaintiffs can often overcome those objections by narrowing their focus. Rather than requesting broad details about how a model deduplicates its entire training set, plaintiffs should ask a simple, specific question: Were any of these known works—identified by title or author—deduplicated or excluded from training?

This approach avoids objections about overbreadth or burden. It reframes discovery as a factual inquiry, not a technical deep dive. If the platform claims the data was not retained, plaintiffs can ask for existing artifacts—like hash filters, logs, or manifests—or seek a sworn statement explaining the loss and when it occurred. That, in turn, opens the door to potential spoliation arguments.

If trade secrets are cited, plaintiffs can propose a protective order, limiting access to outside counsel or experts like we’ve done 100,000 times before in other cases. And if the defendant claims “duplicate” is too vague, plaintiffs can define it functionally—as content that’s identical or substantially similar, by hash, tokens, or vectors.
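That functional definition is straightforward to operationalize. A minimal sketch, assuming exact match by hash and “substantially similar” approximated by token-set overlap (the 0.8 threshold is an arbitrary choice for the example, not a legal standard):

```python
import hashlib

def exact_duplicate(a: str, b: str) -> bool:
    """'Identical' in the functional sense: same content hash."""
    def h(s: str) -> str:
        return hashlib.sha256(s.encode("utf-8")).hexdigest()
    return h(a) == h(b)

def token_jaccard(a: str, b: str) -> float:
    """'Substantially similar' proxy: Jaccard overlap of token sets."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if (ta | tb) else 1.0

def functional_duplicate(a: str, b: str, threshold: float = 0.8) -> bool:
    return exact_duplicate(a, b) or token_jaccard(a, b) >= threshold
```

Real pipelines use more sophisticated fingerprints (shingles, MinHash, embeddings), but the structure is the same: a definition a court can apply without a technical deep dive.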

Most importantly, deduplication is relevant. If a platform identified a plaintiff’s work and trained on it anyway, that speaks to volitional use, copying, and lack of care—key issues in copyright and fair use analysis. And if they lied about it, particularly to the court—Helloooooo Harper & Row. Discovery requests that are focused, tailored, and anchored in specific works stand a far better chance of surviving objections and yielding meaningful evidence.

David Sacks Is Learning That the States Still Matter

For a moment, it looked like the tech world’s powerbrokers had pulled it off. Buried deep in a Republican infrastructure and tax package was a sleeper provision — the so-called AI moratorium — that would have blocked states from passing their own AI laws for up to a decade. It was an audacious move: centralize control over one of the most consequential technologies in history, bypass 50 state legislatures, and hand the reins to a small circle of federal agencies and especially to tech industry insiders.

But then it collapsed.

The Senate voted 99–1 to strike the moratorium. Governors rebelled. Attorneys general sounded the alarm. Artists, parents, workers, and privacy advocates from across the political spectrum said “no.” Even hardline conservatives like Ted Cruz eventually reversed course when it came down to the final vote. The message to Big Tech — and to the famous “Little Tech” — was clear: the states still matter, and America’s tech elite ignore that at their peril. (“Little Tech” is the latest rhetorical deflection promoted by Big Tech, aka propaganda.)

The old Google crowd pushed the moratorium — their fingerprints were obvious, having gotten fabulously rich off their two favorites: the DMCA farce and the Section 230 shakedown. But there’s increasing speculation that White House AI Czar and Silicon Valley Viceroy David Sacks, PayPal alum and vocal MAGA-world player, was calling the ball. If true, that makes this defeat even more revealing.

Sacks represents something of a new breed of power-hungry tech-right influencer — part of the emerging “Red Tech” movement that claims to reject woke capitalism and coastal elitism but still wants experts to shape national policy from Silicon Valley, a chapter straight out of Philip Dru: Administrator. Sacks is tied to figures like Peter Thiel, Elon Musk, and a growing network of Trump-aligned venture capitalists. But even that alignment couldn’t save the moratorium.

Why? Because the core problem wasn’t left vs. right. It was top vs. bottom.

In 1964, Ronald Reagan’s classic speech “A Time for Choosing” warned about “a little intellectual elite in a far-distant capitol” deciding what’s best for everyone else. That warning still rings true — except now the “capitol” might just be a server farm in Menlo Park or a podcast studio in LA.

The AI moratorium was an attempt to govern by preemption and fiat, not by consent. And the backlash wasn’t partisan. It came from red states and blue ones alike — places where elected leaders still think they have the right to protect their citizens from unregulated surveillance, deepfakes, data scraping, and economic disruption.

So yes, the defeat of the moratorium was a blow to Google’s strategy of soft-power dominance. But it was also a shot across the bow for David Sacks and the would-be masters of tech populism. You can’t have populism without the people.

If Sacks and his cohort want to play a long game in AI policy, they’ll have to do more than drop ideas into the policy laundry of think tank white papers and Beltway briefings. They’ll need to win public trust, respect state sovereignty, and remember that governing by sneaky safe harbors is no substitute for legitimacy.  

The moratorium failed because it presumed America could be governed like a tech startup — from the top, at speed, with no dissent. Turns out the country is still under the impression that it has something to say about how it is governed, especially by Big Tech.

Steve’s Not Here–Why AI Platforms Are Still Acting Like Pirate Bay

In 2006, I wrote “Why Not Sell MP3s?” — a simple question pointing to an industry in denial. The dominant listening format was the MP3 file, yet labels were still trying to sell CDs or hide digital files behind brittle DRM. It seems kind of incredible in retrospect, but believe me it happened. Many cycles were burned on that conversation. Fans had moved on. The business hadn’t.

Then came Steve Jobs.

At the launch of the iTunes Store — and I say this as someone who sat in the third row — Jobs gave one of the most brilliant product presentations I’ve ever seen. He didn’t bulldoze the industry. He waited for permission, but only after crafting an offer so compelling it was as if the labels should be paying him to get in. He brought artists on board first. He made it cool, tactile, intuitive. He made it inevitable.

That’s not what’s happening in AI.

Incantor: DRM for the Input Layer

Incantor is trying to be the clean-data solution for AI — a system that wraps content in enforceable rights metadata, licenses its use for training and inference, and tracks compliance. It’s DRM, yes — but applied to training inputs instead of music downloads.

It may be imperfect, but at least it acknowledges that rights exist.

What’s more troubling is the contrast between Incantor’s attempt to create structure and the behavior of the major AI platforms, which have taken a very different route.

AI Platforms = Pirate Bay in a Suit

Today’s generative AI platforms — the big ones — aren’t behaving like Apple. They’re behaving like The Pirate Bay with a pitch deck.

– They ingest anything they can crawl.
– They claim “public availability” as a legal shield.
– They ignore licensing unless forced by litigation or regulation.
– They posture as infrastructure, while vacuuming up the cultural labor of others.

These aren’t scrappy hackers. They’re trillion-dollar companies acting like scraping is a birthright. Where Jobs sat down with artists and made the economics work, the platforms today are doing everything they can to avoid having that conversation.

This isn’t just indifference — it’s design. The entire business model depends on skipping the licensing step and then retrofitting legal justifications later. They’re not building an ecosystem. They’re strip-mining someone else’s.

What Incantor Is — and Isn’t

Incantor isn’t Steve Jobs. It doesn’t control the hardware, the model, the platform, or the user experience. It can’t walk into the room and command the majors to listen with elegance. But what it is trying to do is reintroduce some form of accountability — to build a path for data that isn’t scraped, stolen, or in legal limbo.

That’s not an iTunes power move. It’s a cleanup job. And it won’t work unless the AI companies stop pretending they’re search engines and start acting like publishers, licensees, and creative partners.

What the MP3 Era Actually Taught Us

The MP3 era didn’t end because DRM won. It ended because someone found a way to make the business model and the user experience better — not just legal, but elegant. Jobs didn’t force the industry to change. He gave them a deal they couldn’t refuse.

Today, there’s no Steve Jobs. No artists on stage at AI conferences. No tactile beauty. Just cold infrastructure, vague promises, and a scramble to monetize other people’s work before the lawsuits catch up. Let’s face it–when it comes to Elon, Sam, or Zuck, would you buy a used Mac from that man?

If artists and AI platforms were in one of those old “I’m a Mac / I’m a PC” commercials, you wouldn’t need to be told which is which. One side is creative, curious, collaborative. The other is corporate, defensive, and vaguely annoyed that you even asked the question.

Until that changes, platforms like Incantor will struggle to matter — and the AI industry will continue to look less like iTunes, and more like Pirate Bay with an enterprise sales team.

The Delay’s The Thing: Anthropic Leapfrogs Its Own November Valuation Despite Litigation from Authors and Songwriters in the Heart of Darkness

If you’ve read Joseph Conrad’s Heart of Darkness, you’ll be familiar with the Congo Free State, a private colony of Belgian King Leopold II that is today largely the Democratic Republic of the Congo. When I say “private” I mean literally privately owned by his Leopoldness. Why would old King Leo be so interested in owning a private colony in Africa? Why for the money, of course. Leo had to move some pieces around the board and get other countries to allow him to get away with essentially “buying” the place, if “buying” is the right description.

So Leo held an international conference in Berlin to discuss the idea and get international buy-in, kind of like the World Economic Forum with worse food and no skiing. Rather than acknowledging his very for-profit intention to ravage the Congo for ivory (aka slaughtering elephants) and rubber (the grisly extraction of which was accomplished by uncompensated slave labor) with brutal treatment of all concerned, Leo convinced the assembled nations that his intentions were humanitarian and philanthropic. You know, don’t be evil. Just lie.

Of course, however much King Leopold may have foreshadowed our sociopathic overlords from Silicon Valley, it must be said that what Leo would really envy is not so much the money as what he could have done with AI himself had he only known. Oh well, he just had to make do with Kurtz.

Which brings us to AI in general and Anthropic in particular. Anthropic’s corporate slogan is equally humanitarian and philanthropic: “Anthropic is an AI research company that focuses on the safety and alignment of AI systems with human values.” Oh yes, all very jolly.

All very innocent and high-minded, until you get punched in the face (to coin a phrase). It turns out—quelle horreur—that Anthropic stands accused of massive copyright infringement rather than lauded for its humanitarianism. Even more shocking? The company’s valuation is going through the stratosphere! These innocents surely must be falsely accused! The VCs are voting with their bucks, and they wouldn’t put their shareholders’ money or their limiteds’ money on the line for a—RACKETEER INFLUENCED AND CORRUPT ORGANIZATION?!?

Not only have authors brought a class action against Anthropic—which is both Google’s stalking horse and cat’s paw, to mix a metaphor—but the songwriters and music publishers have sued the company as well. Led by Concord and Universal, the publishers have sued for largely the same reasons as the authors, but for their quite distinct copyrights.

So let’s understand the game being played here—as the Artist Rights Institute submitted in a comment to the UK Intellectual Property Office in the IPO’s current consultation on AI and copyright, the delay is the thing. And thanks to Anthropic, we can now put a price on the delay: on top of the $4,000,000,000 the company raised in November 2024, it has since raised another $3,500,000,000. This one company is valued at $61.5 billion, roughly half of the entire creative industries in the UK and roughly equal to the entire U.S. music industry. No wonder delay is their business model.

However antithetical, copyright and AI must be discussed together for a very specific reason: Artificial intelligence platforms operated by Google, Microsoft/OpenAI, Meta and the like have scraped and ingested works of authorship from baby pictures to Sir Paul McCartney as fast and as secretly as possible. And the AI platforms know that the longer they can delay accountability, the more of the world’s culture they will have devoured—or as they might say, the more data they will have ingested. Not to mention the billions in venture capital they will have raised, just like Anthropic. For the good of humanity, of course, just like old King Leo.

As the Hon. Alison Hume, MP recently told Parliament, this theft is massive and has already happened, another example of why any “opt out” scheme (as had been suggested by the UK government) has failed before it starts:

This week, I discovered that the subtitles from one of my episodes of New Tricks have been scraped and are being used to create learning materials for artificial intelligence.  Along with thousands of other films and television shows, my original work is being used by generative AI to write scripts which one day may replace versions produced by mere humans like me.

This is theft, and it’s happening on an industrial scale.  As the law stands, artificial intelligence companies don’t have to be transparent about what they are stealing.[1]

Any delay[2] in prosecuting AI platforms simply increases their de facto “text and data mining” safe harbor while they scrape ever more of world culture.  As Ms. Hume states, this massive “training” has transferred value to these data-hungry mechanical beasts to a degree that confounds human understanding of its industrial scale infringement.  This theft dwarfs even the Internet piracy that drove broadband penetration, Internet advertising and search platforms in the 1999-2010 period.  It must be said that for Big Tech, commerce and copyright are once again inherently linked for even greater profit.

As the Right Honourable Baroness Kidron said in her successful opposition to the UK Government’s AI legislation in the House of Lords:

The Government are doing this not because the current law does not protect intellectual property rights, nor because they do not understand the devastation it will cause, but because they are hooked on the delusion that the UK’s best interests and economic future align with those of Silicon Valley.[3]  

Baroness Kidron identifies a question of central importance that mankind is forced to consider by the sheer political brute force of the AI lobbying steamroller:  What if AI is another bubble like the Dot Com bubble?  AI is, to a large extent, a black box utterly lacking in transparency much less recordkeeping or performance metrics.  As Baroness Kidron suggests, governments and the people who elect them are making a very big bet that AI is not pursuing an ephemeral bubble like the last time.

Indeed, the AI hype has the earmarks of a bubble, just as the Dot Com bubble did.  Baroness Kidron also reminds us of these fallacious economic arguments surrounding AI:

The Prime Minister cited an IMF report that claimed that, if fully realised, the gains from AI could be worth up to an average of £47 billion to the UK each year over a decade. He did not say that the very same report suggested that unemployment would increase by 5.5% over the same period. This is a big number—a lot of jobs and a very significant cost to the taxpayer. Nor does that £47 billion account for the transfer of funds from one sector to another. The creative industries contribute £126 billion per year to the economy. I do not understand the excitement about £47 billion when you are giving up £126 billion.[4]  

As Hon. Chris Kane, MP said in Parliament,  the Government runs the risk of enabling a wealth transfer that itself is not producing new value but would make old King Leo feel right at home: 

Copyright protections are not a barrier to AI innovation and competition, but they are a safeguard for the work of an industry worth £125 billion per year, employing over two million people.  We can enable a world where much of this value  is transferred to a handful of big tech firms or we can enable a win-win situation for the creative industries and AI developers, one where they work together based on licensed relationships with remuneration and transparency at its heart.


[1] Paul Revoir, AI companies are committing ‘theft’ on an ‘industrial scale’, claims Labour MP – who has written for TV series including New Tricks, Daily Mail (Feb. 12, 2025) available at https://www.dailymail.co.uk/news/article-14391519/AI-companies-committing-theft-industrial-scale-claims-Labour-MP-wrote-TV-shows-including-New-Tricks.html

[2] See, e.g., Kerry Muzzey, [YouTube Delay Tactics with DMCA Notices], Twitter (Feb. 13, 2020) available at https://twitter.com/kerrymuzzey/status/1228128311181578240  (Film composer with Content ID account notes “I have a takedown pending against a heavily-monetized YouTube channel w/a music asset that’s been fine & in use for 7 yrs & 6 days. Suddenly today, in making this takedown, YT decides “there’s a problem w/my metadata on this piece.” There’s no problem w/my metadata tho. This is the exact same delay tactic they threw in my way every single time I applied takedowns against broadcast networks w/monetized YT channels….And I attached a copy of my copyright registration as proof that it’s just fine.”); Zoë Keating, [Content ID secret rules], Twitter (Feb. 6. 2020) available at https://twitter.com/zoecello/status/1225497449269284864  (Independent artist with Content ID account states “[YouTube’s Content ID] doesn’t find every video, or maybe it does but then it has selective, secret rules about what it ultimately claims for me.”).

[3] The Rt. Hon. Baroness Kidron, Speech regarding Data (Use and Access) Bill [HL] Amendment 44A, House of Lords (Jan. 28, 2025) available at https://hansard.parliament.uk/Lords%E2%80%8F/2025-01-28/debates/9BEB4E59-CAB1-4AD3-BF66-FE32173F971D/Data(UseAndAccess)Bill(HL)#contribution-9A4614F3-3860-4E8E-BA1E-53E932589CBF 

[4] Id.