Missile Gap, Again: Big Tech’s Private Power vs. the Public Grid

If we let a hyped “AI gap” dictate land and energy policy, we’ll privatize essential infrastructure and socialize the fallout.

Every now and then, it’s important to focus on what our alleged partners in music distribution are up to, because the reality is they’re not record people—their real goal is getting their hands on the investment we’ve all made in helping compelling artists find and keep an audience. And when those same CEOs use the profits from our work to pivot to “defense tech” or “dual use” AI (civilian and military), we should hear what that euphemism really means: killing machines.

Daniel Ek is backing battlefield-AI ventures; Eric Schmidt has spent years bankrolling and lobbying for the militarization of AI while shaping the policies that green-light it. This is what happens when we get in business with people who don’t share our values: the capital, data, and social license harvested from culture gets recycled into systems built to find, fix, and finish human beings. As Bob Dylan put it in “Masters of War,” “You fasten the triggers for the others to fire.” These deals aren’t value-neutral—they launder credibility from art into combat. If that’s the future on offer, our first duty is to say so plainly—and refuse to be complicit.

The same AI outfits that for decades have refused to license, or begrudgingly licensed, the culture they ingest are now muscling into the hard stuff—power grids, water systems, and aquifers—wherever governments are desperate to win their investment. Think bespoke substations, “islanded” microgrids dedicated to single corporate users, priority interconnects, and high-volume water draws baked into “innovation” deals. It’s happening globally, but nowhere more aggressively than in the U.S., where policy and permitting are being bent toward AI-first infrastructure—thanks in no small part to Silicon Valley’s White House “AI viceroy,” David Sacks. If we don’t demand accountability at the point of data and at the point of energy and water, we’ll wake up to AI that not only steals our work but also commandeers our utilities. Just as Senator Wyden accomplished for Oregon.

These aren’t pop-up server farms; they’re decades-long fixtures. Substations and transmission are built on 30–50-year horizons; generation assets run 20–60 years, with multi-decade PPAs, water rights, and recorded easements that outlive elections. Once steel’s in the ground, rate designs and priority interconnects get contractually sticky. Unlike the Internet fights of the last 25 years—where you could force a license for what travels through the pipe—this AI footprint binds communities for generations, essentially forever. We will be stuck with the decisions we make today.

Because China: The New Missile Gap

There’s a familiar ring to the way America is now talking about AI, energy, and federal land use (and likely expropriation). In the 1950s Cold War era, politicians sold the country on a “missile gap” that later proved largely mythical, yet it hardened budgets, doctrine, and concrete in ways that lasted decades.

Today’s version is the “AI gap”—a story that says China is sprinting on AI, so we must pave faster, permit faster, and relax old guardrails to keep up. Of course, this diverts attention from China’s advances in directed-energy weapons and hypersonic missiles, which are here today, will play havoc on an actual battlefield, and to which the West has no counter. But let’s not talk about those (at least not until we lose a carrier in the South China Sea); let’s worry about AI, because that will make Silicon Valley even richer.

Watch any interview with executives from the frontier AI labs and within minutes they will hit their “because China” talking point. National security and competitiveness are real concerns, but they don’t justify blank checks and Constitutional-level safe harbors. The missile-gap analogy is useful because it reminds us how compelling threat-narrative propaganda can swamp due diligence. We can support strategic compute and energy without letting an AI-gap story permanently bulldoze open space and saddle communities with the bill.

Energy Haves (Them) and Have-Nots (Everyone Else)

The result is a two-track energy state, AKA hell on earth. On Track A, the frontier AI lab hyperscalers like Google, Meta, Microsoft, OpenAI & Co. build company-town infrastructure for AI: on-site generation in microgrids that sit outside everyone else’s electric grid, plus dedicated interties and other interconnections between electric operators, often on or near federal land. On Track B, the public grid carries everyone else: homes, hospitals, small manufacturers, water districts. As President Trump said at the White House AI dinner this week, Track A promises to “self-supply,” but even self-supplied campuses still lean on the public grid for backup and monetization, and they compete for scarce interconnection headroom.

President Trump is allowing the hyperscalers to get permanent rights to build on massive parcels of government land, including private utilities to meet the massive electricity and water-cooling needs of AI data centers. Strangely enough, this continues a Biden policy under an executive order issued late in the Biden presidency that Trump now takes credit for, and it is 180 degrees out from America First, according to people who ought to know, like Steve Bannon. And yet it is happening.

White House Dinners are Old News in Silicon Valley

If someone says “AI labs will build their own utilities on federal land,” that land comes in two flavors: Department of Defense (now War Department) or Department of Energy sites, and land owned by the Bureau of Land Management (BLM). These are vastly different categories. DoD/DOE sites such as Idaho National Laboratory, the Oak Ridge Reservation, the Paducah GDP, and the Savannah River Site imply behind-the-fence, mission-tied microgrids with limited public friction; BLM land implies public-land rights-of-way and multi-use trade-offs (grazing, wildlife, cultural), longer timelines, and grid-export dynamics with potential “curtailment,” which means prioritizing electricity for the hyperscalers. Take Idaho National Laboratory (INL), one of the four AI/data-center sites. INL’s own environmental reports state that about 60% of the INL site is open to livestock grazing, with monitoring of grazing impacts on habitat. That’s likely over.

This is about how we power anything not controlled by a handful of firms. And it’s about the land footprint: fenced solar yards, switchyards, substations, massive transmission lines, wider roads, laydown areas. On BLM range and other open spaces, those facilities translate into real, local losses—grazable acres locked inside fences, stock trails detoured, range improvements relocated.

What the two tracks really do

Track A solves a business problem: compute growth outpacing the public grid’s construction cycle. By putting electrons next to servers (literally), operators avoid waiting years for a substation or a 230‑kV line. Microgrids provide islanding during emergencies and participation in wholesale markets when connected. It’s nimble, and it works—for the operator.

Track B inherits the volatility: planners must consider a surge of large loads that may or may not appear, while maintaining reliability for everyone else. Capacity margins tighten; transmission projects get reprioritized; retail rates absorb the externalities. When utilities plan for speculative loads and those projects cancel or slide, the region can be left with stranded costs or deferred maintenance elsewhere.

The land squeeze we’re not counting

Public agencies tout gigawatts permitted. They rarely publish the acreage fenced, the AUMs (animal unit months of grazing) affected, or the water commitments. Utility-scale solar commonly pencils out to 5–7 acres per megawatt of capacity, depending on layout and topography. At that ratio, a single gigawatt occupies thousands of acres—acres that, unlike wind, often can’t be grazed once panels and security fences go in. Even where grazing is technically possible, access roads, laydown yards, and vegetation control impose real costs on neighboring users.
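To make the scale concrete, here is a back-of-the-envelope sketch of that arithmetic (an illustration of the 5–7 acres per MW rule of thumb above, not a site-specific estimate):

```python
# Rough land-use arithmetic for utility-scale solar, using the
# 5-7 acres per megawatt rule of thumb cited above.
ACRES_PER_MW_LOW, ACRES_PER_MW_HIGH = 5, 7
ACRES_PER_SQUARE_MILE = 640

megawatts = 1_000  # one gigawatt of capacity

low_acres = megawatts * ACRES_PER_MW_LOW    # 5,000 acres
high_acres = megawatts * ACRES_PER_MW_HIGH  # 7,000 acres

print(f"1 GW fences roughly {low_acres:,}-{high_acres:,} acres")
print(f"that is about {low_acres / ACRES_PER_SQUARE_MILE:.0f}-"
      f"{high_acres / ACRES_PER_SQUARE_MILE:.0f} square miles")
# Output: 1 GW fences roughly 5,000-7,000 acres
#         that is about 8-11 square miles
```

In other words, every gigawatt of fenced solar is on the order of eight to eleven square miles of range that comes out of multiple use, before counting roads and buffers.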

Wind is more compatible with grazing, but it isn’t footprint-free. Pads, roads, and safety buffers fragment pasture. Transmission to move that energy still needs corridors—and those corridors cross someone’s water lines and gates. Multiple use is a principle; on the ground it’s a schedule, a map, and a cost. Just for reference, the acreage rule of thumb is approximately 5–7 acres per megawatt of direct current (“MWdc”), but access roads, laydown yards, and buffers extend beyond the fence line.

We are going through this right now in my part of the world. Central Texas is bracing for a wave of new high-voltage transmission: 345-kV corridors cutting (literally) across the Hill Country to serve load growth from chip fabricators and data centers and to tie in distant generation (big lines are a must once you commit to the usage). Ranchers and small towns are pushing back hard: eminent-domain threats, devalued land, scarred vistas, live-oak and wildlife impacts, and routes that ignore existing roads and utility corridors. Packed hearings and county resolutions demand co-location, undergrounding studies, and real alternatives—not “pick a line on a map” after the deal is done. The fight isn’t against reliability; it’s against a planning process that externalizes costs onto farmers, ranchers, other landowners, and working landscapes.

Texas’s latest SB 6 is the case study. After a wave of ultra-large AI/data-center loads, frontier labs and their allies pushed lawmakers to rewrite reliability rules so the grid would accommodate them. Signed into law by Gov. Greg Abbott in June 2025 and now in effect, SB 6 directs the PUCT and the Texas grid operator ERCOT to set new rules policing very large loads (e.g., data centers)—emergency curtailment and/or firm backup-power requirements—effectively reshaping interconnection priorities and shifting reliability risk and costs onto everyone else. “Everyone else” means you and me, kind of like the “full faith and credit of the US.” So the devil will be in the details, and someone needs to put on the whole armor of God, so to speak.

The phantom problem

Another quiet driver of bad outcomes is phantom demand: developers filing duplicative load or interconnection requests to keep options open. On paper, it looks like a tidal wave; in practice, only a slice gets built. If every inquiry triggers a utility study, a route survey, or a placeholder in a capital plan, neighborhoods can end up paying for capacity that never comes online to serve them.

A better deal for the public and the range

Prioritize already‑disturbed lands—industrial parks, mines, reservoirs, existing corridors—before greenfield BLM range land. Where greenfield is unavoidable, set a no‑net‑loss goal for AUMs and require real compensation and repair SLAs for affected range improvements.

Milestone gating for large loads: require non‑refundable deposits, binding site control, and equipment milestones before a project can hold scarce interconnection capacity or trigger grid upgrades. Count only contracted loads in official forecasts; publish scenario bands so rate cases aren’t built on hype.

Common‑corridor rules: make developers prove they can’t use existing roads or rights‑of‑way before claiming new footprints. Where fencing is required, use wildlife‑friendly designs and commit to seasonal gates that preserve stock movement.

Public equity for public land: if a campus wins accelerated federal siting and long‑term locational advantage, tie that to a public revenue share or capacity rights that directly benefit local ratepayers and counties. Public land should deliver public returns, not just private moats.

Grid‑help obligations: if a private microgrid islands to protect its own uptime, it should also help the grid when connected. Enroll batteries for frequency and reserve services; commit to emergency export; and pay a fair share of fixed transmission costs instead of shifting them onto households.

Or you could do what the Dutch and Irish governments proposed under the guise of climate change regulations—kill all the cattle. I can tell you right now that that ain’t gonna happen in Texas.

Will We Get Fooled Again?

If we let a hyped, latter-day “missile gap” set the terms, we’ll lock in a two-track energy state: private power for those who can afford to build it, a more fragile and more expensive public grid for everyone else, and open spaces converted into permanent infrastructure at a discount. The alternative is straightforward: price land and grid externalities honestly, gate speculative demand, require public returns on public siting, and design corridor rules that protect working landscapes. That’s not anti-AI; it’s pro-public. Everything not controlled by Big Tech will be better for it.

Let’s be clear: the data-center onslaught will be financed by the taxpayer one way or another—either as direct public outlays or through sweetheart “leases” of federal land to build private utilities behind the fence for the richest corporations in commercial history. After all the goodies that Trump is handing to the AI platforms, let’s not have any loose talk of “selling” excess electricity to the public–that price should be zero. Even so, the sales pitch about “excess” electricity they’ll generously sell back to the grid is a fantasy; when margins tighten, they’ll throttle output, not volunteer philanthropy. Picture it: do you really think these firms won’t optimize for themselves first and last? We’ll be left with the bills, the land impacts, and a grid redesigned around their needs. Ask yourself—what in the last 25 years of Big Tech behavior says “trustworthy” to you?

From Fictional “Looking Backward” to Nonfiction Silicon Valley: Will Technologists Crown the New Philosopher‑Kings?

More than a century ago, writers like Edward Bellamy and Edward Mandell House asked a question that feels as urgent in 2025 as it did in their era: Should society be shaped by its people, or designed by its elites? Both grappled with this tension in fiction. Bellamy’s Looking Backward (1888) imagined a future society run by rational experts — technocrats and bureaucrats centralizing economic and social life for the greater good. House’s Philip Dru: Administrator (1912) went a step further, envisioning an American civil war where a visionary figure seizes control from corrupt institutions to impose a new era of equity and order.  Sound familiar?

Today, Silicon Valley’s titans are rehearsing their own versions of these stories. In an era dominated by artificial intelligence, climate crisis, and global instability, the tension between democratic legitimacy and technocratic efficiency is more pronounced than ever.

The Bellamy Model: Eric Schmidt and Biden’s AI Order

President Biden’s sweeping Executive Order on AI, issued in late 2023, feels like a chapter lifted from Looking Backward. Its core premise is unmistakable: trust our national-champion “trusted” technologists to design and govern the rules for an era shaped by artificial intelligence. At the heart of this approach is Eric Schmidt, former CEO of Google and a key advisor in shaping the AI order, at least according to Eric Schmidt.

Schmidt has long advocated for centralizing AI policymaking within a circle of vetted, elite technologists — a belief reminiscent of Bellamy’s idealistic vision. According to Schmidt, AI and other disruptive technologies are too pivotal, too dangerous, and too impactful to be left to messy democratic debates. For people in Schmidt’s cabal, this approach is prudent: a bulwark against AI’s darker possibilities. But it doesn’t do much to protect against the darker possibilities of the AI platforms themselves. For skeptics like me, it raises a haunting question posed by Bellamy himself: Are we delegating too much authority to a technocratic elite?

The Philip Dru Model: Musk, Sacks, and Trump’s Disruption Politics

Meanwhile, across the aisle, another faction of Silicon Valley is aligning itself with Donald Trump and making a very different bet for the future. Here, the nonfiction playbook is closer to the fictional Philip Dru. In House’s novel, an idealistic and forceful figure emerges from a broken system to impose order and equity. Enter Elon Musk and David Sacks, both positioning themselves as champions of disruption, backed by immense platforms, resources, and their own venture funds. 

Musk openly embraces a worldview wherein technologists have both the tools and the mandate to save society by reshaping transportation, energy, space, and AI itself. Meanwhile, Sacks advocates for Silicon Valley as a de facto policymaker, disrupting traditional institutions and aligning with leaders like Trump to advance a new era of innovation-driven governance—with no Senate confirmation or even a security clearance. This competing cabal operates on the implicit belief that traditional democratic institutions, inevitably bogged down by process, gridlock, and special interests, can no longer solve society’s biggest problems. To Special Government Employees like Musk and Sacks, their disruption is not a threat to democracy, but its savior.

A New Gilded Age? Or a New Social Contract?

Both threads — Biden and Schmidt’s technocratic centralization and Musk, Sacks, and Trump’s disruption-driven politics — grapple with the legacy of Bellamy and House. In the Gilded Age that inspired those writers, industrial barons sought to justify their dominance with visions of rational, top-down progress. Today’s Silicon Valley billionaires carry a similar vision for the digital era, suggesting that elite technologists, like Plato’s “guardians” in The Republic, can govern more effectively than traditional democratic institutions.

But at what cost? Will AI policymaking and its implementation evolve as a public endeavor, shaped by citizen accountability? Or will it be molded by corporate elites making decisions in the background? Will future leaders consolidate their role as philosopher-kings and benevolent administrators — making themselves indispensable to the state?

The Stakes Are Clear

As the lines between Silicon Valley and Washington continue to blur, the questions posed by Bellamy and House have never been more relevant: Will technologist philosopher-kings write the rules for our collective future? Will democratic institutions evolve to balance AI and climate crisis effectively? Will the White House of 2025 (and beyond) cede authority to the titans of Silicon Valley? In this pivotal moment, America must ask itself: What kind of future do we want — one that is chosen by its citizens, or one that is designed for its citizens? The answer will define the character of American democracy for the rest of the 21st century — and likely beyond.

When Viceroy David Sacks Writes the Tariffs: How One VC Could Weaponize U.S. Trade Against the EU

David Sacks is a “Special Government Employee,” a Silicon Valley insider, and a PayPal mafioso who has become one of the most influential “unofficial” architects of AI policy under the Trump administration. No confirmation hearings, no formal role—but direct access to power.

He:
– Hosts influential political podcasts pushing Musk- and Thiel-aligned narratives.
– Coordinates behind closed doors with elite AI companies who are now PRC-style “national champions” (OpenAI, Anthropic, Palantir).
– Has reportedly played a central role in shaping the AI Executive Orders and industrial strategy driving billions in public infrastructure to favored firms.

Under 18 U.S.C. § 202(a), a Special Government Employee is:

  • retained temporarily to perform limited government functions,
  • for no more than 130 days in any 365-day period (which for Sacks ends either April 14 or May 30, 2025), unless reappointed in a different role,
  • typically serving in an advisory or consultative role, and
  • without actual decision-making or operational authority over federal programs or agencies.

SGEs are used to avoid conflict-of-interest entanglements for outside experts while still tapping their expertise for advisory purposes. They are not supposed to wield sweeping executive power or effectively run a government program. Yeah, right.

And like a good little Silicon Valley weasel, Sacks is supposedly alternating between his DC side hustle and his VC office to stay under 130 days. This is a dumbass reading of the statute, which says “‘Special Government employee’ means… any officer or employee…retained, designated, appointed, or employed…to perform…temporary duties… for not more than 130 days during any period of 365 consecutive days.” That’s not the same as 130 days “worked” on the time-card punch. But oh well.
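For a back-of-the-envelope illustration of why the two readings diverge (the January 20, 2025 start date and the three-days-a-week schedule are assumptions for the example, not reported facts):

```python
from datetime import date, timedelta

# Hypothetical start date for the example: Inauguration Day 2025.
start = date(2025, 1, 20)

# Reading 1: 130 *consecutive calendar days* of being "retained,
# designated, appointed, or employed," as the statute says.
print(start + timedelta(days=130))  # 2025-05-30

# Reading 2: the "time card" theory, counting only days physically
# worked, say three days a week in DC (a hypothetical schedule).
worked, day = 0, start
while worked < 130:
    if day.weekday() < 3:  # Mon/Tue/Wed "on the clock"
        worked += 1
    day += timedelta(days=1)
print(day)  # 2025-11-18 under this assumption, months past May 30
```

Which is the whole point: counting only days “worked” manufactures months of extra runway that the statute’s 365-consecutive-days language doesn’t grant.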

David Sacks has already exceeded the legal boundaries of his appointment as a Special Government Employee (SGE), not only in time served but also by directing the implementation of a sweeping, whole-of-government AI policy, including authoring executive orders, issuing binding directives to federal agencies, and coordinating interagency enforcement strategies—actions that plainly constitute executive authority reserved for duly appointed officers under the Appointments Clause. As an SGE, Sacks is authorized only to provide temporary, nonbinding advice, not to exercise operational control or policy-setting discretion across the federal government. Accordingly, any executive actions taken at his direction or based on his advisement are constitutionally infirm as the unlawful product of an individual acting without valid authority, and must be deemed void as “fruit of the poisonous tree.”

Of course, one of the “states” that the Trump AI Executive Orders will collide with almost immediately is the European Union and its EU AI Act. Were they 51st? No, that’s Canada. 52nd? Ah, right, that’s Greenland. Must be 53rd.

How Could David Sacks Weaponize Trade Policy to Help His Constituents in Silicon Valley?

Here’s the playbook:

Engineer Executive Orders

Through his demonstrated access to Trump and senior White House officials, Sacks could promote executive orders under the International Emergency Economic Powers Act (IEEPA) or Section 301 of the Trade Act, aimed at punishing countries (like EU members) for “unfair restrictions” on U.S. AI exports or operations.

Something like this: “The European Union’s AI Act constitutes a discriminatory and protectionist measure targeting American AI innovation, and materially threatens U.S. national security and technological leadership.” I got your moratorium right here.

Leverage the USTR as a Blunt Instrument

The Office of the U.S. Trade Representative (USTR) can initiate investigations under Section 301 without needing new laws. All it takes is political will—and a nudge from someone like Viceroy Sacks—to argue that the EU’s AI Act discriminates against U.S. firms. See Canada’s “Tech Tax”. Gee, I wonder if Viceroy Sacks had anything to do with that one.

Redefine “National Security”

Sacks and his allies can exploit the Trump administration’s loose definition of “national security,” claiming that restricting U.S. AI firms in Europe endangers critical defense and intelligence capabilities.

Smear Campaigns and Influence Operations

Sacks could launch more public campaigns against the EU like his attacks on the AI diffusion rule. According to the BBC, “Mr. Sacks cited the alienation of allies as one of his key arguments against the AI diffusion plan”. That’s a nice ally you got there, be a shame if something happened to it.

After all, the EU AI Act does everything Sacks despises: it protects artists and consumers, restricts deployment of high-risk AI systems (like facial recognition and social scoring), requires documentation of training data (which exposes copyright violations), and applies extraterritorially (meaning U.S. firms must comply even at home).

And don’t forget, Viceroy Sacks actually was given a portfolio that at least indirectly includes the National Security Council, so he can use the NATO connection to put a fine edge on his “industrial patriotism” just as war looms over Europe.

When Policy Becomes Personal

In a healthy democracy, trade retaliation should be guided by evidence, public interest, and formal process.

But under the current setup, someone like David Sacks can short-circuit the system—turning a private grievance into a national trade war. He’s already done it to consumers, wrongful-death claimants, and copyright owners; why not join warlords like Eric Schmidt and really jack with people? Like giving deduplication a whole new meaning.

When one man’s ideology becomes national policy, it’s not just bad governance.

It’s a broligarchy in real time.

Uncle Sugar, the Lord of War: Drones, Data, and Don’t Be Evil

“You know who’s going to inherit the Earth? Arms dealers. Because everyone else is too busy killing each other.”

Lord of War, screenplay by Andrew Niccol

Aren’t you glad that we allowed YouTube to jack us around, let Google distribute pirate tracks and sell advertising to pirate sites? Oh, and don’t forget allowing Google to scan all the world’s books–good thing they’re not using any of that to train AI. All thanks to Google’s former CEO Eric Schmidt, aka Uncle Sugar.

This week, Ukraine’s Office of the President announced a strategic partnership with Swift Beat, an AI drone technology company reportedly linked to Eric Schmidt, who is showing up everywhere like a latter-day Zelig. Yes, that’s right–your Uncle Sugar is back. The Ukraine memorandum of understanding adds yet another layer to the quiet convergence of Silicon Valley money and 21st-century warfare that is looking to be Uncle Sugar’s sweet spot. Given that Ukraine depends on the United States to fund roughly half of its defense budget, it’s a fairly safe assumption that somehow, some way, Uncle Sugar’s Washington buddies are helping to fund this deal.

The President of Ukraine’s announcement says that “[Swift Beat] will produce interceptor drones for the Armed Forces of Ukraine to destroy Russian UAVs and missiles, quadcopters for reconnaissance, surveillance, fire adjustment, and logistics, as well as medium-class strike drones for engaging enemy targets.” All based on US intel. So if Swift Beat uses US money received by Ukraine to manufacture this kit, you don’t suppose that Uncle Sugar might be planning on selling it to the good old US of A at some point in the future? Particularly given that the Russia-Ukraine war is frequently cited as a proving ground for the AI driven battle space?

Swift Beat has been portrayed as a nimble startup positioned to bring real-time battlefield intelligence and autonomous drone operations to Ukraine’s army. But as Defence-UA reported, the company’s website is opaque, its corporate structure elusive, and its origins murky. Despite the gravity of the deal—delivering critical defense technology to a country in a kinetic war—Swift Beat appears to lack a documented track record, a history of defense contracting, or even a clear business address. Reporting suggests that Swift Beat is owned by Volya Robotics OÜ, registered in Tallinn, Estonia, with Eric Schmidt as the sole beneficiary. Yeah, that’s the kind of rock-solid pedigree I want from someone manufacturing a weapon system to defend my capital.

Defence-UA raises further questions: why did Ukraine partner with a new firm (apparently founded in 2023) whose founders are tightly linked to U.S. defense tech circles, but whose public presence is nearly nonexistent? What role, if any, did Eric Schmidt’s extensive political and financial connections play in sealing the agreement? Is this a case of wartime innovation at speed—or something more…shall we say…complicated?

The entire arrangement feels eerily familiar. Nicolas Cage’s character in *Lord of War* wasn’t just trafficking weapons—he was selling access, power, and plausible deniability. Substitute advanced AI for Kalashnikovs and you get a contemporary upgrade for the AI bubble: an ecosystem where elite technologists and financiers claim to be “helping,” while building opaque commercial networks through jurisdictions with far less oversight than your uncle would face back home in the US. Cage’s arms dealer character had swagger, but also cover. You know, babes dig the drone. Not that Uncle Sugar would know anything about that angle. Schmidt’s Swift Beat seems to be playing a similar game to Yuri Orlov’s—with more money, but no less ambiguity.

And this isn’t Schmidt’s first dance in this space. As readers will recall, his growing entanglement in defense procurement, battlefield innovation, and AI-powered surveillance raises not just ethical questions—but geopolitical ones. The revolving door between Big Tech and government has never spun faster, and now it’s air-dropping influence into actual war zones.

Dr. Sarah Myers West of the AI Now Institute warns that figures like Eric Schmidt—who bridge Big Tech and national security—are crafting frameworks that sideline accountability in favor of accelerated deployment. That critique lands squarely in the case of Swift Beat, whose shadowy profile and deep ties to Silicon Valley make it a case study in how defense contracts and contractors can be opaque and deeply unaccountable. And Swift Beat is definitely a company that Dr. West calls “Eric Schmidt adjacent.”

While no public allegations have been made, the unusual structure of the Swift Beat–Ukraine agreement—paired with the company’s lack of operational history and the involvement of high-profile U.S. individuals—may raise important questions under the Foreign Corrupt Practices Act (FCPA). The FCPA prohibits U.S. entities from offering anything of value to foreign officials to secure business advantages, directly or indirectly. When so-far-unaudited wartime procurement contracts are awarded through opaque processes and international actors operate through newly formed entities—dare I say “cutouts”—the risk of FCPA violations needs to be explored. In other words, if Google were to get into the military hardware business like Meta, there would be an employee revolt at the Googleplex. But if they do it through a trusted source, even one over yonder way across the river, well…what’s the evil in helping an old friend? The whole thing sounds pretty spooky.

As Ukraine deepens its relationships with U.S. technology suppliers, and as prominent U.S. investors and executives like Uncle Sugar increase their involvement with all of the above, it may be appropriate for U.S. oversight bodies to take a closer look—not as a condemnation, but in service of transparency, compliance, and public trust. You know, don’t be evil.

Open the Pod Bay Doors, HAL: Why Eric Schmidt Is Insane, in His Own Words

In the GAI, no one can hear you scream. Let’s remember that this man has already stolen world culture–twice. It will be a dark kind of fun watching Schmidt get the World Economic Forum, Lawrence Lessig and Greta Thunberg to do a 180 on climate change. Don’t laugh–if anyone can do it, he can. You watch, the Berkman Center and EFF will lead the charge.

Chronology: The Week in Review, Eric Schmidt Spills on His “Bait” to the UK PM, Musk on AI Training, and Other News

Elon Musk Calls Out AI Training

We’ve all heard the drivel coming from Silicon Valley that AI training is fair use. During his interview with Andrew Ross Sorkin at the DealBook conference, Elon Musk (who ought to know, given his involvement with AI) said straight up that anyone who says AI doesn’t train on copyrighted works is lying.

The UK Government “Took the Bait”: Eric Schmidt Says the Quiet Part Out Loud on Biden AI Executive Order and Global Governance

There are a lot of moves being made in the US, UK, and Europe right now that will affect copyright policy for at least a generation. Google’s past chair Eric Schmidt has been working behind the scenes for at least the last two years to establish US artificial intelligence policy. Those efforts produced the “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” the longest executive order in history. That EO was signed by President Biden on October 30, so it’s done. (It is very unlikely that that EO was drafted entirely at Executive Branch agencies.)

You may ask, how exactly did this sweeping Executive Order come to pass? Who was behind it, because someone always is. As you will see in his own words, Eric Schmidt, Google, and unnamed senior engineers from the existing AI platforms are quickly making the rules, and they essentially drafted the Executive Order that President Biden signed on October 30. It was then presented as what Mr. Schmidt calls “bait” to the UK government, which convened a global AI safety conference hosted by His Excellency Rishi Sunak (the UK’s tech-bro Prime Minister) that just happened to start on November 1, the day after President Biden signed the EO, at Bletchley Park in the UK (see Alan Turing). (See “Excited schoolboy Sunak gushes as mentor Musk warns of humanoid robot catastrophe.”)

Remember, an executive order is an administrative directive from the President of the United States that addresses the operations of the federal government, particularly the vast Executive Branch. In that sense, executive orders are anti-majoritarian and are about as close to a royal decree, or Executive Branch legislation, as we get in the United States (see Separation of Powers, Federalist 47, and Montesquieu). Executive orders are not legislation; they require no approval from Congress, and Congress cannot simply overturn them.

So you can see if the special interests wanted to slide something by the people that was difficult to undo or difficult to pass in the People’s House…and based on Eric Schmidt’s recent interview with Mike Allen at the Axios AI+ (available here), this appears to be exactly what happened with the sweeping and vastly concerning AI Executive Order. I strongly recommend that you watch Mike Allen’s “interview” with Mr. Schmidt which fortunately is the first conversation in the rather long video of the entire event. I put “interview” in scare quotes because whatever it is, it isn’t the kind of interview that prompts probing questions that might put Mr. Schmidt on the spot. That’s understandable because Axios is selling a conference and you simply won’t get senior corporate executives to attend if you put them on the spot. Not a criticism, but understand that you have to find value for your time. Mr. Schmidt’s ego provides plenty of value; it just doesn’t come from the journalists.

Crucially, Congress is not involved in issuing an executive order. Congress may refuse to fund the subject of an EO, which could make it difficult to give it effect as a practical matter, but Congress cannot overturn an EO. Only a sitting U.S. President may overturn an existing executive order. In Mr. Schmidt’s interview at AI+, he tells us how all this regulatory activity happened:

The tech people along with myself have been meeting for about a year. The narrative goes something like this: We are moving well past regulatory or government understanding of what is possible, we accept that. [Remember, the antecedent of “we” means Schmidt and “the tech people,” or more broadly the special interests, not you, me, or the American people.]

Strangely…this is the first time that the senior leaders who are engineers have basically said that they want regulation, but we want it in the following ways…which as you know never works in Washington [unless you can write an Executive Order and get the President to sign it because you are the biggest corporation in commercial history].

There is a complete agreement that there are systems and scenarios that are dangerous. [Agreement by or with whom? No one asks.] And in all of the big [AI platforms with which] you are familiar like GPT…all of them have groups that look at the guard rails [presumably internal groups of managers] and they put constraints on [their AI platform in their silo]. They say “thou shalt not talk about death, thou shall not talk about killing”. [Anthropic, which received a $300 million investment from Google] actually trained the model with its own constitution [see “Claude’s Constitution”] which they did not just write themselves, they hired a bunch of people [actually Claude’s Constitution was crowd-sourced] to design a “constitution” for an AI, so it’s an interesting idea.

The problem is none of us believe this is strong enough….Our opinion at the moment is that the best path is to build some IPCC-like environment globally that allows accurate information of what is going on to the policy makers. [This is a step toward global governance for AI (and probably the Internet) through the United Nations. IPCC is the Intergovernmental Panel on Climate Change.]

So far we are on a win, the taste of winning is there.  If you look at the UK event which I was part of, the UK government took the bait, took the ideas, decided to lead, they’re very good at this,  and they came out with very sensible guidelines.  Because the US and UK have worked really well together—there’s a group within the National Security Council here that is particularly good at this, and they got it right, and that produced this EO which is I think is the longest EO in history, that says all aspects of our government are to be organized around this.

While Mr. Schmidt may say, aw shucks, dictating the rules to the government never works in Washington, that’s simply not true if you’re Google. In that case it always works, which is how Mr. Schmidt got his EO and will now export it to other countries.

It’s Not Just Google: Microsoft Is Getting into the Act on AI and Copyright

Be sure to read Joe Bambridge (Politico’s UK editor) on Microsoft’s moves in the UK. You have to love the “don’t make life too difficult for us” line–as in respecting copyright.

Google and New Mountain Capital Buy BMI: Now what?

Careful observers of the BMI sale were not led astray by BMI’s Thanksgiving-week press release, which was dutifully written up as news by most of the usual suspects except for the fabulous Music Business Worldwide and…ahem…us. You may think we’re making too much out of the Google investment through its CapitalG side fund, but judging by how hard BMI tried to hide the investment, I’d say that Google’s post-sale involvement probably varies inversely with how prominently it was disclosed. Not to mention the culture clash over the ageism so common at Google–if you’re a BMI employee who is over 30 and didn’t go to Carnegie Mellon, good luck.

And songwriters? Get ready to jump if you need to.

Spotify Brings the Streaming Monopoly to Uruguay

After Uruguay became the first Latin American country to pass streaming remuneration laws to protect artists, Spotify threw its toys out of the pram and threatened to go home. Can we get that in writing? A Spotify exit would probably be the best thing that ever happened for local competition in a Spanish-language country. Also, this legislation has been characterized as “equitable remuneration,” which it really isn’t; it’s its own thing—see the paper I wrote for WIPO with economist Claudio Feijoo. Complete Music Update’s Chris Cooke suggested that a likely result of Spotify paying the royalty would be that they would simply do a cram-down with the labels on the next round of license negotiations. If that’s not prohibited in the statute, it should be, and it’s really not “paying twice for the same music” anyway. The streaming remuneration is compensation for the streamers’ use of and profit from the artists’ brand (both featured and nonfeatured), e.g., as stated in the International Covenant on Economic, Social and Cultural Rights and many other human rights documents:

The Covenant recognizes everyone’s right—as a human right—to the protection and the benefits from the protection of the moral and material interests derived from any scientific, literary or artistic production of which he or she is the author. This human right itself derives from the inherent dignity and worth of all persons.