How the AI Moratorium Threatens Local Educational Control

The proposed federal AI moratorium currently in the One Big Beautiful Bill Act states:

[N]o State or political subdivision thereof may enforce, during the 10-year period beginning on the date of the enactment of this Act, any law or regulation of that State or a political subdivision thereof limiting, restricting, or otherwise regulating artificial intelligence models, artificial intelligence systems, or automated decision systems entered into interstate commerce.

What is a “political subdivision”?  According to a pretty standard definition offered by the Social Security Administration:

A political subdivision is a separate legal entity of a State which usually has specific governmental functions.  The term ordinarily includes a county, city, town, village, or school district, and, in many States, a sanitation, utility, reclamation, drainage, flood control, or similar district.

The proposed moratorium would prevent school districts—classified as political subdivisions—from adopting policies that regulate artificial intelligence. This includes rules restricting students’ use of AI tools such as ChatGPT, Gemini, or other platforms in school assignments, exams, and academic work. Districts may be unable to prohibit AI-generated content in essays, discipline AI-related cheating, or require disclosures about AI use unless they recast those policies as general rules against “unauthorized assistance.”

Without clear authority to restrict AI in educational contexts, school districts will likely struggle to maintain academic integrity or to update honor codes. The moratorium could even interfere with schools’ ability to assess or certify genuine student performance. 

Parallels with Google’s Track Record in Education

The dangers of preempting local educational control over AI echo prior controversies involving Google’s deployment of tools like Chromebooks, Google Classroom, and Workspace for Education in K–12 environments. Although these tools are marketed as free and privacy-safe, Google has repeatedly been accused of covertly tracking students, profiling minors, and failing to meet federal privacy standards. Google has likely integrated its AI features into these same platforms, including those used in school districts, so it could invoke the AI moratorium as a safe-harbor defense against claims by parents or schools that its products violate privacy or other rights.

A 2015 complaint by the Electronic Frontier Foundation (EFF) alleged that Google tracked student activity even with privacy settings enabled. New Mexico sued Google in 2020 for collecting student data without parental consent. More recently, lawsuits in California allege that Google continues to fingerprint students and gather metadata despite educational safeguards.

Although the EFF filed an FTC complaint against Google in 2015, it did not launch a broad campaign or litigation strategy afterward. Critics argue that EFF’s muted follow-up may reflect its financial ties to Google, which has funded the organization in the past. This creates a potential conflict: while EFF publicly supports student privacy, its response to Google’s misconduct has been comparatively restrained.

This has led to the suggestion that EFF operates in a ‘big help, little bad mouth’ mode—providing substantial policy support to Google on issues like net neutrality and platform immunity, while offering limited criticism on privacy violations that directly affect vulnerable groups like students.

AI Use in Schools vs. Google’s Educational Data Practices: A Dangerous Parallel

The proposed AI moratorium would prevent school districts from regulating how artificial intelligence tools are used in classrooms—including tools that generate student work or analyze student behavior. This prohibition becomes even more alarming when we consider the historical abuses tied to Google’s education technologies, which have long raised concerns about student profiling and surveillance.

Over the past decade, Google has aggressively expanded its presence in American classrooms through products like Google Classroom, Chromebooks running Google Workspace for Education, Google Docs, and Gmail for student accounts.

Although marketed as free tools, these services have been criticized for tracking children’s browsing behavior and location; storing search histories, even when privacy settings were enabled; creating behavioral profiles for advertising or product development; and sharing metadata with third-party advertisers or internal analytics teams.

Google signed the K–12 Student Privacy Pledge in 2015, committing to curb these practices, but watchdog groups and investigative journalists—including the EFF in its FTC complaint—have continued to document covert tracking of minors, even in K–12 settings where children cannot legally consent to data collection.

AI Moratorium: Legalizing a New Generation of Surveillance Tools

The AI moratorium would take these concerns a step further by prohibiting school districts from regulating newer AI systems, even if those systems profile students using facial recognition, emotion detection, or predictive analytics; auto-grade essays and responses while building proprietary datasets on student writing patterns; offer “personalized learning” in exchange for access to sensitive performance and behavior data; or encourage use of generative tools (like ChatGPT) that may store and analyze student prompts and usage patterns.

If school districts cannot ban or regulate these tools, they are effectively stripped of their local authority to protect students from the next wave of educational surveillance.

Contrast in Power Dynamics

| Issue | Google for Education | AI Moratorium Impacts |
|---|---|---|
| Privacy Concerns | Tracked students via Gmail, Docs, and Classroom without proper disclosures. | Prevents districts from banning or regulating AI tools that collect behavioral or academic data. |
| Policy Response | Limited voluntary reforms; Google maintains a dominant K–12 market share. | Preempts all local regulation, even if communities demand stricter safeguards. |
| Legal Remedies | Few successful lawsuits due to weak enforcement of COPPA and FERPA. | Moratorium would block even the potential for future local rules. |
| Educational Impact | Created asymmetries in access and data protection between schools. | Risks deepening digital divides and eroding academic integrity. |

Why It Matters

Allowing companies to introduce AI tools into classrooms—while simultaneously barring school districts from regulating them—opens the door to widespread, unchecked profiling of minors, with no meaningful local oversight. Just as Google was allowed to shape a generation’s education infrastructure behind closed doors, this moratorium would empower new AI actors to do the same, shielded from accountability.

Parent groups should let lawmakers know that the AI moratorium must be stripped from the legislation.