California Governor Gavin Newsom has signed an executive order requiring artificial intelligence companies to meet stricter safeguards as a condition of securing state contracts. Under the order, firms selling AI systems to California agencies must demonstrate policies that prevent misuse and protect privacy, security, and civil rights. The move places California in direct tension with the Trump administration’s push to establish uniform national AI standards and curtail state-level regulation.
The order directs the state’s Government Operations Agency to develop procurement standards for AI vendors covering issues such as illegal content generation, model bias, and risks to civil rights and freedom of speech. It also instructs the California Department of Technology to produce recommendations for watermarking AI-generated images and manipulated video. Newsom framed the action as a necessary counterweight to what he described as insufficient federal oversight.
“California’s always been the birthplace of innovation. But we also understand the flip side: in the wrong hands, innovation can be misused in ways that put people at risk,” Newsom said in a statement. He added that while policymakers in Washington craft rules that leave room for misuse, California intends to protect people’s rights rather than exploit them. The governor has positioned the state as a leader in responsible AI development.
The order arrives shortly after the Trump administration released a national AI policy framework urging Congress to set federal standards and reduce what officials characterize as a fragmented landscape of state regulations. Kevin Frazier, an adjunct research fellow at the Cato Institute, said the dispute reflects a longstanding constitutional tension between state and federal authority. He described Newsom’s order as “a prime example of federalism in action,” noting that companies unwilling to meet California’s requirements can simply choose not to bid for state business.
Frazier also noted that Congress retains the ability to shape the broader direction of the country’s AI ambitions. “Every technological breakthrough—from the steamboat to superintelligence—raises key questions about how to allocate regulatory authority between the states and the federal government,” he said. He argued the Constitution provides a clear framework: federal leadership on matters of economic and national security, with states exercising traditional authority within their borders.
Quinn Anex-Reis, a senior policy analyst at the Center for Democracy and Technology, said California’s size and purchasing power give the state significant leverage over how AI companies design and test their products. Government contracting represents a substantial and growing revenue stream for technology developers, he noted, meaning procurement rules can function as a powerful regulatory tool. “The procurement process is a really important place to pay attention to,” Anex-Reis said, “because that’s really the most important place the state can look to set protections and expectations about how vendors develop their tools.”
The broader political context adds another dimension to the dispute. Newsom has emerged as a prominent national Democratic figure and a potential 2028 presidential candidate; a recent Politico–UC Berkeley Citrin Center poll showed him leading former Vice President Kamala Harris by 14 points among likely Democratic primary voters in California. Last summer, the Trump administration directed federal agencies to avoid contracts with what it termed “woke AI” models and to procure systems demonstrating ideological neutrality. Despite the partisan overtones, Anex-Reis argued the core issue transcends politics, saying it is fundamentally about ensuring taxpayer dollars are not wasted and that government-purchased tools actually work.
Originally reported by Decrypt.
