The Trump administration is planning to sue states over their artificial intelligence laws, but how this pressure affects work on future legislation depends on state politics.
President Donald Trump issued an executive order in December that directs the Justice Department to sue states over AI laws that the administration deems “burdensome” to the industry, including on the grounds that they interfere with interstate commerce. The order comes amid failed efforts in Congress to impose a White House-backed moratorium on state AI regulations.
Supporters of AI regulation in states including Colorado, California and Texas say their states are likely to keep moving forward on AI policy while Congress and the executive branch take their time, but lawmakers aligned with the administration may take threats to sue or to block funding more seriously.
Cody Wenzke, senior policy adviser at the American Civil Liberties Union, called the plan “a mishmash of flawed legal principles.”
“I don't think that, for the most part, it will be effective in preventing states from regulating artificial intelligence to protect their citizens,” Wenzke said.
Under the December 11 executive order, the Commerce Department and White House Special Advisor for AI and Crypto David O. Sacks were tasked with publishing “an assessment of existing state AI laws that identifies onerous laws” within 90 days.
Once states are identified by the administration as having tougher laws, they could face lawsuits and have some federal funding blocked, including grants under the $42.5 billion Broadband Equity, Access, and Deployment program, which aims to expand access to high-speed internet.
“The federal government is limited in its ability to unilaterally change the terms of federal grants to states,” Wenzke said, because programs like BEAD were established by Congress.
Wenzke also questioned the authority of the Federal Communications Commission and the Federal Trade Commission, included in the executive order, to regulate AI or preempt states.
Utah bill
The administration is already trying to exert informal influence on state policies. Last week, the White House Office of Intergovernmental Affairs sent a memo, obtained by CQ Roll Call, opposing a Utah bill that would require developers of large frontier AI models to publish public safety and child protection plans. The memo was first reported by Axios.
“We unequivocally oppose Utah HB 286 and view it as an irreversible bill that goes against the Administration's AI agenda,” the memo said. The one-page statement did not provide any legal justification for the protest.
The White House did not immediately respond to a request for comment.
The Utah bill, which passed out of a state House committee late last month, would also require risk assessments for frontier models, require developers to report certain security incidents to the state government and establish whistleblower protections for employees of large frontier developers.
The bill's sponsor, Republican state Representative Doug Fiefia, expressed his opposition to the executive order in a post on TikTok in December.
“This executive order goes too far. I support the idea of a national AI framework, but it must come through Congress, where there is transparency, debate and collaboration. That's how you build trust and lasting policy. Don't forget about states' rights and the 10th Amendment. And until that happens, states should be allowed to protect their own people,” Fiefia said.
Even before that White House move, legal uncertainty had caused some states to put their existing laws and possible future legislation on hold.
Colorado's AI Act is currently scheduled to go into effect this summer. A working group established before the executive order is negotiating updates to the law.
As it currently stands, the measure would require developers and deployers of “high-risk” AI systems – those making consequential decisions – to take “reasonable care” to protect users from the risks of discrimination, which it defines based on impact rather than intent.
In the executive order, Trump cited Colorado's law and its “disparate impact” standard as examples of state laws that require developers to “embed ideological bias within the model.”
Lauren Furman, CEO of the Colorado Chamber of Commerce, represents the industry in the working group negotiations. She said she hasn't seen “a lot of attention” paid to the Trump administration's order.
“When you're in a state like ours, and you have … a majority of Democrats who, you know, are in control, I think they should … feel like the legislature will make a decision and move forward,” Furman said.
Furman also said he expects at least two other AI-focused bills to be filed in Colorado this session, including one that will likely focus on health care.
The executive order does offer states a way to avoid conflict with the administration, including over access to discretionary grant funding: a state can keep its funding “by entering into a binding agreement with the agency concerned not to enforce any such law during the performance period” of the grant.
But Furman expects that if the federal government sues Colorado over its law, state Attorney General Phil Weiser, who is also running for governor, will continue his opposition to the administration.
“Attorney General Weiser is filing lawsuits almost daily, so I'm certainly hopeful that will be the case,” Furman said.
Colorado isn't the only state considering new AI legislation this session. In New York, a bill would require disclaimers on AI-generated news content. Another would impose a minimum three-year moratorium on permits for new data centers.
So far this year, lawmakers in Florida, Washington, Utah and Virginia have made progress on their AI bills.
Advocates for regulating AI in California appear similarly unfazed by the executive order, according to Teri Olle, vice president of Economic Security California Action, a progressive economic security nonprofit. The group's 501(c)(3) partner, the Economic Security Project, was co-founded by Facebook co-founder Chris Hughes and promotes guaranteed income programs.
The group was an organizational co-sponsor of California's Transparency in Frontier Artificial Intelligence Act, known as SB 53. The law requires large frontier developers to publish frameworks explaining how they incorporate safety standards and best practices, and to file summaries of their catastrophic risk assessments.
The bill's sponsor, Democratic state Senator Scott Wiener, has been vocal in defending states' ability to regulate where Congress has not acted.
Olle called the executive order a “harassment scheme.” She also predicted that California would fight any potential lawsuit from the Trump administration.
“I have no indication that California … will allow our rights to be trampled upon,” Olle said.
Olle said she is not currently aware of concerns about California being denied BEAD funding or other federal grants as the state works on its budget.
Olle said she was surprised by how easily the California law passed through the legislature, though she added that tech industry CEOs “didn't like the fact that they were being undercut in any way.” Going forward, she sees those CEOs as more of an obstacle to advancing AI legislation in California than the administration.
“Tech CEOs are not taking it quietly, like, I think that's the thing that's difficult… The thing that's probably more impactful is the fact that tech CEOs… their combined interests have, you know, spent millions of dollars in PACs to try to defeat candidates,” she said.
Olle said the money runs counter to public opinion, which favors regulating AI. A Gallup poll conducted last year found that 80% of those surveyed “supported maintaining regulations for AI safety and data protection, even if it means developing AI capabilities at a slower pace.”
“The real politics of this issue are in favor of common sense regulation of the technology,” Olle said.
GOP states' views
However, the politics of the executive order may differ in Republican-led states.
The Texas Responsible Artificial Intelligence Governance Act, known as HB 149, bans developing or deploying AI “with the intent to unlawfully discriminate against a protected class in violation of state or federal law.” It states that disparate impact is not sufficient to prove intent. Texas law also requires government agencies that deploy AI systems to disclose when an interaction is AI-generated.
David Dunmoyer of the conservative nonprofit Texas Public Policy Foundation said there was “a lot of disappointment” in Texas over the executive order.
“There's a feeling that states are being punished for moving forward and doing something that, in the case of Texas, is a really good law that is thoughtful and intentional,” Dunmoyer said.
He said parts of the Texas law appear to match the intent of the executive order and its carve-outs, which preserve state laws governing child safety, data center infrastructure, and state governments' procurement and use of AI.
He highlighted Texas law's ban on government entities using AI for “social scoring,” which is consistent with the administration's opposition to “viewpoint discrimination” in AI.
He also said the Texas law's focus on outcomes may be more consistent with the type of policies being put forward by Sacks in the White House.
“Most of Texas' approach is not ‘check these boxes before you operate.’ Instead, ‘there is a clear harm that is intended to cause those bad outcomes,’” he said.
Kevin Welch, president of the digital civil liberties group EFF-Austin, said other parts of the law could fit into the order's carve-outs.
The law prohibits “developing or distributing an artificial intelligence system with the sole intent” of producing child sexual exploitation material, which could fall under the order's child-protection carve-out. Its transparency provision governs the state government's own use of AI, which could also be exempt under the order.
But even if some parts of the Texas law don't fit into the carve-outs, Welch thinks the state may be safe from having BEAD or other funding pulled, for political reasons.
“When they use that tool, in my opinion, it's more partisan toward the Trump administration. They're more likely to use those kinds of threats against what they see as a blue state,” Welch said.
He predicted that if the Trump administration disagrees with the Texas law, state leaders and the White House are more likely to discuss their differences and “negotiate.”
Dunmoyer said the inclusion of BEAD funding in the executive order raises difficult questions for Texas, which was approved for $1.27 billion in broadband deployment funds under the program last year.
“If it comes down to it and you have to choose between maintaining AI laws or connecting disconnected people in vulnerable and rural communities, that's an extremely difficult political decision,” Dunmoyer said.
How Texas responds to any lawsuit may also depend on who is elected as the state's next attorney general in November, he said.
Dunmoyer said Texas lawmakers are balancing the needs of state residents with “political realities” on AI while the state legislature is out of session in 2026.
“In conversations with lawmakers, there's definitely a sense of, OK, let's wait and see what happens with the executive order,” Dunmoyer said.
©2026 CQ-Roll Call, Inc. All Rights Reserved. Visit cqrollcall.com. Distributed by Tribune Content Agency, LLC.
