The AI industry has picked a side in the 2026 midterms. Several sides, actually — and they are not aligned.
Combined AI-related political spending for the 2026 midterm elections has now passed $300 million, according to reporting from Axios and CNBC. The money is flowing in two directions that reflect a genuine ideological split inside the industry about how AI should be governed.
The deregulation push
The largest pool of money is backing deregulation. Innovation Council Action, a new pro-AI political nonprofit, announced plans to spend over $100 million on the 2026 midterms backing candidates who oppose new AI regulations and support the Trump administration’s approach of a single federal framework.
The group is led by Taylor Budowich, who previously served as White House Deputy Chief of Staff and ran the MAGA Inc. super PAC. It has the endorsement of David Sacks, Trump’s AI czar and co-chair of the President’s Council of Advisors on Science and Technology.
The group has built a lawmaker scorecard that ranks politicians based on their alignment with Trump’s AI policy agenda — favoring deregulation, blocking state-level AI laws, and accelerating US AI infrastructure development. That scorecard will guide where the money goes.
Innovation Council Action is not alone. Leading the Future, backed by figures including OpenAI co-founder Greg Brockman, investor Joe Lonsdale, and Marc Andreessen, has raised $50 million. Meta is running a separate super PAC effort expected to spend roughly $65 million on state-level races. Combined, the deregulation side's spending is well over $200 million.
The counter-push
Taking the opposite position, Anthropic donated $20 million in February 2026 to an organization called Public First Action, describing it as a bipartisan group focused on candidates who support meaningful AI safeguards. Public First Action has since raised approximately $50 million total.
Anthropic’s reasoning was explicit: the company said it agrees with most Americans that not enough is being done to address the risks of AI. That is a notable position for an AI company to take publicly — and it puts Anthropic directly at odds with many of its competitors on the question of government oversight.
Candidates backed by AI-industry money have already won 10 of 11 congressional primaries in early 2026, following a playbook similar to crypto’s successful super PAC strategy from 2024. Those wins have largely favored the deregulation side.
What the split means in practice
This is not an abstract policy debate. The outcome of the midterms will shape the regulatory environment that businesses operate in for the next several years.
If the deregulation push succeeds, you will likely see a single federal AI framework that preempts state laws — meaning one set of rules instead of a patchwork of California, Texas, Colorado, and Illinois requirements. That simplifies compliance significantly for companies operating across state lines.
If the pro-regulation effort gains more seats, expect more rigorous requirements around AI transparency, training data disclosure, employee whistleblower protections, and liability for AI-generated decisions.
Businesses currently deploying AI agents, using AI in hiring or lending, or building AI into customer-facing products are already operating in a gray zone. The FTC has issued informal guidance. Several state laws took effect January 1, 2026. A federal framework is coming — the question is how restrictive it will be.
What this means for business
The practical implication for any business running AI in 2026 is straightforward: keep your documentation current and your governance visible.
Whichever side wins, the businesses caught flat-footed will be the ones treating AI deployment as purely a technology decision. The ones that will be fine are those that have documented what their AI systems do, why decisions are made, and how humans stay in the loop where needed.
That groundwork protects you under a light-touch federal framework and under a stricter one. It also makes it easier to demonstrate compliance if you operate across multiple states with different rules.
If you are deploying AI agents in your business and want to make sure your governance and documentation are in shape — regardless of how the regulatory picture settles — our Omni Advisory team can help you get ahead of it. The companies that build governance in now will spend far less time and money on compliance later, whatever form that compliance ends up taking.
Source
Axios