Enterprise DNA


White House Eyes Mandatory AI Model Vetting, FDA-Style

Kevin Hassett says the administration is studying an executive order that would require all AI companies to pass security checks before releasing new models.

Enterprise DNA News | via Bloomberg

The Trump administration is considering a significant shift in its AI policy stance. National Economic Council Director Kevin Hassett confirmed on May 6 that the White House is studying an executive order that would require AI companies to submit new models for security vetting before public release — a process he compared directly to FDA drug approval.

“We’re studying, possibly an executive order to give a clear roadmap to everybody about how this is going to go,” Hassett told reporters. “Future AIs that also potentially create vulnerabilities should go through a process so that they’re released to the wild after they’ve been proven safe, just like an FDA drug.”

This would mark a notable reversal for an administration that came into office promising a hands-off approach to AI development.

What Triggered the Shift

The immediate catalyst is Anthropic’s Mythos model, which the company has described as far ahead of other models in cybersecurity capabilities — and far too dangerous to release publicly. Anthropic has restricted Mythos access to a small group of approved organizations and has been in discussions with government officials about responsible deployment.

Hassett referenced Mythos directly, noting that “Mythos is the first of them, but it’s incumbent on us to build a system.” The implication is that the administration wants a repeatable process for evaluating highly capable AI models that could pose cybersecurity risks if misused.

Testing requirements under the potential order would, in Hassett’s words, “really quite likely” apply to all AI companies — not just those with dangerous models sitting on the shelf.

Voluntary Testing Already Underway

The Commerce Department moved in parallel this week, announcing an expanded voluntary testing program through the Center for AI Standards and Innovation (CAISI). Google, Microsoft, and xAI signed agreements giving US government evaluators early access to frontier models before public release. OpenAI and Anthropic were already part of the initiative.

The executive order being studied would go further, potentially making such testing mandatory rather than optional.

Tensions Within the Administration

The proposal is far from settled. White House Chief of Staff Susie Wiles pushed back on the idea of government picking winners and losers in AI, and there are genuine disagreements within the administration about how prescriptive any requirements should be.

Multiple reports suggest one or more AI-related executive orders are likely within the next two weeks, but the scope and mandatory nature of any vetting process remain contested. The administration is trying to thread a needle: protect national security and critical infrastructure from AI-enabled attacks without creating a bureaucratic approval regime that slows American AI development while China moves faster.

That tension is real and not easily resolved. An FDA-style approval process fits awkwardly with software: a model can be updated daily, while an approved drug stays the same until it is reformulated.

What This Means for Business

If a mandatory vetting framework takes shape, the practical implications depend heavily on what counts as a “new model” and what capability threshold triggers review. A framework targeting only frontier models above a certain level would affect a handful of companies; broad definitions could slow the pace of model releases across the entire industry.

For enterprise buyers, this creates a new dimension of vendor risk to track. Companies already planning AI procurement around OpenAI, Anthropic, Google, or Microsoft models should factor in potential delays if a vetting process is introduced. The upside is a government stamp of approval on models cleared for use — which may actually accelerate adoption in regulated industries like finance, healthcare, and defense where compliance teams are cautious about untested AI.

For the AI industry itself, the story is less about one executive order and more about a direction of travel. The voluntary testing regime is becoming an informal standard. The question is how quickly it becomes a formal requirement and what the consequences are for companies that don’t comply.

Enterprise DNA’s view: the direction is clear regardless of what the executive order ultimately says. AI models with significant cybersecurity capabilities will face increased government scrutiny. Businesses building AI strategies should plan for a procurement environment where some models may have restricted availability and others come with government clearance as a feature. This is different from the current state, and it’s worth adjusting vendor selection criteria now.

The complete picture of US AI policy is still forming. What looked like a pure innovation-first, deregulation agenda is developing more nuance as the capabilities of frontier models become genuinely consequential.