Connecticut has passed one of the most comprehensive artificial intelligence laws in the United States, and Governor Ned Lamont is expected to sign it into law. The bill, SB5 — formally the Connecticut Artificial Intelligence Responsibility and Transparency Act — cleared the state House in a 131-17 vote and the Senate 32-4, both with the kind of bipartisan margins that suggest this law will stick.
For businesses that use AI in employment decisions, customer interactions, or operations, this is the regulatory playbook that other states are watching closely.
What SB5 Actually Does
The law covers several distinct areas: frontier AI models, consumer chatbots, employment decisions, and content provenance. The parts that will affect the most businesses immediately are the employment and chatbot provisions.
Employment Decision Technology
Companies using AI as a “substantial factor” in hiring, promotion, discipline, or dismissal have new obligations. Specifically, they must:
- Give employees and job applicants written notice that AI is being used in the decision-making process
- Explain the purpose of the system
- Disclose what categories of data it uses and where that data comes from
The “substantial factor” framing matters. Connecticut is not banning AI from employment decisions. It is requiring transparency about when AI plays a meaningful role. That is a more pragmatic approach than outright prohibition, and it is closer to what employers can realistically implement.
Developers of AI tools used in employment contexts are also on the hook. They must give deployers the compliance information they need to meet the law’s requirements. If you are buying an AI hiring tool, the vendor now has a legal obligation to make it possible for you to comply.
Chatbot and Consumer AI Requirements
Businesses deploying conversational AI systems that interact with Connecticut residents need to disclose that users are talking to an automated system. The disclosure requirements apply to consumer-facing applications, which catches a wide range of use cases, from customer service bots to virtual assistants embedded in websites.
Frontier Model Obligations
Developers of frontier AI models face their own obligations around safety testing and reporting. This provision is aimed squarely at the large AI labs, not at most businesses deploying existing tools.
Why Connecticut, and Why Now
Connecticut has been working toward AI regulation for several years. Previous versions of the bill stalled, partly because Governor Lamont was concerned that aggressive regulation would make the state less attractive for technology investment. This version struck a balance he was willing to accept.
The shift in tone matters. Connecticut has historically been cautious about technology regulation, so a bipartisan supermajority voting for this bill signals that policymakers across the political spectrum see AI transparency requirements as reasonable, not extreme.
The state’s position in the northeastern corridor, home to major financial services firms, insurance companies, and healthcare systems, also explains why legislators were focused on employment and consumer applications rather than narrower tech-sector concerns.
What Other States Are Doing
Connecticut is not alone. Colorado’s SB189 is moving through its legislature. Hawaii’s SB3001 is on track for passage. Several states have moved or are moving on narrower AI bills targeting therapy chatbots, companion AI, and watermarking of AI-generated content.
The pattern that is emerging is not a single federal AI law but a patchwork of state laws, each with overlapping but distinct requirements. A business operating across multiple states in 2026 may find itself subject to five or six different AI disclosure regimes before any federal framework arrives.
In March 2026, the White House released a National Policy Framework for Artificial Intelligence urging Congress to replace state-level variation with a uniform federal approach. Congress has not acted on that recommendation.
What This Means for Business
Audit your AI stack now. If you use AI in recruitment, performance management, or customer interactions, map which tools are involved and what role they play. “Substantial factor” is the threshold, and if a tool is influencing outcomes, document it.
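One lightweight way to capture that documentation is a structured inventory of every AI tool and the role it plays. The sketch below is illustrative only; the field names, categories, and the example vendor are assumptions for this article, not terms or definitions taken from SB5.

```typescript
// Illustrative sketch of an AI tool inventory record for an internal audit.
// Field names and categories are assumptions, not SB5 terminology.
interface AiToolRecord {
  name: string;                  // tool or vendor product name
  useCase: "recruitment" | "performance" | "customer-interaction" | "other";
  substantialFactor: boolean;    // does the tool influence decision outcomes?
  dataCategories: string[];      // categories of data the tool uses
  dataSources: string[];         // where that data comes from
  vendorComplianceDocs?: string; // link to vendor-supplied compliance material
}

const inventory: AiToolRecord[] = [
  {
    name: "ExampleScreen (hypothetical vendor)",
    useCase: "recruitment",
    substantialFactor: true,
    dataCategories: ["resume text", "assessment scores"],
    dataSources: ["applicant submissions", "third-party assessment provider"],
    vendorComplianceDocs: "https://example.com/compliance", // placeholder URL
  },
];

// Flag the tools that will need disclosure language and vendor follow-up.
const needsReview = inventory.filter((tool) => tool.substantialFactor);
console.log(`${needsReview.length} tool(s) appear to need disclosure review.`);
```

Even a simple record like this makes the later steps easier: it tells HR what to disclose, and it tells procurement which vendors to press for compliance documentation.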
Get compliance commitments from vendors. SB5 places obligations on AI developers, not just deployers, but deployers are still responsible for the end result. Ask your AI vendors what they are providing to support compliance. If they cannot answer that question, it is a risk you are carrying.
Employee and candidate communications need updating. If you are in Connecticut or hiring people who live there, your job postings, offer letters, performance review processes, and onboarding materials may all need disclosure language. HR teams should be reviewing these now, not after the law takes effect.
Expect this framework to spread. Connecticut’s law will be used as a template. If you build compliance for Connecticut, you are building it for a model that other states will adapt. That makes it worth investing in properly rather than treating it as a one-off local requirement.
Customer-facing AI needs a disclosure layer. If you run a chatbot, virtual assistant, or any conversational AI that interacts with customers in Connecticut, you need a mechanism to identify the system as automated. This is not technically complex, but it requires deliberate implementation.
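As a rough illustration of what that deliberate implementation can look like in a web chat widget, here is a minimal sketch. The type names, function, and disclosure wording are placeholders chosen for this example; they are not language prescribed by SB5, so confirm the required wording against the statute and your counsel.

```typescript
// Minimal sketch: surface an automated-system disclosure before any
// assistant content in a chat widget. Wording is a placeholder, not
// statutory language.
type ChatMessage = { role: "system-notice" | "assistant" | "user"; text: string };

const DISCLOSURE: ChatMessage = {
  role: "system-notice",
  text: "You are chatting with an automated AI assistant, not a human agent.",
};

function startConversation(greeting: string): ChatMessage[] {
  // Put the disclosure first so it is shown before the user engages.
  return [DISCLOSURE, { role: "assistant", text: greeting }];
}

// Usage: render these messages in the chat UI before accepting user input.
const opening = startConversation("Hi! How can I help you today?");
console.log(opening.map((m) => `[${m.role}] ${m.text}`).join("\n"));
```

The point is less the code than the placement: the disclosure belongs at the start of the interaction, rendered by default, rather than buried in a terms-of-service link.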
The Bigger Shift
The era of operating AI systems in business without telling anyone is ending. That has always been the direction of travel: consumers increasingly expect transparency, and employees increasingly demand to know when AI is influencing decisions that affect them. Connecticut has now put that expectation into law.
The organisations that treat this as a compliance exercise will spend months scrambling to retrofit disclosures onto systems they built without transparency in mind. The organisations that treat it as a signal to build AI systems responsibly from the start will find compliance is a side effect of doing it right, not a separate project.
AI transparency is not anti-AI. It is the foundation that makes AI in business sustainable.
If you are figuring out how to implement AI across your operations in a way that is effective and compliant, Omni Advisory helps business leaders navigate exactly this kind of decision — the strategy, the vendor questions, and the governance approach.
Source
CT Mirror