Anthropic signed a seven-year, $1.8 billion computing deal with Akamai Technologies on May 8, 2026, locking in one of the largest infrastructure commitments the AI industry has seen from a single lab. Akamai shares jumped more than 20 percent on the news — the biggest single-day move in years for the company — and the contract is the largest deal Akamai has ever signed.
The driver is simple: Anthropic cannot build compute fast enough to meet demand for Claude. CEO Dario Amodei has said the company is “working as quickly as possible” to secure more computing resources after annualized revenue roughly tripled since the start of the year to about $30 billion, on the back of what the company described as 80x growth in usage and revenue during the first quarter alone.
A Multi-Cloud Infrastructure Push
The Akamai deal is one piece of a broader compute acquisition strategy Anthropic is running in parallel. The company has also secured capacity at xAI’s Colossus data center, signed a long-term arrangement with Google Cloud backed by Broadcom-built TPU infrastructure, and holds a significant commitment from CoreWeave. The strategy is deliberately diversified: no single provider can meet the scale Anthropic now requires.
Akamai’s role is to bring Claude closer to the edge. The company’s distributed network — built originally for content delivery — can host AI inference workloads closer to where enterprise customers actually run their businesses. That matters for latency-sensitive applications like voice AI and real-time agent workflows.
For Akamai, the Anthropic contract validates a strategic pivot the company began in 2024 when it launched its cloud services division. Going from content delivery to AI infrastructure was a long shot; a seven-year, $1.8 billion anchor contract from one of the fastest-growing AI companies in the world changes that narrative entirely.
What the Numbers Signal
Anthropic’s revenue trajectory is worth sitting with. The company went from roughly $10 billion annualized revenue at the start of the year to $30 billion by May 2026. That is not normal software growth. It reflects enterprises signing real contracts — financial services firms running Claude-powered analysis, healthcare companies automating documentation, technology teams using Claude Code to accelerate development.
Each of those enterprise workloads consumes significant compute. When you multiply usage across tens of thousands of enterprise deployments, the demand on Anthropic’s infrastructure becomes enormous. The Akamai deal, combined with the other infrastructure commitments, is Anthropic’s answer to that demand.
What This Means for Business
Stability at scale. If you are currently building workflows on Claude or evaluating it for your business, this infrastructure buildout is directly relevant to you. The risk of capacity-driven rate limits or pricing spikes decreases significantly as Anthropic diversifies and expands its compute footprint. A seven-year deal signals commitment, not a short-term fix.
Edge inference for enterprise. Akamai’s network means Anthropic can eventually deliver lower-latency Claude access for workloads that need it — customer-facing voice agents, real-time document processing, live data analysis. If your business relies on fast response times, this matters.
Infrastructure confidence. One concern businesses have when committing to an AI provider is longevity. A company that burns through compute capacity faster than it can secure it is a fragile foundation to build on. The scale of Anthropic’s infrastructure deals — billions locked in across multiple providers — is a signal that this company is planning to be here for a long time.
The cost curve. As Anthropic’s infrastructure expands and compute costs continue falling industry-wide, the economics of running Claude-powered applications improve. Workloads that are expensive to run today become cheaper over the next two to three years. Businesses that build on AI now, even at current cost levels, will benefit from that falling cost curve without having to rebuild their systems.
The Broader Context
Anthropic is not the only lab racing to secure compute. Microsoft, Google, and Amazon have all announced trillion-dollar-scale infrastructure commitments across 2025 and 2026. The pattern is clear: every major AI provider is treating compute capacity as a strategic asset, not just an operating cost. Whoever has the most reliable, lowest-latency infrastructure wins enterprise contracts.
For businesses watching from the sidelines, this infrastructure race has a practical implication: the AI tools available to enterprise customers are about to get significantly more capable, faster, and more reliable. The compute that Anthropic is securing today is what makes the next generation of Claude possible.
The companies that will benefit most are the ones already building internal capability — understanding how to deploy AI agents, train teams on Claude-based tools, and identify which workflows are worth automating. By the time the infrastructure matures, the advantage goes to businesses that already know how to use it.
Enterprise DNA helps businesses build that internal capability. Whether through structured data and AI training for your team via EDNA Learn, or by deploying AI agents that run business operations with Omni by Enterprise DNA, the window to build meaningful AI advantage is open now, not after the infrastructure race settles.
Source
CNBC