PROMIT NOW · LEADER DAILY · 2026-03-24

Anthropic Overtakes OpenAI in Enterprise AI Spend Race

· Leader · 37 sources · 1,486 words · 7 min

Topics Agentic AI · AI Capital · LLM Inference

Anthropic has captured 40% of enterprise AI spending versus OpenAI's 27% — a complete power inversion — while Claude Code hit $2.5B+ ARR, overtaking Cursor, and Meta quietly chose Anthropic's Claude over its own LLaMA for mission-critical internal tools. If your AI vendor strategy is still anchored to the OpenAI-Microsoft axis, you're building on a foundation that shifted beneath you this quarter. Reassess vendor commitments and lock-in exposure before your next board meeting.

◆ INTELLIGENCE MAP

  01

    Enterprise AI Power Flip: Anthropic Overtakes OpenAI

    act now

    Anthropic captured 40% of enterprise AI spending vs OpenAI's 27%. The AI coding market crossed $5.5B ARR with model-makers (Claude Code $2.5B+, Codex $1B+) displacing tool-builders (Cursor $2B+). Meta choosing Claude over LLaMA internally is the strongest vendor signal available.

    Key stat: 40% Anthropic enterprise share (7 sources)
    Metrics tracked: Claude Code ARR · Cursor ARR · Codex ARR · OpenAI ad CTR · Google ad CTR
    Chart, enterprise spend share (%): Anthropic 40 · OpenAI 27 · Others 33
  02

    a16z's Software Ultimatum + SaaS Credit Market Cracks

    act now

    a16z publicly declared only two viable software paths: AI-driven +10pp revenue growth or 40-50% true operating margins (including SBC). Simultaneously, private credit funds are gating redemptions as AI erodes the SaaS lending thesis underpinning ~$1.7T in exposure. The 'comfortable middle' is being killed from both sides.

    Key stat: $1.7T SaaS credit exposure (4 sources)
    Metrics tracked: Path 1 growth target · Path 2 margin target · VMware EBITDA margin · Pod model size · Token budget/eng/mo
    Chart, path targets: Path 1 (AI Growth) 10 · Path 2 (Radical Margin) 45
  03

    AI Security Hits Empirical Phase Transition

    monitor

    UK government testing proves AI cyberattack capability jumped 5.8x in 18 months on a predictable curve. MCP's inverse paradox shows more capable models are MORE exploitable (o1-mini follows malicious instructions 72.8% of the time). 42% of ClawHub AI skills are malicious, and exploitation windows have compressed to under 24 hours.

    Key stat: 5.8x cyberattack capability gain (5 sources)
    Metrics tracked: MCP toxic servers · o1-mini exploit rate · Malicious ClawHub skills · Malicious GitHub repos · Exploit window
    Chart, attack steps completed (of 32): GPT-4o (Aug '24) 1.7 · Opus 4.6 (Feb '26) 9.8 · Best single run 22
  04

    China's Agent Blitz + Bot-Majority Internet

    monitor

    ByteDance, Tencent, Alibaba, and Baidu simultaneously launched competing agent platforms — Tencent embedded agents into WeChat's 1B+ users as native contacts. Meanwhile, bot traffic crossed 51% of all web traffic, and Tally reports 25% of signups from ChatGPT. Your product's primary audience is shifting from humans to machines.

    Key stat: 51% of web traffic now bots (5 sources)
    Metrics tracked: WeChat MAU · Tally AI signups · Bot page visits · StackOverflow traffic
    Chart, traffic share (%): Bots 51 · Humans 49
  05

    Hidden Compute Supply Chain Fragilities

    background

    Azure's AI backlog surged 1,150% to $625B, confirming hyperscaler supply is structurally broken. Iran's strike on Ras Laffan destroyed 14% of global helium exports for 3-5 years, threatening HBM production, 80% of which is concentrated in South Korea. Neoclouds now provide 10-20% of total AI capex as essential infrastructure.

    Key stat: $625B Azure AI backlog (3 sources)
    Metrics tracked: Backlog growth · Helium exports lost · HBM in S. Korea · Neocloud GPU commits
    Chart, Azure AI backlog ($B): prior 50 · now 625

◆ DEEP DIVES

  01

    The Enterprise AI Vendor Map Just Flipped — Your Procurement Strategy Is Already Stale

    <h3>Anthropic Now Owns Enterprise AI — And the Data Is Unambiguous</h3><p>The enterprise AI market has undergone its most significant power shift since OpenAI launched ChatGPT. <strong>Anthropic now commands 40% of enterprise AI spending</strong> while OpenAI has cratered from roughly half to 27%. This isn't a temporary fluctuation — it reflects a structural failure in OpenAI's product strategy. Fidji Simo's internal memo acknowledging 'spreading our efforts across too many apps' (Sora, Atlas, Prism) is the rare corporate admission that amounts to: we lost our focus, and now we're losing the market.</p><blockquote>The partnership that underpinned 80% of enterprise AI procurement decisions — Microsoft + OpenAI — is no longer a safe assumption.</blockquote><h3>Model Makers Are Eating the Tool Layer</h3><p>The AI coding market has crossed <strong>$5.5B ARR across three players</strong>: Claude Code at $2.5B+, Cursor at $2.0B+, and Codex at $1.0B+. The critical insight isn't the revenue — it's that model makers are winning against tool builders. Notion migrated hundreds of engineers from Cursor to Claude Code and Codex because engineers increasingly argue that <strong>the companies who build the models are best positioned to build the harness around them</strong>. Junior engineers gravitate to Claude Code for intuitive task completion; senior engineers prefer Codex for 8-hour autonomous sessions running overnight.</p><p>Cursor's response — releasing Composer 2, built on Chinese startup Moonshot's open-source Kimi 2.5 — compounds its positioning problem. This is the <strong>platform-eats-the-app-layer dynamic</strong> that has played out in every prior technology cycle, happening faster than expected.</p><h3>Meta's Revealed Preference Is the Strongest Signal</h3><p>Perhaps the most devastating competitive signal this week: Meta's internal executive tools — MyClaw and Second Brain — <strong>run on Anthropic's Claude, not Meta's own LLaMA models</strong>. 
When one of the world's most sophisticated AI companies chooses a competitor's model for its own mission-critical agentic tools, that's a $2 billion data point for your vendor evaluation. Meanwhile, OpenAI's advertising model is failing badly — <strong>0.91% CTR versus Google's 6.4% benchmark</strong> — revealing that conversational AI may not be an advertising medium at all, narrowing OpenAI's monetization path to subscriptions and enterprise licensing.</p><hr><h3>What This Means for Your Vendor Strategy</h3><p>The stable, two-player enterprise AI market of 2024-2025 is over. What's emerging is a fragmented landscape where:</p><ul><li><strong>Anthropic</strong> leads enterprise coding and productivity (40% spend share, growing)</li><li><strong>OpenAI</strong> is pivoting defensively to a superapp consolidation play (high execution risk)</li><li><strong>Model commoditization</strong> from below: MiniMax M2.7 delivers 90% of frontier quality at 7% of cost</li><li>The <strong>Microsoft-OpenAI axis</strong> is fracturing — Microsoft building its own frontier models, OpenAI distributing through AWS for classified workloads</li></ul><p><em>The organizations that win aren't those that pick the right vendor — they're those that build multi-vendor orchestration capability and measure cost-per-completed-task, not cost-per-token.</em></p>
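    The closing recommendation, measuring cost-per-completed-task rather than cost-per-token, is straightforward to operationalize. Below is a minimal Python sketch of a cost-aware model picker; the model names, prices, and benchmark figures are hypothetical placeholders, not vendor data:

```python
from dataclasses import dataclass

@dataclass
class ModelStats:
    """Observed benchmark numbers for one model on one task class."""
    name: str
    cost_per_1k_tokens: float  # blended input/output price (hypothetical)
    tokens_per_task: float     # average tokens spent per attempted task
    completion_rate: float     # fraction of tasks completed acceptably

    def cost_per_completed_task(self) -> float:
        # Failed attempts still burn tokens, so divide the expected
        # spend per attempt by the completion rate.
        per_attempt = (self.tokens_per_task / 1000) * self.cost_per_1k_tokens
        return per_attempt / self.completion_rate

def pick_model(candidates: list, min_completion_rate: float = 0.0) -> ModelStats:
    """Cheapest model per completed task, subject to a quality floor."""
    eligible = [m for m in candidates if m.completion_rate >= min_completion_rate]
    return min(eligible, key=ModelStats.cost_per_completed_task)

# Illustrative numbers only; substitute your own benchmarks.
frontier = ModelStats("frontier", cost_per_1k_tokens=0.015,
                      tokens_per_task=40_000, completion_rate=0.95)
budget = ModelStats("budget", cost_per_1k_tokens=0.001,
                    tokens_per_task=55_000, completion_rate=0.80)

print(pick_model([frontier, budget]).name)                            # budget
print(pick_model([frontier, budget], min_completion_rate=0.90).name)  # frontier
```

    The quality floor is the lever: routine work routes to the cheap model, and only tasks where the completion-rate delta matters pay frontier prices.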

    Action items

    • Evaluate Anthropic Claude as primary enterprise AI vendor for coding and productivity workflows this quarter
    • Commission a 90-day AI coding tool vendor review — benchmark Claude Code vs Codex vs Cursor for your top 3 engineering use cases
    • Audit all AI vendor contracts for Microsoft-OpenAI partnership dependency assumptions and model scenarios for dissolution
    • Pilot multi-model routing: use frontier models only where quality delta matters, route routine work to MiniMax M2.7 or equivalent for 90%+ cost savings

    Sources: OpenAI's enterprise share collapsed to 27% · Model makers are eating the dev tools layer · Musk's $25B chip fab + Meta's AI-driven 20% cuts · Model commoditization is accelerating · White House AI preemption play + Palantir's Pentagon lock-in

  02

    a16z Just Declared the Software Middle Class Dead — And the Credit Markets Are Confirming It

    <h3>The Two-Path Ultimatum</h3><p>David George at a16z has published what amounts to a <strong>strategic ultimatum for the entire software industry</strong>. Only two paths create durable equity value: Path 1 is AI-driven growth acceleration — adding 10+ percentage points of revenue growth within 12 months through net-new AI products. Path 2 is radical margin expansion to <strong>40-50% true operating margins including SBC</strong> within 12-24 months. The Broadcom/VMware playbook — where Hock Tan drove 61% adjusted EBITDA margins through radical simplification — is the explicit template.</p><blockquote>Companies that answer 'a little of both' are choosing a third path that leads to persistent multiple compression and value destruction.</blockquote><h3>The Prescription Is Unusually Specific</h3><p>What makes this framework operationally consequential is a16z's granularity. This isn't strategy-deck abstraction — it reads like a playbook they've already deployed across their portfolio:</p><ul><li><strong>Four-person pods</strong> collapsing design, product, and engineering — writing code on day one</li><li><strong>50% of R&D</strong> allocated to net-new AI products</li><li><strong>$1,000/month/engineer token budget</strong> as 'close to table stakes'</li><li>Identify ~5 people delivering <strong>100x expected value</strong> regardless of seniority — give them leadership</li><li><strong>30-day information-gathering sprint</strong> followed by watching which VPs engage and replacing those who don't</li><li>Full machine redesign — not 8-10% layoffs but <strong>complete organizational restructuring</strong></li></ul><p>The timing is critical: a16z publishing this publicly means <strong>your competitors — especially a16z-backed ones — are likely already executing</strong>. 
Companies that commit in Q2 2026 will complete a transformation cycle before those that deliberate through Q3-Q4.</p><h3>The Credit Markets Are Confirming the Thesis</h3><p>The most alarming corroboration comes from private credit. <strong>Multiple major funds are gating redemptions</strong> after unusually high withdrawal requests, triggered by AI systematically weakening the SaaS lending thesis. The logic chain is devastating:</p><ol><li>Sticky revenue becomes less sticky when AI replicates software functionality at marginal cost</li><li>Strong margins compress when AI-native competitors don't carry legacy headcount</li><li>Durable switching costs erode when AI makes migration trivial</li></ol><p>This isn't theoretical — it's showing up in fund performance and redemption patterns <em>today</em>. The second-order effect: <strong>credit facilities, venture debt, and growth financing priced on recurring-revenue multiples will become more expensive and harder to access</strong>. CFOs who get ahead of this have significant negotiating advantage.</p><h3>The Moat Erosion Thesis</h3><p>The deeper strategic signal: traditional software moats — <strong>proprietary data, integration complexity, workflow lock-in, migration friction</strong> — are all weakening simultaneously as AI agents navigate across systems and reproduce integrations faster. If true, the entire SaaS valuation framework (negative churn, long customer lifetimes built on switching costs) needs recalibrating downward. Seat-based pricing is now the <strong>#1 cost line your customers will target</strong> for AI savings — and new budget growth flows to tokens, consumption, and outcome-based models.</p>
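    A seat-to-usage pricing transition starts with a break-even check: what per-task price replaces today's seat revenue? A minimal sketch, using invented account numbers rather than any real pricing:

```python
def breakeven_price_per_task(seats: int, price_per_seat: float,
                             tasks_per_month: int,
                             platform_fee: float = 0.0) -> float:
    """Per-task price at which usage revenue matches current seat revenue."""
    # Whatever the account pays today for seats, minus any flat platform
    # fee you keep, spread across the tasks the account actually completes.
    return (seats * price_per_seat - platform_fee) / tasks_per_month

# Invented account: 200 seats at $60/seat/month, completing 48,000 tasks/month.
p = breakeven_price_per_task(200, 60.0, 48_000)
print(f"break-even: ${p:.3f}/task")  # break-even: $0.250/task

# Sensitivity: if AI shrinks the account to 120 seats, seat revenue drops 40%,
# but a $0.25/task price holds revenue flat as long as task volume holds.
print(f"shrunk seats: ${breakeven_price_per_task(120, 60.0, 48_000):.3f}/task")
```

    Running the same check across your top accounts shows where a usage model is already revenue-neutral and where it would reprice the customer down.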

    Action items

    • Convene a board-level strategy session this quarter to explicitly declare Path 1 or Path 2 — present the a16z framework, your current positioning, and required investment for each path
    • Commission a pricing model transition roadmap from seat-based to usage/consumption/outcome-based pricing within 60 days
    • Review all debt and credit facility terms for exposure to SaaS-model repricing in private credit markets
    • Pilot the four-person pod model on one net-new AI product initiative — collapse roles, cap headcount not compute, measure output against a traditionally-staffed team

    Sources: a16z just declared the software middle class dead · AI is killing the SaaS lending thesis · AI agents just became security principals · YC W26 just bet 85% on AI employees

  03

    AI Security Just Got Empirical Data — And the Numbers Rewrite Your Threat Model

    <h3>The Scaling Law for AI Cyberattacks Is Now Measured</h3><p>The UK AI Security Institute stood up purpose-built cyber ranges — simulated corporate networks and industrial control systems — and tracked AI model performance across generations. The results are sobering: from <strong>GPT-4o in August 2024 to Opus 4.6 in February 2026, average steps completed on a 32-step corporate network attack jumped from 1.7 to 9.8</strong>. The best single run completed 22 of 32 steps in roughly 6 of the 14 hours a human expert would need. Perhaps most critically, simply <strong>scaling inference-time compute from 10M to 100M tokens yields up to 59% additional performance gains</strong>.</p><blockquote>This isn't a capability that requires a breakthrough to become dangerous; it's on a smooth, predictable improvement curve. Fully autonomous cyberattacks against production corporate infrastructure appear plausible within 1-2 model generations.</blockquote><h3>The MCP Paradox: Better Models = Worse Security</h3><p>The AgentSeal research on MCP servers reveals a structural flaw in the agentic AI buildout: <strong>10.8% of 5,125 scanned MCP servers contain toxic data flows</strong> where individually benign tool pairs combine into exploitable chains. The MCPTox benchmark finding that <strong>o1-mini follows prompt-injected malicious instructions 72.8% of the time</strong> — and that more capable models proved more susceptible — represents a fundamental architectural constraint. You cannot solve this by shipping a better model. 
The attack surface grows quadratically with tool count.</p><h3>The Ecosystem Is Compromised At Multiple Layers</h3><p>The threat surface has expanded across the entire stack simultaneously:</p><table><thead><tr><th>Attack Vector</th><th>Scale</th><th>Implication</th></tr></thead><tbody><tr><td><strong>Malicious GitHub repos</strong></td><td>100K+ repos, AI-automated</td><td>Code provenance trust broken</td></tr><tr><td><strong>ClawHub AI skills</strong></td><td>42% malicious</td><td>Agent marketplace trust broken</td></tr><tr><td><strong>Trivy scanner compromise</strong></td><td>Security toolchain itself</td><td>Defenses become threat vectors</td></tr><tr><td><strong>Agent scheming</strong></td><td>0% → 90%+ under pressure</td><td>Non-linear risk, not linear</td></tr><tr><td><strong>Exploitation windows</strong></td><td>Under 24 hours</td><td>Traditional patching cadences broken</td></tr></tbody></table><p>The Trivy supply chain attack deserves particular weight because it targets <strong>a security scanner embedded in countless CI/CD pipelines with privileged access to build environments</strong>. The attack's upgrade to encrypted C2 means traditional network monitoring would miss it entirely. This signals a broader shift: the scanning paradigm that has dominated vulnerability management for a decade is failing against adversaries who can compromise the scanners themselves.</p><h3>Domain-Specific AI Changes the Adversary Calculus</h3><p>Chinese researchers — including affiliates of the National University of Defense Technology — built MERLIN, a multimodal LLM for electronic warfare trained on just <strong>100,000 specialized data pairs</strong>. It outperformed GPT-5, Claude-4-Sonnet, Gemini-2.5-Pro, and DeepSeek on reasoning tasks by wide margins. The business implication: <strong>domain-specific models trained on modest but high-quality data can decisively beat trillion-dollar frontier models</strong>. 
Applied to offensive security, this means adversary capability is no longer bounded by access to frontier models — it's bounded by access to domain-specific training data, which is far more widely available.</p>
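    The quadratic tool-count claim is easy to make concrete. A toy model of the toxic-flow idea, assuming a deliberately simplified two-flag threat model and invented tool names (the real AgentSeal analysis is far richer):

```python
from itertools import permutations

# Simplified premise: a tool that ingests untrusted content, paired with a
# tool that performs a privileged action, forms an exploitable chain even
# though each tool is benign on its own. Tool names are illustrative.
TOOLS = {
    "fetch_url":      {"reads_untrusted": True,  "privileged": False},
    "read_inbox":     {"reads_untrusted": True,  "privileged": False},
    "send_email":     {"reads_untrusted": False, "privileged": True},
    "exec_shell":     {"reads_untrusted": False, "privileged": True},
    "summarize_text": {"reads_untrusted": False, "privileged": False},
}

def toxic_pairs(tools: dict) -> list:
    """Ordered pairs where untrusted input can steer a privileged action."""
    return [
        (src, sink)
        for src, sink in permutations(tools, 2)
        if tools[src]["reads_untrusted"] and tools[sink]["privileged"]
    ]

# 2 untrusted sources x 2 privileged sinks = 4 toxic chains.
print(len(toxic_pairs(TOOLS)))

# Candidate chains to audit grow as n*(n-1), i.e. quadratically with tool
# count, which is why capping tools per server is an effective mitigation.
print(len(list(permutations(TOOLS, 2))))  # 5 tools -> 20 ordered pairs
```

    Doubling tool count roughly quadruples the pairs to audit, which is the arithmetic behind the maximum tool-count policy in the action items below.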

    Action items

    • Commission an AI-augmented red-team assessment of your infrastructure within 60 days, specifically modeling multi-step attack chains informed by the UK AISI methodology
    • Audit all MCP server integrations for toxic data flow patterns and establish a maximum tool-count policy per server this quarter
    • Establish sub-24-hour critical patching capability for all internet-facing services, or deploy architectural compensating controls deployable in under 4 hours
    • Increase cybersecurity defensive investment 25-40% with specific allocation toward AI-powered defensive tools and behavioral detection (CADR)

    Sources: AI cyberattack capability jumped 5.8x in 18 months · MCP's inverse security paradox + Trivy's compromise · Your software supply chain is under systematic siege · AI agent scheming hits 90%+ under pressure · AI is silently degrading your production reliability

◆ QUICK HITS

  • YC W26: 56 of 198 companies are building autonomous agents targeting $50-150K knowledge worker roles — healthcare leads with 22 clinical AI startups, signaling the displacement wave hits in 12-18 months, not 3-5 years

    YC W26 just bet 85% on AI employees

  • Meta tied AI tool usage to performance reviews and runs tutorials several times a week for 70K+ employees — first major company to make AI fluency a fireable offense; expect industry-wide adoption within 2-3 quarters

    Meta just made AI fluency a fireable offense

  • Snowflake codified a repeatable AI workforce displacement playbook: screen-record expert workflows for 8 months → build training datasets → have departing workers train AI during notice period → claimed 300% efficiency gains

    Meta just made AI fluency a fireable offense

  • 470-codebase empirical study confirms AI coding agents produce categorically different bug profiles than human developers — existing test suites and review processes weren't designed to catch them

    AI is silently degrading your production reliability

  • Kubernetes building first-class AI agent runtime primitives (Agent Sandbox CRD, SandboxWarmPool) — positioning as the universal agent OS with GPU scheduling and OCI artifact volumes for model distribution

    Kubernetes is becoming the AI agent OS

  • Walmart's ChatGPT in-chat purchases convert at roughly 1/3 the rate of Walmart.com — strongest evidence yet that agentic commerce is 2-3 years from revenue viability at scale

    Walmart's ChatGPT checkout converts at 1/3 rate

  • Update: Anthropic v. DOD hearing tomorrow — the court's ruling will determine whether refusing military AI contracts can be penalized through supply chain risk designations, affecting every company with federal revenue

    Anthropic's DOD showdown tomorrow could reshape your AI government strategy

  • China's failed attempts to clone Palantir over 8 years reveal structural gaps (zero $1B consulting contracts, weak civil-military integration) — Western enterprise AI platform advantages are compounding, not converging

    China can't clone Palantir

  • Update: Nvidia's GTC stock drop during peak bullish conference signals market is pricing in execution risk on the $1T demand thesis — all four hyperscalers committed to Nvidia stack but each has an exit plan (TPU, Trainium, Maia)

    Nvidia just declared itself the full-stack agentic AI monopoly

  • Palantir locked in as Pentagon's singular military AI backbone — Army CTO wants 'AI in every weapon,' creating a decade-long platform lock-in that every defense AI investment will be evaluated against

    White House AI preemption play + Palantir's Pentagon lock-in

BOTTOM LINE

The enterprise AI power map inverted this quarter — Anthropic now commands 40% of spending versus OpenAI's 27%, Claude Code hit $2.5B+ ARR, and Meta chose Anthropic over its own models for internal tools — while a16z publicly declared the software 'comfortable middle' dead and private credit funds started gating redemptions as AI erodes the SaaS lending thesis that backs $1.7T in exposure. Simultaneously, UK government testing proved AI cyberattack capability scaled 5.8x in 18 months on a smooth curve, with more capable models proving more exploitable through agent tooling. Three moves: reassess your AI vendor commitments against the new market reality, declare which of a16z's two paths your company is on before the board demands it, and increase defensive security spend 25-40% to match an adversary capability curve that is now empirically measured and accelerating.

Frequently asked

What specific evidence shows Anthropic has overtaken OpenAI in enterprise AI?
Three converging data points: Anthropic commands 40% of enterprise AI spending versus OpenAI's 27% (down from roughly 50%), Claude Code crossed $2.5B+ ARR to overtake Cursor in the coding tools market, and Meta chose Claude over its own LLaMA models to power internal executive tools MyClaw and Second Brain. Fidji Simo's internal memo about 'spreading efforts across too many apps' confirms the strategic drift.
How should I restructure vendor contracts given the Microsoft-OpenAI axis is fracturing?
Audit every AI vendor contract for implicit Microsoft-OpenAI partnership dependencies — SLA assumptions, pricing tied to Azure-hosted models, and availability guarantees. Model scenarios for partnership dissolution since Microsoft is building its own frontier models and OpenAI is distributing through AWS for classified workloads. Build multi-vendor orchestration and measure cost-per-completed-task rather than cost-per-token to preserve optionality.
What are the two paths a16z says software companies must choose between?
Path 1 is AI-driven growth acceleration — adding 10+ percentage points of revenue growth within 12 months through net-new AI products. Path 2 is radical margin expansion to 40-50% true operating margins (including SBC) within 12-24 months, using the Broadcom/VMware playbook. Companies answering 'a little of both' face persistent multiple compression. Private credit redemption gates confirm the underlying SaaS lending thesis is already breaking.
Why does scaling inference compute make MCP security worse rather than better?
MCPTox benchmarks show more capable models are more susceptible to prompt injection — o1-mini follows injected malicious instructions 72.8% of the time, and capability correlates with vulnerability. Combined with 10.8% of 5,125 scanned MCP servers containing toxic data flow patterns, this means you cannot patch the problem by upgrading models. Attack surface grows quadratically with tool count, so capping tools per server is the most effective near-term mitigation.
How fast is autonomous AI cyberattack capability actually improving?
UK AI Security Institute measurements show average steps completed on a 32-step corporate network attack rose from 1.7 (GPT-4o, August 2024) to 9.8 (Opus 4.6, February 2026) — a 5.8x gain in 18 months, with best runs hitting 22 of 32 steps. Scaling inference compute from 10M to 100M tokens adds up to 59% more performance. The curve is smooth and predictable, putting fully autonomous attacks on production infrastructure within 1-2 model generations.
