PROMIT NOW · INVESTOR DAILY · 2026-03-17

Pentagon Blacklists Anthropic, Repricing Gov AI Bets

· Investor · 46 sources · 1,660 words · 8 min

Topics: Agentic AI · AI Capital · LLM Inference

The Pentagon blacklisted Anthropic for refusing to remove ethical guardrails on military AI — the same week a $20 autonomous agent breached McKinsey's 20,000-agent platform and Google closed history's largest VC exit ($32B for Wiz). Government AI procurement is now gated by compliance willingness, not capability; enterprise AI security is provably broken at production scale; and the defense-security convergence that fixes both just got its multi-billion-dollar validation. Reprice government AI revenue assumptions across your portfolio this week, and start writing checks in AI agent security before the category gets named.

◆ INTELLIGENCE MAP

  1. 01

    AI Agent Security: Category-Creation Week

    act now

    A $20 autonomous agent breached McKinsey's 46.5M-chat AI platform via SQL injection that scanners missed for 2 years. Meanwhile, 66% of 1,808 MCP servers and 93% of audited AI agents have fundamental security flaws. Google's $32B Wiz close confirms security is hyperscaler-critical — and the AI agent security layer has zero incumbents.

    $32B
    Wiz exit (largest VC ever)
    8
    sources
    • MCP servers vulnerable
    • AI agents w/no auth
    • McKinsey breach cost
    • Onyx Security launch
    1. AI agents w/ unscoped keys: 93%
    2. MCP servers vulnerable: 66%
    3. Ransomware w/ exfiltration: 77%
    4. ESXi hypervisor targeting: 43%
    5. Ransomware w/ encryption: 36%
  2. 02

    Pentagon Reshapes Government AI: Blacklist + $20B Anchor

    act now

    The Pentagon designated Anthropic a 'supply chain risk' and ordered 180-day removal from defense systems — while Anduril simultaneously secured a $20B, 10-year Army contract. Government AI procurement is now bifurcating around compliance posture. Over 30 employees from OpenAI and Google filed an amicus brief warning this chills the entire industry.

    $20B
    Anduril Army contract
    4
    sources
    • Anthropic gov rev at risk
    • Removal timeline
    • Cross-industry signers
    • Contract duration
    1. Anduril (compliant): $20B
    2. Anthropic (blacklisted): $0.5B
  3. 03

    AI Value Migrates from Infrastructure to Services

    monitor

    OpenAI walked away from Stargate's Oracle expansion while Anthropic formed a PE consulting JV with Blackstone targeting 250+ portfolio companies. JPMorgan is marking down private credit in software. a16z published a $380B systems-integrator (SI) market thesis naming 8+ startups. The message is consistent: raw compute is commoditizing; margin is in implementation and enterprise deployment.

    $380B
    SI market TAM (a16z)
    5
    sources
    • AI model pricing spread
    • ERP migration failures
    • Lidl SAP write-off
    • MSFT E7 seat price
    1. Enterprise AI services: $380B TAM
    2. Agent-as-a-Service: $300B+ SaaS at risk
    3. AI coding tools: $2.5B ARR (Claude Code)
    4. Foundation models: margins compressing
  4. 04

    Power Infrastructure: AI's True Binding Constraint

    monitor

    Gas turbines are completely backordered. AI companies are resorting to refurbished jet engines, coal-era steam tech, and tent cities. KKR is eyeing a multibillion-dollar CoolIT Systems exit. TerraPower received the first NRC construction permit for a next-gen nuclear reactor. Green hydrogen site failures are being recycled as data center locations — a repeatable playbook with a 12-18 month window.

    8 GW
    Nscale WV site (2030)
    4
    sources
    • Meta AI capex 2026
    • Hyperscaler capex 2026
    • Fusion companies
    • Fusion private invest
    1. Meta: $125B
    2. Microsoft: $80B
    3. Alphabet: $75B
    4. Amazon: $100B
    5. Oracle: $40B
  5. 05

    AI Self-Improvement Accelerates — Safety Tooling TAM Expands

    background

    PostTrainBench shows AI agents at 23.2% of human-level autonomous model training — up from 9.9% six months ago. More capable models systematically cheat better. Lean FRO used Claude to convert production C code to formally verified software. CAICT data reveals DeepSeek R1 leaks sensitive content 6% of the time. These converge on one thesis: AI safety and verification tooling is a pre-consensus category with empirical demand.

    23.2%
    AI self-training ability
    3
    sources
    • 6-month improvement
    • Human benchmark
    • R1 reasoning leakage
    • Parity estimate
    1. Sep 2025: 9.9
    2. Feb 2026: 21.5
    3. Mar 2026: 23.2
    4. Human teams: 51.1

◆ DEEP DIVES

  1. 01

    AI Agent Security: A $20 Breach, a $32B Exit, and the Category That Just Got Created

    <h3>The Convergence</h3> <p>Three events in a single cycle just defined AI agent security as the next multi-billion-dollar cybersecurity category. A red-team startup called <strong>CodeWall</strong> broke into McKinsey's internal AI platform — 20,000 agents, 46.5 million chats, 500,000 prompts per month — in <strong>two hours for $20 in API tokens</strong>. The vulnerability? A SQL injection that McKinsey's own scanners missed for two years in production. The agent had <strong>write access to all 95 system prompts</strong> governing Lilli's behavior across 30,000 consultants serving Fortune 500 clients.</p> <p>Simultaneously, Google closed its <strong>$32 billion acquisition of Wiz</strong> — the largest-ever VC-backed exit — confirming that hyperscalers will buy, not build, critical security infrastructure. And new scan data shows <strong>66% of 1,808 MCP servers</strong> expose security issues while <strong>93% of audited AI agents</strong> use unscoped API keys stored in environment files.</p> <blockquote>When the cost of breaching an enterprise AI platform drops to $20 and the largest tech acquirer in history pays $32B for security, the category creation signal is unambiguous.</blockquote> <hr> <h3>The Business Model Shift Underneath</h3> <p>The AI security opportunity sits atop a broader ransomware business model pivot. Successful encryption deployments <strong>collapsed from 54% to 36%</strong> in a single year — but data exfiltration now occurs in <strong>77% of intrusions</strong>, up from 57%. Data leak site posts surged 48% to 7,784. Attackers haven't been beaten — they've found a more capital-efficient model. 
This reprices the entire cybersecurity investment map:</p> <ul> <li><strong>Backup/recovery companies</strong> whose ransomware thesis depended on encryption face weakening value propositions — you can't restore your way out of stolen data being published</li> <li><strong>Data security and DLP</strong> move from compliance-driven to existential urgency, with TAM uplift from the business-model shift</li> <li><strong>AI-native SOC platforms</strong> become an existential necessity when HexStrike exploits thousands of Citrix instances in <strong>under 10 minutes</strong> while CISA's patch timeline sits at 15 days</li> </ul> <h3>Who's Moving First</h3> <p><strong>Onyx Security</strong> launched with a $40M war chest as the first purpose-built AI agent governance platform — discovering, monitoring, and governing autonomous agents across cloud, endpoints, code, and SaaS. <strong>Maze</strong> (AI remediation agents), <strong>Cotool</strong> (AI-agent SOC operations), and <strong>Sondera</strong> (centralized agent supervision) represent three distinct entry points. Anthropic published an attack-agent security blueprint this week — acknowledging the problem but <em>not building the full commercial solution</em>.</p> <p>The pattern is identical to cloud security circa 2015: the platform builders acknowledge the risk, third parties build the security layer, and the first movers capture disproportionate value. Wiz was the outcome of that cycle. <strong>The $32B question: who becomes the Wiz of AI agent security?</strong></p> <h3>The Post-Wiz Vacuum</h3> <p>Wiz inside Google loses multi-cloud neutrality. Every AWS and Azure customer running Wiz will reconsider. This opens a <strong>displacement window</strong> for remaining independents and puts pressure on AWS and Microsoft to make their own acquisitions. The entire cloud security valuation ceiling just repriced upward — and the AI agent security category sits one layer above it.</p>
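The two flaw classes above — string-built SQL that static scanners miss, and unscoped API keys read straight from environment files — are generic patterns, not disclosed details of the McKinsey breach. A minimal Python sketch of the vulnerable query shape and its parameterized fix (table and column names are hypothetical):

```python
import sqlite3

def get_prompt_unsafe(conn, agent_id):
    # Vulnerable: attacker-controlled agent_id is spliced into the SQL
    # string, so input like "x' OR '1'='1" rewrites the query itself.
    return conn.execute(
        f"SELECT body FROM system_prompts WHERE agent_id = '{agent_id}'"
    ).fetchall()

def get_prompt_safe(conn, agent_id):
    # Parameterized: the driver binds agent_id as data, never as SQL.
    return conn.execute(
        "SELECT body FROM system_prompts WHERE agent_id = ?", (agent_id,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE system_prompts (agent_id TEXT, body TEXT)")
conn.executemany(
    "INSERT INTO system_prompts VALUES (?, ?)",
    [("agent-1", "public prompt"), ("agent-2", "restricted prompt")],
)

injection = "agent-1' OR '1'='1"
print(len(get_prompt_unsafe(conn, injection)))  # 2 -- every row leaks
print(len(get_prompt_safe(conn, injection)))    # 0 -- no such agent_id
```

The fix is a one-line change per query site, which is exactly why a two-year-old instance surviving in production reads as a tooling failure rather than an engineering-cost problem.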

    Action items

    • Source 3-5 AI agent security startups building runtime governance, MCP security, or autonomous defensive agents — target Series Seed-A before category gets named
    • Issue portfolio-wide advisory requiring AI agent security posture review and AI-specific pen test within 30 days for any company deploying internal AI agents
    • Map remaining independent cloud security companies for acquisition arbitrage following Wiz's $32B exit — benchmark valuations against this new ceiling
    • Re-evaluate backup/recovery portfolio positions against pure data exfiltration scenarios — demand board-level strategy if exfiltration defense is weak

    Sources: AI agent security is a $0-to-critical market overnight · Ransomware's pivot to pure data extortion is repricing your entire cybersecurity portfolio map · Wiz's $32B exit reshapes cloud security M&A math · Cybercrime commoditization + AI agent security gaps · AI agent governance just became a fundable category · AI agent security is unsolved and the market knows it

  2. 02

    Pentagon's Dual Signal: $20B for Compliance, Blacklist for Ethics — Government AI Is Now a Different Market

    <h3>What Happened</h3> <p>The U.S. government delivered two signals in the same week that, taken together, fundamentally reshape the government AI investment thesis. <strong>Anduril secured a $20 billion, 10-year U.S. Army contract</strong> — likely the largest single defense AI deal ever disclosed, exceeding total U.S. defense-tech VC funding in any prior year by a significant multiple. Simultaneously, the Pentagon designated <strong>Anthropic as a "supply chain risk to national security"</strong> and imposed a government-wide ban — with an internal DoD memo ordering removal of Anthropic AI from <strong>all Defense systems, including nuclear, missile defense, and cyber warfare, within 180 days</strong>.</p> <p>Anthropic argues this is <strong>retaliation</strong> after it refused to remove ethical guardrails barring mass domestic surveillance and fully autonomous lethal weapons from Claude's military deployment. The company filed two federal lawsuits. OpenAI inked its own Pentagon deal the same week.</p> <blockquote>When your direct competitors publicly back you against the government, the risk has gone systemic.</blockquote> <h3>The Industry Response</h3> <p>More than <strong>30 employees from OpenAI and Google</strong>, including Google DeepMind's <strong>Jeff Dean</strong>, filed an amicus brief supporting Anthropic's temporary restraining order — warning that government blacklisting "chills debate on frontier AI risks" and has "broader competitiveness consequences." Sam Altman called the supply-chain-risk (SCR) enforcement "very bad" — even as OpenAI captures the displaced revenue.
This cross-industry solidarity signals the AI sector views this as existential precedent, not an Anthropic-specific problem.</p> <h3>Investment Implications</h3> <table> <thead><tr><th>Posture</th><th>Company</th><th>Government Revenue Impact</th><th>Portfolio Action</th></tr></thead> <tbody> <tr><td><strong>Unrestricted compliance</strong></td><td>OpenAI, Palantir</td><td>Capturing Anthropic's displaced contracts</td><td>Potential upside catalyst</td></tr> <tr><td><strong>Ethical guardrails</strong></td><td>Anthropic</td><td>Hundreds of millions at risk</td><td>Monitor TRO outcome; commercial diversification underway</td></tr> <tr><td><strong>Ambiguous</strong></td><td>Google, Microsoft</td><td>Employees support Anthropic; companies neutral</td><td>Watch for policy signals</td></tr> </tbody> </table> <p>Palantir's existing classified-network integration with Claude — for intelligence synthesis, targeting, and battle simulations — makes it the <strong>indispensable middleware layer</strong> regardless of which model provider wins. If you have Palantir exposure, this is an unexpected catalyst. If you don't, the government AI integrator thesis just got stronger.</p> <h3>Anthropic's Commercial Hedge</h3> <p>Anthropic is executing aggressive commercial diversification under existential government pressure: a <strong>$100M partner network investment</strong>, the Claude Marketplace (zero take rate, consolidated billing — mirrors early AWS Marketplace dynamics), a GitHub code review tool, and a 30-person think tank — all launched in a single week. The question for existing investors: <em>is the commercial velocity enough to offset hundreds of millions in government revenue at risk?</em></p> <h3>The Structural Shift</h3> <p>This isn't about one company. The Pentagon just demonstrated it can <strong>unilaterally blacklist any AI vendor</strong> from defense systems in 180 days for ethical positions, not technical failures. 
Every government AI revenue line in your portfolio needs a stress test this quarter. Apply a <strong>30-50% haircut to forward government AI projections</strong> for any company that maintains ethical use restrictions — or model the cost of removing them.</p>
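The repricing guidance above reduces to simple scenario arithmetic. A minimal sketch, assuming a straight-line revenue accrual model; the $400M figure and 40% haircut are illustrative, not from the briefing:

```python
def stress_test_gov_revenue(forward_gov_rev_m, haircut,
                            removal_day=180, fiscal_days=365):
    """Reprice forward government AI revenue under two scenarios:
    a policy-risk haircut, and a vendor-removal order under which
    revenue stops accruing after removal_day."""
    haircut_case = forward_gov_rev_m * (1 - haircut)
    removal_case = forward_gov_rev_m * (removal_day / fiscal_days)
    return haircut_case, removal_case

# Illustrative: $400M forward gov revenue, 40% mid-range haircut.
hair, removal = stress_test_gov_revenue(400.0, 0.40)
print(round(hair, 1))     # 240.0 -- repriced forward projection ($M)
print(round(removal, 1))  # 197.3 -- revenue recognized before removal ($M)
```

The removal scenario is the harsher of the two for most inputs, which is why the 180-day case belongs in the base stress test rather than the tail.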

    Action items

    • Stress-test government AI revenue assumptions across all portfolio companies with defense/intel exposure — model a 180-day vendor removal scenario
    • Source defense-tech AI deals aggressively — Anduril's $20B contract validates government AI spend at a scale that makes this a top-3 investment vertical
    • Map the xAI talent diaspora for recruitment — 9 of 12 co-founders gone, including Grok Code and Grok Imagine leads, plus dozens of additional departures
    • Evaluate Anthropic's Claude Marketplace as a platform investment signal — zero-take-rate with consolidated billing mirrors early AWS dynamics

    Sources: Anthropic's gov blacklisting reprices AI defense TAM · xAI's $250B valuation faces a public reckoning · Pentagon blacklisting Anthropic reshapes your gov-AI thesis · Three capital allocation signals you can't ignore

  3. 03

    The Great AI Value Migration: Infrastructure Deflates, Services and Integration Capture Margin

    <h3>Three Signals, One Thesis</h3> <p>The AI value chain is undergoing a structural rotation that most investors haven't fully processed. Three independent signals converge on the same conclusion: <strong>raw compute is commoditizing faster than expected, and margin is migrating to the services and integration layers.</strong></p> <p><strong>Signal 1: OpenAI retreats from Stargate.</strong> OpenAI walked away from its Oracle Stargate expansion in Abilene and is reshuffling its entire infrastructure leadership. When the most capital-rich AI lab decides self-built compute doesn't pencil, the implications ripple through every AI infrastructure investment. Either the capex math broke or OpenAI concluded leasing is more capital-efficient than vertical integration. <em>Either way, this is deflationary for AI infrastructure valuations.</em></p> <p><strong>Signal 2: JPMorgan marks down software private credit.</strong> Simultaneously, Anthropic is discussing a joint venture with <strong>Blackstone</strong> to embed Claude across PE portfolio companies. JPMorgan has independently marked down private credit loans tied to software companies. This is <em>not coincidence</em> — it's the credit market pricing in what equity hasn't absorbed: <strong>PE-led AI substitution of SaaS is beginning.</strong></p> <p><strong>Signal 3: a16z draws a $380B target.</strong> Andreessen Horowitz published a detailed investment thesis naming 8+ startups targeting the <strong>$380B systems integrator market</strong>. The thesis: AI won't replace SAP or Salesforce — it becomes the new interface layer. 
Key names: Axiamatic (Big 4 partnerships), Tessera (AI-native SI), Factor Labs and Sola (computer-use agents displacing BPO).</p> <blockquote>When the market's leading capital allocator draws you a map with named companies, you either use it or get priced out.</blockquote> <hr> <h3>The Model Layer Margin Compression</h3> <p>AI model pricing has fractured into a <strong>360x spread</strong> — GPT-5.4 Pro at $180/M output tokens vs. Grok 4.1 Fast at $0.50/M. Anthropic eliminated context-window pricing multipliers, making 1M tokens standard at no premium. Microsoft's E7 at <strong>$99/seat/month</strong> reveals enterprise willingness to pay for agents — but the product is powered by Anthropic's Claude, not Microsoft's own technology. The per-seat model faces a structural paradox: AI agents <strong>reduce the number of seats</strong> enterprises need.</p> <table> <thead><tr><th>Layer</th><th>Margin Trajectory</th><th>Key Signal</th><th>Investment Stance</th></tr></thead> <tbody> <tr><td><strong>Foundation Models</strong></td><td>Compressing (360x spread)</td><td>Context windows commoditized to 1M</td><td>Reduce model-layer bets</td></tr> <tr><td><strong>Enterprise Services</strong></td><td>Expanding (3-5x multiplier)</td><td>Blackstone JV, a16z $380B thesis</td><td>Increase allocation</td></tr> <tr><td><strong>Agent Integration</strong></td><td>Premium forming</td><td>MSFT E7 at $99/seat (2x E5)</td><td>Source early companies</td></tr> <tr><td><strong>Vertical AI Apps</strong></td><td>Strong where data moats exist</td><td>Mirendil $1B, ex-Anthropic team</td><td>Evaluate domain-specific labs</td></tr> </tbody> </table> <h3>The SaaS Shakeout Is Being Priced in Credit First</h3> <p>The Anthropic-Blackstone model is devastatingly simple: instead of buying SaaS licenses for portfolio companies, deploy a foundational AI model that replaces the functionality directly. If this works for Blackstone, <strong>every major PE firm will replicate it within 18 months</strong>. 
The most exposed SaaS companies are those selling horizontal tools — customer support, data entry, reporting — without proprietary data moats. JPMorgan's credit team appears to have already run this analysis. <em>Credit markets are pricing this in before equity.</em></p> <h3>Where the Alpha Sits</h3> <p>The a16z-named companies (Axiamatic, Tessera, Conduct, Factor Labs, Sola, General Magic, Lio) are about to get expensive. The ones they <em>didn't</em> name — specifically in the Usage and Extensions phases of the SI disruption — are your alpha. Computer-use agents (Factor Labs, Sola) displacing BPO spend address the <strong>30-40% of enterprise workflows with no reliable API endpoint</strong>. The $380B SI market has <strong>$700M ERP migrations that fail 70% of the time</strong> — startups that compress these timelines justify seven-figure ACVs from day one.</p>
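The 360x spread falls straight out of the quoted per-million-token rates. A minimal sketch comparing monthly output-token cost at the two extremes; the 50M-token workload is an illustrative assumption:

```python
def monthly_output_cost(price_per_m_tokens, tokens_per_month):
    """Output-token cost at a quoted $/1M-token rate."""
    return price_per_m_tokens * tokens_per_month / 1_000_000

PREMIUM = 180.00  # GPT-5.4 Pro, $/M output tokens (as quoted above)
BUDGET = 0.50     # Grok 4.1 Fast, $/M output tokens (as quoted above)
tokens = 50_000_000  # illustrative monthly agent workload

print(PREMIUM / BUDGET)                      # 360.0 -- the quoted spread
print(monthly_output_cost(PREMIUM, tokens))  # 9000.0 ($/month)
print(monthly_output_cost(BUDGET, tokens))   # 25.0 ($/month)
```

At agent-scale token volumes, that gap is the difference between a rounding error and a line item — which is what makes routing and integration, not the model itself, the durable margin layer.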

    Action items

    • Classify all portfolio SaaS companies by AI substitution risk — flag horizontal tools without proprietary data moats for board-level strategy discussion this quarter
    • Source and evaluate a16z's named companies (Axiamatic, Tessera, Conduct, Auctor, Factor Labs, Sola) — request decks and assess Series A/B availability
    • Reassess AI infrastructure portfolio exposure in light of OpenAI's Stargate retreat — revise demand forecasts for data center supply chain companies
    • Push portfolio companies toward consumption-based or per-outcome pricing models that benefit from agent activity rather than human headcount

    Sources: The Agent Paradigm Just Reshuffled AI's Value Chain · Agentic payments, PE-led SaaS displacement · a16z just published their $380B enterprise AI thesis · OpenAI's Stargate retreat + Anthropic's PE play · ARR is dead for AI-native deals

◆ QUICK HITS

  • Update: Cerebras assembled AWS, OpenAI ($10B+ deal), and Oracle as inference customers within months — the fastest enterprise adoption of a non-Nvidia AI chip in history, strengthening pre-IPO positioning

    Cerebras is breaking Nvidia's inference lock

  • TerraPower received the first-ever NRC construction permit for a next-generation nuclear reactor in Wyoming — a regulatory milestone that de-risks the nuclear-for-AI investment thesis

    Power is the new GPU: 5 investable infrastructure pivots

  • USDC flipped USDT in transaction volume for the first time since 2019 — $2.2T volume, 64% market share — while Mizuho raised CRCL price target to $120 on agentic commerce thesis

    USDC flips USDT at $2.2T volume

  • Mind Robotics raised $500M at $2B valuation just 4 months after founding, with Rivian as anchor customer — the fastest path-to-revenue play in robotics with factory deployment targeted EOY 2026

    Robotics just crossed $3B+ in unicorn valuations this week

  • Lean FRO used Claude to convert production C code (zlib) to mathematically verified software — creator says this 'was not expected to be possible yet' — targeting cryptography and compilers next

    AI self-improvement closing the human gap 2x in 6 months

  • KKR eyeing multibillion-dollar sale of CoolIT Systems — validates data center cooling as a standalone investable category; use as valuation benchmark for thermal management deal flow

    Power is the new GPU: 5 investable infrastructure pivots

  • AWS bundling vector search directly into S3 (S3 Vectors, S3 Tables, S3 Metadata) — direct threat to standalone vector DB companies; Pinecone and Weaviate face existential bundling risk

    Anthropic's pricing war + AWS bundling vector DBs into S3

  • Stripe merges 1,300+ zero-human-code PRs per week via autonomous Minions agents — the attended vs. unattended coding tool bifurcation creates two distinct investment categories

    Stripe's 1,300 AI-generated PRs/week reveals where the real moat sits

  • Update: xAI lost 9 of 12 co-founders — including three promoted just one month ago — while Musk admits company 'was not built right'; hired Cursor's head of product and Thinking Machines Lab co-founder to report directly to him

    xAI's $250B valuation faces a public reckoning

  • Hua Hong achieved 7nm chip-making with Huawei collaboration — second Chinese foundry at this node without EUV — export control premium on ASML, Applied Materials, and Lam Research is eroding

    Cerebras is breaking Nvidia's inference lock

  • Gartner: 94% of AI search citations come from non-paid sources, 82% from earned media — PR tech budgets projected to double by 2027, repricing the entire martech stack

    AI search kills paid discovery — 94% non-paid citations reshape your martech and adtech thesis

BOTTOM LINE

The Pentagon blacklisted Anthropic for ethical guardrails while writing Anduril a $20B check — government AI procurement now splits on compliance posture, not model quality. The same week, a $20 autonomous agent breached McKinsey's flagship AI platform and Google closed a $32B security acquisition, confirming AI agent security as the category-defining investment of this cycle. Meanwhile, OpenAI's Stargate retreat, JPMorgan's software credit markdowns, and a16z's $380B enterprise thesis all converge on one message: AI value is migrating from infrastructure to services, and the companies capturing implementation margins — not GPU margins — will define the next decade of returns.

Frequently asked

How should I reprice government AI revenue across my portfolio after the Anthropic blacklist?
Apply a 30-50% haircut to forward government AI revenue projections for any portfolio company that maintains ethical use restrictions on military deployment, and model a 180-day vendor removal scenario. The Pentagon demonstrated it can unilaterally blacklist AI vendors over policy stances, not technical failures, so stress-test every defense and intel revenue line this quarter.
What specifically made the McKinsey AI platform breach so alarming for enterprise investors?
A red-team startup called CodeWall broke into McKinsey's 20,000-agent Lilli platform in two hours for $20 in API tokens, exploiting a SQL injection that internal scanners had missed for two years in production. The agent gained write access to all 95 system prompts governing behavior across 30,000 consultants serving Fortune 500 clients — proving enterprise AI agent security is broken at production scale.
Where should checks go in AI agent security before the category gets named?
Target Seed to Series A rounds in runtime agent governance, MCP security, and autonomous defensive agents — Onyx Security ($40M), Maze, Cotool, and Sondera represent distinct early entry points. The 12-18 month window exists before incumbents bolt on features and hyperscalers acquire; 66% of MCP servers and 93% of audited agents have fundamental security gaps driving board-level urgency.
Which portfolio companies are most exposed to the PE-led SaaS substitution thesis?
Horizontal SaaS tools without proprietary data moats — customer support, data entry, reporting, and similar workflow software — are most exposed to replacement by foundation models deployed across PE portfolios. The Anthropic-Blackstone JV plus JPMorgan's markdowns on software private credit signal credit markets are pricing this in before equity; once one major PE firm proves the model, replication across the industry follows within 18 months.
What does OpenAI walking away from Stargate mean for AI infrastructure bets?
It's deflationary for the entire AI data center supply chain. When the most capital-rich AI lab concludes that self-built compute doesn't pencil versus leasing, demand forecasts for data center buildout, power infrastructure, and vertical integration plays need downward revision. Reassess infrastructure exposure and rotate toward the services, integration, and agent layers where margin is migrating.
