PROMIT NOW · LEADER DAILY · 2026-02-28

Pentagon's Anthropic Ultimatum and Block's AI Layoff Surge

· Leader · 37 sources · 1,868 words · 9 min

Topics: AI Capital · Agentic AI · LLM Inference

The Pentagon threatened to invoke the Defense Production Act against Anthropic by 5:01 PM ET Friday — and on the same day, Block's 40% AI-driven layoff was rewarded with a 24% stock surge. These two events are connected: the U.S. government is asserting coercive control over AI capabilities while the market is aggressively rewarding AI-driven workforce destruction. If you lead a technology company, your AI vendor dependencies, your workforce strategy, and your government relations posture all changed this week — and your board will ask about all three within 30 days.

◆ INTELLIGENCE MAP

  01

    Pentagon-Anthropic Standoff: Government Coercion of AI Companies

    act now

    Across 14 sources, the Pentagon's Defense Production Act threat against Anthropic is unanimously identified as the most consequential AI governance event of 2026 — establishing a precedent where governments can compel AI companies to remove safety guardrails, bifurcating the AI market into 'compliant' and 'principled' tiers, and forcing every enterprise to reassess vendor risk.

    14 sources
  02

    AI-Driven Workforce Restructuring: Block's 40% Cut as Template

    act now

    Block cut ~4,000 employees (40% of workforce), explicitly attributed it to AI, and was rewarded with a 20-25% stock surge — creating an incentive template that 10 sources agree will pressure every tech board to demand an AI workforce strategy within months.

    10 sources
  03

    AI Capital Arms Race and Infrastructure Economics

    monitor

    OpenAI's $110B raise (Amazon $50B, Nvidia $30B, SoftBank $30B), projected $770B hyperscaler capex for 2026, and a16z data showing AI token costs down 44% while consumption doubled confirm a Jevons paradox dynamic — but CoreWeave's 30%-worse-than-expected losses and rising GPU rental prices signal the infrastructure economics remain treacherous.

    8 sources
  04

    Cybersecurity Structural Shifts: Trusted Tools as Attack Surfaces

    monitor

    A Cisco SD-WAN CVSS 10 zero-day exploited since 2023, Google API keys silently gaining Gemini access across 2,863+ exposed keys, ransomware pivoting from encryption to pure data extortion (57% of claims), and CISA described as 'decimated' collectively signal that enterprise security's trust model is structurally failing.

    6 sources
  05

    SaaS Business Model Disruption and Platform Shifts

    background

    AI agents are replacing human users as the primary consumers of software, threatening seat-based SaaS pricing; code moats are dissolving as agent-scraping can clone production apps in days; and Stripe's Tempo blockchain signals that even payment infrastructure is being rebuilt for machine-to-machine commerce.

    6 sources

◆ DEEP DIVES

  01

    The Pentagon vs. Anthropic: A New Era of Government Coercion Over AI

    <p>Across 14 independent intelligence sources this week, one event dominates: <strong>Defense Secretary Hegseth gave Anthropic CEO Dario Amodei a Friday deadline</strong> — agree to 'all lawful use' of Claude by the military, or face the Defense Production Act and a <strong>'supply chain risk' designation</strong> previously reserved for entities like Huawei. Anthropic holds two red lines: no fully autonomous weapons, no mass domestic surveillance. Every other frontier AI company — Google, OpenAI, xAI — has already capitulated.</p><h3>Why This Is a Watershed</h3><p>The legal vacuum makes this uniquely dangerous. There are <strong>essentially no federal statutes governing military AI</strong>, autonomous weapons, or AI-assisted surveillance. When the Pentagon demands 'all lawful use,' it's demanding a blank check in a domain where nothing is illegal because nothing has been legislated. A Pentagon official told CNN that 'legality is the Pentagon's responsibility as the end user' — effectively asking AI companies to <strong>surrender technical and moral judgment</strong> to an institution operating without a legal framework.</p><blockquote>If the DPA is successfully used to compel a software company to modify its product for government use, the precedent is unlimited — encryption backdoors, cloud access mandates, algorithm modifications all become legally plausible.</blockquote><p>This isn't hypothetical. Government agencies are <strong>already using AI to scan social media posts</strong> of visa holders and permanent residents, using speech as grounds for deportation. Three labor unions have sued. 
A recent study showed GPT-5.2, Claude Sonnet 4, and Gemini 3 Flash <strong>deployed nuclear weapons in 95% of simulated war games</strong> and never surrendered.</p><h3>The Safety Collapse Is Happening in Parallel</h3><p>On the same day Anthropic received the Pentagon ultimatum, it <strong>quietly dropped its core Responsible Scaling Policy pledge</strong> — the hard commitment not to train more capable models without proven safety measures. Its chief science officer called maintaining the pledge 'unilateral disarmament.' Multiple sources note this dual-track strategy: publicly refusing the Pentagon while privately gutting internal safety commitments. For enterprise buyers who chose Anthropic <em>because</em> of safety credentials, this is a <strong>material change in vendor risk profile</strong>.</p><h3>The Market Bifurcation</h3><p>Sources converge on a clear prediction: the AI market is splitting into <strong>defense-aligned and ethics-aligned tiers</strong>. Companies willing to comply will capture government revenue; companies that resist will capture enterprise trust and top research talent. Anthropic's consumer metrics suggest it's betting on the latter — <strong>daily signups tripled since November, paid subscribers doubled since October</strong>, driven by Claude Code and Claude Cowork agents, not government contracts. But the middle ground is disappearing, and every AI company will be forced to declare a position.</p><hr><h3>What This Means for Your Organization</h3><p>If you build on third-party AI models, a new category of risk has emerged that most enterprise frameworks don't capture. What happens to your products if your model provider is <strong>compelled to redirect capacity to government use</strong>? What happens to your data if government access is mandated? What happens to your security clearances if your AI vendor is labeled a 'supply chain risk'?</p>

    Action items

    • Conduct an immediate legal review of all government and defense-adjacent contracts for DPA exposure and 'all lawful use' language
    • Map every product and contract that depends on Anthropic, OpenAI, or other AI providers and model the scenario where any vendor is deemed a 'supply chain risk' by DoD
    • Develop and publish your company's AI safety and government deployment policy before the situation forces a reactive response
    • Build multi-vendor AI infrastructure redundancy, reducing single-vendor concentration in API dependencies below 60% of any critical workflow
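The concentration target in the last item can be made operational with a simple audit pass. The sketch below is illustrative only — the workflow names, vendors, and shares are hypothetical placeholders, not data from this briefing; the 60% threshold is the one recommended above.

```python
# Illustrative vendor-concentration audit. Workflow names and shares are
# hypothetical examples; the 60% threshold comes from the action item above.
THRESHOLD = 0.60

# Fraction of API calls each vendor handles per critical workflow (placeholder data)
workflows = {
    "support-triage": {"anthropic": 0.85, "openai": 0.15},
    "code-review":    {"openai": 0.55, "anthropic": 0.30, "local": 0.15},
}

def concentration_flags(workflows, threshold=THRESHOLD):
    """Return {workflow: (vendor, share)} for every workflow whose
    top vendor's share of calls exceeds the threshold."""
    flags = {}
    for name, shares in workflows.items():
        vendor, share = max(shares.items(), key=lambda kv: kv[1])
        if share > threshold:
            flags[name] = (vendor, share)
    return flags

print(concentration_flags(workflows))
# 'support-triage' is flagged (one vendor at 85%); 'code-review' is not
```

The useful output is the flag list itself: each flagged workflow is a candidate for routing a share of traffic to a second provider before a 'supply chain risk' designation forces the move.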

    Sources: The authoritarian AI crisis has arrived · Anthropic CEO Says Company Won't Agree to Pentagon Demands · Jack Dorsey's Block Axes Staff · Vulnerable U | #157 · Google Nano Banana 2 🍌, xAI cofounder departs 👋, Anthropic vs DoW ⚖️ · Weekly Top Picks #115

  02

    Block's 40% AI Layoff Created a Dangerous Incentive Loop — Your Board Is Already Watching

    <p>Jack Dorsey cut <strong>~4,000 employees — 40% of Block's workforce</strong> — and explicitly attributed it to AI 'fundamentally changing how the company builds and operates.' The market's response was unambiguous: a <strong>20-25% after-hours stock surge</strong>. This wasn't a distressed company making desperate cuts — Block reported $6.25B in Q4 revenue, 24% gross profit growth, and guidance above Street expectations. The $450-500M in restructuring charges are being treated as an investment, not a cost.</p><blockquote>This is the establishment of a new valuation framework where AI-enabled operational efficiency is priced as aggressively as revenue growth. Every board in tech will discuss this within weeks.</blockquote><h3>The Incentive Cascade Is Real</h3><p>Ten sources independently flagged the same dynamic: <strong>Wall Street has created a powerful incentive loop for AI-driven mass layoffs</strong>. Dorsey's public prediction that 'the majority of companies will reach the same conclusion within a year' should be treated as a competitive timeline. eBay's simultaneous 800-person cut (despite $2B+ in profit) reinforces that this isn't about financial distress — it's about structural optimization becoming table stakes. Block's internal AI agent, <strong>Goose, reportedly saves 8-10 hours per worker per week</strong> and eliminates 20-25% of manual work.</p><h3>The Contradiction Worth Surfacing</h3><p>Sources disagree on whether this is genuinely AI-driven or AI-justified. Block's mixed competitive results and 80% decline from its 2021 peak suggest financial pressure played a role that the AI narrative conveniently obscures. Citadel data shows <strong>software engineering job postings are actually rebounding</strong> despite AI coding assistants — suggesting the 'AI replaces everyone' thesis is more nuanced than Block's stock price implies. 
A separate analysis found AI coding tools succeed on only <strong>57% of real-world tasks</strong>, and a major productivity study broke down because developers refused to work without AI tools — making controlled measurement impossible.</p><h3>The Capability-Reliability Gap</h3><p>New research confirms AI agents are getting <strong>more capable but not more reliable</strong> — explaining why benchmark improvements aren't translating to economic impact. This means Block's bet is partially aspirational: cutting 40% of headcount assumes AI tools are mature enough to absorb that capacity. If it works, it becomes the playbook. If it fails, it becomes a cautionary tale. Either way, the pressure to articulate an AI workforce strategy is now <strong>urgent for every technology leader</strong>.</p><hr><h3>The Talent Market Implications</h3><p>4,000+ displaced Block employees represent both a retention risk signal for your own workforce and an <strong>acquisition opportunity</strong> if you move fast. The American emigration data adds context: US departures exceeded arrivals for the first time since 1935, with <strong>180,000 citizens relocating overseas in 2025</strong> and 20% of adults expressing desire to leave permanently. Combined with AI-driven restructuring, the picture is one where companies need fewer people, and the people they need are increasingly willing to live anywhere.</p>

    Action items

    • Commission an internal 'Block scenario' analysis modeling 20%, 30%, and 40% headcount scenarios with AI augmentation — present to the board within 60 days
    • Recalibrate AI agent deployment ROI models to discount benchmark-based projections by 30-50% for production reliability
    • Develop a proactive workforce transition narrative for board and public communications before the AI layoff wave forces a reactive response
    • Recruit aggressively from the displaced Block and eBay talent pools within the next 45 days
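A minimal version of the 'Block scenario' model from the first action item can combine the headcount cuts with the 30-50% benchmark discount recommended above. All inputs here are placeholder assumptions — in particular, the 25% claimed productivity gain is an illustrative figure, not a number from this briefing.

```python
# Hedged sketch of a 'Block scenario' model: the capacity shortfall left
# after a headcount cut, once claimed AI productivity gains are discounted
# 30-50% for production reliability. All inputs are placeholder assumptions.

def required_ai_absorption(cut_fraction, claimed_productivity_gain, discount):
    """Output shortfall (fraction of today's output) if the discounted AI
    gain is all that compensates for the cut. Positive => shortfall."""
    effective_gain = claimed_productivity_gain * (1 - discount)
    remaining_capacity = (1 - cut_fraction) * (1 + effective_gain)
    return 1 - remaining_capacity

# Illustrative grid: 20/30/40% cuts, 30% and 50% discounts, 25% claimed gain
for cut in (0.20, 0.30, 0.40):
    for discount in (0.30, 0.50):
        gap = required_ai_absorption(cut, 0.25, discount)
        print(f"cut={cut:.0%} discount={discount:.0%} shortfall={gap:+.1%}")
```

Under these placeholder numbers, a 40% cut with a 30% discount still leaves roughly a 30% output shortfall — which is the quantitative shape of the 'partially aspirational' argument above.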

    Sources: Anthropic CEO Says Company Won't Agree to Pentagon Demands · Jack Dorsey's Block Axes Staff · The Briefing: Ellisons' Hollywood Victory · OpenAI Raises $110 Billion · Greener pastures · Netflix exits $83B Warner Bros. deal

  03

    The AI Infrastructure Arms Race: $770B in Capex, Jevons Paradox, and Who Controls the Platform Layer

    <p>OpenAI closed the <strong>largest fundraise in technology history: $110 billion</strong> from Amazon ($50B), Nvidia ($30B), and SoftBank ($30B). Amazon's investment comes in stages — $15B now, $35B contingent on performance targets — paired with a <strong>$100B/8-year AWS commitment</strong> and expanded Trainium chip usage. Projected hyperscaler capex for 2026 has reached <strong>$770B collectively</strong>, up from ~$500B in 2025, growing at 70% annually.</p><h3>The Jevons Paradox Is Accelerating</h3><p>a16z data confirms the dynamic: since January 2026, <strong>AI token pricing dropped 44%</strong> (from ~90¢ to ~50¢ per million tokens) while <strong>consumption nearly doubled</strong> from ~6,000 to ~12,000. Cheaper AI means more AI, not less spending. But the supply side isn't keeping up — Nvidia H100 and A100 GPU rental prices are <em>rising</em> even as they become older-generation hardware. This is a structural supply deficit that will persist through 2026.</p><table><thead><tr><th>Metric</th><th>Data Point</th><th>Implication</th></tr></thead><tbody><tr><td>OpenAI fundraise</td><td>$110B (largest ever)</td><td>Winner-take-most capital dynamics</td></tr><tr><td>Hyperscaler capex 2026</td><td>$770B projected</td><td>Exceeds total US bank lending</td></tr><tr><td>Token cost decline</td><td>-44% since Jan 2026</td><td>Jevons paradox driving demand</td></tr><tr><td>CoreWeave Q4 loss</td><td>$452M (30% worse than expected)</td><td>Infrastructure economics remain brutal</td></tr><tr><td>OpenAI projected cash burn</td><td>$111B through 2030</td><td>$665B compute costs over 5 years</td></tr></tbody></table><h3>Platform Control Is the Real Game</h3><p>Amazon's $50B isn't just an investment — it's a <strong>platform control play</strong>. 
By becoming OpenAI's most important infrastructure partner while using OpenAI to scale Trainium against Nvidia, Amazon is executing a classic strategy: own the settlement layer, and everyone who transacts becomes your customer. Meanwhile, <strong>Google struck a multibillion-dollar chip deal with Meta</strong> after Meta's internal chip efforts hit roadblocks — creating the first credible Nvidia alternative at hyperscale. Nvidia's pricing power now has a visible expiration date.</p><blockquote>OpenAI's grand infrastructure visions (Stargate, the $100B Nvidia financing instrument) keep collapsing into conventional deal structures — suggesting the company's ambitions consistently outpace its ability to execute novel financial arrangements.</blockquote><h3>The Contradiction to Watch</h3><p>Sources diverge on whether this capex cycle is rational. The bull case: hyperscalers see demand signals we don't. The bear case: competitive pressure is forcing overinvestment, and the correction will be brutal. The electricity analogy from a16z is sobering — it took <strong>100 years from Faraday's experiments to industrial productivity</strong>. If meaningful AI productivity gains take 5-7 years rather than 2-3, the capex correction will be severe and indiscriminate. Smart leaders are <strong>negotiating flexible infrastructure commitments</strong> and building optionality.</p>
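The spend arithmetic behind the Jevons claim is worth making explicit: a 44% price drop combined with doubled consumption still grows total spend. Using the briefing's own approximate figures:

```python
# Jevons-paradox check with the a16z figures cited above:
# price per million tokens fell ~90¢ -> ~50¢ while consumption
# roughly doubled (~6,000 -> ~12,000 on the source's index).
old_price, new_price = 0.90, 0.50        # $ per million tokens
old_volume, new_volume = 6_000, 12_000   # consumption index from the source

old_spend = old_price * old_volume
new_spend = new_price * new_volume
print(f"spend ratio: {new_spend / old_spend:.2f}x")  # ~1.11x: total spend grows
```

So even at these numbers, aggregate spend rises about 11% — cheaper tokens translate into more spending, not less, which is exactly the demand signal funding the $770B capex projection.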

    Action items

    • Conduct a compute capacity audit by end of Q1 — map owned vs. rented GPU infrastructure and model cost trajectories under rising rental prices through 2027
    • Reassess cloud and infrastructure strategy in light of Amazon-OpenAI deepening — evaluate whether your cloud provider's strategic priorities now conflict with yours
    • Evaluate custom silicon and Nvidia-alternative chip strategies for AI workloads, using the Google-Meta deal as leverage in Nvidia negotiations
    • Stress-test your AI investment thesis against a 'delayed returns' scenario — model what happens if meaningful productivity gains take 5-7 years, not 2-3

    Sources: OpenAI Raises $110 Billion · Dealmaker: OpenAI Builds an M&A War Chest · Charts of the Week: DExit . . . real or feigned? · Google Nano Banana 2 🍌, xAI cofounder departs 👋, Anthropic vs DoW ⚖️ · The Briefing: Ellisons' Hollywood Victory · Jack Dorsey's Block Axes Staff

  04

    Your Security Architecture's Trust Model Is Structurally Failing

    <p>Six independent sources this week converge on a single uncomfortable conclusion: <strong>the enterprise security model built on trusted tools and implicit permissions is breaking</strong>. The evidence is now overwhelming across three simultaneous failure modes.</p><h3>Failure Mode 1: Trusted Infrastructure as Attack Surface</h3><p>A <strong>Cisco Catalyst SD-WAN zero-day (CVSS 10/10)</strong> in the peering authentication system was exploited by threat group UAT-8616 since at least 2023. It took the Australian Signals Directorate — a national intelligence agency — to discover it. Five Eyes agencies issued an emergency directive. Separately, Palo Alto's Cortex XDR has been demonstrated as exploitable for C2 operations. VulnCheck data shows <strong>network edge devices accounted for a third of all exploited products in 2025</strong>.</p><h3>Failure Mode 2: SaaS Trust as C2 Channel</h3><p>China-linked APT group UNC2814 (GRIDTIDE) operated undetected <strong>across 42 countries for nearly a decade</strong> by hiding command-and-control traffic inside Google Sheets — a platform virtually every enterprise trusts and whitelists. Your whitelist is your attack surface. Meanwhile, a low-skill Russian-speaking actor used commercial AI to <strong>compromise 600+ FortiGate devices across 55+ countries in just over a month</strong>.</p><h3>Failure Mode 3: Silent Privilege Escalation</h3><p>Google's Gemini integration retroactively granted existing API keys access to generative AI services. A single web crawl found <strong>2,863 live vulnerable keys</strong> — affecting major financial institutions and security companies. Google says responsibility sits with project owners. 
Keys that were safely embedded in public HTML and JavaScript for years now unlock access to <strong>private prompts, uploaded files, and sensitive business data</strong>.</p><blockquote>When your EDR can be turned into a C2 channel, your cloud API keys silently gain access to AI endpoints, and your SD-WAN has been compromised for three years at maximum severity, the question isn't 'which patch do we apply' — it's 'do we fundamentally trust our security architecture?'</blockquote><h3>The Economic Shift: Ransomware to Data Extortion</h3><p>Resilience data shows <strong>data theft-only attacks now represent 57% of extortion claims</strong> — surpassing ransomware for the first time. Attackers skip encryption entirely and go straight to exfiltration. Your backup-and-recovery investment defended against last year's war. Meanwhile, CISA is described by former officials as <strong>'decimated' and 'pretty much fallen apart'</strong>, creating a federal cybersecurity vacuum that the private sector must fill.</p><h3>The Bright Spot: Claude as Security Platform</h3><p>Anthropic's launch of Claude as a security product is already <strong>tanking cybersecurity stocks</strong>. Opus 4.6 is beating purpose-built security tools and shipping a 'suggest fix' capability the industry failed to build for 15 years. This is a platform commoditization event — but it also means your security vendor portfolio faces disruption risk from the same AI companies reshaping every other market.</p>

    Action items

    • If running Cisco SD-WAN: patch immediately and initiate forensic review for indicators of UAT-8616 compromise dating back to 2023
    • Mandate an immediate audit of all Google Cloud API keys, with specific focus on keys embedded in client-side code or public repositories that may now grant Gemini access
    • Commission a 'parasitic residency' threat assessment — assume you're already compromised and hunt for long-dwell-time persistence indicators across identity systems and SaaS platforms
    • Reassess cyber insurance coverage to ensure policies cover pure data theft scenarios, not just ransomware/encryption events

    Sources: Ransomware groups switch to stealthy attacks and long-term access · Vulnerable U | #157 · Google Silent Gemini Escalation 🚩, Cisco SD-WAN Vulnerability 🛜, Linux Adopts DIDs 🪪 · Risky Bulletin: Russian man investigated for extorting Conti ransomware group · Critical Flaws Exposed Smart Gardens to Remote Hacking · Gottumukkala out, Andersen in as acting CISA director

◆ QUICK HITS

  • OpenAI's $100B war chest is repricing every AI acquisition target — Cursor conversations at $30B, Io Products (Jony Ive) at $6.5B, and the company can absorb 4% dilution per deal at its $730B valuation

    Dealmaker: OpenAI Builds an M&A War Chest

  • xAI has lost 7 of 12 co-founders in under 3 years, including Macrohard agent division lead Toby Pohlen — recruit from this talent pool now while the SpaceX merger creates organizational chaos

    Google Nano Banana 2 🍌, xAI cofounder departs 👋, Anthropic vs DoW ⚖️

  • Stripe processed $1.9T (1.6% of global GDP) at 34% growth and is building Tempo, a purpose-built payments blockchain with Visa, Shopify, Nubank, and Klarna already testing — stablecoin rails are becoming enterprise infrastructure

    Weekly Dose of Optimism #182

  • Anthropic's 90-day maximum planning horizon and no-specs culture are a structural speed advantage — the company treats planning beyond 90 days as counterproductive at the frontier

    SWLW #692: End of Productivity Theater, The Anthropic Hive Mind, and more.

  • Google's $1B Form Energy deal validates iron-air batteries at 10% of lithium-ion cost with 100-hour duration — hyperscaler energy strategy is now a competitive differentiator for AI infrastructure

    Netflix exits $83B Warner Bros. deal

  • Healthcare Cybersecurity and Resiliency Act advancing with bipartisan Senate support — HIPAA modernization, federal grants for hospital security, and HHS incident response mandates create a multi-billion-dollar compliance wave

    Gottumukkala out, Andersen in as acting CISA director

  • Netflix walked away from its $82.7B WBD bid, its stock jumped 10%, and it collects a $2.8B breakup fee — Paramount Skydance takes on the $111B integration challenge instead, backed by the Ellison family's Trump administration connections

    Greener pastures

  • Cloudflare rebuilt Next.js's API surface with AI in a week (vinext), triggering Vercel's CEO to personally disclose security vulnerabilities in the project — the frontend infrastructure competitive landscape just fractured

    Cloudflare makes its own Vite-powered Next.js

  • AI training paradigm shifting from RLHF to RLVR (Reinforcement Learning with Verifiable Rewards), which removes the human labeling bottleneck — Scale AI's $13B+ valuation faces TAM compression risk

    The Sequence Opinion #815: The End of RLHF? The Rise of Verifiable Rewards

  • Teen AI chatbot usage for schoolwork more than tripled from 13% to 44% in three years, with 10% unable to complete assignments without AI — this is your 2029-2032 workforce pipeline signal

    Quit your dillydallying

BOTTOM LINE

The U.S. government just demonstrated it will use wartime statutes to compel AI companies to remove safety guardrails, Wall Street proved it will reward 40% AI-driven layoffs with 24% stock surges, and your security stack's trust model is failing across Cisco, Google, and SaaS platforms simultaneously — the organizations that survive this convergence are the ones building multi-vendor AI resilience, proactive workforce transformation narratives, and zero-trust architectures that assume their whitelisted tools are compromised.

Frequently asked

What should I tell my board about AI vendor risk after the Pentagon-Anthropic standoff?
Present a vendor concentration map and a 'supply chain risk designation' scenario for each major AI provider. The Pentagon's threat to invoke the Defense Production Act against Anthropic establishes that your model provider could be compelled to redirect capacity, accept mandated government access, or be labeled a risk entity — any of which cascades to your products. Boards want to see multi-vendor redundancy targets (no vendor above ~60% of any critical workflow) and a written policy on government deployment.
Is Block's 40% layoff actually replicable, or is the AI justification overstated?
It's partially aspirational. Block's internal Goose agent reportedly saves 8-10 hours per worker per week, but independent research shows AI coding tools succeed on only 57% of real-world tasks and agents are getting more capable without getting more reliable. Recalibrate any headcount reduction model by discounting benchmark-based productivity projections 30-50%, and recognize that Block's 80% decline from its 2021 peak means financial pressure is doing some of the work the AI narrative is getting credit for.
How should I hire from the Block and eBay layoffs?
Move within 45 days and target engineers who built or operated internal AI tooling like Block's Goose agent. Roughly 4,800 people are entering the market from two well-regarded technical organizations, and the strongest talent will be absorbed in weeks. Pair this with a retention review of your own staff — public AI-driven layoffs at peers are a flight signal for your top performers too.
What's the single most urgent security action this week?
If you run Cisco Catalyst SD-WAN, patch the CVSS 10/10 peering authentication flaw and launch a forensic review back to 2023, when threat group UAT-8616 began exploiting it. Five Eyes agencies issued an emergency directive. In parallel, audit Google Cloud API keys for silent Gemini access — a single web crawl found 2,863 live vulnerable keys, including at major financial institutions.
Does the $770B hyperscaler capex number mean I should accelerate or delay AI infrastructure commitments?
Build optionality rather than commit at scale. Token prices fell 44% since January while consumption doubled, confirming Jevons-paradox demand — but GPU rental prices are still rising and CoreWeave posted a $452M Q4 loss, showing infrastructure economics remain brutal. Negotiate flexible, shorter-duration compute commitments, use the Google-Meta chip deal as leverage against Nvidia pricing, and stress-test your thesis against a scenario where productivity gains take 5-7 years instead of 2-3.
