PROMIT NOW · LEADER DAILY · 2026-03-29

Microsoft's 34% Crash and Dorsey's Halving Call Reset AI

· Leader · 6 sources · 1,453 words · 7 min

Topics: AI Capital · Agentic AI · LLM Inference

Microsoft's 34% crash — its worst quarter since 2008 — collided this week with Jack Dorsey publicly telling investors that AI coding agents could halve Block's headcount, while rate expectations flipped from 90% cut probability to 52% hike probability in 30 days. The market has stopped rewarding AI faith and started demanding receipts, but the CEOs actually producing those receipts are concluding they need dramatically fewer people. Your capital plan and org chart are both built on assumptions that expired this week — stress-test them simultaneously, not sequentially.

◆ INTELLIGENCE MAP

  01

    AI Investment Thesis Cracks Under Triple Shock

    act now

    Microsoft down 34% since October (worst since 2008), rate expectations flipped to 52% hike probability in 30 days, and H100 GPU rentals now exceed 2022 launch prices. AI capex faces simultaneous investor revolt, rising cost of capital, and appreciating compute costs — a triple squeeze that invalidates most FY26 plans.

    Key stat: 34% MSFT decline since Oct (3 sources)
    Signals: MSFT peak decline · Rate hike probability · Rate flip timeline · Nasdaq correction
    Rate cut probability (30 days ago): 90% → Rate hike probability (today): 52%
  02

    AI Labs Become Vertical Category Killers — Security First

    monitor

    Anthropic's leaked Claude Mythos — described internally as posing 'unprecedented cybersecurity risks' — triggered a sector-wide security selloff. RSA 2026 field intelligence confirms: every security product is becoming a commodity API call in agentic workflows within 1-3 years. This pattern will repeat across every knowledge-work software vertical.

    Key stat: 1-3 yrs until security dissolves into API calls (3 sources)
    Signals: RSA vendors surveyed · Dissolution timeline · Yahoo Scout users
    Timeline:
    • Now: Mythos leak triggers security selloff
    • 6-12 mo: API-first security products emerge
    • 1-3 yrs: Security dissolves into agentic API mesh
    • 3+ yrs: Pattern repeats in legal, finance, medical
  03

    CEO Workforce Compression Goes Public

    monitor

    Dorsey told JPMorgan Tech100 investors that using coding agent Goose convinced him he could nearly halve Block's workforce. Databricks CEO Ghodsi echoed the same pattern. Hashimoto disclosed a parallel agent workflow where agents plan while he codes and code while he reviews. The CEO class is setting 12-18 month headcount expectations based on personal AI usage.

    Key stat: ~50% Dorsey's headcount target (2 sources)
    Signals: Block headcount cut · AI talent gap ratio · Adoption timeline
    Current headcount model: 100 → Agent-augmented target: 50
  04

    GPU Depreciation Models Break — H100s Are Appreciating Assets

    act now

    H100 rental prices have reversed sharply upward since Dec 2025, now exceeding Oct 2022 launch-era values. Reasoning and agent demand plus better inference software have made 4-year-old chips more capable than any depreciation schedule assumed. Every cloud GPU contract and FY26 infrastructure budget line is potentially mispriced. Google is funding Anthropic's data center buildout, accelerating the capex arms race.

    Key stat: H100 rental price now above its Oct 2022 launch level (2 sources)
    Signals: H100 price vs launch · Capybara scale (est.) · Open model gap · RotorQuant speedup
    Price index: H100 at launch (Oct 2022): 100 → post-DeepSeek dip: 60 → today: 110
  05

    Meta Assembles Every Layer of the Next Platform

    background

    Meta simultaneously scaled brain-reading AI from 4 to 720 subjects (1K→70K voxels), filed FCC paperwork for prescription smart glasses with Wi-Fi 6, committed to 7 new gas power plants, published self-improving hyperagent research, and had Zuckerberg privately coordinating with Musk on DOGE and OpenAI IP. This is a coherent platform-consolidation play across hardware, compute, AI, and political capital.

    Key stat: 6 simultaneous fronts (1 source)
    Signals: Brain-reading subjects · Voxel resolution · New gas plants · Eyewear market size
    Fronts:
    • Brain-computer interface: 720 subjects
    • Prescription smart glasses: FCC filed
    • Data center power: 7 gas plants
    • Self-improving agents: research published
    • Political coordination: Musk/DOGE/OpenAI

◆ DEEP DIVES

  01

    The AI Investment Thesis Just Hit a Wall — And the Compute Economics Make It Worse

    <h3>Three independent shocks converging on your capital plan</h3><p>The market is no longer rewarding AI infrastructure spending on faith. <strong>Microsoft's 34% decline since October</strong> — its worst quarter since the 2008 financial crisis — is the clearest signal: the company that effectively invented enterprise AI distribution (via OpenAI partnership and Copilot) is being punished because investors want revenue, not roadmaps. When the best-positioned incumbent gets hit this hard, it reprices the entire "spend now, monetize later" playbook.</p><p>The capital environment simultaneously tightened at historic speed. Rate expectations <strong>flipped from 90% cut probability to 52% hike probability in a single month</strong> — the fastest monetary policy sentiment reversal in recent memory. Combined with $110 oil driven by an Iran conflict that is proving <em>structurally persistent</em> rather than tradable, this invalidates most tech companies' 2026-2027 financial plans. The "TACO trade" — buying dips on Trump-era geopolitical bluster — has broken down, meaning the risk premium is real and lasting.</p><h3>Compute costs are rising, not falling</h3><p>While capital conditions tighten, the inputs for AI are getting <strong>more expensive</strong>, not cheaper. H100 GPU rental prices have reversed their 2024 depreciation curve and now <strong>exceed their October 2022 launch-era values</strong>. This isn't a blip — it's structural, driven by reasoning model and agent demand making four-year-old chips more capable than anyone's depreciation schedule assumed. Google's move to fund Anthropic's data center infrastructure confirms the capex arms race is <strong>accelerating</strong>.</p><blockquote>The window for faith-based AI spending has closed. 
What opens next is a period where execution, unit economics, and strategic independence determine who captures the value.</blockquote><h3>The contrarian opportunity is real — but the math is harder</h3><p>Mag Seven stocks are down <strong>8-34% from peaks</strong>, and AI-native companies' valuations have compressed 20-35% in five months. Cash-rich acquirers with clear monetization stories have a rare window. But underwriting acquisitions under a potential <strong>rate-hike regime</strong> requires balance-sheet cash, not debt-funded deals. SoftBank's $40B unsecured bridge loan from JPMorgan and Goldman to fund its OpenAI commitment is the counter-example: leverage concentration that defines market cycle peaks.</p><p>The companies that emerge strongest will be those that can <strong>demonstrate AI monetization now</strong>, maintain financial flexibility in a higher-rate world, and avoid dependency on volatile AI startup partners — a lesson Disney learned when OpenAI killed Sora overnight, vaporizing a $1B partnership.</p><hr><h3>What this means for your FY26 plan</h3><p>Every line of AI infrastructure spend needs to pass a new test: <strong>12-month payback or clear revenue attribution</strong>. Speculative long-term bets that were fundable at 4% rates and faith-based multiples now require explicit justification. Meanwhile, your compute procurement strategy may be mispriced — H100 contracts written with depreciation assumptions are underwater.</p>
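The "12-month payback or clear revenue attribution" test can be made concrete with a small screen. A minimal sketch with entirely hypothetical line items and figures (the monthly spend and attributed gross profit numbers below are placeholders, not figures from this briefing):

```python
# Hypothetical payback screen for AI infrastructure line items.
# All figures are illustrative placeholders, not numbers from this briefing.

def payback_months(monthly_spend: float, monthly_gross_profit: float) -> float:
    """Months of attributed gross profit needed to recover one year of spend."""
    if monthly_gross_profit <= 0:
        return float("inf")
    return 12 * monthly_spend / monthly_gross_profit

# Hypothetical line items: (monthly spend, gross profit credibly attributed to AI).
line_items = {
    "copilot_rollout": (120_000, 200_000),
    "gpu_cluster":     (450_000, 300_000),
    "speculative_rnd": (250_000, 0),
}

for name, (spend, profit) in line_items.items():
    months = payback_months(spend, profit)
    verdict = "PASS" if months <= 12 else "FAIL: needs explicit justification"
    print(f"{name}: payback {months:.1f} mo -> {verdict}")
```

Any line that cannot name the gross profit it generates fails automatically, which is the point of the test: speculative spend is not forbidden, it just loses its default approval.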

    Action items

    • Convene an emergency capital allocation review this week — stress-test all AI infrastructure spending against a rate-hike scenario with compressed multiples
    • Audit GPU procurement and cloud compute contracts against H100 appreciation reality by end of month — model buy vs. lease vs. lock-in scenarios
    • Reframe board AI narrative from 'infrastructure investment' to 'revenue acceleration' — prepare updated materials quantifying AI-driven revenue, not spending, before next board meeting
    • Build an acquisition watchlist of AI-native companies at compressed valuations — identify targets accretive even under higher-rate scenarios, funded from balance sheet cash
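The buy vs. lease vs. lock-in modeling in the second action item reduces to a toy total-cost comparison once you assume rental rates keep appreciating rather than depreciating. A minimal sketch; every price, appreciation rate, and residual value below is a hypothetical placeholder:

```python
# Toy 24-month total-cost comparison for one GPU under *appreciating*
# rental prices. Every number here is a hypothetical placeholder.

HORIZON = 24  # months

def on_demand_cost(start_rate: float, monthly_appreciation: float) -> float:
    """Pay the spot rental rate each month while it drifts upward."""
    return sum(start_rate * (1 + monthly_appreciation) ** m for m in range(HORIZON))

def locked_cost(locked_rate: float) -> float:
    """Fixed-rate reservation held for the full horizon."""
    return locked_rate * HORIZON

def buy_cost(purchase_price: float, resale_value: float, opex_per_month: float) -> float:
    """Buy outright, pay power/hosting, recover residual value at the end."""
    return purchase_price + opex_per_month * HORIZON - resale_value

scenarios = {
    "on-demand": on_demand_cost(start_rate=2_000, monthly_appreciation=0.02),
    "locked":    locked_cost(locked_rate=2_300),
    "buy":       buy_cost(purchase_price=30_000, resale_value=18_000, opex_per_month=600),
}
for name, cost in sorted(scenarios.items(), key=lambda kv: kv[1]):
    print(f"{name}: ${cost:,.0f} over {HORIZON} months")
```

Under these placeholder assumptions, outright purchase with a strong residual value beats both rental paths, which is what an appreciation regime implies; swap in your own vendor quotes before drawing conclusions.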

    Sources: Three converging shocks just repriced your capital plan · GPU prices defying gravity + Anthropic's leaked Capybara tier · Anthropic's cyber-capable model just repriced your security stack

  02

    AI Labs Just Became Existential Threats to Entire Software Verticals — Cybersecurity Is the Canary

    <h3>The cybersecurity selloff is a preview of a cross-vertical pattern</h3><p>Anthropic's accidentally leaked <strong>Claude Mythos</strong> — described internally as a "step change" posing "unprecedented cybersecurity risks" — triggered a cybersecurity sector selloff that matters far beyond security stocks. This is the <strong>first time a foundation model lab has been perceived as an existential threat to an entire established software vertical</strong>. The market is internalizing that AI labs can extend models into specialized domains with marginal effort, while incumbents face asymmetric competitive responses. If Anthropic can reason about cyber threats at expert level, the same dynamic applies to legal research, financial analysis, and medical diagnosis.</p><h3>RSA 2026 confirms the dissolution thesis</h3><p>Field intelligence from <strong>50+ vendor conversations at RSA 2026</strong> validates the structural shift: every security product and service is becoming <strong>a commodity API call in customer-orchestrated agentic workflows</strong>. The 1-3 year timeline for this transition is aggressive but credible. The winners won't be those with the best detection engine or dashboard — they'll be those delivering the most reliable, composable API primitives that integrate into whatever orchestration layer the customer selects.</p><blockquote>Companies optimizing their current product model are optimizing for irrelevance. The question isn't whether AI labs commoditize your vertical — it's whether you've built a defensible position before they do.</blockquote><h3>The irony that matters strategically</h3><p>A safety-focused frontier lab accidentally exposed its most powerful unreleased model through an <strong>unsecured data cache</strong>. The operational security failure is strategically relevant: even organizations explicitly designed around AI safety cannot maintain basic data protection as capability accelerates. 
For any company integrating frontier AI models, your risk model should account for <strong>the provider's operational security</strong>, not just the model's capabilities.</p><h3>Distribution already trumps capability</h3><p>Yahoo's launch of <strong>Anthropic-powered Scout to 250 million users</strong> proves the AI application market has already bifurcated into a model layer (where labs compete on capability) and a distribution layer (where incumbents compete on reach). Anthropic's dual strategy — competing directly via Claude while licensing to Yahoo — is the <strong>AWS playbook applied to AI</strong>: simultaneously compete with and supply your competitors. Model quality is no longer a differentiator below the absolute frontier. Your moat must be in proprietary data, unique user context, or distribution scale.</p><hr><h3>The Pentagon signal</h3><p>The Department of Defense is standardizing AI cybersecurity requirements for all defense contractors — creating both a compliance moat for early adopters and a market signal about where the security stack is heading. Companies that align early gain access; those that don't get locked out of the fastest-growing security budget in the world.</p>

    Action items

    • Audit your product's API surface readiness for consumption as composable primitives in agentic workflows — commission architecture review this quarter
    • Assess which capabilities in your security vendor stack could be commoditized by foundation model providers within 18 months — build a replacement roadmap
    • Evaluate Anthropic's expanding capability set for platform-build opportunities in security-adjacent AI before the next vendor renewal cycle

    Sources: RSA 2026 confirms: Security is dissolving into API calls · Anthropic's cyber-capable model just repriced your security stack · Meta's 6-front offensive, Iranian cyber escalation

  03

    CEO Workforce Compression Has a Name, a Number, and an 18-Month Clock

    <h3>The CEO class just went on record</h3><p>Jack Dorsey told the <strong>JPMorgan Tech100 conference</strong> — attended by the most influential capital allocators in tech — that using the coding agent <strong>Goose</strong> led him to conclude he could nearly <strong>halve Block's workforce</strong>. This isn't a thought experiment. It's a public declaration at an investor conference. Databricks CEO <strong>Ali Ghodsi</strong> echoed the identical pattern. These are sitting CEOs of major technology companies announcing, under their own names, to people who allocate capital, that AI agents have changed their mental model of how many people they need.</p><blockquote>We're entering a phase where CEOs who personally use AI tools will set workforce expectations that cascade through their organizations — and the competitive pressure to demonstrate similar leverage will force every tech company to answer the question within 12-18 months.</blockquote><h3>The productivity model is already being defined</h3><p>Mitchell Hashimoto's disclosed agent-assisted workflow — <strong>agents planning while he codes, agents coding while he reviews</strong> — is the operational template. This isn't about replacing engineers; it's about <strong>capability multiplication</strong> through parallel AI-human loops. The talent market data reinforces this: at <strong>3.2 AI jobs per qualified candidate</strong>, the traditional hiring playbook is broken anyway. Investing in agent-augmented workflows for your current team beats competing for scarce AI specialists at premium prices.</p><h3>The board pressure cascade</h3><p>The strategic implication is severe and immediate. When Dorsey makes this statement publicly, every Block competitor's board will ask: <em>"Are we getting the same leverage?"</em> When their answer is "we haven't measured it," the follow-up will be: <em>"Why not?"</em> This creates a <strong>cascading pressure wave</strong> through the technology industry. 
Companies that can demonstrate agent-driven productivity gains will be rewarded by investors who just watched Microsoft lose a third of its value for <em>not</em> showing AI returns. Companies that can't will face the same questions Microsoft is facing now.</p><hr><h3>What Anthropic's IPO means for this dynamic</h3><p>Anthropic CEO Amodei chose his Tech100 stage time to discuss how humans will struggle to contain AI — while simultaneously pursuing an IPO and navigating a U.S. government standoff. Post-IPO Anthropic will be subject to <strong>quarterly earnings pressure and public disclosure requirements</strong> that may conflict with its safety-first positioning. Any strategic dependency on Anthropic models should be stress-tested against this transition <em>now</em>, including pricing changes that could follow a public offering. The current era of artificially cheap AI inference — subsidized by venture capital seeking growth — has a visible expiration date.</p>

    Action items

    • Commission a 2-week internal AI agent productivity audit — have engineering leadership use coding agents on real production tasks and quantify headcount-equivalent output before your next board meeting
    • Pilot agent-assisted development workflows (Hashimoto model: parallel agent-human loops) with one engineering team this quarter
    • Develop a board-ready narrative on AI-driven workforce transformation — get ahead of the conversation Dorsey just started publicly
    • Model AI inference cost scenarios at 3x, 5x, and 10x current pricing and stress-test agent-dependent unit economics before FY27 planning
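The 3x/5x/10x inference-pricing stress test in the last action item is simple unit economics. A minimal sketch with hypothetical per-seat revenue and cost figures (none of these numbers come from the briefing):

```python
# Gross-margin stress test for an agent-dependent product under rising
# inference prices. Revenue and cost figures are hypothetical placeholders.

SEAT_PRICE = 50.0           # monthly revenue per seat (hypothetical)
INFERENCE_COST_TODAY = 9.0  # monthly inference spend per seat (hypothetical)
OTHER_COGS = 6.0            # hosting, support, etc. per seat (hypothetical)

def gross_margin(inference_multiplier: float) -> float:
    """Per-seat gross margin after scaling today's inference cost."""
    cogs = INFERENCE_COST_TODAY * inference_multiplier + OTHER_COGS
    return (SEAT_PRICE - cogs) / SEAT_PRICE

for mult in (1, 3, 5, 10):
    m = gross_margin(mult)
    flag = "" if m >= 0.60 else "  <- below a typical 60% software floor"
    print(f"{mult:>2}x inference pricing: gross margin {m:.0%}{flag}")
```

With these placeholder numbers the product clears a 60% gross-margin floor today but goes margin-negative at 5x inference pricing; surfacing that kind of cliff before FY27 planning is the purpose of the exercise.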

    Sources: Dorsey says AI agents could halve Block's headcount · RSA 2026 confirms: Security is dissolving into API calls · GPU prices defying gravity + Anthropic's leaked Capybara tier

◆ QUICK HITS

  • Update: SoftBank secured a $40B unsecured bridge loan from JPMorgan and Goldman to fund its $30B OpenAI commitment — leverage concentration at this scale signals a peak in the AI capital cycle and leaves AI valuations disconnected from market equilibrium

    Anthropic's cyber-capable model just repriced your security stack

  • SAP acquired Reltio (master data management) — signaling that clean, unified, governed data estates are now the competitive prerequisite for AI value extraction, not a back-office nice-to-have

    Anthropic's cyber-capable model just repriced your security stack

  • 30% of LLM-generated code contains vulnerabilities, while AI tools increase competitive entry by 42% without improving success rates — more code from more people with no quality gain is expanding your attack surface faster than security teams can keep pace

    Meta's 6-front offensive, Iranian cyber escalation

  • Open-source models close to within 5% of frontier on coding: GLM-5.1 scored 45.3 vs. Claude Opus 4.6's 47.9 — the gap is functionally irrelevant for most production workloads, eroding closed-model API pricing power

    GPU prices defying gravity + Anthropic's leaked Capybara tier

  • Midjourney — profitable and bootstrapped — is being squeezed by Google to the point where its founder may take VC funding for the first time, confirming that product-market fit and profitability are necessary but insufficient for survival against hyperscaler competition

    Dorsey says AI agents could halve Block's headcount

  • Update: The FBI director's personal and official email accounts were verified as breached, alongside Handala's destructive wipe of tens of thousands of Stryker medical devices — state-sponsored cyber operations have shifted from espionage to destructive attacks designed for maximum operational damage

    Meta's 6-front offensive, Iranian cyber escalation

  • Whoop is defying FDA guidance on medical features in wearables — a harbinger of the regulatory collision between AI-powered health tech and existing approval frameworks that will affect every company in the space

    Anthropic's cyber-capable model just repriced your security stack

  • Update: Delve allegedly issued fraudulent SOC2 and ISO27001 certifications for LiteLLM (3.4M daily downloads) — Karpathy assessed that the attack code was 'vibe coded' by AI, meaning the barrier to supply-chain attacks has collapsed to near-zero skill requirements

    RSA 2026 confirms: Security is dissolving into API calls

BOTTOM LINE

The AI industry just split into two simultaneous realities that your strategy must reconcile: investors are punishing AI spending without receipts (Microsoft down 34%, rate expectations flipping to hikes, H100 costs rising above launch prices), while CEOs who actually use AI agents are publicly concluding they need half the headcount (Dorsey at Block, Ghodsi at Databricks). Meanwhile, Anthropic's leaked cyber-capable model triggered the first selloff of an entire software vertical by an AI lab — a pattern that will repeat across legal, finance, and healthcare within 2-3 years. The organizations that win the next 18 months are those stress-testing their capital plans against higher rates and rising compute costs, measuring agent productivity gains in their own engineering teams this quarter, and building API-composable products before foundation model labs commoditize their vertical from above.

Frequently asked

What specifically changed in rate expectations, and why does it invalidate FY26 plans?
In 30 days, market pricing flipped from a 90% probability of rate cuts to a 52% probability of hikes — one of the fastest monetary sentiment reversals on record. Combined with $110 oil from a structurally persistent Iran conflict, this means most 2026-2027 capital plans built on cheap money and stable energy assumptions are now mispriced and need stress-testing against a higher-rate, higher-input-cost regime.
Why are H100 GPU rental prices rising instead of depreciating as expected?
H100 rentals now exceed their October 2022 launch-era prices because reasoning models and AI agents have made four-year-old chips far more useful than any depreciation schedule assumed. Demand is structural, not cyclical, which means compute contracts written on standard depreciation curves are underwater and procurement strategies need to be re-modeled for buy-vs-lease-vs-lock-in under appreciation, not decline.
How should I respond to Dorsey's claim that AI agents could halve Block's headcount?
Treat it as a public benchmark your board will reference within 90 days, not a thought experiment. Commission a short internal audit where engineering leaders use coding agents on real production work and quantify headcount-equivalent output, then pilot parallel agent-human workflows with one team. The goal is measured leverage data and a board-ready narrative before competitors force the question.
What does the cybersecurity selloff tied to Anthropic's leaked model signal for other software verticals?
It's the first time a foundation model lab has been priced as an existential threat to an established software category, and the pattern will extend to legal, financial, and medical software. Field intelligence from RSA 2026 shows security products are dissolving into composable API primitives inside customer-orchestrated agent workflows on a 1-3 year horizon. Any vertical software roadmap should be re-evaluated for API-primitive readiness now.
Is the valuation compression in AI-native companies actually an acquisition opportunity?
Yes, but only for balance-sheet-funded deals. AI-native valuations are down 20-35% over five months and Mag Seven names are down 8-34% from peaks, creating a rare window for cash-rich acquirers with clear monetization stories. Debt-funded M&A is dangerous under a potential rate-hike regime — SoftBank's $40B unsecured bridge for its OpenAI commitment is the cautionary example, not the template.
