Dorsey's Goose Habit Signals Halved Headcount at Block
Topics: Agentic AI · AI Capital · LLM Inference
Jack Dorsey told JPMorgan's elite Tech100 that using AI coding agent Goose every morning led him to conclude he could nearly halve Block's workforce — and Databricks' CEO described identical pressure. When C-suite executives personally adopt coding agents and start doing headcount math, reorgs follow within quarters, not years. If you aren't proactively modeling your team's AI-augmented productivity for leadership right now, someone above you will do it with cruder math and less nuance.
◆ INTELLIGENCE MAP
01 CEO Coding Agent Adoption → Workforce Restructuring
Act now: Dorsey (Block) and Ghodsi (Databricks) have both publicly stated that AI coding agents prompted them to rethink headcount. HashiCorp's Hashimoto runs agents in parallel all day. The pattern: when CEOs prototype with agents, reorg conversations follow within quarters.
- Block cut target: ~50%
- AI jobs per qualified candidate: 3.2
- Agent UX pattern: fleet management for software
- CEOs claiming cuts: 50%
- AI improves outcomes: 0%
02 AI Spend Credibility Wall — Markets Demanding ROI Proof
Monitor: Microsoft's worst quarter since 2008 (down 34%) and the Nasdaq in correction (down 11%) signal the 'invest now, monetize later' narrative has hit its wall. Yet SoftBank secured $40B for OpenAI and Moonshot is raising at $18B. Public markets punish; private capital floods in. Your AI budget pitch needs hard unit economics now.
- MSFT decline: 34%
- Nasdaq correction: 11%
- SoftBank AI loan: $40B
- Rate hike probability: 52%
03 Anthropic's Compounding Vendor Risk — Capybara, Throttling, and 250M New Users
Monitor: Anthropic leaked Capybara (a tier above Opus with substantially better coding/reasoning scores), is throttling existing API customers, and just licensed tech to Yahoo Scout's 250M users. Cybersecurity stocks slid on the Capybara rumor alone. You're now competing for Anthropic compute with Yahoo's entire user base.
- GLM-5.1 coding score: 45.3
- Opus 4.6 coding score: 47.9
- Yahoo Scout users: 250M
- Open-closed gap: 5.4%
04 H100 Prices Defy Depreciation — Compute Cost Models Are Broken
Monitor: H100 rental prices have reversed their depreciation curve since December 2025 and now exceed their October 2022 launch rates. Chip shortage plus the agent/reasoning demand inflection killed the declining-cost assumption. Any cost model built on 2024 GPU pricing is materially wrong.
- H100 rental price index (Oct 2022 launch = 100):
- Oct 2022 (launch): 100
- 2024 (expected under depreciation): 60
- Mar 2026 (actual): 110
05 RSA 2026: Security Products Collapse Into API Calls Within 1-3 Years
Background: Daniel Miessler spoke with ~50 RSA vendors and found most building proprietary AI dashboards — the wrong play. His thesis: all security products become API calls consumed by customer-controlled agentic orchestration within 1-3 years. The question for any B2B PM: can your product's core value be consumed headlessly?
- Transformation timeline: 1-3 years
- Vendors interviewed: ~50
- Building the wrong thing: most (proprietary dashboards)
- Now: proprietary UI era
- 2027: API-first migration
- 2028-29: agent backplane era
◆ DEEP DIVES
01 The Dorsey Moment: CEOs Are Doing AI Headcount Math — And You Need to Do It First
<h3>What Happened at Tech100</h3><p>At JPMorgan's invitation-only Tech100 conference (March 25-27, Yellowstone Club, Montana), <strong>Jack Dorsey told investors that using AI coding agent Goose every morning led him to conclude he could nearly halve Block's workforce</strong>. Databricks CEO Ali Ghodsi described the identical realization hitting his team. These aren't startup founders hypothesizing — they're CEOs of major companies telling their largest investors, on the record, that AI agents have changed their headcount assumptions.</p><blockquote>When C-suite executives personally adopt AI coding tools and start doing mental math on headcount, reorg conversations follow within quarters, not years.</blockquote><h3>The Cross-Source Pattern</h3><p>This isn't isolated executive enthusiasm. HashiCorp co-founder <strong>Mitchell Hashimoto</strong> describes running AI agents constantly in the background as his production workflow — when he codes, they plan; when they code, he reviews. The emerging agent UX pattern across coding tools is converging on <strong>'fleet management for software'</strong> — kanban-like cards, isolated worktrees, agent-owned tasks, and diff-based review. CursorBench data shows a median of <strong>181 lines changed per task</strong> in real-world agent sessions. OpenAI's Codex is building a plugin ecosystem around this paradigm.</p><h3>The Critical Contradiction</h3><p>Here's the tension every PM must internalize: <strong>research shows AI tools increase competition entry by 42% without improving individual success rates</strong>. CEOs are seeing dramatic personal productivity gains and extrapolating to workforce-wide cuts, but the evidence suggests AI democratizes participation rather than multiplying quality. 
The PMs who thrive in this environment are those who can articulate the nuanced reality: AI agents change <em>what</em> your team works on, not just <em>how many</em> people you need.</p><h3>Why You Must Move First</h3><p>The AI job market already shows <strong>3.2 open roles per qualified candidate</strong>, with most applicants lacking critical skills. You can't hire your way to faster delivery. But if you wait for your CEO to have their own 'Goose morning' and dictate cuts from the top, you lose the ability to shape the outcome. The PM who proactively models team productivity with agents — and proposes a reallocation plan that protects your strongest engineers while capturing the gains — controls the narrative.</p>
Action items
- Instrument your team's AI coding tool usage and quantify the productivity delta (tasks completed, time-to-merge, lines per session) over the next two sprints
- Evaluate Goose (Block's coding agent) alongside your current AI dev tools stack this sprint — if Dorsey's drawing workforce conclusions from it, you need direct experience
- Draft a 'team rebalancing' proposal by end of Q2 that shifts engineer hours from code production to specification, review, and agent orchestration
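The instrumentation in the first action item can be sketched in a few lines. This is a minimal illustration, not a real tracker integration: the `MergedTask` record and its fields are assumptions standing in for whatever your PR or ticketing system exports, and "agent-assisted" labeling is something you'd have to capture yourself (e.g. a PR tag).

```python
from dataclasses import dataclass
from datetime import datetime
from statistics import median

@dataclass
class MergedTask:
    opened: datetime       # when the task/PR was opened
    merged: datetime       # when it was merged
    lines_changed: int     # diff size, comparable to CursorBench's lines-per-task
    agent_assisted: bool   # whether a coding agent produced the bulk of the change

def productivity_delta(tasks: list[MergedTask]) -> dict:
    """Compare agent-assisted vs. manual tasks on time-to-merge and diff size."""
    def summarize(group: list[MergedTask]) -> dict:
        hours = [(t.merged - t.opened).total_seconds() / 3600 for t in group]
        return {
            "tasks": len(group),
            "median_hours_to_merge": median(hours) if hours else None,
            "median_lines_changed": median(t.lines_changed for t in group) if group else None,
        }
    assisted = [t for t in tasks if t.agent_assisted]
    manual = [t for t in tasks if not t.agent_assisted]
    return {"agent_assisted": summarize(assisted), "manual": summarize(manual)}
```

Run over two sprints of merged work, the two summaries give you the bottom-up delta (tasks, time-to-merge, lines per session) the deep dive argues you should bring to leadership before they bring cruder math to you.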
Sources: CEOs are using coding agents daily and planning 50% headcount cuts — your team model needs rethinking now · RSA 2026 verdict: Your product's UI is the wrong moat — API-first agentic integration is the only play left · Your AI compute costs are about to spike — H100 prices defying depreciation while agents reshape your build-vs-buy calculus
02 The AI ROI Credibility Wall — Public Markets Are Done Waiting
<h3>The Numbers Tell a Brutal Story</h3><p>Microsoft's stock is <strong>down 34% since October</strong> — its worst quarter since 2008. The Nasdaq 100 is in correction territory, <strong>down 11%</strong> from its peak, with Business Insider attributing part of the decline to 'AI anxiety.' The S&P 500 just posted <strong>five consecutive down weeks</strong>, the longest streak since 2022. Investors are described as <em>'recoiling'</em> from continued AI infrastructure spending without clear returns — and a new fear is driving the selloff: that AI startups building agents will <strong>replace, not augment</strong>, incumbent software products like Microsoft's.</p><h3>The Capital Contradiction</h3><p>Here's what makes this moment unprecedented for product leaders: <strong>public markets are punishing AI spend while private capital floods in</strong>. SoftBank just secured a $40B unsecured bridge loan from JPMorgan and Goldman Sachs to fund its OpenAI commitment. Moonshot AI is raising at $18B. The implication for your planning: your company's budget may tighten (correction-driven caution), but your AI-focused startup competitors will remain exceptionally well-funded.</p><blockquote>The market is no longer differentiating between 'doing AI' and 'getting ROI from AI.' Your next AI feature pitch needs unit economics, not a demo.</blockquote><h3>The Macro Compressor</h3><p>Rate expectations have undergone the most dramatic reversal in recent memory: CME FedWatch data shows markets went from pricing <strong>90% probability of rate cuts</strong> by September to <strong>52% probability of a rate hike</strong> this year — in a single month. Oil at $110/barrel from the Iran conflict is the primary driver. 
For product leaders, this directly impacts enterprise deal velocity, customer willingness to spend on new tools, and internal headcount budgets by Q3.</p><h3>What 'AI Productivity' Actually Means</h3><p>The research finding that AI tools <strong>increase competition entry by 42% without improving individual success rates</strong> is the data point every PM needs when framing AI feature value. If you're telling leadership 'AI will improve user outcomes,' the evidence says otherwise. AI <em>lowers barriers to participation</em>. The winning products measure adoption, activation, and time-to-first-value — not quality improvement claims that don't hold up to scrutiny.</p>
Action items
- Rewrite your top AI feature's business case with hard ROI metrics — cost savings per user, revenue per AI-assisted conversion, or churn reduction with confidence intervals — before your next planning review
- Accelerate any enterprise deals planned for Q3 into Q2 wherever contract flexibility allows
- Reframe AI feature OKRs from 'improved outcomes' to 'expanded access and engagement' based on the 42%/0% research, and update PRD metrics sections accordingly
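The "churn reduction with confidence intervals" called for in the first action item is a two-proportion comparison. As a sketch, assuming you have churned/total counts from a holdout experiment, the normal-approximation interval is one standard (and deliberately simple) choice; the numbers below are hypothetical.

```python
from math import sqrt

def churn_reduction_ci(churned_ctrl: int, n_ctrl: int,
                       churned_treat: int, n_treat: int,
                       z: float = 1.96) -> tuple[float, float, float]:
    """Point estimate and ~95% CI for churn-rate reduction (control minus treatment),
    using the normal approximation for a difference of two proportions."""
    p_c = churned_ctrl / n_ctrl
    p_t = churned_treat / n_treat
    diff = p_c - p_t  # positive => the AI feature reduced churn
    se = sqrt(p_c * (1 - p_c) / n_ctrl + p_t * (1 - p_t) / n_treat)
    return diff, diff - z * se, diff + z * se
```

If the lower bound of the interval sits above zero, the business case survives scrutiny; if it straddles zero, you have a demo-driven narrative, which is exactly what this market stopped paying for.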
Sources: Your AI partnership strategy just got a $1B cautionary tale — and your budget window is closing · Anthropic may commoditize cybersecurity — and Whoop's 83% DAU/MAU is the engagement benchmark your roadmap needs · Anthropic throttling + 30% vulnerable AI code: two risks already in your stack
03 Anthropic's Triple Risk Event — Capybara, Throttling, and 250M New Mouths to Feed
<h3>The Capybara Leak Changes the Frontier Calculus</h3><p>Anthropic accidentally exposed its most powerful unreleased model through an unsecured data cache. <strong>Capybara sits above Opus</strong> as a new tier — 'larger and more intelligent' than Claude Opus 4.6, with substantially better coding, academic reasoning, and cybersecurity benchmark scores. Fortune corroborated the leak. The cybersecurity capabilities alone were enough to <strong>crater security stocks on March 27</strong>, signaling that markets now price in AI labs commoditizing entire vertical software categories with a single model release.</p><h3>The Capacity Squeeze Is Real</h3><p>Anthropic is simultaneously throttling existing Claude API customers while licensing technology to <strong>Yahoo Scout's 250 million users</strong>. If you're building on Anthropic's API, you're now competing for compute with Yahoo's entire installed base plus Claude's organic consumer demand. Widespread <strong>529 errors</strong> were reported during the leak period. Google is reportedly close to funding Anthropic's data center buildout, which eventually solves the capacity problem — but not on your Q2 timeline.</p><blockquote>Capybara will exist as a capability but be scarce and expensive. Products that can integrate quickly and absorb the premium get a temporary moat. Products that can't need a fallback plan now.</blockquote><h3>The Open Model Escape Valve</h3><p>The good news for build-vs-buy: the open-closed gap has narrowed dramatically. Zhipu's <strong>GLM-5.1 scored 45.3 on coding benchmarks vs. Claude Opus 4.6's 47.9</strong> — a 5.4% gap, down from 26% with the prior GLM-5 (35.4). Quantization breakthroughs like TurboQuant now enable running <strong>Qwen 3.5-9B on a standard MacBook Air</strong> (M4, 16GB) with 20K context. RotorQuant achieves <strong>10-19x speed improvements</strong> at 0.990 vs. 0.991 cosine similarity — essentially equivalent quality. 
The Arena leaderboard confirms the open-closed gap is 'much narrower than a year ago.'</p><h4>Your Vendor Strategy Decision Matrix</h4><table><thead><tr><th>Scenario</th><th>Risk</th><th>Action</th></tr></thead><tbody><tr><td>Anthropic throttling worsens</td><td>Feature degradation, SLA breach</td><td>Multi-model fallback to OpenAI/Google</td></tr><tr><td>Capybara launches, capacity-constrained</td><td>Competitors with access get temporary moat</td><td>Early access request + premium budget allocation</td></tr><tr><td>Post-IPO pricing increase</td><td>Unit economics compression</td><td>Open model evaluation for non-frontier tasks</td></tr><tr><td>Government standoff restricts capabilities</td><td>Feature removal or access restriction</td><td>Abstraction layer enabling rapid model swap</td></tr></tbody></table>
Action items
- Implement multi-model fallback architecture this sprint — map which features break during Anthropic throttling and configure automated failover to OpenAI or Google endpoints
- Run a structured evaluation of open models (Qwen 3.5-35B, GLM-5.1) against your current Anthropic usage for your top 3 use cases within 30 days
- Run a 'commoditization stress test' on your roadmap: for every major feature, ask what happens if a foundation model ships this as a native capability in 6 months
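The multi-model failover in the first action item reduces to a small control loop. This is a sketch under stated assumptions: the provider callables and `ProviderOverloaded` exception are hypothetical stand-ins for your SDK wrappers, with capacity errors (like the 529s reported during the leak period) modeled as that exception.

```python
import random
import time

class ProviderOverloaded(Exception):
    """Raised by a wrapper when a provider returns a capacity error (e.g. HTTP 529)."""

def call_with_fallback(prompt: str, providers, retries_per_provider: int = 2,
                       base_delay: float = 0.5):
    """Try each provider in order; retry with backoff on overload, then fail over.

    `providers` is an ordered list of (name, callable) pairs; each callable takes
    a prompt and returns a completion, or raises ProviderOverloaded.
    """
    errors = []
    for name, call in providers:
        for attempt in range(retries_per_provider):
            try:
                return name, call(prompt)
            except ProviderOverloaded as exc:
                errors.append((name, attempt, exc))
                # jittered exponential backoff before retrying the same provider
                time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.0))
    raise RuntimeError(f"All providers overloaded: {errors}")
```

Ordering the list Anthropic → OpenAI → Google gives you the "automated failover" the action item asks for, and the same abstraction layer is what the decision matrix's rapid model swap scenario assumes.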
Sources: Your AI compute costs are about to spike — H100 prices defying depreciation while agents reshape your build-vs-buy calculus · RSA 2026 verdict: Your product's UI is the wrong moat — API-first agentic integration is the only play left · Anthropic may commoditize cybersecurity — and Whoop's 83% DAU/MAU is the engagement benchmark your roadmap needs · Anthropic throttling + 30% vulnerable AI code: two risks already in your stack
◆ QUICK HITS
Update: OpenAI's Sora shutdown destroyed a planned $1B three-year Disney partnership overnight — the most expensive platform-dependency failure in AI to date, compounding Disney's exposure alongside Epic Games' 1,000+ layoffs threatening a separate $1.5B investment
Your AI partnership strategy just got a $1B cautionary tale — and your budget window is closing
Whoop reports 83% DAU/MAU — second only to WhatsApp — on a $200-360/yr subscription that bundles hardware free, with 100%+ revenue growth and cash-flow positive status; study their continuous data loop (collect → personalize → prompt → re-measure) as an engagement architecture blueprint
Anthropic may commoditize cybersecurity — and Whoop's 83% DAU/MAU is the engagement benchmark your roadmap needs
LLM code generation tools produce vulnerable code 30% of the time in tests — if your team uses Codex, Claude Code, or Copilot, add mandatory static analysis gates to your CI/CD pipeline targeting AI-generated code blocks this sprint
Anthropic throttling + 30% vulnerable AI code: two risks already in your stack
Midjourney — profitable, category-defining, no VC — may be forced to accept venture capital as Google intensifies AI image generation competition, and is pivoting into hardware; the stress test for every AI vertical: if Midjourney can't stay independent, what's your moat?
CEOs are using coding agents daily and planning 50% headcount cuts — your team model needs rethinking now
China detained Manus AI founders after their $2B sale to Meta, following the startup's relocation from China to Singapore — a new category of geopolitical risk for AI M&A and talent acquisition
RSA 2026 verdict: Your product's UI is the wrong moat — API-first agentic integration is the only play left
H100 rental prices have reversed their depreciation curve since December 2025 and now exceed their October 2022 launch rates — reforecast any GPU cost model built on 2024 declining-cost assumptions immediately
Your AI compute costs are about to spike — H100 prices defying depreciation while agents reshape your build-vs-buy calculus
Repeating prompts multiple times boosts accuracy by up to 4.7% on translation and summarization tasks in smaller language models — a zero-cost technique for any production pipeline using sub-10B models
Anthropic throttling + 30% vulnerable AI code: two risks already in your stack
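The quick hit above doesn't specify how repeated runs are aggregated; one common way to exploit repeated sampling is a majority vote over the answers. A minimal sketch, where `generate` is any callable wrapping a sub-10B model's completion endpoint (an assumption, not a specific API):

```python
from collections import Counter

def repeated_prompt_vote(generate, prompt: str, n: int = 5) -> str:
    """Send the same prompt n times and return the most common answer.

    Light normalization (strip/lowercase) lets trivially different strings
    vote together; for free-form outputs you'd need a stronger equivalence check.
    """
    answers = [generate(prompt).strip().lower() for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]
```

The cost is n inference calls, so the technique pays off mainly where small-model inference is cheap relative to the value of the extra accuracy.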
SAP agreed to acquire Reltio (master data management, $230M+ raised) — enterprise incumbents now view clean, unified data as the critical missing layer for AI adoption; add a data readiness assessment to your enterprise onboarding flow
Anthropic may commoditize cybersecurity — and Whoop's 83% DAU/MAU is the engagement benchmark your roadmap needs
BOTTOM LINE
Tech CEOs are personally using AI coding agents, doing headcount math, and concluding they can halve their workforces — while public markets just posted the worst tech quarter since 2008 demanding AI ROI proof, and Anthropic is simultaneously leaking a step-change model, throttling existing customers, and licensing to 250 million Yahoo users. The PMs who survive this compression aren't building more AI features; they're proactively modeling AI-augmented team productivity before leadership does it for them, hardening vendor diversification against a capacity-constrained Anthropic, and replacing every 'AI will improve outcomes' claim with unit economics — because the evidence says AI expands participation 42% without improving individual results.
Frequently asked
- Why should a PM model AI-augmented productivity before leadership does?
- Because CEOs like Jack Dorsey and Ali Ghodsi are already doing the headcount math themselves after daily use of coding agents, and when they act they use crude extrapolations from personal experience. A PM who brings bottom-up data on tasks completed, time-to-merge, and agent utilization can shape the reorg narrative and protect the right engineers instead of absorbing top-down cuts.
- Does AI coding agent adoption actually improve output quality, or just participation?
- Current research shows AI tools increase competition entry by roughly 42% without improving individual success rates, meaning they lower barriers to participation rather than lifting outcome quality. That makes 'AI will improve user outcomes' a weak framing for features; adoption, activation, and time-to-first-value are more defensible OKRs than quality-uplift claims leadership can't verify.
- How should the team rebalance work if coding agents take over production?
- Shift engineer hours from writing code toward specification, review, and agent orchestration — the bottlenecks agents don't solve. With CursorBench showing a median of 181 lines changed per agent task, review throughput and clear specs become the constraint, so roles should be reshaped around planning, diff review, and fleet management of parallel agent sessions.
- What should the vendor strategy be given Anthropic's throttling and the Capybara leak?
- Diversify now. Anthropic is throttling API customers while onboarding Yahoo Scout's 250M users, so build a multi-model fallback to OpenAI and Google, add an abstraction layer for rapid model swaps, and evaluate open models like Qwen 3.5 and GLM-5.1 for non-frontier tasks where the benchmark gap to Claude Opus is now roughly 5%.
- How does the market selloff change how AI feature business cases should be written?
- Investors are no longer accepting 'invest now, monetize later' after Microsoft's 34% drop and five straight down weeks in the S&P 500, and that pressure cascades into internal planning. Rewrite top AI feature cases with hard unit economics — cost savings per user, revenue per AI-assisted conversion, or churn reduction with confidence intervals — rather than demo-driven narratives.
◆ RECENT IN PRODUCT
- OpenAI killed Custom GPTs and launched Workspace Agents that autonomously execute across Slack and Gmail — the same week…
- Anthropic's internal 'Project Deal' experiment proved that users with stronger AI models negotiate systematically better…
- GPT-5.5 launched at $5/$30 per million tokens while DeepSeek V4-Flash shipped at $0.14/$0.28 under MIT license — a 35x p…
- Meta burned 60.2 trillion tokens ($100M+) in 30 days — and most of it was waste.
- OpenAI's GPT-Image-2 launched with API access, a +242 Elo lead over every competitor, and day-one integrations from Figm…