PROMIT NOW · PRODUCT DAILY · 2026-04-14

Seat-Based SaaS Pricing Is Dead: Rebuild Before Q3

Product · 41 sources · 2,073 words · 10 min

Topics: Agentic AI · LLM Inference · AI Capital

The seat-based SaaS model just lost 50.5% of its market value in six months — and ServiceNow responded by eliminating separate AI licensing entirely, making its entire portfolio AI-native by default. Meanwhile, a16z field research shows enterprise buyers are deliberately deploying 2-3 AI tools per use case as hedging policy, demanding outcome-based pricing, and planning to build core AI in-house within 12-18 months. Your pricing architecture is now your most urgent product decision: if you still charge per seat and treat AI as an add-on SKU, the market is telling you — with $300B+ in destroyed market cap — that your model is on a countdown clock.

◆ INTELLIGENCE MAP

  1. 01

    Seat-Based SaaS Collapse Demands Pricing Model Overhaul

    act now

    SaaStr.ai Index confirms 50.5% market cap destruction in 6 months, driven by AI agent displacement of per-seat models. ServiceNow made AI native at zero extra cost. a16z data shows buyers want dual pricing (predictable + outcome-based). OpenAI's 3-tier Codex ($20/$100/$200) sets the new monetization template.

    50.5% — SaaS market cap destroyed · 7 sources
    Data points: market cap lost · OpenAI Codex tiers · premium headroom · ServiceNow workflows
    Codex tiers: Plus $20 · Pro (5x) $100 · Max (20x) $200
  2. 02

    Agent-First Interfaces Are the New Table Stakes

    monitor

    Enterprise agent-to-human ratios hit 100:1, forcing SaaS to rebuild around APIs and CLIs. Visa launched Intelligent Commerce Connect for agent-driven transactions. AEO (Agentic Engine Optimization) is emerging as a real discipline — misconfigured docs cause agents to hallucinate. HushSpec sets the open standard for agent runtime governance.

    100:1 — agent-to-human ratio · 8 sources
    Data points: agent-human ratio · AI agents on-chain · AEO pillars · Gas Index locations
    Launches: Visa Commerce Connect (payments) · HushSpec (governance) · AEO discipline (discovery) · MCP standardization (interop)
  3. 03

    Your AI Supplier Is Now Your Biggest Competitor

    act now

    Anthropic is building a Lovable competitor inside Claude (Lovable: $6.6B valuation) and shipped Claude for Word as a Trojan horse into Microsoft's ecosystem. Microsoft deliberately starved Azure customers of GPU to prioritize M365 Copilot. Anthropic silently cut cache TTL from 1hr to 5min. Claude Opus 4.6 lost ~67% thinking depth, triggering developer migration to Codex/GPT-5.4.

    ~67% — Claude quality drop · 8 sources
    Data points: Lovable valuation · cache TTL cut · Azure growth missed · Opus quality drop
    Cache TTL: 60 min before · 5 min after
  4. 04

    Conversational AI Replaces Dashboards — Micro-SaaS Explodes

    monitor

    Perplexity launched an AI CFO with Plaid-integrated bank accounts queried via natural language. Revolut collapsed 5 product surfaces into a single chat. Non-engineers are building production micro-tools for $0 (willingness to pay: $15/mo). Tubi killed its proprietary AI and embedded into ChatGPT's 900M-user base instead. Your competitive set just expanded to every power user with Claude.

    $0 — micro-tool build cost · 4 sources
    Data points: Tubi via ChatGPT · user WTP · Slack filter · Perplexity models
    Figures: Slack before 150 · after AI filter 30 · build cost $0 · monthly WTP $15
  5. 05

    The 20/60/20 Adoption Ceiling and Feature Discipline Paradox

    background

    Google's internal data reveals that even Google follows a 20/60/20 split: 20% power users, 60% casual chat users, 20% refusers. Gen Z excitement dropped 39% (Gallup). Meanwhile, building features is near-free but carrying them costs as much as ever — 88% of AI PoCs never reach production. The winning move is fewer, better-embedded AI features targeting the 60%, not more features chasing the 20%.

    88% — AI PoCs that fail · 6 sources
    Data points: power users · casual users · refusers · Gen Z excitement drop
    Adoption split: power users 20% · casual/chat users 60% · refusers 20%

◆ DEEP DIVES

  1. 01

    The Pricing Model Reckoning: Seats Are Dying, Outcomes Are Coming, and ServiceNow Just Showed the Playbook

    <h3>The Market Has Already Priced In the Death of Per-Seat SaaS</h3><p>The SaaStr.ai Index confirmed a <strong>50.5% market capitalization collapse</strong> across top public software companies in six months. This isn't a cyclical correction — it's the market explicitly pricing in a paradigm shift: if AI agents do the work of 3-5 human seats, per-seat pricing becomes a melting ice cube. If your product charges $X per user per month and an AI agent can do what 3 of those users do, your addressable revenue per customer just dropped <strong>66%</strong> under your current model.</p><h3>ServiceNow's Triple Move Sets the New Standard</h3><p>ServiceNow made three moves simultaneously that every enterprise PM should study. First, they <strong>eliminated separate AI licensing entirely</strong>, making their whole portfolio AI-native by default. This reframes AI from premium upsell to expected capability — like mobile responsiveness was a decade ago. Second, their <strong>Context Engine leverages 85 billion workflow records</strong> to feed LLMs with real-time business context — a data moat that validates Karpathy's insight that structured, persistent context is what makes AI useful in enterprise, not impressive in demos. Third, they opened agent deployment to external IDEs like Cursor and Claude Code, making themselves the <strong>deployment target</strong> for enterprise AI agents.</p><blockquote>If AI is still a separate SKU in your product, you're charging extra for something ServiceNow just made free. Every quarter you wait, the gap widens.</blockquote><h3>a16z's Field Data Reveals How Buyers Actually Decide</h3><p>a16z's field research upends several assumptions. Enterprise buyers <strong>deliberately deploy 2-3 AI tools per use case</strong> — not because they can't choose, but as hedging policy. A head of AI at a top financial institution runs redundant tools because performance fluctuates and hallucinations happen. 
This means you're not fighting for a single winner-take-all deal — you're fighting for the <strong>premium slot</strong> in a multi-tool portfolio. That premium slot sustains <strong>10-20% price headroom</strong> without material churn, and it's won through reliability and onboarding quality, not discounts.</p><p>The most dangerous signal: enterprises are converging on a <strong>build-vs-buy framework</strong> where non-core AI gets purchased and core AI gets built in-house. A B2C logistics company plans to <strong>move off third-party AI entirely</strong>. Your real competitor in 12-18 months isn't another vendor — it's your customer's engineering team.</p><h3>OpenAI's Codex Tiers Are Your Monetization Template</h3><p>OpenAI's <strong>$20/$100/$200</strong> Codex pricing (Plus/Pro/Max with 1x/5x/20x usage) demonstrates the emerging pattern: usage-based tiers with <strong>non-linear value scaling</strong>. Going from 5x to 20x costs only 2x more — meaning heaviest users get volume discounts that lock them in. The gap between $20 and $100 with no $50 tier signals a clean segmentation break between casual and professional use. If you can capture the $30-$70 range, there's an underserved segment.</p><hr><h3>The Dual Pricing Imperative</h3><p>The clearest action from a16z's data: offer customers a <strong>choice between predictable spend and outcome-based pricing</strong> for the same product. Per-outcome pricing makes apples-to-apples vendor comparison structurally harder, shifting the conversation from 'what's your per-seat cost' to 'what results do you deliver per dollar' — a conversation premium products win. Building this capability requires product instrumentation that tracks value at the <strong>action level, not the user level</strong> — telemetry infrastructure that takes 2-3 quarters to build properly. Start scoping now.</p>

    Action items

    • Run a pricing stress test this sprint: model revenue under scenarios where AI agents reduce customer headcount by 20%, 40%, and 60%
    • Ship dual pricing capability by end of Q3: predictable seat/consumption pricing alongside outcome-based/gainshare pricing for the same product
    • Audit your AI feature packaging against ServiceNow's bundling move — if AI is a separate SKU, model the revenue impact of making it default and present to leadership within 30 days
    • Conduct a build-vs-buy vulnerability assessment: map every use case to 'core' vs. 'non-core' from the customer's perspective
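
    The pricing stress test in the first action item is simple arithmetic once a few inputs are fixed. A minimal sketch; the seat price, account count, and seats-per-account figures are placeholder assumptions, not numbers from the sources above:

```python
# Hypothetical seat-revenue stress test. All inputs are illustrative
# placeholders for your own pricing data.
SEAT_PRICE = 50    # $/seat/month (assumption)
ACCOUNTS = 200     # customer accounts (assumption)
AVG_SEATS = 40     # seats per account today (assumption)

def monthly_revenue(seat_reduction: float) -> float:
    """Monthly seat revenue after customers cut headcount by `seat_reduction`."""
    remaining_seats = AVG_SEATS * (1 - seat_reduction)
    return ACCOUNTS * remaining_seats * SEAT_PRICE

for cut in (0.0, 0.2, 0.4, 0.6):
    print(f"{cut:.0%} seat reduction -> ${monthly_revenue(cut):,.0f}/mo")
```

    Under a pure per-seat model the revenue line falls one-for-one with seats; re-running the same scenarios with a consumption or outcome component added is what makes the dual-pricing case concrete for leadership.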

    Sources: Your pricing model is now your moat — a16z data shows how to win AI price wars without cutting price · Seat-based SaaS lost 50.5% in 6 months — your pricing model and AI roadmap need a stress test now · Your backlog discipline is now your moat — cheap AI building + expensive feature carry = new PM calculus · OpenAI's 3-tier Codex pricing is your playbook — and Gen Z is souring on AI · Token costs are blowing up your unit economics — and AI can't ship your UI either

  2. 02

    Agent-First Is the New Mobile-First: The Infrastructure Layer Just Solidified

    <h3>The 100:1 Ratio That Rewrites Your PRD</h3><p>Enterprise agents now outnumber humans <strong>up to 100:1</strong>. If that ratio is even directionally correct, it rewrites every assumption in your PRD about user personas, interaction patterns, and interface design. SaaS companies are already rebuilding around <strong>APIs, CLIs, and structured outputs</strong>, encoding domain expertise into 'skill files' and exposing functionality via MCP servers. Google expanding Skills across Gemini confirms this isn't a niche pattern — it's becoming the <strong>expected interface contract</strong>.</p><blockquote>If your product's primary interface is a GUI that only humans can use, you're building for a shrinking percentage of your total addressable interactions.</blockquote><h3>Three Infrastructure Layers Shipped This Week</h3><p><strong>Payments:</strong> Visa launched Intelligent Commerce Connect — a protocol-agnostic on-ramp for AI agents to discover products and initiate transactions. This is the Stripe-for-agents moment. When the dominant payments network builds specifically for agent-driven transactions, the 'agent as buyer' use case has crossed from demo to deployment.</p><p><strong>Discovery:</strong> Agentic Engine Optimization is emerging as a must-have discipline. Misconfigured robots.txt and token-heavy pages directly cause AI agents to <strong>hallucinate solutions attributed to your product</strong>. The five AEO pillars — discoverability, parsability, token efficiency, capability signaling, and access control — are the information architecture patterns you already know, applied to non-human consumers.</p><p><strong>Governance:</strong> HushSpec defines an open policy format for AI agent runtime constraints — filesystem access, network egress, tool usage, secrets. This is your 'RBAC moment' for agents. Within 2-3 quarters, enterprise security questionnaires will have an agent containment section. 
First-movers define the constraints; late arrivals inherit them.</p><h3>DeepMind's 6-Vector Attack Taxonomy Is Your Threat Model Update</h3><p>Google DeepMind identified <strong>6 distinct attack genres</strong> against AI agents: Content Injection, Semantic Manipulation, Cognitive State, Behavioural Control, Systemic, and Human-in-the-Loop. The 'Systemic' category is alarming for multi-agent architectures: <strong>jigsaw attacks split harmful instructions across agents</strong>, and fabricated agent identities corrupt collective decision-making. An attacker can embed adversarial instructions in CSS metadata that no human would ever see. These attack surfaces don't exist in traditional software security.</p><hr><h3>What This Means for Your Product</h3><p>The strategic question isn't whether to build an agent-accessible interface — it's <strong>which features agents would use most</strong> and exposing those first. Treat agent-consumability as a product metric alongside time-to-value and NPS. Your documentation, API references, and developer content are now a <strong>product surface for agents</strong>, and content optimized for human readability may be actively hostile to agent consumption. The companies that get AEO right in the next 6 months will build the same compounding advantage that early SEO adopters built in 2008-2012.</p>

    Action items

    • Audit your product's agent-consumability this sprint: score every user-facing feature on whether an AI agent can access it programmatically via API, CLI, or MCP server. Identify the top 5 GUI-only high-value features and create exposure tickets
    • Audit your docs, API references, and robots.txt for AEO readiness within 30 days — check token density, structured data, and capability signaling
    • Incorporate DeepMind's 6-genre agent attack taxonomy into your security threat model for any agentic feature in development. Create a checklist by end of Q2
    • Evaluate Visa's Intelligent Commerce Connect for any product that touches transactions — can an AI agent discover, purchase, and complete a transaction via your product's API?
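
    Token efficiency, one of the five AEO pillars, can be spot-checked before any tooling exists. A rough sketch; the chars-per-token ratio and the per-page budget are assumptions, not standards:

```python
# Crude AEO audit heuristic: estimate the token cost of a docs page
# for an AI agent. The ~4 chars/token ratio and 2,000-token budget
# are illustrative assumptions.
import re

TOKEN_BUDGET = 2000  # per-page budget for agent consumption (assumption)

def estimate_tokens(html: str) -> int:
    """Strip markup, then approximate tokens as characters / 4."""
    text = re.sub(r"<[^>]+>", " ", html)       # drop tags
    text = re.sub(r"\s+", " ", text).strip()   # collapse whitespace
    return len(text) // 4

def aeo_flag(html: str) -> str:
    tokens = estimate_tokens(html)
    return "ok" if tokens <= TOKEN_BUDGET else f"token-heavy ({tokens} est. tokens)"

page = "<h1>API Reference</h1><p>" + "word " * 50 + "</p>"
print(aeo_flag(page))
```

    A real audit would use the tokenizer of the models your agents run on, but even this heuristic surfaces the pages most likely to blow an agent's context window.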

    Sources: Your SaaS product needs an agent-first interface — 100:1 agent-to-human ratios are reshaping your entire surface area · AEO is the new SEO — your docs strategy needs an agent-first rewrite before competitors lock in AI agent adoption · Visa just built the payments layer for AI agents — your commerce roadmap needs to catch up · Your AI agent roadmap has a 6-vector security gap — and your timelines are ~1.5 years too slow · Your AI agent roadmap needs a security policy layer — HushSpec just set the standard before you did

  3. 03

    Your AI Supplier Just Became Your Biggest Competitive Threat — and They're Degrading Your Service While They Do It

    <h3>The Platform Bundling Playbook Has Arrived</h3><p>Anthropic leaked a <strong>vibe-coding app builder inside Claude</strong> that directly competes with Lovable ($6.6B valuation, raised $330M just four months ago). Simultaneously, Anthropic shipped <strong>Claude for Word</strong> — embedding directly inside Microsoft Word with native Track Changes integration and formatting fidelity, targeting the exact enterprise pain points where Copilot has underdelivered. This is the Microsoft Office playbook applied to AI: identify the most popular things people do with your platform, then make those things native features.</p><blockquote>If you wrap an LLM API with a specialized UI, your supplier is your biggest competitive threat. Elena Verna's warning that 'Big Tech companies are more threatening than rival startups' is now confirmed.</blockquote><h3>Microsoft Proved Platform Risk Isn't Theoretical</h3><p>Microsoft deliberately starved Azure customers of GPU capacity to serve M365 Copilot and GitHub Copilot. Amy Hood confirmed that <strong>Azure growth would have exceeded 40%</strong> had GPUs been allocated externally. Nadella justified this because internal workloads have higher gross margins. Let that sink in: the company you pay for cloud AI compute decided its own products were more valuable than serving you. This is the rational behavior of <strong>every hyperscaler when compute is scarce</strong>.</p><h3>Silent Degradation Is the New Vendor Risk</h3><p>Two data points expose a pattern that should alarm every PM building on LLM APIs:</p><ul><li>Anthropic <strong>silently cut Claude Code's cache TTL from 1 hour to 5 minutes</strong> on March 6 — no changelog, no announcement. A user discovered it via a GitHub issue. This can <strong>12x your caching costs overnight</strong>.</li><li>Leaked analysis of thousands of Claude Code sessions shows <strong>Opus 4.6 thinking depth dropped ~67%</strong>. Users report lazier code edits, shallower reasoning. 
Developers are migrating to Codex and GPT-5.4 in real time.</li></ul><p>These aren't isolated incidents — they're the predictable behavior of providers managing scarce compute across growing user bases. Anthropic is likely <strong>conserving compute</strong> and will need to 'pay dearly' for more capacity from hyperscalers, costs that flow downstream to you via API pricing.</p><h3>The Multi-Provider Imperative Is No Longer Optional</h3><p>Sources disagree on which provider has the advantage — OpenAI tells investors its <strong>compute warchest</strong> gives it an edge; Anthropic's <strong>233% revenue growth</strong> ($9B→$30B annualized) suggests otherwise; open-source models like MiniMax M2.7 hit <strong>97% instruction compliance</strong> on tool-use tasks. But all three directions point to the same PM conclusion: build the abstraction layer now. OpenClaw has already formalized a <strong>GPT-5.4 vs Opus 4.6 agentic parity gate</strong> in its release pipeline. Your product architecture should be at least that model-portable.</p><table><thead><tr><th>Risk</th><th>Evidence</th><th>Impact</th></tr></thead><tbody><tr><td>Compute allocation</td><td>Microsoft prioritized internal over Azure</td><td>Throttled capacity, missed growth</td></tr><tr><td>Silent parameter changes</td><td>Anthropic cache TTL: 60→5 min</td><td>12x cost increase, no warning</td></tr><tr><td>Quality regression</td><td>Opus 4.6: ~67% thinking depth loss</td><td>Degraded user experience</td></tr><tr><td>Vertical integration</td><td>Anthropic building Lovable competitor</td><td>Supplier becomes direct competitor</td></tr></tbody></table>

    Action items

    • Build a model-routing abstraction layer that allows hot-swapping between Claude, GPT, Gemini, and open-source models without code changes — scope this sprint, ship by end of Q2
    • Run a 'platform risk audit' this week: list every feature a model provider could bundle into their chatbot within 12 months, tag each as 'defensible' or 'at risk'
    • Deploy monitoring dashboards for AI vendor SLAs (cache hit rates, latency p95/p99, per-request costs) that would catch silent regressions like the cache TTL change
    • Evaluate Claude for Word's 'skills' feature as either a distribution channel (embed inside it) or competitive threat (build defense against it) — present analysis to leadership within 2 weeks
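
    The abstraction layer in the first action item can start as nothing more than a preference-ordered router behind one interface. A minimal sketch with stub adapters standing in for real vendor SDKs (no actual API calls are made; all names are illustrative):

```python
# Minimal model-routing abstraction layer. Stub lambdas stand in for
# per-vendor SDK adapters; `healthy` would be driven by your own evals.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Provider:
    name: str
    complete: Callable[[str], str]  # prompt -> response
    healthy: bool = True

class ModelRouter:
    """Routes each request to the first healthy provider, in preference order."""
    def __init__(self, providers: list[Provider]):
        self.providers = providers

    def complete(self, prompt: str) -> str:
        for p in self.providers:
            if p.healthy:
                return p.complete(prompt)
        raise RuntimeError("no healthy provider")

router = ModelRouter([
    Provider("claude", lambda p: f"[claude] {p}"),
    Provider("gpt", lambda p: f"[gpt] {p}"),
])
router.providers[0].healthy = False   # simulate a detected regression
print(router.complete("summarize Q2 churn"))
```

    Flipping `healthy` from an automated eval or SLA monitor is what turns a silent vendor regression into a routing decision instead of a user-facing incident.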

    Sources: Anthropic is bundling no-code into Claude — if you build on or compete with AI platforms, reprioritize now · Your AI platform dependency just became a liability — Microsoft chose its own products over Azure customers · Claude's 67% quality drop + 88% PoC failure rate = your AI architecture needs a multi-model escape hatch now · Your AI vendor bets just got riskier — Anthropic's silent downgrade + flawed benchmarks demand a new eval playbook · Agent memory is the new lock-in — and 3 moves to protect your roadmap before it's too late · Anthropic just invaded Microsoft's turf — your AI integration strategy needs a rethink

  4. 04

    The 20/60/20 Rule and the Feature Discipline Paradox — Why Shipping More AI Features Makes You Weaker

    <h3>Google Can't Get Its Own Employees to Use AI — Neither Can You</h3><p>Steve Yegge leaked Google's internal AI adoption data: <strong>20% power users, 60% casual chat tool users, 20% outright refusers</strong>. If Google — a company that builds the models, employs thousands of ML engineers, and has AI in its culture DNA — can only get 20% power adoption, stop pretending your customers will be different. This is the adoption ceiling for intent-driven AI features, the kind requiring users to consciously choose AI.</p><p>The unlock is the <strong>60% — people who'll use AI if you put it in their path</strong>, embedded in workflows requiring zero additional cognitive load. Stanford HAI confirms per-user AI value <strong>tripled year-over-year</strong>, but it's concentrating in the 20% who know how to extract it. Your job is to democratize that value to the 60%.</p><blockquote>Every AI feature in your backlog should be evaluated: 'Does this require the user to be a power user, or does it activate for the casual 60%?'</blockquote><h3>Building Is Cheap. Carrying Is Not.</h3><p>The sharpest insight from TLDR Founders is deceptively simple: <strong>building features has never been cheaper, but carrying one is just as expensive as ever</strong>. AI can ship a feature in 2 days instead of 2 weeks. Stakeholders will push for more because 'it's easy now.' Your job is to resist. Products rarely fail from feature scarcity — they fail from feature bloat nobody had discipline to kill.</p><p>IDC confirms: <strong>88% of AI proof-of-concepts never reach production</strong>. PE firms have shifted from asking 'what's your AI strategy?' to 'how far along are you in rebuilding the company around AI?' 
The companies winning (Robinhood doubled alert triage capacity with multi-agent systems) are picking 2-3 high-conviction use cases and investing in the unsexy work: evals, observability, guardrails, cost metering.</p><h3>Gen Z Just Broke Your Adoption Assumptions</h3><p>Gallup data shows Gen Z excitement about AI <strong>dropped from 36% to 22%</strong> (a 39% relative decline), with hopefulness dropping from 27% to 18%. This isn't marginal — it's a demographic you assumed would evangelize your AI features becoming actively skeptical. The implication: <strong>'AI-powered' is no longer a selling point</strong> for younger users and may trigger friction. Products that embed AI invisibly (Google auto-complete, not ChatGPT's conversation UI) may have a structural advantage as this cohort enters the workforce in larger numbers.</p><hr><h3>The Human-Weight Boundary</h3><p>Cate Huston's experiment using Claude for coaching found AI excels at structured tasks (validation, frameworks, action plans) but fails when users need <strong>'human weight' — someone else's confidence in you</strong>. This isn't about AI capability — it's a category distinction. If your AI features touch moments where users need to feel seen or emotionally supported, design explicit <strong>handoff points</strong>: AI handles the structure so the human moment is higher-value and more focused. The winning UX pattern isn't 'AI does everything' — it's 'AI handles the scaffolding so the human interaction matters more.'</p>

    Action items

    • Segment your AI feature users using the 20/60/20 framework this quarter — identify power users, casual users, and non-adopters. Redesign your highest-priority AI feature to activate for the 60% (embedded in workflow, zero-effort, no AI literacy required)
    • Run a feature carry-cost audit: for every AI feature shipped in the last 12 months, estimate ongoing maintenance cost vs. usage/revenue attribution. Present a sunset recommendation for the bottom 20%
    • A/B test AI feature positioning: 'AI-powered [feature]' vs. outcome-first framing ('Get [result] instantly') across onboarding and feature announcements
    • Instrument a 'feedback flywheel' for your top AI feature: systematically capture which AI outputs users accept, modify, or reject, and feed corrections back into prompt engineering or fine-tuning
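
    The 20/60/20 segmentation in the first action item needs only a usage threshold per user. A sketch; the sessions-per-week cutoffs are placeholder assumptions, not Google's definitions:

```python
# Hypothetical 20/60/20 segmentation from usage telemetry. The
# threshold values are illustrative assumptions.
def segment(ai_sessions_per_week: float) -> str:
    if ai_sessions_per_week >= 10:
        return "power"    # the 20% who extract value on their own
    if ai_sessions_per_week >= 1:
        return "casual"   # the 60% to target with embedded AI
    return "refuser"      # the 20% who opt out entirely

users = [0, 0.5, 2, 3, 12, 25, 1, 0, 7, 15]
counts: dict[str, int] = {}
for u in users:
    s = segment(u)
    counts[s] = counts.get(s, 0) + 1
print(counts)
```

    Once users carry a segment label, the feature-level question in the blockquote above ("does this activate for the casual 60%?") becomes measurable rather than rhetorical.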

    Sources: The 20/60/20 adoption pattern Google can't escape — and your AI features can't ignore · Your backlog discipline is now your moat — cheap AI building + expensive feature carry = new PM calculus · Gen Z AI excitement cratered 39% · Your AI product's ceiling isn't technical — it's 'human weight' users can't get from bots · Claude's 67% quality drop + 88% PoC failure rate

◆ QUICK HITS

  • Update: EU digital sovereignty migration is now 4 countries — Germany (60K users migrated), Denmark, Netherlands, and France all executing Microsoft-to-Linux transitions, with DINUM's LaSuite offering full open-source replacements for M365

    EU governments are dumping Microsoft at scale — your roadmap needs a sovereignty play now

  • Voxtral TTS (open weights, 4B params) beats ElevenLabs on naturalness 58.3% vs 41.7%, runs at 70ms latency with zero-shot voice cloning from 5-25 seconds — if you pay per-character for TTS, your cost model just inverted

    Agent memory is the new lock-in — and 3 moves to protect your roadmap before it's too late

  • Caveman plugin cuts AI coding output tokens ~75% and input tokens ~46% while maintaining accuracy — works with 40+ agents including Claude Code and Copilot. Run a cost model on your highest-volume AI features with these reductions

    AI code security is now a product category — and your AI feature costs just got a 75% cut option
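
    If the quoted reductions hold, the cost model this quick hit asks for is one function. A sketch; the per-million-token prices and monthly volumes are placeholder assumptions, not the plugin's published figures:

```python
# Hypothetical token cost model applying the quoted ~46% input and
# ~75% output reductions. Prices and volumes are illustrative.
IN_PRICE = 3.00    # $/M input tokens (assumption)
OUT_PRICE = 15.00  # $/M output tokens (assumption)

def monthly_cost(in_tok_m: float, out_tok_m: float,
                 in_cut: float = 0.0, out_cut: float = 0.0) -> float:
    """Monthly spend in $, given token volumes in millions and cut fractions."""
    return (in_tok_m * (1 - in_cut) * IN_PRICE
            + out_tok_m * (1 - out_cut) * OUT_PRICE)

before = monthly_cost(500, 100)                           # 500M in, 100M out
after = monthly_cost(500, 100, in_cut=0.46, out_cut=0.75)
print(f"${before:,.0f}/mo -> ${after:,.0f}/mo")
```

    Because output tokens are typically priced several times higher than input tokens, the output-side reduction dominates the savings.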

  • Senior engineer shortage looming: entry-level tech postings down 67% since 2022, 54% of eng leaders plan to hire fewer juniors — OpenAI experimenting with 'super senior + super junior' model as the template

    The senior engineer shortage coming for your 2030 roadmap — and why your team model needs rethinking now

  • LinkedIn replaced 5 separate retrieval systems with a single LLM-powered model serving 1.3B users under 50ms — cold-start breakthrough: infers interests from profile text alone, no engagement history needed

    LinkedIn's 5→1 retrieval consolidation is your blueprint for simplifying ML-powered features

  • Exploit-to-weaponization windows collapsed to 12 hours — AI commit-watchers now auto-generate PoCs from patches. If your enterprise contracts promise 30-day patch SLAs, those commitments are meaningless against current threat velocity

    Your AI agent roadmap needs a security policy layer — HushSpec just set the standard before you did

  • Commerce Department creating government-endorsed AI export bundles for allies — full-stack packages (models, chips, cloud, security) with consortia forming now. If your product touches AI infrastructure or security, this is a new federally-subsidized GTM channel

    Commerce's AI export bundles create a gov-backed distribution channel

  • China narrowed anthropomorphic AI regulation from 'any AI with human features' to 'sustained emotional interaction services' — AI assistants and productivity copilots are now likely out of scope for APAC compliance

    China's agentic AI regulation just scoped down — your AI companion feature may have a clearer path to APAC

  • AI forecasters compressed transformative AI timelines by ~1.5 years: Ryan Greenblatt doubled probability of full AI R&D automation by end 2028 (15%→30%), Lifland and Kokotajlo updated similarly — features planned for late 2027 may be feasible by mid-2027

    Your AI agent roadmap has a 6-vector security gap — and your timelines are ~1.5 years too slow

  • Accessibility lawsuits hit ~5,000 in the US in 2025 with 95% of top sites failing WCAG 2 — WCAG 2.2 now pushes beyond checkbox compliance toward usability. If your acceptance criteria don't include WCAG 2.2 AA, you're betting you won't be one of next year's 5,000 targets

    Apple's glasses pivot + 5K accessibility lawsuits: two signals reshaping your 2026 roadmap

BOTTOM LINE

Seat-based SaaS lost half its market value in six months, and the winners are already visible: ServiceNow made AI free-by-default across 85 billion workflows, a16z confirmed enterprise buyers demand outcome-based pricing, and your most dangerous competitor in 12 months isn't another startup — it's your own customer's engineering team building in-house. Meanwhile, agents outnumber humans 100:1 in enterprise environments, Anthropic is simultaneously competing with its own customers (Lovable clone, Claude for Word) and silently degrading service (67% quality drop, 92% cache TTL cut), and Google proved that even they can only get 20% power adoption of their own AI tools. The PM playbook for Q2 is three moves: migrate pricing from seats to outcomes, build agent-consumable interfaces before your docs become invisible, and ship a multi-model abstraction layer before your AI supplier becomes your competitor.

Frequently asked

If I can't abandon per-seat pricing overnight, what's the minimum viable move?
Ship dual pricing capability: keep predictable seat or consumption pricing while adding an outcome-based or gainshare option for the same product. This alone shifts competitive comparisons from 'cost per seat' to 'results per dollar,' which premium products win. It also buys you time to build the action-level telemetry infrastructure that full outcome pricing requires — a 2-3 quarter effort worth scoping now.
How do I know if my product is vulnerable to being bundled by a model provider like Anthropic or OpenAI?
Run a platform risk audit: list every feature a model provider could plausibly bundle into their chatbot within 12 months, and tag each as 'defensible' or 'at risk.' If your core value is a specialized UI wrapped around an LLM API, your window is likely 6-12 months — Anthropic's Lovable competitor and Claude for Word prove this is happening now. Defensible features typically involve proprietary data, workflow integration depth, or domain-specific evals the provider can't replicate.
What does 'agent-consumable' actually mean for a product that today has a GUI-only interface?
It means exposing your highest-value features via API, CLI, or MCP server so AI agents can invoke them programmatically without screen-scraping or browser automation. Practically: score every user-facing feature on programmatic accessibility, identify the top 5 GUI-only high-value features, and create exposure tickets. At reported 100:1 agent-to-human interaction ratios, GUI-only features are increasingly invisible to your fastest-growing user segment.
How should I hedge against silent quality regressions or pricing changes from a single AI provider?
Build a model-routing abstraction layer that lets you hot-swap between Claude, GPT, Gemini, and open-source models without code changes, and deploy monitoring dashboards for cache hit rates, latency percentiles, and per-request costs. Anthropic's undocumented cache TTL cut (60 to 5 minutes) could 12x caching costs overnight, and reported thinking-depth regressions degrade UX without warning. Without both abstraction and monitoring, you're exposed to vendor decisions you can't see and can't route around.
If only 20% of users become AI power users, how do I justify continued investment in AI features?
Stop optimizing for the 20% and redesign features to activate for the 60% of casual users who'll adopt AI when it's embedded in their existing workflow with zero additional cognitive load. The value is real — per-user AI value tripled year-over-year per Stanford HAI — but it concentrates in power users who know how to extract it. Democratizing that value to the 60% through invisible, in-workflow AI (not opt-in chat UIs) is where the next expansion of ROI lives.
