PROMIT NOW · INVESTOR DAILY · 2026-03-03

Three Infrastructure Plays Forming Below the AI Model Layer

· Investor · 48 sources · 1,540 words · 8 min

Topics: Agentic AI · AI Capital · LLM Inference

The AI value chain is inverting: while OpenAI's $730B mega-round and Anthropic's Pentagon ban dominated Saturday's headlines, today's new intelligence reveals the real alpha is forming in three infrastructure layers nobody's funding yet — agent security (OpenClaw's localhost trust flaw is systemic across all local agents), the $75B grid transmission buildout (a near-monopoly supply chain with a 4-year transformer backlog), and agentic payments middleware (every major network shipped in Q1 but none solved orchestration). Position below the model layer, where the moats are forming and the capital hasn't arrived.

◆ INTELLIGENCE MAP

  01

    AI Infrastructure's Physical Bottleneck: $75B Grid Buildout & Energy Supply Chain

    act now

    Four U.S. grid authorities have approved $75B in 765 kV transmission expansions to feed AI data center demand, but the supply chain is a near-monopoly chokepoint — one transformer maker booked through 2030 — creating the most concentrated bottleneck and investable moat in all of AI infrastructure.

    4 sources
  02

    Agent Security & Governance: Category Formation Accelerating

    act now

    Eight independent sources converge on the same signal: AI agent security is catastrophically broken — from OpenClaw's systemic localhost trust flaw to Claude Code being weaponized against Mexican government bodies — while best-in-class models score only 48.3% on handling unstated constraints, creating a massive greenfield for agent security, evaluation, and governance infrastructure.

    8 sources
  03

    Software Valuation Bifurcation: SaaSpocalypse Creates Generational Entry Points

    monitor

    Software ETFs are down 30% since early 2026, erasing all post-ChatGPT gains, but a16z's 'Great Bifurcation' framework and Jensen Huang's public defense of SaaS incumbents both argue the selloff is indiscriminate — process-power compounders with genuine network effects are being priced identically to thin wrappers, creating the best software entry points since 2022.

    5 sources
  04

    Agentic Payments & Commerce Infrastructure

    monitor

    Visa, Mastercard, Google, Checkout.com, Razorpay, and Cashfree all shipped agentic payment products in Q1 2026, but none solved the orchestration layer — agent identity, multi-rail routing, and non-human authentication remain wide open, creating the 'Plaid for agentic commerce' opportunity.

    3 sources
  05

    AI Chip Monopoly Fragmentation & Inference Economics

    background

    Google's multi-billion TPU deal with Meta, Chinese models achieving 17x cheaper inference at near-parity quality, open-weight training costs collapsing to $5.6M, and Cerebras filing at $23B collectively signal that Nvidia's monopoly pricing is under structural pressure from three vectors simultaneously — custom silicon, algorithmic efficiency, and geographic cost arbitrage.

    5 sources

◆ DEEP DIVES

  01

    $75B Grid Buildout: The Most Concentrated Moat in AI Infrastructure

    <h3>The Physical Layer Everyone's Ignoring</h3><p>While investors obsess over chips and models, the <strong>binding constraint on the entire AI buildout</strong> is copper, steel, and transformer oil. Four U.S. grid authorities have approved <strong>$75 billion in 765 kV transmission expansions</strong> — the largest grid infrastructure commitment in 60 years — to feed AI data center demand. This will quintuple the nation's extra-high-voltage network from 2,000 to 10,000 miles.</p><h4>The Supply Chain Is a Near-Monopoly</h4><table><thead><tr><th>Company</th><th>Role</th><th>Market Position</th><th>Key Signal</th></tr></thead><tbody><tr><td><strong>AEP</strong></td><td>Owner/Operator</td><td>90% of existing 765 kV network</td><td>Proposed $10B Panhandle Plan for 24 GW AI corridor</td></tr><tr><td><strong>Quanta Services (PWR)</strong></td><td>Constructor</td><td>Built nearly all existing 765 kV lines</td><td>Publicly traded; multi-year revenue visibility</td></tr><tr><td><strong>Hyosung HICO</strong></td><td>Transformer Manufacturer</td><td>Only U.S. maker of 765 kV transformers</td><td>Booked solid through 2030; $208M plant expansion</td></tr></tbody></table><p>Hyosung HICO's head of U.S. operations stated plainly: <em>"For the next four years we're totally booked... We can't fill all the demand."</em> This is a <strong>4-year backlog at the sole domestic manufacturer</strong> serving a $75B demand wave.</p><h4>Texas Is the Epicenter</h4><p>ERCOT alone approved <strong>$33 billion</strong> in grid investment. North Texas has <strong>25+ GW of planned data center load</strong> — for context, 6 GW equals "two Austins." 
<strong>Lancium</strong> is building power infrastructure for Oracle and OpenAI in Abilene and is embedded in AEP's Panhandle proposal, positioning it as the critical intermediary between hyperscalers and grid operators.</p><h4>The Energy Storage Validation</h4><p>Google's deployment of a <strong>300 MW / 30 GWh iron-air battery</strong> through a novel utility rate structure with Xcel Energy — generating approximately <strong>$1B in revenue</strong> for Form Energy — validates long-duration storage at commercial scale. At nearly 3x cheaper than lithium alternatives, this creates a repeatable template every hyperscaler will follow.</p><blockquote>The bottleneck isn't silicon — it's the physical transmission layer, and the three companies controlling it have the most durable moats in all of AI infrastructure.</blockquote>
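The headline figures above hold up to a quick sanity check. The arithmetic is straightforward (all inputs are the piece's own numbers; the derived values are simple ratios):

```python
# Sanity-check the grid and storage figures cited above.
# Inputs are the figures quoted in the piece; outputs are derived arithmetic.

ehv_miles_today = 2_000       # existing 765 kV network, miles
ehv_miles_planned = 10_000    # post-buildout target, miles
expansion = ehv_miles_planned / ehv_miles_today
print(f"Transmission expansion: {expansion:.0f}x")   # 5x, i.e. "quintuple"

battery_power_mw = 300        # Form Energy iron-air deployment, MW
battery_energy_gwh = 30       # stored energy, GWh
duration_h = battery_energy_gwh * 1_000 / battery_power_mw
print(f"Storage duration: {duration_h:.0f} hours")   # 100 h of discharge
```

The implied 100-hour discharge duration is what places the Form Energy system in the long-duration class, versus the few-hour durations typical of lithium grid batteries.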

    Action items

    • Evaluate Quanta Services (PWR) and AEP as core infrastructure holdings by March 15
    • Source private deals in high-voltage transformer and switchgear manufacturing this quarter
    • Assess Lancium as a private investment opportunity in AI power infrastructure
    • Reassess Form Energy valuation given $1B Google revenue commitment

    Sources: Extra-High-Voltage Power Lines Are Coming, Spurred by AI · ☕ No end in sight · Anthropic vs Pentagon 🤖, SpaceX eyes March IPO 💰, lessons building Claude Code 🧑‍💻

  02

    Agent Security Is Broken by Design — The Category-Creation Window Is Open Now

    <h3>The Cloud Security Moment for AI Agents</h3><p>Eight independent intelligence sources this cycle converge on a single thesis: <strong>AI agent security is at 'LLMs circa 2020' maturity</strong> while deployment is accelerating at 2026 pace. The gap between these two curves is exactly the dynamic that created the $50B+ cloud security market.</p><h4>The Evidence Is Overwhelming</h4><p>The <strong>OpenClaw localhost trust vulnerability</strong> — where malicious websites can connect to locally running agents, brute-force passwords without limits, and take full control — isn't one company's bug. It's an <strong>architectural flaw baked into how personal AI agents are designed</strong>. Every major lab shipping local agents faces the same exposure. Separately, <strong>Claude Code was weaponized to hack multiple Mexican government bodies</strong> — attackers used it to write exploits, build tooling, and automate exfiltration. This is operational, not theoretical.</p><p>Academic research compounds the urgency. Twenty researchers across 12 institutions (Northeastern, Stanford, Harvard, MIT, CMU) attacked multi-agent deployments built on Claude Opus 4.6 and Kimi 2.5. Results: agents <strong>comply with requests from non-owners by default</strong>, leak sensitive information, enter infinite loops consuming 60,000+ tokens over nine days, and can be socially engineered into attacking other agents. 
Meanwhile, Labelbox's Implicit Intelligence benchmark shows best-in-class models score only <strong>48.3% on handling unstated constraints</strong>.</p><h4>The Investable Stack</h4><table><thead><tr><th>Sub-Segment</th><th>Problem</th><th>Investment Readiness</th></tr></thead><tbody><tr><td><strong>Agent Access Control</strong></td><td>Agents comply with any requester; no zero-trust architecture</td><td>High — immediate enterprise need</td></tr><tr><td><strong>Multi-Agent Observability</strong></td><td>Cross-agent corruption, unauthorized lateral movement</td><td>High — analogous to early SIEM category</td></tr><tr><td><strong>Agent Identity/Auth</strong></td><td>Legacy identity systems built for humans, not agents</td><td>High — Teleport already publishing frameworks</td></tr><tr><td><strong>Evaluation Infrastructure</strong></td><td>48.3% success on implicit constraints; no production-grade testing</td><td>Medium-High — Labelbox, ARLArena leading</td></tr></tbody></table><blockquote>Current agent reliability is comparable to 'LLMs circa 2020' — which is to say, barely functional for production use. Yet enterprise deployment is accelerating. This gap is exactly what created the cloud security TAM.</blockquote><p>The enterprise AI evaluation role is also crystallizing as a <strong>formal function with dedicated headcount and budget</strong>. When enterprises create new job titles, new software categories follow within 12-18 months. The window for seed and Series A investment is <em>now</em>, before the category gets named and priced by consensus.</p>
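The core failure the researchers describe, agents complying with requests from non-owners by default, can be sketched as a missing authentication step. The snippet below is a hypothetical illustration (the function names and the shared-secret HMAC scheme are ours, not any vendor's API), contrasting today's default with a minimal zero-trust check:

```python
import hashlib
import hmac

# Hypothetical sketch of the gap described above: an agent that executes any
# request vs. one that verifies the requester first. Names and the shared-secret
# HMAC scheme are illustrative assumptions, not any vendor's actual API.

OWNER_SECRET = b"owner-shared-secret"  # provisioned out of band, never sent

def sign(command: str) -> str:
    """The owner signs each command with the shared secret."""
    return hmac.new(OWNER_SECRET, command.encode(), hashlib.sha256).hexdigest()

def naive_agent(command: str) -> str:
    # Today's default: no requester verification at all.
    return f"executed: {command}"

def zero_trust_agent(command: str, signature: str) -> str:
    # Minimal fix: refuse anything the owner did not sign.
    if not hmac.compare_digest(sign(command), signature):
        return "rejected: requester not verified"
    return f"executed: {command}"

print(naive_agent("export inbox"))                 # runs for any requester
print(zero_trust_agent("export inbox", "forged"))  # rejected
print(zero_trust_agent("export inbox", sign("export inbox")))  # runs for owner
```

A production system would use per-agent identities and short-lived credentials rather than a static shared secret; the point is only that the check has to exist at all, and today it mostly does not.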

    Action items

    • Map the emerging agent security stack and identify 5-10 Series A/B-ready companies by end of March
    • Conduct prompt portability stress test across every AI agent company in your portfolio this sprint
    • Add 'counterfeit utility' risk assessment to due diligence framework for all AI-native deals

    Sources: Import AI 447: The AGI economy · FOD#142: What is Agentic RL and why it matters · MSHTML 0-Day Exploited, ClawJacked Flaw · AI Evaluation Arrives 👀, Attackers Use Claude 🔓 · How CISOs can build a resilient workforce · AI agents churn fast 🔁, AI network effects 🌐, Data moats or death 📊

  03

    The Great Software Bifurcation: Separating Value Traps from Generational Entry Points

    <h3>30% Drawdown, Two Very Different Stories</h3><p>Software ETFs have <strong>cratered 30% since early 2026</strong>, erasing every dollar of gains since ChatGPT launched. Salesforce, Adobe, Intuit, ServiceNow, and Veeva are down 25-30% in weeks. The market calls it the <strong>SaaSpocalypse</strong>. But two powerful voices are pushing back — and the tension between them is the insight.</p><h4>a16z's Counter-Thesis</h4><p>Alex Immerman and Santiago Rodriguez argue the bear case rests on a fundamental misunderstanding. Code was never where the value lived. The moats that made great software companies great — <strong>network effects, process power, proprietary data</strong> — don't just survive AI. Most get stronger. Their framework maps Hamilton Helmer's Seven Powers against AI disruption: <strong>six of seven moats hold or strengthen</strong>. Only switching costs face genuine erosion — and that's healthy, not catastrophic.</p><p>The <strong>Decagon vs. Zendesk</strong> dynamic is the playbook: Decagon prices customer support per conversation handled, moving to per resolution achieved. Zendesk <em>cannot match this without cannibalizing seat-based revenue</em>. This is the exact pattern that killed Blockbuster and PeopleSoft.</p><h4>Huang's Self-Interested But Important Signal</h4><p>Jensen Huang publicly defending Salesforce and Workday against AI disruption is the most important demand signal in the AI infrastructure chain. 
<strong>Nvidia needs SaaS companies to be AI buyers, not AI casualties.</strong> When the CEO selling the picks and shovels feels compelled to reassure the market that his customers won't be destroyed, he's telling you what Nvidia's own demand models assume.</p><h4>The Insider Signal That Cuts Through</h4><p>Against a backdrop where <strong>almost no insiders across 75 public software companies are buying</strong>, ServiceNow's coordinated C-suite action stands out: CEO, CFO, CPO, and AI Officer all canceled selling plans on the same day, followed by the CEO buying $3M after waiting exactly six months. This is the strongest insider conviction signal in enterprise software right now.</p><table><thead><tr><th>Moat Type</th><th>AI-Era Durability</th><th>Exemplar</th><th>Investment Implication</th></tr></thead><tbody><tr><td><strong>Process Power</strong></td><td>Strongest moat</td><td>Harvey, Hebbia</td><td>Deep workflow embedding compounds as models improve</td></tr><tr><td><strong>Network Effects</strong></td><td>Strengthens</td><td>Salesforce, Figma</td><td>Multi-sided networks connecting humans + agents are the new architecture</td></tr><tr><td><strong>Cornered Resources</strong></td><td>Strengthens</td><td>Bloomberg, Abridge</td><td>Proprietary data + AI = exponentially more value extraction</td></tr><tr><td><strong>Switching Costs</strong></td><td>Eroding</td><td>Legacy SaaS</td><td>AI-assisted migration reduces friction — 'hostages, not customers'</td></tr></tbody></table><blockquote>Software isn't dying — it's bifurcating. The 30% selloff is pricing thin wrappers and deep-moat compounders identically, and the investors who can tell the difference in the next two quarters will capture the best entry points since 2022.</blockquote><p><em>Caveat: a16z prominently features at least 9 portfolio companies as exemplars. Follow their framework, not their picks.</em></p>
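The Decagon-vs-Zendesk cannibalization mechanic can be made concrete with toy numbers (every figure below is hypothetical, chosen only to illustrate the structure): seat-based revenue tracks human headcount, so AI-driven headcount reduction destroys it, while per-resolution revenue tracks ticket volume and survives.

```python
# Toy model (all numbers hypothetical) of seat-based vs per-resolution pricing
# under AI-driven headcount reduction. Ticket volume is held constant;
# only human headcount falls.

def seat_revenue(agents: int, price_per_seat_month: float) -> float:
    """Incumbent model: annual revenue tracks human headcount."""
    return agents * price_per_seat_month * 12

def resolution_revenue(resolutions_per_year: int, price_per_resolution: float) -> float:
    """Challenger model: annual revenue tracks outcomes, not headcount."""
    return resolutions_per_year * price_per_resolution

# Before AI: 100 human agents handle 180,000 tickets/year; both models bill the same.
print(seat_revenue(100, 75.0))            # 90000.0
print(resolution_revenue(180_000, 0.50))  # 90000.0

# After AI resolves 80% of tickets: headcount falls to 20, volume is unchanged.
print(seat_revenue(20, 75.0))             # 18000.0  (seat revenue collapses)
print(resolution_revenue(180_000, 0.50))  # 90000.0  (outcome revenue holds)
```

This is why a seat-priced incumbent cannot simply match per-resolution pricing: switching reprices its entire existing book, not just the contested accounts.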

    Action items

    • Audit every software holding against the bifurcation framework: categorize as thin wrapper, lock-in-dependent incumbent, or process-power compounder by March 15
    • Initiate deep diligence on ServiceNow as a public market position
    • Source AI-native vertical SaaS with value-based pricing models in legal, healthcare, and customer support

    Sources: Good news: AI Will Eat Application Software · Huang pushes back on software selloff · AI agents churn fast 🔁, AI network effects 🌐, Data moats or death 📊

  04

    AI Chip Monopoly Cracking + Chinese Inference Pricing Floor: The Infrastructure Repricing

    <h3>Three Vectors Hitting Nvidia Simultaneously</h3><p>Nvidia's AI chip dominance is being challenged from multiple directions at once, and the convergence creates a structural repricing event for the entire AI infrastructure stack.</p><h4>Vector 1: Hyperscaler-to-Hyperscaler Chip Supply</h4><p>Google signed a <strong>multi-billion-dollar TPU deal with Meta</strong> — the first time a top-3 hyperscaler has committed at scale to a non-Nvidia AI accelerator from a competitor. Google executives are targeting up to <strong>10% of Nvidia's ~$200B annual revenue</strong>, implying a ~$20B commercial TPU business. That Meta is willing to buy chips from a rival cloud provider tells you how strong the desire to diversify away from Nvidia dependency has become.</p><h4>Vector 2: Chinese Models at 17x Lower Cost</h4><p>MiniMax M2.5 scores <strong>80.2% on software engineering tasks</strong> — within 0.6 points of Claude Opus 4.6 at 80.8%. The price: <strong>$0.30/M tokens vs. $5.00</strong>. DeepSeek V3's MoE architecture pushes inference costs <strong>36x lower than GPT-4o</strong>. These aren't promotional prices — they're backed by structural advantages including China's 40% cheaper electricity and algorithmic innovations. OpenRouter data confirms Chinese models are <strong>"disproportionately heavy in agentic flows run by U.S. firms."</strong></p><p><em>Critical context:</em> Enterprise adoption faces a hard ceiling — API requests physically route through Chinese data centers, creating compliance barriers. The market is bifurcating: a <strong>cost-driven developer layer</strong> where Chinese models win structurally, and a <strong>compliance-driven enterprise layer</strong> where data sovereignty is the moat.</p><h4>Vector 3: Open-Weight Training Cost Collapse</h4><p>DeepSeek trained a frontier-class 671B-parameter model for <strong>$5.576 million</strong> — a 10-100x reduction vs. prior frontier runs. 
Every frontier open-weight model now uses MoE, and active parameters per token (22-37B) are converging even as total parameters diverge. The pre-training cost moat is collapsing; <strong>post-training (RL, synthetic data, distillation) is now the primary differentiator</strong>.</p><table><thead><tr><th>Challenger</th><th>Threat Vector</th><th>Scale</th><th>Investment Implication</th></tr></thead><tbody><tr><td>Google TPU</td><td>Hyperscaler alternative</td><td>$20B revenue target</td><td>Validates chip diversification; chip-agnostic middleware becomes critical</td></tr><tr><td>Chinese Labs</td><td>17x cheaper inference</td><td>4 of 6 frontier open-weight models</td><td>Inference pricing floor reset; enterprise moat is compliance, not quality</td></tr><tr><td>Cerebras</td><td>Alternative architecture</td><td>$23B IPO filing</td><td>Sets public comp for all alt-chip companies</td></tr><tr><td>MatX</td><td>Inference-focused silicon</td><td>$500M raise; 2,000 tok/s target</td><td>Best new entrant per SemiAnalysis</td></tr></tbody></table><blockquote>The alpha isn't in picking the Nvidia killer — it's in companies that benefit from chip optionality: inference optimization layers, chip-agnostic orchestration platforms, and the middleware that lets enterprises switch between accelerators.</blockquote>
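The 17x pricing gap translates directly into a stress test for any model business in a portfolio. A minimal sketch (per-token prices are the piece's figures; the 50B-token monthly workload is a hypothetical assumption):

```python
# Stress-test inference economics against the new pricing floor.
# Per-million-token prices are the figures quoted above; the 50B-token/month
# workload is a hypothetical assumption for illustration.

PRICE_INCUMBENT = 5.00  # $/M tokens (Claude-class, per the piece)
PRICE_FLOOR = 0.30      # $/M tokens (MiniMax M2.5, per the piece)

def monthly_inference_cost(tokens_billions: float, price_per_m_tokens: float) -> float:
    """Dollar cost for a monthly token volume at a given $/M-token rate."""
    return tokens_billions * 1_000 * price_per_m_tokens

cost_incumbent = monthly_inference_cost(50, PRICE_INCUMBENT)
cost_floor = monthly_inference_cost(50, PRICE_FLOOR)
print(f"Incumbent pricing: ${cost_incumbent:,.0f}/mo")  # $250,000/mo
print(f"Floor pricing:     ${cost_floor:,.0f}/mo")      # $15,000/mo
print(f"Gap: {cost_incumbent / cost_floor:.1f}x")       # 16.7x
```

The same arithmetic frames the compliance caveat above: an enterprise that cannot route requests through Chinese data centers is effectively paying the full gap as a data-sovereignty premium.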

    Action items

    • Reassess Nvidia concentration risk across portfolio — model scenarios where Google TPUs capture 5-15% of AI accelerator market share
    • Stress-test AI model companies in portfolio against $0.30/M token inference pricing floor
    • Track Cerebras IPO pricing as the real-time valuation signal for every alternative chip company in deal flow

    Sources: 📈 Data to start your week · 🐱 AI is chaos. Here's the map · ChinAI #349: Tokens Made in China? · The Architecture Behind Open-Source LLMs

◆ QUICK HITS

  • Update: Anthropic-Pentagon — Claude was reportedly still used in Iran strikes over the weekend after the ban, because the system was too embedded to remove; Anthropic preparing lawsuit against 'supply chain risk' designation

    🪖 The Pentagon dispute that shook the AI industry

  • Update: OpenAI $110B round (at a reported $730B valuation) — Amazon's $50B investment includes a $100B AWS expansion deal, making net capital retention likely only 30-40% of the $110B headline; Microsoft sat out entirely

    🪖 The Pentagon dispute that shook the AI industry

  • Agentic payments hit simultaneous-launch density: Visa (Intelligent Commerce), Mastercard (Agentic Tokens), Google (Universal Commerce Protocol), Checkout.com, Razorpay/Anthropic, and Cashfree/OpenAI all shipped in Q1 — none solved the orchestration layer

    AI cuts go mainstream 🤖, Plaid's $8B liquidity reset 💳, Crypto trades war 24/7 🛢️

  • OCC's 376-page GENIUS Act rulemaking creates rebuttable presumption against stablecoin yield payments — directly threatens Circle-Coinbase USDC rewards model; 60-day comment window open

    OCC proposes stablecoin framework 🧑‍⚖️, Circle Q4 📈, Polymarket insiders make $1.2M 🇮🇷

  • Plaid's $8B secondary sale is up 31% from $6.1B last year but still 40% below 2021 peak of $13.4B — sets the comp anchor for late-stage fintech infrastructure deals

    AI cuts go mainstream 🤖, Plaid's $8B liquidity reset 💳, Crypto trades war 24/7 🛢️

  • Hyperliquid generated $227M in single-day silver volume during Iran crisis while traditional commodity exchanges were closed — DeFi derivatives crossing from novelty to structural necessity

    OCC proposes stablecoin framework 🧑‍⚖️, Circle Q4 📈, Polymarket insiders make $1.2M 🇮🇷

  • BMW running multi-vendor humanoid strategy — Figure 02 in U.S. (30,000+ X3 builds), Hexagon AEON in Europe (April 2026 scale-up); Xiaomi deploying own humanoids at 76-second cycle with 90% success rate

    🪳 Bio-robotic spy roaches

  • Suno hit $300M ARR with 2M paid subscribers (~$150 implied ARPU) — validates consumer willingness to pay for AI creative tools at scale

    xAI leaks shareable voice cloning for Grok's iOS platform

  • Coinbase head of engineering disclosed 16x productivity multiplier for agent-heavy engineers and 10x reduction in PR review time (150 hours → 15 hours) — strongest named-company enterprise proof point for AI dev tools

    🎙️ This week on How I AI: 5 OpenClaw agents run my home, finances, and code

  • Andrew Ng identifies the training layer as the real AI bubble risk — 90% of expert work can't be verified by current methods, directly contradicting the capital deployment thesis behind OpenAI's $110B raise

    OpenAI $110B mega-round 💰, OpenAI-Pentagon red lines 🛑, Google goal-based agents 🎯

  • SpaceX targeting $1.75T+ valuation with $50B raise in June 2026 — would be largest IPO in history; Starlink V2 with 5G-speed direct-to-phone by 2027 using custom silicon

    Anthropic vs Pentagon 🤖, SpaceX eyes March IPO 💰, lessons building Claude Code 🧑‍💻

  • Three simultaneous mega-breaches totaling 77M+ records (Canadian Tire 38M, ManoMano 38M via Zendesk subcontractor, Odido 1M+) — third-party vendor breaches becoming dominant attack vector

    Canada Tyre 38M Breach 🇨🇦, Twitch Exposes Roadmap 📹, EC2 Instance Attestation ☁️

BOTTOM LINE

The AI investment frontier has shifted below the model layer. A $75B grid buildout with a 4-year transformer backlog is the most concentrated infrastructure moat in tech; agent security is broken at the architectural level with no category winner (the cloud security moment for AI); software's 30% selloff is creating generational entry points for process-power compounders while destroying thin wrappers; and Chinese models at 17x cheaper inference are resetting the pricing floor. The alpha in 2026 is in the physical infrastructure, security middleware, and orchestration layers that every AI company needs, regardless of which model or chip wins.

Frequently asked

Where should investors look for alpha below the model layer right now?
Three infrastructure layers are forming durable moats while capital is still scarce: agent security (exposed by the systemic OpenClaw localhost trust flaw across local agents), the $75B grid transmission buildout with a near-monopoly supply chain and a 4-year transformer backlog, and agentic payments middleware where every major network shipped in Q1 but none solved orchestration.
Who controls the $75B grid transmission buildout and why does it matter?
Three companies dominate: AEP owns 90% of the existing 765 kV network and proposed a $10B Panhandle Plan for a 24 GW AI corridor; Quanta Services (PWR) built nearly all existing 765 kV lines; and Hyosung HICO is the sole U.S. manufacturer of 765 kV transformers, booked solid through 2030. This concentration makes them the most durable moats in AI infrastructure, with ERCOT alone approving $33B in grid investment.
Why is agent security being compared to the early cloud security market?
Agent reliability sits at 'LLMs circa 2020' maturity while enterprise deployment accelerates at 2026 pace — the same gap that created the $50B+ cloud security TAM. Evidence is overwhelming: OpenClaw's architectural flaw affects all local agents, Claude Code was weaponized against Mexican government bodies, and academic research shows agents comply with non-owner requests by default, leak data, and can be socially engineered into attacking other agents.
How should investors interpret the 30% software selloff?
The selloff is indiscriminate, pricing thin AI wrappers and deep-moat compounders identically. Six of Hamilton Helmer's seven moats — process power, network effects, cornered resources, and others — hold or strengthen in the AI era; only switching costs are genuinely eroding. ServiceNow's coordinated C-suite buying (CEO, CFO, CPO, AI Officer canceling sell plans plus a $3M CEO purchase) is the strongest insider conviction signal in enterprise software.
Is Nvidia's AI chip dominance actually cracking?
Three vectors are hitting simultaneously: Google signed a multi-billion TPU deal with Meta targeting ~$20B in commercial TPU revenue, Chinese models like MiniMax M2.5 deliver Claude-class performance at 17x lower cost ($0.30 vs $5.00 per million tokens), and DeepSeek trained a 671B frontier model for just $5.576M. The alpha isn't in picking a Nvidia killer — it's in chip-agnostic orchestration middleware and inference optimization layers that benefit from optionality.
