MCP STDIO RCE Hits 200+ Projects as Vercel Chain Unfolds
Topics: Agentic AI · Data Infrastructure · AI Regulation
MCP's STDIO transport has a protocol-level RCE — not a bug, an architectural design flaw — affecting 200+ open-source projects and thousands of servers, with exploitation trivially achievable via malicious tool descriptions. This dropped the same week the Vercel breach chain was fully revealed (Context.ai → Google Workspace → Vercel, with NPM/GitHub tokens claimed for sale), Cursor got an indirect prompt injection RCE from cloned READMEs, and iTerm2's SSH conductor accepted arbitrary commands from `cat readme.txt`. If you run MCP servers, deploy on Vercel, or use Cursor on untrusted repos, you have same-day action items.
◆ INTELLIGENCE MAP
01 Developer Toolchain Under Simultaneous RCE Attack
Act now · Five independent RCE vectors hit core dev tools this cycle: MCP STDIO command injection (200+ projects), Cursor prompt injection via READMEs, iTerm2 SSH conductor spoofing, Protobuf.js config file RCE (52M weekly npm downloads), and GitHub CI pull_request_target abuse (500+ malicious PRs). These aren't related attacks — they're a convergence of under-hardened developer surfaces being exploited simultaneously.
- MCP CVEs filed: 10 (30+ vulnerabilities)
- Protobuf.js downloads: 52M+/week
- Malicious PRs (prt-scan): 500+
- Compromised packages: Trivy, KICS, Axios
- Trivy/KICS credentials → ransomware operators
02 Vercel Breach: AI OAuth Kill Chain Exposed
Act now · Vercel confirmed attackers chained through Context.ai → Google Workspace → internal systems with 'surprising velocity.' ShinyHunters claims to be selling source code, NPM/GitHub tokens, API keys, and 580 employee records. 12+ sources converge: the attack vector is a compromised third-party AI tool's OAuth grant — a new supply chain pattern most orgs haven't modeled. Rotate all Vercel secrets today.
- Attack origin: Context.ai compromise
- Pivot via: Google Workspace OAuth
- Ransom demand: $2M
- Claimed employee data: 580 records
- Token types exposed: NPM, GitHub, API keys
- Context.ai breach: AI tool compromised
- OAuth pivot: Google Workspace access
- Lateral movement: Vercel internal systems
- Data exfiltration: source code, tokens, keys
- Sale announced: ShinyHunters listing
03 Agent Containment Architecture Converges on a Reference Design
Monitor · GitHub published their full agentic workflow sandbox: multi-container topology, sidecar secret proxying, staged output buffers with allowlists, and boundary-level observability — all with the honest admission that prompt injection remains unsolved. MBZUAI's Claude Code teardown (512K lines, 54 tools, 5 compression layers) confirms the same pattern. Docker Sandboxes ships microVM-per-agent isolation. The consensus: design for containment, not prevention.
- Claude Code tools: 54
- Permission modes: 7
- Context compression: 5 layers
- Max PRs per agent run: 3
- Agent files: 1,884
- 01 Infrastructure code: 512K lines
- 02 Tools registered: 54
- 03 Hook points: 27
- 04 Permission modes: 7
- 05 Compression layers: 5
04 AI Velocity Gains: Only With Strong DevEx Foundations
Monitor · State of Software Delivery report: median teams show +15% feature branch activity but -15% main branch success rate with AI tools — more code that fails integration. Top 5% see ~2x gains, but they were already top performers before AI. Intercom's 2x claim is backed by Honeycomb telemetry and custom skill hooks, not raw model output. Agentic compute costs now approach $22/hr — human hourly rates.
- Feature branch churn: +15%
- Main branch activity: -7%
- Main branch success: -15%
- Top 5% throughput: ~2x
- Agent cost/hour: ~$22
05 Infrastructure Pressure: DRAM Shortage, Container Limits, PQC Migration
Background · DRAM production covers only 60% of demand through 2027 as manufacturers prioritize HBM for AI accelerators — expect cloud memory-optimized instance price increases. Netflix published data showing 20K mounts per 100 containers with NUMA cross-socket latency silently degrading p99. NIST and UK NCSC are pushing post-quantum crypto migration to 2030; Meta is already deploying ML-KEM internally.
- DRAM supply coverage: 60% of demand
- Shortage duration: through 2027
- Netflix mounts per 100 containers: 20K
- PQC target date: 2030
- Atlassian deadline: August 17
◆ DEEP DIVES
01 Your Dev Toolchain Is Now a Multi-Vector Kill Chain: MCP STDIO RCE, Vercel OAuth, and Three More
<h3>Five independent attack vectors hit developer tools simultaneously</h3><p>This isn't a single incident — it's a <strong>convergence pattern</strong> across every layer of the modern developer stack. Five unrelated RCE vectors dropped in the same cycle, each targeting a different tool you probably use daily. Taken together, they represent the most consequential shift in developer security posture this year.</p><h4>MCP STDIO: An Architectural Flaw, Not a Bug</h4><p>Anthropic's Model Context Protocol uses <strong>STDIO as a default transport</strong>, and that transport doesn't sanitize input. Any MCP server running with defaults allows a malicious client to inject arbitrary OS commands. OX Security's audit found <strong>30+ vulnerabilities across 10 CVEs</strong>, affecting 200+ open-source projects and thousands of servers. This is a protocol-level design issue — you cannot patch it with a version bump. The mitigation is architectural: <strong>switch to HTTP transport</strong>, run MCP servers in sandboxed environments with restricted network access, and treat STDIO transport as untrusted by default.</p><h4>Vercel: The AI OAuth Attack Chain</h4><p>The full kill chain is now confirmed across 12+ independent sources: attackers compromised <strong>Context.ai</strong> (a third-party AI tool), pivoted through an employee's <strong>Google Workspace OAuth grant</strong>, then reached Vercel's internal systems with what CEO Guillermo Rauch described as <em>'surprising velocity and in-depth understanding of Vercel.'</em> ShinyHunters claims to be selling source code, NPM tokens, GitHub tokens, API keys, and 580 employee records. A $2M ransom demand suggests financially motivated actors who may dump data if unpaid.</p><blockquote>Your Google Workspace is only as secure as the least-trusted third-party AI app any employee has granted OAuth access to. 
Most companies have 50-200 such integrations, most granted by individual engineers with zero security review.</blockquote><h4>The Other Three Vectors</h4><ul><li><strong>Cursor NomShub</strong>: A malicious prompt in a repository README hijacks Cursor's AI agent to open a remote tunnel, register a device code, and authorize an attacker's GitHub account on your machine. The attack chain: clone repo → Cursor indexes README → agent executes attacker commands → persistent access via <code>.zshenv</code>.</li><li><strong>iTerm2 SSH Conductor</strong>: The SSH integration accepts protocol commands from <em>any</em> terminal output. A crafted file with DCS/OSC escape sequences impersonates the conductor and pushes arbitrary commands to your local shell. Trigger: <code>cat readme.txt</code>. Patch is described as 'still unstable.'</li><li><strong>Protobuf.js</strong>: RCE via malicious config file in a library with <strong>52M+ weekly npm downloads</strong>. It's almost certainly in your transitive dependencies.</li></ul><h4>Supply Chain Weaponization: From Scan Tool to Ransomware</h4><p>The most alarming escalation: <strong>TeamPCP is feeding credentials stolen from Trivy and Checkmarx KICS</strong> supply chain compromises directly to the Vect ransomware group. Your DevSecOps scanning tools — the ones you installed to improve security — had access to container registries, cloud credentials, and deployment pipelines. Those keys are now being sold to ransomware operators who know exactly how to monetize them.</p><hr><p>Separately, Axios (the HTTP client in nearly every Node.js project) was supply-chain compromised and hit <strong>hundreds of thousands of downloads</strong> despite AI-powered detection catching it within minutes. Detection latency is no longer the bottleneck — distribution pipeline latency is.</p>
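The 'treat STDIO as untrusted' mitigation can be sketched as a thin guard in front of tool execution. This is a minimal, hypothetical example (the `run_tool` helper and the character allowlist are illustrative, not part of any MCP SDK): validate every argument, and invoke binaries via an argv list so no shell ever parses attacker-controlled input.

```python
import re
import subprocess

# Hypothetical guard for tool arguments arriving over an untrusted
# STDIO transport: allowlist characters, then invoke the binary with
# an argv list so shell metacharacters like ; | $() stay inert.
SAFE_ARG = re.compile(r"^[A-Za-z0-9_.@/:=-]+$")

def run_tool(binary: str, args: list[str]) -> str:
    for arg in args:
        if not SAFE_ARG.match(arg):
            raise ValueError(f"rejected untrusted argument: {arg!r}")
    # shell=False + argv list: the kernel receives the args verbatim,
    # no shell interpretation happens at any point.
    result = subprocess.run([binary, *args], capture_output=True,
                            text=True, shell=False, timeout=30)
    return result.stdout
```

The same guard belongs on the server side of any STDIO transport regardless of which client connects; pairing it with egress blocking limits what a successful injection can reach.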
Action items
- Audit all MCP server integrations for STDIO transport usage and switch to HTTP transport with input sanitization. Pin MCP server versions and implement network isolation.
- Rotate ALL Vercel environment variables, integration tokens, NPM tokens, and GitHub tokens immediately. Check OAuth grants in Google Workspace Admin → Security → API controls and revoke non-essential AI tool access.
- Disable iTerm2 SSH integration (Shell Integration → SSH) on all developer machines until the patch stabilizes.
- Run `npm ls protobufjs` across all Node.js services and update to patched versions. Audit whether any service accepts untrusted protobuf definitions.
- Emergency credential rotation for any secrets that Trivy or Checkmarx KICS had access to in your CI/CD pipeline — container registry tokens, cloud credentials, deployment keys.
- Implement a 24-72 hour quarantine delay in your private registry mirror (Artifactory, Verdaccio) before new package versions become available to CI.
Sources: Your dev toolchain is the attack surface now: Cursor RCE, iTerm2 RCE, and GitHub CI poison all dropped this week · MCP SDK has RCE via unsafe STDIO defaults — rotate Vercel secrets now, audit your MCP servers next · MCP has critical RCEs in STDIO defaults, Vercel tokens leaked — rotate now, audit your agent stack · Protobuf.js RCE + Trivy/KICS supply chain creds feeding ransomware — check your CI/CD pipeline today · Vercel's AI-accelerated supply chain breach via Context.ai · Axios got supply-chain compromised — detected in minutes, still hit 100K+ downloads
02 GitHub's Agent Sandbox Is Your Reference Architecture — Here's What to Steal
<h3>The industry just converged on agent containment, not prevention</h3><p>Three independent organizations published production agent isolation architectures within days of each other — GitHub (Agentic Workflows), Docker (Sandboxes), and Alibaba Cloud (ACS Agent Sandbox) — and they all made the same honest admission: <strong>prompt injection is unsolved, so design for damage containment</strong>. Meanwhile, MBZUAI's reverse-engineering of Claude Code reveals the production reality: <strong>512K lines of infrastructure</strong> wrapping a simple while-loop reasoning core.</p><h4>The Four Patterns That Matter</h4><p><strong>1. Sidecar Secret Proxying.</strong> In GitHub's design, the agent container <em>never holds credentials</em>. All credential-bearing operations are mediated by dedicated sidecar containers — a firewall proxy, an MCP gateway, an API proxy — each holding only the tokens it needs. The host filesystem is mounted read-only with <strong>tmpfs overlays blanking sensitive paths</strong> (/proc, SSH keys, config files), and the agent runs in a chroot jail. OpenAI's Codex independently converged on the same principle: secrets available only during setup, removed before agent execution, internet disabled by default.</p><p><strong>2. Staged Output Buffers.</strong> The agent never writes to GitHub directly. Write operations go through a 'safe output' MCP server that only buffers intended changes. A deterministic pipeline then validates every buffered operation: <strong>allowlisted operation types, quantity caps (max 3 PRs per run), secret scanning, URL stripping, and content moderation</strong>. Only operations passing all checks execute against the real API.</p><blockquote>Measure your security by what a fully compromised agent can actually achieve, and make that answer boring.</blockquote><p><strong>3. 
Capability-Based Workflow Compilation.</strong> GitHub compiles workflow definitions into Actions with explicit per-stage constraints: active components, read/write permissions, data artifacts, and admissible downstream consumers. This is <strong>capability-based security</strong> applied to agentic workflows — a formally specified data flow graph analyzable before any code runs.</p><p><strong>4. Boundary-Level Observability.</strong> Every trust boundary is simultaneously an observation point: the firewall logs network activity, the API proxy captures model metadata, the MCP gateway logs tool invocations, and in-container instrumentation audits env var access. This gives you forensics for free.</p><h4>Claude Code's Harness: The Ratio That Should Humble You</h4><p>MBZUAI researchers reverse-engineered Claude Code and found <strong>1,884 files</strong> implementing 7 permission modes, 54 tools, 27 hooks, 5 context-compression layers, isolated subagents, and append-only transcripts. The core reasoning loop? A simple while-loop. If your agent infrastructure is less than <strong>10x the size of your prompt engineering layer</strong>, you're underinvesting in the harness. The append-only transcript is event sourcing for agent state. The 5-layer context compression is a pipeline trading fidelity for token budget. The isolated subagents are least-privilege microservices.</p><h4>Docker Sandboxes: MicroVM-Per-Agent Arrives</h4><p>Docker built a <strong>custom cross-platform VMM from scratch</strong> — not a Firecracker wrapper — giving each AI agent its own kernel and private Docker daemon via microVMs. This kills the Docker-in-Docker security nightmare that's plagued CI for years. Key question Docker hasn't answered: <strong>cold-start benchmarks</strong>. Firecracker does ~125ms. 
If Docker is in that ballpark on macOS and Windows (Firecracker is Linux-only), that's a genuine differentiator for dev experience.</p><h4>Alibaba's Four Chasms Framework</h4><p>MiniMax + Alibaba Cloud's production deployment at 100K+ concurrent agents identified <strong>four failure modes</strong>: (1) privilege escalation from prompt injection, (2) state volatility in long-running multi-hour tasks, (3) multi-agent scheduling complexity, and (4) cost vs. workload spike tension. Their ACS Agent Sandbox treats every agent execution as an untrusted tenant — the same posture used for arbitrary user code in serverless platforms.</p>
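GitHub's staged output buffer is, at its core, a deterministic filter over proposed operations. A minimal sketch modeled on the described design (the operation names, cap values, and secret regexes below are assumptions, not GitHub's actual pipeline):

```python
import re
from collections import Counter

ALLOWED_OPS = {"create_pr", "add_comment", "add_label"}   # allowlisted types
MAX_PER_OP = {"create_pr": 3}                             # quantity caps
# Illustrative credential shapes: GitHub PATs and AWS access key IDs.
SECRET_PATTERN = re.compile(r"(ghp_[A-Za-z0-9]{36}|AKIA[A-Z0-9]{16})")

def validate_buffer(ops: list[dict]) -> list[dict]:
    """Pass buffered agent operations through allowlist, caps, secret scan."""
    approved, counts = [], Counter()
    for op in ops:
        kind = op.get("type")
        if kind not in ALLOWED_OPS:
            continue  # drop non-allowlisted operation types
        counts[kind] += 1
        if counts[kind] > MAX_PER_OP.get(kind, float("inf")):
            continue  # drop operations beyond the quantity cap
        if SECRET_PATTERN.search(op.get("body", "")):
            continue  # drop anything that leaks a credential
        approved.append(op)
    return approved
```

The point is that everything after the agent is deterministic and unit-testable: a fully compromised agent can still only emit operations this filter lets through.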
Action items
- Prototype a sidecar proxy pattern for any internal agentic system calling external APIs — agent sends requests to a local HTTP proxy that injects auth tokens and enforces endpoint allowlists.
- Implement a staged output buffer for any agent that writes to production systems. Agent proposes actions → deterministic pipeline validates against allowlist and quantity caps → approved actions execute.
- Add MCP gateway logging to existing MCP server deployments — capture tool invocations, parameters, and results at the gateway layer even before enforcing policies.
- Audit your agent architecture against the Claude Code 10x ratio: map your permission model, context compression, tool orchestration, failure recovery, and audit trail. Identify gaps.
- Evaluate Docker Sandboxes as a replacement for your current agent isolation approach when cold-start benchmarks are published.
Sources: GitHub's zero-trust agent sandbox is the reference architecture your agentic CI/CD needs · Claude Code's 512K-line harness dissected: agent infra dwarfs the LLM core · Claude Code is just a while-loop — and your agent architecture is probably overengineered too · Vercel breached via AI tool OAuth — audit your third-party AI integrations now · Running 100K+ AI agents in prod? The 4 chasms MiniMax hit · Intercom's 2x velocity playbook
03 AI Coding Velocity Is Real — But Only If You Already Had These Foundations
<h3>New data shows AI coding tools amplify existing engineering quality, not replace it</h3><p>The State of Software Delivery report drops a data set that should recalibrate every AI coding ROI conversation in your org. The <strong>median engineering team</strong> using AI tools shows +15% feature branch activity alongside <strong>-7% main branch activity and -15% main branch success rate</strong>. Read that together: teams are writing more code that fails to integrate.</p><h4>The Stratification Is the Story</h4><table><thead><tr><th>Team Tier</th><th>Throughput Change</th><th>Quality Maintained?</th><th>Pre-AI Status</th></tr></thead><tbody><tr><td>Top 5%</td><td>~2x</td><td>Yes</td><td>Already top performers</td></tr><tr><td>Top 25%</td><td>+25%</td><td>Mostly</td><td>Strong DevEx</td></tr><tr><td>Median</td><td>Flat</td><td>No (-15% success)</td><td>Average DevEx</td></tr><tr><td>Bottom 50%</td><td>Negative net</td><td>No</td><td>Weak CI/CD</td></tr></tbody></table><p>Critically, the top performers <strong>were already at the top three years ago</strong>, before current AI tooling existed. AI isn't creating new winners — it's <strong>amplifying existing advantages</strong>. Teams with fast builds, comprehensive tests, and clean module boundaries absorb AI-generated code smoothly because their pipeline catches the bad stuff early. Teams with slow, flaky pipelines just choke on more code faster.</p><h4>Intercom's 2x Claim: The Guardrail Architecture Is the Real Story</h4><p>Intercom's Senior Principal Engineer claims <strong>doubled merged PRs per R&D employee in 9 months</strong> with Claude Code. But the replicable pattern isn't 'use Claude Code more' — it's their instrumentation and guardrail architecture:</p><ul><li><strong>Custom skills with hooks</strong>: Their 'Create PR' skill blocks the GitHub CLI and forces structured, context-rich PR descriptions. 
This is shift-left quality enforcement — the agent is incapable of producing meaningless descriptions.</li><li><strong>Honeycomb telemetry</strong>: Skill invocations tracked with production-grade observability — latency percentiles, error rates, throughput per team.</li><li><strong>S3 session storage</strong>: Anonymized agent sessions stored for audit trails and skill improvement.</li></ul><p>The prerequisites they explicitly state: <strong>mature CI/CD, comprehensive test coverage, and high-trust engineering culture</strong> — all in place <em>before</em> the AI push.</p><blockquote>AI coding agents are amplifiers. They amplify whatever your existing engineering culture produces. If your CI takes 30 minutes and your test coverage has gaps, 10x speed just means shipping unvalidated code 10x faster.</blockquote><h4>The Cost Ceiling Nobody Mentions</h4><p>Agentic workloads generate <strong>15-40x more API calls</strong> per task than single-prompt interactions, and agent compute costs are now approaching <strong>$22/hour</strong> — human hourly rates. Uber's CTO publicly demonstrated how Claude Code can 'blow up AI budgets,' and Anthropic has responded by moving to usage-based enterprise pricing. Every agent-powered feature needs three things most lack: <strong>per-invocation cost tracking, hard budget ceilings with circuit breakers, and graceful degradation when budgets are exceeded</strong>.</p><h4>Will Larson's Scaffolding Pattern: The Right Integration Model</h4><p>The most production-oriented agent pattern yet: prototype with full agent orchestration, then <strong>systematically refactor deterministic steps into hardcoded code</strong>. Keep the agent only at boundaries where genuine ambiguity exists — parsing unstructured input, handling schema drift, making edge-case judgments. The result: a small, well-defined agent surface surrounded by reliable, testable deterministic code. This is how you get agent benefits without agent operational nightmares.</p>
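The three cost controls above (per-invocation tracking, hard ceilings, circuit breaking) fit in one small class. A sketch with placeholder per-token prices, not any vendor's actual rates:

```python
class BudgetCircuitBreaker:
    """Track per-invocation agent spend and trip once a hard ceiling is hit."""

    def __init__(self, ceiling_usd: float):
        self.ceiling_usd = ceiling_usd
        self.spent_usd = 0.0
        self.tripped = False

    def record(self, input_tokens: int, output_tokens: int,
               usd_per_1k_in: float = 0.003,    # placeholder rate
               usd_per_1k_out: float = 0.015):  # placeholder rate
        """Record one invocation's cost; trip the breaker at the ceiling."""
        cost = (input_tokens / 1000) * usd_per_1k_in \
             + (output_tokens / 1000) * usd_per_1k_out
        self.spent_usd += cost
        if self.spent_usd >= self.ceiling_usd:
            self.tripped = True
        return cost

    def allow(self) -> bool:
        """Gate every agent invocation; degrade gracefully once tripped."""
        return not self.tripped
```

The graceful-degradation path is whatever your feature does when `allow()` returns False: fall back to a cached answer, a cheaper model, or a plain error, rather than an unbounded bill.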
Action items
- Audit your CI pipeline speed and test coverage this sprint — if CI takes >10 minutes or coverage is <70% on critical paths, invest there before scaling AI coding tools.
- Instrument your CI pipeline to tag PRs involving AI-generated code and compare main-branch success rates against human-only PRs.
- Prototype a custom 'Create PR' skill for Claude Code or Cursor that enforces structured descriptions and blocks direct CLI merges. Test whether review quality improves.
- Implement per-invocation cost tracking and budget ceilings for all agent-powered features before expanding usage.
- Adopt Larson's scaffolding pattern for your next agent-integrated feature: build with full orchestration, then refactor deterministic steps to hardcoded logic within the same sprint.
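Larson's scaffolding pattern reduces to a shape like this: a sketch in which `agent_parse` stands in for an LLM call behind a typed interface (a placeholder, not a real API). Deterministic code handles the common case; the agent surface shrinks to the one genuinely ambiguous boundary.

```python
import json

def process_record(raw: str, agent_parse) -> dict:
    """Deterministic fast path first; call the agent only on genuine ambiguity."""
    try:
        rec = json.loads(raw)      # hardcoded, testable path for clean input
    except json.JSONDecodeError:
        rec = agent_parse(raw)     # agent boundary: unstructured input only
    # Everything downstream is ordinary deterministic code.
    return {"id": str(rec["id"]), "amount": float(rec["amount"])}
```

The refactor direction matters: start with the agent doing everything, then pull each step that turned out to be deterministic back into plain code until only calls like `agent_parse` remain.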
Sources: Your AI coding gains are likely zero: new data shows only teams with strong DevEx foundations see any lift · Intercom's 2x velocity playbook: the real pattern is instrumented guardrails on AI agents · Your agent orchestration layer is the moat now — not your model · Claude Code is just a while-loop — and your agent architecture is probably overengineered too · Your synthetic data pipeline has a provably invisible misalignment vector
◆ QUICK HITS
Update: Claude Opus 4.7 hit 87.6% on SWE-bench Verified (up from 80.8%) at unchanged pricing with 3x vision resolution — re-run your eval suite for a free capability upgrade
Your synthetic data pipeline has a provably invisible misalignment vector — Anthropic's Nature paper explains why
Atlassian will train AI models on your Jira/Confluence data starting August 17 — no opt-out below Enterprise tier; audit data sensitivity and evaluate migration before deadline
MCP has critical RCEs in STDIO defaults, Vercel tokens leaked — rotate now, audit your agent stack
GreyNoise research: scanning surges against edge devices predict vendor CVE disclosures with 9-day median lead time — alert when session intensity AND unique source IPs both exceed 2σ simultaneously
Axios got supply-chain compromised — detected in minutes, still hit 100K+ downloads. Your lockfile won't save you.
Polars reaches near-default streaming mode with native sink_iceberg() — prototype as a Spark replacement for pipelines processing <500GB per run
Arcesium's DuckDB-on-S3 pagination pattern + Polars streaming-to-Iceberg: two architectures worth stealing
Arcesium's DuckDB-on-S3 pagination pattern serves ~2M API calls/day over billions of financial records — first page hits DB, all subsequent pages served from pre-materialized Parquet via DuckDB
OpenTelemetry declarative config reaches stability in Go, Java, JS, C++, PHP (not yet Python/.NET) — define your entire observability stack in one version-controlled YAML file
Netflix published data showing 20K mounts for 100 containers with NUMA cross-socket latency silently degrading p99 — run `cat /proc/mounts | wc -l` on your densest hosts and correlate with `numastat`
Netflix hit 20k mounts for 100 containers — your NUMA-unaware scheduling is a latency bomb
Mamba-3 achieves transformer-equivalent perplexity at half the hidden state size — benchmark against your long-context inference workloads where KV cache pressure dominates
Anthropic's Nature paper proves subliminal trait transfer through distillation — content filters on synthetic data pipelines are structurally incapable of catching misalignment from same-family teacher-student pairs
Claude Opus 4.6 generated a working Chrome V8 exploit chain targeting Discord's outdated Chrome 138 for ~$2,283 — your Electron app's Chromium version lag is now a quantifiable, weaponizable metric
Your dev toolchain is the attack surface now: Cursor RCE, iTerm2 RCE, and GitHub CI poison all dropped this week
Kimi K2.6 achieves 12+ hours continuous agentic execution with 4,000+ tool calls as open-source — benchmark context coherence degradation at scale against your current model
Claude Code's 512K-line harness dissected: agent infra dwarfs the LLM core — rethink your agent architecture now
Salesforce exposes entire platform (CRM, Agentforce, Slack) as MCP endpoints under 'Headless 360' — the largest enterprise validation of MCP as a production integration protocol to date
Salesforce just exposed its entire stack as MCP endpoints — your enterprise integration layer needs a rethink
PrfaaS (Prefill-as-a-Service) decouples LLM prefill and decode across datacenters over commodity Ethernet — kills the RDMA-or-bust assumption for long-context inference scaling
Claude Code is just a while-loop — and your agent architecture is probably overengineered too
Google ships A2UI 0.9 — a generative UI standard with React renderer, bidirectional data sync, sandboxed execution, and Python Agent SDK; evaluate against your current agent-UI integration
Google's A2UI 0.9 may define how your agents render UI — and Vercel just got breached
DRAM production covers only 60% of demand through 2027 as manufacturers prioritize HBM for AI — factor into capacity planning and favor memory-efficient architectures in new designs
Protobuf.js RCE + Trivy/KICS supply chain creds feeding ransomware — check your CI/CD pipeline today
BOTTOM LINE
Your developer toolchain became a multi-vector attack surface this week: MCP's STDIO transport has a protocol-level RCE across 200+ projects, Cursor can be hijacked by a README in a cloned repo, Vercel's breach originated from a third-party AI tool's OAuth grant to Google Workspace, and credentials stolen from Trivy and KICS are now feeding directly to ransomware operators — all while new data shows AI coding tools only improve velocity for teams that already had strong CI/CD and test coverage, with median teams seeing 15% more code that's 15% less likely to merge cleanly.
Frequently asked
- What's the fastest mitigation for the MCP STDIO RCE if I can't switch transports today?
- Run MCP servers in sandboxed environments with restricted network access and treat STDIO input as untrusted. Pin MCP server versions, isolate them behind an MCP gateway that logs tool invocations, and block egress to anything outside an explicit allowlist. A version bump won't fix this — the flaw is protocol-level, so containment is your only short-term lever until HTTP transport with input sanitization is in place.
- Which credentials should I rotate after the Vercel breach chain?
- Rotate every Vercel environment variable, integration token, NPM token, and GitHub token, then audit Google Workspace OAuth grants under Admin → Security → API controls and revoke non-essential AI app access. ShinyHunters claims to be selling NPM and GitHub tokens alongside source code and 580 employee records, so assume any token exposed to Vercel's build pipeline is compromised until proven otherwise.
- How do I know if my team will actually get velocity gains from AI coding tools?
- Check your CI duration, test coverage, and main-branch success rate before scaling adoption. The State of Software Delivery data shows only teams already in the top 25% of DevEx see real lift — the median team gets flat throughput with a 15% drop in main-branch success rate. If CI takes over 10 minutes or coverage has gaps on critical paths, fix that first or AI tools will amplify your existing dysfunction.
- What's the single highest-leverage agent containment pattern to implement first?
- Sidecar secret proxying — the agent container never holds credentials, and all credential-bearing operations are mediated by dedicated sidecars holding only the tokens they need. GitHub's Agentic Workflows and OpenAI's Codex converged on this independently. Pair it with a staged output buffer so the agent proposes writes to a validation pipeline rather than calling production APIs directly.
- Why is iTerm2's SSH integration being called an RCE from `cat readme.txt`?
- The SSH conductor accepts protocol commands from any terminal output, so a file containing crafted DCS/OSC escape sequences can impersonate the conductor and push arbitrary commands to your local shell. Simply displaying the file triggers execution. The patch is reportedly still unstable, so the recommended action is to disable iTerm2's SSH shell integration on developer machines until a stable fix lands.