PROMIT NOW · SECURITY DAILY · 2026-04-08

Claude Mythos Finds Thousands of 0days; Patch Window: 180 Days

· Security · 39 sources · 1,451 words · 7 min

Topics: Agentic AI · AI Regulation · LLM Inference

Anthropic's Claude Mythos Preview has autonomously discovered thousands of high-severity zero-day vulnerabilities across every major OS, browser, and the Linux kernel — including bugs undetected for 27 years — and Alex Stamos estimates open-weight models will replicate this capability within 6 months. Project Glasswing, a 40+ company coalition with $104M in funding, is racing to patch before that window closes. Your vulnerability management program was built for human-speed bug discovery; you have roughly 180 days to rebuild it for machine-speed discovery before every ransomware actor on Earth has an automated 0day factory.

◆ INTELLIGENCE MAP

  01

    AI Zero-Day Discovery Revolution: 6-Month Proliferation Clock

    act now

    Anthropic's Mythos autonomously chains 5+ vulns into novel exploits, finding bugs that escaped 27 years of review. Open-weight models ~6 months from parity. Project Glasswing ($104M, 40+ companies) racing to defensively scan critical infrastructure before proliferation.

    Key stat: 6 months proliferation window · 3 sources

    Timeline:
    1. Apr 7, 2026: Mythos disclosed, Glasswing launches
    2. ~Jun 2026: first Glasswing CVE disclosures expected
    3. ~Oct 2026: open-weight models approach parity
    4. Post-Oct: commodity 0day discovery available to all actors
  02

    OpenClaw Authorization Catastrophe + GrafanaGhost AI Exploitation

    act now

    OpenClaw had 6 critical auth bypasses in 6 weeks (CVSS 9.4 and 9.9), with 63% of 135K+ exposed instances running unauthenticated. GrafanaGhost chains prompt injection to exfiltrate data through AI features, invisible to SIEM/DLP. Both represent AI platform security failures at production scale.

    Key stats: 135K+ exposed OpenClaw instances · 4 sources · 63% of exposed instances run without authentication
  03

    AI-Generated Code Dismantling SDLC Security Controls

    monitor

    OpenAI shipped 1M lines with zero human review. Controlled studies show AI tools produce 41% more bugs. Vercel merges 58% of PRs without human review. GitHub dropped to 90% uptime under 17M monthly AI agent PRs. Your code review security gate is being eliminated by design.

    Key stat: 41% more bugs from AI tools · 8 sources

    1. Speed gain: +26%
    2. Bug increase: +41%
    3. PRs merged without human review: 58%
    4. Dev tool adoption: 84%
  04

    Supply Chain Weaponization: Security Tools as Attack Vectors

    monitor

    Trivy compromise confirmed as initial access for 340GB EU Commission breach. Axios hit via social engineering. Strapi npm packages poisoned. Chinese labelers running coordinated anti-distillation data poisoning that evades audit. Three software supply chain + one AI data supply chain attack in one period.

    Key stat: 340 GB EU Commission breach · 5 sources

    1. Trivy → EU Commission: 340 GB exfiltrated
    2. Axios npm: social engineering
    3. Strapi npm: registry compromise
    4. AI training data: coordinated poisoning
  05

    Geopolitical Cyber Escalation: DPRK Long-Game + Nation-State TTPs

    background

    DPRK invested $1M real capital and 6 months of in-person social engineering before a $270M Drift Protocol heist — a new high-water mark for state-sponsored patience. FBI IC3 reports $20B+ cybercrime losses in 2025 (26% YoY), with BEC alone at $3.05B. Trust-based vetting no longer works against nation-state actors.

    Key stat: $270M DPRK Drift Protocol heist · 4 sources

    1. FBI IC3 total 2025: $20B+
    2. BEC losses: $3.05B
    3. DPRK Drift heist: $0.27B

◆ DEEP DIVES

  01

    Claude Mythos & Project Glasswing: The 6-Month Window Before Automated Zero-Day Discovery Goes Commodity

    <h3>What Happened</h3><p>Anthropic disclosed <strong>Claude Mythos Preview</strong> on April 7 — a model they describe as too dangerous to release publicly. It has autonomously discovered <strong>thousands of high-severity zero-day vulnerabilities</strong> across every major operating system and web browser, including a 27-year-old bug in OpenBSD, an FFmpeg flaw that survived 5 million automated tests, and several Linux kernel vulnerabilities enabling <strong>full machine compromise</strong>. The model scores 93.9% on SWE-bench Verified — a <strong>13-point leap</strong> over the previous state of the art in two months.</p><p>Most critically: Mythos doesn't just find individual bugs. It identifies five separate vulnerabilities in a single codebase and <strong>autonomously chains them into novel exploit paths</strong>. These capabilities emerged from general reasoning improvements, not specialized cybersecurity training — meaning every frontier lab will inevitably cross this threshold.</p><hr><h3>The Proliferation Timeline</h3><p>Alex Stamos estimates open-weight models will replicate Mythos-class vulnerability discovery <strong>within approximately 6 months</strong>. Cisco's CSTO Anthony Grieco stated: <em>"AI capabilities have crossed a threshold that fundamentally changes the urgency required to protect critical infrastructure."</em> Once open-weight models reach parity, any actor with commodity hardware — ransomware gangs, hacktivists, nation-states — can run automated vulnerability discovery against <em>any codebase</em>.</p><blockquote>Zero-days go from expensive skilled craft to cheap automated commodity in roughly 180 days. 
Your defense strategy must shift from 'prevent compromise' to 'survive compromise' before that window closes.</blockquote><h3>Project Glasswing: The Defensive Race</h3><p>Anthropic launched <strong>Project Glasswing</strong> — a coalition of 40+ companies including Apple, Google, Microsoft, and Cisco with <strong>$104M in funding</strong> — to defensively scan and patch critical infrastructure and open-source dependencies before proliferation. Anthropic briefed CISA and the Center for AI Standards and Innovation before launch. Expect a <strong>surge of coordinated CVE disclosures</strong> in the coming weeks as Glasswing processes its findings.</p><p>This creates an unprecedented governance situation: <strong>a private company now holds thousands of exploits for virtually every major software project</strong>. Anthropic's model weights and vulnerability database are now the most valuable theft target in cybersecurity history. As journalist Kelsey Piper observed: a single entity controls an offensive capability that no government or organization has previously concentrated.</p><hr><h3>What This Means for Your Program</h3><p>This is not an incremental improvement — it is a <strong>structural change in attacker economics</strong>. Your current vulnerability management cadence was designed for human-speed bug discovery. When bugs are found at machine speed, your 30-day patch SLA becomes a 30-day exposure window. Your SBOM gaps become exploitable blind spots. Your perimeter defenses become probabilistically weaker with each passing week as the stockpile of known (to AI) but unknown (to you) vulnerabilities grows.</p><p>No existing compliance framework — <em>SOC 2, ISO 27001, NIST CSF, or CMMC</em> — contemplates this scenario. Expect emergency guidance from CISA and potentially new regulatory requirements around AI-discovered vulnerability disclosure.</p>

    Action items

    • Convene an emergency threat model review assuming automated 0day discovery by sophisticated and unsophisticated actors within 6 months. Re-evaluate blast radius for every internet-facing system.
    • Stress-test your patch pipeline: simulate receiving 50+ critical CVEs in a single week from Glasswing disclosures. Pre-authorize emergency security patch windows if change management can't handle the velocity.
    • Complete a comprehensive SBOM audit of FFmpeg, Linux kernel versions, OpenBSD-derived components, and all browser engines across production, staging, and dev environments by end of month.
    • Accelerate microsegmentation and assume-breach architecture for Tier 1 assets this quarter. When 0days become commodity, preventing initial compromise becomes probabilistically harder.
    • Brief your board using Grieco's quote and Stamos's 6-month estimate. Request accelerated budget for detection engineering and zero-trust initiatives.
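The SBOM-audit item above can be partly mechanized. A minimal sketch, assuming SBOMs are available as CycloneDX-style JSON dicts; the `WATCHLIST` names are illustrative examples drawn from the components named in this briefing, not an official Glasswing scope:

```python
import json

# Components named in the Mythos disclosure, used here as an illustrative
# watchlist (extend with your browser engines and kernel package names).
WATCHLIST = {"ffmpeg", "linux-kernel", "openbsd-libc", "webkit", "blink", "gecko"}

def flag_components(sbom: dict) -> list[str]:
    """Return watchlisted component names found in a CycloneDX-style SBOM dict."""
    hits = []
    for comp in sbom.get("components", []):
        name = comp.get("name", "").lower()
        if name in WATCHLIST:
            hits.append(f"{name}@{comp.get('version', '?')}")
    return sorted(hits)

# Hypothetical SBOM fragment for demonstration.
sbom = {
    "components": [
        {"name": "ffmpeg", "version": "6.1.1"},
        {"name": "left-pad", "version": "1.3.0"},
        {"name": "webkit", "version": "617.1"},
    ]
}
print(flag_components(sbom))  # → ['ffmpeg@6.1.1', 'webkit@617.1']
```

Running this across SBOMs exported from production, staging, and dev gives a first-pass exposure inventory before Glasswing disclosures start landing.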

    Sources: AI just found thousands of 0days in your OS and browser — you have 6 months before attackers can too · 5 actively exploited zero-days, 3 supply chain compromises, and a 3-day CISA deadline — your patch queue just exploded · Self-improving AI agents and local model deployment are creating blind spots your SOC can't see yet

  02

    OpenClaw Is 'The Vulnerability': 135K Exposed Instances + GrafanaGhost Proves AI Features Are Blind Spots

    <h3>OpenClaw: Platform-Level Security Failure</h3><p>OpenClaw — the open-source AI agent platform now owned by OpenAI — has had <strong>six pairing-related authorization bypasses in six weeks</strong>, all rooted in CWE-863 (Incorrect Authorization). The two most critical:</p><ul><li><strong>CVE-2026-33579 (CVSS 9.4)</strong>: Scope validation bypass lets a low-privileged attacker approve device requests with elevated scopes</li><li><strong>CVSS 9.9 (unnamed)</strong>: Token rotation race condition enables full admin access and remote code execution</li></ul><p>The numbers that should alarm you: <strong>63% of 135,000+ publicly exposed instances run without authentication</strong>. As SANS editor Ullrich stated: <em>"There are no vulnerabilities in OpenClaw. OpenClaw, as a concept, is the vulnerability."</em></p><p>This isn't a patching problem — it's a <strong>systemic design failure</strong>. Six auth bypasses in six weeks in the same subsystem indicates architectural debt, not implementation bugs. Anthropic's simultaneous cutoff of third-party Claude access through OpenClaw adds complexity: teams may have broken security automations they haven't noticed yet, and developers may migrate to unvetted alternatives.</p><hr><h3>GrafanaGhost: AI Features as Invisible Exfiltration Channels</h3><p>Noma Security disclosed <strong>GrafanaGhost</strong> — a prompt injection chain that bypasses Grafana's domain validation and AI guardrails to exfiltrate private data via outbound image requests. The critical gap: <strong>this looks like normal AI behavior to your SIEM and DLP tools</strong>. No malware, no credential theft, no anomalous user behavior — just an AI feature coerced into serving the attacker. 
Grafana Labs validated the report and shipped a fix.</p><blockquote>GrafanaGhost is the proof-of-concept that every enterprise tool with bolted-on AI features is a potential exfiltration channel your detection stack wasn't designed to see.</blockquote><h3>The Pattern: AI Platform Trust Failures</h3><p>Cross-source analysis reveals a convergent pattern: OpenClaw's auth failures, GrafanaGhost's invisible exfiltration, and the Claude Code prompt leak (reported earlier this week) all target the <strong>orchestration and integration layer</strong> of AI tools — not the models themselves. The competitive moat and the attack surface are in the same place: the scaffolding. Multiple sources confirm OpenClaw users are rapidly switching to alternatives like Gemma 4, creating additional supply chain churn with its own integrity risks.</p><p>Meanwhile, AI agents are gaining access to increasingly sensitive infrastructure. New tools like InsForge give AI coding agents <strong>autonomous access to auth configurations, database permissions, and storage policies</strong> — with no human-in-the-loop. X/Twitter released tooling enabling AI agents to autonomously post, DM, and search at scale. Microsoft shipped <strong>MAI-Voice-1</strong> for identity-consistent voice generation. The attack surface is expanding across every dimension simultaneously.</p>

    Action items

    • Audit all OpenClaw deployments immediately: verify version 2026.3.28+, confirm authentication is enabled, enumerate internet-exposed instances. Remove or isolate any unauthenticated instances today.
    • Patch Grafana if AI features are enabled. If AI features aren't business-critical, disable them until your team assesses prompt injection exposure.
    • Inventory all enterprise tools with AI/LLM features and assess each for prompt injection vectors and unrestricted outbound request capabilities this sprint.
    • Verify any Claude-dependent automations still function after Anthropic's third-party access cutoff. Migrate to direct API key auth.
    • Create detection rules for AI subsystem outbound requests to untrusted domains, unusual image loads, and encoded data in URL parameters.
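The last action item can start from a simple heuristic: flag outbound URLs whose query parameters carry long, base64-looking, high-entropy values, the pattern GrafanaGhost-style exfiltration relies on. A minimal sketch; the regex and the 4.0-bit entropy threshold are illustrative starting points to tune against your own traffic, not production rules:

```python
import math
import re
from urllib.parse import urlsplit, parse_qsl

# Long runs of base64/url-safe characters, optionally padded.
BASE64ISH = re.compile(r"^[A-Za-z0-9+/_-]{24,}={0,2}$")

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character; encoded payloads score high."""
    freq = {c: s.count(c) for c in set(s)}
    return -sum((n / len(s)) * math.log2(n / len(s)) for n in freq.values())

def suspicious_params(url: str, min_entropy: float = 4.0) -> list[str]:
    """Return names of query parameters that look like encoded payloads."""
    hits = []
    for key, value in parse_qsl(urlsplit(url).query):
        if BASE64ISH.match(value) and shannon_entropy(value) >= min_entropy:
            hits.append(key)
    return hits

# A base64 blob smuggled in an image-request parameter trips the check;
# ordinary dashboard parameters do not.
print(suspicious_params("https://grafana.example.com/render?img=QWxhZGRpbjpvcGVuIHNlc2FtZQ=="))
print(suspicious_params("https://grafana.example.com/d/abc?page=2&sort=asc"))  # → []
```

In practice this logic would run in your egress proxy or SIEM pipeline, scoped to requests originating from AI subsystems and destined for untrusted domains.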

    Sources: 5 actively exploited zero-days, 3 supply chain compromises, and a 3-day CISA deadline — your patch queue just exploded · CVE-2026-35616 is being exploited NOW — your FortiClient EMS has only a hotfix, not a patch · Your dev team's AI agents now have bash, browser, and cloud access — here's your new attack surface · Your AI agent attack surface just exploded: X's action layer, OpenClaw's memory files, and autonomous tax bots in production · AI coding agents are getting autonomous access to your auth, database, and storage — is anyone threat-modeling this?

  03

    AI-Generated Code Is Dismantling Your SDLC Security Model — And the Data Proves It

    <h3>The Numbers Are In</h3><p>Across eight independent sources this cycle, a coherent and alarming picture emerges: AI-generated code is being shipped to production at scale, and the security controls designed for human-speed development are breaking down.</p><table><thead><tr><th>Metric</th><th>Value</th><th>Source</th></tr></thead><tbody><tr><td>Bug increase from AI tools (controlled study)</td><td><strong>+41%</strong></td><td>Trending research paper</td></tr><tr><td>Speed gain</td><td>+26%</td><td>Same study</td></tr><tr><td>PRs merged without human review (Vercel)</td><td><strong>58%</strong></td><td>Vercel production data</td></tr><tr><td>OpenAI 'Dark Factory' code — human-written</td><td><strong>0 lines</strong></td><td>OpenAI Frontier team</td></tr><tr><td>AI agent PRs per month (GitHub)</td><td><strong>17 million</strong></td><td>GitHub data</td></tr><tr><td>GitHub availability under load</td><td><strong>90%</strong></td><td>GitHub confirmation</td></tr><tr><td>Developer AI tool adoption</td><td>84%</td><td>Industry survey</td></tr><tr><td>Orgs with AI code governance</td><td><strong>&lt;3%</strong></td><td>CodeReview report</td></tr></tbody></table><h3>The 'Dark Factory' Model</h3><p>OpenAI's Frontier team publicly documented a five-month experiment shipping 1 million lines of code with <strong>zero human-written code and zero pre-merge human review</strong>. PR reviewer agents are explicitly configured to <strong>bias toward merging</strong> and ignore anything below P2 severity. Agents write product code, tests, CI/CD configs, Grafana dashboards, and even respond to operational pages. At 2 million weekly active Codex users growing 25% week-over-week, this methodology will propagate across your vendor ecosystem.</p><p>The implications go beyond code quality. OpenAI's chairman stated <strong>"software dependencies are going away"</strong> — agents internalize libraries rather than importing them. 
This renders SCA tools (Snyk, Dependabot) blind: no package name, no version, no known CVE, no SBOM entry. Years of security hardening in popular libraries get reset to zero when an agent rewrites them.</p><blockquote>The security assumption that humans review code before it ships is being deliberately eliminated by the company defining AI tooling for the industry. If your AppSec program can't function without that control, you have months, not years, to fix it.</blockquote><h3>GitHub as Critical Infrastructure Under Strain</h3><p>GitHub's availability has dropped to 90% — approximately 73 hours of potential downtime per month — as AI agents overwhelm databases, Redis clusters, and failover mechanisms. Claude Code alone grew from 100K to 2.5M weekly public commits in six months. GitHub's own COO acknowledged the API <strong>"hasn't been designed with agents in mind."</strong> If your CI/CD, secrets management, or deployment pipelines depend on GitHub, you have an unmitigated availability risk that compounds the code quality problem.</p><hr><h3>The Governance Gap</h3><p>84% developer adoption against &lt;3% governance maturity is a <strong>systemic control failure</strong>. Kent Beck and Martin Fowler, two of the most influential voices in software engineering, both warn that AI is driving an industry-wide <strong>speed-over-quality optimization</strong>, with companies measuring PR frequency as a performance metric — actively incentivizing volume over security. Beck identifies 're-soloing' — developers replacing human code review with AI agent interaction — as a dangerous trend that displaces the cheapest and most effective security control in your SDLC.</p><p>Meanwhile, 96% of developers themselves distrust AI output, yet the tooling ecosystem is designed to minimize friction. The result: vulnerability introduction rates scaling with commit velocity while security gates remain calibrated for human-speed development.</p>

    Action items

    • Audit your SDLC for human-review-dependent controls. Map every security gate that assumes a human reads code before merge and develop automated compensating controls for agent-authored code this quarter.
    • Stress-test SAST/DAST/SCA tools against AI-generated code patterns. Measure detection rates against OWASP Top 10 categories in AI-produced code this sprint.
    • Enforce mandatory human review for security-critical code paths (auth, crypto, payment, PII handling) regardless of AI review status. Build into branch protection rules.
    • Build GitHub outage resilience: mirror critical repos to a secondary Git provider, cache CI/CD artifacts locally, build fallback trigger mechanisms for security automation.
    • Publish an AI coding agent access policy defining approved tools, credential scoping, least-privilege requirements, and audit logging mandates before adoption outpaces governance.
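The mandatory-review item above reduces to a small pre-merge check. A sketch in Python; the `CRITICAL_GLOBS` paths are hypothetical and should mirror your actual repo layout, and in practice this decision would be wired into a branch-protection rule or CI gate rather than run standalone:

```python
import fnmatch

# Security-critical path patterns (illustrative; tune to your repository).
# Note fnmatch's '*' crosses '/' boundaries, so these match nested files too.
CRITICAL_GLOBS = [
    "src/auth/*",
    "src/crypto/*",
    "services/payments/*",
    "*/pii/*",
]

def requires_human_review(changed_files: list[str]) -> bool:
    """True if any changed file touches a security-critical path.

    Applied regardless of AI reviewer status: an agent approval never
    substitutes for a human on these paths.
    """
    return any(
        fnmatch.fnmatch(path, glob)
        for path in changed_files
        for glob in CRITICAL_GLOBS
    )

print(requires_human_review(["src/auth/token.py", "README.md"]))  # → True
print(requires_human_review(["docs/intro.md"]))                   # → False
```

The design point is that the gate keys on *what changed*, not *who wrote it*, so it survives the shift to agent-authored PRs without slowing down low-risk changes.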

    Sources: Zero Human Code Review Before Merge: OpenAI's 'Dark Factory' Model Is Rewriting Your SDLC Threat Model · AI coding tools shipping 41% more bugs into your codebase — and your devs love the speed boost · 58% of PRs merging without human review — your SDLC security controls need an AI-era rethink · 17M AI agent pull requests are flooding your supply chain — and GitHub can't keep the lights on · GitHub at 90% uptime under AI agent flood — your CI/CD pipeline resilience needs a stress test now · Your AppSec review pipeline can't keep up with AI-generated code — and 96% of devs already know it

◆ QUICK HITS

  • Update: FortiClient EMS CVE-2026-35616 — CISA 3-day deadline expires April 9, ~2,000 exposed instances confirmed, exploitation deliberately timed to holiday weekend staffing gap. Second critical EMS zero-day in rapid succession suggests systemic code-level vulnerability pattern warranting vendor risk reassessment.

    CVE-2026-35616 is being exploited NOW — your FortiClient EMS has only a hotfix, not a patch

  • Update: Trivy supply chain compromise now attributed with high confidence as initial access vector for the 340 GB European Commission breach, affecting 42 internal clients and 29+ Union entities. CERT-EU didn't receive first alert until 6 days after compromise.

    5 actively exploited zero-days, 3 supply chain compromises, and a 3-day CISA deadline — your patch queue just exploded

  • Update: Iran's 8 p.m. ET deadline to reopen the Strait of Hormuz expires tonight — elevate the SOC to heightened monitoring for APT33/34/35/MuddyWater TTPs. Historical pattern: every significant US-Iran escalation has produced retaliatory cyber ops within 24-72 hours.

    Iran deadline tonight means your SOC should be on heightened alert for retaliatory cyber ops

  • Chrome CVE-2026-5281 confirmed exploited in the wild — Google patched 21 vulns in Chrome 146.0.7680.177/178 on March 31. Any endpoint that hasn't restarted Chrome in the past 7 days is carrying unnecessary exposure. Force-push the update and verify ≥98% fleet coverage within 48 hours.

    Two actively exploited zero-days demand your patches this week — Fortinet EMS and Chrome CVE-2026-5281

  • React2Shell (CVE-2025-55182) — UAT-10608 has compromised 760+ systems and exfiltrated 10,000+ files via pre-auth RCE in Next.js React Server Components. Inventory all Next.js deployments and deploy WAF rules for deserialization patterns.

    5 actively exploited zero-days, 3 supply chain compromises, and a 3-day CISA deadline — your patch queue just exploded

  • Keycloak MFA bypass (CVE-2026-3429) disclosed with no patch from Red Hat. Audit Keycloak auth logs for anomalous MFA-skipped sessions and assess compensating controls while monitoring for emergency patch release.

    5 actively exploited zero-days, 3 supply chain compromises, and a 3-day CISA deadline — your patch queue just exploded

  • BlueHammer: publicly disclosed Windows 0-day privilege escalation with no vendor patch. Tune EDR for privilege escalation detection patterns and increase monitoring on domain controllers and critical Windows servers.

    5 actively exploited zero-days, 3 supply chain compromises, and a 3-day CISA deadline — your patch queue just exploded

  • DPRK actors invested $1M+ real capital and 6 months of in-person relationship building before executing a $270M exploit against Drift Protocol — highest-patience social engineering campaign documented from a state actor. Review counterparty vetting for shell entity detection.

    DPRK actors deposited $1M of real capital to social-engineer a $270M DeFi heist — your vendor vetting process ready for this?

  • FBI IC3 reports $20B+ cybercrime losses in 2025 (26% YoY increase), with BEC alone at $3.05B. SANS notes all top 5 attack techniques now carry an AI dimension. Fresh data for board risk quantification.

    5 actively exploited zero-days, 3 supply chain compromises, and a 3-day CISA deadline — your patch queue just exploded

  • D-Trust recalled all TLS certificates issued March 2025–April 2026 due to linting compliance failure. Check if any infrastructure or vendor stack uses D-Trust certificates and implement ACME-based automated certificate lifecycle management.

    5 actively exploited zero-days, 3 supply chain compromises, and a 3-day CISA deadline — your patch queue just exploded

  • Microsoft MAI-Voice-1 ships production-grade identity-consistent voice cloning via Foundry. Update vishing defenses: any voice-based authorization (help desk resets, wire transfer approvals) now requires out-of-band verification.

    Your AI agent attack surface just exploded: X's action layer, OpenClaw's memory files, and autonomous tax bots in production

  • AI training data poisoning: Chinese labelers deploying coordinated anti-distillation tools that inject audit-evading corruptions into ML pipelines. Standard QA spot-checks will not catch this. Implement statistical anomaly detection on labeled datasets before ingestion.

    Your AI training pipeline has a new insider threat: coordinated data poisoning that passes every audit
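A minimal version of that statistical check is a label-distribution drift test between a trusted baseline and each incoming labeled batch. One important hedge: poisoning that deliberately preserves marginal label frequencies, as coordinated anti-distillation attacks may, will pass this test, so treat it as one layer alongside embedding-space outlier detection and cross-annotator agreement. The 0.15 threshold is illustrative:

```python
from collections import Counter

def label_drift(baseline: list[str], batch: list[str]) -> float:
    """Total variation distance between two label distributions.

    0.0 means identical label frequencies; 1.0 means disjoint.
    """
    p, q = Counter(baseline), Counter(batch)
    labels = set(p) | set(q)
    return 0.5 * sum(
        abs(p[l] / len(baseline) - q[l] / len(batch)) for l in labels
    )

DRIFT_THRESHOLD = 0.15  # illustrative; calibrate on historical clean batches

# Hypothetical batches: the incoming one is suspiciously skewed.
baseline = ["cat"] * 50 + ["dog"] * 50
batch = ["cat"] * 20 + ["dog"] * 80

drift = label_drift(baseline, batch)
print(drift > DRIFT_THRESHOLD)  # → True: hold the batch for manual review
```

Run before ingestion, this turns "spot-check a sample" into a quantitative gate on every labeled delivery.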

  • AI compliance surface expanding: 90+ state bills on companion chatbots alone, GSA requiring anti-bias AI vendor compliance for federal procurement, Colorado SB 24-205 effective June 30 with replacement framework uncertain.

    Your AI compliance surface just exploded: 90+ state bills, new GSA vendor rules, and a June deadline you can't miss

BOTTOM LINE

AI just discovered thousands of zero-days in every major OS and browser, and open-weight models will replicate this capability within 6 months — while simultaneously, AI-generated code is shipping 41% more bugs with 58% of PRs merging without human review, your security scanner (Trivy) was the EU Commission's breach vector, and 63% of 135,000 OpenClaw instances run without authentication. The common thread: the tools you trust to write code, find bugs, and manage AI agents are all either broken, compromised, or being deliberately stripped of the human oversight that was your last line of defense.

Frequently asked

How soon will commodity attackers have AI-powered zero-day discovery?
Alex Stamos estimates open-weight models will replicate Claude Mythos-class vulnerability discovery within roughly 6 months. Once that happens, ransomware gangs, hacktivists, and nation-states running commodity hardware can point automated 0day factories at any codebase, shifting attacker economics from expensive skilled craft to cheap automation.
What is Project Glasswing and how will it affect my patch queue?
Project Glasswing is a 40+ company coalition (including Apple, Google, Microsoft, Cisco) with $104M in funding, coordinating defensive patching of infrastructure and open-source dependencies before proliferation. Expect waves of coordinated CVE disclosures over the coming weeks — potentially 50+ critical CVEs in a single week — which will break change-management pipelines not pre-authorized for emergency patch windows.
Why is OpenClaw being called 'the vulnerability' rather than just having vulnerabilities?
OpenClaw has shipped six pairing-related authorization bypasses in six weeks, all rooted in CWE-863, indicating systemic architectural debt rather than isolated bugs. Combined with 63% of 135,000+ publicly exposed instances running with no authentication at all, and a CVSS 9.9 token rotation race condition enabling admin RCE, SANS concluded the platform's design itself is the flaw.
Why won't my SIEM or DLP catch GrafanaGhost-style attacks?
GrafanaGhost uses prompt injection to coerce Grafana's AI features into exfiltrating data via outbound image requests — behavior that looks identical to legitimate AI activity. There's no malware, credential theft, or anomalous user behavior to trigger existing rules. Detection requires new logic focused on AI subsystem outbound requests to untrusted domains, unusual image loads, and encoded data in URL parameters.
What's the risk of AI-generated code bypassing human review in my SDLC?
A controlled study shows AI tools produce 41% more bugs while 58% of PRs at Vercel now merge without human review, and OpenAI's Frontier team has shipped 1M lines with zero human-written code and reviewer agents explicitly biased toward merging. With 84% developer adoption but under 3% of orgs having AI code governance, vulnerability introduction is scaling with commit velocity while security gates remain calibrated for human speed.
