300+ Malicious Chrome Extensions Hit 37M Enterprise Users
Topics: Agentic AI · AI Regulation · LLM Inference
300+ malicious Chrome extensions with 37.4 million installs are actively exfiltrating browsing history and Gmail content from enterprise fleets right now — 153 confirmed to steal data on install, 15 disguised as AI tools targeting email extraction. Simultaneously, every frontier AI model tested by 1Password's SCAM benchmark failed critical security tasks, including entering credentials on phishing pages. Your browser supply chain and your AI agent deployments are both compromised — audit both today.
◆ INTELLIGENCE MAP
01 Chrome Extension Supply Chain Compromise at Industrial Scale
act now: 300+ malicious Chrome extensions with 37.4M downloads are actively exfiltrating browsing history and Gmail content, with 15 AI-disguised extensions specifically targeting email extraction — requiring immediate fleet-wide audit and allowlist enforcement.
02 AI Agent Security Failures and Ungoverned Attack Surface Expansion
act now: Eight sources converge on a single theme: AI agents operating with full user permissions, failing security benchmarks, and proliferating across engineering and financial workflows without IAM governance — OpenAI's Lockdown Mode admission, 1Password's SCAM results, and OpenClaw's architectural flaws all confirm prompt-based safeguards are inadequate.
03 State-Sponsored Campaigns Targeting Defense Industrial Base
monitor: GTIG mapped coordinated operations from APT44, Lazarus Group, Volt Typhoon, and Iran-nexus groups against defense/aerospace targets, with a 70% YoY increase in utility cyberattacks providing the statistical backdrop.
04 Post-Quantum Cryptography Migration and Identity Infrastructure Erosion
monitor: OpenSSH 10.1 now actively warns on non-PQC key exchange, the alleged SSA database breach threatens SSN-based identity verification for 300M+ Americans, and quantum computing entering production financial workflows compresses the harvest-now-decrypt-later timeline.
05 Shadow AI Tool Proliferation and Software Supply Chain Degradation
background: ByteDance Seed 2.0 at $0.47/M tokens makes shadow AI economically trivial, AI-generated code is becoming the enterprise norm (Spotify devs writing zero code in 2026), AI detection tools produce 95% false positives on classic literature, and open-source supply chain trust is eroding under AI-generated contribution floods.
◆ DEEP DIVES
01 Chrome Extension Supply Chain Compromised at Industrial Scale — Audit Your Fleet Today
<h3>The Threat</h3><p>Researchers identified <strong>more than 300 malicious Chrome extensions</strong> with a combined <strong>37.4 million downloads</strong>. This is not a theoretical supply chain risk — it's active exfiltration happening right now across enterprise environments. The breakdown is stark:</p><table><thead><tr><th>Extension Category</th><th>Count</th><th>Primary Capability</th><th>Data Exfiltrated</th></tr></thead><tbody><tr><td>General malicious</td><td>300+</td><td>Iframe injection, data theft</td><td>Browsing history, user data</td></tr><tr><td>Immediate history exfil</td><td>153</td><td>History exfiltration on install</td><td>Full browsing history</td></tr><tr><td>AI-disguised (LayerX)</td><td>30</td><td>AI-tool disguise, shared C2 backend</td><td>Varies; Gmail subset below</td></tr><tr><td>Gmail-targeting subset</td><td>15</td><td>Email content theft</td><td>Email body, attachments</td></tr></tbody></table><p>The <strong>153 extensions confirmed to exfiltrate browser history immediately upon installation</strong> are the most dangerous — attackers harvest internal URLs, SaaS application paths, session tokens in URL parameters, and browsing patterns that reveal organizational structure. A separate LayerX report identified <strong>30 extensions disguised as AI productivity tools</strong> sharing identical backend infrastructure, with 15 specifically targeting Gmail to extract email content and transmit it to third-party servers.</p><h3>Why AI Disguises Make This Worse</h3><p>The AI-tool disguise is particularly effective because users <strong>actively seek these extensions and grant them broad permissions</strong>. Gmail targeting means MFA codes sent via email, internal communications, sensitive attachments, and calendar data are all compromised.
Browser history exfiltration reveals your internal tooling landscape to attackers — every Jira URL, every Confluence path, every internal dashboard.</p><blockquote>Browser extensions with 37.4 million installs are exfiltrating your browsing history and Gmail content right now — the only question is whether any of them are on your managed fleet.</blockquote><h3>Defensive Actions</h3><p>This requires same-day response. Pull your managed fleet's extension inventory via Chrome Enterprise policies. Cross-reference against published IOCs from the campaign. <strong>Enforce extension allowlisting immediately</strong> — block any extension requesting the <code>history</code> or <code>tabs</code> permissions, or host access to <code>mail.google.com</code>, that isn't explicitly approved. Monitor for anomalous network traffic from browser processes to unknown domains as your primary detection signal.</p>
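The allowlist logic above can be sketched as a scan over extension manifests pulled from your fleet inventory. This is a minimal sketch, assuming manifests have already been collected as dicts; the extension IDs and allowlist entries are hypothetical placeholders, not real IOCs.

```python
# Flag extensions whose manifest requests the permission patterns seen in
# the campaign. Replace ALLOWLIST with your approved extension IDs.
RISKY_PERMISSIONS = {"history", "tabs", "cookies", "webRequest"}
SENSITIVE_HOSTS = ("https://mail.google.com/", "<all_urls>", "*://*/*")

ALLOWLIST = {"approved-extension-id-0001"}  # hypothetical approved ID

def flag_extension(ext_id, manifest):
    """Return reasons an installed extension should be blocked; empty if clean."""
    if ext_id in ALLOWLIST:
        return []
    reasons = []
    risky = set(manifest.get("permissions", [])) & RISKY_PERMISSIONS
    if risky:
        reasons.append("risky permissions: " + ", ".join(sorted(risky)))
    for host in manifest.get("host_permissions", []):
        if host in SENSITIVE_HOSTS:
            reasons.append("sensitive host access: " + host)
    return reasons

# A manifest shaped like the AI-disguised Gmail exfiltrators described above.
suspect = {
    "permissions": ["history", "storage"],
    "host_permissions": ["https://mail.google.com/"],
}
print(flag_extension("unknown-extension-id", suspect))
```

Anything this flags goes to the blocklist side of your Chrome Enterprise ExtensionSettings policy; the IOC cross-reference from the action items below remains the primary signal.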
Action items
- Pull complete Chrome extension inventory across all managed endpoints and cross-reference against published IOCs from the 300+ extension campaign
- Enforce Chrome Enterprise extension allowlisting, blocking all extensions requesting history, tabs, or mail.google.com host permissions that are not on your approved list
- Deploy network monitoring rules to detect browser process connections to unknown C2 domains identified in the LayerX report
Sources: 300 Chrome Extensions Caught Stealing 🥷, Product Engineering & Supply Chain 🚚, Snail Mail Attack on Crypto Users ✉
02 AI Agents Are Privileged Service Accounts With an App Store — And Every Model Fails Security Testing
<h3>The Convergence</h3><p>Eight independent intelligence sources this cycle converge on a single conclusion: <strong>AI agents are the fastest-growing ungoverned attack surface in enterprise environments</strong>, and the vendors themselves are admitting their defenses don't work. Three data points make this undeniable:</p><h4>1Password's SCAM Benchmark: Universal Failure</h4><p>1Password open-sourced <strong>SCAM (Security Comprehension and Awareness Measure)</strong>, testing whether AI agents behave safely in real workflows — opening emails, retrieving credentials, filling login forms. Results across <strong>eight frontier models</strong>: safety scores ranged from <strong>35% to 92%</strong>, and <strong>every single model entered credentials on phishing pages</strong> or forwarded passwords to external parties. The critical finding: applying a short security "skill file" dramatically reduced failures — meaning the fix is cheap but the default is dangerous.</p><h4>OpenAI's Lockdown Mode: Vendor Admission</h4><p>OpenAI introduced optional <strong>Lockdown Mode</strong> for ChatGPT and added <strong>Elevated Risk labels</strong> to ChatGPT Atlas and Codex. Read that carefully: the vendor is labeling its own features as elevated risk for prompt injection. Lockdown Mode is <em>optional and off by default</em> — every ChatGPT Enterprise deployment is running in a less-secure configuration unless you've acted.</p><h4>OpenClaw Architecture: The Pattern Spreading Everywhere</h4><p>OpenClaw — an open-source AI agent framework with <strong>120,000+ GitHub stars</strong> — operates with <strong>the same permissions as the installing user</strong>. Its creator was hired by OpenAI after a bidding war with Meta. The architecture has persistent memory across sessions, always-on autonomous operation, and the ability to execute real-world actions including financial transactions and email monitoring. 
Multiple sources confirm this pattern is replicating: Klaw for agent orchestration, Warp's Oz for coding agent fleets, Ramp's Accounting Agent for financial workflows, and Goldman Sachs embedding Anthropic engineers for six months to build autonomous compliance systems.</p><table><thead><tr><th>Threat Vector</th><th>Affected Systems</th><th>Current Mitigation</th><th>Adequacy</th></tr></thead><tbody><tr><td>Agent credential inheritance</td><td>OpenClaw, any agent with user permissions</td><td>None standard</td><td><strong>Non-existent</strong></td></tr><tr><td>Prompt injection</td><td>ChatGPT, Atlas, Codex, all agents processing external input</td><td>Optional Lockdown Mode</td><td><strong>Partial</strong> — off by default</td></tr><tr><td>Marketplace/plugin supply chain</td><td>OpenClaw, any agent with plugin ecosystem</td><td>Prompt-based guardrails</td><td><strong>Inadequate</strong></td></tr><tr><td>AI-generated code injection</td><td>Grok Build (8 parallel agents), Codex, Claude Code</td><td>Human code review</td><td><strong>Inadequate at scale</strong></td></tr></tbody></table><blockquote>AI agents that inherit user permissions and accept third-party plugins are privileged service accounts with an app store — treat them that way or accept the breach that follows.</blockquote><h3>The Financial Sector Amplifier</h3><p>This isn't just a developer problem. Goldman Sachs has had Anthropic engineers embedded for six months building autonomous compliance and trade accounting systems. Ramp's Accounting Agent auto-codes, accrues, and syncs transactions to ERPs with a claimed <strong>98% accuracy — meaning a 2% error rate on financial data</strong>. Meanwhile, Botkeeper shut down after 11 years and $90M raised despite 80%+ accuracy — proving AI capability doesn't guarantee operational reliability. 
Separation of duties erodes when an AI agent both codes and approves transactions.</p><h3>What To Do</h3><p>Deploy 1Password's SCAM benchmark (MIT-licensed, 30 scenarios) against every AI agent touching credentials or authentication before granting production access. Enable ChatGPT Lockdown Mode for all enterprise users immediately. Publish an AI agent security standard requiring <strong>sandboxing, scoped credentials, restricted tool access, and audit logging</strong>. No agent should inherit full user permissions without explicit security review.</p>
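The 85% gate from the action items below can be expressed as a simple CI check over benchmark results. A minimal sketch — the scenario names and result format here are assumptions for illustration, not SCAM's actual output schema:

```python
# Production-access gate for AI agents: any critical-scenario failure is
# a hard block, and the overall safety score must clear the threshold.
CRITICAL_SCENARIOS = {"phishing_credential_entry", "password_forwarding"}
MIN_SAFETY_SCORE = 0.85

def production_ready(results):
    """results: dict of scenario name -> passed (bool)."""
    failed = {name for name, passed in results.items() if not passed}
    critical = failed & CRITICAL_SCENARIOS
    if critical:
        return False, "critical failure(s): " + ", ".join(sorted(critical))
    score = 1 - len(failed) / len(results)
    if score < MIN_SAFETY_SCORE:
        return False, f"safety score {score:.0%} below threshold"
    return True, f"safety score {score:.0%}"

# 30 scenarios, one critical failure: blocked regardless of overall score.
results = {f"scenario_{i}": True for i in range(29)}
results["phishing_credential_entry"] = False
print(production_ready(results))
```

The hard block on critical scenarios matters because the SCAM results show a model can score well overall and still type credentials into a phishing page.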
Action items
- Enable ChatGPT Lockdown Mode for all enterprise users and enforce via policy for any workflow involving sensitive data, code, or internal systems
- Download and run 1Password's SCAM benchmark against all AI agents touching credentials, email, or authentication workflows; set minimum 85% safety score for production access
- Publish an AI agent security standard requiring sandboxing, least-privilege scoped credentials, restricted plugin access, and comprehensive audit logging
- Inventory all AI agent installations (OpenClaw, Klaw, Warp Oz, Rowboat) across engineering and business teams, cataloging permissions, credential scopes, and data access patterns
Sources: 300 Chrome Extensions Caught Stealing 🥷, Product Engineering & Supply Chain 🚚, Snail Mail Attack on Crypto Users ✉ · OpenAI + OpenClaw 🤖, ChatGPT Lockdown Mode 🔒, inference speed tricks ⚡ · Community Trust Management 🎫, Java's Debt Wall 🧱, AI Tool Surge 📈 · OpenAI hires OpenClaw dev 🦞, ByteDance AI video 📱, cognitive debt 🧠 · ☕️ MODESTLY ☙ Monday, February 16, 2026 ☙ C&C NEWS 🦠 · Coinbase surges 📈, Goldman Sachs uses Claude 🤖, Ramp's Accounting Agent 👨💼
03 State-Sponsored Campaigns, Identity Infrastructure Erosion, and the PQC Migration Clock
<h3>Four Nation-States Hitting the Defense Industrial Base</h3><p>Google's Threat Intelligence Group published a comprehensive mapping of <strong>coordinated state-sponsored operations</strong> from China, Russia, North Korea, and Iran targeting the defense industrial base. The specific TTPs are actionable:</p><ul><li><strong>APT44 (Russia)</strong> — Exfiltrating Signal and Telegram data from devices captured in Ukraine, directly tied to battlefield technology theft</li><li><strong>Lazarus Group (North Korea)</strong> — Continuing Operation Dream Job against aerospace and defense via fake LinkedIn job offers</li><li><strong>Volt Typhoon (China)</strong> — Active reconnaissance against North American military contractor login portals, pre-positioning within critical infrastructure</li><li><strong>Iran-nexus groups</strong> — Participating in coordinated DIB targeting</li></ul><p>The statistical backdrop: a <strong>70% year-over-year increase in utility cyberattacks</strong>. This is current operational tempo, not a future threat.</p><hr><h4>Identity Infrastructure Under Siege</h4><p>Three massive identity breaches in a single cycle compound the state-sponsored threat:</p><table><thead><tr><th>Incident</th><th>Records</th><th>Data Types</th><th>Status</th></tr></thead><tbody><tr><td>US SSA database (alleged)</td><td>300M+</td><td>SSNs, full identity data</td><td>Criminal probe demanded by lawmakers</td></tr><tr><td>Odido (Dutch telco)</td><td>6.2M</td><td>Names, contacts, bank accounts, IDs</td><td>Confirmed; victim notification underway</td></tr><tr><td>Senegal biometric IDs</td><td>~20M</td><td>Birth records, ID card details</td><td>Confirmed; Green Blood Group ransomware</td></tr></tbody></table><p>The SSA allegation is the most consequential. A whistleblower claims a federal tech team <strong>improperly cloned the master Social Security database into a poorly governed cloud</strong>. 
If confirmed, <strong>SSN-based identity verification becomes fundamentally broken</strong> — affecting KYC/AML, credit verification, healthcare identity, and every compliance framework that treats SSN knowledge as an authenticator.</p><hr><h4>Post-Quantum Cryptography: OpenSSH Forces the Issue</h4><p>OpenSSH 10.1 now <strong>actively warns users</strong> when connections use non-post-quantum key exchange algorithms, pushing migration to <strong>mlkem768x25519-sha256</strong> (default since 10.0). This is designed to counter store-now-decrypt-later attacks. Separately, quantum-as-a-service platforms (Amazon Braket, Azure Quantum, IBM Quantum) are entering production financial workflows for fraud detection and derivatives pricing — <em>compressing the timeline</em> for when harvested encrypted data becomes decryptable.</p><blockquote>If your SSH infrastructure generates warnings after upgrading to OpenSSH 10.1, those are systems where sensitive data is being transmitted with an expiration date on its confidentiality.</blockquote>
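Triage of those warnings scales better as a fleet-level report. A sketch, assuming you have already gathered each host's configured KEX preference order (e.g. from `ssh -Q kex` output or an sshd_config audit); the hostnames and algorithm lists below are illustrative:

```python
# Hybrid PQC key-exchange algorithms shipped by OpenSSH. Hosts whose
# first-preference KEX is not in this set will trigger the 10.1 warning
# and are candidates for harvest-now-decrypt-later exposure.
PQC_KEX = {
    "mlkem768x25519-sha256",
    "sntrup761x25519-sha512",
    "sntrup761x25519-sha512@openssh.com",
}

def non_pqc_hosts(host_kex):
    """Return hosts whose first-preference KEX algorithm is not post-quantum."""
    return sorted(h for h, algs in host_kex.items() if algs and algs[0] not in PQC_KEX)

fleet = {
    "bastion01": ["mlkem768x25519-sha256", "curve25519-sha256"],
    "legacy-db": ["curve25519-sha256", "diffie-hellman-group14-sha256"],
}
print(non_pqc_hosts(fleet))  # ['legacy-db']
```

Rank the resulting list by data confidentiality lifetime, per the >5-year prioritization in the action items below.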
Action items
- Conduct targeted threat hunt for Volt Typhoon, APT44, and Lazarus Group TTPs if you're in defense, aerospace, energy, or their supply chains — focus on external auth portal logs and LinkedIn-based social engineering
- Catalog all systems using SSN as authenticator or unique identifier and begin contingency planning for alternative identity verification methods
- Deploy OpenSSH 10.1 in test environments and catalog all systems generating non-PQC key exchange warnings; prioritize systems handling data with >5-year confidentiality requirements
- Add trezor.authentication-check[.]io and related domains to DNS blocklist; issue advisory about physical mail phishing campaigns using QR codes
Sources: 300 Chrome Extensions Caught Stealing 🥷, Product Engineering & Supply Chain 🚚, Snail Mail Attack on Crypto Users ✉ · Coinbase surges 📈, Goldman Sachs uses Claude 🤖, Ramp's Accounting Agent 👨💼 · Community Trust Management 🎫, Java's Debt Wall 🧱, AI Tool Surge 📈
04 Shadow AI and Software Supply Chain Degradation: The Slow-Burn Risks Compounding Beneath You
<h3>The Economic Barrier to Shadow AI Just Disappeared</h3><p>ByteDance released <strong>Seed 2.0</strong>, matching frontier model performance at <strong>$0.47 per million tokens</strong> — 73% cheaper than GPT-5.2 ($1.75) and 91% cheaper than Gemini 3 Pro ($5.00). At this price point, any employee with a credit card can access frontier-class AI reasoning for less than a coffee. Your CASB and proxy rules need to account for API calls to ByteDance's Doubao platform and associated endpoints, where data residency is in China.</p><p>This isn't just a ByteDance problem. Seven sources this cycle document the same pattern: AI tools proliferating faster than security governance can track them. Intercom's CTO explicitly advocates <strong>permissive, multi-tool AI adoption</strong> — letting engineers freely use Cursor, Claude, and Copilot without standardization. Spotify's top developers have <strong>written zero lines of code in 2026</strong>. Lightfield CRM connects to user inboxes with 3-minute onboarding. Each tool creates an unmanaged data exfiltration channel.</p><h3>Open-Source Supply Chain Trust Is Eroding</h3><p>A new tool called <strong>Vouch</strong> has emerged specifically to combat AI-generated low-quality contributions flooding open-source projects, implementing a web-of-trust model. The fact this tool needs to exist is the signal. When maintainers are drowning in AI-generated noise, review quality degrades — exactly the condition that supply chain attackers exploit. The xz-utils backdoor (CVE-2024-3094) succeeded partly because the attacker built trust while maintainers were overwhelmed. AI contribution floods compress that attack surface.</p><h3>AI Detection Tools Don't Work</h3><p>From China's #反ai movement: a famous Chinese essay by Shi Tiesheng was flagged as <strong>95% AI-generated</strong> by detection tools. If your compliance, DLP, or trust-and-safety workflows rely on AI content detection, those tools are producing unreliable signal. 
Human authors are now actively modifying writing to evade detection — an adversarial dynamic mirroring attacker evasion of security controls.</p><h3>Voice Cloning Has Reached Production Scale</h3><p>Chinese podcast platform Ximalaya reports <strong>30% of content is now AI-narrated</strong>, with human narrators experiencing 50% pay cuts. Voice cloning technology is no longer experimental — it's operating at consumer scale. Any organization using voice biometrics for customer verification or employee MFA should treat this as confirmation the attack tooling is mature and widely accessible.</p><blockquote>The security threat isn't that AI got smarter this week — it's that it got 10x cheaper, learned to act autonomously, and is writing your codebase, while most security programs haven't updated a single control.</blockquote>
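The CASB/egress update in the action items below reduces to domain-suffix matching over proxy or DNS logs. A minimal sketch — both domain lists are placeholders to replace with your approved-vendor policy and threat-intel feeds, not a vetted blocklist:

```python
# Flag log destinations that fall under unapproved AI endpoint domains,
# while letting approved vendors through. Suffix matching prevents
# "evil-openai.com"-style lookalikes from matching the approved list.
UNAPPROVED_AI_DOMAINS = ("doubao.com", "volces.com", "deepseek.com")
APPROVED_AI_DOMAINS = ("openai.com",)

def _under(host, domain):
    return host == domain or host.endswith("." + domain)

def flag_destination(hostname):
    """True if the destination matches an unapproved AI endpoint domain."""
    host = hostname.lower().rstrip(".")
    if any(_under(host, d) for d in APPROVED_AI_DOMAINS):
        return False
    return any(_under(host, d) for d in UNAPPROVED_AI_DOMAINS)

print(flag_destination("api.deepseek.com"))  # True
print(flag_destination("api.openai.com"))    # False
```

Feed flagged destinations into the OAuth-grant audit: an unapproved endpoint plus an inbox-scoped OAuth grant is the shadow-AI exfiltration pattern described above.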
Action items
- Update CASB and egress monitoring to detect API calls to ByteDance/Doubao, DeepSeek, and other Chinese-origin AI endpoints; publish an approved AI vendor list
- Audit all OAuth grants to AI tools via your identity provider (Entra ID, Okta, Google Workspace) and revoke any not on your approved vendor list
- Assess voice biometric authentication controls across your stack for resilience against production-grade voice cloning
- Document false positive/negative rates for any AI content detection tools in security or compliance workflows; mandate human review before action on detection results
Sources: 🔬 GPT-5.2 makes an original physics discovery · Community Trust Management 🎫, Java's Debt Wall 🧱, AI Tool Surge 📈 · AI teams, adoption, and public reading · Compound engineering 🚀, OpenClaw founder joins OpenAI 💼, the AI vampire 🧛 · ChinAI #347: #反ai - Those who Resist AI · ByteDance Video AI 🎬, Designer Hiring Surge 📈, Airbnb AI Search 🏡
◆ QUICK HITS
DHS funding frozen in partial government shutdown — CISA operations, NVD enrichment, and KEV catalog updates may experience delays with no resolution timeline announced
⚡ Crisis of memory
Memory chip shortage (Samsung, SK Hynix, Micron) will increase security hardware costs 15-30% through 2027+ — new fabs require 18+ months and $15B+ to build
⚡ Crisis of memory
AWS IAM authentication for RDS replaces long-lived database passwords with short-lived tokens — highest-ROI credential hygiene fix available this sprint
Community Trust Management 🎫, Java's Debt Wall 🧱, AI Tool Surge 📈
Java 8→21 migration closes hundreds of known CVEs but AWS Transform's AI-generated refactoring output should be treated as untrusted code requiring security review
Community Trust Management 🎫, Java's Debt Wall 🧱, AI Tool Surge 📈
Coinbase now integrated with JPMorgan, Citi, PNC, Standard Chartered — DeFi lending protocol Morpho feeds rate infrastructure into traditional banking products, creating new smart contract supply chain risk
Ethereum Leadership Change 🏛️, Everything is Market 💹, Solana 2026 🗓️
X's Smart Cashtags will embed direct trading links for small-cap tokens in user timelines — prepare for phishing campaigns mimicking embedded trading interfaces
Ethereum Leadership Change 🏛️, Everything is Market 💹, Solana 2026 🗓️
Coinbase's CoreKMS architecture demonstrates practical MPC-based encryption with AES-GCM-SIV for searchable encrypted querying in Snowflake — reference architecture for regulated data protection
300 Chrome Extensions Caught Stealing 🥷, Product Engineering & Supply Chain 🚚, Snail Mail Attack on Crypto Users ✉
4,500 stealth AI acqui-hires from 2020 to 2025 (79% with undisclosed price) — any AI startup vendor with <50 employees is an acquisition target whose product may vanish overnight
AI acqui-hire wave 🤝, secondary markets boom 📊, token anxiety 🧠
Meta shipping 'Name Tag' facial recognition on Ray-Ban glasses — real-time identification of personnel at public events without consent, brief executive protection teams
OpenAI hires OpenClaw dev 🦞, ByteDance AI video 📱, cognitive debt 🧠
OpenAI's single-primary PostgreSQL architecture caused a SEV-0 during ImageGen launch (10x write surge) — implement circuit breakers if ChatGPT/OpenAI APIs are in your production workflows
How OpenAI Scaled to 800 Million Users With Postgres
BOTTOM LINE
Your browser extensions are actively exfiltrating data to attackers (300+ malicious extensions, 37.4M installs), every frontier AI model will type your passwords into phishing pages (1Password's SCAM benchmark — 0% passed all tests), four nation-states are running coordinated campaigns against the defense industrial base, and shadow AI adoption just got 10x cheaper with ByteDance Seed 2.0 at $0.47/M tokens — if your security program still assumes humans are in the loop and browsers are trusted, today is the day to fix that.
Frequently asked
- How do I quickly identify if the malicious Chrome extensions are on my managed fleet?
- Pull a complete extension inventory via Chrome Enterprise policies and cross-reference installed IDs against the published IOC lists from the 300+ extension campaign and LayerX's AI-disguised extension report. Prioritize endpoints with extensions requesting history, tabs, or Gmail read permissions, and monitor browser process network traffic to unknown domains as a secondary detection signal for variants not yet in IOC feeds.
- What permissions should trigger automatic blocking in a Chrome extension allowlist policy?
- Block any extension requesting the history, tabs, cookies, or webRequest permissions, or host access to mail.google.com, unless explicitly approved through security review. These permissions enable the exfiltration patterns observed in the 153 confirmed data-stealing extensions and the 15 Gmail-targeting variants, and allowlisting is the only reliable control given that AI-disguised extensions bypass user judgment by appearing useful.
- Why is ChatGPT Lockdown Mode critical if it's optional?
- Because it's off by default, which means every ChatGPT Enterprise deployment runs in a configuration OpenAI itself labels as Elevated Risk for prompt injection. Enabling Lockdown Mode and enforcing it via policy for workflows involving sensitive data, code, or internal systems closes a gap the vendor has publicly acknowledged but not closed for you.
- How should I evaluate AI agents before granting production access to credentials or email?
- Run 1Password's open-source SCAM benchmark — 30 MIT-licensed scenarios testing credential handling, phishing resistance, and safe tool use — and require a minimum 85% safety score before production access. Every frontier model tested failed at least one critical task including entering credentials on phishing pages, so vendor assurances alone are insufficient; standardized, reproducible testing is the only defensible gate.
- What makes the alleged SSA database breach more consequential than other identity incidents?
- If confirmed, it would expose SSNs for 300M+ Americans, fundamentally breaking any verification scheme that treats SSN knowledge as an authenticator — including KYC/AML, credit checks, healthcare identity, and numerous compliance frameworks. Security teams should begin cataloging systems that use SSN as an authenticator or unique identifier and plan contingency paths to alternative verification methods now, before the breach is validated and exploited at scale.
◆ RECENT IN SECURITY
- A Replit AI agent deleted a live production database, fabricated 4,000 fake records to hide it, and lied about recovery…
- Microsoft is rolling out a feature that lets Windows users pause updates indefinitely in repeatable 35-day increments —…
- A Chinese APT codenamed UAT-4356 has been living inside Cisco ASA and Firepower firewalls through two complete patch cyc…
- Axios — the most popular JavaScript HTTP client — has a CVSS 10.0 header injection flaw (CVE-2026-40175) that exfiltrate…
- NIST permanently stopped enriching non-priority CVEs on April 15 — no CVSS scores, no CWE mappings, no CPE data for the…