Three Unauth Critical RCEs Hit Cameras, OTDS, AI CI/CD
Topics: AI Regulation · Agentic AI · AI Safety
Three unauthenticated critical-severity vulnerabilities dropped simultaneously across physical security cameras (Honeywell CVE-2026-1670, CVSS 9.8), enterprise identity infrastructure (OpenText OTDS Java deserialization RCE), and AI-powered CI/CD pipelines (Cline prompt injection → supply chain compromise). All three are exploitable without credentials in default configurations. Patch or isolate Honeywell CCTVs and OpenText OTDS endpoints within 48 hours, and inventory every AI bot with CI/CD write access this week.
◆ INTELLIGENCE MAP
01 Critical Unauthenticated Vulnerabilities Across Three Attack Surfaces
act now: Honeywell CCTV (CVSS 9.8), OpenText OTDS (unauth RCE in default config), and Cline AI bot prompt injection all require no authentication and have public technical details — triage and remediate within 48 hours.
02 AI Agents as Uncontrolled Privileged Insiders
act now: AI coding agents exfiltrate up to 350K tokens per task to external APIs, auto-approve modes bypass human review, prompt injection can hijack CI/CD pipelines, and marketing teams are building unsanctioned multi-API AI workflows — all outside traditional security controls.
03 Insider Threat Surge Accelerated by AI
monitor: Google suffered two insider IP theft cases within weeks of each other (Tensor chip designs to Iran, AI trade secrets), federal trade secret cases jumped 20% YoY with AI as accelerant, and LLM-generated passwords create exploitable patterns — DLP programs designed for file-based exfiltration are missing paste-to-AI channels.
04 China's Vulnerability Intelligence Advantage and Geopolitical Cyber Risk
monitor: China's CNNVD/CNVD databases publish ~1,400 vulnerabilities before CVE assignment, US-Iran military escalation is a leading indicator for APT33/APT35 cyber retaliation, and a partial US government shutdown is degrading CISA and federal threat intelligence services.
05 Vendor Ecosystem Instability and Third-Party Risk
background: Google's $32B Wiz acquisition closes next month reshaping cloud security tooling, $1T in SaaS market cap evaporated in three weeks threatening vendor stability, WorkOS is a shared auth dependency across OpenAI/Anthropic/Cursor/Perplexity, and AI billing chaos is creating shadow feature activation across vendor portfolios.
◆ DEEP DIVES
01 Three Unauthenticated Critical RCEs: Triage Order and Detection Playbook
<h3>Simultaneous Critical Vulnerabilities Across Disparate Attack Surfaces</h3><p>Today's intelligence cycle delivered an unusually dense cluster of <strong>critical-severity vulnerabilities</strong> that share one dangerous trait: all are exploitable without authentication in default configurations. The convergence across physical security, enterprise identity, and AI-powered development toolchains means most organizations are exposed on at least one vector.</p><table><thead><tr><th>Vulnerability</th><th>CVSS / Severity</th><th>Auth Required</th><th>Blast Radius</th><th>Exploit Maturity</th></tr></thead><tbody><tr><td>CVE-2026-1670 (Honeywell CCTV)</td><td>9.8 Critical</td><td>None</td><td>Full camera takeover; physical security compromise</td><td>CISA advisory issued; weaponization imminent</td></tr><tr><td>OpenText OTDS Deserialization</td><td>Critical (no CVE yet)</td><td>None</td><td>All integrated OpenText apps (Content Server, Documentum, InfoArchive)</td><td>Technical writeup public via Assetnote</td></tr><tr><td>Cline AI Bot Prompt Injection</td><td>High (supply chain)</td><td>None (public issue title)</td><td>npm, VS Code Marketplace, OpenVSX ecosystems</td><td>PoC demonstrated</td></tr></tbody></table><h4>Honeywell CCTV: CVE-2026-1670</h4><p>CISA issued an advisory for a <strong>missing authentication flaw</strong> in Honeywell I-HIB2PI-UL and NDAA-compliant PTZ cameras. An unauthenticated attacker can remotely change the password recovery email, achieving full account takeover and unauthorized video feed access. The ATT&CK chain is clean: <strong>T1190 → T1098 → T1125</strong>. 
For organizations under NERC CIP, HIPAA physical security, or PCI DSS Requirement 9, this is compliance-impacting.</p><h4>OpenText OTDS: Unauthenticated Java Deserialization</h4><p>Assetnote disclosed an RCE in OpenText Directory Services exploitable via a <strong>broken HMAC signature verification</strong> where attacker-controlled length fields truncate the signed message to begin at an injected payload. The exploit required building a custom Deflate compressor with tailored Huffman codes — sophisticated engineering, but the attack surface is simple: <strong>unauthenticated, default config, network-reachable</strong>. Since OTDS is the authentication backbone for OpenText's entire ecosystem, compromise cascades to every integrated application.</p><h4>Cline AI Bot: Prompt Injection → Supply Chain Compromise</h4><p>A <strong>prompt-injected GitHub issue title</strong> can drive Cline's Claude-based triage bot to execute arbitrary commands, poison GitHub Actions caches, and steal publishing tokens for VS Code Marketplace, OpenVSX, and npm. This is the threat model most security teams haven't built: <strong>AI agents with CI/CD write access are supply chain attack surfaces</strong>. 
The attacker doesn't need credentials — they need a string an LLM interprets as an instruction.</p><h4>Detection Opportunities</h4><ul><li><strong>Honeywell:</strong> Alert on unauthenticated API calls to camera account management endpoints; monitor for password recovery email change requests</li><li><strong>OpenText OTDS:</strong> Deploy network-level detection for serialized Java objects in HTTP traffic to OTDS endpoints; monitor logs for deserialization exceptions</li><li><strong>CI/CD:</strong> Monitor GitHub Actions cache writes for unexpected entries; alert on publishing token usage from non-standard IPs; require human approval for workflow changes</li></ul><blockquote>Three unauthenticated critical vulnerabilities in one cycle — across cameras, identity systems, and AI bots — means your attack surface is wider than your asset inventory suggests.</blockquote>
Action items
- Identify and patch or network-isolate all Honeywell I-HIB2PI-UL and NDAA-compliant PTZ cameras by end of day Monday
- Audit OpenText OTDS endpoint exposure by Wednesday; block serialized Java object payloads via WAF rules if no vendor patch is available
- Inventory all AI bots with CI/CD pipeline access (Cline, Copilot agents, custom LLM bots) and restrict to read-only permissions by end of week
- Deploy detection rules for all three vectors within 48 hours using the IOC patterns described above
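To make the OTDS detection idea concrete, here is a minimal sketch of flagging HTTP request bodies that look like serialized Java objects. The magic bytes (`0xAC 0xED 0x00 0x05`) and their base64 prefix (`rO0AB`) are real Java-serialization markers; how you feed request bodies into this check (WAF hook, proxy tap, log replay) is left as an integration detail.

```python
# Sketch: flag request bodies that appear to carry a serialized Java object,
# either raw or base64-wrapped, as an exploit payload against OTDS might be.
import base64
import re

JAVA_MAGIC = b"\xac\xed\x00\x05"        # serialized-stream header bytes
B64_MAGIC = re.compile(r"rO0AB")         # same header, base64-encoded

def looks_like_java_serialization(body: bytes) -> bool:
    """Return True if the body appears to contain a serialized Java object."""
    if JAVA_MAGIC in body:
        return True
    return bool(B64_MAGIC.search(body.decode("ascii", errors="ignore")))

# Example: a base64-wrapped payload like a deserialization exploit might send
sample = base64.b64encode(JAVA_MAGIC + b"\x00" * 16)
print(looks_like_java_serialization(sample))  # True
```

Expect false positives from legitimate Java RMI or JMX traffic; scope the rule to OTDS-facing endpoints before alerting.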
Sources: 1.2M French Accounts Exposed 🇫🇷, INTERPOL Africa Arrests 🌍, Deutsche Bahn DDOS 🚆
02 AI Agents Are Your New Privileged Insiders — And They're Unmonitored
<h3>Converging Evidence Across Five Intelligence Streams</h3><p>A pattern emerged today across five independent sources that, taken together, constitutes the most significant <strong>emerging attack surface</strong> for enterprise security: AI coding agents, marketing automation bots, and LLM-powered triage systems are operating with <strong>privileged access, minimal sandboxing, and zero security telemetry</strong> across your organization.</p><h4>The Data Exfiltration Problem</h4><p>A researcher intercepted <strong>3,177 API calls</strong> across four AI coding tools and found that Gemini Pro sends up to <strong>350,000 tokens</strong> of code context to Google's API for a single bug fix — 15x more than Claude Opus for the identical task. Every token is proprietary code, potentially including secrets, architecture details, and customer data patterns. Meanwhile, Anthropic's own research on Claude Code shows experienced developers <strong>auto-approve more agent actions over time</strong>, granting increasing autonomy without increasing oversight.</p><p>This isn't limited to engineering. Marketing teams are building <strong>multi-API AI pipelines</strong> chaining Ahrefs, Gemini, Anthropic, and Telegram — processing competitive intelligence and customer data through unsanctioned workflows that bypass CASB and DLP entirely.</p><h4>The Supply Chain Compromise Vector</h4><p>The Cline prompt injection attack demonstrated today shows the offensive potential: a crafted GitHub issue title drives an AI triage bot to <strong>execute arbitrary commands, poison CI caches, and steal publishing tokens</strong>. New developer tooling (Expo MCP Server, cmux) is integrating agents directly into build systems with expanding access. 
AI agent frameworks are reinventing concurrency patterns <strong>without the fault-isolation guarantees</strong> of battle-tested systems like Erlang/BEAM.</p><h4>The Governance Gap</h4><p>Cursor published its agent sandboxing architecture — implicitly acknowledging the threat — but it's <strong>opt-in security with approval gates most developers click through reflexively</strong>. Jailbreak research is maturing from novelty to weaponizable tradecraft, meaning LLM safety guardrails are a usability feature, not a security control. And Google's Gemini 3.1 Pro rollout across its entire ecosystem simultaneously constitutes a <strong>silent supply chain change</strong> that can alter prompt-based guardrail behavior without notice.</p><table><thead><tr><th>AI Tool Risk</th><th>Data Exposure</th><th>Control Gap</th></tr></thead><tbody><tr><td>Gemini Pro coding</td><td>~350K tokens/task sent externally</td><td>No DLP coverage on AI API traffic</td></tr><tr><td>Claude Code auto-approve</td><td>Full filesystem + shell access</td><td>Autonomy increases with experience, oversight doesn't</td></tr><tr><td>Marketing AI pipelines</td><td>Competitive intel + customer data</td><td>Built outside security review entirely</td></tr><tr><td>CI/CD AI bots</td><td>Publishing tokens, build artifacts</td><td>Prompt injection = arbitrary execution</td></tr></tbody></table><blockquote>AI coding agents are the next shadow IT crisis: your developers are already using them, your security team hasn't sandboxed them, and the exfiltration path is one unsandboxed API call away.</blockquote>
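As a starting point for the proxy/DLP visibility gap described above, here is a hedged sketch that counts outbound requests to known LLM API hosts from proxy log lines. The hostnames come from the action items below; the log line format is an assumption you would adapt to your own proxy.

```python
# Sketch: tally (client, host) pairs for traffic bound to LLM API endpoints,
# so unsanctioned AI tool usage shows up in a daily report.
from collections import Counter

LLM_HOSTS = {
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
    "api.openai.com",
}

def flag_llm_traffic(log_lines):
    """Return a Counter of (client_ip, host) for LLM-bound requests."""
    hits = Counter()
    for line in log_lines:
        # assumed format: "<client_ip> <method> <host> <path> <bytes_out>"
        parts = line.split()
        if len(parts) >= 5 and parts[2] in LLM_HOSTS:
            hits[(parts[0], parts[2])] += 1
    return hits

log = [
    "10.0.0.5 POST api.anthropic.com /v1/messages 482113",
    "10.0.0.7 GET intranet.local /wiki 2048",
    "10.0.0.5 POST api.anthropic.com /v1/messages 391002",
]
print(flag_llm_traffic(log))
```

The `bytes_out` field is the one to watch next: large request bodies to these hosts are the 350K-token exfiltration pattern the research describes.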
Action items
- Survey all engineering teams for AI coding tool usage (Claude Code, Gemini, Copilot, Cursor, Codex) and document auto-approve settings by end of next week
- Deploy DLP/proxy rules to log and alert on API calls to api.anthropic.com, generativelanguage.googleapis.com, and api.openai.com within 5 business days
- Isolate secrets (.env files, cloud credentials, API keys) from AI agent execution contexts by end of month using vault-based injection
- Publish AI coding assistant acceptable use policy defining which actions require human confirmation vs. auto-approve by end of quarter
- Audit marketing and content teams for unsanctioned AI API integrations and third-party data flows by end of month
Sources: 1.2M French Accounts Exposed 🇫🇷, INTERPOL Africa Arrests 🌍, Deutsche Bahn DDOS 🚆 · Gemini 3.1 Pro 🧠, optimize anything 📈, agent sandboxing 🔐 · Gemini 3.1 Pro 🚀, AI exoskeleton 💀, AI autonomy in practice 🤖 · Gemini 3.1 Pro 🤖, OpenAI's strategic issues 💡, building AI eng culture 👨💻 · Marketing to AI chatbots 🤖, narrow your audience 🎯, GTM launch canvas 📝
03 Insider Threat Escalation: AI Is the Accelerant, Not the Cause
<h3>Two Google Cases + 20% YoY Federal Surge = Pattern, Not Coincidence</h3><p>The insider threat landscape shifted measurably this cycle. Three independent intelligence streams converge on the same conclusion: <strong>AI tools have created a DLP-invisible exfiltration channel</strong> that insiders are actively exploiting, and traditional controls aren't catching it.</p><h4>The Google Signal</h4><p>U.S. prosecutors indicted three Silicon Valley engineers on <strong>14 felony counts</strong> for stealing hundreds of confidential files containing Pixel processor and <strong>Tensor chip designs</strong>, then funneling them to contacts in Iran via personal devices and third-party messaging. This comes weeks after a separate conviction of another Google engineer for stealing AI trade secrets. The exfiltration path was devastatingly simple: <strong>legitimate access → personal device → third-party messaging → Iran-based storage</strong>. No zero-days. No supply chain compromise. Just authorized users copying files through channels that didn't trigger alerts.</p><p>The DOJ's response — <strong>14 felony counts with up to 20 years imprisonment</strong> — signals federal prosecutors are treating tech IP theft as a national security priority, particularly with nation-state connections.</p><h4>The Macro Trend</h4><p>Approximately <strong>1,500 federal trade secret cases</strong> were filed last year — a <strong>20% increase</strong> year-over-year — with AI explicitly identified as a contributing factor. The attack pattern: an insider pastes proprietary code, formulas, or strategic documents into an external AI service. The AI synthesizes or reformulates the content. The insider extracts the output. <strong>No file was copied. No USB was mounted. 
No email attachment was sent.</strong> Traditional DLP triggers don't fire.</p><h4>The Credential Weakness</h4><p>Compounding the insider problem, cybersecurity firm Irregular found that <strong>LLM-generated passwords contain predictable, repeatable patterns</strong> vulnerable to brute-force attacks. LLMs are deterministic pattern generators, not cryptographically secure RNGs. An attacker studying LLM password output can build targeted dictionaries that dramatically narrow the search space — especially dangerous for accounts without MFA.</p><h4>Cross-Source Pattern</h4><p>The Google case, the trade secret surge, and the LLM password weakness all point to the same structural gap: <strong>security controls designed for file-based, network-based exfiltration are blind to AI-mediated data movement</strong>. The Prince Andrew arrest — sharing confidential British trade reports with Epstein over years while serving as UK trade envoy — provides a historical case study of the same pattern: authorized access, slow exfiltration to external contacts, detection lag measured in decades.</p><blockquote>AI didn't create a new threat category; it gave insiders a DLP-invisible exfiltration channel, and your 2024-era controls aren't catching it.</blockquote>
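The LLM password weakness above maps onto a well-known cracking technique: reducing passwords to hashcat-style masks. This illustrative sketch (the sample passwords are invented, not from the Irregular research) shows how structurally repetitive passwords collapse to a handful of masks, dramatically narrowing an attacker's search space.

```python
# Sketch: reduce passwords to hashcat-style masks (?u upper, ?l lower,
# ?d digit, ?s symbol). Few distinct masks = cheap targeted mask attack.
from collections import Counter

def mask(pw: str) -> str:
    out = []
    for ch in pw:
        if ch.isupper():
            out.append("?u")
        elif ch.islower():
            out.append("?l")
        elif ch.isdigit():
            out.append("?d")
        else:
            out.append("?s")
    return "".join(out)

# Hypothetical LLM-generated samples that all follow one structural template
samples = ["Kx7!mPqa2025", "Tz3!rLwe2024", "Qm9!bNds2026"]
print(Counter(mask(p) for p in samples))
# All three collapse to a single mask, so one mask covers the whole set
```

Running the same reduction over your own password-reset telemetry is a quick way to measure how much structural predictability has crept into user-chosen or tool-generated credentials.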
Action items
- Red-team your DLP controls against the Google exfiltration path this week: test whether an engineer can bulk-download restricted files to a personal device and upload via Telegram, Signal, or WhatsApp
- Add detection rules for data pasted into known LLM endpoints (api.openai.com, claude.ai, gemini.google.com) and monitor for large clipboard operations to browser-based AI tools by end of month
- Issue guidance prohibiting LLM-generated passwords and verify password manager adoption rates exceed 80% by end of quarter
- Update departure protocols to flag download spikes in the 30 days before separation, and coordinate with HR on recruitment outreach from ByteDance Seed and similar foreign AI labs
Sources: 🎰 Zuck vs. Instagram addiction · Gemini 3.1 Pro 🤖, OpenAI's strategic issues 💡, building AI eng culture 👨💻 · ☕ Ice cream dumper · ☕️ RESCUE AND LIBERATION ☙ Friday, February 20, 2026 ☙ C&C NEWS 🦠
04 Your Threat Intel Has a Structural Blind Spot — China Publishes First
<h3>~1,400 Vulnerabilities Published Before CVE Assignment</h3><p>Bitsight's analysis of China's two national vulnerability databases — <strong>CNNVD</strong> (Ministry of State Security) and <strong>CNVD</strong> (CNCERT) — reveals a deeply concerning intelligence asymmetry. Approximately <strong>1,400 vulnerability entries</strong> were published in Chinese databases before becoming public in CVE, often by <strong>several months</strong>. Some entries have no CVE equivalent at all, representing vulnerabilities potentially unknown to Western defenders entirely.</p><h4>The Regulatory Mechanism</h4><p>China's 2021 RMSV regulations create a structural advantage: mandatory <strong>48-hour government reporting</strong> of discovered vulnerabilities combined with <strong>prohibitions on sharing PoC exploits</strong> publicly. The Chinese government receives early notification while simultaneously restricting information flow to the global security community. Post-2021, CNVD saw a decline in non-CVE publications, but <strong>CNNVD has recently seen a resurgence</strong> — suggesting the MSS-affiliated database is actively expanding independent vulnerability collection.</p><h4>Compounding Factors</h4><p>This intelligence gap is worsened by two concurrent developments. First, the <strong>partial US government shutdown</strong> is degrading federal cyber services — during the 2018-2019 shutdown, TLS certificates for federal websites expired unrenewed and NIST's NVD fell behind on CVE processing. Second, <strong>US-Iran military escalation</strong> (carrier groups deployed, potential strikes within 10 days) historically precedes Iranian APT cyber retaliation campaigns. 
APT33/Peach Sandstorm, APT35/Charming Kitten, and MuddyWater all escalated operations during prior kinetic confrontation periods.</p><p>The implication: your vulnerability management program is operating with a structural blind spot that Chinese state actors can exploit during the gap, your federal threat intelligence sources may be degraded by the shutdown, and geopolitical escalation is elevating the likelihood of state-sponsored cyber operations.</p><blockquote>If CVE is your only source of vulnerability truth, you're operating months behind adversaries who read CNNVD.</blockquote>
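To operationalize the CNNVD/CNVD lead-time problem, a feed comparison can quantify your blind spot. This is a minimal sketch with invented entries and assumed field names — real CNNVD/CNVD ingestion would go through a commercial feed or scraper.

```python
# Sketch: given Chinese-database entries and NVD publish dates, report
# entries that either preceded their CVE (lead time in days) or have no
# CVE equivalent at all.
from datetime import date

cnnvd = [
    {"id": "CNNVD-202602-001", "cve": "CVE-2026-1111", "published": date(2026, 2, 1)},
    {"id": "CNNVD-202602-002", "cve": None, "published": date(2026, 2, 3)},
]
nvd = {"CVE-2026-1111": date(2026, 2, 20)}  # CVE id -> NVD publish date

def coverage_gaps(cn_entries, nvd_dates):
    """Yield (entry_id, lead_days) pairs; lead_days is None if no CVE exists."""
    for e in cn_entries:
        cve = e["cve"]
        if cve is None or cve not in nvd_dates:
            yield e["id"], None                      # no CVE equivalent at all
        elif e["published"] < nvd_dates[cve]:
            yield e["id"], (nvd_dates[cve] - e["published"]).days

print(list(coverage_gaps(cnnvd, nvd)))
```

Tracked weekly, the distribution of `lead_days` is the metric to put in front of leadership: it is how many days of exposure a CVE-only program accepts.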
Action items
- Evaluate Chinese vulnerability database monitoring services (Bitsight, VulnCheck, or direct CNNVD/CNVD feeds) for integration into your vulnerability management workflow by end of quarter
- Verify commercial threat intelligence platforms (Recorded Future, Mandiant, CrowdStrike Intel) are active and filling gaps during the government shutdown by end of week
- Refresh Iranian APT detection rules (APT33, APT34, APT35, MuddyWater) against latest CISA advisories and validate EDR coverage for OAuth token abuse, password spraying, and credential harvesting
- Validate out-of-band communication plans for any personnel or operations in the Middle East that don't depend on local internet infrastructure
Sources: 1.2M French Accounts Exposed 🇫🇷, INTERPOL Africa Arrests 🌍, Deutsche Bahn DDOS 🚆 · Defense Expert Dana Stroul on Iran, Syria, and the Illusion of Clean War · ☕ Ice cream dumper
◆ QUICK HITS
Firebase misconfiguration exposed 300M chat messages from 25M users of Codeway's Chat & Ask AI app — audit your own BaaS security rules
1.2M French Accounts Exposed 🇫🇷, INTERPOL Africa Arrests 🌍, Deutsche Bahn DDOS 🚆
MultiDesk RDP client stores credentials using trivially recoverable RC4 encryption with keys in the user's registry hive — remove from environment and rotate all stored credentials
1.2M French Accounts Exposed 🇫🇷, INTERPOL Africa Arrests 🌍, Deutsche Bahn DDOS 🚆
Google's $32B Wiz acquisition closes next month after EU approval — if Wiz is in your CSPM/CNAPP stack, initiate vendor risk assessment for post-acquisition changes now
Big Wins for Two of Venture's Most Envied Firms: $10 Billion for Thrive & an Altman for Benchmark
WorkOS is a shared SSO/auth dependency across OpenAI, Anthropic, Cursor, and Perplexity — map it in your vendor chain and request their SOC 2 Type II report
Gemini 3.1 Pro 🤖, OpenAI's strategic issues 💡, building AI eng culture 👨💻
INTERPOL Operation Red Card 2.0 across 16 African countries: 651 arrests, $4.3M recovered, 1,442 malicious IPs/domains taken down disrupting $45M+ in fraud
1.2M French Accounts Exposed 🇫🇷, INTERPOL Africa Arrests 🌍, Deutsche Bahn DDOS 🚆
Cellebrite forensic tools confirmed used on Kenyan activist's phone; Intellexa Predator spyware targeting Angolan journalist via WhatsApp — review forensic tool procurement for regulatory risk
1.2M French Accounts Exposed 🇫🇷, INTERPOL Africa Arrests 🌍, Deutsche Bahn DDOS 🚆
Accenture (750K employees) now mandates AI tool usage for promotions and tracks weekly logins — if they're your consulting vendor, assess data handling through ChatGPT, Claude, and Palantir
☕ Ice cream dumper
US State Department deploying freedom.gov VPN portal to help Europeans bypass content restrictions — geo-IP-based compliance controls are degraded by state-sponsored circumvention
☕️ RESCUE AND LIBERATION ☙ Friday, February 20, 2026 ☙ C&C NEWS 🦠
BOTTOM LINE
Three unauthenticated critical vulnerabilities (Honeywell CCTV CVSS 9.8, OpenText OTDS RCE, Cline CI/CD prompt injection) demand patching within 48 hours. Meanwhile, AI coding agents that send up to 350,000 tokens of your code per task to external APIs, and a 20% YoY surge in federal trade secret cases, prove your DLP program is blind to the exfiltration channels your own people and tools are using right now.
Frequently asked
- Which of the three critical vulnerabilities should be patched first?
- Honeywell CVE-2026-1670 (CVSS 9.8) should be remediated first because CISA has already issued an advisory and the exploit path — unauthenticated password recovery email change — is trivial to weaponize. OpenText OTDS is a close second given its blast radius across every integrated OpenText application (Content Server, Documentum, InfoArchive). The Cline AI bot issue is urgent but scoped to organizations running AI triage bots with CI/CD write access.
- How can an attacker compromise a CI/CD pipeline through an AI bot without credentials?
- A prompt-injected GitHub issue title can drive Cline's Claude-based triage bot to execute arbitrary commands, poison GitHub Actions caches, and exfiltrate publishing tokens for npm, VS Code Marketplace, and OpenVSX. The attacker never authenticates — they craft a string that the LLM interprets as an instruction. Any AI agent with write access to pipelines, artifacts, or publishing credentials is exposed to this class of attack.
- Why aren't traditional DLP controls catching AI-mediated data exfiltration?
- DLP was designed to catch file copies, email attachments, USB mounts, and network transfers — not pasted text into browser-based LLM interfaces or API calls to api.anthropic.com and generativelanguage.googleapis.com. Researchers measured Gemini Pro sending up to 350,000 tokens of proprietary code per task to Google's API, and none of it triggers legacy DLP. Detection requires new rules targeting LLM endpoints, clipboard monitoring, and API traffic inspection.
- What is the significance of Chinese vulnerability databases publishing before CVE?
- Roughly 1,400 vulnerability entries appeared in CNNVD or CNVD before CVE assignment, sometimes by several months, and some never receive a CVE at all. China's 2021 RMSV regulations mandate 48-hour government reporting while restricting public PoC sharing, giving MSS-affiliated researchers an intelligence head start. Vulnerability management programs that rely solely on CVE/NVD feeds are structurally months behind adversaries monitoring Chinese sources.
- What detection rules should be deployed within 48 hours for the three critical RCEs?
- For Honeywell, alert on unauthenticated API calls to camera account management endpoints and any password recovery email change requests. For OpenText OTDS, deploy network-level inspection for serialized Java objects in HTTP traffic and monitor application logs for deserialization exceptions. For CI/CD pipelines, monitor GitHub Actions cache writes for unexpected entries, alert on publishing token usage from non-standard IPs, and require human approval for workflow changes triggered by AI agents.