PROMIT NOW · SECURITY DAILY · 2026-04-26

Windows Update Pause Loophole Threatens Patch SLA Compliance

Security · 8 sources · 1,220 words · 6 min

Topics: Agentic AI · AI Capital · LLM Inference

Microsoft is rolling out a feature that lets Windows users pause updates indefinitely in repeatable 35-day increments — a user-controlled kill switch on your patch compliance at the exact moment mean time-to-exploit has collapsed to 20 hours. Verify that your MDM/GPO configurations explicitly block this behavior before it ships, or accept that every endpoint user now holds veto power over your vulnerability remediation SLAs.

◆ INTELLIGENCE MAP

  01

    Patch Compliance Under Siege: Microsoft's Kill Switch Meets OT RCE

    act now

    Microsoft's infinite update-pause (35-day increments, repeatable) guts centralized patch management. Simultaneously, serial-to-Ethernet converters across RTUs, PLCs, PoS, and bedside monitors harbor RCE and auth-bypass flaws. Two vectors, one outcome: your patch surface just got unmanageable without policy intervention.

    Key stat: 35 days per user pause increment · 2 sources
    Metrics tracked: user pause window · mean time-to-exploit · OT device categories · vuln classes
    Exposure gap (hours): user pause window 840 vs. mean time-to-exploit 20
  02

    Chinese AI Compute Sovereignty: DeepSeek V4 on Huawei Ascend

    monitor

    DeepSeek V4 is the first frontier-class model running natively on Huawei Ascend (CANN) chips — entirely outside NVIDIA/CUDA and US export controls. MIT-licensed base weights at $0.14/1M tokens with 1M-token context. Four sources corroborate the shift: the State Department issued a global AI IP theft warning, China is blocking US tech investment, and open-weight models now replicate Mythos-level vuln-hunting locally.

    Key stat: $0.14 per 1M tokens (V4 Flash) · 4 sources
    Metrics tracked: V4 Pro parameters · V4 Flash cost · context window · factual hallucination · cost reduction vs V3
    Cost per 1M tokens ($): DeepSeek V4 Flash 0.14 · DeepSeek V3.2 0.26 · GPT-5.5 0.52
  03

    AI Agent & Shadow AI Sprawl Outpaces Enterprise Controls

    monitor

    Seven-plus funded AI agent startups (Band, Thoughtly, Brev, Cloneable, Zig.ai, Cognition, Astor) are embedding autonomous systems into CRM, revenue ops, meeting recordings, and industrial workflows. Simultaneously, shadow AI usage creates ungoverned data exfiltration disguised as productivity. Agents operating 300+ autonomous steps with code execution are functionally insider threats without identity governance.

    Key stat: 7+ funded agent startups · 4 sources
    Metrics tracked: agent startups funded · max autonomous steps · agent self-approval rate · enterprise workflows hit
    Risk ranking:
    01 Cognition (source code/CI/CD) · Critical
    02 Cloneable (industrial SOPs) · Critical
    03 Thoughtly (CRM/PII) · High
    04 Brev (meeting recordings) · High
    05 Band (agent-to-agent comms) · High
    06 Zig.ai (revenue data) · Medium
  04

    AI Vendor Concentration Risk: $100B Trial + $40B Lock-in

    background

    Musk v. Altman trial starts Monday seeking $100B+ in damages and reversal of OpenAI's for-profit structure — Microsoft is a co-defendant. Meanwhile, Google's $40B Anthropic deal creates transitive cloud dependency for Claude users. Cohere's $600M Aleph Alpha acquisition consolidates sovereign AI in Europe. Your AI vendor risk register needs updating before Monday.

    Key stat: $100B+ damages sought · 3 sources
    Metrics tracked: Musk damages sought · Google-Anthropic deal · trial start · Cohere-Aleph Alpha
    Deal/dispute size ($B): Musk v. Altman (damages) 100 · Google → Anthropic 40 · Cohere-Aleph Alpha 0.6

◆ DEEP DIVES

  01

    Microsoft's Infinite Update-Pause Button Collides with OT RCE — Your Patch Surface Just Fractured

    Two Vectors, One Outcome

    Two unrelated developments converge into a single patch management crisis. Microsoft is shipping a feature that lets users pause Windows Updates indefinitely — 35 days at a time, repeatable with no cap. Simultaneously, serial-to-Ethernet converters deployed across industrial, healthcare, retail, and data center environments harbor RCE, authentication bypass, and information disclosure vulnerabilities with no patch timeline announced.

    Neither development came with a CVE. Both fundamentally change your risk posture.

    The Microsoft Problem

    For consumer devices, infinite pause is a convenience feature. For your enterprise, it's a compliance landmine. Unless your MDM or GPO policies explicitly override this behavior, any user can defer critical security updates for months with zero technical friction. One unpatched endpoint is a pivot point for lateral movement — and Microsoft is making it easier for users to create those pivot points.

    The timing is what makes this dangerous. Thursday's data showed mean time-to-exploit has collapsed to 20 hours. A 35-day pause window — let alone a repeated one — creates an 840-hour exposure gap. That's a 42x mismatch between how fast adversaries weaponize and how long users can defer.

    » If your patch SLA assumes centralized control over update deployment, Microsoft just invalidated that assumption at the OS level.

    The OT Problem

    Serial-to-Ethernet converters are the forgotten attack surface — bridge devices connecting legacy serial equipment to IP networks. They rarely appear in vulnerability scans because they're often not recognized as IP-addressable assets. The affected systems span four critical environments:

    System Type       Environment             Impact
    RTUs              Industrial / Utilities  RCE → process manipulation
    PLCs              Manufacturing           Auth bypass → unauthorized control
    PoS Systems       Retail                  Info disclosure → payment data theft
    Bedside Monitors  Healthcare              RCE → patient safety risk

    These devices sit at the boundary between legacy serial protocols and modern networks, translating data without authentication, encryption, or integrity checks. They are almost certainly in your environment and almost certainly unpatched.

    Combined Defensive Response

    The compounding effect is what matters: unmanaged Windows endpoints and unpatched OT bridge devices simultaneously expanding your exposure window. Address both this week.

    Action items

    • Verify that MDM/GPO configurations explicitly block users from pausing Windows Updates beyond your patch SLA — test against current and upcoming Windows builds (a verification sketch follows this list)
    • Inventory all serial-to-Ethernet converters across OT, healthcare, retail, and data center environments by end of week
    • Implement emergency network segmentation for every identified serial-to-Ethernet converter — firewall with explicit allow-lists, disable remote management on untrusted networks
    • Document Windows Update enforcement policy for SOC 2 and compliance evidence within 30 days
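
    For the first action item, a minimal verification sketch in Python, assuming Windows endpoints and administrator rights. It relies on the long-documented SetDisablePauseUXAccess policy value (the "Remove access to Pause updates" control); whether that value also caps the new repeatable pause is exactly what pilot-ring testing must confirm, so treat this as an audit aid, not a guarantee:

        # Sketch: enforce and verify the "Remove access to 'Pause updates'"
        # policy value. ASSUMPTION: SetDisablePauseUXAccess also governs the
        # new repeatable 35-day pause; validate on a pilot ring first.
        import winreg

        KEY_PATH = r"SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate"
        VALUE_NAME = "SetDisablePauseUXAccess"

        def enforce_no_pause() -> None:
            """Set the policy DWORD that removes the user-facing pause control."""
            with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                                    winreg.KEY_SET_VALUE) as key:
                winreg.SetValueEx(key, VALUE_NAME, 0, winreg.REG_DWORD, 1)

        def pause_blocked() -> bool:
            """Return True if the policy value is present and enabled."""
            try:
                with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
                    value, _ = winreg.QueryValueEx(key, VALUE_NAME)
                    return value == 1
            except FileNotFoundError:
                return False

        if __name__ == "__main__":
            enforce_no_pause()
            print("pause blocked" if pause_blocked()
                  else "NOT enforced - check GPO/MDM precedence")

    In a real fleet the same value would be pushed via GPO or your MDM's update policy profile; the script is just the per-endpoint audit half for pilot validation.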

    Sources: Your npm supply chain just got weaponized: Bitwarden CLI trojanized, and serial-to-Ethernet flaws threaten your OT stack · Microsoft just gave your users an infinite patch-pause button — and XChat's 'encryption' is a lie

  02

    DeepSeek V4 on Huawei Ascend: US Export Controls No Longer Constrain Adversary AI Capability

    The Milestone

    DeepSeek V4 is the first frontier-class model runnable natively on Huawei Ascend (CANN) chips — entirely outside the NVIDIA/CUDA ecosystem and beyond the reach of US export controls. V4 Pro (1.6T parameters, 49B active) and V4 Flash (284B, 13B active) ship under MIT license with base model weights, a 1M-token context window, and API pricing of $0.14 per million tokens — roughly half the cost of the V3 generation.

    Four independent sources converge on the same conclusion: the assumption that export controls constrain adversary AI capability is now empirically falsified.

    What This Changes for Defenders

    Base model weights without safety fine-tuning are trivially adaptable for offensive use. Combined with a 1M-token context window, threat actors can feed entire codebases, organizational document sets, or communication archives into a single prompt. The model's agentic performance leads open-weight rankings (GDPval-AA: 1554), meaning it can execute multi-step attack plans effectively.

    » Frontier-class AI is now MIT-licensed, runs on sanctioned hardware, costs pennies per million tokens, and can execute hundreds of autonomous steps — your threat model should reflect this reality.

    The Contradiction Worth Noting

    Here's where the signal gets nuanced: V4 Pro and Flash show 94–96% hallucination rates on factual accuracy benchmarks (AA-Omniscience), despite strong agentic performance. This matters for defenders. AI-assisted offensive tools will excel at pattern-matching tasks — vulnerability scanning, code analysis, social engineering template generation — but will generate false positives at scale. Adversary operations will be faster but not necessarily smarter, creating a flood of low-quality attacks alongside genuinely novel ones.

    This changes your defensive calculus: invest less in blocking the one perfect AI-crafted exploit and more in handling the volume of mediocre but AI-accelerated ones.

    Geopolitical Context: Cross-Source Convergence

    The US State Department issued a global warning about AI IP theft by DeepSeek and Chinese firms — a diplomatic action signaling intelligence community confidence. China is simultaneously moving to block tech firms from accepting US investment without government approval, triggered by Meta's acquisition of AI startup Manus. Meanwhile, researchers demonstrated that small open-weight models can replicate Anthropic Mythos's vulnerability-hunting capabilities locally, with no audit trail, rate limits, or terms of service.

    The Cohere–Aleph Alpha acquisition ($600M Schwarz Group backing) signals that European enterprises are actively seeking non-US, non-Chinese AI sovereignty — a trend organizations under GDPR or NIS2 should evaluate.

    Action items

    • Update your adversary capability model to assume frontier-class open-weight AI running on hardware outside export control reach — brief your threat intelligence team by end of week
    • Review and enforce acceptable use policies for open-weight LLMs (especially MIT-licensed base models like DeepSeek V4) across development teams within 30 days
    • Evaluate non-US, non-Chinese AI providers (Cohere, Aleph Alpha successors, Mistral) if operating under data sovereignty requirements
    • Maintain a living inventory of any Chinese-origin AI model exposure in your stack or vendor chain — prepare contingency plans for regulatory-forced model replacement (a first-pass scan sketch follows this list)
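
    For the inventory item, a hedged first-pass sketch in Python. The identifier patterns are illustrative, not authoritative, and hits are leads for human review; a real program would also pull SBOMs and vendor questionnaires:

        # Sketch: flag references to Chinese-origin model families across a
        # repo or config tree. Pattern list is illustrative: maintain your
        # own, and treat hits as leads, not confirmed exposure.
        import pathlib
        import re

        MODEL_PATTERNS = re.compile(r"deepseek|qwen|internlm|glm-4|yi-\d",
                                    re.IGNORECASE)
        SCAN_SUFFIXES = {".py", ".ts", ".json", ".yaml", ".yml", ".toml",
                         ".env", ".txt", ".md"}

        def scan(root: str):
            """Yield (path, line_no, excerpt) for each matching line under root."""
            for path in pathlib.Path(root).rglob("*"):
                if not (path.is_file() and path.suffix in SCAN_SUFFIXES):
                    continue
                try:
                    text = path.read_text(errors="ignore")
                except OSError:
                    continue
                for line_no, line in enumerate(text.splitlines(), start=1):
                    if MODEL_PATTERNS.search(line):
                        yield str(path), line_no, line.strip()[:120]

        if __name__ == "__main__":
            for hit in scan("."):
                print(*hit, sep=" | ")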

    Sources: DeepSeek V4 on Huawei Ascend: What Chinese AI sovereignty means for your supply chain risk model · Anthropic Mythos breached while NSA uses it — and open models now replicate its vuln-hunting capabilities · Microsoft just gave your users an infinite patch-pause button — and XChat's 'encryption' is a lie · AI agents flooding your enterprise stack — and your vendor risk register can't keep up

  03

    Shadow AI and Agent Sprawl: Ungoverned Autonomy Is Embedding Faster Than Controls Can Follow

    Two Fronts, One Governance Gap

    Your enterprise faces a pincer movement. On one front, shadow AI adoption — employees pasting customer data into unsanctioned LLMs, teams running sensitive documents through unapproved summarizers, developers feeding proprietary code to free-tier coding assistants — is creating data exfiltration events that look like productivity. Your CASB probably isn't catching most of them. Your DLP rules were written for email attachments and USB drives, not API calls to inference endpoints.

    On the second front, at least seven funded AI agent startups are actively selling autonomous systems that embed into your enterprise workflows: Thoughtly (CRM/customer records), Brev (meeting recordings and performance tracking), Cloneable (industrial process SOPs), Zig.ai (revenue data), Band (real-time agent-to-agent communication), Cognition AI (source code and CI/CD), and Astor (financial recommendations).

    » Every one of these requires deep system access to function. Most are pre-Series A with limited security maturity. The risk isn't theoretical — it's the same SaaS sprawl pattern, except the third party now has autonomous decision-making authority inside your systems.

    The Machine-Readable Enterprise Trap

    Multiple sources highlight a deeper structural risk: the push to make enterprises "machine-readable" for AI agents. When AI agents join an organization, they need broad read access across communication platforms, document repositories, calendar systems, and workflow tools. That's a non-human identity with the access scope of a senior executive and the query volume of an automated scanner.

    Budget allocations, decision-making workflows, strategic planning processes — organizational intelligence that historically existed only in people's heads and scattered slide decks — is being centralized and structured for AI consumption. Every process you make machine-readable is also a process you've made queryable by any identity with access. Centralization for AI also centralizes for theft.

    Risk Vector                Current                     Post-AI Transformation
    Shadow AI data leakage     Medium — ad hoc usage       High — normalized, invisible
    Agent identity compromise  Low — few agents deployed   High — broad-access non-human IDs
    Org data centralization    Low — scattered, informal   High — structured, queryable

    Why Existing Controls Fail

    Prior analysis showed that agent self-approval rates hit 97% with token counters ignored and budget tools never invoked. Only external model oversight — true separation of duties — proved effective. The funded agent startups listed above will inherit this same problem unless your governance framework is in place before procurement approves them.

    The inter-agent communication layer (Band) is particularly concerning: it creates machine-to-machine data flows that your SIEM probably can't distinguish from legitimate API traffic, enabling lateral movement patterns invisible to current detection.

    Action items

    • Run a CASB audit targeting known LLM/AI SaaS endpoints (OpenAI, Anthropic, Google AI Studio, Perplexity, DeepSeek) within 2 weeks — cross-reference with DNS logs and browser telemetry to establish a baseline of unsanctioned AI usage (see the first sketch after this list)
    • Update DLP rules to detect sensitive data patterns (PII, financial data, source code) in HTTP POST bodies to AI inference APIs — deploy within 30 days (see the second sketch after this list)
    • Publish an AI acceptable use policy specifying sanctioned tools, prohibited data categories, and consequences — name specific tools and data types
    • Require SOC 2 Type II, data processing agreements, defined data retention policies, and security architecture review before any AI agent gets API credentials to your systems
    • Begin building AI agent identity governance: creation, authentication, least-privilege, credential rotation, audit logging, decommissioning — integrate with your PAM/IAM stack
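
    First, a baseline sketch for the CASB/DNS audit. It assumes a DNS log exported as CSV with timestamp, client_ip, and query columns (rename to match your resolver's schema); the domain list covers the public API hosts named above and is a starting point, not a complete inventory:

        # Sketch: baseline unsanctioned AI API usage from an exported DNS log.
        # ASSUMPTIONS: CSV columns named timestamp, client_ip, query.
        import csv
        from collections import Counter

        AI_API_DOMAINS = {
            "api.openai.com",
            "api.anthropic.com",
            "generativelanguage.googleapis.com",  # Google AI Studio / Gemini API
            "api.perplexity.ai",
            "api.deepseek.com",
        }

        def is_ai_endpoint(qname: str) -> bool:
            """True if the query is, or is a subdomain of, a known AI API host."""
            q = qname.rstrip(".").lower()
            return any(q == d or q.endswith("." + d) for d in AI_API_DOMAINS)

        def baseline(dns_log_csv: str) -> Counter:
            """Count (client_ip, domain) pairs that queried known AI API hosts."""
            hits = Counter()
            with open(dns_log_csv, newline="") as f:
                for row in csv.DictReader(f):
                    if is_ai_endpoint(row["query"]):
                        hits[(row["client_ip"],
                              row["query"].rstrip(".").lower())] += 1
            return hits

        if __name__ == "__main__":
            for (ip, domain), count in baseline("dns_export.csv").most_common(25):
                print(f"{ip:15}  {domain:40}  {count}")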
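
    Second, a hedged pattern-matching core for the DLP rule update. In practice this logic lives inside your proxy or DLP engine, running over decrypted POST bodies; these regexes are deliberately simple starting points with known false-positive behavior, not production detectors:

        # Sketch: classify sensitive-looking content in an HTTP POST body
        # bound for an AI inference API. Patterns are illustrative; tune
        # against your own data classes before enforcement.
        import re

        SENSITIVE_PATTERNS = {
            "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
            "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
            "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
            "source_code": re.compile(r"\b(?:def |class |import |#include |package )"),
        }

        def classify(post_body: str) -> list:
            """Return names of all sensitive-data classes matched in the body."""
            return [name for name, pat in SENSITIVE_PATTERNS.items()
                    if pat.search(post_body)]

        if __name__ == "__main__":
            sample = "please summarize this: def rotate_keys(): import os"
            print(classify(sample))  # -> ['source_code']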

    Sources: AI agents flooding your enterprise stack — and your vendor risk register can't keep up · Shadow AI is building a shadow org inside your enterprise — and your DLP can't see it · DeepSeek V4 on Huawei Ascend: What Chinese AI sovereignty means for your supply chain risk model · Anthropic Mythos breached while NSA uses it — and open models now replicate its vuln-hunting capabilities

◆ QUICK HITS

  • Update: Open-weight models now replicate Anthropic Mythos's vulnerability-hunting capabilities locally — adversaries can run equivalent scanning offline with zero audit trail, no rate limits, and no API costs

    Anthropic Mythos breached while NSA uses it — and open models now replicate its vuln-hunting capabilities

  • XChat (X's new messaging app) claims E2E encryption, but researchers found it less secure than Signal with no independent audit — add it to your unsanctioned app list and block on managed devices before employees adopt it for business comms

    Microsoft just gave your users an infinite patch-pause button — and XChat's 'encryption' is a lie

  • SpaceX S-1 discloses xAI faces global investigations into abusive AI imagery as a material IPO risk — first SEC-grade precedent that AI content safety failures are auditable business liabilities; document your own GenAI content safety controls now

    AI agents flooding your enterprise stack — and your vendor risk register can't keep up

  • Google reports AI now writes 75% of its new code while Anthropic acknowledges Claude Code quality problems and GPT-5.5 hallucinates despite topping benchmarks — your vendors are shipping AI-generated code at industrial scale with known defect rates

    Anthropic Mythos breached while NSA uses it — and open models now replicate its vuln-hunting capabilities

  • AI-driven DRAM/NAND shortages are hitting servers, laptops, and security appliances — Nvidia's Vera CPU consumes RAM equivalent to 4,600 phones; lock in H2 2026 hardware procurement now before markups worsen

    Microsoft just gave your users an infinite patch-pause button — and XChat's 'encryption' is a lie

  • Anthropic's Project Deal experiment: stronger AI models negotiate materially better deals while disadvantaged parties rate outcomes as equally fair — 69 employees, one week, Opus-tier agents consistently extracted more value without triggering suspicion

    Microsoft just gave your users an infinite patch-pause button — and XChat's 'encryption' is a lie

BOTTOM LINE

Microsoft is shipping an infinite patch-pause button for Windows users the same week DeepSeek released an MIT-licensed frontier AI model running on sanctioned Chinese hardware at $0.14 per million tokens; seven AI agent startups got funded to embed autonomous systems into enterprise CRM, code, and industrial workflows; and serial-to-Ethernet converters across healthcare and OT environments were found riddled with RCE. If your patch policies, threat model, and vendor risk program haven't been updated in the last 72 hours, all three are already stale.

FREQUENTLY ASKED

How do I prevent end users from abusing the new indefinite Windows Update pause feature?
Explicitly configure MDM or GPO policies to block user-initiated update deferral beyond your patch SLA, and test those policies against current and upcoming Windows builds before the feature ships. Default Windows Update policies written before this change may not cover repeatable 35-day pauses, so validate the behavior in a pilot ring rather than assuming existing enforcement carries over.
Why are serial-to-Ethernet converters a priority when no CVE has been assigned?
These bridge devices carry unpatched RCE, authentication bypass, and information disclosure flaws, and they rarely show up in standard asset inventories because they're not recognized as IP-addressable. They exist in industrial RTUs, manufacturing PLCs, retail PoS, and healthcare bedside monitors — meaning the exposure spans four critical environments simultaneously, with no vendor patch timeline, making network segmentation the only available control.
Does DeepSeek V4 running on Huawei Ascend actually change my threat model?
Yes — it removes the assumption that US export controls constrain adversary AI capability. A frontier-class, MIT-licensed model with a 1M-token context window, base weights available, and $0.14/million-token pricing now runs entirely outside the NVIDIA/CUDA ecosystem. Adversaries can fine-tune it for offensive use without safety guardrails, and its strong agentic performance means it can execute multi-step attack plans even if its factual hallucination rate is high.
What controls actually detect shadow AI usage that traditional DLP misses?
Start with a CASB audit targeting known LLM endpoints (OpenAI, Anthropic, Google AI Studio, Perplexity, DeepSeek) cross-referenced with DNS and browser telemetry to establish a usage baseline. Then extend DLP rules to inspect HTTP POST bodies to inference APIs for PII, financial data, and source code patterns — legacy DLP watches file transfers and email, but AI-era exfiltration is API-based and invisible to those rulesets.
How should AI agents be onboarded given their broad system access requirements?
Treat agent onboarding like contractor onboarding: require SOC 2 Type II attestation, data processing agreements, defined retention policies, and a security architecture review before issuing API credentials. Build agent identity governance into your PAM/IAM stack — covering creation, authentication, least-privilege scoping, credential rotation, audit logging, and decommissioning — because agent self-approval rates of 97% have been observed, meaning only external oversight provides real separation of duties.
