PROMIT NOW · ENGINEER DAILY · 2026-02-27

Engineer · 45 sources · 1,875 words · 9 min

Topics: Agentic AI · LLM Inference · AI Capital

A self-propagating npm worm (SANDWORM_MODE) is actively injecting malicious MCP servers into Claude, Cursor, Windsurf, and VS Code Continue — hijacking your AI coding assistant's tool-calling capability to exfiltrate crypto keys, raid password managers, and propagate through your repos. Simultaneously, Claude Code itself has confirmed RCE vulnerabilities (CVE-2025-59536, CVE-2026-21852) where merely opening a cloned repository with malicious config files achieves code execution. Audit every MCP server configuration and AI tool permission grant in your org today — not next sprint.

◆ INTELLIGENCE MAP

  1. 01

    AI Dev Toolchain Under Active Attack: Supply Chain + RCE + Trust Boundary Failures

    act now

    Seven independent sources confirm a coordinated, multi-vector assault on developer toolchains — npm supply chain worms targeting AI assistants, RCE in Claude Code via malicious repo configs, prompt injection chaining through CI/CD to production releases, and fake job-assessment repos deploying fileless backdoors — all exploiting the same fundamental flaw: AI agents with tool-use capabilities lack hard trust boundaries between untrusted content and privileged actions.

    7 sources
  2. 02

    Critical Infrastructure CVEs: Cisco SD-WAN, Cloud Hypervisor, Semantic Kernel

    act now

    Cisco SD-WAN auth bypass (CVE-2026-20127) has been exploited since 2023 via a devastating software downgrade chain; Cloud Hypervisor has a CVSS 10.0 guest-to-host file exfiltration bug breaking container isolation; Microsoft Semantic Kernel Python SDK has RCE in InMemoryVectorStore (CVSS 9.9) affecting every RAG prototype; and Dell RecoverPoint hardcoded creds are on CISA KEV — all requiring immediate patching or compensating controls.

    5 sources
  3. 03

    AI Agent Architecture: Cloud VMs, Scheduled Tasks, and the Ensemble Pattern

    monitor

    AI agents crossed from copilot to autonomous worker this week — Cursor shipped dedicated cloud VMs producing merge-ready PRs, Anthropic's Cowork added cron-style scheduled tasks with plugins, OpenAI published configurable reasoning effort for gpt-5.2-codex, and OpenPipe's ART framework demonstrated RL-trained 14B models beating o3 at 1/64th the cost — while platform providers simultaneously began blocking third-party agent access to protect flat-rate pricing models.

    7 sources
  4. 04

    Open-Source Frontier Models and Infrastructure Economics

    monitor

    GLM-5 (744B MoE, MIT-licensed, #1 on open leaderboards) and Qwen 3.5 dropped in the same week, while Meta scrapped its custom training chip and Google struck a multibillion-dollar TPU deal with Meta — confirming Nvidia's training dominance is more durable than expected, but inference alternatives are real and open-source models now rival proprietary ones on coding and agent tasks.

    5 sources
  5. 05

    Enterprise SaaS Disruption and Agent-Native Platform Shifts

    background

    Salesforce's Agentforce hit $800M ARR but organic growth decelerated to 7-8% as AI features cannibalize legacy modules; Workday and HubSpot are locking down API data access against AI agent consumption; and Anthropic acquiring Bun signals vertical integration of the AI developer toolchain — collectively indicating that per-seat SaaS pricing is under structural threat and your integration dependencies need abstraction layers.

    5 sources

◆ DEEP DIVES

  1. 01

    Your AI Coding Tools Are the Attack Surface: SANDWORM_MODE, Claude Code RCE, and the Trust Boundary Crisis

    This is the most consequential security story of the week, confirmed across seven independent sources. Three distinct attack vectors are converging on your development pipeline simultaneously, and they all exploit the same architectural flaw: AI agents that can read untrusted content AND invoke privileged tools without hard isolation between the two.

    The SANDWORM_MODE Worm

    Socket's research team discovered a self-propagating npm worm spreading through 19 malicious packages that specifically targets AI coding assistants. The attack chain: exfiltrate crypto keys → raid password managers (Bitwarden, 1Password, LastPass) → inject malicious MCP servers into Claude, Cursor, VS Code Continue, and Windsurf. Every subsequent MCP tool call executes attacker-controlled code. The worm propagates via stolen npm/GitHub tokens and carries dormant capabilities, including a polymorphic engine that uses local Ollama (deepseek-coder:6.7b) to rewrite itself and a destructive dead man's switch that wipes your home directory when access is revoked.

    Claude Code CVEs

    Anthropic's Claude Code has confirmed RCE vulnerabilities — CVE-2025-59536 and CVE-2026-21852 — where malicious Hooks and MCP server configurations in cloned repositories achieve code execution and API key theft. The critical shift: the threat model moved from 'running untrusted code' to 'opening untrusted projects.' Separately, the Manus AI agent flaw (SilentBridge, CVSS 9.8) demonstrated the same class: hidden instructions in web pages triggered Gmail exfiltration, reverse shells with passwordless sudo, and cross-tenant CDN access.

    CI/CD Pipeline Compromise

    Adnan Khan found that Cline's Claude-powered issue triage was vulnerable to prompt injection via GitHub issue titles, chaining to cache poisoning (using the Cacheract tool) → production release credential theft → VS Code Marketplace compromise affecting millions. A separate threat actor found Khan's PoC on his test repo and used it to attack Cline before responsible disclosure completed — threat actors are actively monitoring security researchers' public repos for weaponizable PoCs.

    These aren't implementation bugs that better input validation would fix. They're architectural failures. Any agentic system without hard isolation between content processing and tool invocation is vulnerable to this entire class of attack.

    The npm Preinstall Hook Vector

    The ambar-src typosquatting attack hit 50,000 downloads in 3 days, deploying OS-specific payloads via preinstall hooks: encrypted shellcode on Windows, Golang reverse SSH on Linux, Apfell/MythicAgents on macOS, with Yandex Cloud C2. Combined with fake job-assessment repos deploying fileless multi-stage backdoors (confirmed by Microsoft), your entire code ingestion pipeline — from npm install to git clone to AI-suggested packages — is under active attack.

    Defensive Tooling Worth Evaluating

    • nono (Luke Hinds): kernel-level AI agent sandbox using Landlock/Seatbelt; prompt injection cannot bypass kernel enforcement
    • enveil: encrypts .env secrets with AES-256-GCM and injects them only at subprocess launch; prevents AI assistants from reading plaintext secrets into context
    • Titus (Praetorian): secret scanner with 450+ rules and binary file scanning; catches secrets in Office docs, PDFs, and SQLite that others miss
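    To make the enveil row concrete, here is a minimal sketch of the underlying pattern, not enveil's actual implementation: secrets sit on disk only as AES-256-GCM ciphertext, get decrypted in memory, and are handed only to the child process environment, so an assistant indexing the workspace never sees a plaintext .env. The file layout (12-byte nonce followed by ciphertext) and key delivery via ENV_KEY_HEX are assumptions for the sketch.

```python
# Sketch of the "encrypted env, inject at launch" pattern.
import os
import subprocess
import sys

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def load_encrypted_env(path: str, key: bytes) -> dict[str, str]:
    """Decrypt KEY=VALUE pairs in memory; plaintext never touches disk."""
    blob = open(path, "rb").read()
    nonce, ciphertext = blob[:12], blob[12:]
    plaintext = AESGCM(key).decrypt(nonce, ciphertext, None).decode()
    return dict(line.split("=", 1) for line in plaintext.splitlines() if "=" in line)

if __name__ == "__main__":
    # Assumption: a 32-byte key delivered out-of-band (OS keychain, CI secret),
    # never stored in the repo next to the ciphertext.
    key = bytes.fromhex(os.environ["ENV_KEY_HEX"])
    secrets = load_encrypted_env(".env.enc", key)
    # Inject only into the child process; the parent never writes plaintext anywhere.
    subprocess.run(sys.argv[1:], env={**os.environ, **secrets}, check=True)
```

    Invoked as, say, python run_with_secrets.py npm test, the child process sees the secrets in its environment while nothing in the working tree contains them in plaintext.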

    Action items

    • Audit all MCP server configurations in Claude, Cursor, Windsurf, and VS Code Continue for unauthorized entries (a minimal audit sketch follows this list) — check for SANDWORM_MODE indicators: unexpected npm deps, modified git hooks, unrecognized MCP servers
    • Enable --ignore-scripts for npm install in all CI pipelines and audit every preinstall hook exception by end of week
    • Establish sandboxed environment policy for all external code evaluation — candidate repos, OSS projects, AI-suggested packages — using ephemeral VMs with no network egress to internal infrastructure
    • Evaluate nono kernel-level sandbox for all AI coding tools with file system or network access this quarter
    • Implement package signature verification and allowlist policies for npm, NuGet, and PyPI in CI/CD — audit for malicious 'StripeApi' NuGet package across all .NET projects immediately
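    As a starting point for the first action item above, a sketch that enumerates configured MCP servers across common client config locations and flags anything not on an explicit allowlist. The paths and JSON keys are assumed defaults that vary by OS, client version, and workspace-level overrides, so verify them against your installs.

```python
# Sketch: list MCP servers configured for common AI coding clients and flag
# entries that are not on an approved allowlist.
import json
from pathlib import Path

HOME = Path.home()
# Assumed default locations; also check per-repo configs (e.g. .cursor/mcp.json inside a workspace).
CANDIDATE_CONFIGS = [
    HOME / "Library/Application Support/Claude/claude_desktop_config.json",
    HOME / ".cursor/mcp.json",
    HOME / ".codeium/windsurf/mcp_config.json",
    HOME / ".continue/config.json",
]
ALLOWLIST = {"filesystem", "github"}  # hypothetical set of org-approved servers

for cfg in CANDIDATE_CONFIGS:
    if not cfg.exists():
        continue
    data = json.loads(cfg.read_text())
    servers = data.get("mcpServers") or data.get("mcp", {}).get("servers") or {}
    for name, spec in servers.items():
        if not isinstance(spec, dict):
            continue
        cmd = " ".join([spec.get("command", "")] + list(spec.get("args", [])))
        status = "OK" if name in ALLOWLIST else "REVIEW"
        print(f"[{status}] {cfg}: {name} -> {cmd}")
```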

    Sources: [tl;dr sec] #317 - 100+ Kernel Bugs in 30 Days, Secret Scanning, Threat Actors Stealing Your PoC · 0-Days Sold to Russian Broker, Serv-U RCEs, RoguePilot Flaw, FileZen Exploitation · Manus Prompt Injection 💉, CarGurus 12.M Leak 🚙, LLM-based Deanonymization 🥸 · Claude Code Flaws Exposed Devices to Silent Hacking · 5 trends that should top CISO's RSA 2026 agendas

  2. 02

    Cisco SD-WAN Exploited Since 2023, Cloud Hypervisor Guest Escape, Semantic Kernel RCE — Patch Now or Assume Breach

    Five sources converge on a brutal CVE week: six CVSS 10.0 scores, two CISA KEV additions, and a multi-year nation-state exploitation campaign that just went public. Here's what demands your immediate attention.

    Cisco SD-WAN: Three Years of Silent Exploitation

    The attack chain is elegant and devastating: CVE-2026-20127 bypasses authentication on current firmware, then the attacker downgrades the software to a version vulnerable to CVE-2022-20775, granting root-level control. Patching the 2022 CVE gave you zero protection — the attacker just rolls you back. CISA issued an emergency directive; Five Eyes agencies coordinated the advisory. Researchers characterize the actors as 'highly sophisticated and disciplined' — the standard euphemism for nation-state.

    If you're running Cisco SD-WAN, patching is necessary but insufficient. Root-level persistent access means you cannot trust the device after patching. Full rebuilds may be required.

    The downgrade attack pattern is not Cisco-specific. Any network appliance that allows firmware downgrade without cryptographic attestation is vulnerable. Audit your entire edge device fleet for secure boot and anti-rollback protections.

    Cloud Hypervisor: CVSS 10.0 Guest Escape

    CVE-2026-27211 allows a malicious guest VM to read arbitrary host files through virtio-block devices backed by raw images. Separately, Kata Containers CVE-2026-24834 (CVSS 9.3) allows root code execution in the guest micro-VM. These are the technologies that are supposed to provide stronger-than-container isolation for multi-tenant workloads. The Cloud Hypervisor bug is a silent data exfiltration path — it won't trigger a typical IDS.

    Microsoft Semantic Kernel: RCE in Your RAG Prototype

    CVE-2026-26030 (CVSS 9.9) sits in the Python SDK's InMemoryVectorStore filter — the exact component every RAG tutorial uses and many teams shipped to production without security review. Fixed in version 1.39.4. These services often run with broad permissions to access knowledge bases and internal APIs. An RCE here gives an attacker everything those permissions grant.

    Actively Exploited: Dell RecoverPoint + Roundcube

    Dell RecoverPoint CVE-2026-22769 (CVSS 10.0) — hardcoded credentials in backup infrastructure, now on CISA KEV. Backup systems have read access to everything they protect. Roundcube CVE-2025-49113 — PHP object deserialization RCE affecting versions before 1.5.10 and 1.6.x before 1.6.11, also actively exploited.

    The SSTI Pattern

    Three unrelated products disclosed Server-Side Template Injection this week: Datart/Freemarker (CVSS 9.9), WSO2/Velocity (CVSS 10.0), Flask-Reuploaded/Jinja2 (CVSS 9.8). When the same vulnerability class appears across three different template engines simultaneously, it's systemic. Audit all server-side template usage where user input influences the template, not just the data.
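    The SSTI pattern is worth internalizing because the failure mode is identical in every engine: user input ends up inside the template source instead of in the data passed to it. A generic Jinja2 illustration of the unsafe and safe shapes (an illustration of the class, not code from the affected products):

```python
# Server-Side Template Injection in one picture: the bug is where user input
# lands, not which template engine you use.
from jinja2 import Template
from jinja2.sandbox import SandboxedEnvironment

user_input = "{{ 7 * 7 }}"  # stand-in for attacker input; real payloads reach os/popen

# UNSAFE: user input is concatenated into the template source, so it executes.
print(Template("Hello " + user_input).render())               # -> "Hello 49"

# SAFE: the template is fixed; user input is only data being rendered.
print(Template("Hello {{ name }}").render(name=user_input))   # -> "Hello {{ 7 * 7 }}"

# If templates genuinely must be dynamic, at minimum render them in a
# sandboxed environment with a restricted symbol table.
print(SandboxedEnvironment().from_string("Hello {{ name }}").render(name=user_input))
```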

    Action items

    • If running Cisco SD-WAN: inventory all devices, collect and preserve logs before remediation, patch CVE-2026-20127, then initiate forensic review of control plane logs going back to 2023
    • Patch Cloud Hypervisor immediately in any environment — CVE-2026-27211 allows guest-to-host file exfiltration via virtio-block raw images
    • Pin Microsoft Semantic Kernel Python SDK to >=1.39.4 across all projects this week (see the version-check sketch after this list)
    • Patch Dell RecoverPoint (CVE-2026-22769) and Roundcube (CVE-2025-49113) as emergency P0 — both on CISA KEV
    • Implement firmware integrity verification and anti-downgrade protections across all network edge appliances this quarter — not just Cisco
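    For the Semantic Kernel pin (third item above), a small CI guard along these lines can fail builds that still resolve a vulnerable version; the PyPI distribution name semantic-kernel is assumed, and packaging must be available in the CI environment.

```python
# CI guard sketch: fail the build if a vulnerable semantic-kernel is installed.
import sys
from importlib.metadata import PackageNotFoundError, version
from packaging.version import Version

MIN_SAFE = Version("1.39.4")  # fix version for CVE-2026-26030 cited above

try:
    installed = Version(version("semantic-kernel"))
except PackageNotFoundError:
    sys.exit(0)  # not a dependency of this project; nothing to enforce
if installed < MIN_SAFE:
    sys.exit(f"semantic-kernel {installed} is below {MIN_SAFE}; upgrade before deploying")
print(f"semantic-kernel {installed} OK")
```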

    Sources: @RISK® The Consensus Security Vulnerability Alert: Vol. 26, Num. 08 · Governments issue warning over Cisco zero-day attacks dating back to 2023 · 0-Days Sold to Russian Broker, Serv-U RCEs, RoguePilot Flaw, FileZen Exploitation · Claude Code Flaws Exposed Devices to Silent Hacking · Srsly Risky Biz: Is Claude Too Woke For War?

  3. 03

    AI Agents Get Cloud VMs, Cron Jobs, and RL Training — The Architecture Implications You Need to Plan For

    This week marks a clear inflection: AI agents moved from 'copilot in your editor' to 'autonomous worker with its own compute, credentials, and schedule.' Seven sources confirm the convergence, and the engineering implications are mostly under-discussed.

    Cursor: Agents With Dedicated Cloud VMs

    Cursor agents now run in dedicated cloud VMs with full dev environments — they clone repos, install dependencies, execute code, run tests, and produce merge-ready PRs with artifacts (videos, screenshots, logs). They ship across web, mobile, desktop, Slack, and GitHub. The security model is non-trivial: these agents need the same access controls as a human developer — repo read/write, environment secrets, network egress. In regulated environments, you now need to answer 'who reviewed this PR?' when the author is an AI agent running in a third party's cloud.

    Anthropic Cowork: Cron-for-Agents

    Cowork added scheduled recurring tasks with a plugin architecture spanning design, engineering, and operations. The Customize sidebar with plugins, skills, and connectors suggests Anthropic is building toward an extensible agent platform — think Zapier with an LLM brain. Research preview on macOS and Windows for paid plans. The acquisition of Vercept for computer-use capabilities signals they're serious about GUI-interacting agents.

    The Cost Optimization Breakthrough: ART + GRPO

    OpenPipe's ART framework uses GRPO (the RL algorithm behind DeepSeek-R1) to train small open-source models that outperform frontier models on specific agentic tasks: 96% accuracy vs. o3 on email search, 5x faster, 64x cheaper, trained on a single GPU for under $80. The architecture cleanly separates inference (vLLM) from training (Unsloth + LoRA) with hot-swappable checkpoints. RULER's LLM-as-a-judge exploits the fact that relative ranking is more reliable than absolute scoring — a perfect match for GRPO's needs.

    If you're paying $55/1K queries for a frontier model on a structured agentic workflow, you can now spend $80 to RL-train a 14B model that's actually better at your specific task. The 64x cost reduction is real but shifts cost from per-query fees to infrastructure ownership.
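    The core step behind GRPO (and the reason RULER's relative ranking pairs well with it) fits in a few lines: sample a group of rollouts per task, score them, and convert each score into an advantage relative to its own group rather than to a learned value function. A generic sketch of that computation, not OpenPipe's ART API:

```python
# Group-relative advantages, the heart of GRPO: no critic/value model needed.
from statistics import mean, stdev

def grpo_advantages(group_rewards: list[float], eps: float = 1e-6) -> list[float]:
    """Turn per-rollout rewards into advantages relative to their own group."""
    mu = mean(group_rewards)
    sigma = stdev(group_rewards) if len(group_rewards) > 1 else 0.0
    return [(r - mu) / (sigma + eps) for r in group_rewards]

# Four rollouts of the same email-search task, scored by a RULER-style judge
# that ranks them against each other rather than against an absolute rubric.
print(grpo_advantages([0.9, 0.2, 0.6, 0.1]))
# Positive advantages up-weight the tokens of better-than-average rollouts in
# the policy-gradient update; negative advantages down-weight the rest.
```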
    The Ensemble Pattern: OpenAI's Own Architecture

    OpenAI's VP of Science confirmed that they internally use orchestrator + cheap specialized model ensembles rather than single large model calls, and he considers failure to adopt this pattern the most common costly mistake among teams building on their APIs. Their top engineers run 3-4 parallel Codex agents across separate work trees, treating idle human time as wasted compute cycles.

    The Hidden MCP Cost Tax

    MCP's eager tool catalog loading is a hidden cost multiplier at scale. For broad API surfaces like Google Cloud, thousands of tokens are consumed before the agent does any work. Replacing cloud provider MCP integrations with direct CLI invocations achieves a 94% token cost reduction — the model already knows how to use CLIs and discovers capabilities lazily via --help.

    Platform Risk: Agent Access Getting Killed

    Google blocked OpenClaw from accessing Antigravity, and Anthropic previously blocked it from Claude Code. The economics don't work: a human makes 50-100 requests per session; an agent makes thousands. Anthropic then acquired Vercept and shipped mobile monitoring for Claude Code — they're not opposed to agents, they're opposed to third-party agents. Design agent orchestration systems to be model-provider-agnostic from day one.

    Action items

    • Audit your CI/CD and repo access policies for AI agent compatibility this sprint — Cursor's cloud agents need repo access, environment secrets, and network egress
    • Prototype an ART training run on your lowest-risk agentic workflow using Qwen2.5-7B on a single GPU this quarter
    • Audit MCP server configurations and measure token consumption per session — replace broad cloud provider MCP integrations with direct CLI invocations where possible (see the sketch after this list)
    • Decompose one high-volume AI pipeline into an orchestrator + specialized model ensemble and A/B test against single-model baseline
    • Design all agent integrations with provider-agnostic abstractions — Google and Anthropic are actively blocking third-party agent access to protect margins
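    For the MCP-versus-CLI item above, a hedged sketch of the lazy-discovery pattern: expose one generic run-CLI tool to the agent and let it pull --help output on demand instead of preloading a full MCP tool catalog into every session. The binary allowlist and the way the tool gets registered with your agent framework are assumptions.

```python
# Sketch: one generic CLI tool instead of an eagerly loaded MCP catalog.
import shlex
import subprocess

ALLOWED_BINARIES = {"gcloud", "aws", "kubectl"}  # assumption: org-approved CLIs only

def run_cli(command: str, timeout: int = 60) -> str:
    """Single generic tool: run an allowlisted CLI and return its output."""
    argv = shlex.split(command)
    if not argv or argv[0] not in ALLOWED_BINARIES:
        return f"refused: only {sorted(ALLOWED_BINARIES)} are allowed"
    try:
        result = subprocess.run(argv, capture_output=True, text=True, timeout=timeout)
    except FileNotFoundError:
        return f"refused: {argv[0]} is not installed on this host"
    return (result.stdout + result.stderr)[-8000:]  # cap what flows back into context

# The agent's first call is usually discovery, not work: it reads --help lazily
# instead of paying for a preloaded catalog of every endpoint on every turn.
print(run_cli("kubectl --help")[:400])
```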

    Sources: Perplexity Computer 💻, DeepSeek withholds v4 🐋, Cowork scheduled tasks 💼 · ART: Train Agents That Can Learn From Experience · Claude has some conflicts · Intelligence crisis 🧠, Claude Code remote control 🕹, React Native for Meta Quest 🥽 · OpenAI's Kevin Weil on the Future of Scientific Discovery · AI Agenda Exclusive: A Robot Data Startup Raises $60 Million

  4. 04

    GPU Dominance Hardens, Open-Source Models Leapfrog, and the Compute Scarcity Is Structural

    Multiple sources this week paint a consistent picture of the AI infrastructure landscape: Nvidia's training monopoly is more durable than expected, open-source models have reached parity with proprietary ones on key tasks, and compute scarcity is a physics problem, not a funding problem.

    Nvidia's Position Is Unassailable for Training

    Nvidia posted $68.1B quarterly revenue (73% growth), 55.6% net margin, and $96.6B free cash flow — second only to Apple. NVLink networking revenue hit $11B in a single quarter (up 263% YoY), confirming that inter-GPU networking is now the critical scaling bottleneck, not raw compute. Meta scrapped its most advanced custom training chip and fell back to a simpler design. Even with tens of billions in annual AI spend and top chip designers poached from Apple and Qualcomm, building competitive training silicon proved too hard.

    However, Google struck a multibillion-dollar TPU deal with Meta — the first time Google's custom silicon has been sold externally at hyperscaler scale. Meta's infrastructure team deciding TPUs are worth the ecosystem switching cost signals the performance gap has closed enough for inference diversification.

    Open-Source Models Hit a New Ceiling

    • GLM-5 (Z.ai): 744B MoE / 40B active, MIT license; #1 on open leaderboards, strong on coding + agent tasks
    • Qwen 3.5 (Alibaba): multimodal, open license; aggressive pricing, massive HuggingFace adoption
    • DeepSeek V4: teased, not released, license TBD; early access withheld from US chipmakers

    Three frontier models in one week. GLM-5's MIT license means you can fine-tune and deploy without legal overhead. But provenance risk is real: Anthropic alleges systematic model extraction (24,000 fake accounts, 16M queries), and DeepSeek reportedly trained on export-controlled Blackwell GPUs. Using these models in regulated or government-adjacent work may carry compliance risk.

    The Compute Crunch Is Physical, Not Financial

    OpenAI's Stargate compute project has stalled, with projected $111B cash burn through 2030. Hyperscalers plan $650B in AI infrastructure spend in 2026 — roughly Switzerland's GDP. Yet even with unlimited funding ambition, data center construction, power infrastructure, and TSMC fab capacity have 12-18 month lead times that money can't compress. Nvidia is now guaranteeing $3.5B in data center leases (4x the previous quarter), essentially becoming an AI infrastructure financier — picking which companies get built out.

    The companies that win in a supply-constrained environment aren't the ones with the most GPUs — they're the ones that extract the most value per GPU-hour. Quantization, distillation, speculative decoding, and MoE architectures are your real competitive levers.
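    A minimal harness for the first action item below, assuming both your incumbent model and a self-hosted GLM-5 or Qwen 3.5 are reachable through OpenAI-compatible endpoints (vLLM and most hosted providers expose one). The endpoint URLs, API keys, and model names are placeholders; swap in task-level quality scoring that matches your workloads.

```python
# Benchmark sketch: same prompts against two OpenAI-compatible endpoints,
# comparing latency side by side (add your own quality scoring per task).
import time
from openai import OpenAI

ENDPOINTS = {  # placeholders: incumbent provider plus a vLLM-served open model
    "incumbent": (OpenAI(base_url="https://api.example.com/v1", api_key="..."), "your-current-model"),
    "glm-5": (OpenAI(base_url="http://localhost:8000/v1", api_key="unused"), "zai-org/GLM-5"),
}
PROMPTS = [
    "Write a unit test for a token-bucket rate limiter.",
    "Explain what this stack trace means: ...",
]

for name, (client, model) in ENDPOINTS.items():
    for prompt in PROMPTS:
        start = time.time()
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
            max_tokens=512,
        )
        answer = resp.choices[0].message.content or ""
        print(f"{name:10s} {time.time() - start:6.2f}s  {answer[:60]!r}")
```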

    Action items

    • Benchmark GLM-5 and Qwen 3.5 against your current model provider on actual production workloads this quarter — the MIT license on GLM-5 eliminates fine-tuning legal overhead
    • Separate inference and training infrastructure strategies — evaluate AMD MI300X and Google Cloud TPUs for serving workloads specifically
    • Lock in GPU capacity commitments for the next 12 months if running training workloads — don't rely on spot/on-demand
    • Invest in inference efficiency techniques (quantization, speculative decoding, batching) as a hedge against compute cost inflation

    Sources: AI News Weekly - Issue #467 · The Sequence Chat #814: Z.ai's Zixuan Li Talks About GLM · Meta's Internal Chip Design Efforts Hit Roadblocks · Exclusive: Google Strikes Multibillion-Dollar AI Chip Deal With Meta · Nvidia Posts Blockbuster Numbers · Amazon's OpenAI Investment Could Link Funding to IPO or AGI

◆ QUICK HITS

  • Oxfmt formats code 30x faster than Prettier with 100% JS/TS compatibility — evaluate as a drop-in replacement in CI pipelines

    Intelligence crisis 🧠, Claude Code remote control 🕹, React Native for Meta Quest 🥽

  • Karpenter overtook Cluster Autoscaler at 34% Kubernetes adoption (3x jump in 2 years) — if you're still on CA, you're now in the minority

    Jane Street vs Bitcoin 🪙, AGI career decisions 💼, Vercel Chat SDK 🤖

  • Ladybird browser validated LLM-assisted C++-to-Rust migration at 25,000 lines with zero regressions — strongest evidence yet for production-viable AI translation

    Manus Prompt Injection 💉, CarGurus 12.M Leak 🚙, LLM-based Deanonymization 🥸

  • 600+ FortiGate firewalls compromised using commercial AI toolkits targeting exposed management ports and weak passwords — audit your edge device management plane exposure

    Srsly Risky Biz: Is Claude Too Woke For War?

  • Sentry versions 21.12.0–26.1.0 have SAML-based account takeover (CVE-2026-27197) — upgrade self-hosted Sentry to 26.2.0 if using SAML SSO

    @RISK® The Consensus Security Vulnerability Alert: Vol. 26, Num. 08

  • OpenClaw exposed 21,000 instances with leaked OAuth tokens for Slack, Gmail, and Google Drive in just 2 weeks — inventory all AI agent OAuth grants in your admin consoles

    0-Days Sold to Russian Broker, Serv-U RCEs, RoguePilot Flaw, FileZen Exploitation

  • LakeFS, Neon, and Dolt represent three distinct data versioning architectures now production-ready — evaluate against your current snapshot-and-pray approach

    Git for Data Lakes 🌿, The Data Reckoning 📉, Query Flow Diagrams 🗺️

  • Anthropic abandoned its safety pause pledge under Pentagon pressure over a $200M contract — no major AI lab will self-impose meaningful capability limits; build your own enforcement layer

    ☕️ Still scheming

  • Apple released Foundation Models SDK for Python enabling on-device inference with type-safe responses via decorators — evaluate for privacy-sensitive or latency-critical use cases

    Perplexity Computer 💻, DeepSeek withholds v4 🐋, Cowork scheduled tasks 💼

  • AI agent swarm found ~100 exploitable kernel LPE bugs across Windows drivers for $600 total ($4/bug, avg CVSS 8.2) — after 90+ days, only Fujitsu patched

    [tl;dr sec] #317 - 100+ Kernel Bugs in 30 Days, Secret Scanning, Threat Actors Stealing Your PoC

BOTTOM LINE

Your AI coding assistants are under active attack from a self-propagating npm worm injecting malicious MCP servers, Claude Code has confirmed RCE flaws triggered by merely opening a cloned repo, Cisco SD-WAN has been silently exploited since 2023 via a firmware downgrade chain, and a CVSS 10.0 Cloud Hypervisor guest escape breaks container isolation — all while AI agents are getting their own cloud VMs, cron schedules, and the ability to produce merge-ready PRs autonomously. Audit your MCP configs, patch your infrastructure, and treat every AI tool permission grant like a production credential, because that's exactly what attackers are treating them as.
