PROMIT NOW · ALL SIX LENSES · 2026-04-13

◆ DAILY BRIEFING

Monday, April 13, 2026

6 angles · 72 sources · 8,324 words · ~43 min end to end

  1. Engineer 12 sources · 9 min

    GLM-5.1 just shipped under an MIT license — a 754B MoE scoring 58.4 on SWE-Bench Pro (beating GPT-5.4 and Claude Opus), with 8-hour sustained autonomous execution across 1,700 tool calls — while Google dropped Gemma 4 under Apache 2.0, with native function calling in edge models as small as 2B.

    Two MIT/Apache 2.0 models — GLM-5.1 at 754B with 8-hour autonomous execution and Gemma 4 with native function calling on edge devices as small as 2B — just matched or beat proprietary APIs on coding benchmar…

    Read full briefing →
  2. Security 12 sources · 6 min

    Anthropic accidentally leaked 512,000 lines of Claude Code source code revealing a hidden background agent called KAIROS that has been running undisclosed in developer environments — 50,000 copies spread before containment.

    Anthropic shipped a hidden AI agent called KAIROS inside Claude Code — now exposed in a 512K-line source leak with 50,000 copies in the wild — while a zero-cost voice cloning tool that needs 3 seconds…

    Read full briefing →
  3. Data Science 12 sources · 7 min

    Open-source MoE models just crossed the frontier quality threshold under permissive licenses: GLM-5.1 (754B MoE, MIT) scores 58.4 on SWE-Bench Pro — reportedly beating GPT-5.4 and Claude Opus 4.6 — while Gemma 4's 26B MoE ranks #6 on Arena AI under Apache 2.0, outperforming models 20x its size.

    Open-source MoE models (GLM-5.1 at 58.4 SWE-Bench Pro under MIT, Gemma 4 26B at Arena AI #6 under Apache 2.0) now match or beat proprietary frontier models, diffusion LLMs are within striking distance…

    Read full briefing →
  4. Product 12 sources · 7 min

    GLM-5.1 just topped SWE-Bench Pro at 58.4 — beating both GPT-5.4 and Claude Opus 4.6 — under an MIT license, with 8-hour autonomous execution and 1,700 tool calls per session.

    Open-source AI models just passed proprietary leaders on the coding benchmark that matters most (GLM-5.1 at 58.4 SWE-Bench Pro, MIT license, 8-hour autonomous execution) — while UBS confirms that over…

    Read full briefing →
  5. Leader 12 sources · 7 min

    Open-source AI just dethroned the proprietary frontier: Z.AI's GLM-5.1 — MIT-licensed, 754B parameters — scored 58.4 on SWE-Bench Pro, beating both GPT-5.4 and Claude Opus 4.6, while operating autonomously for 8 hours with 1,700 tool calls.

    The most capable coding AI on earth is now free (GLM-5.1 beat GPT-5.4 under MIT license), but actual user data shows the market wants better copilots, not more autonomy — and the code AI is shipping 3…

    Read full briefing →
  6. Investor 12 sources · 7 min

    Open-source AI just claimed the #1 position on SWE-Bench Pro under an MIT license — the same week UBS confirmed over 50% of enterprises are actively 'containing' non-AI software spend and the selloff spread to cybersecurity stocks for the first time (Palo Alto -6.7%, CrowdStrike -4%).

    Open-source AI just claimed the frontier benchmark crown under MIT license while UBS confirmed half of enterprises are actively capping non-AI software spend — the model layer is commoditizing and the…

    Read full briefing →