Z.ai
1,113 posts
@Zai_org
The AI Lab behind GLM models, dedicated to inspiring the development of AGI to benefit humanity.
huggingface.co/zai-org
discord.gg/QR7SARHRxK
Z.ai’s posts
Truly sorry for any confusion or frustration caused by unclear, misleading, or inappropriate rules in our moderation system and on our pages.
OpenClaw, Hermes, and SillyTavern are now explicitly marked as supported under the GLM Coding Plan. Other general-purpose tools will be
Fantastic to see GLM being applied to such fresh, dynamic scenarios.
Quote
Jifan Yu
@yujifan_0326
Doing some stress tests on OpenMAIC’s Interactive Simulation with a DNA Replication case.
Both powered by @Zai_org — with GLM-5.1 and GLM-5V-Turbo each generating these complex pedagogical simulations in real time. Can you spot the difference? The "Turbo" is catching up
GLM-5.1 Tool Calling Issue Fix & Chat Template Update
If you are running GLM-5.1 with vLLM/SGLang and using tool calling, please update your chat template. huggingface.co/zai-org/GLM-5.
Issue
When using tool calling, frameworks including vLLM automatically convert plain-text tool
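The symptom of the template bug described above is that a tool call leaks into the assistant's plain-text content instead of arriving as a structured `tool_calls` entry in the OpenAI-compatible response. A minimal sketch of the difference, with a made-up `get_weather` tool (the schema shape is the standard OpenAI-style format that vLLM/SGLang expose; nothing here is Z.ai's actual fix):

```python
import json

# Illustrative tool schema in the OpenAI-compatible format; the
# function name and fields are example values, not from the GLM repo.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

def is_structured_tool_call(message: dict) -> bool:
    """True if the assistant message carries a parsed tool_calls list,
    False if the call leaked into plain-text content (the bug's symptom)."""
    calls = message.get("tool_calls") or []
    return len(calls) > 0 and not message.get("content")

# With a working chat template, the call comes back structured:
good = {
    "role": "assistant",
    "content": None,
    "tool_calls": [{
        "id": "call_0",
        "type": "function",
        "function": {"name": "get_weather",
                     "arguments": json.dumps({"city": "Beijing"})},
    }],
}

# With the broken template, the same call leaks into content as text:
bad = {"role": "assistant",
       "content": '<tool_call>get_weather {"city": "Beijing"}</tool_call>'}

print(is_structured_tool_call(good), is_structured_tool_call(bad))
```

A quick check like `is_structured_tool_call` on responses is an easy way to confirm the template update took effect on your deployment.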
BREAKING: GLM-5.1 overwhelmingly dominates design-centric coding tasks among open-weight models
In the categories featured below, it is most comparable to Opus 4.6, at ~1/8th the cost
Huge congrats to the team for this achievement!
GLM-5.1 is now #3 in Code Arena, surpassing Gemini 3.1 and GPT-5.4, and on par with Claude Sonnet 4.6.
The first frontier-level open model to break into the top 3. It’s a major +90-point jump over GLM-5, and +100 over Kimi K2.5 Thinking.
Huge congrats to
I often discuss my three-level vision for opening GLM to the community:
First, we focus on accessibility by lowering the barrier to entry and removing unnecessary constraints so developers can truly explore the model. Second, we provide a robust baseline that empowers everyone to
Quote
Peter Yang
@petergyang
Silicon Valley is quietly running on Chinese open source AI models.
Here are the receipts:
→ Cursor confirmed last month that Composer 2 is built on Moonshot's Kimi K2.5
→ Cognition's SWE-1.6 model is likely post-trained on Zhipu's GLM
→ Shopify saved $5M a year by x.com/petergyang/sta…
We are thrilled to welcome Zhipu as a Gold Sponsor for #GOSIMParis 2026!
paris2026.gosim.org/fr/
From groundbreaking LLMs to a thriving developer ecosystem, Zhipu is at the heart of the AI revolution. Stay tuned as we power the GOSIM Agentic
We partnered with Modal to bring GLM-5.1 to their platform. Free to try as an endpoint for the next month.
GLM-5.1 further improves upon GLM-5's coding abilities and long-horizon effectiveness.
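Hosted endpoints like this typically speak the OpenAI-compatible chat format, so trying the model is just a matter of pointing a standard request at the URL. A hypothetical sketch using only the standard library (the base URL and model id below are placeholders, not the real endpoint):

```python
import json
import urllib.request

BASE_URL = "https://example--glm-endpoint.modal.run/v1"  # placeholder URL
MODEL = "zai-org/GLM-5.1"  # assumed model id

def build_chat_request(prompt: str) -> urllib.request.Request:
    """Assemble an OpenAI-style /chat/completions request."""
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.6,
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("Write a binary search in Python.")
print(req.full_url)
```

Sending the request with `urllib.request.urlopen(req)` (or any OpenAI-compatible client pointed at the same base URL) would return the completion; the payload shape is the same one used against Z.ai's official API.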
GLM-5.1 from Z.ai is now available in Tabbit.
Time to push the limits of reasoning and tackle complex agentic workflows.
Try Tabbit at tabbitbrowser.com
INCREDIBLE
GLM-5.1 weights are now open source
> i’ve had early access to the weights for the past few days
> and yeah… this one matters a lot
benchmarks?
> SWE-Bench Pro: 58.4
> beats Opus 4.6 (57.3)
> beats GPT-5.4 (57.7)
> beats Gemini 3.1 Pro (54.2)
let that sink in
Introducing GLM-5.1 for understanding research papers
Highlight any section of a paper to ask questions and “@” other papers for quick context, comparisons, and benchmark references
If you’ve encountered garbled output like this while using GLM-5 or GLM-5.1 on our official service, the issue is now resolved.
We've patched the underlying inference-side bugs and will be releasing an article shortly to dive into the technical details and our specific fixes.
The chart says GLM-5.1 scored 54.9 on coding benchmarks. Three points behind Claude Opus 4.6. Interesting but not the story.
The story is what trained it. Zero Nvidia GPUs. 100,000 Huawei Ascend 910B chips. Every parameter. Every one of the 28.5 trillion training tokens.
Zhipu just took open-source agents to a whole new level!
GLM-5.1 is officially open-sourced:
#1 among open models on SWE-Bench Pro (58.4), #3 globally
A genuinely long-horizon agent: runs autonomously for 8 hours, with thousands of iterations plus a self-review loop
Built a complete Linux desktop with 50+ apps from scratch (the video is insane)
A straight 6x performance boost on Vector-DB-Bench
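The "self-review loop" mentioned above is, in spirit, a generate-then-critique cycle: propose, review, feed the critique back, repeat. A toy sketch of that control flow (the `generate` and `review` callables are stand-ins, not Z.ai's actual agent code):

```python
def run_agent(task, generate, review, max_iters=1000):
    """Iterate: propose a change, self-review it, stop when the review
    passes. `generate` and `review` are placeholders for model calls."""
    state = task
    for i in range(max_iters):
        proposal = generate(state)
        ok, feedback = review(proposal)
        if ok:
            return proposal, i + 1
        state = feedback  # feed the critique back into the next attempt
    return state, max_iters

# Toy stand-ins: "generate" appends a fix, "review" passes once 3 fixes land.
result, iters = run_agent(
    "bug",
    generate=lambda s: s + "+fix",
    review=lambda p: (p.count("+fix") >= 3, p),
)
print(result, iters)
```

The real system presumably runs this loop against code, tests, and a live environment for hours; the sketch only shows the termination-by-self-review structure.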
Z.ai releases GLM-5.1, a 754B-parameter model that it says outperforms GPT-5.4 and Claude Opus 4.6 on SWE-bench Pro, available under an MIT license (VentureBeat)
venturebeat.com/technology/ai-
GLM-5.1 just launched in the Text Arena, and is now the #1 open model.
It outperforms the next best open model, its predecessor, GLM-5, by +11 points and +15 over Kimi K2.5 Thinking.
It shows strength in:
- #1 open model in Longer Query (#4 overall)
- #1 open model