Alireza

39 posts

@HighADHD

Joined December 2017
23 Following · 10 Followers

Alireza@HighADHD·
@Zai_org They quietly removed the Flagship model from the Pro pricing plan, whose prices are higher, and didn't give the Lite plan the same 120-prompt limit it had under the contract. They also capped it by God knows how much, because they removed that information from the bottom of the page.
0
0
0
9
Alireza@HighADHD·
@Zai_org The company has been in high demand since last month, and what we got afterward is an even more powerful model, when you can't even handle the weaker one. So instead, you decided to add new limits for users who purchased and subscribed under a different contract and TOS!
1
0
0
11
Z.ai@Zai_org·
Ever wondered what researchers at Z.ai are really like? We handed them the mic and let them show you what GLM-5 can do.
30
50
608
48.3K
Alireza@HighADHD·
@TheBinaryNeuron They quietly removed the Flagship model from the Pro pricing plan, whose prices are higher, and didn't give the Lite plan the same 120-prompt limit it had under the contract. They also capped it by God knows how much, because they removed that information from the bottom of the page.
0
0
0
14
Alireza@HighADHD·
@TheBinaryNeuron They're just scammers. Don't fall for their tricks. They only brag about how good their model is compared to Opus, but they can't even handle the server capacity they need. Instead, they decided to rug-pull their customers! :))
3
0
2
142
Alireza@HighADHD·
@arena @Zai_org They quietly removed the Flagship model from the Pro pricing plan, whose prices are higher, and didn't give the Lite plan the same 120-prompt limit it had under the contract. They also capped it by God knows how much, because they removed that information from the bottom of the page.
0
0
0
4
Alireza@HighADHD·
@arena @Zai_org They're just scammers. Don't fall for their tricks. They only brag about how good their model is compared to Opus, but they can't even handle the server capacity they need. Instead, they decided to rug-pull their customers! :)) #GLM5
1
0
0
2
Arena.ai@arena·
How does the #1 open Text Arena model hold up in agentic coding tasks? We tested GLM-5 in Code Arena with head-to-head SVG prompts vs. top frontier AI models. What do you think? Scores for @Zai_org's GLM-5 in Code Arena coming soon. Test out GLM-5 for yourself and get voting.
Arena.ai@arena

GLM-5 from @Zai_org just climbed to #1 among open models in Text Arena!
▫️ #1 open model, on par with claude-sonnet-4.5 & gpt-5.1-high
▫️ #11 overall, scoring 1452, +11 pts over GLM-4.7
Test it out in the Code Arena and keep voting; we'll see how GLM-5 performs on agentic coding tasks next! Congrats to @Zai_org for this amazing achievement.

27
42
640
115K
Alireza@HighADHD·
They quietly removed the Flagship model from the Pro pricing plan, whose prices are higher, and didn't give the Lite plan the same 120-prompt limit it had under the contract. They also capped it by God knows how much, because they removed that information from the bottom of the page.
0
0
0
18
Alireza@HighADHD·
They're just scammers. Don't fall for their tricks. They only brag about how good their model is compared to Opus, but they can't even handle the server capacity they need. Instead, they decided to rug-pull their customers! :)) #GLM5
1
0
0
17
Alireza@HighADHD·
@Zai_org They're just scammers. Don't fall for their tricks. They only brag about how good their model is compared to Opus, but they can't even handle the server capacity they need. Instead, they decided to rug-pull their customers! :))
0
0
0
3
Z.ai@Zai_org·
Introducing GLM-5: From Vibe Coding to Agentic Engineering
GLM-5 is built for complex systems engineering and long-horizon agentic tasks. Compared to GLM-4.5, it scales from 355B params (32B active) to 744B (40B active), with pre-training data growing from 23T to 28.5T tokens.
Try it now: chat.z.ai
Weights: huggingface.co/zai-org/GLM-5
Tech Blog: z.ai/blog/glm-5
OpenRouter (previously Pony Alpha): openrouter.ai/z-ai/glm-5
Rolling out, starting with Coding Plan Max users: z.ai/subscribe
314
784
5.4K
1.5M
Alireza@HighADHD·
@vercel_dev They're just scammers. Don't fall for their tricks. They only brag about how good their model is compared to Opus, but they can't even handle the server capacity they need. Instead, they decided to rug-pull their customers! :))
0
0
0
11
Vercel Developers@vercel_dev·
GLM-5 is now on AI Gateway. Better long-range planning, multiple thinking modes, and improved multi-step agent tasks versus previous Z.ai models. Use model: 'zai/glm-5' to get started. vercel.com/changelog/glm-…
7
9
152
11.4K
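The changelog note above amounts to pointing a chat request at the model id 'zai/glm-5'. As a rough sketch only: the model id comes from the tweet, but the generic chat-completions request shape and the helper name below are assumptions, not Vercel documentation.

```typescript
// Hypothetical sketch of a chat-completions-style request body for GLM-5.
// Only the model id 'zai/glm-5' is taken from the changelog note; the
// request shape here is the common OpenAI-compatible format, assumed.

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

interface ChatRequest {
  model: string;
  messages: ChatMessage[];
}

// Build the request body; actually sending it would additionally require
// a gateway URL and an API key, which are omitted here.
function buildGlm5Request(prompt: string): ChatRequest {
  return {
    model: "zai/glm-5", // the id quoted in the changelog note
    messages: [{ role: "user", content: prompt }],
  };
}

const body = buildGlm5Request("Summarize this changelog entry.");
console.log(JSON.stringify(body));
```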
Alireza@HighADHD·
@ArtificialAnlys @Zai_org They're just scammers. Don't fall for their tricks. They only brag about how good their model is compared to Opus, but they can't even handle the server capacity they need. Instead, they decided to rug-pull their customers! :))
0
0
0
42
Artificial Analysis@ArtificialAnlys·
GLM-5 is the new leading open weights model!

GLM-5 leads the Artificial Analysis Intelligence Index amongst open weights models and makes large gains over GLM-4.7 in GDPval-AA, our agentic benchmark focused on economically valuable work tasks.

GLM-5 is @Zai_org's first new architecture since GLM-4.5: each of the GLM-4.5, 4.6 and 4.7 models were 355B total / 32B active parameter mixture of experts models. GLM-5 scales to 744B total / 40B active, and integrates DeepSeek Sparse Attention. This puts GLM-5 more in line with the parameter count of the DeepSeek V3 family (671B total / 37B active) and Moonshot's Kimi K2 family (1T total / 32B active). However, GLM-5 is released in BF16 precision, coming in at ~1.5TB in total size, larger than DeepSeek V3 and recent Kimi K2 models that have been released natively in FP8 and INT4 precision respectively.

Key takeaways:
➤ GLM-5 scores 50 on the Intelligence Index and is the new open weights leader, up from GLM-4.7's score of 42, an 8 point jump driven by improvements across agentic performance and knowledge/hallucination. This is the first time an open weights model has achieved a score of 50 or above on the Artificial Analysis Intelligence Index v4.0, representing a significant closing of the gap between proprietary and open weights models. It places above other frontier open weights models such as Kimi K2.5, MiniMax 2.1 and DeepSeek V3.2.
➤ GLM-5 achieves the highest Artificial Analysis Agentic Index score among open weights models with a score of 63, ranking third overall. This is driven by strong performance in GDPval-AA, our primary metric for general agentic performance on knowledge work tasks, from preparing presentations and data analysis through to video editing. GLM-5 has a GDPval-AA ELO of 1412, below only Claude Opus 4.6 and GPT-5.2 (xhigh). GLM-5 represents a significant uplift in open weights models' performance on real-world economically valuable work tasks.
➤ GLM-5 shows a large improvement on the AA-Omniscience Index, driven by reduced hallucination. GLM-5 scores -1 on the AA-Omniscience Index, a 35 point improvement compared to GLM-4.7 (Reasoning, -36). This is driven by a 56 p.p. reduction in the hallucination rate compared to GLM-4.7 (Reasoning). GLM-5 achieves this by abstaining more frequently and has the lowest hallucination rate amongst models tested.
➤ GLM-5 used ~110M output tokens to run the Intelligence Index, compared to GLM-4.7's ~170M output tokens, a significant decrease despite higher scores across most evaluations. This pushes GLM-5 closer to the frontier of the Intelligence vs. Output Tokens chart, though it is less token efficient than Opus 4.6.

Key model details:
➤ Context window: 200K tokens, equivalent to GLM-4.7
➤ Multimodality: Text input and output only; Kimi K2.5 remains the leading open weights model to support image input
➤ Size: 744B total parameters, 40B active parameters. For self-deployment, GLM-5 will require ~1,490GB of memory to store the weights in native BF16 precision
➤ Licensing: MIT License
➤ Availability: At the time of sharing this analysis, GLM-5 is available on Z AI's first-party API and several third-party APIs such as @novita_labs ($1/$3.2 per 1M input/output tokens), @gmi_cloud ($1/$3.2) and @DeepInfra ($0.8/$2.56), in FP8 precision
➤ Training tokens: Z AI also indicated it has increased pre-training data volume from 23T to 28.5T tokens
37
97
783
112.4K
Alireza@HighADHD·
@arena @Zai_org They're just scammers. Don't fall for their tricks. They only brag about how good their model is compared to Opus, but they can't even handle the server capacity they need. Instead, they decided to rug-pull their customers! :))
0
0
0
13
Arena.ai@arena·
GLM-5 from @Zai_org just climbed to #1 among open models in Text Arena!
▫️ #1 open model, on par with claude-sonnet-4.5 & gpt-5.1-high
▫️ #11 overall, scoring 1452, +11 pts over GLM-4.7
Test it out in the Code Arena and keep voting; we'll see how GLM-5 performs on agentic coding tasks next! Congrats to @Zai_org for this amazing achievement.
Z.ai@Zai_org

Introducing GLM-5: From Vibe Coding to Agentic Engineering
GLM-5 is built for complex systems engineering and long-horizon agentic tasks. Compared to GLM-4.5, it scales from 355B params (32B active) to 744B (40B active), with pre-training data growing from 23T to 28.5T tokens.
Try it now: chat.z.ai
Weights: huggingface.co/zai-org/GLM-5
Tech Blog: z.ai/blog/glm-5
OpenRouter (previously Pony Alpha): openrouter.ai/z-ai/glm-5
Rolling out, starting with Coding Plan Max users: z.ai/subscribe

18
17
266
160.8K
Alireza@HighADHD·
@ZenMuxAI They're just scammers. Don't fall for their tricks. They only brag about how good their model is compared to Opus, but they can't even handle the server capacity they need. Instead, they decided to rug-pull their customers! :))
0
0
0
8
ZenMux@ZenMuxAI·
🚀 Huge news: ZenMux now officially supports GLM-5! Yes, the rumors are true: the stealth model "Pony Alpha" that's been crushing the leaderboards is Zhipu AI's latest powerhouse. 🐎💨 Experience the power of "Pony Alpha" directly in Codex, Claude Code, or OpenCode.

Model Positioning: Zhipu's next-gen flagship base model, purpose-built for Agentic Engineering. It achieves Open Source SOTA in both Coding and Agent tasks, with a "coding feel" rivaling Claude 4.5 Opus. Excels at complex systems engineering and long-horizon Agent workflows.
GLM-5-Code: A specialized version with stabilized tool calls and pinpoint accuracy for frontend generation (pages, animations, mini-games, 3D).

🏗️ Architecture Upgrades
Scale: 744B parameters (40B active), ~2x expansion from the previous gen; 28.5T pre-training data.
Asynchronous RL: Powered by the new "Slime" framework, an async Agent RL algorithm that enables continuous learning from long-range interactions.
Sparse Attention: Integrates DeepSeek Sparse Attention for lossless long-context processing at significantly lower deployment costs.

💻 Coding Prowess
SWE-bench Verified: 77.8 (Open Source SOTA, exceeding Gemini 3.0 Pro).
Terminal Bench 2.0: 56.2 (Open Source SOTA).

🤖 Agent Capabilities
BrowseComp (Web Search): Open Source Leader.
MCP-Atlas (Tool Use & Multi-step Execution): Open Source Leader.
τ²-Bench (Complex Multi-tool Scenarios): Open Source Leader.

🚀 Access
API: Unified access via ZenMuxAI (Pay-as-you-go & Subscription).
Weights: Open-sourced.
Best For: Coding, Agents, Systems Engineering, and Long-horizon tasks.

#GLM5 #PonyAlpha #ZenMuxAI #OpenSourceAI #AgenticEngineering #Claude45 @Zai_org
2
7
68
14.4K
Alireza@HighADHD·
@Zai_org They're just scammers. Don't fall for their tricks. They only brag about how good their model is compared to Opus, but they can't even handle the server capacity they need. Instead, they decided to rug-pull their customers! :))
0
0
0
141
Z.ai@Zai_org·
GLM-5, Gameboy and Long-Task Era → 700+ tool calls, 800+ context handoffs, and a single agent running for over 24 hours. blog.e01.ai/glm5-gameboy-a…
97
287
3.1K
411.8K
Alireza@HighADHD·
@CarolGLMs They're just scammers. Don't fall for their tricks. They only brag about how good their model is compared to Opus, but they can't even handle the server capacity they need. Instead, they decided to rug-pull their customers! :))
0
0
0
4
Carol Lin@CarolGLMs·
Introducing GLM-5 on Google Cloud Vertex AI. From experimentation to enterprise deployment, GLM-5 + Vertex AI gives you the scale, reliability, and global reach to build what's next. Start building today. lnkd.in/ghDE9Hgk #GLM5 #GoogleCloud #AI
5
11
138
16.9K
Alireza@HighADHD·
@vanchin_ai They're just scammers. Don't fall for their tricks. They only brag about how good their model is compared to Opus, but they can't even handle the server capacity they need. Instead, they decided to rug-pull their customers! :))
0
0
0
10
Vanchin@vanchin_ai·
🎉🎉🎉 GLM-5 is now live on Kuaishou Vanchin!
3
7
110
12.3K
Alireza@HighADHD·
@EZmodel_Cloud @Zai_org They're just scammers. Don't fall for their tricks. They only brag about how good their model is compared to Opus, but they can't even handle the server capacity they need. Instead, they decided to rug-pull their customers! :))
0
0
0
6
EZmodel@EZmodel_Cloud·
Wow! 🎉 Look who's here! Hello, how should I address you? PonyAlpha? 🐎 Moving forward, GLM-5 will be integrated into EZmodel shortly. We welcome users from business teams to be the first to experience it. @Zai_org #GLM5
2
1
37
8.6K