Xu Zhang
@ceasarxuu
112 posts
Joined April 2023
50 Following · 1 Follower
Xu Zhang @ceasarxuu
@sofish Just grab an Android device temporarily and use Google Play; it won't reject your credit card.
0 replies · 0 reposts · 0 likes · 385 views
Sofish @sofish
Such a hassle. ChatGPT Pro has no App Store subscription link; am I stuck scraping by on Claude?
27 replies · 0 reposts · 21 likes · 19.4K views
Xu Zhang @ceasarxuu
@yinmin1987 In my experience the relay resellers don't end up much cheaper than the official plans.
1 reply · 0 reposts · 1 like · 994 views
尹珉 @yinmin1987
RC's Codex plan went live this morning; it suddenly got a lot more expensive.
[image]
22 replies · 2 reposts · 51 likes · 21.3K views
Xu Zhang @ceasarxuu
@linyishan There's a rumor like this every month; hoping this month's is the real one.
1 reply · 0 reposts · 1 like · 2.3K views
yishan @linyishan
This time it's real, no joke: China's DeepSeek-V4 will be released by the end of this month! Founder 梁文锋 revealed it at an internal briefing yesterday.

DeepSeek-V4 Lite in early March was an architecture validation, the signal ahead of the full release.

Optimized MoE architecture: total parameters reach the trillion scale, but only about 37B are activated per token, so inference cost stays extremely low.

V4 also introduces a key piece, Engram conditional memory, which separates reasoning logic from knowledge storage. Instead of burning VRAM on rote memorization, it pulls in knowledge on demand at inference time, the way a search-engine index does.

The positioning is explicit: the strongest engineering model, matching Opus 4.6 on coding, multimodal, and math as an open release.

Training has already been adapted to domestic AI chips such as Huawei Ascend.

DeepSeek-V4 will be a monster: with aggressive quantization it aims to fit trillion-scale knowledge and million-scale memory into 24 GB of VRAM.
[image]
44 replies · 17 reposts · 242 likes · 67.9K views
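The rumor's headline numbers are easy to sanity-check. Below is a minimal sketch, in Python, of the two mechanisms it describes: top-k expert routing, where only a small slice of a trillion-parameter MoE fires per token, and the back-of-envelope memory arithmetic behind the "24 GB with heavy quantization" claim. The expert count, top-k, hidden size, and bit widths are illustrative assumptions, not confirmed DeepSeek-V4 specs; only the 1T-total / 37B-active figures come from the tweet.

```python
# Toy sketch of MoE top-k routing plus quantized-memory arithmetic.
# Hyperparameters are illustrative assumptions, NOT confirmed V4 specs.
import numpy as np

TOTAL_PARAMS  = 1.0e12   # rumored ~1T total parameters (from the tweet)
ACTIVE_PARAMS = 37e9     # rumored ~37B activated per token (from the tweet)
N_EXPERTS     = 256      # assumed
TOP_K         = 8        # assumed

def route_token(hidden, gate_weights):
    """Softmax gate over experts; keep only the TOP_K best for this token."""
    logits = hidden @ gate_weights              # shape: (N_EXPERTS,)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    chosen = np.argsort(probs)[-TOP_K:]         # indices of the k likeliest experts
    return chosen, probs[chosen] / probs[chosen].sum()

rng = np.random.default_rng(0)
experts, weights = route_token(rng.standard_normal(1024),
                               rng.standard_normal((1024, N_EXPERTS)))
print(f"token routed to experts {sorted(experts.tolist())}")
print(f"fraction of the model active per token: {ACTIVE_PARAMS / TOTAL_PARAMS:.1%}")

# Memory needed to hold weights at a given bit width.
for bits in (16, 4, 2):
    active_gb = ACTIVE_PARAMS * bits / 8 / 1e9
    total_gb  = TOTAL_PARAMS  * bits / 8 / 1e9
    print(f"{bits:>2}-bit: active path ~{active_gb:,.0f} GB, full model ~{total_gb:,.0f} GB")
```

At 4-bit, the ~37B active parameters alone come to roughly 18.5 GB, so a 24 GB card could plausibly hold the active path; the full trillion parameters would still need hundreds of GB, which is presumably what the index-like, on-demand Engram retrieval is meant to sidestep.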
Tongyi Lab @Ali_TongyiLab
Hello, creators and builders,

This week marks a leap forward in controlled storytelling, efficient intelligence, and innovative AI infrastructure. We're introducing Wan2.7-Video, a comprehensive model for controllable video storytelling; launching Zvec v0.3.0 with multi-platform support and official SDKs; and unveiling VimRAG, a framework for multimodal RAG. Let's dive in. open.substack.com/pub/tongyilab/…
[image]
3 replies · 3 reposts · 54 likes · 2.9K views
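For readers who haven't met the term, "multimodal RAG" just means retrieving across text and images in a shared vector space before generation. The sketch below shows the generic pattern only; the store class and the embeddings are hypothetical stand-ins, not VimRAG's or Zvec's actual APIs.

```python
# Generic multimodal-RAG pattern: index text and image embeddings in one
# vector space, retrieve nearest items for a query, then hand the payloads
# to a generator. Everything here is a stand-in, not the VimRAG/Zvec API.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

class MultimodalStore:
    def __init__(self):
        self.items = []                          # list of (vector, payload)

    def add(self, vector, payload):
        self.items.append((np.asarray(vector), payload))

    def search(self, query_vec, k=3):
        scored = [(cosine(query_vec, v), p) for v, p in self.items]
        return sorted(scored, key=lambda t: t[0], reverse=True)[:k]

store = MultimodalStore()
rng = np.random.default_rng(1)
# In a real system these vectors would come from text and image encoders.
store.add(rng.standard_normal(512), {"type": "text", "caption": "a red bridge"})
store.add(rng.standard_normal(512), {"type": "image", "uri": "bridge.jpg"})
for score, payload in store.search(rng.standard_normal(512), k=2):
    print(f"{score:+.3f} {payload}")
```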
Xu Zhang @ceasarxuu
@manateelazycat It's simply not enough compute: there are so many domestic users, and compute is already stretched thin.
0 replies · 0 reposts · 0 likes · 444 views
Andy Stewart @manateelazycat
What's so great about Alibaba Cloud's Bailian Coding Plan? 200 RMB/month and you still have to scramble to grab one?
[image]
50 replies · 0 reposts · 46 likes · 49K views
Xu Zhang @ceasarxuu
@caizhenghai There's an official statement: the weekly quota is unchanged, but the 5-hour quota shrank, so usage gets spread flat. A really dumb design.
1 reply · 0 reposts · 0 likes · 1.4K views
forecho📈 @caizhenghai
Why does it feel like ChatGPT's token allowance shrank badly today? In a single hour this morning I burned out 2 Plus accounts, and this is my third; that never used to happen. Is this to sell the $100 Pro tier launched today? Good thing I have plenty of accounts. 😊
[image]
55 replies · 0 reposts · 40 likes · 30.7K views
Xu Zhang @ceasarxuu
@heeney_luke This is over-engineering. OpenAI's product manager should rethink this. They shouldn’t decide for users how they should allocate their quota — that’s arrogance.
0 replies · 0 reposts · 0 likes · 59 views
Luke Heeney @heeney_luke
did codex just change their 5 hour limits? I am suddenly burning through it in an hour after never coming close before. Gah, usage limits are why I use it over Claude!
112 replies · 14 reposts · 838 likes · 89.5K views
Xu Zhang @ceasarxuu
@KKaWSB If the internal combustion engine had stayed in the lab another 10 years, would that have gotten us the rocket?
0 replies · 0 reposts · 1 like · 509 views
KK.aWSB @KKaWSB
DeepMind chief Hassabis said the quiet part out loud: "If it were up to me, AI would stay in the lab a few more years and we'd build a few more AlphaFolds; maybe we'd have cracked cancer." His AlphaFold has already won a Nobel Prize, 3 million scientists use it, and nearly all new drug R&D depends on it. But the moment ChatGPT appeared, every lab got dragged into a commercial arms race: build chatbots, chase quarterly earnings, and let scientific breakthroughs take a back seat. An industry that could have changed the world slowly, the way CERN does, got hijacked by product release cycles.
29 replies · 82 reposts · 674 likes · 139K views
Xu Zhang @ceasarxuu
@OpenAI This is over-engineering. Your product manager should rethink this. You shouldn’t decide for users how they should allocate their quota — that’s arrogance.
0 replies · 0 reposts · 1 like · 119 views
OpenAI @OpenAI
The Codex promotion for existing Plus subscribers ends today and as a part of this, we’re rebalancing Codex usage in Plus to support more sessions throughout the week, rather than longer sessions in a single day. The Plus plan will continue to be the best offer at $20 for steady, day-to-day usage of Codex, and the new $100 Pro tier offers a more accessible upgrade path for heavier daily use.
88 replies · 50 reposts · 1.4K likes · 327.6K views
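Mechanically, "more sessions throughout the week, rather than longer sessions in a single day" is what you get from two concurrent quota windows: a rolling short-window cap plus a weekly cap, where a request must fit under both. A minimal sketch follows; the window sizes and budgets are made-up numbers, not OpenAI's actual limits.

```python
# Minimal dual-window usage limiter: a request is allowed only if it fits
# under BOTH the rolling 5-hour budget and the weekly budget. The budgets
# here are made-up illustrations, not OpenAI's real Codex limits.
import time
from collections import deque

class DualWindowLimiter:
    def __init__(self, five_hour_budget=100, weekly_budget=1000):
        self.windows = [
            (5 * 3600, five_hour_budget, deque()),        # (span_s, budget, log)
            (7 * 24 * 3600, weekly_budget, deque()),
        ]

    def allow(self, cost, now=None):
        now = time.time() if now is None else now
        for span, budget, log in self.windows:
            while log and log[0][0] <= now - span:        # evict expired usage
                log.popleft()
            if sum(c for _, c in log) + cost > budget:
                return False                              # this window is full
        for _, _, log in self.windows:                    # commit to every window
            log.append((now, cost))
        return True

# Shrinking the 5-hour budget while keeping the weekly one unchanged is
# exactly the "flattening" users noticed: the week's quota can no longer
# be burned in one long day.
limiter = DualWindowLimiter()
print(limiter.allow(cost=30))   # True until a window's budget is exhausted
```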
OpenAI @OpenAI
We’re updating our ChatGPT Pro and Plus subscriptions to better support the growing use of Codex. We’re introducing a new $100/month Pro tier. This new tier offers 5x more Codex usage than Plus and is best for longer, high-effort Codex sessions. In ChatGPT, this new Pro tier still offers access to all Pro features, including the exclusive Pro model and unlimited access to Instant and Thinking models. To celebrate the launch, we’re increasing Codex usage for a limited time through May 31st so that Pro $100 subscribers get up to 10x usage of ChatGPT Plus on Codex to build your most ambitious ideas.
1.2K replies · 1.4K reposts · 15.6K likes · 4.4M views
Xu Zhang @ceasarxuu
@OpenAI What's the Codex usage multiplier for the $200 plan?
0 replies · 0 reposts · 0 likes · 4 views
microstrong @Microstrongs
Everyone, go ahead and use Codex boldly; don't worry about running out, it might reset again any minute.
[image]
13 replies · 1 repost · 42 likes · 8.4K views
Gorden Sun @Gorden_Sun
My IP can use Claude but not Meta AI. Has anyone actually tried Muse Spark? How is it?
[image]
26 replies · 2 reposts · 42 likes · 35.2K views
Xu Zhang @ceasarxuu
@wwwgoubuli The domestic AI funding environment is actually worse than overseas; the money and compute these teams get are quite limited. They've done well given the circumstances.
0 replies · 0 reposts · 1 like · 134 views
wwwgoubuli-你理狗,狗理你不
It's not that domestic models can't put up benchmark scores. The biggest difference from the foreign ones isn't whether they can reach that peak, it's whether they can hold it. On lots of tasks they run fine for a while, then start wobbling, always just one breath short. Run enough examples and you'll get a few decent results, not inferior to Opus. But much of the time the run drifts off by just that little bit. It feels like the tuning was rushed by just that little bit. And that rush probably has something to do with KPIs or compute. These things affect mindset and goals.
4 replies · 0 reposts · 13 likes · 3.4K views
Xu Zhang @ceasarxuu
@bridgebench We need a model like GLM 5.1 to break Anthropic's arrogance.
0 replies · 0 reposts · 0 likes · 18 views
Bridgebench @bridgebench
GLM 5.1 just took the #1 spot on SWE-Bench Pro. Beating GPT 5.4. Beating Claude Opus 4.6. Beating every model on the market. 58.4. The $80/month model just outscored the $200/month models on agentic coding. A Chinese model that most developers haven't even heard of is now the best agentic coder in the world according to SWE-Bench Pro. The AI race isn't slowing down. It's getting harder to justify paying premium when the competition keeps closing the gap. BridgeBench results for GLM 5.1 coming soon.
[image]
15 replies · 13 reposts · 167 likes · 12.3K views
Xu Zhang @ceasarxuu
@0xkakarot888 opencode isn't laggy yet, but the quota still isn't enough for high-intensity development; it can only serve as a backup to ease the load.
0 replies · 0 reposts · 1 like · 417 views
0x卡卡撸特 @0xkakarot888
Finally grabbed a GLM coding plan. I couldn't snag one myself no matter what, so I ended up paying a proxy buyer on a certain secondhand marketplace to grab it for me... Starting with the 3-month plan to see whether it's any good. Thanks also to the experts in the comments who recommended opencode go; at only $10 it's cheap, so I bought that too. Testing both together; tokens should be worry-free now. Thanks, everyone! If you have usage tips, please share them in the comments.
[image]
0x卡卡撸特 @0xkakarot888
How on earth did you all manage to grab the GLM Coding plan? Buying a model requires flash-sale sniping now? Never seen anything like it. At 10:00 the page wouldn't load from too much traffic; at 10:01 it loaded, and everything was already sold out... If I could afford to keep burning Claude, no way would I be scrambling for this... 55555
41 replies · 4 reposts · 37 likes · 31.6K views
Xu Zhang @ceasarxuu
@bridgemindai Llama 4 was once caught up in a scandal over over-optimizing for benchmarks; I hope Muse doesn't repeat that mistake. It's a real pity it isn't open source. If it outperformed Codex or Opus I could understand keeping it closed, but it doesn't.
0 replies · 0 reposts · 0 likes · 3 views
BridgeMind @bridgemindai
Meta just dropped Muse Spark and it's beating Claude Opus 4.6, GPT 5.4, and Gemini 3.1 Pro on nearly every multimodal and reasoning benchmark. But Claude Opus 4.6 still wins on agentic coding. The one category that matters most to vibe coders. Meta is back on the map. The frontier just got a lot more crowded.
[image]
21 replies · 10 reposts · 152 likes · 9.5K views
Xu Zhang @ceasarxuu
@aibra Actually, this happens with every model. When one model has been stuck on a problem for a while, it's better to try a few other models or harnesses.
0 replies · 0 reposts · 0 likes · 7 views
Aibra @aibra
I swear Claude feels nerfed right now. I spent 45 minutes and basically my whole 5-hour token window trying to fix one mobile UI bug, and it kept missing and getting worse! I got so frustrated that I switched to codex which basically one-shotted it in 3 minutes
[image]
97 replies · 28 reposts · 615 likes · 22.4K views
Xu Zhang @ceasarxuu
@7a7zz For high-intensity development, it's still a bit insufficient — the weekly quota is used up in about two days. Looking forward to a larger plan. Also, as I recall, opencode uses the full model without quantization.
1 reply · 0 reposts · 1 like · 175 views
7A7z @7a7zz
got opencode go :)
[image]
16 replies · 0 reposts · 88 likes · 4.9K views
Udit Goenka @iuditg
Just tried GLM 5.1.
Pros:
- It's very good
- Was able to solve some complicated problems
- Very good with OpenCode and at following all the agentic instructions
Cons:
- It's extremely slow
- The context window is very small
A 1M context window is the bare minimum these days.
24 replies · 2 reposts · 122 likes · 12.4K views
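There is concrete arithmetic behind the context-window complaint: KV-cache memory grows linearly with context length, which is a big part of why 1M-token windows are expensive to serve. A rough calculation is below; the layer count, head sizes, and precision are illustrative assumptions, not GLM 5.1's actual architecture.

```python
# Rough KV-cache size for one sequence: two tensors (K and V) per layer,
# each kv_heads * head_dim wide per token. The hyperparameters below are
# illustrative assumptions, not GLM 5.1's published configuration.
LAYERS     = 60
KV_HEADS   = 8          # grouped-query attention keeps this small
HEAD_DIM   = 128
BYTES_ELEM = 2          # fp16/bf16

def kv_cache_gb(context_tokens: int) -> float:
    per_token = 2 * LAYERS * KV_HEADS * HEAD_DIM * BYTES_ELEM   # bytes/token
    return context_tokens * per_token / 1e9

for ctx in (128_000, 1_000_000):
    print(f"{ctx:>9,} tokens -> ~{kv_cache_gb(ctx):.0f} GB of KV cache per sequence")
```

Even with grouped-query attention, a 1M-token window under these assumptions costs a few hundred GB of cache per sequence, so vendors that do ship long windows lean on further tricks such as cache quantization or sliding-window attention.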