BeijingChef 🇺🇦 @ChefBeijing
6.6K posts
AI: the final chapter of an era. The first tranche of the QMHT Investment private fund has successfully completed a US$2.5M investment in OpenAI and Anthropic.

Joined March 2020
92 Following · 13K Followers
Pinned Tweet
BeijingChef 🇺🇦 @ChefBeijing
This is the course I recorded just tonight. It discusses OpenAI's across-the-board rout over the Department of Defense order, and covers how the safety options in model post-training knocked ChatGPT out of the running. If you're interested in joining the course, leave a comment.
[image]
Sam Altman @sama
what if we name the next model "goblin" almost worth it to make you all happy...
BeijingChef 🇺🇦 @ChefBeijing
@Yuchenj_UW Plus: China is a communist country, and open source is basically a communist thing. You can't compete with China on this.
Yuchen Jin @Yuchenj_UW
It's weird that the US still doesn’t have a truly competitive open-source model lab. It’s clearly not a money problem. Several neolabs have raised billions. It’s not a compute problem. US labs have easier access to B200s/B300s than Chinese labs. So what is the issue?
BeijingChef 🇺🇦 @ChefBeijing
@Yuchenj_UW Simply because open source is a loser's game. Only losers do it. Top geniuses don't play this shitty game; they're buying their $7M houses in Tiburon at 28 (from Ant). What the fuck is this open-source shit.
BeijingChef 🇺🇦 reposted
Marko Slavnic @Markoslavnic
The quality of animation you can create on your own is truly amazing. We really are just limited by our imaginations at this point. Go tell your story! Made in @runwayml in a few hours and a handful of gens.
BeijingChef 🇺🇦 reposted
shellac @she_llac
I think it's time to update the trendline
[image]
BeijingChef 🇺🇦 reposted
Jukan @jukan05
Why did xAI hand over a 220,000-GPU cluster to Anthropic? The technical backdrop to xAI's decision to hand Colossus 1 over to Anthropic in its entirety is more interesting than it appears.

xAI deployed more than 220,000 NVIDIA GPUs at its Colossus 1 data center in Memphis. Of these, roughly 150,000 are estimated to be H100s, 50,000 H200s, and 20,000 GB200s. In other words, three different generations of silicon are mixed together inside a single cluster: a "heterogeneous architecture." For distributed training, however, this configuration is close to a disaster, according to engineers familiar with the setup.

In synchronous distributed training, all 100,000-plus GPUs must finish a single step before the cluster can advance to the next one. Even if the GB200s finish their computation first, every chip has to wait for the slowest H100, or for any single GPU that has hit a stack-related snag, to catch up; 99,999 finished chips can be held hostage by one laggard. This is known as the straggler effect. The 11% GPU utilization rate at xAI recently reported by The Information (MFU: the share of theoretical FLOPs actually realized) can be read as the numerical fallout of this problem. It stands in stark contrast to the 40%-plus MFU figures achieved by Meta and Google.

The problem runs deeper still. As discussed earlier, NVIDIA's NCCL has traditionally been optimized for a ring topology. It works beautifully at the 1,000–10,000 GPU scale, but once you push into the 100,000-unit range, the latency of data traversing the ring once around becomes punishingly long. GPUs need to churn through computations rapidly to keep MFU high, but while they sit waiting for data to arrive over the network fabric, more than half of the silicon falls idle. Google sidestepped this bottleneck with its own custom topology (Google's OCS: Apollo/Palomar), but xAI, by my read, has not yet reached that stage.

Layer Blackwell's (GB200) "power smoothing" issue on top, and the picture comes into focus. According to Zeeshan Patel, formerly in charge of multimodal pre-training at xAI, Blackwell GPUs draw power so aggressively that the chip itself includes a hardware feature for smoothing power delivery. xAI's existing software stack, however, was optimized for Hopper and does not understand the characteristics of the new hardware; when it imposes irregular loads on the chip, the silicon physically self-destructs (it literally melts). That means the modeling stack must be rewritten from scratch, which in turn means scaling is far harder than most of us imagine.

Pulling all of this together points to a single conclusion: xAI judged that training frontier models on Colossus 1 simply was not efficient enough to be worthwhile. It therefore moved its own training workloads wholesale onto Colossus 2, built as a 100% Blackwell homogeneous cluster. Colossus 1, whose mixed architecture is far less crippling for inference (which parallelizes more forgivingly), was leased in its entirety to an Anthropic that desperately needed inference capacity.
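A minimal sketch of the straggler arithmetic described above. The per-generation step times are invented for illustration; only the fleet mix (H100/H200/GB200) and the synchronous-step constraint come from the thread:

```python
# Hypothetical per-step compute times in seconds; these numbers are
# illustrative assumptions, not measured Colossus 1 figures.
step_time = {"H100": 1.00, "H200": 0.80, "GB200": 0.45}

# In synchronous data-parallel training, every GPU must finish the
# current step before any GPU may start the next one, so the
# cluster-wide step time is set by the slowest generation.
cluster_step = max(step_time.values())

for gen, t in step_time.items():
    # Fraction of each step a generation spends computing; the rest is
    # idle time waiting on stragglers, which is what depresses MFU on
    # a mixed H100/H200/GB200 cluster.
    print(f"{gen}: busy {t / cluster_step:.0%} of each step")
```

Under these made-up timings the GB200s would sit idle for more than half of every step, which is the qualitative shape of the 11% vs 40%-plus MFU gap described above.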
Many observers point to what looks like a contradiction: Elon Musk poured enormous capital into building Colossus, only to hand the core asset over to a direct competitor in Anthropic. Others read it as xAI capitulating because it is a "middling frontier lab." But these are surface-level reads. Look at the numbers and a different picture emerges.

xAI today holds roughly 550,000+ GPUs in total (on an H100-equivalent performance basis), and Colossus 1 (220,000 units) accounts for only about 40% of the total available capacity. Colossus 2, built entirely on Blackwell, is already operational and continuing to expand. Elon kept the all-Blackwell homogeneous cluster (Colossus 2) for himself and leased out the older, mixed-generation Colossus 1. In other words, he handed the pain of rewriting the stack, the MFU-11% debacle, to Anthropic, while keeping his own focus on training the next generation of models.

The real point, then, is this. Elon's objective appears to be positioning ahead of the SpaceXAI IPO at a $1.75 trillion valuation, currently floated for as early as June. The narrative SpaceXAI now needs is that xAI, long the "sore finger," is not merely a research lab burning cash, but a business with a "neo-cloud" model in the mold of AWS, capable of leasing surplus assets at high yields. From a cost-of-capital perspective, an "AGI cash incinerator" is far less attractive to investors than a "data-center landlord generating cash."

As noted above, the most important detail of the Colossus 1 lease is that it is for inference, not training. Unlike training, inference requires far less tightly synchronized inter-GPU communication. Even when the chips are heterogeneous, the workload parcels out cleanly across them in parallel. The straggler effect, the chief weakness of a mixed cluster, is essentially neutralized for inference workloads. Furthermore, with Anthropic occupying all 220,000 GPUs as a single tenant, the network-switch jitter (unanticipated latency) that arises under multi-tenancy disappears. The two sides' technical weaknesses end up complementing each other almost exactly.

One insight follows. As a training cluster mixing H100/H200/GB200, Colossus 1 was an asset that could only deliver an MFU of 11%. The moment it was handed over to a single inference customer, however, it transformed into a cash-flow asset rented out at roughly $2.60 per GPU-hour (a weighted average of the lease rates across GPU types). For xAI, what was a "cluster from hell" for training has become a "golden goose" minting $5–6 billion in annual revenue when redeployed for inference. Elon's genius, I would argue, lies not in the model but in this asset-rotation structure.

The weight of that $6 billion becomes clearer when set against xAI's income statement. Annualizing xAI's 1Q26 net loss yields roughly $6 billion in losses per year. The $5–6 billion in annual revenue generated by leasing Colossus 1 to Anthropic, in other words, almost perfectly hedges xAI's loss figure. This single deal effectively pulls xAI to break-even. Heading into the SpaceXAI IPO, this functions as a core line of financial defense. From a cost-of-capital standpoint, if the image shifts from "research lab burning cash" to "infrastructure tollgate stably printing $6 billion a year," the entire tone of the offering can change. (May 8, 2026, Mirae Asset Securities)
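The lease economics above can be sanity-checked with the thread's own figures ($2.60 per GPU-hour weighted average, 220,000 GPUs); the only added assumption is near-full, year-round occupancy, which a single-tenant lease makes plausible:

```python
gpus = 220_000
rate_per_gpu_hour = 2.60        # weighted-average lease rate from the thread
hours_per_year = 24 * 365

# Assumes ~100% occupancy across the year.
annual_revenue = gpus * rate_per_gpu_hour * hours_per_year
print(f"annual lease revenue ≈ ${annual_revenue / 1e9:.2f}B")  # ≈ $5.01B
```

That lands at the bottom of the $5–6B range quoted above, set against an annualized net loss of roughly $6B.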
Jukan @jukan05

What the SpaceX–Anthropic Deal Means

Two weeks ago, we published a note laying out what GPT-5.5's release implied. The conclusion was simple: whoever secures compute first, in greater volume, and with greater reliability ultimately takes the win. With OpenAI's 30GW roadmap dwarfing Anthropic's 7–8GW, we closed by arguing that the structural advantage on compute sat with OpenAI. Less than a fortnight later, that conclusion is being tested.

On May 6, Anthropic signed a single-tenant lease for the entirety of Colossus 1 with SpaceXAI, the infrastructure subsidiary that consolidates Elon Musk's xAI and SpaceX. The asset carries more than 220,000 GPUs and 300MW of power, and crucially, is scheduled to come online within this month. It served as the capstone of Anthropic's April blitz, which added 13.8GW of cumulative capacity over the span of a single month. On headline numbers alone, OpenAI took more than a year to stack 18GW; Anthropic has put 13.8GW in the ground in thirty days.

The takeaways break down into three.

First, the compute pecking order has been redrawn again. Anthropic has now swept up the AWS expansion (5GW, with $100B+ in spend commitments over a decade), Google + Broadcom (3.5GW of TPU), Google Cloud (5GW alongside a $40B investment), and now SpaceXAI's Colossus 1 (0.3GW). Cumulative committed capacity, inclusive of pre-April allocations, sits at 14.8GW. This is still only half of OpenAI's 2030 target of 30GW, but the fact that the SpaceX lease will be live inside a month makes "deliverability" a qualitatively different proposition.

Second, Elon Musk is the plaintiff in an active lawsuit against OpenAI, and at the same time the supplier handing 220,000+ GPUs and 300MW of power, in one block, to OpenAI's most formidable competitor. The timing matters: the deal was struck in the middle of the Musk–Altman trial. We read this as a deliberate pincer with OpenAI in the middle. In the courtroom, Musk works to dismantle the moral legitimacy of OpenAI's leadership; in the market, he arms Anthropic to absorb OpenAI's revenue and user base.

Third, the structure is financial-engineering perfection, a clean win-win for both sides. xAI can recognize $6B of annual revenue from a single contract, an amount that almost precisely offsets its Q1 2026 annualized net loss of $6B. It also accelerates the cleanup of SpaceXAI's pre-IPO balance sheet, with the entity now being floated at around $1.75T. Anthropic, on the other side, converts roughly $5B of spend into what it expects to be $15B of ARR via the coming inference-revenue surge. (Mirae Asset Securities, May 8, 2026)
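A quick tally of the April deals listed above; the roughly 1GW pre-April base is implied by the gap between the 13.8GW and 14.8GW figures, not stated directly in the tweet:

```python
april_deals_gw = {
    "AWS expansion":         5.0,
    "Google + Broadcom TPU": 3.5,
    "Google Cloud":          5.0,
    "SpaceXAI Colossus 1":   0.3,
}

april_total = sum(april_deals_gw.values())   # 13.8 GW added in one month
pre_april = 14.8 - april_total               # ≈1.0 GW implied prior base
print(f"April additions: {april_total:.1f} GW")
print(f"Implied pre-April base: {pre_april:.1f} GW")
print(f"Share of OpenAI's 30 GW 2030 target: {14.8 / 30:.0%}")  # ≈49%
```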

Figure @Figure_robot
We taught two F.03 robots to clean a room and make a bed in under 2 minutes - fully autonomous.
Brett Adcock @adcock_brett
Figure taught two robots to make a bed together, fully autonomous. Honestly, they're better at it than most humans.
Steven Uecke @stevenuecke
@TheHumanoidHub With 2 swappable packs, they might have an internal battery to keep the unit on during full swaps. This internal battery may recharge from the swappable packs.
BeijingChef 🇺🇦 @ChefBeijing
@GoSailGlobal A bunch of salaried code monkeys teaching you, hand in hand, how to lead a newly listed company. I mean, even if you don't own a listed company or two yourself, you can still study up, right? Look how impressive I am. I know so much.
Jason Zhu @GoSailGlobal
I watched the latest interview with Shopify founder Toby, and it rewrote my understanding of the CEO role. He says that when he looked at code he'd written a few years earlier and thought it was actually pretty good, that was the saddest day of his life, because it meant he had stopped improving. That mindset runs through every decision he has made over the past 21 years.

The most dramatic stretch was the two COVID years. When Shopify IPO'd in 2015, Toby was a thirty-something programmer. Overnight, he felt he had to cosplay as a sixty-year-old corporate CEO in a suit; that's the template written into the Silicon Valley rulebook, and everyone follows it. It nearly destroyed Shopify. Then COVID hit, every assumption collapsed and every plan failed, and he was forced to sit down and go through the company project by project. What he found drove him crazy:

- A team in Toronto was building Shopify modules specifically for the "supermarket industry," and he had no idea it existed.
- The company had 8,000 people and, somehow, 5,500 distinct job titles.
- All of his executives over the previous years had been helping him cosplay a role that was never his.

That year he personally reviewed every project, cut 60% of them, and replaced his entire executive team within 12 months.

1️⃣ The most important step was opening an internal Slack channel called Founders. It was full of founders of companies he had acquired over the years. He posted one line: "Guys, save me." This changed his whole read on talent. He noticed the acquired founders were the ones most uncomfortable during COVID, because Shopify internally treated them as irritants. They see shit and call it shit; they don't accept the status quo and won't "maturely let it go." Big companies usually handle such people by walling them off in skunkworks or "founder daycare." Toby says that's exactly backwards: these people should be placed above the existing executives. He later promoted several engineers from individual contributor straight past VP, and "every single one worked out." That's his first belief.

2️⃣ The company should be "engineered." After COVID he personally launched a project called Shopify OS. He wrote a program in Python that turns the whole company into code: every position, every level, every reporting line is a config file, and salaries and market data are machine-readable JSON. Run a set solver over it and it computes what the company "should look like." Engineers call this a desired-state system: you define "what should be," and the system computes the minimum steps needed to push the current state to the target state. React works on the same principle. A side effect is that it killed the company's politics. A sales director comes in saying "I need to hire 50 more salespeople." It used to be Toby approving it on a golf course, then HR going to carve a pound of flesh out of engineering. Now the system spits out the counterfactual directly: "Hiring 50 salespeople means cutting X engineers. Still want it?"

3️⃣ On compensation. During COVID, Shopify's stock fell 80%. Toby's first reaction was relief, because at the peak the valuation had reached 50x price-to-sales: "That kind of valuation is beyond your control; other people are betting on a future, and you're just along for the ride." But employees experienced it completely differently: their stock was granted at the high and went to zero overnight, and they had no agency in the process. So Shopify rewrote the entire compensation system. Every quarter, each employee can move a slider: "How much of this quarter do I want in cash / stock / RSUs / ShopCash?" If the stock falls, the same total compensation figure converts into more shares the next quarter, rebalancing automatically. It sounds like a technical detail, but Toby stresses its real meaning: "I don't want a single employee to feel that Shopify drifted to where it is by accident. We made a conscious choice about everything."

There's one line underlying Toby's whole playbook that I keep coming back to: the greatest product a company can give its employees is this: being surrounded every day by people they admire.
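A minimal sketch of the desired-state idea the tweet attributes to Shopify OS. Everything here (the role names, the dict layout, the plan() helper) is an invented toy model, not Shopify's actual system:

```python
# Desired-state reconciliation: declare the target org, diff it against
# the current org, and emit the minimal steps needed to converge.
current = {"sales": 120, "engineering": 300, "support": 45}
desired = {"sales": 170, "engineering": 285, "support": 45}

def plan(current: dict, desired: dict) -> list[str]:
    steps = []
    for role in sorted(set(current) | set(desired)):
        delta = desired.get(role, 0) - current.get(role, 0)
        if delta > 0:
            steps.append(f"hire {delta} in {role}")
        elif delta < 0:
            steps.append(f"cut {-delta} in {role}")
    return steps

# The "counterfactual" from the tweet: asking for 50 more salespeople
# immediately surfaces the offsetting cut instead of hiding it.
for step in plan(current, desired):
    print(step)  # cut 15 in engineering / hire 50 in sales
```

Terraform and React reconcile state the same way: you declare what should exist, and the engine computes the diff.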
David Senra @davidsenra

My conversation with Tobi Lütke (@tobi), co-founder and CEO of Shopify.
0:00 Companies as Social Technology
5:27 The Value of Reading Books: Cheat Codes for Life
7:28 Post-IPO Crisis: Cosplaying as a CEO
7:54 Competition vs Rivalry: The Power of Healthy Competition
16:02 COVID as a Turning Point: Rebuilding the Executive Team
18:21 Hiring Founders: Building a Team of High-Agency People
26:49 Shopify OS: Engineering the Company from First Principles
36:48 Compensation Innovation: Giving Employees Full Agency
40:41 The Psychology of Identity and Affirmations
48:43 Differentiation Over Perfection: Making It Your Own
50:31 Context Podcast: Documenting Decision-Making
1:26:36 The IPO Decision: Going Against Silicon Valley Orthodoxy
1:35:08 Building a Company Worth Working For
1:41:50 Hiring for Spikiness: Finding Non-Conformists
1:48:28 Office Design Philosophy: Creating Space for Excellence
1:58:54 Video Games as Business Training: StarCraft Lessons
2:07:06 AI Revolution: 2026 and Beyond
2:11:44 Focus on Craft: The Unquantifiable Elements of Excellence
2:21:08 Survivorship Bias: The Importance of Entrepreneurial Exposure
2:23:22 Closing
Includes paid partnerships.

BeijingChef 🇺🇦 @ChefBeijing
@firstadopter When Ant generated $44B of ARR in April, OpenAI hesitated to announce their April revenue. This is not good.
tae kim @firstadopter
Let's play a game called pop the stale, backward-looking, media-manufactured consensus narrative.

1. OpenAI HIT their "aggressive" Q1 plan
2. OpenAI raised revenue expectations for the rest of 2026 due to momentum going into Q2
3. One week into the GPT-5.5 launch, API revenue is growing more than 2x faster than the prior best
4. Codex revenue DOUBLED in less than seven days
5. Customer behavior is inflecting following the GPT-5.5 release
6. AI infra commitments are AHEAD of plan: "When we announced Stargate in late 2025, we committed to securing 10GW of AI infrastructure in the United States by 2029. We have already surpassed that milestone, including more than 3GW added in the last 90 days alone"

Here's the simplified version: OpenAI got punched in the face with Claude Code and Gemini late last year. Product-market fit (exponential revenue) exploded toward AI agents and agentic coding. They pivoted resources toward Codex and agentic coding. And now, with the release of GPT-5.5 and their compute advantage, OpenAI is making a massive comeback to tech leadership.

Come on, people. This is not hard. OpenAI's talent is still there. It should be obvious to anyone who follows the AI industry and talks to developers even minimally.
[image]
tae kim @firstadopter

For the last few months, and even last week, the consensus narrative was that Anthropic was running away with it and OpenAI was dead. I was one of the VERY few people who said that was the wrong idea, and that as OpenAI trained new models on newer advanced Nvidia GPUs, they would bounce back strong. I was right. GPT-5.5 is a big hit with devs. It's a race again. The game is back on.

Wang Shuyi @wshuyi
The key part: it even ran strict validation before delivering 😄👍
[image]
Wang Shuyi @wshuyi
After upgrading to the new version, Codex supports a `/goal` command: you give it the end goal, and Codex worries about everything else. So I made a very greedy ask: "Build me a AAA-grade tower-defense game. Fun enough that users would happily pay for it; and as simple as possible to run, ideally playable straight from the browser." 🤭
[image]
BeijingChef 🇺🇦 @ChefBeijing
@AYi_AInotes That was a magical time when the whole world was making eyes at you and you could hook up with twenty beautiful women in a single night. Never mind ordinary people; barely any CEO in the whole world has ever experienced that.
阿绎 AYi @AYi_AInotes
What is someone who can turn down $1 billion in cash actually thinking? Let's look at the harshest lesson 22-year-old Zuckerberg taught every founder.

People call his rejection of Yahoo's $1 billion cash acquisition a stroke of genius. But his own account is the most honest: "I simply wasn't mature enough to do any sophisticated business analysis. Every advisor, investor, and team member around me was urging me to sell." He only asked himself one question: if I sell Facebook, what will I do next? The answer: I'd just build another company exactly like it, and I actually like the one I already have better.

That's the whole reason. No elaborate valuation model, no precise forecast of the future. Just one plain counter-question, and a drive that wouldn't let him stop.

I think this is the most essential difference between top-tier founders and ordinary people. Ordinary people calculate how much they'd make by selling. Founders calculate whether, after selling, they could keep doing the thing. For them, the company was never a business to cash out of; it's closer to an extension of their own personality, the thing they'll spend their life on.

Some will say he just got lucky and won the bet. But remember, $1 billion in 2006 was an astronomical number, and Facebook's annual revenue at the time was only a few tens of millions. Any normal person would have signed and walked away with the money. What's harder still is holding your own judgment when everyone around you thinks you're crazy.

Of course, belief by itself is worthless. What's valuable is what you're willing to do for it: turn down a billion, grind for another twenty years, and survive every near-death moment that follows: the pain of the mobile transition, the privacy firestorms, the skepticism over the metaverse cash burn. Any one of those would break an ordinary person.

In truth, every great company is, at bottom, a founder who refused countless chances to sell and ground it out. If you're facing a choice like this right now, remember: don't run the valuation, don't take other people's advice, just ask yourself that one question. After selling, would you build it again? If the answer is yes, don't sell.
BeijingChef 🇺🇦 @ChefBeijing
@AYi_AInotes When tens of millions of users are already overwhelming your own servers, only a dumbass would sell the product for one billion.
阿绎 AYi @AYi_AInotes
One more detail that stings: Yahoo's offer was all cash, with no earn-out clauses whatsoever; sign and walk away with the money. Brothers, if it were you, would you sign?!
Tibo @thsottiaux
Please don't pronounce Codex "COD-ex." We are also not a fish.
Google AI @GoogleAI
I/O is less than 3 weeks away 🤯 We want YOU to help us create the countdown that will play before the keynote begins. Using @GoogleAIStudio or Canvas in @GeminiApp, vibe code your most creative countdown concept and send it to us by May 6th. The only rule is that your build has to feature a large number between 1 and 10. Check the replies in this thread for sample projects to draw inspiration from or remix. You can find more info and directions on how to submit your builds here: goo.gle/codethecountdo…
[GIF]
Chris @chatgpt21
GPT-4.5 vs GPT-5.5 Thinking: "I'm going through a tough time after failing a test"
[two images]