KevinWood
@coolbryant24
124 posts
Bio: Focused on AI; product manager; financial securities. The unity of knowledge and action: do the right thing.
Joined January 2022
2.4K Following · 25 Followers
KevinWood @coolbryant24·
mark
Jason Zhu@GoSailGlobal

DeepSeek is like a gun pressed to the back of Silicon Valley's model companies 🔫 SiliconValley101 (硅谷101) published an explosive conversation today: former OpenAI researcher Jenny Xiao × chip architect Zhibin Xiao, two Silicon Valley insiders discussing the survival crisis DeepSeek v4 has created. I also happened to catch one of my favorite Chinese AI bloggers, Da Congming ("赛博禅心" / Cyber Zen Mind), breaking down the video. The two guests in the livestream have real substance:
- Zhibin Xiao: founder and CEO of ZFLOW AI, former president of the Chinese American Semiconductor Professional Association, veteran chip architect
- Jenny Xiao: former OpenAI researcher, partner at Leonis Capital, focused on AI investing

I've heard a similar point on an a16z podcast before, and it seems like reality has proven it right again. @pmarca @venturetwins @omooretweets

The three sharpest lines:
1️⃣ "If you're a foundation model company and you get surpassed by open source, the value of your business is essentially zero." This isn't technical competition; it's a kill line.
2️⃣ "Silicon Valley companies have too much money, so they have little incentive to optimize for efficiency. Chinese model vendors, pushed by resource constraints, moved into token-efficiency innovation earlier." Resource constraints = innovation accelerator.
3️⃣ "Without efficiency, AGI can only be a demo. With efficiency, AGI can become a real product." DeepSeek v4: 1/3 the compute cost, 1/10 the memory footprint.

Core takeaways:
- The truth behind Anthropic's valuation surpassing OpenAI's: focus > doing everything
- GPT-5.5 costs 2x as much as GPT-5, while DeepSeek v4 is 10x cheaper; who's swimming naked?
- Nvidia is safe in the short term, but the long-term inference market will be carved up by TPU / Ascend / Cambricon
- Why Claude Code is Anthropic's defining moment

Full conversation 👇

0 replies · 0 reposts · 1 like · 3 views
KevinWood @coolbryant24·
@passluo It's pure beginner-level popular science. The fact that Guo Yu is willing to do outreach at all is already something. Hahahaha
0 replies · 0 reposts · 0 likes · 5 views
KevinWood @coolbryant24·
@cursor_ai Every company building its own agent just became pointless; the game is over.
0 replies · 0 reposts · 0 likes · 1 view
Cursor @cursor_ai·
We’re introducing the Cursor SDK so you can build agents with the same runtime, harness, and models that power Cursor. Run agents from CI/CD pipelines, create automations for end-to-end workflows, or embed agents directly inside your products.
368 replies · 783 reposts · 8.3K likes · 2.6M views
KevinWood @coolbryant24·
This point of view is rather interesting. Worth keeping an eye on.
George Noble@gnoble79

This is the most OUTRAGEOUS deal I've seen in my 45 years on Wall Street. SpaceX just disclosed Musk's new compensation package: He gets up to 200 million super-voting shares if SpaceX hits a $7.5 trillion valuation, establishes a permanent human settlement of at least ONE MILLION people on Mars, and deploys roughly 100 terawatts of space-based computing power.

Let me put the 100 terawatts in perspective: The entire electricity generation capacity of the United States is around 1.2 terawatts. The comp plan asks Musk to build more than 80x America's entire power grid... in orbit. This is a science fiction screenplay that somehow landed in front of the SEC. But here's why it actually matters for your portfolio...

The S-1 reportedly claims a $28.5 trillion total addressable market, with over 90 percent attributed to AI. CapeFearAdvisors flagged this one cleanly: when Palantir went public, it disclosed a $119 billion TAM and the SEC reviewed and accepted it. SpaceX is claiming a market roughly 240x BIGGER.

Now let's talk about what is actually being sold here: Reported 2025 revenue is approximately $15.5 billion. Starlink delivers around $11 billion of that with healthy margins, and the launch business is genuinely dominant. The problem is xAI - the AI piece doing all the heavy lifting in the trillion-dollar valuation pitch. xAI generated just $210 million of revenue in the first 3 quarters of 2025 while burning through $9.5 billion in cash.

Ben Brey and Rupert Mitchell - a former Fidelity portfolio manager and a former head of equity capital markets at Goldman and Citi between them - ran a serious discounted cash flow on the actual operating businesses and arrived at roughly $400 billion. Lawrence Fossi covered their work recently and the math holds up. The IPO is being marketed at $1.75 TRILLION. The gap between what these businesses support and what Musk is asking the public to pay is roughly $1.35 trillion of pure narrative.

Then layer on what we just learned last week... The New York Times investigation revealed Musk personally borrowed $500 million from SpaceX between 2018 and 2020 at rates as low as 1%, while bank prime rates sat around 5%. The same SpaceX has been used to bail out SolarCity, prop up Tesla during cash crunches, and absorb xAI when the AI losses became unmanageable. This is the same playbook he's run for two decades. Use a privately controlled entity as a personal piggy bank, and when the bills come due, find new investors to absorb the losses. The IPO is structured to keep that game going FOREVER.

The Texas reincorporation strips away Delaware's fiduciary protections. Controlled-company status on the Nasdaq eliminates independent board requirements. And retail is being offered up to 30% of the offering (3x the normal allocation) because the institutions who actually do the math are quietly stepping away.

Here is the part that finishes the case for me: Roughly $40 billion of the IPO proceeds are already spoken for before a single dollar reaches operations. About $23 billion retires SpaceX debt. Another $17 billion retires the high-interest debt sitting on xAI and X. This raise is not funding the future. It's just plugging existing holes that retail investors will now own.

In my 45 years I've never seen a deal where the comp hurdle is colonizing another planet. I've never seen a disclosed TAM that exceeds verified comparables by two orders of magnitude. I've never seen a company asking the public to fund the retirement of debt incurred by separate private entities controlled by the same individual. Every red flag I've watched precede a major bust over four decades is sitting in this prospectus, in plain sight. The Tesla mispricing is being repeated on a far larger scale. And this time the bag is being handed directly to retail. Don't be the one holding it.
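The headline ratios in the thread are easy to sanity-check. A quick back-of-envelope script, using only the figures quoted in the post itself (none independently verified):

```python
# Back-of-envelope check of the ratios quoted in the thread above.
# Inputs are the tweet's own figures, not independently verified data.

us_grid_tw = 1.2          # quoted U.S. electricity generation capacity, TW
compute_target_tw = 100   # space-based compute target in the comp plan, TW
print(compute_target_tw / us_grid_tw)   # ~83, i.e. "more than 80x" the grid

spacex_tam = 28.5e12      # claimed total addressable market, USD
palantir_tam = 119e9      # Palantir's disclosed TAM at its IPO, USD
print(spacex_tam / palantir_tam)        # ~240x bigger

ipo_ask = 1.75e12         # marketed IPO valuation, USD
dcf_value = 400e9         # Brey/Mitchell DCF on the operating businesses
print(ipo_ask - dcf_value)              # 1.35e12: the claimed "narrative" gap
```

The arithmetic checks out against the thread's own numbers: 100/1.2 ≈ 83x, $28.5T/$119B ≈ 240x, and $1.75T minus $400B leaves the $1.35 trillion gap the author cites.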

0 replies · 0 reposts · 0 likes · 1 view
KevinWood @coolbryant24·
@turingou The gaps between people have always been huge. No need to mind the lazy ones 😁
0 replies · 0 reposts · 0 likes · 20 views
Guo Yu guoyu.eth @turingou·
If you think my constantly building new AI products and constantly sharing what I write is selling anxiety, block me now, and I'll block you just the same. But let me tell you: even if you can't see what I write, I will keep doing more and more, and your anxiety will still grow by the day. You'll live on in a pitch-dark swamp, and one day you'll realize that you can't see the moonlight not because the other frogs jumped out of the pond, but because you yourself refuse to lift your head.
116 replies · 10 reposts · 584 likes · 104.2K views
KevinWood @coolbryant24·
Kimi is the strongest LLM in China.
Ihtesham Ali@ihtesham2005

A Beijing AI lab just released a model that can run 300 agents at once without any of them interfering with each other, and the mechanism they used to pull it off is something most researchers thought was years away.

The lab is Moonshot AI. The founder is Yang Zhilin. The model is called Kimi K2.6, and they shipped it on April 20, 2026, under a modified MIT license. 1 trillion parameters. 32 billion active per token. Fully open weights. You can download it today.

Here is the part that matters, and why almost nobody building multi-agent systems saw it coming. Every serious attempt at running multiple AI agents on the same task before this one collapsed past a certain threshold. Researchers have a name for it. Coordination hallucinations. Agents giving each other contradictory instructions. Agents stepping on each other's work. Agents reaching conclusions that looked right in isolation but made no sense when you tried to stitch them together. The more agents you added, the worse the output got. Above around 100 agents, the whole system became unstable.

The standard response to this problem was to build an orchestration layer on top of a single model. An external framework that routes tasks to agents, tracks progress, and merges outputs. Every multi-agent startup in the last two years has shipped some version of this. None of them scale past a ceiling.

Moonshot did something different. They trained the orchestrator as part of the model itself. The coordinator is not a wrapper. It is not a framework. It is a core capability baked into K2.6 during training. The model understands how to decompose a task into parallel subtasks. It knows how to route work to specialized sub-agents based on their skill profiles. It detects when an agent has stalled or failed, reassigns the task automatically, and merges the outputs into a single coherent result.

The numbers are unlike anything that came before. 300 sub-agents running in parallel. 4,000 coordinated steps in a single autonomous run. 12 hours of continuous execution on hard engineering problems. One RL infrastructure team at Moonshot ran a K2.6-powered agent autonomously for 5 straight days managing monitoring, incident response, and system operations without human supervision. It never lost the thread.

The benchmark nobody wants to talk about is BrowseComp in agent swarm mode. K2.6 scored 86.3. GPT-5.4 scored 78.4. That is not a rounding error. That is an 8-point lead on a benchmark specifically designed to test multi-agent coordination, and it was won by the only open-source model in the comparison.

The counterintuitive finding is this. More agents does not automatically mean better results. Every ad-hoc multi-agent system before K2.6 proved the opposite. The value is not in the number. The value is in whether the orchestration holds together long enough and cleanly enough to turn 300 independent workers into something that behaves like a single, focused mind. The bottleneck in AI was never intelligence. It was stamina. The first model that figured out how to stay coordinated for 13 hours just made every 1-hour model obsolete.
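The decompose / route / detect-failure / merge loop the post describes is a general coordination pattern, whatever Kimi's internals actually look like. Below is a minimal illustrative sketch of that pattern, with plain functions standing in for sub-agents. This is a toy under my own assumptions, not Moonshot's mechanism, and every name in it is hypothetical:

```python
from concurrent.futures import ThreadPoolExecutor

def orchestrate(subtasks, agents, max_retries=2):
    """Toy decompose-route-retry-merge loop illustrating the coordination
    pattern described above (NOT Kimi K2.6's actual mechanism).

    subtasks: list of (skill, payload) pairs from task decomposition
    agents:   dict mapping a skill profile to a callable(payload) -> str
    """
    def run_with_retry(skill, payload):
        # Route the subtask to the agent whose skill profile matches,
        # retrying on failure as a stand-in for "detect a stalled agent
        # and reassign the task".
        last_err = None
        for _ in range(max_retries + 1):
            try:
                return agents[skill](payload)
            except Exception as err:
                last_err = err
        raise last_err

    # Fan the subtasks out in parallel, one worker per subtask.
    with ThreadPoolExecutor(max_workers=max(len(subtasks), 1)) as pool:
        futures = [pool.submit(run_with_retry, skill, payload)
                   for skill, payload in subtasks]
        partials = [f.result() for f in futures]  # preserves subtask order

    # Merge step: plain concatenation here; a real coordinator would
    # reconcile conflicts between sub-agent outputs.
    return " ".join(partials)

# Usage with toy "agents" (plain functions standing in for model calls):
agents = {
    "search": lambda q: f"[search:{q}]",
    "code":   lambda q: f"[code:{q}]",
}
report = orchestrate([("search", "benchmarks"), ("code", "parser")], agents)
print(report)  # [search:benchmarks] [code:parser]
```

The post's claim is that baking this loop into the model during training, rather than bolting it on as an external wrapper like the sketch above, is what lets coordination survive past ~100 agents.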

0 replies · 0 reposts · 0 likes · 8 views
KevinWood @coolbryant24·
@mythreviewer That's exactly how it should be. Let the progress come even faster; I don't want to drive at all.
0 replies · 0 reposts · 0 likes · 10 views
China’s Diplomatic History Review
First impressions of Tesla FSD 14.3.2: extremely smooth, agile, and precise! I didn't expect such a big improvement on top of 14.2.2.5. If Tesla FSD 14.2.2.5 felt like it drove better than most human drivers, then 14.3.2 feels like it utterly outclasses human driving. Driving a car may soon have basically nothing to do with humans anymore.
16 replies · 24 reposts · 289 likes · 21.8K views
Michelle Kim @michelletomkim·
I'm a reporter (and lawyer) covering Musk v. Altman for @techreview. Here's what's going on day 2 of the trial at the Oakland federal courthouse. Elon Musk and Sam Altman are here. Musk will likely testify today. Opening statements from both parties' lawyers to come soon.
105 replies · 300 reposts · 4.6K likes · 673K views
KevinWood @coolbryant24·
@quxiaoyin As a product manager, I agree with you completely. Programmers just talk tough; they've been panicking for a long time. Hahahaha 😂
0 replies · 0 reposts · 0 likes · 6 views
Xiaoyin Qu @quxiaoyin·
Everyone's saying product managers are screwed. I keep seeing articles like this one by Lenny where VCs think PMs are heading for massive disruption. Another piece straight-up called product management a "sunset industry." As someone who wrote a PM bestseller eight years ago, I'm conflicted.

The articles aren't wrong about some things. Most PM work is just translating between teams and doing alignment - stuff AI can absolutely do better. AI has all the Slack messages, meeting transcripts, everything. Why do we need humans for alignment? Plus teams are smaller now anyway. Companies that used to need 1-2 PMs for 20 people now run with 2-person teams. Who needs a PM when there's barely anyone to manage? I agree - traditional PM work is toast.

But here's where it gets interesting. Everyone agrees that "builder PMs" who can take ideas from concept to live product will be incredibly valuable. So who becomes these super builders? PMs learning to code? Or engineers finally getting to build what they want without PM interference?

My take (and yes, I'm biased): PMs have the edge. Product sense matters more than coding skills when AI handles implementation. My engineer friends disagree. They think I'm just another liberal arts major who couldn't handle real technical work. They believe engineers will naturally become better builders than PMs scrambling to learn tech. Maybe they're right. But I still think understanding user problems beats understanding compilers.

The winners will be people who think like founders - regardless of whether they came from PM or engineering. Because AI makes technical execution easier. It doesn't make knowing what to build any easier.

#ProductManagement #AI #Engineering #Startups #TechCareers #FutureOfWork #Builders
6 replies · 0 reposts · 12 likes · 1.9K views
KevinWood @coolbryant24·
As a product manager, I agree with you completely. R&D people just talk tough; in fact, they're more panicked than anyone else. Hahahaha
Xiaoyin Qu @quxiaoyin
0 replies · 0 reposts · 0 likes · 3 views