刘江/LIU Jiang

12.8K posts

@turingbook

Exploring AGI. Co-Founder of Turing Company (图灵). Formerly Vice President of the Beijing Academy of Artificial Intelligence (BAAI), Editor-in-Chief of CSDN and Programmer magazine, and Dean of the Meituan Technical College.

Beijing, China · Joined March 2007
3K Following · 51.6K Followers
Pinned Tweet
刘江/LIU Jiang@turingbook·
I'm hiring for an AI role at Turing (based in Beijing), a hybrid engineering-and-product position working directly with me to turn Turing into an AI-native knowledge service platform. No requirements on age, major, degree, or work experience. The profile: someone who loves learning, loves tinkering with AI tools of all kinds (e.g. Claude Code, 小龙虾) to solve real problems, and wants to turn that skill into income while helping more people embrace the new AI era. DM me if interested.
Shengwen Yang@yswen·
@turingbook Hold on — Professor Liu Zhiyuan leads the ModelBest team, not Moonshot AI. Are you sure about that?
刘江/LIU Jiang@turingbook·
If you type <think> directly into DeepSeek, you may see what looks like an answer to another user's question. Entering it again in the same session does not reproduce it, but it shows up again in a new session. For example: chat.deepseek.com/share/5ryhkjeu… More than one user on X has reported similar findings.
[image attachment]
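A quick way to probe whether the same behavior shows up outside the web UI is to send a bare <think> turn through the API. The sketch below is a minimal probe assuming DeepSeek's OpenAI-compatible API (the base_url and model name follow its public docs); the original observation was made in the chat.deepseek.com web interface, so the API may well behave differently.

```python
# Minimal sketch: send a bare "<think>" turn and inspect what comes back.
# Assumption: DeepSeek's OpenAI-compatible API; the tweet's observation was
# made in the web UI, so this may not reproduce the same behavior.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",   # placeholder
    base_url="https://api.deepseek.com",
)

for attempt in range(3):
    # Each request is a fresh, single-turn conversation, mirroring the
    # observation that the behavior reappears only in new sessions.
    resp = client.chat.completions.create(
        model="deepseek-chat",
        messages=[{"role": "user", "content": "<think>"}],
    )
    print(f"--- attempt {attempt} ---")
    print(resp.choices[0].message.content)
```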
刘江/LIU Jiang@turingbook·
Yet another neo lab founded by researchers leaving a big company's research team, and again raising several hundred million dollars in its first round at a multi-billion-dollar valuation, with GV (Google Ventures) and NVIDIA as the main investors. The two co-founders are Richard Socher, former Chief Scientist at Salesforce, and Tim Rocktäschel, head of DeepMind's Open-Endedness team and a UCL professor. The thesis: self-improving superintelligence.
Recursive@Recursive_SI

x.com/i/article/2054…

刘江/LIU Jiang retweeted
clem 🤗@ClementDelangue·
As President Trump meets President Xi this week, a call to the American AI community: If your startup, lab, non-profit or company benefits from open international AI - especially Chinese (Deepseek, Qwen, Kimi, GLM,…), please share! Open source is the most important driver of competition, jobs and wealth creation in AI today. Let’s support and promote it at critical times like this week!
刘江/LIU Jiang@turingbook·
Just noticed that the top apps on the US App Store chart are all AI. The China chart has some AI apps too, but the more striking pattern is that they all come from ByteDance.
[two image attachments]
傅盛@FuSheng_0306·
Watching Yao Shunyu's interview, Google's internal strategy really has shifted to all-out catch-up mode 🤣 Google had been grinding against OpenAI on chatbots, and fortunately Gemini 3 turned out well and lifted its market share. But Anthropic's rise made Sergey Brin realize that the decisive battle for large models is coding ability 🤣 and adjusting course midway is not easy. Google's CEO had already said in an internal meeting that he only hopes the coding model used inside the company is their own. You can feel the urgency to catch up.
刘江/LIU Jiang@turingbook·
@cyrilliu1974 The second image spells out each person's contribution very clearly; they were all genuine core contributors. If anything, the ones who stayed, with Ilya as the most prominent example, did relatively more of the talking 🤪
Cyril Liu@cyrilliu1974·
@turingbook You're overthinking it. The so-called "core" were just the people doing the talking, while the people actually doing the work didn't leave 🤭
刘江/LIU Jiang@turingbook·
The evidence is right there: six of Anthropic's eight founders are authors of the GPT-3 paper, including the first and second authors. Dario Amodei is listed last, as the team's boss. A friend who was at OpenAI at the time said two groups were working on GPT-3, and Dario's group pushed harder. Alec Radford and Ilya Sutskever, the main authors of GPT-1 and GPT-2, were in the other group; perhaps they lost out on resources or lacked the execution to scale, so they didn't play the main role, were placed second- and third-from-last on the author list, and acted more as advisors. Some of that group are still at OpenAI.
[two image attachments]
刘江/LIU Jiang@turingbook

On why Anthropic has been able to catch up with and overtake OpenAI: many people may have missed a detail. The core GPT-3 development team was essentially the group that became Anthropic, and after they left, OpenAI had to spend considerable effort just to pick up the pieces.

刘江/LIU Jiang@turingbook·
Professor Tang's read on this is sharp 👍 Reposting the Chinese version he published on Weibo. Key points:
- Large models are now all competing on long-horizon tasks; the consequence is that one-person companies won't be enough, and more no-person companies (NPCs) will appear soon.
- The three technical pillars to solve are memory, continual learning, and self-judging.
- The endgame is model self-training.
- This is irreversible; every sector, including security, finance, law, and e-commerce, will be reshaped.
[image attachment]
jietang@jietang

Recent thoughts: The Shift to Long-Horizon Tasks

The most likely breakthrough this year will be in long-horizon tasks. We are moving toward a stage where Large Language Models (LLMs) learn to complete extended, complex missions by interacting with Agent environments. This is perhaps where the true value of LLMs lies. Take cybersecurity as an example: imagine a model that continuously hunts for software bugs and vulnerabilities. While it sounds like a search process, it's actually the model learning the high-level intuition and methodology of a professional hacker. Unlike humans, AI can run 24/7 without fatigue. It could potentially find exploits at a much higher frequency and claim bounties on platforms like HackerOne or BugCrowd. It sounds fun, but fundamentally, it's a revolution that displaces the hacker. If even hackers are being "disrupted," one can only imagine the impact on general programmers.

From One-Person to None-Person Companies

Building on long-horizon capabilities, Autonomous Agent Systems (AAS) will inevitably become the next frontier. Last year, we were discussing the rise of the "One Person Company" (OPC). I didn't expect us to move so quickly toward the "None Person Company" (NPC). It's an ironic twist: we might all end up as NPCs in this new ecosystem.

Engineering the Impossible: Memory and Learning

To realize the vision above, we must solve three technical pillars: Memory, Continual Learning, and Self-Judging. I used to think these would require massive paradigm shifts and years of research. However, the pressure from both the technical and application sides is so intense that we are seeing these capabilities emerge through ingenious engineering "tricks":
- Memory: Long context windows (1M+) and RAG have significantly bridged the gap.
- Continual Learning: While true continual learning remains difficult, the release cycles are shrinking. Global models are updated monthly; domestic models are catching up. If we reach weekly updates by next year, it will effectively function as continual learning.
- Self-Judging: This remains the most elusive, yet models like Opus 4.7 are already demonstrating early self-correction and judgment capabilities.

The Self-Evolving Endgame

The most difficult, and most promising, path is Self-Evolution. The current wave is incredibly fierce. I suspect that models like Claude may have already achieved a baseline for self-training: writing their own code, cleaning their own data, generating synthetic data, and then training on it. It might "waste" some compute, but it saves the most precious resources: human labor and time. In the LLM era, speed is everything. Rapid iteration is what creates the cognitive gap between leaders and followers. Claude's rumored 2-million-chip cluster for next year is likely dedicated to exactly this: autonomous model self-training.

Technical Summary:
- 1M Context: Necessary baseline.
- Memory & Continual Learning: Prerequisites, likely solved first via "tricky" engineering.
- Harnessing Environments: The breakthrough point.
- Self-Judging: The tipping point.
- Full Self-Training: The endgame.

Redefining AGI and the Industry

If this is the road to AGI, then AGI's definition should be the sum of all human collective intelligence, not just an individual's intelligence. It must possess the creative capacity to produce something as profound as the "Theory of Relativity," meeting the bar set by Hassabis. During this transition, every APP will need to be reconstructed as AI-native. In fact, we might move past the concept of APPs entirely. The most significant challenge will be the reconstruction of the operating system itself. In the future, you won't see a traditional desktop; you will see an LLM OS, where applications are "generated on demand." This challenges the 80-year-old Von Neumann architecture and represents a total upheaval of the computer science industry.

The Irreversible Wave

From completing long-horizon tasks to fully autonomous operations, every sector (Security, Finance, Law, E-commerce) will be reshaped. Many friends have reached out lately, asking how to transform their enterprises to keep pace with AI. But few truly realize that this irreversible process has already begun. As this massive technical wave hits, we must be prepared to act, but we must also start thinking seriously about how to regulate it.
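The "full self-training" endgame described above (a model generating its own data, judging it, and training on what survives) can be pictured as a simple loop. The sketch below is a hypothetical toy illustration, not anything from the post: generate_synthetic_examples, self_judge, and fine_tune are stand-ins for calls into a real model and trainer.

```python
# Toy sketch of a self-training loop: generate synthetic data, filter it with
# the model's own judgment, then "train" on the survivors. All functions are
# hypothetical stubs standing in for real LLM and training calls.
import random

def generate_synthetic_examples(model, n):
    # Stand-in for the model writing its own data: emit (prompt, answer) pairs.
    return [(f"prompt-{i}", f"answer-{i}") for i in range(n)]

def self_judge(model, example):
    # Stand-in for self-judging: keep only examples the model scores highly.
    return random.random() > 0.3

def fine_tune(model, dataset):
    # Stand-in for a training step; here we just record how much data was used.
    model["trained_on"] += len(dataset)
    return model

model = {"trained_on": 0}
for generation in range(3):  # repeated rounds of self-improvement
    candidates = generate_synthetic_examples(model, n=100)
    kept = [ex for ex in candidates if self_judge(model, ex)]
    model = fine_tune(model, kept)
    print(f"generation {generation}: kept {len(kept)}/{len(candidates)} examples")
```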

刘江/LIU Jiang@turingbook·
@duguankui Right, focus is certainly part of it. Yao Shunyu recently gave some specifics: the Anthropic team has worked together for a long time and is more cohesive and in sync, its technical lead has the credibility to make the call, and execution is strong. They were quick to spot coding as a differentiating opportunity, then went all-in and nailed it, and the business scaled up. For a long stretch in between, OpenAI didn't even notice how important coding was.
刘江/LIU Jiang@turingbook·
On why Anthropic has been able to catch up with and overtake OpenAI: many people may have missed a detail. The core GPT-3 development team was essentially the group that became Anthropic, and after they left, OpenAI had to spend considerable effort just to pick up the pieces.
思维怪怪@0xLogicrw

Yao Shunyu, a former Anthropic research scientist now at Google DeepMind, disclosed the internal development process of Claude 3.7 for the first time on @zhang_benita's podcast 「语言即世界」 ("Language Is the World"). After joining Anthropic in October 2024 he was assigned to a team called Horizon; at the time the whole team had only 10 to 11 people, yet it was responsible for all of Anthropic's reinforcement learning work, including data, infrastructure, and algorithm research. Claude 3.7 took four to five months from the start of research to final release: the first two to three months went into algorithm and data research, and the final two months into training and infrastructure. Anthropic's bet on coding ability was not planned from the outset. Yao revealed that Claude 3 wrote code better than GPT-4 for a purely technical reason he cannot disclose publicly, something a team built bottom-up. After Claude 3 shipped, the flood of positive feedback on Twitter validated this advantage, and Anthropic's leadership promptly elevated coding ability to a company-level strategy and went all in. He believes Anthropic could place such a heavy bet so quickly because its top technical leads, Jared Kaplan and Sam McCandlish, are themselves co-founders, technically respected and empowered to make the call, whereas OpenAI could not do this; it might have worked while Ilya was there, but he later lost decision-making power and left. Anthropic at the time had almost no product awareness: two versions of Claude 3.5 shipped within half a year under the same name, and were only barely told apart by the nickname "3.6" that outsiders coined.

杜冠魁@duguankui·
@turingbook Those are all objective factors. I think it's because OpenAI for a while had more money than it could spend; when you're spending other people's money you want to rack up some showy wins, so they tried everything, and ended up doing worse than Anthropic, which simply stayed focused and got the results.