Bit Cook

3.1K posts

@bit_cook

Explorer · Developer · Innovator · Transhumanist · Cosmopolitan · Cypherpunk · Philosophy & Neuroscience Enthusiast · Financial alt account: @ValueCaptor

The Real World · Joined May 2013
1.6K Following · 286 Followers
Bit Cook retweeted
Mathematica @mathemetica·
Terence Tao is answering a fundamental question regarding the safety and reliability of modern AI: "How can we use a tool that is powerful, but unreliable?" W = ∑(wᵢ ⋅ xᵢ) + b AI isn’t just about “smart”; it’s about the probability of *looking* right. We’ve built systems where the weights (wᵢ) are optimized for plausibility, not veracity. This creates a “convincing mirror” that confidently serves dangerous advice in medicine or finance. The gap between “convincing” and “correct” is the most critical variable we need to solve for.
100 replies · 545 reposts · 2.1K likes · 476.2K views
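The formula in the tweet is just an affine combination, the basic unit inside every neural-network layer. A minimal sketch in plain Python; the weights, inputs, and bias below are made-up numbers, purely for illustration:

```python
def weighted_sum(w, x, b):
    # W = sum(w_i * x_i) + b: the affine combination from the tweet.
    return sum(wi * xi for wi, xi in zip(w, x)) + b

# Hypothetical weights, inputs, and bias, purely for illustration.
W = weighted_sum([0.2, 0.5, -0.1], [1.0, 2.0, 3.0], 0.4)
print(W)  # ≈ 1.3
```

Training adjusts the wᵢ to maximize a training objective; the tweet's point is that the objective rewards plausible-looking outputs, not true ones.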
Bit Cook retweeted
Berryxia.AI @berryxia·
Honestly, only a heavyweight would dare stand up and say this. Terence Tao, recognized worldwide as one of the smartest people alive, has personally called out AI's most fatal flaw. He asked the fundamental question everyone else avoids: "How can we use a tool that is powerful, yet extremely unreliable?"

AI's core equation is written out plainly: W = ∑(wᵢ ⋅ xᵢ) + b. It does not pursue "correct"; it pursues "looks correct." Every weight is optimized for plausibility, not veracity. So we have built a mirror that is extremely good at pretending: in medicine, finance, and law, it can deliver the most dangerous, most wrong advice in the most confident, most fluent tone.

The gulf between "convincing" and "correct" is the deadliest risk of the AI era. The more we rely on it, the more easily it leads us into traps we cannot see ourselves. When the world's top mathematicians are seriously discussing "how to use unreliable AI safely," are we ordinary users still applauding "how fast it writes code"? This video is worth rewatching by everyone who uses AI.
Quoted tweet: Mathematica @mathemetica (text as above)
119 replies · 229 reposts · 1K likes · 284.2K views
Bit Cook retweeted
奶昔🥤 @realNyarime·
"Capitalism Online." May 5, Xiamen, Fujian. A ninth-grade student says their English teacher, to motivate the class, created a classroom currency (nicknamed the "pound"). Students who did their homework conscientiously or scored well on exams earned pounds. Every two weeks the teacher auctioned off snacks, which could only be bought with pounds.

Within a single week, some students had completed their primitive accumulation of capital and even opened a "casino" and a "lending business" in class. There was even a brutal "triangular trade": a classmate who went into debt at the "casino" had to take out a loan, couldn't repay it in time, got liquidated by the capitalists, and ended up working as cheap labor paid for by others.

Because the teacher issued new pounds to top students every day, the class's money supply grew and triggered inflation: a bottle of cola that cost 5 pounds last week costs 10 this week.

On top of that, at the auctions the teacher granted top students "privileges": privileged students received double pound rewards from the teacher. So some students handed their homework to the privileged ones, who submitted it on their behalf to earn double pounds, and the two sides split the proceeds. Privileged students, earning pounds faster, rapidly accumulated wealth. Some even cornered the snack supply with large pound holdings and resold snacks to classmates for renminbi, indirectly pegging the pound to the RMB. Finally, the best-capitalized students opened a "bank" that pegged the "pound" directly to the renminbi, with a real-time floating exchange rate.
53 replies · 7 reposts · 229 likes · 55.6K views
Bit Cook retweeted
Seth Howes @SethSHowes·
I sequenced my genome at home, on my kitchen table. I wrote up exactly how I did it - the equipment, protocol, theory, and cost: iwantosequencemygenomeathome.com
108 replies · 764 reposts · 4.7K likes · 1.2M views
Bit Cook retweeted
Rey|判断位 x 英语自由 @ReyJudgementOS·
Stunning: a guy used AI to sequence his own genome at home.

What can a curious, agentic young person who knows how to learn AI tools accomplish? Taking decision-making power back from medical institutions. The author traced the mechanism behind an autoimmune disease running through multiple generations of his family, a mechanism no clinician had previously understood. When he started, he had no idea whether it would actually work. It did.

"Your genome is the most private data you own. You probably shouldn't let it leave your house." Seth Howes has published the complete protocol. Something once monopolized by large specialized institutions is now DIY.

Why? Curiosity (a family disease) + agency + AI.

Equipment?
1) A MinION sequencer (turning "reading DNA" from a capital-intensive activity into a tool-level capability)
2) Open-source DNA models (Evo2 and AlphaGenome)
3) A DGX Spark and a Mac Studio

Breakthroughs?
1) Sequencing costs keep falling (a Moore's-law-like curve): from hundreds of thousands of dollars → the $1,000 range; next stop, the $100 range.
2) AI's understanding of biological data is improving exponentially. Models like AlphaGenome mean not just "reading DNA" but beginning to "understand function."
3) The interface is getting simpler (MinKNOW + LLMs). One key line from the write-up: using Claude to generate BED files. Biology operations are being taken over by a language interface.

Link to the full write-up is in the replies. Well worth a try for university students.
Quoted tweet: Seth Howes @SethSHowes (text as above)
9 replies · 34 reposts · 162 likes · 20.4K views
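The "use Claude to generate BED files" step refers to the plain tab-separated interval format that targeted nanopore sequencing workflows consume. A minimal sketch of writing one in Python; the intervals below are hypothetical placeholders, not coordinates from Seth Howes's protocol:

```python
# BED coordinates are 0-based, half-open: [start, end).
# These intervals are hypothetical placeholders, not from the write-up.
regions = [
    ("chr6", 29900000, 33100000, "example_region_1"),
    ("chr17", 43000000, 43130000, "example_region_2"),
]

def to_bed(regions):
    # One tab-separated line per interval: chrom, start, end, name.
    return "".join(f"{c}\t{s}\t{e}\t{n}\n" for c, s, e, n in regions)

with open("targets.bed", "w") as fh:
    fh.write(to_bed(regions))
```

The format is simple enough that an LLM can reliably emit it, which is presumably why the write-up delegates this step to Claude.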
Bit Cook retweeted
kache @yacineMTB·
you can outsource your thinking but you cannot outsource your understanding
238 replies · 3.6K reposts · 16.1K likes · 2.1M views
Bit Cook retweeted
luthira @luthiraabeykoon·
We implemented @karpathy's MicroGPT fully on FPGA fabric. No GPU. No PyTorch. No CPU inference loop. Just a transformer burned into hardware, generating 50,000+ tokens/sec. The model is small, but the idea is not: inference does not have to live only in software 👇
272 replies · 703 reposts · 7.5K likes · 836.3K views
Bit Cook retweeted
Geek Lite @QingQ77·
Helps developers auto-generate the complete set of software-copyright (软著) application materials from their own project's real source code, so they no longer have to pay someone to prepare it. github.com/Fokkyp/Softwar…

This Codex Skill reads your project's code, analyzes the business logic, and then automatically generates the operation manual, the code materials (truncated per the first-30-pages / last-30-pages rule), and a summary of the application-form fields.

The code is extracted only from your own project; the AI doesn't invent any. During generation, it pauses for your confirmation at each key step: business terminology, application-form fields, code selection, and screenshot method. It finally outputs the operation manual as DOCX, the code materials as DOCX, and the application form as TXT, placed under 软件著作权申请资料/正式资料/ in the project directory.
4 replies · 33 reposts · 193 likes · 14K views
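The "first 30 pages / last 30 pages" truncation the tweet mentions is easy to sketch. This assumes the common filing convention of 50 lines per page; the actual skill may use a different page size:

```python
LINES_PER_PAGE = 50   # assumed page size; the actual skill may differ
PAGES_EACH_END = 30

def extract_filing_pages(source_lines):
    # Keep the first 30 and last 30 pages of source code; projects that
    # fall below the threshold are submitted in full.
    limit = LINES_PER_PAGE * PAGES_EACH_END
    if len(source_lines) <= 2 * limit:
        return source_lines
    return source_lines[:limit] + source_lines[-limit:]

demo = [f"line {i}" for i in range(5000)]
print(len(extract_filing_pages(demo)))  # 3000
```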
Bit Cook @bit_cook·
Vectors are AI's native language. Natural language is only there for human convenience, and it costs a great deal of efficiency.
0 replies · 0 reposts · 0 likes · 12 views
Bit Cook retweeted
alphaXiv @askalphaxiv·
“Recursive Multi-Agent Systems” Many multi-agent LLM systems rely on agents passing text back and forth. This paper argues for a different approach: make the agents recur together in latent space. Agents refine latent thoughts, pass hidden states to one another, and only decode text at the end. The key idea is that recursion scales the whole agent system, not just one model, and in their experiments this makes collaboration more accurate, faster, and much cheaper in tokens.
13 replies · 87 reposts · 494 likes · 25.5K views
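A toy sketch of the pattern the abstract describes: agents exchange hidden states across several recursion steps and only decode to text at the very end. The tanh mixing rule, vector sizes, and decode step here are invented for illustration; the paper's actual system is a learned transformer, not this.

```python
import math

def agent_step(own, peer, mix=0.5):
    # Each agent refines its latent by mixing in the peer's hidden state.
    return [math.tanh(a + mix * b) for a, b in zip(own, peer)]

def decode(latent):
    # Stand-in for the single text-decoding step at the very end.
    return "agree" if sum(latent) > 0 else "disagree"

# Hypothetical initial latent states for two agents.
h1, h2 = [0.1, -0.2, 0.3], [0.4, 0.0, -0.1]
for _ in range(4):  # recur together in latent space; no text in the loop
    h1, h2 = agent_step(h1, h2), agent_step(h2, h1)

print(decode(h1))
```

The token savings the abstract claims come from the loop: nothing is serialized to text until the final `decode` call.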
Bit Cook retweeted
Association for Computing Machinery
Happy Birthday to Claude Shannon, known by many as the “father of Information Theory.” Shannon was an American mathematician and electrical engineer. In 1948, he published A Mathematical Theory of Communication, which effectively created the field.
10 replies · 230 reposts · 665 likes · 36.7K views
Bit Cook retweeted
alphaXiv @askalphaxiv·
What if the model didn’t just use a computer, but actually was the computer? Meta AI introduces "Neural Computer", a model where computation, memory, and I/O are all inside one learned system. Their early prototype learns from screen recordings of terminals and desktops, and it can already imitate some basic computer behavior like rendering interfaces and responding to clicks or commands. But it still breaks on slightly harder tasks like reliable reasoning, stable memory, and reusable skills.
28 replies · 144 reposts · 918 likes · 154.8K views
Bit Cook retweeted
Nick Levine @status_effects·
New work with @AlecRad and @DavidDuvenaud: Have you ever dreamed of talking to someone from the past? Introducing talkie, a 13B model trained only on pre-1931 text. Vintage models should help us to understand how LMs generalize (e.g., can we teach talkie to code?). Thread:
169 replies · 356 reposts · 2.8K likes · 980.8K views
Bit Cook retweeted
Haider. @haider1·
Andrej Karpathy says computing may shift from classical software to neural systems. Instead of code running everything, neural nets could take raw video, audio, and context, then generate interfaces and actions in real time: "the CPU becomes the coprocessor, handling fixed tasks while neural nets run the show"
67 replies · 121 reposts · 886 likes · 69.1K views
Bit Cook retweeted
Andrej Karpathy @karpathy·
Fireside chat at Sequoia Ascent 2026 from a ~week ago. Some highlights:

The first theme I tried to push on is that LLMs are about a lot more than just speeding up what existed before (e.g. coding). Three examples of new horizons:

1. menugen: an app that can be fully engulfed by LLMs, with no classical code needed: input an image, output an image, and an LLM can natively do the thing.
2. Install .md skills instead of install .sh scripts. Why create a complex Software 1.0 bash script for e.g. installing a piece of software if you can write the installation out in words and say "just show this to your LLM"? The LLM is an advanced interpreter of English and can intelligently target installation to your setup, debug everything inline, etc.
3. LLM knowledge bases as an example of something that was *impossible* with classical code, because it's computation over unstructured data (knowledge) from arbitrary sources and in arbitrary formats, including simply text articles etc.

I pushed on these because in every new paradigm change, the obvious things are always in the realm of speeding up or somehow improving what existed, but here we have examples of functionality that either suddenly perhaps shouldn't even exist (1, 2) or was fundamentally not possible before (3).

The second (ongoing) theme is trying to explain the pattern of jaggedness in LLMs: how it can be true that a single artifact will simultaneously 1) coherently refactor a 100,000-line code base *and* 2) tell you to walk to the car wash to wash your car. I previously wrote about the source of this as having to do with the verifiability of a domain; here I expand on this as also having to do with economics, because revenue/TAM dictates what the frontier labs choose to package into training-data distributions during RL. You're either in the data distribution (on the rails of the RL circuits) and flying, or you're off-roading in the jungle with a machete, in relative terms.

Still not 100% satisfied with this, but it's an ongoing struggle to build an accurate model of LLM capabilities if you wish to practically take advantage of their power while avoiding their pitfalls, which brings me to...

The last theme is the agent-native economy: the decomposition of products and services into sensors, actuators, and logic (split across all of the 1.0/2.0/3.0 computing paradigms), how we can make information maximally legible to LLMs, some words on the quickly emerging agentic engineering and its skill set, related hiring practices, etc., possibly even hints/dreams of fully neural computing handling the vast majority of computation with some help from (classical) CPU coprocessors.
Quoted tweet: Stephanie Zhan @stephzhan

@karpathy and I are back! At @sequoia AI Ascent 2026. And a lot has changed. Last year, he coined “vibe coding”. This year, he’s never felt more behind as a programmer. The big shift: vibe coding raised the floor. Agentic engineering raises the ceiling. We talk about what it means to build seriously in the agent era. Not just moving faster. Building new things, with new tools, while preserving the parts that still require human taste, judgment, and understanding.

279 replies · 731 reposts · 5.5K likes · 786.3K views
Bit Cook retweeted
Warp @warpdotdev·
Warp is now open-source.
413 replies · 966 reposts · 7.8K likes · 2.8M views