Corben Leo
@hacker_
I hack stuff legally; co-founder @boringmattress

There are areas in pentesting where AI excels, and excels independently. There are other areas where AI works well, but needs a bit of scaffolding. And finally, there are areas where it struggles and needs a lot of external structure. Albert Ziegler, our Head of AI, breaks down these pros and cons and explains how XBOW leverages the strengths of AI agents while mitigating their weaknesses in his new blog post: bit.ly/4cb0o90


Every year someone names a new bottleneck for AI compute scaling. @dylan522p on why power isn't gonna be the big one over the next few years: fundamentally there are many different ways to generate power (rather than just one company that can produce the EUV tools needed for the chips themselves), and the supply chains are simpler and easier to ramp. You can do jet engines bolted to the ground. Ship engines. Diesel recips from auto manufacturers with declining volumes. Fuel cells. Each category alone delivers tens of gigawatts by the end of the decade. Combined, hundreds.

Even if energy costs double, a GPU goes from $1.40/hr to $1.50/hr. Nobody notices a dime when the models are improving so fast the value dwarfs the cost.

Even if you don't add more power but simply add more batteries, you can unlock 20% more of the US's terawatt-scale power grid. That's because grid utilities have to be sized for a peak summer load that hits only a few hours a year. With enough batteries you can still make that guarantee, even without turning on more power plants!

Fundamentally, there are a lot of different ways to bring power online over the next few years. Building more logic and memory is far more difficult and centralized, so that's where Dylan thinks the bottleneck will be.
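The arithmetic here is easy to sanity-check. A back-of-the-envelope sketch in Python; the $0.10/hr electricity share and the ~1 TW grid size are assumptions implied by the post, not sourced figures:

```python
# Back-of-the-envelope check on the two numbers above.
# All inputs are illustrative assumptions, not sourced data.

gpu_all_in = 1.40          # assumed all-in GPU cost, $/hr
energy_share = 0.10        # assumed electricity portion of that, $/hr

# Doubling the electricity price adds one more energy share on top.
print(f"GPU at 2x energy cost: ${gpu_all_in + energy_share:.2f}/hr")  # -> $1.50/hr

# Peak shaving: the grid is sized for a summer peak hit a few hours a year,
# so flattening that peak with batteries frees otherwise-idle capacity.
grid_peak_gw = 1000        # ~terawatt-scale US grid (order of magnitude)
unlocked_gw = 0.20 * grid_peak_gw  # the post's 20% figure
print(f"Capacity unlocked by batteries: ~{unlocked_gw:.0f} GW")
```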

claude code has always been open source for anyone who knows the strings command 😁
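The quip refers to the `strings` utility, which dumps the runs of printable bytes embedded in any shipped binary, so bundled code is never fully hidden. A minimal Python equivalent of `strings -n 8`, as a sketch:

```python
import re
import sys

def strings(path, min_len=8):
    """Yield runs of printable ASCII at least min_len bytes long,
    like the classic `strings` command."""
    data = open(path, "rb").read()
    for match in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, data):
        yield match.group().decode("ascii")

if __name__ == "__main__":
    for s in strings(sys.argv[1]):
        print(s)
```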



The paper is out. It's called MSA: Memory Sparse Attention.

What it is, in one sentence: it gives large models native ultra-long memory. Not bolted-on retrieval, not brute-force context-window expansion, but memory grown directly into the attention mechanism and trained end to end.

Why don't the existing approaches work?

RAG is essentially an open-book exam. The model remembers nothing itself; it flips through its notes on the fly. Accuracy depends on retrieval quality, and speed depends on data volume. Once information is scattered across dozens of documents and needs cross-document reasoning, it falls apart.

Linear attention and KV-cache compression are essentially "compressed memory". It remembers, but the harder it compresses the blurrier it gets, and long contexts lose information.

MSA takes a completely different approach:

→ No compression, no external add-ons: the model learns to "look at what matters". The core is a scalable sparse-attention architecture with linear complexity. Memory can grow 10x without compute costs blowing up.

→ The model knows where each memory came from and when it's from. A positional encoding called document-wise RoPE lets it natively understand document boundaries and temporal order.

→ Fragmented information can still be chained into reasoning. A Memory Interleaving mechanism lets the model do multi-hop reasoning across memory fragments scattered all over: not just retrieving one relevant record, but linking clues into a chain.

The results?
· Scaling from 16K to 100M tokens, accuracy degrades by less than 9%
· A 4B-parameter MSA model beats top 235B-class RAG systems on long-context benchmarks
· 100M-token inference runs on 2 A800s. That's not lab-only; it's a cost a startup can afford.

Put simply, LLMs so far have been extremely smart geniuses with goldfish memory. What MSA wants is to make them truly remember.

We've put it on GitHub. The algorithm folks worked hard on this, so a star would mean a lot. 🌟👀🙏 github.com/EverMind-AI/MSA
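For intuition on the "learn to look at what matters" idea, here is a generic block-sparse attention sketch with cost linear in memory length. This is not MSA's actual architecture (see the repo for that); the mean-pooled block summaries and top-k block routing are illustrative assumptions:

```python
import numpy as np

def block_sparse_attention(q, K, V, block_size=64, top_k=4):
    """One query attends only to the top_k most relevant memory blocks,
    so attention work scales with top_k * block_size, not full memory."""
    n, d = K.shape
    n_blocks = n // block_size
    K_blk = K[: n_blocks * block_size].reshape(n_blocks, block_size, d)
    V_blk = V[: n_blocks * block_size].reshape(n_blocks, block_size, d)

    # Route by similarity between the query and each block's mean key
    # (an illustrative summary; MSA's actual selection rule may differ).
    block_scores = K_blk.mean(axis=1) @ q              # (n_blocks,)
    chosen = np.argsort(block_scores)[-top_k:]

    # Ordinary softmax attention, restricted to the chosen blocks.
    K_sel = K_blk[chosen].reshape(-1, d)
    V_sel = V_blk[chosen].reshape(-1, d)
    logits = K_sel @ q / np.sqrt(d)
    w = np.exp(logits - logits.max())
    w /= w.sum()
    return w @ V_sel

# Toy run: 100K-token memory, but each query touches only 4 * 64 = 256 keys.
rng = np.random.default_rng(0)
n, d = 100_000, 32
K, V = rng.standard_normal((n, d)), rng.standard_normal((n, d))
out = block_sparse_attention(rng.standard_normal(d), K, V)
print(out.shape)  # (32,)
```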