
华丽
@yeahmetro
Building: https://t.co/3choTT6mvk github: https://t.co/D58qpvOpUX Creator of unplugin-devpilot, vue-hook-optimizer, tour-master.

I've been playing with React Native WebGPU recently and it's really flipping my perspective on animations.



Something deeply unpleasant happened at my company recently, and it has made me regret joining.

Our project's Makefile had grown very long, with logic piled on over time until it was unmaintainable. Worse, every `make` was a full rebuild, with no incremental compilation at all, so each build took over ten minutes. I couldn't stand it anymore, so I had CC rewrite it in Python with incremental-compilation support. I didn't dare merge it to the main branch; for three months it was only rolled out gradually within a small circle.

Then I got a new requirement to add a feature to the Makefile. I didn't want to maintain two versions, so I told my manager to pick one, or failing that, to just delete v2. He said no, he'd been using it the whole time and the incremental builds felt fast. I said if we keep v2, then we delete v1 and make v2 official. He agreed.

My assumption was that once it was official, everyone would fix problems together; it would be a shared, co-maintained tool. Instead, after the promotion, the build script became mine alone. Anyone who hit a problem came to me, saying: "You changed it, so you own it to the end."

I said it's supposed to be co-maintained; when you bring it to me, I just hand it to CC anyway, and I'm barely more familiar with this code than you are. In the time it takes to tell me, you could have told CC and had it fixed already. But they refused (nobody wanted to take on the liability), so the whole thing became my problem alone.

The lesson this taught me: don't do unsanctioned work, and don't optimize on your own initiative. However much entropy you see in a project, unless a superior explicitly assigns it to you, don't touch it. If you do, it's yours, and you own it to the end.
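The post never shows the script itself, but the core of incremental compilation is simple: skip a target whenever it is newer than all of its sources. A minimal sketch of that check (function and file names are hypothetical, not from the actual rewrite):

```python
import os

def needs_rebuild(target: str, sources: list[str]) -> bool:
    """Rebuild if the target is missing or older than any of its sources.

    This mirrors what `make` does with timestamps: a target whose
    modification time is at least as new as every source is up to date.
    """
    if not os.path.exists(target):
        return True
    target_mtime = os.path.getmtime(target)
    return any(os.path.getmtime(src) > target_mtime for src in sources)
```

A real build script would layer dependency graphs and parallel jobs on top, but even this one check is enough to turn a ten-minute full rebuild into a no-op when nothing changed.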




Left: Apollo 17, 1972. Right: Artemis II, 2026.

Two photographs taken by one of us, of all of us, over half a century apart. What's changed?



A few weeks ago we published our Memory Sparse Attention paper, a new way to give AI models long-term memory that actually works.

Today's LLMs/Agents forget. They can only hold so much context before things start falling apart. We built a system that lets a model remember up to 100 million tokens, the length of about a thousand books, and still find the right answer with less than 9% performance loss. On several benchmarks, our 4-billion parameter model even beats RAG systems built on models 58× its size.

The idea? Instead of searching a separate database and hoping the right info comes back (that's how RAG works), we built the memory directly into how the model thinks. It learns what to remember and what to ignore, end to end, no separate retrieval pipeline needed.

The response to the paper blew us away. Researchers and engineers everywhere were asking the same thing: "When can we see the code?" So we got to work, cleaned up the inference code, documented it, and made it ready for the community to dig in.

You asked for it. We open-sourced it. github.com/EverMind-AI/MSA
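The post contrasts this with RAG but doesn't show the mechanism. The general family of ideas can be sketched like this: rather than querying an external store, attention scores over per-block summaries pick which memory blocks the model attends to. A toy NumPy sketch under stated assumptions (mean-of-keys as the block summary, fixed block size; none of these names or shapes come from the MSA paper):

```python
import numpy as np

def sparse_memory_attention(query, memory_keys, memory_values,
                            block_size=4, top_k=2):
    """Toy sketch: score memory blocks by a cheap summary, then run
    ordinary attention only over the tokens of the top-k blocks.

    query:         (d,)    current query vector
    memory_keys:   (n, d)  keys of all memorized tokens
    memory_values: (n, d)  values of all memorized tokens
    """
    n, d = memory_keys.shape
    num_blocks = n // block_size
    # Summarize each block by its mean key (a stand-in for a learned summary).
    summaries = (memory_keys[: num_blocks * block_size]
                 .reshape(num_blocks, block_size, d)
                 .mean(axis=1))
    block_scores = summaries @ query
    chosen = np.argsort(block_scores)[-top_k:]  # indices of the top-k blocks
    # Gather the chosen blocks' tokens and attend over them with softmax.
    idx = np.concatenate([np.arange(b * block_size, (b + 1) * block_size)
                          for b in chosen])
    scores = memory_keys[idx] @ query / np.sqrt(d)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ memory_values[idx]
```

The point of the sketch is the cost structure: scoring scales with the number of blocks, not the number of memorized tokens, which is what makes attending over a 100M-token memory plausible at all. In the actual system the selection would be learned end to end rather than hand-coded.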
