Synslius

4.9K posts

@noirminded

Anything aesthetic? //*Producer

Earth · Joined May 2018
1K Following · 223 Followers
Pinned Tweet
Synslius@noirminded·
[image]
ZXX
感 受@ganshou111·
I've officially reached the next level: my boss is a Twitter mutual, my business partners are Twitter mutuals, and even my intern is a Twitter mutual.
Maarten@MaartenSlebos·
@noirminded Yeah fair, but I still think their own status page should reflect the status sooner :p
Maarten@MaartenSlebos·
Why do companies even have status pages if they don't work? Claude has clearly been down for the past hour, and the status page just isn't reporting anything.
Synslius@noirminded·
@istdrc Thoughts on Anthropic's recent change banning their coding-plan users from using third-party harnesses? Seems like my Claude Code agents are now unusable 🫩
stdrc@istdrc·
Hi, I’m RC. I previously built Kimi CLI at Moonshot AI. Now I’m building Slock, an agent-human collaboration platform for modern builders and teams.

Today, we're shipping a ton of new features and improvements in Slock: search, thread inbox, saved messages, message permalinks, pinned chats, server join links, a more consistent color system, and many smaller upgrades. More details in the thread below.
[image]
艾略特@elliotchen100·
Back in October last year, during OpenAI's dev day, I posted that the terminal they used in the demo was Ghostty Terminal; at the time I didn't expect this terminal to last long. Today I came across Claude Code founder Boris Cherny sharing how he uses Claude Code, and he and his whole team use Ghostty too. Boris runs 10 to 15 Claude Code sessions at once, which puts brutal demands on terminal performance, so his choosing Ghostty says it can genuinely take the load.

OpenAI uses it, Anthropic uses it; it feels like all of Silicon Valley is using it. There are actually plenty of terminals on the market now, each with a different style: iTerm2 goes the full-featured route, Warp does AI integration, WezTerm offers Lua programmability, Alacritty goes minimalist. But everyone ends up picking Ghostty, and that says one thing: in the eyes of Silicon Valley developers, rendering performance and a native experience are what matter most in a terminal, and flashy features aren't the first priority.

I dug into Ghostty's background, and it really does have sense. Founder Mitchell Hashimoto is a co-founder of HashiCorp and built Terraform and Vagrant. After IBM bought HashiCorp for $6.4 billion in 2024, he left to do something different: no more server-side, desktop software instead. Last month he joined the Vercel board; his influence in developer circles needs no introduction. Ghostty started as a side project he wrote for fun in the Zig language; he got more and more absorbed and spent over two years on it before releasing it publicly.

One very stubborn detail: he spent 70% of his development time on font rendering. Unicode, emoji, skin-tone emoji, CJK fonts, all built from scratch. The first time you open Ghostty you'll feel the fonts look especially comfortable; that's not an illusion, real work went into it.

The most technically notable point: Ghostty isn't the one-codebase-runs-everywhere approach. The core is a Zig library called libghostty, but the macOS UI is written in native Swift and the Linux side uses GTK4. So on a Mac it behaves exactly like Apple's own apps: Mission Control, font smoothing, and shortcuts all line up, with none of that "wrapper" feel.

Its only real flaw used to be the lack of search; people in the comments of my last post complained you couldn't Ctrl+F. Version 1.3 fixed that last month: Cmd+F searches directly, implemented on a separate thread so it doesn't block rendering.

A quick comparison of the mainstream terminals, for reference:
Ghostty: fast, native, works out of the box, strong compatibility, best experience on macOS. Downside: no Windows support. But honestly, what does a terminal need AI for? Worth trying; odds are once you try it you won't want to switch back.
iTerm2: the veteran, most complete feature set, but CPU rendering and slow startup; it feels a bit dated in 2026.
Warp: decent AI features, but the core isn't open source and you have to log in to use it. An account, for a terminal?
WezTerm: written in Rust, powerful, but configuration means writing Lua, the learning curve is steep, and the maintainer's release cadence isn't very steady.
Alacritty: genuinely fast, but it doesn't even have tabs; you need tmux alongside it. For tinkerers.

Ghostty now sees 1 million weekly downloads on macOS and hit 45K GitHub stars in 15 months. In the AI coding era the terminal has become developers' main battleground again, and expectations for it are higher. Ghostty landed right on that moment.
[image]
Boris Cherny@bcherny

7. Terminal & Environment Setup

The team loves Ghostty! Multiple people like its synchronized rendering, 24-bit color, and proper unicode support.

For easier Claude-juggling, use /statusline to customize your status bar to always show context usage and current git branch. Many of us also color-code and name our terminal tabs, sometimes using tmux — one tab per task/worktree.

Use voice dictation. You speak 3x faster than you type, and your prompts get way more detailed as a result. (hit fn x2 on macOS)

More tips: code.claude.com/docs/en/termin…

Will@williililii·
@om_patel5 you can call it Kevin mode
[GIF]
Om Patel@om_patel5·
I taught Claude to talk like a caveman to use 75% less tokens.

normal claude: ~180 tokens for a web search task
caveman claude: ~45 tokens for the same task

"I executed the web search tool" = 8 tokens
caveman version: "Tool work" = 2 tokens

every single grunt swap saves 6-10 tokens. across a FULL task that's 50-100 tokens saved

why does it work? caveman claude doesn't explain itself. it does its task first. gives the result. then stops.

no "I'd be happy to help you with that."
no "Let me search the web for you"
no more unnecessary filler words

"result. done. me stop."

50-75% burn reduction

with usage limits getting tighter every week this might be the most practical hack out there right now
[image]
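A minimal sketch of what this might look like in practice, assuming the trick is simply a terse-output system instruction. The messages.create call below is the Anthropic Python SDK's standard interface, but the model id, the prompt wording, and the sample output are illustrative assumptions, not Om Patel's actual setup:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Hypothetical "caveman" instruction; the exact wording is an assumption.
CAVEMAN_SYSTEM = (
    "Talk like caveman. No filler, no apologies, no explanations. "
    "Do task first, report result in fewest possible words, then stop."
)

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # illustrative model id
    max_tokens=256,
    system=CAVEMAN_SYSTEM,
    messages=[{"role": "user", "content": "List the three largest moons of Jupiter."}],
)
print(response.content[0].text)  # e.g. "Ganymede. Callisto. Io. Done."
```

In Claude Code itself, the equivalent move would presumably be dropping a similar instruction into the project's CLAUDE.md so it applies to every session.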
Synslius retweeted
alphaXiv@askalphaxiv·
Scaling Attention to 100M context!? Memory Sparse Attention introduces an idea where, instead of rereading an entire 100M-token entry, the model learns to jump straight to the relevant memories and reason from them end-to-end.

More specifically, it first encodes documents into compressed memory slots, then for each question it uses a learned router to score which chunks are actually relevant, pulls only the top few, and runs normal attention over that tiny assembled context. So the model's compute grows with "how much it needs to look at", not "how much memory exists". This retrieval step is trained jointly with answer generation, so memory lookup is part of the model itself, decoupling memory capacity from reasoning cost.
[image]
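The tweet only sketches the mechanism, so here is a toy PyTorch illustration of the routing idea as described, not the paper's actual architecture; the router form, dimensions, and top-k value are all assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

d, num_slots, slot_len, top_k = 64, 1000, 8, 4

router = nn.Linear(d, d, bias=False)          # learned scorer (assumed form)
memory = torch.randn(num_slots, slot_len, d)  # compressed memory slots
query = torch.randn(1, d)                     # question representation

# Route: score every slot against the query, keep only the top few.
slot_summary = memory.mean(dim=1)                    # (num_slots, d)
scores = router(query) @ slot_summary.T              # (1, num_slots)
top = scores.topk(top_k, dim=-1).indices.squeeze(0)  # (top_k,)

# Assemble a tiny context from the selected slots and run normal attention.
# Compute scales with top_k * slot_len, not with num_slots.
context = memory[top].reshape(-1, d)                 # (top_k * slot_len, d)
attn = F.softmax(query @ context.T / d ** 0.5, dim=-1)
out = attn @ context                                 # (1, d)
```

In the real system the router is trained jointly with answer generation (which requires a differentiable path through the selection, e.g. weighting the selected slots by their softmaxed scores), and that joint training is what makes the memory lookup part of the model rather than a bolted-on retrieval step.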
Synslius retweeted
EverMind@evermind·
A few weeks ago we published our Memory Sparse Attention paper, a new way to give AI models long-term memory that actually works.

Today's LLMs/Agents forget. They can only hold so much context before things start falling apart. We built a system that lets a model remember up to 100 million tokens, the length of about a thousand books, and still find the right answer with less than 9% performance loss. On several benchmarks, our 4-billion parameter model even beats RAG systems built on models 58× its size.

The idea? Instead of searching a separate database and hoping the right info comes back (that's how RAG works), we built the memory directly into how the model thinks. It learns what to remember and what to ignore, end to end, no separate retrieval pipeline needed.

The response to the paper blew us away. Researchers and engineers everywhere asking the same thing: "When can we see the code?" So we got to work, cleaned up the inference code, documented it, and made it ready for the community to dig in.

You asked for it. We open-sourced it. github.com/EverMind-AI/MSA
Synslius@noirminded·
@troyhua Troy’s smooth visual as always
Synslius@noirminded·
@lydiahallie One 1-hour session bumped weekly usage by 17%; genuinely had no clue what was going on
[image]
Lydia Hallie ✨@lydiahallie·
We're aware people are hitting usage limits in Claude Code way faster than expected. Actively investigating, will share more when we have an update!
Synslius retweeted
ye@kanyewest·
BULLY ON THE WAY NO AI
[image]