Monk Zero

1.9K posts


@NoCommas

@antigma_labs, prev: @awsCloud, @Meta, @Mysten_Labs. A Turing Complete mind, wandering the world of Gödel Incompleteness.

Latent Space · Joined July 2012
1K Following · 1.4K Followers
Pinned Tweet
Monk Zero @NoCommas
The only way we humans are able to communicate and understand each other is that, across space and time, we are one and all. Inspired by "The Egg" by Andy Weir galactanet.com/oneoff/theegg_…
[image]
6 replies · 2 reposts · 15 likes · 4K views
Monk Zero @NoCommas
@ludwigABAP @Yuchenj_UW Yep, actually I think this has somehow become a better signal that a candidate has good fundamentals. Public GitHub projects and profiles mean a lot less now. In my experience working at most top tech companies, this remains one of the top indicators, regardless of what people say.
0 replies · 0 reposts · 1 like · 27 views
Yuchen Jin @Yuchenj_UW
I'm so glad AI killed LeetCode interviews. For 10 years, tech companies made every engineer grind the same puzzles and prove they could invert a binary tree from memory. Today, the dumbest AI model can walk in and one-shot the entire interview. Thank you, AI.
221 replies · 153 reposts · 2.9K likes · 663.1K views
Monk Zero @NoCommas
@mycharmspace Ah, I only just found out you're on Twitter too 😂. Congrats 🎉! Search is indeed my most-used Grok feature ❤️
0 replies · 0 reposts · 1 like · 56 views
Tianyi Zhang @mycharmspace
Today is my last day at xAI. I joined xAI a year ago and had the pleasure of leading the search and factuality post-training team. Over time, we developed so many recipes and engineering co-optimizations, making Grok the best AI for search and real-time agents. I am also particularly proud of working with a small group of talented people delivering the recent iterations of the instant mode of Grok, the one I personally liked and used the most. My thanks to all the friends and teammates for their support and help over the past year. They are among the brightest minds I've met in my career. I am sure the team will continue the mission to make Grok better and understand the universe.
84 replies · 10 reposts · 648 likes · 82.9K views
Monk Zero @NoCommas
@antirez This is the way 🫡. Next is bidirectional multi-stream.
0 replies · 0 reposts · 0 likes · 155 views
antirez @antirez
Now DS4 implements the OpenAI Responses API and attempts to match the IDs in order to continue from the live KV cache, without the effort required in the chat-completions API code path.
8 replies · 3 reposts · 166 likes · 10.5K views
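The continuation trick antirez describes can be sketched in a few lines. This is not DS4's actual implementation (that lives in C in the DS4 repo); it is a hedged, language-agnostic illustration of the idea that if a stored response ID still has a live KV cache for a token prefix, only the suffix of the new prompt needs prefilling. All names here are hypothetical.

```python
# Illustrative sketch: reuse a live KV cache keyed by response ID, in the
# spirit of the Responses API's previous_response_id chaining.
class KVCacheStore:
    def __init__(self):
        self._cache = {}  # response_id -> token prefix whose KV state is live

    def save(self, response_id, tokens):
        self._cache[response_id] = list(tokens)

    def tokens_to_prefill(self, previous_response_id, full_prompt_tokens):
        """Return only the suffix that still needs prefilling.

        If the stored prefix for previous_response_id matches the head of
        the new prompt, its KV entries need not be recomputed."""
        cached = self._cache.get(previous_response_id, [])
        if full_prompt_tokens[: len(cached)] == cached:
            return full_prompt_tokens[len(cached):]
        return full_prompt_tokens  # cache miss: prefill everything

store = KVCacheStore()
store.save("resp_1", [1, 2, 3])
# Follow-up request reuses the cached prefix; only [4, 5] must be prefilled.
print(store.tokens_to_prefill("resp_1", [1, 2, 3, 4, 5]))  # [4, 5]
```

The chat-completions path, by contrast, resends the whole conversation each turn, which is why matching IDs against a live cache saves the prefill work.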
shafu @shafu0x
forward deployed engineer just means the guy is not fucking autistic
99 replies · 153 reposts · 7.3K likes · 565.7K views
Monk Zero @NoCommas
@jonasgeiping Got a feeling Thinky and the OpenAI Realtime API already do something like this. Great work, this direction feels right.
0 replies · 0 reposts · 0 likes · 20 views
Jonas Geiping @jonasgeiping
Finally, we find that models with many internal streams allow us to more easily monitor their thinking, for example concerning evaluation awareness. With many parallel internal streams, it would be my hope that the model continues to subvocalize concerns in side-streams, even if the main CoT/thinking stream is occupied with solving a particular task.
[GIF]
2 replies · 2 reposts · 36 likes · 3.7K views
Jonas Geiping @jonasgeiping
We're training models wrong, and it's due to ChatGPT. Even the modern coding agents used daily still use message-based exchanges: they send messages to users, to themselves (CoT), and to tools, and receive messages in turn. This bottlenecks even very intelligent agents to a single stream. The models cannot read while writing, cannot act while thinking, and cannot think while processing information. In our new paper, see below, we discuss LLMs with parallel streams. We show that multi-stream LLMs can:
🔵 Be created by instruction-tuning for the stream format
🔵 Simplify user and tool-use UX, removing many pain points with agents and chat models (such as having to interrupt the model to get a word in)
🔵 Be fast: they can predict and read tokens in all streams in parallel in each forward pass, improving latency
🔵 More easily encode a separation of concerns, improving security
🔵 Provide a legible form of parallel/continuous reasoning with many internal streams. Even if the main CoT stream is accidentally pressured or too focused on a particular task to voice concerns, other internal streams can subvocalize concerns that would otherwise not be verbalized.
Does this sound related to a recent Thinky post? :) Yes, but I don't feel so bad about being outshipped by 23 hours with such a cool report on their side. I'll link a 2nd thread below with a more direct comparison. I actually think both are complementary in interesting ways.
[GIF]
41 replies · 168 reposts · 1.4K likes · 150.7K views
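The paper's actual stream format isn't shown in the thread, but the core idea (one decoded sequence carrying several named streams, each recoverable on its own) can be illustrated with a toy round-robin tagging scheme. Everything below is a hypothetical sketch, not the paper's format.

```python
# Toy illustration of parallel streams: each token is tagged with its stream
# name, so a main CoT stream and a monitoring side-stream interleave in one
# sequence yet remain separately readable.
from collections import defaultdict

def interleave(streams):
    """Round-robin tokens from named streams into one tagged sequence."""
    out = []
    iters = {name: iter(toks) for name, toks in streams.items()}
    while iters:
        for name in list(iters):
            try:
                out.append((name, next(iters[name])))
            except StopIteration:
                del iters[name]  # stream exhausted
    return out

def split(tagged):
    """Recover each stream from the tagged sequence."""
    streams = defaultdict(list)
    for name, tok in tagged:
        streams[name].append(tok)
    return dict(streams)

tagged = interleave({"main": ["solve", "step1"], "monitor": ["concern"]})
print(split(tagged)["monitor"])  # ['concern']
```

A real multi-stream model would predict tokens for all streams in a single forward pass rather than round-robining, but the tagging-and-splitting view is what makes side-stream monitoring legible.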
Iridescence @iridescence_dev
It's worth it, even if the network is the ultimate bottleneck. Users should have high-quality software. Taking 0.5 seconds to load and 270-300 MB at startup for a TUI is completely unacceptable when the competition, Codex CLI (Rust), can do it in a fraction of the time and half the RAM. Software engineers have a duty to demand higher-quality software that people can actually love using, and to stop making excuses.
1 reply · 0 reposts · 3 likes · 87 views
Joel 🇦🇺 @ptr_to_joel
holy wow they merged it
[image]
138 replies · 189 reposts · 4.4K likes · 820.1K views
Andras Bacsai @heyandras
We made a fake repo with fake bounties, and the bots are submitting fake PRs, so we know who is fake and can ban them from the Coolify repo. IQ over 1000
[image] [image]
194 replies · 499 reposts · 10.6K likes · 497.6K views
Monk Zero @NoCommas
@mitsuhiko Yep. Rust-based agents should focus on resource footprint and reliability; there is room for both.
1 reply · 0 reposts · 0 likes · 129 views
Armin Ronacher ⇌ @mitsuhiko
Pi wouldn't make any sense in Rust or Go. Extensibility is key to it. That leaves Ruby, Python, JS, PHP for the most part, unless you want to ship an interpreter. None of those languages have any benefit over Node.
92 replies · 17 reposts · 555 likes · 146.9K views
Monk Zero @NoCommas
@satory_ua @ThePrimeagen It has gotten too good recently, since GPT-5.2. I miss the time when it was super easy to spot and filter slop Rust.
1 reply · 0 reposts · 0 likes · 166 views
ThePrimeagen @ThePrimeagen
Current meta
[image]
59 replies · 50 reposts · 1.8K likes · 71.6K views
Monk Zero @NoCommas
@SIGKITTEN @thdxr Interesting how different teams have very different priorities; building an SDK was actually where we started, and everything grew from how to best interact with the LLM. It never occurred to me to use any third-party client SDK library.
0 replies · 0 reposts · 0 likes · 107 views
SIGKITTEN @SIGKITTEN
@thdxr finally big enough to remove ai-sdk!
5 replies · 0 reposts · 36 likes · 1.8K views
dax @thdxr
we're working on a library to abstract over all the llm providers
there are very few teams that have dealt with the quirks between providers at the scale we have
it's written in effect but will also have a vanilla api
progress is in the opencode repo under packages/llm
111 replies · 18 reposts · 1.7K likes · 219.3K views
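The shape of a provider-abstraction layer like the one dax describes can be sketched briefly. To be clear, the real library is written in Effect (TypeScript) and its API is not shown in the tweet; this is a hypothetical Python toy with invented names, illustrating only the pattern: one common message shape, with each provider adapter absorbing its own quirks.

```python
# Hypothetical sketch of a provider-abstraction pattern: every provider
# implements one common interface, and quirk handling lives in the adapter.
from dataclasses import dataclass

@dataclass
class Message:
    role: str      # "user" | "assistant" | "tool"
    content: str

class Provider:
    """Common interface every adapter implements."""
    def complete(self, messages):
        raise NotImplementedError

class EchoProvider(Provider):
    """Stand-in adapter. A real one would translate provider quirks here:
    system-prompt placement, tool-call encoding, stop-reason naming, etc."""
    def complete(self, messages):
        return Message("assistant", messages[-1].content.upper())

def chat(provider: Provider, text: str) -> str:
    """Caller code stays provider-agnostic."""
    return provider.complete([Message("user", text)]).content

print(chat(EchoProvider(), "hello"))  # HELLO
```

The payoff of the pattern is that swapping providers means swapping one adapter class, while all calling code keeps speaking the common message shape.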
Monk Zero @NoCommas
@badlogicgames @rebelcrayon "Inline this" is probably my most-typed phrase recently. I feed the entire deep-modules lecture into both and still see this shit from time to time.
0 replies · 0 reposts · 3 likes · 174 views
Mario Zechner @badlogicgames
calling it slopex from now on so it can join its sibling slopus.
[image]
55 replies · 20 reposts · 1.5K likes · 121.4K views
Monk Zero @NoCommas
@tenderizzation He and Linus really built the pillars of our current digital world.
0 replies · 0 reposts · 0 likes · 697 views
Monk Zero @NoCommas
@thdxr There is always an ancient Chinese proverb for this kind of thing: 守正出奇 (hold to the orthodox, win by the unexpected).
0 replies · 0 reposts · 0 likes · 88 views
dax @thdxr
i pay attention to:
99%: using our product in dumb/simple ways and will never change behavior
0.01%: aliens who are showing us the distant future
ignore the remainder: "pro" users who think they invented some clever workflow every week but get less done than the 99% group
61 replies · 27 reposts · 873 likes · 63K views
Kiana @orzxh97
@badlogicgames Oh, I'm not saying that to you. I'm agreeing with you. Sorry that wasn't clear.
1 reply · 0 reposts · 4 likes · 1.6K views
Mario Zechner @badlogicgames
but it's cool that frontier models are now basically regressing. maybe all this madness will come to an end soon.
28 replies · 13 reposts · 392 likes · 34K views
antirez @antirez
@NoCommas Unfortunately this is what the official DeepSeek API returns; I don't know why they are masked/placeholdered. So the test checks that the continuation matches, without being able to check the logit values. But it's enough to spot issues, given that we test with long contexts.
1 reply · 0 reposts · 8 likes · 2.7K views
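The testing idea antirez describes (logprobs masked by the API, so the test compares the generated continuation against a known-good reference) can be sketched as a simple divergence check. This is not DS4's actual test code; the function below is a hypothetical illustration.

```python
# Sketch of continuation-matching: with logit values unavailable, compare the
# model's continuation to a reference token-by-token and report where they
# first diverge. Long contexts make an exact match a strong signal.
def first_divergence(generated, expected):
    """Return the index of the first mismatching token, or -1 on full match."""
    for i, (g, e) in enumerate(zip(generated, expected)):
        if g != e:
            return i
    if len(generated) != len(expected):
        return min(len(generated), len(expected))  # one ran short
    return -1

print(first_divergence(["the", "cat"], ["the", "cat"]))  # -1
print(first_divergence(["the", "dog"], ["the", "cat"]))  # 1
```

Reporting the divergence index, rather than just pass/fail, helps localize numerical bugs that only surface deep into a long context.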
antirez @antirez
Welcome to DS4, a specialized inference engine for DeepSeek v4 Flash. github.com/antirez/ds4 This project would have been impossible without the existence of llama.cpp and GGML and the work of @ggerganov and all the other contributors. Thanks!
44 replies · 214 reposts · 1.5K likes · 190.8K views
Monk Zero @NoCommas
@eshear 🫡 You built something that outlived your own fame.
[image]
0 replies · 0 reposts · 1 like · 414 views
Emmett Shear @eshear
It's an honor just to be nominated
218 replies · 251 reposts · 9.9K likes · 617.5K views