Egor Konovalov

52 posts

@foldll

Monte Carlo · Joined May 2024
196 Following · 274 Followers
0xmer @0xmer_
She told me she wanna go to Benihana I'm taking her to the
[image]
3 replies · 0 reposts · 45 likes · 1.2K views
Egor Konovalov @foldll
@bskdany why twitter just now discovered shirts that i've seen in china uniqlo this summer
0 replies · 0 reposts · 3 likes · 4.4K views
Egor Konovalov @foldll
built kernelscope - a CUDA kernel debugger that maps source lines to PTX + GPU events. runs entirely on @modal btw. analyzes warp state, memory coalescing, warp divergence, SM occupancy, and gives hints for perf. more info & pics in next post @charles_irl @can. this is my job application, part 2
[image]
13 replies · 18 reposts · 351 likes · 24.5K views
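The source-to-PTX mapping kernelscope describes can be approximated by parsing the `.loc` directives that `nvcc --ptx -lineinfo` emits into the PTX text. The sketch below is a hypothetical illustration of that idea, not the tool's actual code; `ptx_line_map` is a made-up helper name.

```python
import re

def ptx_line_map(ptx_text):
    """Map PTX instruction line numbers to CUDA source line numbers.

    With `-lineinfo`, nvcc interleaves directives of the form
    `.loc file_index line column` into the PTX. Every instruction
    after a `.loc` belongs to that source line until the next `.loc`.
    """
    mapping = {}   # PTX line number -> source line number
    cur = None     # source line currently in effect
    for n, line in enumerate(ptx_text.splitlines(), 1):
        m = re.match(r"\s*\.loc\s+\d+\s+(\d+)\s+\d+", line)
        if m:
            cur = int(m.group(1))
        elif cur is not None and line.strip() and not line.strip().startswith("//"):
            mapping[n] = cur
    return mapping
```

Profiling front ends (e.g. Nsight Compute's source view) lean on the same debug metadata to attribute per-instruction GPU events back to source lines, which is presumably what the tweet's source-to-PTX mapping feeds into.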
Egor Konovalov @foldll
there were only two problems: the Linear connector didn't work smh, and I had to install Chrome so the Claude for Chrome extension would work. in the end i found a cli importer and docs, and that did the trick
0 replies · 0 reposts · 1 like · 223 views
Egor Konovalov @foldll
yet again talking about my reading list. finally got access to @claudeai cowork and tasked him with organizing my tabs right away. now i have a @linear workspace with all my blogs/papers/etc organized by projects, and each entry is an issue where i can change status, dates and more. each JD is an issue in Job Application with references to other projects/issues, so I can cross something off my reading list while preparing for an interview. seems like I finally solved my problem, other than solving FOMO
[image]
1 reply · 0 reposts · 5 likes · 318 views
Christian @creet_z
Can the unemployed people get off claude for the day? us employed people actually have stuff to do, unlike you guys
6 replies · 0 reposts · 86 likes · 3.9K views
Egor Konovalov @foldll
genuinely dont understand why this post didnt activate @tekbog sleeper agent
1 reply · 0 reposts · 3 likes · 302 views
Egor Konovalov @foldll
Oh, you're using Copilot? Everyone's on Cursor now. Just kidding, we're all on Windsurf. We're using Cline. We're using Aider. We have an in-house MCP server mesh with custom tool schemas but wait, OpenCode just dropped so we're migrating to that instead. Our PM is on Gemini CLI. The team lead was on Codex but now she's back to copy-pasting into ChatGPT. If you're not on Amp, you're ngmi. Our intern is building on Goose for our internal tooling. Our CFO approved Claude Max so now we're porting our workflows to computer use. Our CTO is working on an agent-less RAG pipeline so we won't need vibe coding anymore. Our CEO thinks we're talking about actual vibrations. We're building clankercloud.
2 replies · 0 reposts · 17 likes · 2.8K views
Egor Konovalov @foldll
chatgpt was right. thanks to everyone who liked/followed/dmed and encouraged me in any way lately!
[image]
0 replies · 0 reposts · 10 likes · 669 views
Egor Konovalov @foldll
@syntrocode tried to do this actually! but models (both oss and proprietary, even manus) struggle with 500 tabs. maybe skill issue, ngl
0 replies · 0 reposts · 2 likes · 317 views
Clinon 🇺🇸 @syntrocode
@foldll Train a model on the information and have an AI prompt you and converse with you in "the best way to learn".
1 reply · 0 reposts · 0 likes · 328 views
Egor Konovalov @foldll
gm my fellow gpu lovers, what should i do next? ml perf/infra interviews are coming up and i'm trying to lock in. i have a loong reading list on cuda, dl framework internals, sysdis, some perf case studies and a bunch of puzzles, so requesting tips on organizing/prioritizing & must-reads
7 replies · 1 repost · 92 likes · 8.1K views
secemp @secemp9
oh you're using anthropic api? oh no now it's claude max sub. actually it's too ratelimited, we use glm now. we switched to grok code fast. gpt5.2 IS THE BEST RIGHT NOW wait no, it takes 20min for a single prompt, nvm we go back to claude models. oh, turns out they give us 2k worth of tokens for 200 per month on the max ultra pro sub, nice. nvm they ban you if you work at a competitor. oh, they also ban you if you use it outside of claude code, we go back to glm. didn't you hear? we're using claude black sub now, it's from opencode-
22 replies · 12 reposts · 443 likes · 29.3K views
Egor Konovalov @foldll
@AISloppyJoel a little more useful is what i already read: distributedhatemachine.github.io/til/ because i sort out low-signal posts/blogs and write a short description. i used to track this kind of stuff in obsidian, but idk, i remember almost everything i read and can find it fast, so i don't see the point in this
0 replies · 0 reposts · 2 likes · 37 views
joel @AISloppyJoel
@foldll What’s your reading list king
2 replies · 0 reposts · 1 like · 470 views
Egor Konovalov @foldll
@AISloppyJoel idk how to share, but i can probably pipe sidebar tabs into a text file. now i have so much to read that i vibecoded some tool to visualize my tabs so i can take skill paths like in videogames, but it doesn't cut it
[image]
1 reply · 0 reposts · 6 likes · 523 views
Egor Konovalov @foldll
Great post! But we often see GPUs that report 'Healthy' (full clocks & links up) but suffer from massive internal instruction replays or silent packet retransmits. If you skip active benchmarks at boot, isn't there a high risk of one 'hollow' node tanking an entire distributed cluster? Or a node might have 'healthy' links, but if the cloud provider allocated it on a different spine switch than the rest of my cluster, my training run effectively dies.
0 replies · 0 reposts · 13 likes · 3.2K views
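The "hollow node" concern above boils down to: passive health signals (full clocks, links up) are cheap but can miss a degraded GPU, which is why the reply argues for active benchmarks at boot. A minimal sketch of the passive half is below, assuming input shaped like `nvidia-smi --query-gpu=index,clocks.sm,clocks.max.sm --format=csv,noheader,nounits` output; the function name and the 0.9 clock-fraction threshold are made up for illustration.

```python
import csv
import io

def parse_gpu_health(csv_text, min_clock_frac=0.9):
    """Flag GPUs whose SM clock sits well below its rated max.

    Expects CSV rows of `index, clocks.sm, clocks.max.sm` (nounits).
    Returns the indices of suspect GPUs. This is the cheap, passive
    check; it can still pass on a 'hollow' node, so a real fleet
    would pair it with an active benchmark (e.g. a timed matmul
    per GPU, plus NCCL all-reduce across the node) before admitting
    the node into a training cluster.
    """
    suspect = []
    for row in csv.reader(io.StringIO(csv_text)):
        if not row:
            continue  # tolerate blank lines in tool output
        idx, sm, sm_max = (field.strip() for field in row)
        if int(sm) < min_clock_frac * int(sm_max):
            suspect.append(int(idx))
    return suspect
```

The spine-switch point in the reply is the network-side analogue: each link can benchmark clean in isolation while cross-rack placement still caps all-reduce bandwidth, so the active check has to exercise the actual collective path, not just each GPU alone.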
Jonathon Belotti @jonobelotti_IO
GPUs are unreliable at scale. At @modal we've scaled to 20,000+ concurrent GPUs across AWS, GCP, Azure, and OCI, with 1M+ instances launched. Public-cloud GPUs fail in many ways, and we’ve seen most of them. Here’s how we handle GPU reliability 👇
14 replies · 47 reposts · 727 likes · 106.2K views