NullVoider

1.1K posts


@nullvoider07

Engineer | Building in the chaos.

Bengaluru, India · Joined October 2018
111 Following · 27 Followers
NullVoider
NullVoider@nullvoider07·
@XFreeze Get back to reality. Humans as a species haven't even been able to comprehend themselves fully yet, and you think the machines, systems, and tech they build will surpass what nature took billions of years to build??? 🤔
18
0
18
6.1K
X Freeze
X Freeze@XFreeze·
Here's Grok's AGI timeline 😂
X Freeze tweet media
Elon Musk@elonmusk

@minchoi
4.6 → 3T
4.7 → 6T
4.8 → 10T
4.9 → ???
5.0 → AGI
6.0 → ASI
7.0 → ASI2
… 🤷‍♂️ 😂

476
345
2.3K
21.9M
NullVoider
NullVoider@nullvoider07·
@JasonBud @techdevnotes So even my suggestion was ghosted. 😂 x.com/i/status/20374…
NullVoider@nullvoider07

My feedback on Grok for coding after 2 days of use is this. Although I would never get a chance to work on it myself because @xaicareers loves ghosting, I will share the issues that I think need a lot of work, in as much detail as possible.

Grok is worst at complex coding. The main reason is that Grok cannot follow instructions and maintain context memory over long coding sessions. If Grok cannot retain the memory of what it said, or of the code it gave one response before the current one, and keep going, then it's as if I'm starting the session from scratch even though I'm far into it.

I recommend that the devs restrict a chat session to a single mode for the model, preventing the model's memory from being distributed across models. This prevents fragmentation of context and memory, lets Grok follow instructions over long coding sessions, and helps during Grok training as well. It also allows Grok to reinforce its memory at the root level.

I would also recommend that the devs not limit Grok's tool use at either the training or the inference level, because:
1. For coding sessions, Grok needs tools to write the code down when using memory, to avoid overloading the chain of thought and losing context.
2. Grok also needs virtual terminals to execute actions, manage tools, and create artifacts where it can write down and present code to the user. Those artifacts can then be referenced as the session progresses, because Grok will have something to refer to in its next response, further strengthening its ability over long coding sessions, or even within a single response during one.

1
0
0
71
Tech Dev Notes
Tech Dev Notes@techdevnotes·
Grok Web needs a feature to download all files/folders at once for Grok 4.3; currently it only supports downloading individual files
Tech Dev Notes tweet media
21
39
573
59.9K
NullVoider
NullVoider@nullvoider07·
@elonmusk @xai and @xaicareers I would like to know if you automatically reject/ghost applicants if they are from outside the USA and require an H1B. If that's the case, then do let me know that you are hiring only in the USA, so that next time I don't waste my time applying for any roles at @xai.
0
0
0
25
Elon Musk
Elon Musk@elonmusk·
Universal HIGH INCOME via checks issued by the Federal government is the best way to deal with unemployment caused by AI. AI/robotics will produce goods & services far in excess of the increase in the money supply, so there will not be inflation.
46.7K
22.8K
195.4K
68.5M
NullVoider
NullVoider@nullvoider07·
Memory Archive + the full Project Dockyard stack (The-Eyes + Control-Center) is now fully open source!

High-fidelity observational layer for CUA training: every screen event, actuation, 3-frame visual context, Rust-level capture with Kafka recovery, replayable memory.md files, manual + automated annotation modes, and production-ready multi-cloud storage.

Quoting the full thread with all the details ↓

Any and all feedback for further development is appreciated — whether from individual devs, teams, or AI labs (@xai @AnthropicAI @GoogleDeepMind @metaai @perplexity_ai @OpenAI and the rest). Architecture, integration ideas, scalability feedback, or PRs on any repo are all super welcome. Drop thoughts here, in issues, or DMs! 🚀

#CUA #AgenticAI #OpenSource #MemoryArchive
NullVoider@nullvoider07

Open-sourcing Memory Archive today — the observational layer for CUA training data collection. It monitors The-Eyes (screen capture) and Control-Center (actuation), records every non-position command event with three exact frames of visual context, routes each step for natural-language reasoning, and compiles everything into replayable memory.md files that a CUA can follow to repeat the task exactly.

The full Project Dockyard stack is now public: Memory Archive, Control-Center (actuation), The-Eyes (vision).

Memory Archive: github.com/nullvoider07/m…
Control-Center: github.com/nullvoider07/c…
The-Eyes: github.com/nullvoider07/t…

If you're working on CUAs, agent training, SFT/RL pipelines, or inference-time memory at any lab or indie project, take a look. Issues, feedback, or PRs welcome on any repo.

No equivalent open-source stack exists for this level of CUA data fidelity at scale.

0
0
0
95
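For anyone curious how the "three exact frames of visual context" selection could work, here is a minimal Python sketch (all names are hypothetical, not the actual Memory Archive code): given sorted frame timestamps, pick the frame closest to the event as "at", with its immediate neighbours as "before" and "after", clamped at the ends of the capture.

```python
import bisect

def three_frame_context(frame_ts: list[float], event_ts: float) -> tuple[float, float, float]:
    """Pick (before, at, after) frame timestamps for one event.

    Illustrative sketch only: 'at' is the frame closest in time to the
    event; 'before'/'after' are its immediate neighbours, clamped when
    the event falls at the edge of the capture.
    """
    i = bisect.bisect_left(frame_ts, event_ts)
    # choose the closer of frame_ts[i-1] and frame_ts[i] as the 'at' frame
    if i == 0:
        at = 0
    elif i == len(frame_ts) or event_ts - frame_ts[i - 1] <= frame_ts[i] - event_ts:
        at = i - 1
    else:
        at = i
    before = max(at - 1, 0)
    after = min(at + 1, len(frame_ts) - 1)
    return frame_ts[before], frame_ts[at], frame_ts[after]
```

The real stack presumably resolves frames over HTTP against The-Eyes rather than an in-memory list, but the closest-frame logic is the same idea.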
NullVoider
NullVoider@nullvoider07·
Design constraints baked in from day one: read-only (never issues OS commands), incremental persistence (no batching), data integrity (incomplete sessions explicitly flagged, never corrupted), per-session isolation (circuit breakers, storage pins, server addr overrides).
0
0
0
56
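The incremental-persistence and incomplete-session-flagging constraints above can be sketched like this (an illustrative Python toy; the INCOMPLETE marker and file names are hypothetical, not Memory Archive's actual layout):

```python
import json
import os

class SessionLog:
    """Incremental, crash-evident session log (hypothetical sketch)."""

    def __init__(self, session_dir: str):
        os.makedirs(session_dir, exist_ok=True)
        self.flag = os.path.join(session_dir, "INCOMPLETE")
        open(self.flag, "w").close()          # session starts flagged
        self.steps = open(os.path.join(session_dir, "steps.jsonl"), "a")

    def record(self, step: dict) -> None:
        self.steps.write(json.dumps(step) + "\n")
        self.steps.flush()                    # persist each step immediately,
        os.fsync(self.steps.fileno())         # no batching

    def close(self) -> None:
        self.steps.close()
        os.remove(self.flag)                  # only a clean close clears the flag

def is_complete(session_dir: str) -> bool:
    return not os.path.exists(os.path.join(session_dir, "INCOMPLETE"))
```

A crash between construction and `close()` leaves the flag behind, so downstream tooling sees the session as explicitly incomplete rather than silently truncated.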
NullVoider
NullVoider@nullvoider07·
VLM pipeline (Python ma-app): stateless ModelRouter with pinned/fallback/load_balance policies, per-session circuit breakers, and sliding-window rate limits. Pricing pulled from Ed25519-signed manifest. Degraded sessions auto-fall back to the human annotation queue with model_degraded tags.
1
0
0
82
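The per-session sliding-window rate limit mentioned above is a standard pattern; a minimal sketch follows (illustrative only, not ma-app's actual ModelRouter code, and all names are made up):

```python
import time
from collections import deque

class SlidingWindowLimiter:
    """Allow at most max_calls within any trailing window_s seconds."""

    def __init__(self, max_calls: int, window_s: float):
        self.max_calls = max_calls
        self.window_s = window_s
        self.calls = deque()                  # timestamps of admitted calls

    def allow(self, now=None) -> bool:
        now = time.monotonic() if now is None else now
        # drop timestamps that have slid out of the window
        while self.calls and now - self.calls[0] >= self.window_s:
            self.calls.popleft()
        if len(self.calls) < self.max_calls:
            self.calls.append(now)
            return True
        return False
```

One instance per session gives the per-session isolation described above; a circuit breaker would wrap the same call site and trip on consecutive failures rather than on call volume.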
NullVoider
NullVoider@nullvoider07·
Rust ma-core (Tokio + Redis) handles capture: gRPC WatchCommands from Control-Center, HTTP closest-frame from The-Eyes. Atomic temp-file-then-rename writes for every file. Kafka-backed replay for cloud_primary crash recovery (offset committed only after successful storage write). Single ma-core supports 100k+ concurrent sessions.
1
0
0
93
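The temp-file-then-rename write pattern described above (ma-core does this in Rust; this is a hypothetical Python sketch) guarantees readers never observe a partial file. The Kafka discipline is the comment at the end: the consumer offset is committed only after this returns successfully.

```python
import os
import tempfile

def atomic_write(path: str, data: bytes) -> None:
    """Write data to path so that readers see either the old or the new
    file, never a partial one (sketch of temp-file-then-rename)."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory)  # temp file on the same filesystem
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())               # data durable before the rename
        os.replace(tmp, path)                  # atomic rename on POSIX
    except BaseException:
        os.unlink(tmp)
        raise
    # Only after this point would a Kafka-style consumer commit its offset,
    # so a crash mid-write replays the event instead of losing it.
```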
NullVoider
NullVoider@nullvoider07·
Two modes, same output format. Manual: you run the task live, it captures in real time, then you annotate via the built-in Textual TUI (step list + image preview + editor, resumable, autosave, Ctrl+N to advance). Automated: orchestration registers sessions via IPC; VLM daemon processes StepReadyForReasoning pushes asynchronously; capture never stalls.
NullVoider tweet media
1
0
0
59
NullVoider
NullVoider@nullvoider07·
Output per step: before/at/after screenshots. Mouse clicks get automatic Rust annotation on the 'at' frame (filled red circle, directional arrow to open quadrant, X/Y coords). Keyboard steps use the raw frame. Reasoning is added either by a human or VLM; the source is tagged in reasoning.jsonl.
1
0
0
56
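The "directional arrow to the open quadrant" is pure geometry: point the arrow from the click toward the image quadrant with the most room, so the annotation never runs off-frame. A hypothetical sketch of that logic (ma-core's real Rust annotator is not reproduced here, and these function names are made up):

```python
def open_quadrant(x: int, y: int, width: int, height: int) -> tuple[int, int]:
    """Unit direction (dx, dy) from a click toward the quadrant of the
    image with the most free space."""
    dx = 1 if x < width / 2 else -1    # more room to the right? point right
    dy = 1 if y < height / 2 else -1   # more room below? point down
    return dx, dy

def arrow_endpoint(x: int, y: int, width: int, height: int, length: int = 40) -> tuple[int, int]:
    """Where the annotation arrow's tip would land, clamped to the frame."""
    dx, dy = open_quadrant(x, y, width, height)
    ex = min(max(x + dx * length, 0), width - 1)
    ey = min(max(y + dy * length, 0), height - 1)
    return ex, ey
```

Drawing the filled circle and arrow on the 'at' frame is then a plain raster operation at those coordinates.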
NullVoider
NullVoider@nullvoider07·
@elonmusk Just the stars is too small of a goal. Beyond the Universe is more fun than just tiny stars. 🚀🚀🚀
0
0
0
87
NullVoider
NullVoider@nullvoider07·
@skcd42 Are you working on a TUI? 🤔
0
0
0
115
skcd
skcd@skcd42·
in today's battle between terminal and skcd: terminal 1, skcd 0. this time I had problems because of ghostty + tmux and detecting colors properly :(
6
0
35
2.6K
NullVoider
NullVoider@nullvoider07·
@ayushjaiswal And that bottleneck is good to have, considering how unsafe LLMs and agents are at the current stage; they can barely handle data without nuking entire code bases.
1
0
0
73
Ayush Jaiswal
Ayush Jaiswal@ayushjaiswal·
It's easier to make a 0-person company than a single-person company. I'm constantly the bottleneck for my agents.
6
2
35
2.5K
NullVoider
NullVoider@nullvoider07·
@PeterDiamandis That's why the phrase "An image is worth a thousand words" has been around since ancient times.
0
0
0
614
Peter H. Diamandis, MD
Peter H. Diamandis, MD@PeterDiamandis·
The human brain processes visual information 60,000x faster than text. Humans are visual processors, not text processors. Images hit the brain instantly. Words take work. That's why a single SpaceX launch video communicates more than a thousand-word essay—and why your slide decks hit harder than paragraphs. We're wired for pictures, not prose.
1.2K
1.2K
11.2K
29.9M
Alex Volkov
Alex Volkov@altryne·
Uh Oh... @AnthropicAI's official response to everyone burning through their sessions in SessionGate is.. You're holding it wrong? Come on!

Their recommendation is to:
> Don't use Opus if you're on Pro
> Don't use 1M context (it costs more despite Anthropic setting it as the default!?)
> Don't resume large sessions after 1hr (not acknowledging the potential cache-busting bug?)
> Claim that no one was overcharged.
> No quota resets (unlike the Codex folks)

I'm sure this won't go well with the thousands of folks who are experiencing a significant decrease in their ability to use their Pro/Max plans and are cancelling in favor of other solutions. I don't want to dunk on Lydia, she's one of the few folks from Anthropic who actually acknowledged the community, so please don't take this out on her, but continue voting with your wallets and reporting the very quick sessions that eat most of your quota with /feedback, people!
Alex Volkov tweet media
Lydia Hallie ✨@lydiahallie

Thank you to everyone who spent time sending us feedback and reports. We've investigated and we're sorry this has been a bad experience. Here's what we found:

181
98
1.7K
314.5K
NullVoider
NullVoider@nullvoider07·
@elonmusk That's not the only comedy. They are now sending DMCA takedown notices to all the users who forked. Even I got a notice for a research preview that I forked. 😂😂 github.com/anthropics/cla…
NullVoider tweet media
0
0
0
504
Elon Musk
Elon Musk@elonmusk·
Am referring to this
Alex Volkov@altryne

If you, like me, just woke up, let me catch you up on the Claude Code Leak (I know nothing, all conjecture):
> Someone inside Anthropic got switched to Adaptive reasoning mode
> Their Claude Code switched to Sonnet
> Committed the .map file of Claude Code
> Effectively leaking the ENTIRE CC source code
> @realsigridjin was tired after running 2 South Korean hackathons in SF, saw the leak
> Rules in Korea are different, he cloned the repo, went to sleep
> Wakes up to 25K stars, and his GF begging him to take it down (she's a copyright lawyer)
> Their team decided: how about we have agents rewrite this in Python!? Surely... this is more legal
> Rewrite in Py
> Board a plane to SK🇰🇷
> One of the guys decides Python is slow, is now rewriting ALL OF CLAUDE CODE into Rust
> Anthropic cannot take down, cannot sue
> Is this "fair use?"
> TL;DR: we're about to have open source Claude Code in Rust

352
826
3.4K
1.4M
Elon Musk
Elon Musk@elonmusk·
Banger 😂
Elon Musk tweet media
4.9K
11.1K
212K
21.5M
NullVoider
NullVoider@nullvoider07·
@skcd42 @KentonVarda Hmm... That's pretty much the same thing I do, but why do you have to juggle between models when you can discuss the entire design with Grok/Opus, then ask Opus to compile the entire design and create a dev plan? Once the specs and dev plan are ready, you can build using Opus.
0
0
0
32
skcd
skcd@skcd42·
@KentonVarda lol I pretext the reviewer with “the junior engineer has worked on your review” and the coder with “here’s the feedback from the senior engineer” and do it a couple of times until things look okay before reviewing
1
0
11
1K
Kenton Varda
Kenton Varda@KentonVarda·
I ask both Opus and GPT to give me a plan. I choose which plan is better and ask that model to implement its own plan. Then I have the other model review the code and recommend changes. Go back and forth until they are both satisfied. Am I just a manager now?
34
0
143
12.4K
Hexiang (Frank) Hu
Hexiang (Frank) Hu@hexiang·
Congrats to the coding team🚀🫡🫡 Also glad to see that imagine has contributed to the success here😆
Mark Kretschmann@mark_k

Grok 4.20 by @xai is now the number two on the Web App Arena (@Designarena) 🔥🔥 Few expected that Grok 4.20 would be so good with coding, yet here we are. Another win for the xAI team. Can't wait for Grok Build.

9
2
96
4.4K