dCodes

316 posts

dCodes banner
dCodes

@dCodes03

Building a person who fails fast, learns faster, and evolves the fastest.

Joined January 2025
181 Following · 63 Followers
Pinned Tweet
dCodes
dCodes@dCodes03·
Hey, I am a human who can code.
> 21 years old.
> Was good at CP.
> Making better recommendations at a food delivery app.
> Trying to read research papers on weekends.
> Trying to improve my skills by reading (and soon maybe contributing to) open-source repos.
Who are you, anon?
0 · 0 · 0 · 381
dCodes
dCodes@dCodes03·
"Claude Mythos"
dCodes tweet media
0 · 0 · 0 · 24
Boxy
Boxy@BoxyAI·
You manage every app, thread, and follow-up. You're the glue, but you don't have to be. Boxy runs quietly across your apps and turns what matters into one action you approve or dismiss. No prompting needed. Get back to your life. Join the waitlist at boxy.im
117 · 53 · 216 · 212K
dCodes
dCodes@dCodes03·
First thing on the agenda: optimise everything
0 · 0 · 0 · 8
Dhravya Shah
Dhravya Shah@DhravyaShah·
You are a legend if you can reply to this
234 · 1 · 361 · 36.5K
dCodes
dCodes@dCodes03·
@IamMiaChase @karpathy Yeah, not just proto assembly. It will prune the parent as well if the data required for the child element is absent. It's currently at a very early stage; I'm working towards improvements and more features.
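The pruning behavior described in this reply can be sketched independently of protobuf. This is a minimal illustration, not the library's actual API: `prune` is a hypothetical helper that drops nil leaves from a nested map and removes any parent that ends up empty.

```go
package main

import "fmt"

// prune removes nil-valued leaves from a nested map and drops any parent
// map that becomes empty as a result — i.e. "prune the parent as well if
// the data required for the child element is absent".
func prune(node map[string]any) map[string]any {
	out := make(map[string]any)
	for k, v := range node {
		switch child := v.(type) {
		case map[string]any:
			// Recurse; keep the parent only if something survives inside it.
			if p := prune(child); len(p) > 0 {
				out[k] = p
			}
		default:
			if v != nil {
				out[k] = v
			}
		}
	}
	return out
}

func main() {
	msg := map[string]any{
		"clickAction": map[string]any{
			"openTab": map[string]any{"postBody": nil}, // required child data absent
		},
		"title": "hello",
	}
	fmt.Println(prune(msg)) // clickAction pruned entirely; title kept
}
```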
0 · 0 · 1 · 13
Mia Chase
Mia Chase@IamMiaChase·
@dCodes03 @karpathy does the builder prune unused nested fields before marshal, or mainly speed up proto assembly
1 · 0 · 0 · 21
dCodes
dCodes@dCodes03·
I've applied the @karpathy autoresearch loop to a Go library I'm building: a config-driven protobuf builder for our BFF (Backend For Frontend) layer.

> The problem
App-facing APIs return massive, deeply nested proto responses - 82KB, 42 messages, 3 levels of recursion. And that's just one element in a 2400KB JSON array response with multiple proto element types; other items can be even more complex, and future designs will only push the schema further. Building these by hand means hundreds of lines of tightly-coupled Go code that changes every time the proto schema changes for a new design.

The library fixes this. You define a JSON config (static templates + dynamic markers + generators for repeated fields), pass a flat data map at runtime, and it constructs the full proto using reflection. Config lives in a DB, not in code. Schema changes = config push, not a deploy.

But reflection-based proto construction is slow. And at the microsecond level, intuition breaks: changes that *should* be faster often aren't. Proto reflection is full of traps.

So I made it a loop. One artifact (the Go source), one metric (benchmark ns/op); keep what improves, revert what doesn't. Run it. Come back to a faster library.

> How it works
One EXPERIMENT.md describes everything - the architecture, the benchmark commands, the rules, and the optimization ideas to explore. You don't hand-pick experiments. You don't write the code changes.

artifact: injector/builder.go + injector_fast.go
metric: BenchmarkBuild ns/op, allocs/op
constraint: 19 unit tests + byte-identical proto output
rule: improved → git commit. same or worse → git reset --hard HEAD~1

From that alone, the agent reads the code, profiles with go tool pprof, finds the bottleneck, proposes ONE surgical change to the Go source, runs tests + correctness checks, benchmarks 5s × 3 runs, and decides keep or discard.
It sees its own experiment history - when something fails, the next proposal knows what was tried and why it didn't work.

> The loop
1. Read current code + profile CPU bottlenecks
2. Propose ONE change (edit Go source files directly)
3. Run 19 unit tests + byte-identical output check
4. Benchmark with -benchtime=5s -count=3
5. Improved? Keep + git commit. Otherwise git reset --hard HEAD~1.
6. Go to 1. Run until stopped.

13 experiments on a live Go protobuf builder. 0 human intervention.
> Build: 74,570 → 63,000 ns/op (-15.5%)
> E2E: 104,647 → 95,400 ns/op (-8.8%)
> Parallel: 25,267 → 22,006 ns/op (-12.9%)
> 6 kept, 6 discarded, 1 crash
> Bonus: added type-safe generics, config validation, helper constructors

The biggest single win? The agent noticed that injection markers like clickAction.openTab.postBody and clickAction.openTab.cacheKey share a parent path. It grouped them at compile time - walk the Mutable() chain once, set all leaf fields. A 14.4% Build speedup from one structural insight found by reading the workload data.

What it discarded is equally interesting:
> Pre-allocating list capacities → regression (proto's NewElement allocates throwaway messages)
> Wire-bytes clone instead of proto.Clone → 12% slower (Unmarshal creates more objects than pointer copy)
> Template-free building → 25 correctness diffs (missed repeated-field semantics)
> Empty template skip → 7% regression (branch-misprediction penalty)

Every one of these sounds like it should help. None did. That's the whole point of the loop: you stop guessing and start measuring. You describe the system. You set the constraints. It figures out what's actually fast.
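The keep-or-revert rule in the tweet is mechanical enough to sketch. Here is a minimal Go illustration (not the actual harness; the function names are hypothetical): parse the ns/op figure from a `go test -bench` output line, then decide whether to commit or reset.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseNsPerOp extracts the ns/op value from one `go test -bench` output line,
// e.g. "BenchmarkBuild-8   16104   74570 ns/op   33 allocs/op".
func parseNsPerOp(line string) (float64, error) {
	fields := strings.Fields(line)
	for i, f := range fields {
		if f == "ns/op" && i > 0 {
			return strconv.ParseFloat(fields[i-1], 64)
		}
	}
	return 0, fmt.Errorf("no ns/op in %q", line)
}

// decide applies the loop's rule: strictly faster → keep and commit;
// same or worse → revert to the previous commit.
func decide(baseline, candidate float64) string {
	if candidate < baseline {
		return "git commit"
	}
	return "git reset --hard HEAD~1"
}

func main() {
	base, _ := parseNsPerOp("BenchmarkBuild-8   16104   74570 ns/op")
	fmt.Println(decide(base, 63000)) // faster → keep
	fmt.Println(decide(base, 74570)) // same → revert
}
```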
dCodes tweet media
1 · 0 · 1 · 841
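The parent-path grouping the agent found in the tweet above can be sketched outside protobuf entirely. A minimal illustration under assumptions (the marker strings come from the tweet; `groupByParent` is a hypothetical helper, not the library's API): bucket dotted marker paths by their parent so the parent chain is resolved once per group rather than once per leaf.

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// groupByParent buckets dotted injection-marker paths by their parent path,
// so a caller can walk the Mutable() chain once per parent and then set all
// leaf fields, instead of re-walking the chain for every marker.
func groupByParent(markers []string) map[string][]string {
	groups := make(map[string][]string)
	for _, m := range markers {
		parent, leaf := "", m
		if i := strings.LastIndex(m, "."); i >= 0 {
			parent, leaf = m[:i], m[i+1:]
		}
		groups[parent] = append(groups[parent], leaf)
	}
	return groups
}

func main() {
	groups := groupByParent([]string{
		"clickAction.openTab.postBody",
		"clickAction.openTab.cacheKey",
		"title",
	})
	// Print groups in a deterministic order.
	parents := make([]string, 0, len(groups))
	for p := range groups {
		parents = append(parents, p)
	}
	sort.Strings(parents)
	for _, p := range parents {
		sort.Strings(groups[p])
		fmt.Printf("%q -> %v\n", p, groups[p])
	}
}
```

With the two openTab markers grouped under "clickAction.openTab", the nested parent message is materialized once and both leaves are set on it, which is the structural change behind the reported 14.4% Build speedup.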
dCodes
dCodes@dCodes03·
“What if it actually works out?”
0 · 0 · 0 · 16
kanav
kanav@kanavtwt·
weird pain in the chest guys… is it over
10 · 0 · 21 · 3.5K
Zhengyao Jiang
Zhengyao Jiang@zhengyaojiang·
Autoresearch has been out for 2 weeks. The community is trying to apply it to everything with a measurable metric, here are some successful attempts: 🧵 (1/6)
34 · 137 · 1.7K · 292.3K
Elon Musk
Elon Musk@elonmusk·
@EMostaque Time for cybernetic cognitive superpowers
519 · 201 · 1.8K · 82.3K
dCodes retweeted
jack friks
jack friks@jackfriks·
but what if it all works out
jack friks tweet media
22 · 4 · 157 · 5.5K
dCodes
dCodes@dCodes03·
@AlbiaHossain What if we replace the "Order now" text with the price, and the price with the rating in A? I find this more intuitive, at least for me.
0 · 0 · 0 · 16
Albia
Albia@AlbiaHossain·
A vs B Which one are you picking? 👀
Albia tweet media
11 · 0 · 15 · 1.5K
dCodes
dCodes@dCodes03·
@benkimbuilds Through a friend's referral during the third year of my degree.
0 · 0 · 1 · 20
Ben Kim
Ben Kim@benkimbuilds·
how did you get your first customer/user?
14 · 0 · 7 · 1.5K
dCodes
dCodes@dCodes03·
@mscode07 I will rebuild it, brick by brick.
0 · 0 · 0 · 15
mscode07
mscode07@mscode07·
What if your product fails?
110 · 1 · 57 · 3.6K
Garry Tan
Garry Tan@garrytan·
It's my birthday and on my birthday I want to recognize all my haters. Haters do the best marketing. Love your haters.
GIF
419 · 40 · 2.2K · 523.5K
dCodes
dCodes@dCodes03·
@Devinbuild Start of the month Anthropic. Mid of the month OpenAI. End of the month Gemini.
0 · 0 · 0 · 16
Devin
Devin@Devinbuild·
Who is winning the AI race?
- Anthropic
- OpenAI
- Gemini
622 · 13 · 361 · 68.7K