Fayçal

900 posts

@imd3vr

I make stuff with code and pixels. 🤓

Algeria · Joined December 2010
2.2K Following · 287 Followers
Pinned Tweet
Fayçal @imd3vr
I've spent quite some time researching and experimenting with #ChatGPT to write a thorough post. It's done at last! f3.al/chatgpt-defini… I would love your feedback! Tell me what's missing or needs to be improved 🙏
Fayçal retweeted
Google Gemma @googlegemma
Meet Gemma 4! Purpose-built for advanced reasoning and agentic workflows on the hardware you own, and released under an Apache 2.0 license. We listened to invaluable community feedback in developing these models. Here is what makes Gemma 4 our most capable open models yet: 👇
[image]
Fayçal retweeted
Andrej Karpathy @karpathy
On DeepWiki and the increasing malleability of software. This starts partly as a post of appreciation for DeepWiki, which I routinely find very useful and which I think more people would find useful to know about.

I went through a few iterations of use. Their first feature was auto-built wiki pages for GitHub repos (e.g. nanochat here) with quick Q&A: deepwiki.com/karpathy/nanoc… Just swap "github" to "deepwiki" in the URL of any repo and you can instantly Q&A against it. For example, yesterday I was curious about "how does torchao implement fp8 training?". I find that in *many* cases library docs can be spotty, outdated, and bad, but directly asking questions of the code via DeepWiki works very well. The code is the source of truth, and LLMs are increasingly able to understand it.

But then I realized that in many cases it's even more powerful not to be the direct (human) consumer of this information/functionality, but to give your agent access to DeepWiki via MCP. E.g. yesterday I faced some annoyances using the torchao library for fp8 training and had the suspicion that the whole thing really shouldn't be that complicated (wait, shouldn't this be a Function like Linear, except with a few extra casts and 3 calls to torch._scaled_mm?), so I tried: "Use DeepWiki MCP and GitHub CLI to look at how torchao implements fp8 training. Is it possible to 'rip out' the functionality? Implement nanochat/fp8.py that has an identical API but is fully self-contained." Claude went off for 5 minutes and came back with 150 lines of clean code that worked out of the box, with tests proving equivalent results. That allowed me to delete torchao as a repo dependency, and for some reason I still don't fully understand (I think it has to do with the internals of torch compile), this simple version runs 3% faster.

The agent also found a lot of tiny implementation details that actually do matter, which I might have naively missed otherwise and which would have been very hard for maintainers to keep docs about: tricks around numerics, dtypes, autocast, meta device, and torch compile interactions. So I learned a lot from the process too. This is now the default fp8 training implementation for nanochat: github.com/karpathy/nanoc…

Anyway, TLDR: I find this combo of DeepWiki MCP + GitHub CLI quite powerful for "ripping out" any specific functionality from any GitHub repo and targeting it at the very specific use case you have in mind, and it actually kind of works now in some cases. Maybe you don't download, configure, and take a dependency on a giant monolithic library; maybe you point your agent at it and rip out the exact part you need. Maybe this informs how we write software more generally, to actively encourage this workflow: e.g. building more "bacterial code", code that is less tangled, more self-contained, more dependency-free, more stateless, and much easier to rip out of the repo (x.com/karpathy/statu…). There are obvious downsides and risks to this, but it is fundamentally a new option that was not possible or economical before (it would have cost too much time), and now with agents, it is. Software might become a lot more fluid and malleable. "Libraries are over, LLMs are the new compiler" :). And does your project really need its 100MB of dependencies?
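The "swap github for deepwiki" trick described above is mechanical enough to script. A minimal sketch, using only the standard library; the helper name `deepwiki_url` is my own, not anything DeepWiki ships:

```python
# Hypothetical helper illustrating the URL swap described in the tweet:
# rewrite a github.com repo URL into its deepwiki.com counterpart.
from urllib.parse import urlparse, urlunparse

def deepwiki_url(repo_url: str) -> str:
    """Return the DeepWiki URL for a GitHub repo URL."""
    parts = urlparse(repo_url)
    if parts.netloc not in ("github.com", "www.github.com"):
        raise ValueError("expected a github.com URL")
    # Only the host changes; the /owner/repo path carries over as-is.
    return urlunparse(parts._replace(netloc="deepwiki.com"))
```

For example, `deepwiki_url("https://github.com/karpathy/nanochat")` yields `https://deepwiki.com/karpathy/nanochat`, matching the pattern in the tweet.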
Fayçal retweeted
Angie Jones @techgirl1908
We used to hunt the bug down in the codebase with our bare eyes
[image]
Fayçal retweeted
Z.ai @Zai_org
Introducing GLM-OCR: SOTA performance, optimized for complex document understanding. With only 0.9B parameters, GLM-OCR delivers state-of-the-art results across major document understanding benchmarks, including formula recognition, table recognition, and information extraction.
Weights: huggingface.co/zai-org/GLM-OCR
Try it: ocr.z.ai
API: docs.z.ai/guides/vlm/glm…
[image]
Fayçal retweeted
matt rothenberg @mattrothenberg
People keep probing my static site for wp-login.php, wp-admin, xmlrpc.php (the usual WordPress stuff). So I built them a login page
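The gag above is easy to reproduce: answer the usual WordPress probe paths with a decoy page instead of a 404. A hedged sketch with the standard library; the probe list comes from the tweet, while the handler itself is illustrative, not the author's actual implementation:

```python
# Sketch: serve a decoy login page for WordPress probe paths on a static site.
# The three paths are the ones named in the tweet; everything else is made up.
from http.server import BaseHTTPRequestHandler

PROBE_PATHS = {"/wp-login.php", "/wp-admin", "/xmlrpc.php"}

def is_wordpress_probe(path: str) -> bool:
    # Normalize a trailing slash so "/wp-admin/" also matches.
    return path.rstrip("/") in PROBE_PATHS

class DecoyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if is_wordpress_probe(self.path):
            body = b"<html><body><h1>Log in</h1><!-- decoy, nothing here --></body></html>"
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)
```

Serving this with `http.server.HTTPServer(("", 8080), DecoyHandler)` would greet the scanners; on a real static host you would map the paths in the CDN or web-server config instead.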
Conor O'Neill @conoro
@karpathy Most of my GitHub consists of RSS converters/adapters for sites/pages that are missing feeds. React client-side rendering made it a lot harder/slower. Now have to spin up Playwright instead of just using Beautiful Soup or Cheerio. Claude writes most of them now of course.
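An "RSS converter" of the kind described boils down to scraping items and emitting a feed. A minimal sketch of the output half using only the standard library; the channel and item data here are invented for illustration, and a real converter would first scrape them (with Cheerio, Beautiful Soup, or Playwright, as the tweet notes):

```python
# Sketch: turn scraped (title, link) pairs from a page with no feed
# into a minimal RSS 2.0 document.
import xml.etree.ElementTree as ET

def build_rss(channel_title: str, site_url: str, items: list[tuple[str, str]]) -> str:
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = channel_title
    ET.SubElement(channel, "link").text = site_url
    ET.SubElement(channel, "description").text = f"Unofficial feed for {site_url}"
    for title, link in items:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = title
        ET.SubElement(item, "link").text = link
    return ET.tostring(rss, encoding="unicode")
```

Using ElementTree rather than string templating gets escaping of titles right for free.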
Andrej Karpathy @karpathy
Finding myself going back to RSS/Atom feeds a lot more recently. There's a lot more high-quality longform and a lot less slop intended to provoke. Any product that happens to look a bit different today but has fundamentally the same incentive structures will eventually converge to the same black hole at the center of the gravity well.

We should bring back RSS: it's open, pervasive, hackable. Download a client, e.g. NetNewsWire (or vibe code one). Cold start: as an example of getting off the ground, here is a list of 92 RSS feeds of blogs that were most popular on HN in 2025: gist.github.com/emschwartz/e6d…

Works great and you will lose a lot fewer brain cells. I don't know, something has to change.
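Part of RSS being "open, pervasive, hackable" is that consuming a feed needs nothing beyond the standard library. A minimal sketch of the reading side; the sample XML is made up:

```python
# Sketch: extract (title, link) pairs from an RSS 2.0 feed.
import xml.etree.ElementTree as ET

SAMPLE = """<rss version="2.0"><channel><title>Example Blog</title>
<item><title>Post one</title><link>https://example.com/1</link></item>
<item><title>Post two</title><link>https://example.com/2</link></item>
</channel></rss>"""

def list_entries(feed_xml: str) -> list[tuple[str, str]]:
    root = ET.fromstring(feed_xml)
    # iter("item") finds every entry regardless of nesting depth.
    return [(i.findtext("title"), i.findtext("link")) for i in root.iter("item")]
```

A vibe-coded client is essentially this plus `urllib.request` to fetch each feed URL and a loop over the gist's list.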
Ben Eater @beneater
Anyone else out there vibe circuit-building?
[images]
Fayçal retweeted
Armin Ronacher ⇌ @mitsuhiko
Weekend thoughts on Gas Town, Beads, slop AI browsers, and AI-generated PRs flooding overwhelmed maintainers. I don't think we're ready for the new powers we're wielding. lucumr.pocoo.org/2026/1/18/agen…
Fayçal retweeted
Erik Schluntz @ErikSchluntz
Next iteration of the Vibe Coder's Keyboard - now with Voice Mode!
Fayçal @imd3vr
@ryanvogel This did not need to go this hard 🔥
vogel @ryanvogel
it's time for a change.
Fayçal retweeted
goose @goose_oss
We brought the Ralph Wiggum loop to goose with a multi-model approach where one model does the work and a second model reviews it. Tutorial 👇 block.github.io/goose/docs/tut…
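The multi-model pattern described, one model doing the work and a second reviewing it in a loop, can be sketched generically. This is a hedged illustration of the idea, not goose's actual API; the `worker` and `reviewer` callables and the `"APPROVED"` sentinel are stand-ins:

```python
# Sketch of a worker/reviewer loop: the worker produces a result, the
# reviewer critiques it, and the critique feeds the next attempt until
# the reviewer approves or the round budget runs out.
from typing import Callable

def review_loop(task: str,
                worker: Callable[[str, str], str],
                reviewer: Callable[[str, str], str],
                max_rounds: int = 5) -> str:
    feedback = ""
    result = ""
    for _ in range(max_rounds):
        result = worker(task, feedback)       # do the work (with prior feedback)
        feedback = reviewer(task, result)     # second model reviews it
        if feedback == "APPROVED":
            break
    return result
```

Using two different models for the two roles is the point: the reviewer is less likely to share the worker's blind spots.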
Fayçal retweeted
dax @thdxr
In OpenCode v1.1.18 we implemented the detailed planning flow you find in Cursor or Claude Code, but we're not 100% sure about it yet, so we put it behind an OPENCODE_EXPERIMENTAL_PLAN_MODE=1 feature flag. Quick video on how it works, but if you have feedback please put it on this thread.
Fayçal retweeted
Eric Glyman @eglyman
One useful way to think about agents: they're control systems. Generating output is easy; feedback is everything.

At Ramp we built a background coding agent, Inspect, that can actually translate requests in English into code and then observe reality: tests, telemetry, and feature flags, plus visual checks for UI work (screenshots/live previews). It doesn't just propose diffs; it iterates until the evidence says the change is correct.

Two consequences surprised me:
1. Cheap, parallel sessions change behavior. When an agent runs in a real sandboxed dev environment (not your laptop), you stop babysitting and start running more iterations.
2. Multi-client + multiplayer matters more than people think. If it shows up in the places work already happens (PRs, Slack, web, VS Code) and you can hand a session to a teammate, it becomes shared infrastructure, not a novelty.

We're now at ~30% of merged PRs in our core repos authored by Inspect, without mandating it. People from essentially every job function, not just engineering, submitted code last week. Wild times.
Quoting Zach Bruggeman @zachbruggeman:

The craft of engineering is rapidly changing. At @tryramp, we built our own background coding agent to accelerate faster. We call it Inspect. It wrote 30% of merged frontend + backend PRs in the past week. It’s powered by @opencode, @modal and @CloudflareDev. It runs fully in the cloud, and starts in seconds, letting every builder work at the speed of thought, no setup required. Today, we’re open sourcing the full blueprint so anybody can build their own Inspect. Just give our spec to your current coding agent, and let it build your new favourite.
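The "control system" framing above can be sketched as a propose/measure loop: the agent proposes a change, gathers evidence, and iterates until the evidence passes. A hedged illustration only; all names are invented, and Inspect's real feedback spans tests, telemetry, flags, and screenshots rather than a single test runner:

```python
# Sketch of an agent as a control system: propose a change, check it
# against reality (here a test-runner stand-in), and feed failures back
# into the next proposal until the evidence says the change is correct.
def control_loop(propose, run_tests, max_iters=10):
    change = propose(feedback=None)
    for _ in range(max_iters):
        ok, report = run_tests(change)   # the "sensor": evidence from reality
        if ok:
            return change
        change = propose(feedback=report)  # the "actuator": revise using the evidence
    raise RuntimeError("no passing change within iteration budget")
```

The loop is why cheap, parallel, sandboxed sessions matter: each extra iteration is another pass through the controller rather than another round of human babysitting.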

Fayçal retweeted
Ben Williams @theplgeek
Aaaaand ralph-tui is live, thanks for your patience: github.com/subsy/ralph-tui

It's been a fun day using ralph-tui to build ralph-tui. All the details are in the repo, but:
- Install with your fave package mgr, e.g. 'bun install -g ralph-tui'
- First-time setup: 'ralph-tui init'
- Create a PRD and tasks: 'ralph-tui prime'

After that you'll be dropped into the TUI to start the ralph loop. Tons of tweakability for those that care. On that note, I'm out for the night 🤘

h/t @GeoffreyHuntley 🤠 @ryancarson @danshipper @kieranklaassen @clairevo @mattpocockuk @gregisenberg
Quoting Ben Williams @theplgeek:

ralph-tui is cooking. All-in-one ralph engine with e2e observability, extensible by design:
- plugin agents (ships with cc and @opencode plugins)
- plugin trackers (ships with json, beads, and beads-bv plugins)
- built-in interactive PRD creator (leverages skills)
- auto PRD conversion to selected tracker format
- customisable prompts
- understands task dependencies and actionability
- quickstart

Overkill? Perhaps. Useful? Absolutely. A blast to use? Hell yes! Let's go #ralphwiggum

Aiming to publish later today @mattpocockuk @ryancarson @Steve_Yegge @doodlestein @GeoffreyHuntley

PS: Initial iteration built with ralph scripts. Subsequent iterations built with ralph-tui

Luke Parker @LukeParkerDev
I don’t see Claude Code doing this
[image]
Zach Bruggeman @zachbruggeman
The craft of engineering is rapidly changing. At @tryramp, we built our own background coding agent to accelerate faster. We call it Inspect. It wrote 30% of merged frontend + backend PRs in the past week. It’s powered by @opencode, @modal and @CloudflareDev. It runs fully in the cloud, and starts in seconds, letting every builder work at the speed of thought, no setup required. Today, we’re open sourcing the full blueprint so anybody can build their own Inspect. Just give our spec to your current coding agent, and let it build your new favourite.