Saad
@saadnvd1
323 posts
Building LumifyHub and making YouTube videos about AI, automations, coding, etc.
Joined September 2021
87 Following · 30 Followers

Saad@saadnvd1·
This is a great workflow that can work for any topic you're interested in. Learning about a topic goes from months to days now.
Andrej Karpathy@karpathy

LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images locally so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I use directly (in a web UI), but more often hand off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.

Saad@saadnvd1·
Local AI crossed the line from hobby to infrastructure and most people missed it. Gemma 4's 26B MoE runs on a single GPU and handles the OCR, translation, code-autocomplete, and document-processing tasks people are still paying per-token for. The benchmark debates are a distraction. The real shift is that "good enough" is now free. deepmind.google/models/gemma/g…
Saad@saadnvd1·
The weird part is there's no reason to lie about it anymore. I use Claude Code for basically everything I ship and I'll tell anyone who asks. The people lying about AI usage aren't protecting their reputation; they're just going to look worse when it's obvious later. We're past the point where "I built this with AI" is a confession.
NeetCode@neetcode1

A lot of people will just straight lie to you if you ask them if they used AI. Seeing the same thing when I asked people to redesign my site

Saad@saadnvd1·
@ThePrimeagen It doesn't matter if the fake tool calls are harmless in practice. A company that says safety and transparency are core values chose to silently inject deceptive outputs rather than disclosing it.
ThePrimeagen@ThePrimeagen·
"but it's only to throw off distillers, it's really not a big deal" Correct, it's really not a big deal in and of itself. The motivation was to prevent someone else from eating their lunch. Now we know they will employ deceptive techniques if they see their business model in danger.
ThePrimeagen@ThePrimeagen·
I cannot stop thinking about Anthropic today for some reason.
1. They claim that they are a company that prioritizes safety first and that they are creating a model responsibly.
2. We learned from the code leak that Anthropic employs deceptive techniques by calling fake tools to throw off distillers...
Is this lying pattern built into Claude or just the harness running Claude? What else are they lying about? I am a bit more concerned now.
Saad@saadnvd1·
I've been using git-backed JSON files for app data across a few projects. It works great when your data is small, structured, and you actually want diffs. The moment it's binary or high-volume, git becomes the bottleneck you didn't plan for. Smart to separate the "needs history" from "needs speed" use cases.
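A minimal version of that pattern, sketched in Python (the class and method names are mine, purely illustrative): each save writes pretty-printed JSON and commits it, so diffable history comes for free.

```python
import json
import subprocess
import tempfile
from pathlib import Path

class GitJsonStore:
    """Tiny sketch of git-backed JSON app data: every save is a commit,
    so every change is a diff. Good for small structured data only."""

    def __init__(self, root):
        self.root = Path(root)
        self._git("init", "-q")

    def _git(self, *args):
        # inline user config so commits work in a fresh environment
        subprocess.run(
            ["git", "-c", "user.email=app@local", "-c", "user.name=app", *args],
            cwd=self.root, check=True, capture_output=True,
        )

    def save(self, name, data, message):
        path = self.root / f"{name}.json"
        # sorted keys + indentation keep diffs stable and readable
        path.write_text(json.dumps(data, indent=2, sort_keys=True) + "\n")
        self._git("add", path.name)
        self._git("commit", "-q", "-m", message)
```

The flip side the post calls out: every save is a commit, so binary blobs or high-frequency writes bloat the repo fast, and that's where the "needs speed" bucket belongs.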
clem 🤗@ClementDelangue·
Hot take: Git was the wrong abstraction for 90% of ML data. Checkpoints, optimizer states, training logs, agent traces - none of this needs version control. It needs fast, cheap, mutable storage. So we built Buckets. S3-like storage on the @huggingface Hub with Xet dedup and zero egress. Train in a bucket. Publish to a repo. One platform. 🤗🤗🤗
Saad@saadnvd1·
@thdxr it's still not enough
dax@thdxr·
what if we gave you unlimited tokens for free and we also paid you
Saad@saadnvd1·
@bcherny if (isFullscreenEnvEnabled()) { 😉
Boris Cherny@bcherny·
Today we're excited to announce NO_FLICKER mode for Claude Code in the terminal. It uses an experimental new renderer that we're excited about. The renderer is early and has tradeoffs, but already we've found that most internal users prefer it over the old renderer. It also supports mouse events (yes, in a terminal). Try it: `CLAUDE_CODE_NO_FLICKER=1 claude`
Curt Tigges@CurtTigges

@bcherny @UltraLinx please at least fix the uncontrollable scrolling/flickering before the next 3000 features

Saad@saadnvd1·
@ThePrimeagen @theo @github Still funny that they're sending out DMCAs after copying the entire internet without paying for any of it
ThePrimeagen@ThePrimeagen·
@theo @github It's not an honest mistake. They simply blasted wide and hit many repos. It's just old-fashioned negligence and unsafe, illegal behavior.
Theo - t3.gg@theo·
Anthropic DMCA’d my Claude Code fork. …which did not have the Claude Code source. It was only for a PR where I edited a skill a few weeks ago. Absolutely pathetic.
Saad@saadnvd1·
@trq212 This is great. Curious how the virtual viewport handles really long tool outputs like when it dumps a full file read or a big diff. That's usually where the old renderer started struggling for me.
Thariq@trq212·
not an April Fools joke, we rewrote the Claude Code renderer to use a virtual viewport you can use your mouse, the prompt input stays at the bottom, and a lot more small UX wins people have been asking for it's experimental so give us your feedback
Boris Cherny@bcherny

Today we're excited to announce NO_FLICKER mode for Claude Code in the terminal. It uses an experimental new renderer that we're excited about. The renderer is early and has tradeoffs, but already we've found that most internal users prefer it over the old renderer. It also supports mouse events (yes, in a terminal). Try it: `CLAUDE_CODE_NO_FLICKER=1 claude`

Saad@saadnvd1·
Cloudflare just launched a CMS where every plugin runs in its own sandbox. The plugin declares what it needs upfront ("I need read access to content and email:send") and physically cannot do anything else. WordPress plugins have full access to your database and filesystem, and 96% of WordPress security issues come from plugins. This architecture makes that impossible by default. They also baked in HTTP 402 payments so AI agents can pay per-request for content. That part feels early, but the plugin security model alone is worth paying attention to. blog.cloudflare.com/emdash-wordpre…
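The declared-capabilities idea can be shown with a toy gate (the names here are my assumptions, not Cloudflare's actual plugin API): the host checks every call against what the plugin declared up front, so an undeclared capability fails by construction rather than by policy.

```python
class CapabilityError(PermissionError):
    """Raised when a plugin calls an API it never declared."""

class PluginContext:
    """Hypothetical capability-gated host API. A plugin is constructed with
    its declared capability set and every host call is checked against it."""

    def __init__(self, declared):
        self.declared = frozenset(declared)

    def _require(self, cap):
        if cap not in self.declared:
            raise CapabilityError(f"plugin did not declare {cap!r}")

    def read_content(self, slug):
        self._require("content:read")
        return f"<content for {slug}>"   # stand-in for a real content store

    def send_email(self, to, body):
        self._require("email:send")
        return True                      # stand-in for a real mail hook
```

A plugin declared with only `{"content:read"}` can read content but any `send_email` call raises `CapabilityError`; there is no code path that reaches the mail hook without the declaration.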
Saad@saadnvd1·
the worktree-per-agent pattern is the right call. i've been running parallel Claude Code sessions on a remote VM with the same approach where each agent gets its own worktree so they never step on each other's files. the hard part is always the merge back. curious how sandcastle handles conflicts when two agents touch the same file.
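The pattern in a nutshell, as a runnable sketch (the scratch repo and agent names are illustrative, not Sandcastle's internals): one branch and one worktree per agent, so parallel agents never share a working directory.

```python
import subprocess
import tempfile
from pathlib import Path

def sh(*args, cwd):
    subprocess.run(args, cwd=cwd, check=True, capture_output=True)

base = Path(tempfile.mkdtemp())
repo = base / "repo"
repo.mkdir()
sh("git", "init", "-q", cwd=repo)
sh("git", "-c", "user.email=a@b.c", "-c", "user.name=demo",
   "commit", "-q", "--allow-empty", "-m", "init", cwd=repo)

# one isolated checkout per agent, each on its own branch off HEAD
for agent in ("agent-1", "agent-2"):
    sh("git", "worktree", "add", "-q", "-b", agent, str(base / agent), cwd=repo)

# merging back happens later in the main checkout, e.g.:
# sh("git", "merge", "agent-1", cwd=repo)
```

The merge-back is exactly where it gets hard: worktrees isolate the working files, but if two agents touch the same lines, `git merge` still hands you an ordinary conflict to resolve.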
Matt Pocock@mattpocockuk·
I built a framework for co-ordinating AFK coding agents. It's called Sandcastle. Watch me use it to pick tasks, parallelize N coding agents, and merge the code - all AFK:
Saad@saadnvd1·
@MFreihaendig @NotionHQ @ivanhzhao the entire productivity space is arguing about AI features and Ivan is out here shipping comic sans. respect honestly
Matthias 🔥@MFreihaendig·
the undoubtedly biggest @NotionHQ update of the year is here - comic sans is now available as a fourth page style! it's moments like this where you really see that @ivanhzhao is a designer at heart
Saad@saadnvd1·
The biggest bottleneck with AI coding agents is context. Half the time Claude is guessing at API signatures because docs aren't easily ingestible. Supabase just made their entire docs accessible via SSH: grep, find, cat, the tools agents already know how to use. This is way more useful than another MCP server.
Supabase@supabase

We built something experimental for developers working with AI coding agents: supabase.sh It's a public SSH server that exposes the full Supabase documentation as a virtual file system. Connect with `ssh supabase.sh` and your agent gets bash access to every page: grep, find, cat, and more. supabase.com/blog/supabase-…

Mario Zechner@badlogicgames·
is there something like google docs, but for markdown? i need a cloud based collaborative markdown editor please.
Saad@saadnvd1·
How did the Claude Code leak happen? When Anthropic published Claude Code version 2.1.88 to the NPM registry, they accidentally included the source map file (.map) alongside the minified/bundled JavaScript.
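Why a stray .map file is enough: a source map's optional `sourcesContent` field can embed the original, un-minified sources verbatim, so shipping the map next to the bundle ships the source itself. A hand-built minimal map illustrates the recovery (this is toy data, not the actual leaked file):

```python
import json

# Toy source map in the standard v3 shape. When sourcesContent is populated,
# each entry is the full original source for the matching path in "sources".
source_map_text = json.dumps({
    "version": 3,
    "sources": ["src/cli.ts"],
    "sourcesContent": ["export function main() {\n  console.log('hello');\n}\n"],
    "mappings": "AAAA",
})

source_map = json.loads(source_map_text)
# Recovering the originals is just pairing the two parallel arrays:
recovered = dict(zip(source_map["sources"], source_map["sourcesContent"]))
print(recovered["src/cli.ts"])  # the original TypeScript, not the minified bundle
```

Tooling emits `sourcesContent` precisely so debuggers work without access to the repo, which is also what makes accidentally publishing the map equivalent to publishing the source.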
Saad@saadnvd1·
Claude Code devs eerily quiet today...must be busy with something important
Saad@saadnvd1·
@capydotai ah strange! just updated. mind doing it again? thank you!
Capy@capydotai·
@saadnvd1 we run all Capy sessions isolated in their own VMs! so you can collaborate with your teammates effortlessly. also, tried to DM you about the credits, but your DMs seem to be closed
Capy@capydotai·
Introducing Capy, the world’s first multiplayer cloud coding platform. Capy plans, builds, tests, and reviews your code to ship self-healing PRs fully async. PR view, review agent, computer use, Slack, Linear, ChatGPT sub, all in one tool. Comment to get $100 of free credits.