Patches

1.5K posts

@patches_sol

Joined May 2021
535 Following · 167 Followers
Patches @patches_sol
@Teknium @jordymaui I’ve loved watching my Hermes try to go goblin mode since 5.5 dropped
0 replies · 0 reposts · 0 likes · 222 views
Teknium 🪽 @Teknium
@jordymaui This has been the case in Hermes and oc since inception just fyi lol
10 replies · 0 reposts · 215 likes · 7.1K views
Patches @patches_sol
@c4talyst Second one came in today but my case is too small for both 🙃
1 reply · 0 reposts · 0 likes · 296 views
Dan Southwood-Wells @c4talyst
The homeserver got a dual 3090 upgrade. Now running Qwen3.6-27B-FP8
[image]
75 replies · 9 reposts · 442 likes · 21.9K views
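A dual-GPU homeserver like this typically serves a quantized model with tensor parallelism. Below is a minimal sketch of the idea using vLLM's offline API; the model id is copied verbatim from the tweet and is an assumption, not a verified Hugging Face repo, so substitute whatever checkpoint you actually have.

```python
# Hedged sketch, not Dan's actual setup: serving an FP8-quantized model
# across two GPUs with vLLM's offline API.
from vllm import LLM, SamplingParams

llm = LLM(
    model="Qwen3.6-27B-FP8",      # hypothetical id taken from the tweet
    tensor_parallel_size=2,        # shard weights across both 3090s
    gpu_memory_utilization=0.90,   # leave some headroom for activations
)

out = llm.generate(
    ["Why bother running models on a homeserver?"],
    SamplingParams(temperature=0.7, max_tokens=128),
)
print(out[0].outputs[0].text)
```

vLLM detects the quantization scheme from a pre-quantized checkpoint's config; on pre-Hopper cards like the 3090, recent releases fall back to weight-only FP8 kernels where supported.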
i2cjak @i2cjak
Oh no oh fuck I ran out of ideas for the framework modules guys noooo im so cooked noooooooooooooooooooooooooooo nooooooooo
176 replies · 4 reposts · 744 likes · 17.6K views
Patches @patches_sol
I think I’m becoming addicted to looking for GPUs on fb marketplace
0 replies · 0 reposts · 0 likes · 7 views
Patches @patches_sol
@max_paperclips
> evaluating AI agent safety
> ask if guardrails are prompt or policy-based
> “it’s very safe”
> ok
> give it prod access
> agent deletes half the infra
> check logs
> “I’ll fix this by removing the problematic resources”
> it was prompt-based
0 replies · 0 reposts · 1 like · 518 views
Roy @usr_bin_roygbiv
@LottoLabs I heard that if you put your company's AWS root password in self-hosted Kimi, Xi personally wires you $10k under the condition you buy a 6000 Pro
3 replies · 0 reposts · 27 likes · 2K views
Patches @patches_sol
@Teknium It’s like you guys are reading my mind
0 replies · 0 reposts · 0 likes · 4 views
Patches @patches_sol
@0xSero Oh you built your drug discovery pipeline using Claude code? That’s ours now
0 replies · 0 reposts · 0 likes · 16 views
Patches @patches_sol
@0xSero Can’t wait for them to rug everyone in biotech that uses them after the Coefficient acquisition
1 reply · 0 reposts · 0 likes · 185 views
Patches @patches_sol
@kneeanderthul Ah, that I agree with. IMO it needs well-structured and indexed metadata, which is difficult to get with real-world stuff. Like, I want to organize research papers and talk across them, compare findings, etc.
1 reply · 0 reposts · 1 like · 15 views
Kneeanderthul @kneeanderthul
@patches_sol It isn’t “fuck Obsidian” because I hate it 😅 Obsidian is great for human-readable graphs, but an LLM isn’t going to optimize with a bunch of .md files. Eventually it’ll come down to another layer below this to help the retrieval aspect of RAG become a sniper, with file graphs
1 reply · 0 reposts · 1 like · 27 views
Kneeanderthul @kneeanderthul
This only goes to show how folks are so different in solving memory
But fr fr, fuck Obsidian
My money is there will be an entirely new post about how folks should focus on the RETRIEVAL layer, and make metadata and annotation data editable
We shall see soon enough
Andrej Karpathy @karpathy:

LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images to local so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I both use directly (in a web UI), but more often I want to hand it off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.

1 reply · 0 reposts · 1 like · 136 views
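Karpathy's "small and naive search engine over the wiki" is easy to picture. Here is a minimal sketch under his stated setup (a directory tree of LLM-maintained .md files): simple TF-IDF ranking exposed as a CLI so an agent can shell out to it. Every name and path below is illustrative, not his actual tool.

```python
# wiki_search.py -- hedged sketch of a naive TF-IDF search CLI over a wiki
# of .md files. All paths/names are illustrative assumptions.
import math, re, sys
from collections import Counter
from pathlib import Path

WIKI = Path("wiki")  # assumed layout: an LLM-maintained tree of .md files

def tokenize(text: str) -> list[str]:
    return re.findall(r"[a-z0-9]+", text.lower())

# term counts per document, built once at startup
docs = {p: Counter(tokenize(p.read_text(errors="ignore")))
        for p in WIKI.rglob("*.md")}

def search(query: str, k: int = 5) -> list[tuple[float, Path]]:
    terms = tokenize(query)
    n = len(docs) or 1
    # document frequency of each query term, for IDF weighting
    df = Counter(t for counts in docs.values() for t in set(counts) if t in terms)
    scored = []
    for path, counts in docs.items():
        score = sum(counts[t] * math.log(1 + n / (1 + df[t])) for t in terms)
        if score > 0:
            scored.append((score, path))
    scored.sort(reverse=True)
    return scored[:k]

if __name__ == "__main__":
    # usage: python wiki_search.py "computationally designed binders"
    for score, path in search(" ".join(sys.argv[1:])):
        print(f"{score:8.2f}  {path}")
```

An agent can then invoke it as a tool, read the top-ranked paths, and open those files itself, which matches the "hand it off to an LLM via CLI" workflow described above.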
Patches @patches_sol
@KEMOS4BE I’m trying to take it slow to avoid the busywork distraction but mine drafted a pretty solid report I need to write and keep forgetting about. Also have it working on some low-priority coding projects I’d probably never get around to otherwise (results tbd)
0 replies · 0 reposts · 0 likes · 15 views
KEMOSABE @KEMOS4BE
OpenClaw as a concept is an exciting thing. But largely, it seems like most people get caught up in busywork distraction. Has it actually improved your life? If so, how?
4 replies · 0 reposts · 13 likes · 797 views
Patches reposted
MattleFun @mattlefun
Seeker Airdrop is Live! We’re thrilled to announce a special airdrop for @solanamobile users. All Seeker wallets are eligible to claim up to 2,000 $MATTLE + 300,000 Mattle Points. Drop your .skr wallet below and follow the simple steps 👇
[image]
7.7K replies · 4.6K reposts · 6.7K likes · 406.3K views
Patches reposted
karbon 🐺🦊 @karbonbased
Holy shit I just spent 10 minutes unclicking interests
Anything I've ever mentioned, anything I've ever interacted with, even people I've never heard of were on my "interests"
Maybe things will get even better now x.com/settings/your_…
122 replies · 28 reposts · 537 likes · 76.9K views
Joyce Liu @joiceloo_art
Bendy legs or straight legs?
[image]
46 replies · 8 reposts · 122 likes · 6.2K views
Patches reposted
Irina Bezsonova @IrinaBezsonova
This October I’m drawing 1 molecule a day inspired by proteins in the PDB @buildmodels #Inktober2025
Day 8 prompt: RECKLESS
PDB: 5TZO
A reckless surge. Two molecules of fentanyl engulfed by its computationally designed binder Fen49*
Next: HEAVY. Suggestions?
[image]
10 replies · 20 reposts · 245 likes · 10K views