Pinned Tweet
michaelk
2.8K posts

michaelk
@cepstrum9
⚡️ AI code reviews — https://t.co/1Hxn9uVdS4 @codii_dev 🦉in-memory RAG— https://t.co/rHmcHZpNRM
London, UK 🇬🇧 Joined January 2022
655 Following · 1.1K Followers

@karpathy That’s exactly why I’m building github.com/mkarots/raglet

LLM Knowledge Bases
Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:
Data ingest:
I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images to local so that my LLM can easily reference them.
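The compile step above can be sketched in a few lines. This is a minimal, offline-runnable sketch, not the author's actual tooling: `summarize_with_llm` is a hypothetical stand-in for whatever model call you use, and the directory layout (`raw/` in, wiki `.md` files out with `[[raw/...]]` backlinks) is an assumption based on the description:

```python
from pathlib import Path

def summarize_with_llm(text: str) -> str:
    # Hypothetical stand-in for a real LLM call; here it just takes
    # the first line so the sketch runs without network access.
    return text.strip().splitlines()[0][:200]

def compile_wiki(raw_dir: Path, wiki_dir: Path) -> list[Path]:
    # Incrementally "compile" raw/ into a wiki: one .md summary per
    # source document, each with a backlink to its raw file.
    wiki_dir.mkdir(parents=True, exist_ok=True)
    written = []
    for src in sorted(raw_dir.glob("*.md")):
        dst = wiki_dir / src.name
        if dst.exists():  # already compiled; skip (incremental)
            continue
        summary = summarize_with_llm(src.read_text())
        dst.write_text(f"# {src.stem}\n\n{summary}\n\n[[raw/{src.name}]]\n")
        written.append(dst)
    return written
```

The incremental check (skip files that already have a summary) is what keeps recompiles cheap as `raw/` grows.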
IDE:
I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).
Q&A:
Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents and it reads all the important related data fairly easily at this ~small scale.
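The "auto-maintained index files and brief summaries" idea is simple enough to sketch. Assuming one article per `.md` file, a rebuild might look like this (the file name `index.md` and the one-line-per-article layout are illustrative choices, not from the post):

```python
from pathlib import Path

def refresh_index(wiki_dir: Path) -> str:
    # Rebuild index.md: one line per article with its first heading,
    # so an agent can skim the whole wiki before opening any file.
    lines = ["# Index", ""]
    for doc in sorted(wiki_dir.glob("*.md")):
        if doc.name == "index.md":
            continue
        text = doc.read_text().strip()
        title = text.splitlines()[0].lstrip("# ") if text else doc.stem
        lines.append(f"- [[{doc.stem}]]: {title}")
    index = "\n".join(lines) + "\n"
    (wiki_dir / "index.md").write_text(index)
    return index
```

At ~100 articles the whole index fits comfortably in a context window, which is why fancy RAG turns out to be unnecessary at this scale.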
Output:
Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.
Linting:
I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.
Extra tools:
I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I both use directly (in a web ui), but more often I want to hand it off to an LLM via CLI as a tool for larger queries.
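A "small and naive search engine" in that spirit fits in about a dozen lines. This sketch just ranks `.md` files by query-term frequency (no stemming, no persistent index); it is a guess at the shape of such a tool, not the actual one described:

```python
import re
from collections import Counter
from pathlib import Path

def search(wiki_dir: Path, query: str, k: int = 5) -> list[Path]:
    # Naive ranking: score each .md file by how often the query
    # terms appear in it, and return the top-k matches.
    terms = re.findall(r"\w+", query.lower())
    scored = []
    for doc in wiki_dir.rglob("*.md"):
        counts = Counter(re.findall(r"\w+", doc.read_text().lower()))
        score = sum(counts[t] for t in terms)
        if score:
            scored.append((score, doc))
    scored.sort(key=lambda s: s[0], reverse=True)
    return [doc for _, doc in scored[:k]]
```

Wrapped in a thin CLI, the same function serves both the web UI and the LLM-as-tool use case.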
Further explorations:
As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.
TLDR: raw data from a number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by the LLM via various CLI tools to do Q&A and to incrementally enhance the wiki, all of it viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.

@mark_slapinski People like you, who use platforms like X to incite violence and hate amongst people just to get likes and go viral, should be jailed. Especially in times like these, when the world is in crisis.
You are sincerely the bottom of the sewage pit. Fucking clown kid.

@ohryansbelt YC has been degenerating for quite a while now, not surprised, this was bound to happen

Delve, the YC-backed compliance startup that allegedly faked hundreds of SOC 2 and ISO 27001 audits, is now accused of stealing a fellow YC company's IP. According to Part 2 of DeepDelver's Substack series, Delve took SimStudio's code, removed attribution, rebranded it "Pathways," and started closing $50k-$200k+ enterprise deals with it while telling Sim's founders the ROI wasn't there for a partnership. Here's the breakdown:
> Sim (YC X25) signed on as a Delve compliance client for $15k covering SOC 2 Type 1, Type 2, and HIPAA. CEO Karun Kaushik personally promised to handle onboarding
> During that same April 2025 sales call, Karun posted a SimStudio link internally with the note "ui inspo for pathways"
> Linear tickets referencing "sim studio" under the Pathways project started appearing that same month. An internal Notion doc titled "Sim Studio Port Plan" lists specific folders to copy, including blocks, components, the executor, tools, handlers, and database schema
> Delve's production code still contains SimStudio references and docs[.]simstudio[.]ai URLs
> When Sim's CEO @Emkara tried to sell Delve a licensing deal, Karun said it didn't have "high enough ROI rn" and stopped responding
> Sim had no idea Delve was selling their product as Pathways until DeepDelver's Part 1 article. Emir confirmed over email that no white-label or attribution agreement existed
> Leaked pitch decks show Delve selling Pathways to Brex, Anthropic, Gusto, and Notion. The Notion deal was $50k+
> The Brex deck promises Pathways will make their GRC team "AI native" and includes a 50%+ partnership discount
> The Anthropic deck, dated January 9, 2025, proposes a 1-2 week PoC with named Delve staff building custom Pathways workflows
> Delve outsourced Pathways maintenance to a dev shop in Bangladesh
> Sim's open source license required attribution. Delve removed it, told clients they "built it from the ground up," and did not disclose Sim's code during Series A due diligence

Bryan Onel@BryanOnel86
Delve knows no shame. They allegedly sold another YC company’s (@simdotai) open source tool as a standalone product to companies like Notion and Brex without attribution, violating the Apache license, and then lied about it to the founders of Sim. The founders of Sim (@emkara) are left with nothing while Delve walks away with the money.
michaelk reposted

Literally the only class in the world that would make me go back to school
Reva Jariwala@reva_jariwala
how is this a class? absolutely insane line-up

@boochi_dot_dev @Tomas_Zubiri @gvanrossum Bro you wrote it 3 times write something else 😛, and yes technically robots are agents in the broader sense, although the context of the concepts is more nuanced
michaelk reposted

@cepstrum9 @gvanrossum I draw the line at the system boundary: skills run natively, tools reach into external databases and markets.

@ariisaacs @nemke @gvanrossum I don’t think skills are required to have steps; I said that in order to counter/test the “workflow” analogy. It seems most people mean that the bulk of the skill is encoded in the prompt, which may or may not contain steps

@cepstrum9 @nemke @gvanrossum Wait but do skills *actually* have steps, or are they just prompts with virtual steps in them?

Today we had an issue affecting ~3000 users, where their authenticated content may have been served to their unauthenticated users
Below is our writeup on impact, resolution, and prevention
We're deeply sorry. This is unacceptable and we will do better
blog.railway.com/p/incident-rep…

@themandeepc @gvanrossum I agree, there are different levels of abstraction at play. Nice mapping u did there: tools -> actuators and context window -> sensors

@cepstrum9 @gvanrossum I think their definition is true in general. There are agents other than LLM agents (like a Roomba). But for an LLM agent, it’s a bit less abstract to talk in terms of tools (its actuators) and the context window (effectively its sensors).

@Tomas_Zubiri @gvanrossum correct, a self-driving car is an agent under that definition.
And correct, there are nuances to the term "agent"; it is used in different contexts, and there is overlap between them. That perhaps means a new term is needed

@cepstrum9 @gvanrossum So a self driving car is an agent under that definition.
Just because a buzzword has a definition in academia does not mean it's the same concept, there can be different word senses.

@nemke @gvanrossum Hahaha, naming these things is so hard isn’t it

@hybridai_one @gvanrossum Nice so plus an element of reflection right?

@gvanrossum Plus: heartbeat/cron + seeing its own src/config + changing its own memory + feels like magic

@hTrapVader @gvanrossum *gets lost in infinite recursion and circular definitions* 🫠
So an agent has skills -> an agent has agents.
It seems to me that there must be some hierarchy that distinguishes the root from the leaves

@gvanrossum @cepstrum9 a skill is a prompt and the necessary tools
-> a skill is an agent
🫠

@nemke @gvanrossum Seems to me that a workflow is a sequence (or another topology, e.g. a DAG) of steps. A step could be a function written in code, or a text file, i.e. a prompt

@gvanrossum @cepstrum9 A skill is kind of a workflow: how to do some specific task.
E.g. what tool or script to use to compare the contents of two PDF files.

@gvanrossum although I appreciate that LLMs were probably not around when they coined this term and it’s a more generic definition

Russell and Norvig define an agent as any system that can perceive its environment via sensors and act on it via actuators.
A distinction they make which I think is useful is between the agent function and the agent program. I think the agent program, in this instance, is made up of a prompt, a loop, tools, and skills
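Under that reading, the agent program is small enough to sketch. Everything here is illustrative, not any particular framework's API: the stubbed `llm` callable and the action-dict shape are assumptions, and the context list stands in for the sensor side while the tool table stands in for the actuators:

```python
def run_agent(llm, tools, prompt, max_steps=5):
    # The "agent program": a prompt, a loop, and a tool table.
    # `llm` maps the accumulated context to an action; `tools` maps
    # tool names to plain callables (the actuators).
    context = [prompt]
    for _ in range(max_steps):
        action = llm("\n".join(context))       # perceive + decide
        if action["tool"] is None:             # model chose to answer
            return action["answer"]
        result = tools[action["tool"]](*action["args"])  # act
        context.append(f"observation: {result}")
    return None
```

The agent function, in Russell and Norvig's terms, is whatever percept-to-action mapping this loop happens to realize; the code above is just one concrete program for it.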
