Voldemort
@JetXu14

44 posts

Systemizing Vibe Coding through AI Harnessing Engineering. Creator of DocMason & LlamaPReview. Turning complex knowledge into reliable context for AI.

China · Joined March 2020
8 Following · 7 Followers

Pinned Tweet
Voldemort
Voldemort@JetXu14·
@karpathy is spot on about the "agentic gap". White-collar workers miss out on real AI agents because their world is messy Office files (diagram-heavy PPTs, Excel workbooks, emails), not code or clean .md files. They're stuck with basic PDF wrappers. I built DocMason to change that 👇
Andrej Karpathy@karpathy

Judging by my tl there is a growing gap in understanding of AI capability. The first issue I think is around recency and tier of use. I think a lot of people tried the free tier of ChatGPT somewhere last year and allowed it to inform their views on AI a little too much. This is a group of reactions laughing at various quirks of the models, hallucinations, etc. Yes I also saw the viral videos of OpenAI's Advanced Voice mode fumbling simple queries like "should I drive or walk to the carwash". The thing is that these free and old/deprecated models don't reflect the capability in the latest round of state of the art agentic models of this year, especially OpenAI Codex and Claude Code.

But that brings me to the second issue. Even if people paid $200/month to use the state of the art models, a lot of the capabilities are relatively "peaky" in highly technical areas. Typical queries around search, writing, advice, etc. are *not* the domain that has made the most noticeable and dramatic strides in capability. Partly, this is due to the technical details of reinforcement learning and its use of verifiable rewards. But partly, it's also because these use cases are not sufficiently prioritized by the companies in their hillclimbing because they don't lead to as much $$$ value. The goldmines are elsewhere, and the focus comes along.

So that brings me to the second group of people, who *both* 1) pay for and use the state of the art frontier agentic models (OpenAI Codex / Claude Code) and 2) do so professionally in technical domains like programming, math and research. This group of people is subject to the highest amount of "AI Psychosis" because the recent improvements in these domains as of this year have been nothing short of staggering. When you hand a computer terminal to one of these models, you can now watch them melt programming problems that you'd normally expect to take days/weeks of work. It's this second group of people that assigns a much greater gravity to the capabilities, their slope, and various cyber-related repercussions.

TLDR the people in these two groups are speaking past each other. It really is simultaneously the case that OpenAI's free and I think slightly orphaned (?) "Advanced Voice Mode" will fumble the dumbest questions in your Instagram's reels and *at the same time*, OpenAI's highest-tier and paid Codex model will go off for 1 hour to coherently restructure an entire code base, or find and exploit vulnerabilities in computer systems. This part really works and has made dramatic strides because of 2 properties: 1) these domains offer explicit reward functions that are verifiable meaning they are easily amenable to reinforcement learning training (e.g. unit tests passed yes or no, in contrast to writing, which is much harder to explicitly judge), but also 2) they are a lot more valuable in b2b settings, meaning that the biggest fraction of the team is focused on improving them. So here we are.
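The "verifiable rewards" point above is concrete enough to sketch. A minimal, hypothetical illustration in Python, assuming pytest is installed; the file and function names are made up, not from any real training stack:

```python
import os
import subprocess
import sys
import tempfile

def verifiable_reward(candidate_code: str, test_code: str) -> float:
    """Return 1.0 if the candidate passes the unit tests, else 0.0."""
    with tempfile.TemporaryDirectory() as tmp:
        with open(os.path.join(tmp, "solution.py"), "w") as f:
            f.write(candidate_code)
        with open(os.path.join(tmp, "test_solution.py"), "w") as f:
            f.write(test_code)
        # The reward is machine-checkable: did the tests pass, yes or no?
        result = subprocess.run(
            [sys.executable, "-m", "pytest", "-q", "test_solution.py"],
            cwd=tmp, capture_output=True,
        )
        return 1.0 if result.returncode == 0 else 0.0

# A binary, verifiable signal, in contrast to judging a piece of writing.
print(verifiable_reward(
    "def add(a, b):\n    return a + b\n",
    "from solution import add\n\ndef test_add():\n    assert add(2, 3) == 5\n",
))
```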

Voldemort
Voldemort@JetXu14·
@aryanlabde I built a tool for myself and I am using it daily… so it's a little better than nobody showing up 😂
Aryan
Aryan@aryanlabde·
Vibe coder’s biggest nightmare:
> build something in a weekend
> launch
> nobody shows up for 2 months
> new idea
Voldemort
Voldemort@JetXu14·
@devhe4d Building DocMason 🧱—an open-source agent in Codex that gives your local office files the same autonomous treatment engineers give codebases. We actually just launched on Product Hunt today! 🚀 producthunt.com/products/docma…
Ray
Ray@devhe4d·
What are you building this week? share your websites/projects 👇🏻
CodeForge
CodeForge@code_codeforge·
What are you building today? Let's gain some publicity for you
Voldemort
Voldemort@JetXu14·
@Dmeister00 Building DocMason 🧱—an open-source agent in Codex that gives your local office files the same autonomous treatment engineers give codebases. We actually just launched on Product Hunt today! 🚀 producthunt.com/products/docma…
Darren
Darren@Dmeister00·
Here’s to those still building through the bull and the bear. You’re the reason things last. Lately I’ve been putting Manus AI through its paces, and damn - it actually delivers. What are you guys building at the moment?
Voldemort
Voldemort@JetXu14·
@delveroin Building DocMason 🧱—an open-source agent that gives your local office files the same autonomous treatment engineers give codebases. We actually just launched on Product Hunt today! 🚀 producthunt.com/products/docma…
(Oma)devuae
(Oma)devuae@delveroin·
Happy new week. What are you building this week? Drop your portfolio URL. Let’s send some traffic
Voldemort
Voldemort@JetXu14·
After weeks of building in public, DocMason is LIVE on Product Hunt! 🚀 If you're tired of AI chats forgetting your complex PDFs/PPTs, come check out the repo-native agent in Codex we've built. Your support today means everything! 👇 producthunt.com/products/docma…
Voldemort
Voldemort@JetXu14·
@NoahKingJr AI is not replacing software engineers. AI is releasing software engineers to replace all other professions.
Noah
Noah@NoahKingJr·
Software Engineers, what's your plan B if AI replaces you?
Voldemort
Voldemort@JetXu14·
@_NaNovelist_ Facebook groups? GPT suggests I look for my product's users there
Voldemort
Voldemort@JetXu14·
If you are paying for ChatGPT, you are probably wasting its most powerful hidden feature. 🤯 Codex (the engine behind Code Interpreter) has its own separate limits, but 90% of non-coders ignore it because they think "I don't code." Be honest about your current setup: 👇
Voldemort
Voldemort@JetXu14·
@alidougru Absolutely. I built a tool I use every day, and I open-sourced it. But it is too difficult to persuade others to try it…
Ali
Ali@alidougru·
Hot take: Marketing is 100x harder than coding
Voldemort
Voldemort@JetXu14·
We are entering a "Folded" AI reality. Tech relies on agents with persistent memory. White-collar work is trapped in chat boxes that forget everything when you close the tab. If your AI has amnesia, you are being left behind. Why white-collar is stuck: 🧵 jetxu-llm.github.io/posts/the-ai-p…
Andrej Karpathy@karpathy

Someone recently suggested to me that the reason OpenClaw moment was so big is because it's the first time a large group of non-technical people (who otherwise only knew AI as synonymous with ChatGPT as a website) experienced the latest agentic models.

Voldemort
Voldemort@JetXu14·
@karpathy Exactly. Non-tech workers miss out on agentic AI because they live in messy Office files, not clean .md files. I built DocMason to bring Codex's agent power to white-collar workers. Watch it autonomously cross-reference a PPT and a hidden Excel sheet to expose a £94M contradiction 👇
Andrej Karpathy
Andrej Karpathy@karpathy·
Someone recently suggested to me that the reason OpenClaw moment was so big is because it's the first time a large group of non-technical people (who otherwise only knew AI as synonymous with ChatGPT as a website) experienced the latest agentic models.
staysaasy
staysaasy@staysaasy·
The degree to which you are awed by AI is perfectly correlated with how much you use AI to code.
Voldemort
Voldemort@JetXu14·
@odpanda For sure, you can easily download it and have a try. I am using it myself every day 😃
54.44
54.44@odpanda·
Hi, I have a set of PowerPoint files (PPT/PPTX) containing important materials, such as "6100 PPT" or other training/deck content. I want to "feed" this knowledge into you so that it becomes part of my own internal knowledge base. Please guide me step by step on the best ways to do this, including:
- Simple & quick methods (for everyday use, like uploading directly to a chat or project).
- Best practices for RAG (Retrieval-Augmented Generation): how to extract text from PPT, chunk it properly, embed it, and build a searchable knowledge base so you can accurately reference the content without hallucinating.
- Advanced options, like using tools (LangChain, LlamaIndex, Unstructured.io, etc.), converting PPT to markdown/PDF/text first, handling images/tables/charts in slides, or building a local/private RAG system.
- Tips to make the model truly "own" the content: system prompts, project knowledge bases (e.g., Claude Projects), repeated exposure, or fine-tuning if possible.
- Any limitations I should be aware of (context window, file size, token costs, etc.).
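A minimal sketch of the extract/chunk/embed/search pipeline this question describes, assuming python-pptx and sentence-transformers are installed; the file name and query are hypothetical:

```python
from pptx import Presentation
from sentence_transformers import SentenceTransformer
import numpy as np

def extract_slide_text(path: str) -> list[str]:
    """Pull the text out of every shape on every slide, one chunk per slide."""
    chunks = []
    for i, slide in enumerate(Presentation(path).slides, start=1):
        texts = [s.text_frame.text for s in slide.shapes if s.has_text_frame]
        body = "\n".join(t for t in texts if t.strip())
        if body:
            chunks.append(f"[slide {i}] {body}")
    return chunks

model = SentenceTransformer("all-MiniLM-L6-v2")
chunks = extract_slide_text("6100.pptx")  # hypothetical deck
emb = model.encode(chunks, normalize_embeddings=True)

def search(query: str, k: int = 3) -> list[str]:
    """Cosine similarity over normalized embeddings is just a dot product."""
    q = model.encode([query], normalize_embeddings=True)[0]
    return [chunks[i] for i in np.argsort(emb @ q)[::-1][:k]]

print(search("What does module 6100 cover?"))
```

Chunking per slide is the simplest reasonable split; decks with dense slides may need finer chunks.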
Alex Ibragimov
Alex Ibragimov@alexwtlf·
what are you building this week? Share your product👇 (35k+ views last month)
Voldemort
Voldemort@JetXu14·
@pluslee @karpathy After downloading the repo, unzip it and open the repo directory in Codex. Then drop your documents into /original_doc. From there, just chat with DocMason inside Codex: you can ask it to set up the environment, and you can ask it to build the knowledge base. The knowledge base lives in /knowledge_base; it is built by extracting the multimodal content of the documents, with light graph-style relation extraction on top. There is no embedding step, because DocMason is essentially agentic RAG.
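A rough sketch of the shape of that build step; this is purely illustrative Python, not DocMason's actual code, and extract_text stands in for the multimodal extraction described above:

```python
from pathlib import Path

SRC = Path("original_doc")
KB = Path("knowledge_base")

def extract_text(doc: Path) -> str:
    # Stand-in: the real agent reportedly extracts multimodal content
    # (slides, sheets, images); here we just read the file as text.
    return doc.read_text(errors="ignore")

KB.mkdir(exist_ok=True)
docs = [d for d in sorted(SRC.iterdir()) if d.is_file()]
index = []
for doc in docs:
    text = extract_text(doc)
    # "Light graph relation extraction", naively: link any two notes whose
    # names appear in each other's text, instead of computing embeddings.
    links = [f"[[{other.stem}]]" for other in docs
             if other != doc and other.stem in text]
    (KB / f"{doc.stem}.md").write_text(
        f"# {doc.stem}\n\n{text}\n\nRelated: {' '.join(links)}\n")
    index.append(f"- [[{doc.stem}]]")
(KB / "index.md").write_text("# Index\n" + "\n".join(index) + "\n")
```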
CHLee
CHLee@pluslee·
@JetXu14 @karpathy An AI newbie question: after dropping arbitrary files into /Raw, does this repo 1. RAG-ify the documents and build a knowledge graph (tagging + relations), or 2. only read the documents and build the knowledge graph, without RAG-ifying the documents?
Voldemort
Voldemort@JetXu14·
"room for an incredible new product instead of hacky scripts" @karpathy nailed the thesis behind DocMason. The paradigm: The repo is the app. Codex is the runtime. 🤯 A local, agentic knowledge base built for consulting-level deep work. 👇
Andrej Karpathy@karpathy

LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images to local so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki, I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I both use directly (in a web ui), but more often I want to hand it off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually, it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.
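The "small and naive search engine over the wiki" lends itself to a sketch. A minimal, hypothetical version in Python: plain term-frequency scoring over .md files, callable from the CLI so an agent can use it as a tool (paths and scoring are illustrative; the actual tool in the tweet is not public):

```python
import sys
from pathlib import Path

def search(wiki_dir: str, query: str, k: int = 5) -> list[tuple[int, Path]]:
    """Rank .md files in the wiki by naive term-frequency against the query."""
    terms = query.lower().split()
    scored = []
    for md in Path(wiki_dir).rglob("*.md"):
        text = md.read_text(errors="ignore").lower()
        score = sum(text.count(t) for t in terms)
        if score:
            scored.append((score, md))
    return sorted(scored, reverse=True)[:k]

if __name__ == "__main__":
    # e.g. python wiki_search.py wiki/ "reinforcement learning rewards"
    for score, path in search(sys.argv[1], sys.argv[2]):
        print(f"{score:4d}  {path}")
```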

Voldemort
Voldemort@JetXu14·
@AIJoshBear @berryxia It's an SVG drawn by AI 😄 no tools involved. It does need some prompting, and it wasn't one-shot; it still needed manual tweaks afterwards. But 95% of it was drawn by the AI.
Josh Du
Josh Du@AIJoshBear·
@JetXu14 @berryxia May I ask what tool you used to draw this? It's clean and good-looking.
Berryxia.AI
Berryxia.AI@berryxia·
Karpathy said just two days ago that "someone should build this". Guys, 48 hours later someone went and shipped it!

Graphify is here:
✅ Point it at any folder
✅ Type /graphify in Claude Code
✅ Auto-generates a complete knowledge graph + Obsidian vault + Wiki
✅ Supports 13 programming languages + PDF + images (Claude Vision)
✅ Query token usage is only 1/71.5 of what it was!

AI no longer has to chew through the whole folder every time; it can just query the knowledge graph. So smooth 🤯

One-line install: pip install graphify && graphify install