dylan

1.6K posts

@dpshde

Jesus is King

Joined July 2010
1.1K Following · 589 Followers
dylan retweeted
DAN KOE
DAN KOE@thedankoe·
A pattern I've noticed in stuck people: They're always busy. They never stop moving. They have 47 tabs open and a notebook-sized to-do list. But if you ask them what they accomplished this week that actually matters, their mind goes blank. Busyness isn't a badge of honor.
460
888
10.2K
280.9K
dylan
dylan@dpshde·
@0xSero I would, but I wouldn't want just Zai; I'd prefer to bring my own provider!
0
0
0
37
0xSero
0xSero@0xSero·
I’m trying to test something! How many of you would pay for Cursor Pro if Cursor let you OAuth with GLM (ZAI)? Leave a like, comment, and share if you’re interested.
82
5
381
21.2K
maarten
maarten@0xmrtn·
@ctatedev At this point Vercel could introduce a harness with all of its best practices. Between this, agent browser, portless, etc., it would make a killer ADE
1
0
2
183
Chris Tate
Chris Tate@ctatedev·
(Re)introducing opensrc. Now in Rust 🦀 Give your AI agent deeper implementation context, not just types and docs. The actual source code of your dependencies. Agents use `opensrc path` in any bash command. Fetched once, cached globally. npx skills add vercel-labs/opensrc github.com/vercel-labs/op…
13
15
296
15.6K
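The fetched-once, cached-globally behavior Chris describes can be sketched roughly as follows. This is a hypothetical Python illustration, not opensrc's actual implementation; the cache location, key scheme, and the `source_path`/`fetch` names are all assumptions:

```python
import hashlib
from pathlib import Path

# Hypothetical global source cache, in the spirit of opensrc's
# "fetched once, cached globally". The layout is an assumption.
CACHE_DIR = Path.home() / ".cache" / "srccache"

def source_path(package: str, version: str, fetch) -> Path:
    """Return the local path to a dependency's source tree, downloading it
    only the first time it is requested; later calls hit the cache."""
    key = hashlib.sha256(f"{package}@{version}".encode()).hexdigest()[:16]
    dest = CACHE_DIR / f"{package}-{version}-{key}"
    if not dest.exists():        # fetch once...
        dest.mkdir(parents=True)
        fetch(package, version, dest)
    return dest                  # ...then every later call reuses the copy
```

An agent could then interpolate the returned path into any shell command, which is the workflow the tweet's `opensrc path` describes.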
yags
yags@yagilb·
So excited to share that @adrgrondin and @LocallyAIApp are joining the LM Studio family! Together we are doubling down on Apple platforms to bring you delightful AI experiences across devices. Adrien was able to build a tasteful and much-loved app over nights and weekends, and has been crushing it on twitter as well. Could not be more excited to join forces and build the future together. Welcome to the team, Adrien!
Adrien Grondin@adrgrondin

I’m excited to announce that I’ve joined @lmstudio 👾 The team behind the app is amazing and I couldn’t be more proud. I’ll still be working on Locally AI, now full-time, to bring the best experience possible.

5
8
81
8.2K
Juan
Juan@JuanRezzio·
Last Thursday we released @cursor_ai 3.0! Are you guys liking it? Is it your go-to coding layout, or are you using the IDE? Curious about your thoughts!
125
8
242
16.9K
dylan retweeted
Vatsal
Vatsal@vtslkshk·
Podchemy is now open source!

I don’t listen to podcasts. I prefer reading, but most notes/summary tools serve a few bullet points and quotes, which are too shallow for me. This felt like a good problem to solve when I was looking for ideas to build, and learn how to code with AI in the process.

A year later, @podchemy gets thousands of visits every month. Many podcast creators and guests whom I admire have praised these notes. Balaji invited me to his Network School, Sajith Pai called it “likely my favourite new podcast tool / offering of 2025”, and my favorite moment was getting an email from David Deutsch with some corrections to the notes on his podcast appearance.

Building Podchemy has been a rewarding experience. I’ve learned a lot and strengthened my AI muscles. There are many directions it could go from here, and I hope this decision to open-source will help with that.

I am also taking an indefinite break from Podchemy’s active development and maintenance. Given how rapidly AI tools have been progressing, other higher-impact ideas floating in my head, and life getting more interesting but also demanding at both work and home, Podchemy is no longer on my list of priorities. I may come back to it, or not, I don’t know. Open-sourcing feels like the right closure for now.

GitHub link here, fork away! github.com/vatsalkaushik/…
2
4
16
898
dylan
dylan@dpshde·
@aarondfrancis @jamesqquick Very curious how this will work with different harnesses. I am jumping between 4-5 different harnesses daily 😅
1
0
0
199
James Q Quick
James Q Quick@jamesqquick·
Managing state for your AI Chats...how are you doing it?
11
0
16
9.2K
dylan
dylan@dpshde·
@BleedingDev @0xSero @thekitze I have found myself preferring the CLI > chat TUI. I think app server/acp definitely makes sense, but not sure why they should abandon CLI?
0
0
1
23
Petr Glaser
Petr Glaser@BleedingDev·
@0xSero @thekitze Yes, the UX is not ideal. I think they should focus more on the harness and less on the CLI. It would be better to just build an App Server / ACP-compatible tool and not worry about the CLI. Then you would just connect T3 Code or anything else and you have a winner.
1
0
0
160
Wes Bos
Wes Bos@wesbos·
What are you working on? Send me your project. OSS, Paid, whatever. We're doing a @syntaxfm Syntax Highlight and we will review and/or roast your projects
609
14
342
56.6K
dylan
dylan@dpshde·
@yiliush OSS it! We need projects like these to really start moving in the right direction for the future AI-native 'IDE'
0
0
1
230
Yiliu
Yiliu@yiliush·
before collaborator i built out this whole agentic knowledge base app with an entire source docs → knowledge graph → mcp pipeline... and just never released it. you guys want me to release it?
Andrej Karpathy@karpathy

LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images locally so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I both use directly (in a web UI), but more often I want to hand it off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, all of it viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.

51
21
684
80.8K
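The raw/ → markdown-wiki compile step from the quoted thread above can be sketched minimally. This is a hypothetical illustration, not anyone's actual tooling; the file layout, the `compile_wiki` name, and the `summarize` callback (standing in for an LLM call) are all assumptions:

```python
from pathlib import Path

def compile_wiki(raw_dir: Path, wiki_dir: Path, summarize) -> list[Path]:
    """Incrementally 'compile' raw source documents into a markdown wiki:
    each raw file gets one summary page, plus an index.md linking them all.
    `summarize` stands in for an LLM call over the document text."""
    wiki_dir.mkdir(parents=True, exist_ok=True)
    pages = []
    for src in sorted(raw_dir.glob("*")):
        page = wiki_dir / f"{src.stem}.md"
        if not page.exists():  # incremental: only compile new sources
            page.write_text(f"# {src.stem}\n\n{summarize(src.read_text())}\n")
        pages.append(page)
    # Index with Obsidian-style [[wikilinks]] so an agent can navigate
    index = wiki_dir / "index.md"
    index.write_text(
        "# Wiki index\n\n" + "\n".join(f"- [[{p.stem}]]" for p in pages) + "\n"
    )
    return pages
```

Because existing pages are skipped, re-running the compile after dropping new files into raw/ only spends tokens on the new material, which is the incremental behavior the thread relies on.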
Andrew Draper
Andrew Draper@andrewdraper·
@Shpigford Have you considered adding something like github.com/EveryInc/proof… to it? My main use of Clearly is quick previews of markdown files, but having a multiplayer option to work with my nanoclaws would be amazing.
2
0
2
104
Josh Pigford
Josh Pigford@Shpigford·
i wonder what clearly.md might look like as a macos-native obsidian alternative in an AI/agent world 🤔
7
0
10
3.2K
dylan
dylan@dpshde·
@0xSero @thekitze I’m curious what about the UX you don’t like? I enjoyed the idea of having a “one off” AI in the terminal as compared to the “threads” UX we have in every other editor
0
0
0
189
0xSero
0xSero@0xSero·
@thekitze It’s so ass, I tried it. Horrible UX
9
0
89
8.2K
dylan
dylan@dpshde·
@rohildev All it would need is access to a folder on the device
1
0
1
66
rohildev
rohildev@rohildev·
@dpshde Yaa, I will add X support soon. It doesn’t work with Obsidian now; let me check if there’s any way to add support
1
0
1
371
rohildev
rohildev@rohildev·
I created a Private Second Brain 🧠 for you. It’s called Dump. I used Slack, Twitter bookmarks, and Apple Notes to store things, but finding old info was painful. Slack’s 90-day limit made it worse. Many founders faced the same issue, so I built Dump. Dump is your private second brain. It stores everything on your device or iCloud and helps you retrieve information with context, exactly when you need it. 100% privacy. Would love to hear your thoughts.
177
44
923
140.5K
dylan
dylan@dpshde·
@minafahmi Stream 🙌🏼 seems very interesting!
0
0
1
42
Mina Fahmi
Mina Fahmi@minafahmi·
The mouse for voice. Comment 'Stream' if you'd like to join the beta!
159
7
243
32.5K
dylan
dylan@dpshde·
A few initial thoughts:

1. Because it's a Zed fork, it's not clear if we're expected to use the Zed chat interface or the terminal. Obviously either could be used, but an opinionated decision could be helpful.

2. I'm not sure why the default is always a browser. I would almost never default to a browser when opening a tool like this. I see a browser as a complementary feature to quickly test web projects.

3. It seems we lose some of Zed's performance, which is a shame; this is the primary reason I would consider it!
1
0
1
36
naaiyy
naaiyy@naaiyy_·
@dpshde Thanks! lmk how that goes for you and how I can do better!
1
0
0
249