Ping-Lin Chang

593 posts

@pinglin02

Founder & CEO @ Instill AI | an entrepreneur + programmer + computer vision and machine learning enthusiast

London · Joined April 2010
114 Following · 184 Followers
Kam Hosseini
Kam Hosseini@KyushiThe·
@pinglin02 solid point on keeping the original sources linked. gonna check it out
English
1
0
1
25
Ping-Lin Chang
Ping-Lin Chang@pinglin02·
Instill AI has exactly this idea concretized and shipped for you. You likely wanna preserve the original unstructured data type and format to review the genuine sources with solid backlinks. Moreover, it’s not just a personal knowledge base in the end. It’s your high-quality context to generate more useful context for you and your agents. x.com/pinglin02/stat…
Andrej Karpathy@karpathy

LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images to local so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki, I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I both use directly (in a web ui), but more often I want to hand it off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually, it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.
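The "Extra tools" step above is concrete enough to sketch. Below is a minimal, hypothetical take on that kind of naive search CLI over a markdown wiki: it walks a wiki/ directory, scores .md files by keyword overlap with the query, and prints the top matches. The directory layout, scoring, and flag names are assumptions for illustration, not the tool Karpathy actually built.

```python
#!/usr/bin/env python3
"""Naive keyword search over a directory of .md wiki files (illustrative sketch)."""
import argparse
import pathlib
import re
from collections import Counter


def tokenize(text: str) -> Counter:
    # Lowercased word counts; good enough for a toy ranking.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))


def score(query_tokens: Counter, doc_tokens: Counter) -> int:
    # Simple overlap score: how often the query terms appear in the doc.
    return sum(doc_tokens[t] for t in query_tokens)


def main() -> None:
    parser = argparse.ArgumentParser(description="Search a markdown wiki.")
    parser.add_argument("query", help="free-text query")
    parser.add_argument("--wiki", default="wiki", help="wiki directory")
    parser.add_argument("--top", type=int, default=5, help="results to show")
    args = parser.parse_args()

    query_tokens = tokenize(args.query)
    results = []
    for path in pathlib.Path(args.wiki).rglob("*.md"):
        text = path.read_text(encoding="utf-8", errors="ignore")
        s = score(query_tokens, tokenize(text))
        if s > 0:
            # Keep the first non-empty line as a crude title/snippet.
            snippet = next((line for line in text.splitlines() if line.strip()), "")
            results.append((s, path, snippet))

    for s, path, snippet in sorted(results, reverse=True)[: args.top]:
        print(f"{s:6d}  {path}  |  {snippet[:80]}")


if __name__ == "__main__":
    main()
```

Saved as, say, wiki_search.py, an agent could be handed `python wiki_search.py "continuous contextualization" --top 3` as a CLI tool and read the ranked file paths before opening the relevant articles.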

English
2
0
2
67
Ping-Lin Chang
Ping-Lin Chang@pinglin02·
@pmitu Can’t say for sure, but I’m genuinely curious—how are you getting your cocaine in the first place?
English
0
0
0
4
Paul Mit
Paul Mit@pmitu·
Vibe coding is more addictive than cocaine
English
327
182
1.7K
67.5K
Ping-Lin Chang
Ping-Lin Chang@pinglin02·
This is a weekend insight-sharing post.

1. Software engineering is moving from CI (continuous integration) and CD (continuous deployment) to CC (continuous contextualization). Andrej Karpathy’s idea of a personal knowledge base is essentially a form of CC in practice.

2. ETL (Extract → Transform → Load) will transition to ECL (Extract → Contextualize → Load). The “T” in ETL has historically been a billion-dollar business, as data cleaning and transformation require complex pipelines and workflows. While ETL workflows will remain—since foundational agent infrastructure still depends on them—ECL will rise rapidly, because context (tokens) is becoming what truly matters.
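To make the ECL framing concrete, here is a minimal sketch of one Extract → Contextualize → Load step: extract the raw source as-is, have an LLM compile it into a markdown note that keeps a backlink, and load the note into a context store. The call_llm helper and the plain-directory store are placeholders, not any particular product's API.

```python
import pathlib
from datetime import datetime, timezone


def call_llm(prompt: str) -> str:
    """Placeholder: wire this to whatever LLM client you actually use."""
    raise NotImplementedError


def extract(source_path: str) -> str:
    # Extract: read the raw, unstructured source without cleaning it.
    return pathlib.Path(source_path).read_text(encoding="utf-8", errors="ignore")


def contextualize(raw_text: str, source_url: str) -> str:
    # Contextualize: instead of ETL-style transformation rules, ask an LLM to
    # compile the source into a markdown note that preserves a backlink.
    prompt = (
        "Summarize the following source into a concise markdown note. "
        "Keep the key facts and add a '## Source' section linking back to "
        f"{source_url}.\n\n{raw_text}"
    )
    return call_llm(prompt)


def load(note_md: str, store_dir: str = "context") -> pathlib.Path:
    # Load: write the note into the context store (here, just a directory of
    # .md files; a real system might also index it for retrieval).
    store = pathlib.Path(store_dir)
    store.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    out = store / f"note-{stamp}.md"
    out.write_text(note_md, encoding="utf-8")
    return out


def ecl(source_path: str, source_url: str) -> pathlib.Path:
    return load(contextualize(extract(source_path), source_url))
```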
English
0
0
0
25
Ping-Lin Chang
Ping-Lin Chang@pinglin02·
@kr0der So the main problem is not the "Plan" mode itself but the presentation quality of the plan draft. I've found Cursor's plan drafts concise and of decent quality; apparently Cursor has iterated on that presentation many times.
English
0
0
0
52
Anthony Kroeger
Anthony Kroeger@kr0der·
slowly starting to use plan mode a LOT less nowadays

i realised whenever i use plan mode, it generates a gigantic plan and then i dont read it and hit build out of laziness

having a meaningful conversation with the AI agent to discuss implementation feels a lot easier 🤔
English
103
22
789
789.2K
Ping-Lin Chang
Ping-Lin Chang@pinglin02·
@drawio How does this compare with LLM outputting Mermaid directly and Excalidraw MCP?
English
0
0
0
705
Ping-Lin Chang
Ping-Lin Chang@pinglin02·
Instill AI has exactly this idea concretized and shipped for you. You likely wanna preserve the original unstructured data type and format to review the genuine sources with solid backlinks. Moreover, it’s not just a personal knowledge base in the end, it’s your high-quality context to generate more useful context. x.com/pinglin02/stat…
English
0
0
0
601
Ping-Lin Chang
Ping-Lin Chang@pinglin02·
I'm curious how people using Claude Code + git-worktree to run a dozen agents in parallel actually iterate on UI development. For backend development, I can have five or six coding agents running simultaneously and it's highly efficient. But for frontend, I haven't found a solid loop-closure framework yet. Case in point: here is an Instill Agent working on our Chat UI. After three iterations, I still had to personally roast it (roughly translated): "This is ugly," "What even is this?" and "Can you actually do something competent?" I've tried using skill.md or agent.md, but the results are limited—likely because the codebase is too large. I'm still keen to learn any definitive closed-loop methods for frontend stuff.
[tweet media: three images]
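For the Claude Code + git-worktree setup mentioned above, a rough sketch of the worktree plumbing is below. It assumes a headless agent invocation like `claude -p "<prompt>"` and made-up branch names and prompts; swap in whatever agent command and tasks you actually run.

```python
import subprocess
from pathlib import Path

# Hypothetical tasks: one branch + prompt per agent.
TASKS = [
    ("agent/chat-ui-header", "Refine the chat header layout"),
    ("agent/chat-ui-scroll", "Fix message list scroll behavior"),
    ("agent/chat-ui-theme", "Align colors with the design tokens"),
]


def spawn_agents(repo: str = ".", base: str = "main") -> list[subprocess.Popen]:
    procs = []
    for branch, prompt in TASKS:
        worktree = Path(repo) / ".worktrees" / branch.replace("/", "-")
        worktree.parent.mkdir(parents=True, exist_ok=True)
        # One isolated checkout per agent so they don't trample each other.
        subprocess.run(
            ["git", "worktree", "add", "-b", branch, str(worktree), base],
            cwd=repo,
            check=True,
        )
        # Launch the agent non-interactively inside its worktree.
        # `claude -p` is assumed here; substitute your own agent command.
        procs.append(subprocess.Popen(["claude", "-p", prompt], cwd=worktree))
    return procs


if __name__ == "__main__":
    for proc in spawn_agents():
        proc.wait()
```

This only closes the loop for backend-style tasks; it does nothing about the UI review problem the post raises, where a human (or a visual checker) still has to judge the result.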
English
1
0
1
59
FFmpeg
FFmpeg@FFmpeg·
FFmpeg is moving to Rust 🦀 Our use of C and Assembly in FFmpeg has been an unacceptable violation of safety. FFmpeg will be running 10x slower - but we're doing it for your safety. All your videos will appear green - safety first, working software later.
English
1.6K
3.7K
44.6K
1.9M
Guido van Rossum
Guido van Rossum@gvanrossum·
I think I finally understand what an agent is. It's a prompt (or several), skills, and tools. Did I get this right?
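Taking that definition at face value, a minimal sketch of the structure could look like the following; the field names and the system_message helper are illustrative only, not any framework's actual API.

```python
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class Agent:
    # A prompt, a set of skills (reusable instructions), and tools the model
    # can call: roughly the tweet's definition.
    prompt: str
    skills: list[str] = field(default_factory=list)
    tools: dict[str, Callable[..., str]] = field(default_factory=dict)

    def system_message(self) -> str:
        # Everything the model sees up front: prompt + skills + tool names.
        parts = [self.prompt, *self.skills]
        if self.tools:
            parts.append("Available tools: " + ", ".join(self.tools))
        return "\n\n".join(parts)
```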
English
534
204
4.7K
567.8K
Ping-Lin Chang
Ping-Lin Chang@pinglin02·
AI shouldn't just help you work more—it should help you work chill. 🧊 With Instill AI, we’re ditching the busywork: 
🎥 Multilingual video transcription & 1-click subs 
🎨 Instant slides, images, & web pages 
📂 Auto-organized Collections 
🔍 Semantic search (just type @)

My top pick: Internalize 3-hour podcasts or videos into instant timeline segments. Jump straight to the gold, skip the fluff, and cite with a single click. 🖱️✨

Stop searching, start creating. Grab your invite in the replies! 👇 #InstillAI #Productivity #AI #WorkSmarter
English
1
0
1
45
Ping-Lin Chang
Ping-Lin Chang@pinglin02·
If you’re like me and curious about which hundreds of startups YC incubates in every single batch—and the "why" behind them—you can use Instill AI to curate a beautiful dataset in under a minute. (And then come back 18 months later to see which hundreds have already "perished.") While you could get answers from ChatGPT, Claude, or OpenClaw, Instill AI makes that context persistent and organized, allowing you as a human to oversee your AI agents more effectively. Basically, I (or my AI agent) just drop that YC poster image in, and let the Instill Agent organize it into a collection. Moving forward, this becomes high-quality context that you don't have to keep burning tokens to re-generate. If you’ve started noticing that your agents are casually burning through millions of tokens, it’s not their fault—they just lack context! Give them a chance to redeem themselves; give them Instill AI. Check the comments for the invitation link.
[tweet media: three images]
English
2
0
1
57
Ping-Lin Chang
Ping-Lin Chang@pinglin02·
I just fed three of my PhD and Postdoc papers into Instill Agents to turn them into posters. See that 3D cube? I spent two hours clicking my way toward carpal tunnel syndrome just to draw that. Back then (2011), the heart visualization required 3D scanning a mock-up model, getting the object file, rendering it with OpenGL shaders, and achieving CUDA interoperability for real-time 3D reconstruction and rendering.

The other two papers were from my "young and naive" days when I thought these would be commercialized within five years:

1. AR/VR for Prostate Cancer Surgery: Using the Da Vinci robot to "doodle" inside the bladder, guiding surgeons to avoid critical nerve tissue (preserving the patient's "future happiness").

2. Autonomous Robotics for Cardiovascular Surgery: Instead of a doctor manually pushing a catheter from the thigh to the heart—risking a fatal puncture—this EU project aimed to create a catheter that "swims" to the coronary artery, deploys the stent, and swims back out on its own.
[tweet media: three images]
English
0
0
1
54