William Perry Melevo

25 posts


@melevo_william

QA Engineer | Automation Engineer Philippians 4:7

Joined July 2018
93 Following · 9 Followers
William Perry Melevo reposted
Titanium@akinkunmi·
Introducing Avnac: an open-source, local-first Canva alternative.
[image]
Divya Ranjan@divyaranjan_·
"AI made you faster. Your brain didn't get bigger."

@karpathy's recent post went viral for sharing how he uses LLMs to build personal knowledge bases. Interestingly, I've been building a more holistic version of what he described for the past few weeks.

Introducing Pattrns, an AI interface crafted for deep parallel work, with a partner called Dots that just knows you and grows with you from day one.

Why? A few months back, I realised I was working with so many tools, terminals, and windows. AI had made me 10x faster, but staying efficient required all my focus and constant attention. AI was creating 100x more output daily than my brain could process, and the constant context switching and reorienting was killing me. Also, why is every AI chat so linear? The entire experience of using AI was disorienting me. Another agent wasn't the solution for me; an entire interface that connected all the dots for me automatically was. So I built Pattrns.

Here's what it actually is: Pattrns is a visual environment to think and do knowledge work with AI. It keeps you oriented at all times and uses visual threads, kind of like how our brain works (think of your prefrontal cortex externalised). Your research, your references, and your thoughts for all your different threads live side by side as context for AI.

The interface gives you one view with infinite depth. You can run parallel sessions across projects, drop a massive question in one thread, and switch to another to keep working. Focus when you want depth, expand when you want the big picture. My early version was actually an infinite canvas with chat, but using it daily became a bottleneck: infinite canvases eventually just turn into noise, especially for boards that keep evolving.

Then there's Dots, the ambient intelligence underneath it all. It learns your taste and decisions by watching your actions. It pays attention to what you care about, what you curate, and what you engage with (and how much; think PageRank). Over time, it just knows you. You never have to re-explain your thinking, your taste, or your decisions again. It does this by auto-organizing and constantly updating your memory graph into a board ("Me") for you to look at, edit, or chat with. You are always fully aware of what it knows. The underlying rule is simple: organization is Dots' job, but thinking and creating is yours. So every chat feels like you're talking to someone who already gets you.

This is how it feels to use:

Day one: During onboarding, you import your past AI chats (Claude / ChatGPT) and data (Apple Notes, Notion, Evernote). Dots reads through everything, starts creating your Me board with your entire memory graph, auto-resolves conflicts, and just knows you from the start.

Week one: You're working across three projects. You drop research into one board, brainstorm in another, execute in a third. Switch between them instantly. The AI already knows what each board is about because it sees your cards, your structure, your context. No re-explaining. You can start chatting anywhere and it stays updated at all times.

Month one: Dots knows you and has seen what you've been creating and doing: what you build on versus what you explore and move on from. It has learned your taste through your actions, not your words, relative to the different boards. When you ask it to design something, it already knows you hate rounded corners in that exact project. When you're debugging, it remembers you prefer logs over breakpoints. Every correction you make teaches it. Every card you create sharpens its understanding.

The result? You stop maintaining tools and start using them. No tagging. No filing. No "I should organize this later" guilt. Conversations are JSONL you can grep, and Git tracks everything. Zero lock-in. Dots understands the context as the "what" and the conversations that led to it as the "why."

And there's a lot more under the hood. Everything stays local (your brain is a folder you own). Privacy is a mission statement; nothing is stored online. You can literally drop your entire Obsidian vault here and watch it get organized beautifully. It's powered by Anthropic's Agent SDK, so Dots is as capable and agentic as it gets. You can bring all your MCPs, and if an API or skill doesn't exist, just dump things and ask Dots to create it. Repeat something enough and Dots suggests turning it into a skill automatically. Every chat has reply threads (like Slack) so you can drill into any thought without losing the main conversation, and a TLDR button to catch up in seconds.

Who is it for? I believe there are two kinds of people doing major work with AI:
1. Those who want fully autonomous agents that take a prompt and do everything: OpenClaw, the AI chief of staff, that whole wave.
2. Those who sit with it, plan, and execute step by step so their exact taste is translated into the output.

Pattrns is for the latter! You will soon even be able to use the browser extension and the Pattrns MCP to bring your own context to any chat agent you use daily, so it automatically starts thinking like you.

Anyway, Pattrns is a product I always wanted for myself, and I deeply care about this cause. My ultimate mission is to eventually have an interface as intuitive as pen and paper, along with an ambient AI that watches you and unifies everything you do in one place, constantly organizing your context so you keep coming back to it. What would that eventually feel like? That Pinterest image you keep going back to in your browser will soon be auto-organized in a space for you.

Early access for the beta goes live today (invite only; Mac only for now). Reply with what you're building right now and I'd love to send you an invite soon!

PS: No AI was ever used while crafting this entire product experience, just pen and paper. I only used AI to build it. Taste is human :')
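The "Conversations are JSONL you can grep" claim above is concrete enough to sketch. This is a hypothetical illustration only: the one-object-per-line layout and the `text`/`role` field names are my assumptions, not Pattrns' actual schema.

```python
import json
import tempfile
from pathlib import Path

def grep_conversations(jsonl_path, needle):
    """Return (line_number, message) for every line whose 'text' field
    contains the needle, case-insensitively. The 'text' field name is
    an assumed schema, not Pattrns' real one."""
    results = []
    with open(jsonl_path, encoding="utf-8") as f:
        for i, line in enumerate(f, start=1):
            msg = json.loads(line)
            if needle.lower() in msg.get("text", "").lower():
                results.append((i, msg))
    return results

# Demo: a two-message conversation log in a temp directory.
log = Path(tempfile.mkdtemp()) / "conv.jsonl"
log.write_text(
    json.dumps({"role": "user", "text": "Why is every AI chat so linear?"}) + "\n"
    + json.dumps({"role": "assistant", "text": "Threads help."}) + "\n",
    encoding="utf-8",
)
hits = grep_conversations(log, "linear")
```

Because each line is an independent JSON object, plain `grep` over the file works too; the structured version above just lets you filter on fields instead of raw text.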
Andrej Karpathy@karpathy

LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/ and backlinks, and it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images locally so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data in the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), and find interesting connections for new article candidates, to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data. E.g. I vibe coded a small and naive search engine over the wiki, which I use directly (in a web UI), but more often I want to hand it off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, all of it viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.
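The "small and naive search engine over the wiki" mentioned above isn't shown, but a minimal sketch is easy to imagine: a plain term-frequency ranker over the compiled .md files. The function names, scoring, and demo files here are my assumptions for illustration, not the actual tool.

```python
import re
import tempfile
from pathlib import Path
from collections import Counter

def build_index(wiki_dir):
    """Map each .md file under wiki_dir to a bag-of-words Counter
    of lowercased alphanumeric tokens."""
    index = {}
    for path in Path(wiki_dir).rglob("*.md"):
        tokens = re.findall(r"[a-z0-9]+", path.read_text(encoding="utf-8").lower())
        index[path] = Counter(tokens)
    return index

def search(index, query, top_k=5):
    """Rank files by summed term frequency of the query tokens;
    drop files with zero matches."""
    terms = re.findall(r"[a-z0-9]+", query.lower())
    scored = ((path, sum(counts[t] for t in terms)) for path, counts in index.items())
    return sorted((hit for hit in scored if hit[1] > 0), key=lambda h: -h[1])[:top_k]

# Tiny demo wiki: one note about attention, one unrelated note.
wiki = Path(tempfile.mkdtemp())
(wiki / "attention.md").write_text("transformer attention attention heads", encoding="utf-8")
(wiki / "garden.md").write_text("tomato seeds and soil", encoding="utf-8")
hits = search(build_index(wiki), "attention")
```

Wrapped in an `argparse` CLI, something this simple can be handed to an LLM agent as a query tool, which matches the post's point that fancy RAG isn't needed at this scale.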

Luca Rossi ꩜@lucaronin·
Introducing Tolaria! 💧

Today I am releasing a macOS desktop app for managing markdown knowledge bases and helping both AI and humans operate them. It's free and open source, and always will be.

I have been working on it for three months, and I now use it to run my life and work. I personally have a massive workspace of 10,000 notes, the result of 6 years of Refactoring, which I now operate on Tolaria.

Tolaria is the main collaboration surface with my AI agents: they create new notes there, connect them to what exists, and edit existing ones. Everything is easy for them to understand, because it's just markdown files. In a way, it's my implementation of @karpathy's LLM wiki.

Tolaria is also the biggest experiment I have ever run in writing software with AI:
• 2000 commits
• 100K+ lines of code
• 3000+ tests / 85% coverage
• 9.9/10 code health
• 70+ architecture decision records

I am releasing it open source also to use it as a living artifact of how I do AI coding, so you can inspect at any time things like how I write docs, what's in my AGENTS file, which hooks I run, and so on.

You can find it below:
• Newsletter announcement: refactoring.fm/p/introducing-…
• Website: tolaria.md
• Github repo: github.com/refactoringhq/…

Let me know your thoughts!
[image]
William Perry Melevo@melevo_william·
If you go back to my first comment on this post by Amol Avasare, you should be able to understand which America I am referring to. Which America tagged Anthropic as a supply chain risk? And I also said that "I vehemently objected to that decision." So, when you read a book, do you just read the front page and judge it by its cover? That is why I said, "Make reading a habit." Ok? 😄😄
Nilo Vélez@NiloVelez·
@melevo_william @JohanFireOk @TheAmolAvasare Ok, little condescending prick. You are speaking about the opinion of the US referring to it as America. For the rest of the World, "America" is the full continent, not just your little country.
William Perry Melevo@melevo_william·
@jeiting As long as the information is readable, minimalist design is great. Users focus more on the information and functionality rather than the design. Nice app, btw!🙌
Jacob Eiting@jeiting·
reject modernity embrace tradition
[image]
William Perry Melevo@melevo_william·
Hey everyone, I wanted to share a little project I have been working on: my Encyclopaedia Agentica. It is my personal knowledge vault, built as a minimalist space for reading, writing, and keeping track of everything I have learned and researched these past few years.

I have always enjoyed reading, and I wanted this to feel like a calm place where ideas can live together without distractions. So instead of adding flashy designs or extra visual elements, I kept it plain and simple on purpose. The whole idea is to make it easier for me to focus on the content itself.

Every entry is interconnected through wikilinks, so the notes behave like a real encyclopedia of my own learning. I do not plan to include a graph view, because this project relies on wikilinks instead. Once I create an entry, it becomes a markdown file saved into a folder I choose, which means I can easily reuse it later whenever I need to upload or reference it in Claude or ChatGPT.

I am still in control of the content. I do not rely on AI to write everything for me. Most of the time, I use it only for small things like correcting grammar, improving a phrase, or helping me find the right word when I get stuck. Sometimes I also ask it to help locate a research paper. But the actual thinking, structure, and research are still mine.

The idea behind "Agentica" is something I am still exploring. I am thinking of building a dedicated AI agent that runs in the CLI for information gathering, something that can help me boost productivity when I am diving deep into a specific topic. The goal is to keep it minimalist as well, inspired by Mario Zechner's approach, but even more lightweight. I still need to experiment more with this, but that is the direction I am heading.

This project is really my way of turning years of reading and research into something organized and lasting. I may still add a raw or dump folder later to make it easier to capture unfiltered thoughts before organizing them more carefully, just like Karpathy mentioned in his LLM Knowledge Bases post. For now, this is what I have. It is still a work in progress, and there may be some changes in the flow, but the overall design direction is already set.
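The wikilink interconnection described above can be sketched in a few lines. A hypothetical minimal parser follows; the `[[Target]]` / `[[Target|alias]]` syntax is Obsidian's convention, assumed here since the post doesn't show its exact format, and the demo entries are invented:

```python
import re

# Obsidian-style wikilink: [[Target]], [[Target|alias]], [[Target#section]].
# The capture stops at ']', '|', or '#', so aliases and anchors are dropped.
WIKILINK = re.compile(r"\[\[([^\]|#]+)")

def extract_wikilinks(markdown_text):
    """Return link targets in order of appearance, whitespace-trimmed."""
    return [m.strip() for m in WIKILINK.findall(markdown_text)]

def link_map(entries):
    """Given {title: markdown_body}, map each entry to the entries it links to."""
    return {title: extract_wikilinks(body) for title, body in entries.items()}

# Demo: two interconnected entries behaving like a tiny encyclopedia.
links = link_map({
    "Stoicism": "Founded by [[Zeno of Citium]]; see also [[Marcus Aurelius|Marcus]].",
    "Marcus Aurelius": "Author of Meditations, a classic of [[Stoicism]].",
})
```

A map like this is all a graph view would need as input, which is why skipping the graph costs nothing: the link structure is already recoverable from the plain markdown files at any time.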
[image]
William Perry Melevo reposted
UNTV News and Rescue@UNTVNewsRescue·
A young Indonesian national was rescued by soldiers in a military operation in Sumisip town in Basilan, the Armed Forces of the Philippines (AFP) said.
[image]
William Perry Melevo reposted
Members Church of God International
"There are three offerings we should offer to God—thanksgiving, good works, [and] contribution from the heart." (Hebrews 13:15-16) — Brother Eli Soriano, MCGI Mass Indoctrination Session 11 Reading Your Heart #PureDoctrinesOfChrist
[image]