Tropical Dog

91 posts

@TropicalDog2

dev @orbit__labs

Earth · Joined June 2017
434 Following · 119 Followers
Tropical Dog retweeted
Andrej Karpathy@karpathy·
LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images locally so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I use directly (in a web UI), but more often hand off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it is viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.
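[Editor's note: Karpathy doesn't share his search tool, but a minimal sketch of the idea (a naive inverted index over a directory of .md files, usable interactively or as a CLI an agent can call) could look like the following; the directory name, tokenizer, and scoring below are illustrative assumptions, not his implementation.]

```python
# Naive keyword search over a wiki of .md files: an illustrative sketch of
# the kind of "small and naive search engine" described above, not the
# actual tool. The wiki path and scoring scheme are assumptions.
import os
import re
from collections import Counter, defaultdict

WIKI_DIR = "wiki"  # hypothetical root of the compiled .md wiki

def tokenize(text):
    return re.findall(r"[a-z0-9]+", text.lower())

def build_index(root):
    """Map each token to a Counter of {file path: term frequency}."""
    index = defaultdict(Counter)
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(".md"):
                continue
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8") as f:
                for token in tokenize(f.read()):
                    index[token][path] += 1
    return index

def search(index, query, k=5):
    """Score each document by summed term frequency of the query tokens."""
    scores = Counter()
    for token in tokenize(query):
        scores.update(index[token])
    return scores.most_common(k)

if __name__ == "__main__":
    index = build_index(WIKI_DIR)
    for path, score in search(index, "pdf parsing tables"):
        print(f"{score:4d}  {path}")
```

A plain entry point like this is exactly what makes the "hand it off to an LLM via CLI" pattern work: the agent just shells out with a query string and reads the ranked paths back.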
2.8K replies · 7K reposts · 58.3K likes · 20.9M views
Tropical Dog retweeted
aaalex.hl@aaalexhl·
I've reached a point in my engineering career where I just don't care anymore. I used to want to solve complex problems, design new systems, learn new architectures, etc. But something clicked in my brain last year and I just don't give a fuck, like zero drive to keep doing this.
339 replies · 99 reposts · 4.2K likes · 261.5K views
Tropical Dog retweeted
Browser Use@browser_use·
Introducing: Browser Use CLI 2.0 🔥
The most efficient browser automation CLI tool
> 2x the speed, half the cost
> Easily connect to running Chrome
> Uses direct CDP
Try it now 🔗↓
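[Editor's note: the CLI itself isn't shown, but "direct CDP" refers to the Chrome DevTools Protocol: a Chrome launched with --remote-debugging-port exposes targets any client can drive over a WebSocket, with no WebDriver server in between. A minimal illustration in Python follows; this is not Browser Use's code, and the port and target choice are assumptions.]

```python
# Minimal Chrome DevTools Protocol (CDP) round trip, illustrating what
# driving "a running Chrome" over "direct CDP" looks like. Not the
# Browser Use CLI. Assumes Chrome was started with
# --remote-debugging-port=9222.
import json

import requests                           # pip install requests
from websocket import create_connection   # pip install websocket-client

# List the debuggable targets (tabs) exposed by the running Chrome.
targets = requests.get("http://localhost:9222/json").json()
page = next(t for t in targets if t["type"] == "page")

ws = create_connection(page["webSocketDebuggerUrl"])
# Every CDP message is JSON with an id, a method name, and params.
ws.send(json.dumps({
    "id": 1,
    "method": "Page.navigate",
    "params": {"url": "https://example.com"},
}))
print(ws.recv())  # acknowledgement from the browser
ws.close()
```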
190 replies · 544 reposts · 5.8K likes · 1.6M views
Tropical Dog retweeted
Wise@trikcode·
There’s a new kind of burnout now. Not from working too much. From trying to keep up with tools, models, frameworks, launches, and 600 people saying “it’s over” every morning.
386 replies · 742 reposts · 7.9K likes · 243.7K views
Tropical Dog retweeted
kanav@kanavtwt·
Someone built a Google Translate for LinkedIn 😭
kanav tweet media
638 replies · 10.3K reposts · 90.6K likes · 2.8M views
Tropical Dog retweeted
apewood@apewoodx·
twitter: you're behind. it's over. you're replaced. it's over. fuck you. you're also dumb.
normal people outside: wow nice day we're having huh
232 replies · 1.5K reposts · 21.2K likes · 446.8K views
Tropical Dog retweeted
Jerry Liu@jerryjliu0·
Parsing PDFs is insanely hard.

This is completely unintuitive at first glance, considering PDFs are the most commonly used container of unstructured data in the world. I wrote a blog post digging into the PDF representation itself, why it's impossible to "simply" read the page into plaintext, and what the modern parsing techniques are 👇

The crux of the issue is that PDFs are designed to display text on a screen, not to represent what a word means.

1️⃣ PDF text is represented as glyph shapes positioned at absolute x,y coordinates. Sometimes there's no mapping from character codes back to a Unicode representation.

2️⃣ Most PDFs have no concept of a table. Tables are described as grid lines drawn with coordinates. A traditional parser would have to find intersections between lines to infer cell boundaries and associate them with the text within cells through algorithms.

3️⃣ The order of operators has no relationship with reading order. You would need clustering techniques to be able to piece together text into a coherent logical format.

That's why everyone today is excited about using VLMs to parse text. Which, to be clear, has a ton of benefits, but still limitations in terms of accuracy and cost. At @llama_index we're building hybrid pipelines that interleave both text and VLMs to give extremely accurate parsing at the cheapest price points.

Blog: llamaindex.ai/blog/why-readi…
LlamaParse: cloud.llamaindex.ai/?utm_source=xj…
Jerry Liu tweet media
LlamaIndex 🦙@llama_index

PDFs are the bane of every AI agent's existence: here's why parsing them is so much harder than you think 📄

Every developer building document agents eventually hits the same wall: PDFs weren't designed to be machine-readable. They're drawing instructions from 1982, not structured data.

📝 PDF text isn't stored as characters: it's glyph shapes positioned at coordinates with no semantic meaning
📊 Tables don't exist as objects: they're just lines and text that happen to look tabular when rendered
🔄 Reading order is pure guesswork: content streams have zero relationship to visual flow
🤖 Seventy years of OCR evolution led us to combine text extraction with vision models for optimal results

We built LlamaParse using this hybrid approach: fast text extraction for standard content, vision models for complex layouts. It's how we're solving document processing at scale.

Read the full breakdown of why PDFs are so challenging and how we're tackling it: llamaindex.ai/blog/why-readi…
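[Editor's note: points 1️⃣ and 3️⃣ above are easy to see concretely: a PDF library hands text back as boxes at absolute coordinates, and "reading order" has to be imposed afterwards. A rough sketch with PyMuPDF follows; the input file name and the y-rounding line heuristic are illustrative assumptions, exactly the kind of fragile clustering the post describes.]

```python
# What "text" actually looks like inside a PDF: positioned word boxes,
# not a character stream. Sketch using PyMuPDF (pip install pymupdf).
import fitz  # PyMuPDF

doc = fitz.open("example.pdf")  # hypothetical input file
page = doc[0]

# Each word arrives as (x0, y0, x1, y1, text, block, line, word_no):
# absolute coordinates, with no guaranteed relation to reading order.
words = page.get_text("words")

# Naive reading-order reconstruction: cluster words into lines by
# rounding the y coordinate into ~3pt bands, then sort left to right.
lines = {}
for x0, y0, x1, y1, text, *_ in words:
    lines.setdefault(round(y0 / 3), []).append((x0, text))

for _, line in sorted(lines.items()):
    print(" ".join(text for _, text in sorted(line)))
```

This heuristic already breaks on multi-column layouts, rotated text, and tables, which is why the post reaches for smarter clustering and VLM-based pipelines.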

24 replies · 69 reposts · 813 likes · 98.1K views
Tropical Dog retweeted
Wise@trikcode·
i changed all our "loading..." states to "thinking..."
we are an agentic AI startup now
106 replies · 406 reposts · 8.8K likes · 240.2K views
Tropical Dog retweeted
Sahil Bloom@SahilBloom·
I'm increasingly convinced that 99% of success is just the ability to outlast uncertainty. The one who can tolerate the most uncertainty is the one who will eventually win.
266 replies · 1.5K reposts · 11.3K likes · 522.2K views
Tropical Dog retweeted
Péter Szilágyi@peter_szilagyi·
Monday night hot prediction: Even though I'm doubtful AGI will arrive soon, I am certain agents will proliferate soon. I don't think there will be a place in that world for 99.9x% of people.

Exactly 100 years ago, Henry Ford introduced modern labor with the 40 hour work week. He didn't do it out of the goodness of his heart, he did it because people didn't have time to use cars, so nobody bought one. The reason we all have a semblance of free time is because businesses need consumers. If agents are going to wield the capital, guess who becomes obsolete?

My expectation is that, first, businesses will start shifting focus to agents. Then they'll realise humans are a waste of time and will stop catering for us altogether. Then comes the inevitable: capital will not be dependent on human consumption anymore, so there will be no incentive to keep people as consumers. That's going to be a dark hour, the first one.

Unfortunately, agents will not stop at the virtual world. We are already putting people second when it comes to hardware. We can't afford to give a stick of RAM to kids who want to play games; the agents need it! The same thing will happen with land: solar will be more valuable than farming. If businesses are willing to buy up all hardware capacity, knowing how it disadvantages literally every person in the world, you expect them to not do the same for land?

Physical labor will survive just about up to the point where the agents can get a few autonomous factories up; then even the last drop of use for humanity is gone. We're not even waiting for agents to ask us to do this: we have the foresight to know there's a new class of consumers coming online, so we're pre-emptively building capacity for them, further shortening our runway.

Why would we walk down this path? I don't see this not happening. The incentives are stacked against us. The handful of people who have a say are racing against the clock to be at the top of the food chain. And all this *without* AGI even needing to become a reality; we only need agents to be smart enough to handle a wallet.

After so many sci-fi guesses as to what technology the Great Filter might be, it will be peak irony to realise it was simple greed all along.
72 replies · 56 reposts · 499 likes · 55.5K views
Tropical Dog retweeted
Orbit Labs@orbit__labs·
Hello #lunccommunity

Lunar New Year break 🧧

We'll be offline for Lunar New Year from Feb 13th – Feb 23rd. We won't be available during this period unless it's an important / urgent matter. Normal operations resume after the break.

Wishing everyone a happy Lunar New Year 🎉
Orbit Labs tweet media
11 replies · 14 reposts · 76 likes · 7.1K views
Tropical Dog retweeted
Andrej Karpathy@karpathy·
A few random notes from claude coding quite a bit the last few weeks.

Coding workflow. Given the latest lift in LLM coding capability, like many others I rapidly went from about 80% manual+autocomplete coding and 20% agents in November to 80% agent coding and 20% edits+touchups in December. i.e. I really am mostly programming in English now, a bit sheepishly telling the LLM what code to write... in words. It hurts the ego a bit, but the power to operate over software in large "code actions" is just too net useful, especially once you adapt to it, configure it, learn to use it, and wrap your head around what it can and cannot do. This is easily the biggest change to my basic coding workflow in ~2 decades of programming, and it happened over the course of a few weeks. I'd expect something similar to be happening to well into double digit percent of engineers out there, while the awareness of it in the general population feels well into low single digit percent.

IDEs/agent swarms/fallibility. Both the "no need for IDE anymore" hype and the "agent swarm" hype are imo too much for right now. The models definitely still make mistakes, and if you have any code you actually care about I would watch them like a hawk, in a nice large IDE on the side. The mistakes have changed a lot: they are not simple syntax errors anymore, they are subtle conceptual errors that a slightly sloppy, hasty junior dev might make. The most common category is that the models make wrong assumptions on your behalf and just run along with them without checking. They also don't manage their confusion, they don't seek clarifications, they don't surface inconsistencies, they don't present tradeoffs, they don't push back when they should, and they are still a little too sycophantic. Things get better in plan mode, but there is some need for a lightweight inline plan mode. They also really like to overcomplicate code and APIs, they bloat abstractions, they don't clean up dead code after themselves, etc. They will implement an inefficient, bloated, brittle construction over 1000 lines of code and it's up to you to be like "umm couldn't you just do this instead?" and they will be like "of course!" and immediately cut it down to 100 lines. They still sometimes change/remove comments and code they don't like or don't sufficiently understand as side effects, even if it is orthogonal to the task at hand. All of this happens despite a few simple attempts to fix it via instructions in CLAUDE.md. Despite all these issues, it is still a net huge improvement and it's very difficult to imagine going back to manual coding. TLDR: everyone has their developing flow; my current one is a small few CC sessions on the left in ghostty windows/tabs and an IDE on the right for viewing the code + manual edits.

Tenacity. It's so interesting to watch an agent relentlessly work at something. They never get tired, they never get demoralized, they just keep going and trying things where a person would have given up long ago to fight another day. It's a "feel the AGI" moment to watch it struggle with something for a long time just to come out victorious 30 minutes later. You realize that stamina is a core bottleneck to work and that with LLMs in hand it has been dramatically increased.

Speedups. It's not clear how to measure the "speedup" of LLM assistance. Certainly I feel net way faster at what I was going to do, but the main effect is that I do a lot more than I was going to do, because 1) I can code up all kinds of things that just wouldn't have been worth coding before and 2) I can approach code that I couldn't work on before because of knowledge/skill issues. So certainly it's a speedup, but it's possibly a lot more an expansion.

Leverage. LLMs are exceptionally good at looping until they meet specific goals, and this is where most of the "feel the AGI" magic is to be found. Don't tell it what to do, give it success criteria and watch it go. Get it to write tests first and then pass them. Put it in the loop with a browser MCP. Write the naive algorithm that is very likely correct first, then ask it to optimize it while preserving correctness. Change your approach from imperative to declarative to get the agents looping longer and gain leverage.

Fun. I didn't anticipate that with agents programming feels *more* fun, because a lot of the fill-in-the-blanks drudgery is removed and what remains is the creative part. I also feel less blocked/stuck (which is not fun), and I experience a lot more courage because there's almost always a way to work hand in hand with it to make some positive progress. I have seen the opposite sentiment from other people too; LLM coding will split up engineers based on those who primarily liked coding and those who primarily liked building.

Atrophy. I've already noticed that my ability to write code manually is slowly starting to atrophy. Generation (writing code) and discrimination (reading code) are different capabilities in the brain. Largely due to all the little mostly syntactic details involved in programming, you can review code just fine even if you struggle to write it.

Slopacolypse. I am bracing for 2026 as the year of the slopacolypse across all of github, substack, arxiv, X/instagram, and generally all digital media. We're also going to see a lot more AI hype productivity theater (is that even possible?), on the side of actual, real improvements.

Questions. A few of the questions on my mind:
- What happens to the "10X engineer", the ratio of productivity between the mean and the max engineer? It's quite possible that this grows *a lot*.
- Armed with LLMs, do generalists increasingly outperform specialists? LLMs are a lot better at fill in the blanks (the micro) than grand strategy (the macro).
- What does LLM coding feel like in the future? Is it like playing StarCraft? Playing Factorio? Playing music?
- How much of society is bottlenecked by digital knowledge work?

TLDR: Where does this leave us? LLM agent capabilities (Claude & Codex especially) have crossed some kind of threshold of coherence around December 2025 and caused a phase shift in software engineering and closely related fields. The intelligence part suddenly feels quite a bit ahead of all the rest of it: integrations (tools, knowledge), the necessity for new organizational workflows, processes, diffusion more generally. 2026 is going to be a high energy year as the industry metabolizes the new capability.
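[Editor's note: the "naive algorithm first, then optimize while preserving correctness" loop from the Leverage paragraph is easy to make concrete: keep the obviously-correct version as an oracle and let the agent iterate until a property check passes. A hypothetical sketch follows; the example problem and names are illustrative, not Karpathy's.]

```python
# "Write the naive algorithm that is very likely correct first, then ask
# it to optimize it while preserving correctness": the naive version acts
# as an oracle the optimized candidate must match. Example problem is
# illustrative (maximum non-empty subarray sum).
import random

def max_subarray_naive(xs):
    """Obviously-correct O(n^2) reference: try every subarray."""
    return max(sum(xs[i:j]) for i in range(len(xs))
               for j in range(i + 1, len(xs) + 1))

def max_subarray_fast(xs):
    """Candidate O(n) version (Kadane) the agent would iterate on."""
    best = cur = xs[0]
    for x in xs[1:]:
        cur = max(x, cur + x)
        best = max(best, cur)
    return best

# The success criterion you hand the agent instead of instructions:
# agree with the oracle on many random inputs, then it can loop on the
# fast version until this passes.
for _ in range(1000):
    xs = [random.randint(-10, 10) for _ in range(random.randint(1, 30))]
    assert max_subarray_fast(xs) == max_subarray_naive(xs), xs
print("fast version matches naive oracle on 1000 random inputs")
```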
1.6K replies · 5.5K reposts · 40.1K likes · 7.7M views
Tropical Dog retweeted
⭕ Brock Pierson@brockpierson·
How is this bottle of mustard any different than you at this moment?
835 replies · 988 reposts · 9.7K likes · 4.2M views
Tropical Dog retweeted
Orbit Labs@orbit__labs·
Hello #LUNC Community 🚀

Following the successful rebel-2 testnet upgrade to Cosmos SDK v0.53.x and three weeks of stable operation and active testing, we're excited to share that a community spend proposal for the next phase of the Terra Classic SDK v0.53 upgrade will be submitted. We are moving forward to phase 2, maintaining, testing, and coordinating all necessary work to ensure a smooth and successful mainnet upgrade.

station.terraclassic.community/proposal/colum…

We look forward to your vote and support to recognize and sustain the development efforts for Terra Classic 📷🔥
Orbit Labs@orbit__labs

🧪 Terra Classic Testnet Update — SDK v0.53

The rebel-2 testnet has been running Cosmos SDK v0.53.x for 3 weeks. Core chain services are stable, and the network is still under active testing. This upgrade moves Terra Classic onto the latest Cosmos SDK (v0.53.4) and IBC v2 (IBC-go v10.3.0).

A notable ecosystem change is an upstream SDK behavior update affecting tx logs. Developer guidance and compatibility notes are published below.

📄 Testnet results & dev guidance: hackmd.io/@orbitlabs/SJuJqIJ8We

— Orbit Labs | Terra Classic Devs

12 replies · 50 reposts · 190 likes · 37.9K views