Gaurav Ramesh

2.1K posts


@outofdesk

Reflections and writings @ https://t.co/f5J9701yAb

Sunnyvale, CA · Joined October 2008
598 Following · 205 Followers
Pinned Tweet
Gaurav Ramesh@outofdesk·
We have many names to describe personal tools. Malleable software speaks to how it behaves. Home-cooked software speaks to who makes it. I wanted a name that speaks to something deeper: how it feels. Perfect Software. I mean “perfect” in the way I mean a “perfect coffee”.
Gaurav Ramesh@outofdesk·
My thoughts on "Your harness, your memory" from the LangChain blog.

The post argues that memory is a critical part of the harness, so anyone selling a harness without memory baked in is creating a lock-in that's not visible just yet. It works today because memory is not yet widely well understood and most current interactions with LLMs are stateless. But as agent personality, personalization, and long-term memory become more important, people and organizations need to own their memory and own their harness. The dominant players (Anthropic, Google, OpenAI) will want to own the memory and the harness; that's what you get from features like Claude Managed Agents. Not only is the model a black box, but so is the entire persisted state that makes the model useful.

It reminds me of the same problem at the semantic layer: most cloud data warehouses and BI tools have had their own semantic layers, which is what makes analytics tick. Vendors want it to live on their stack; LookML is a good example and is the most attractive layer of Looker. OSI, the Open Semantic Interchange, is looking to change that, so you can take your semantic layer to any warehouse or BI vendor you want to use. At least, that's the promise.

But memory and the agent harness are a tighter form of lock-in than the semantic layer. The semantic layer was largely static: defined by humans, updated occasionally. Memory is deeply dynamic. It's seeded by you, but it takes on a life of its own over time.

Deep Agents is LangChain's answer: an open-source agent harness that works with other open-source projects like LangChain and LangGraph. It's "model-agnostic," which is better than the closed ecosystems of the bigger players, but it's not as open as the author makes it sound. Deep Agents is built on the Lang* (Chain/Graph) stack, which, although open source, is all owned by the same company.

It's not a truly interoperable solution; it's an emergent moat, where the lock-in forms organically rather than by plan, as agent performance becomes increasingly tied to whoever controls the "frameworks, runtimes, and harnesses," as Harrison himself distinguishes their offerings.

How it'll likely play out: you use a model provider with Deep Agents. You want to switch models tomorrow; you can keep the harness and swap the model. Good. But your agent harness is only as good as LangChain and LangGraph, which define the primitives and the persistence/memory layer respectively. Memory also encompasses the logs generated from agent behavior, making it dependent on LangSmith, LangChain's commercial observability product. Over time, the harness works, or works better, only with the Lang ecosystem, which creates harness lock-in. You can switch model providers, but you can't switch your harness ecosystem. How the agent summarizes and compacts information, what it remembers or discards: all of it is at the mercy of the Lang stack.

Although it always comes with the promise of self-hosting and customization for your needs, most organizations will not or cannot do it. This has always been true for critical infrastructure, but it's especially true in the LLM ecosystem given the novelty, the limited understanding most organizations have of how agents work under the hood, and the speed at which this space is evolving. Managed solutions are likely the endgame. So the play here seems to be: start from open source (rather than closed source, like the dominant players) to gain market share, then convert that into structural lock-in after significant customer adoption.
Gaurav Ramesh retweeted
Daily Loud@DailyLoud·
NEW WORLD RECORD: 18-year-old sprint phenom Gout Gout has clocked a stunning 19.67 in the 200m, surpassing Usain Bolt’s legendary mark.
Gaurav Ramesh@outofdesk·
When talking to Gemini, I seem to be hitting a lot of nails on the head.
Gaurav Ramesh@outofdesk·
When I think about my writing as something that I'd like my kids to read when they grow up, suddenly the stakes become much higher. The topics I write about, what I want to say, how I write it, how much time I spend on each - the whole mindset changes. Surprisingly, it also helps shift focus from thinking/worrying about what works on the Internet to what, if anything meaningful, I have to say.
Gaurav Ramesh@outofdesk·
As I interact more with Claude, I'm learning a few phrases in English that are good, novel to me, and sound deep, but have an AI smell. "Worth sitting with" is one of those: "That insight is worth sitting with" .. "That tension is worth sitting with." I hadn't encountered them much before LLMs, so their prevalence in my conversations now probably speaks to the kinds of topics I read and learn about now, the ones that have gotten easier to explore with LLMs: philosophy, sociology, anthropology, neuroscience, biology, psychology, and research papers.
Gaurav Ramesh@outofdesk·
@itsolelehmann Obsidian just makes it easy to work with markdown files. At the end of the day, you need files over app, as @kepano calls it, and @obsdmd is an embodiment of that philosophy.
Ole Lehmann@itsolelehmann·
why would I use obsidian when I can just use claude code for the knowledge base? what's the advantage?
Gaurav Ramesh@outofdesk·
@stevemagness When you say most adults can't sprint, what do you mean exactly? I'm sure they're trying to run as fast as they can, even if they're relatively slow, so what exactly makes it a sprint? Genuinely want to know.
Gaurav Ramesh@outofdesk·
I'd be curious to see the downloads of Obsidian before and after this tweet!
Andrej Karpathy@karpathy

LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/ and backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images locally so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I use directly (in a web UI), but more often I want to hand it off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it is viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.
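The ingest step described above (documents in raw/, an auto-maintained index in the wiki) can be sketched in a few lines. This is an illustrative sketch, not Karpathy's actual tooling: only the raw/-directory-to-markdown-index shape comes from the tweet, and `summarize` here is a placeholder stub where a real setup would call an LLM.

```python
import pathlib
import textwrap


def summarize(text: str) -> str:
    """Stand-in for the LLM summary step: just shortens the first line.
    In the workflow above, an LLM would write a real summary instead."""
    stripped = text.strip()
    first_line = stripped.splitlines()[0] if stripped else ""
    return textwrap.shorten(first_line, width=80, placeholder="…")


def compile_index(raw_dir: pathlib.Path, wiki_dir: pathlib.Path) -> pathlib.Path:
    """"Compile" an index.md for the wiki: one summary line per source
    document in raw/, each linking back to the raw file."""
    wiki_dir.mkdir(parents=True, exist_ok=True)
    lines = ["# Index", ""]
    for doc in sorted(raw_dir.glob("*.md")):
        summary = summarize(doc.read_text())
        lines.append(f"- [{doc.stem}]({doc.as_posix()}): {summary}")
    index_path = wiki_dir / "index.md"
    index_path.write_text("\n".join(lines) + "\n")
    return index_path
```

Rerunning this after each ingest keeps the index current, which is what lets an agent answer questions over the wiki without a RAG pipeline at this ~small scale, as the tweet notes.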

Gaurav Ramesh@outofdesk·
@vasantshetty81 The mass layoffs are because of AI, yes, but not because of the capabilities of the models, or because AI can do what those people could. It's more about freeing up budget to spend on AI.
Vasant Shetty | Building Mundhe Banni
Oracle in India laid off 12,000 people in one go, and I was just thinking, what’s happening? Is it all because of AI? Mostly, yes! If you look at the rate at which AI is progressing and making strides every few weeks, its capabilities are definitely becoming 2x or 3x every three months. It’s compounding at a rapid pace. And that just shows that a lot of business processes, even legacy ones, are going to be affected by AI in one way or another. More likely, we will see even complex workflows in Fortune 500 companies being handled by AI. There is a high chance that almost every tech job will be impacted.

The only way to handle this is for professionals in tech to quickly ramp up, develop new skills, and use their domain knowledge to become very different individuals. Agency may not always be cultivated easily, but when circumstances force change, people do adapt. This feels like an inflection point. People who believed their domain knowledge made them irreplaceable should realize that AI is going to come after them as well. It is better to prepare, harness AI, and ride this wave as long as it lasts.

Also, in a few years, businesses that never had any tech exposure may come under the tech radar big time. It is now much easier to build for every kind of business, even those with smaller TAMs. No matter the size, something can be built, and efficiencies can be gained. This means a lot more businesses will become digital first, AI first, and that is where the opportunity lies.

People who are affected should think about switching domains if needed, learning new skills, and unlearning things that built their careers so far. If not, they should consider alternate paths where human effort will still be relevant for the next 10 to 15 years. Otherwise, there is no way out. What I see is that large companies that employed thousands in India will continue to shrink their workforce. This is not going to reverse. It will only continue. Better to be prepared for it.
Gaurav Ramesh@outofdesk·
@Gregorein How else in the future might one found a company backed by YC that optimizes websites?
Sawyer Hood@sawyerhood·
- interesting thought experiment:
- have an agent build an exhaustive test suite against it
- delete the source and have a different agent reimplement it
- this would put you in a much more legally grey area; it's closer to a clean-room implementation
Gaurav Ramesh@outofdesk·
@ruchirkanakia If by successful you mean earning in dollars to send money back to India?
Ruchir Kanakia - OneAssure@ruchirkanakia·
You have to be really smart to be successful in India. You can be average and succeed in the US.
Gaurav Ramesh@outofdesk·
@akothari @KotakBankLtd I can confirm it's the same with @ICICIBank, unfortunately! I've been going back and forth with them for the last 3-4 months to secure a loan, and eventually gave up.
Akshay Kothari@akothari·
PSA: If you’re an Indian living overseas, think twice before opening a @KotakBankLtd account. If you already have one, it may be worth considering alternatives. Speaking from my own experience over the past two years, even basic account changes have required excessive physical paperwork. Each round seems to uncover “one missing document” that was never mentioned earlier. Moving funds or closing the account has been equally difficult, often attributed to “RBI regulations.” Escalations typically bring in more managers, but not much progress beyond initial apologies. I’ve been a customer for a decade and have long respected @udaykotak as an entrepreneur, which makes this especially disappointing. I hope the bank can course correct and return to a higher standard of customer experience.
Gaurav Ramesh@outofdesk·
"We weren't evolutionarily built to read or write" is not the gotcha people think it is. If we only did what evolution designed us for, we'd be a very different species in a very different world. The case for visual thinking doesn't need that argument.
Gaurav Ramesh@outofdesk·
I love visuals for thinking and communication too. I wrote about it here outofdesk.blog/thinking-outsi…. But I don't buy the usual arguments for why. "Thinking is faster than writing, so use low-latency mediums." Okay, but the friction IS the point. Slow is where the learning happens. "We weren't evolutionarily built to read or write." So what? We weren't built to fly planes either.
Gaurav Ramesh@outofdesk·
Another day, another essay about "speed as a virtue." The gist is essentially this: X is slower. Y is faster. Hence, Y is better. I'm increasingly seeing two camps emerge in the age of AI (along this one dimension of speed): one that favors friction, thoughtfulness, clarity, and stability, arguing they are "features, not bugs," and another that favors speed above all else. Entrepreneurs and salespeople fall in the latter bucket.
Grant Lee@thisisgrantlee

x.com/i/article/2032…

Gaurav Ramesh@outofdesk·
One of the benefits of reading a lot on Twitter: it acts as a good "training ground" to develop a taste for good writing and good arguments. You know it when you see it/feel it. Then you study it and pattern match to tell good from bad.
Gaurav Ramesh@outofdesk·
Another day, another essay about "speed as a virtue." The gist is essentially this: X is slower. Y is faster. Hence, Y is better. I'm increasingly seeing two camps emerge in the age of AI (along this one dimension of speed): one that favors friction, thoughtfulness, clarity, and stability, arguing they are "features, not bugs," and another that favors speed above all else. Entrepreneurs fall in the latter bucket. x.com/thisisgrantlee…
Gaurav Ramesh@outofdesk·
In one of the @dwarkesh_sp podcasts (I think it was the one with Sutton), he mentioned he used Gemini Deep Research to learn all about the history of RL. Although it was an ad and he was paid to say that, if it's true, he'd do much better with Claude's deep research! I thought Gemini was good until I tried Claude. Haven't gone back since!