Krishang
@0ddmonger

granular thoughts and right of first refusal

235 posts
Joined November 2022
62 Following · 9 Followers
Krishang@0ddmonger·
@HoffPeterA @0xDesigner Thank you. The last time I used SparkNotes was for Merchant of Venice in grade school. Not huge on fiction, so I started with a lot of non-fiction, primarily Finance/Business. Now it's branched out to Substack articles, reports, X articles, and at least 1-2 podcasts a day.
Replies 0 · Reposts 0 · Likes 0 · Views 73

Deny Delay Depose@HoffPeterA·
@0ddmonger @0xDesigner You seem like the kind of guy who brags about skimming SparkNotes instead of reading books. (That's not a compliment)
Replies 1 · Reposts 0 · Likes 1 · Views 82

0xDesigner@0xDesigner·
i read the steve jobs biography like over a decade ago. i hardly remember much about the book but there was one part where old steve is on vacation in istanbul and a tour guide is explaining the history of turkish coffee and steve interrupts him with “why would anyone care about that?” and i think about that every time i read a viral ai post like this.
Andrej Karpathy@karpathy

LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images to local so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I use directly (in a web ui), but more often I want to hand it off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.

Replies 91 · Reposts 174 · Likes 6.2K · Views 939.4K
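The ingest-and-compile step described in the quoted post can be sketched in a few lines. Everything here is a hypothetical illustration: `compile_index` and `summarize` are made-up names, and the `summarize` stub stands in for the LLM call that would write real summaries.

```python
from pathlib import Path

def summarize(text: str, limit: int = 200) -> str:
    """Stub for an LLM summarization call: here, just the first non-empty line."""
    stripped = text.strip()
    first = stripped.splitlines()[0] if stripped else ""
    return first[:limit]

def compile_index(raw_dir: str = "raw", wiki_dir: str = "wiki") -> str:
    """Build wiki/index.md: one backlinked entry per raw document, with a
    short summary, mirroring the incremental 'compile' step of the workflow."""
    raw, wiki = Path(raw_dir), Path(wiki_dir)
    wiki.mkdir(parents=True, exist_ok=True)
    lines = ["# Index", ""]
    for doc in sorted(raw.glob("*.md")):
        text = doc.read_text(encoding="utf-8")
        # [[stem]] is Obsidian-style wiki-link syntax, so the index stays
        # navigable inside the Obsidian "frontend" the post describes.
        lines.append(f"- [[{doc.stem}]]: {summarize(text)}")
    index = wiki / "index.md"
    index.write_text("\n".join(lines) + "\n", encoding="utf-8")
    return str(index)
```

In the real workflow the LLM would rewrite these summaries and grow concept articles around them; this sketch only shows why a plain index file can substitute for fancy RAG at small scale.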
Krishang@0ddmonger·
@MKBHD Apple has really gone above and beyond over the years. An iPhone and a Mac [base versions] at around $1299 is more than enough for any creative to start their base portfolio. Now that's technology alright!
Replies 0 · Reposts 0 · Likes 3 · Views 4.4K

Marques Brownlee@MKBHD·
I also can't stop thinking about how this might be the greatest missed opportunity in marketing history if Apple doesn't have a billboard of these saying "Shot on iPhone" lol NASA astronauts have been allowed to use their phones in space, and Commander Reid Wiseman and Mission Specialist Christina Koch uploaded these photos shot on an iPhone 17 Pro Max SELFIE camera
[tweet media: photos]
Replies 776 · Reposts 1.6K · Likes 40.2K · Views 1.8M

Krishang@0ddmonger·
@punk9059 Mobile definitely matters more. Even the raw numbers support your point, but sadly the graph is unusable on mobile. Testing some fixes in beta.
Replies 1 · Reposts 0 · Likes 2 · Views 22

Stats@punk9059·
@0ddmonger Cool. Smart how you push people to desktop (though I think mobile matters more)
Replies 1 · Reposts 0 · Likes 1 · Views 36

Stats@punk9059·
What’s the coolest thing you’ve vibe-coded that I can check out?
Replies 118 · Reposts 0 · Likes 81 · Views 15.6K

Krishang@0ddmonger·
Regardless of all the chitter-chatter today about AI's capabilities and the velocity with which everything's moving now, here's a simple, maybe obvious, but important guiding principle [at least for me]: individuals who have a high signal-to-noise ratio will be sought after, regardless of the domain. Combine that with the willpower to execute, and well, that's the road to the light at the end of the tunnel.
Replies 0 · Reposts 0 · Likes 0 · Views 40

albina@enjojoyy·
So how do you stay ambitious but also internally zen
Replies 282 · Reposts 23 · Likes 661 · Views 53.6K

Krishang@0ddmonger·
@0xDesigner Solve for the recall issue + use your own quirky version of Sheldon (from The Big Bang Theory) to push random facts every few hours. Controlled input from you (only content you've read) -> quirky insights to help you remember in the long run.
Replies 0 · Reposts 0 · Likes 2 · Views 7.7K
Krishang@0ddmonger·
@paulg Build Sheldon, my agent with a SOUL exactly like him and a second brain (RAG + knowledge base) for this. I loved The Big Bang Theory, hence the reference. Still a private repo, sadly. Working on it as we speak.
Replies 0 · Reposts 0 · Likes 0 · Views 268

Paul Graham@paulg·
I know it's meant to seem an annoying habit, but I'd like it if I had a friend who was constantly teaching me random facts like Sheldon Cooper does.
Replies 295 · Reposts 159 · Likes 2.5K · Views 133.4K

Krishang@0ddmonger·
I totally see the xAI prediction coming true. It already exists to an extent, and that's just how it has been. CoWork dreams to be Openclaw, but Openclaw is obviously still experimental and breaks pretty often, so I'm assuming CoWork will ship Openclaw-esque features in a product that's more usable for the average Joe [prioritizing usability over security].
Replies 0 · Reposts 0 · Likes 1 · Views 687

Farzad 🇺🇸 🇮🇷
My prediction of what's going to happen to AI Agents by the end of 2026:

- Anthropic will lock everything down under its own platform and come out with their own version of OpenClaw that will do 90% of what current OpenClaw does, but with a lot of security layers and much less true automation. It'll still be VERY capable, but will require quite a bit of baby-sitting to do truly autonomous work.

- OpenAI will come out with a model SPECIFICALLY for OpenClaw orchestration & tool execution that will make OpenClaw EXTREMELY capable at doing long-running tasks on a computer.

- xAI will come out with its own agentic solution, but it'll be proprietary to the Musk ecosystem - will only run on Tesla inference chips to start, but will eventually branch out into local inference machines tapping into Tesla's chip supply chain. People will basically be able to buy an AI4/AI5+ "computer" to plop in their homes, and that "computer" will run xAI's hyper-optimized agentic Grok that will do inference locally for as much as possible, and ping the cloud for super complex tasks. Grok 5 should make this obvious.

It's gonna be WILD.
Replies 118 · Reposts 53 · Likes 807 · Views 72.1K

Krishang@0ddmonger·
@rahulgs When you put things in this perspective, it makes so much more sense. Why would you allow access to third party clients when an interface was meant for your products only…
Replies 0 · Reposts 0 · Likes 0 · Views 896

rahul@rahulgs·
this whole claude banning openclaw/opencode debacle is only happening because claude code is a consumer of the actual public anthropic api, which is extremely unusual for a software application. it is a totally reasonable stance that claude code can be used with a subscription (private api) but other harnesses need to be paid for by the token (public api). if someone tried to build their own frontend to the ramp private backend we'd ban them too
Replies 63 · Reposts 22 · Likes 1K · Views 96.1K

Krishang@0ddmonger·
@kes11av so VCs and founders are like Detectives/Cops and Lawyers? Interesting.
Replies 0 · Reposts 0 · Likes 0 · Views 828

k@kes11av·
had the misfortune to sit in on a vc meeting today for my friend's startup. tier 1 indian vc. i can't describe how clueless these guys are. also don't understand what experience they have running companies to be advising founders on what to do. i looked them up; most of them are mba grads that have worked all their life in vc. you're a service provider allocating your lp's capital, can you do just that and shut the fuck up. no wonder the founders that know what they're doing are raising directly from the source for mid 8 - 9 fig+ rounds
Replies 25 · Reposts 8 · Likes 362 · Views 42K

Krishang@0ddmonger·
@thedankoe I think the winner in the next 3-5 years will be the individual who can grasp and process information the fastest, and understand its implications across different verticals.
Replies 0 · Reposts 0 · Likes 2 · Views 248

DAN KOE@thedankoe·
You as a single person have more power today than a 20 person company of the past. That's insane. The internet gave you the ability to learn anything. Social media gave you the leverage to reach anyone. AI is giving you the ability to create almost anything. Please don't waste it
Replies 558 · Reposts 1.2K · Likes 9.7K · Views 238.9K

Krishang@0ddmonger·
I spent a couple of hours today trying to assess the feasibility of open-source models running locally. At this point in time, it's extremely unrealistic to be running the Gemma 4 models on the higher parameter end. Plus, fine-tuning or deploying them is no easy task [for the long run]. The reason most consumer-facing products are so easily plug-and-play is because not a single soul will invest time into making things work [regardless of whether they're a free/paying user]. Now on the developer/tech-savvy, forward-looking side of things, I'd say compute costs [a Mac mini or a super-optimized, mass-produced computer for AI] might come down, but running these models will still need tinkering time and energy [regardless of costs].
Replies 0 · Reposts 0 · Likes 0 · Views 13
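The feasibility question above is mostly a memory budget: weights at a given quantization, plus room for the KV cache and activations. A back-of-the-envelope estimator (the 20% overhead factor and the function name are assumptions for illustration, not a benchmark):

```python
def est_memory_gb(params_b: float, bits_per_weight: int, overhead: float = 1.2) -> float:
    """Rough RAM/VRAM estimate for local inference:
    parameters * (bits per weight / 8) bytes, plus ~20% assumed overhead
    for KV cache and activations. A heuristic, not a measurement."""
    weight_bytes = params_b * 1e9 * bits_per_weight / 8
    return round(weight_bytes * overhead / 1e9, 1)
```

For example, a 27B-parameter model at 4-bit quantization lands around 16 GB, and a 7B model at full 16-bit precision needs nearly as much, which is why the higher-parameter open models stay out of reach for typical consumer hardware.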
André Arslanian@andrearslanian·
@0ddmonger do you think most people will stick to free models / open source models? and people that can afford it will get the best?
Replies 1 · Reposts 0 · Likes 0 · Views 32

André Arslanian@andrearslanian·
I’m wondering at what price do people stop happily paying for Claude or ChatGPT? $500/mo $1,000/mo $5,000/mo
Replies 3 · Reposts 0 · Likes 8 · Views 705

Krishang@0ddmonger·
@thedankoe Every generation says that the next generation has it easier. I beg to differ though. Yes, it's easier, but there are so many ancillary factors that have made living, in general, harder. That's no reason to stop working though. Time to go all in!
Replies 0 · Reposts 0 · Likes 2 · Views 144

Krishang@0ddmonger·
Social interactions are honestly key to living in general. You need that night out with your gang. You need that random gym conversation about Drive to Survive. You need that conversation at the grocery store about food costs. You need that one sport you look forward to, meeting new people and enjoying the game together.
Replies 0 · Reposts 0 · Likes 0 · Views 86

ted@tednotlasso·
fascinating how many suggestions to this are solitary or even anti-social in nature. to me, a large part of mental fitness is social cognition: the ability to read the room, spark conversation, make new friends, flirt, etc. arguably as demanding on the brain as chess or a book
ted@tednotlasso

is there a mental equivalent of gyms but for brains? will this become a thing? do you think it will take form in writing clubs or a revival of Gilded Age social clubs, but all with strict no-phone policies (to avoid outsourcing any thinking)? something else?

Replies 17 · Reposts 5 · Likes 77 · Views 6.3K

Krishang@0ddmonger·
@Codie_Sanchez I think about this a lot. Sure, the first 5-10 days will feel good, but what would you do after that? Watch shows? Go places [all of which needs a ton of cash]? Then what? Why not work at, or on, something that you enjoy/love.
Replies 1 · Reposts 0 · Likes 1 · Views 568

Krishang@0ddmonger·
@LeilaHormozi Wrote about something similar in my Social Constructs article. People seek validation, and those who are inherently people pleasers are so much worse off in this case.
Replies 0 · Reposts 0 · Likes 0 · Views 30

Leila Hormozi@LeilaHormozi·
Too many people try to appease others before considering: Are these even people I want to be liked by?
Replies 103 · Reposts 39 · Likes 469 · Views 9.7K

Krishang@0ddmonger·
@irabukht Sounds good. Thanks for being a good sport and I am sorry if it felt like I was badgering you. I was just trying to reel in the realism.
Replies 0 · Reposts 0 · Likes 0 · Views 73

Ira Bodnar@irabukht·
the state of b2b saas in 2026: it's faster to vibe code your own tool than to onboard onto one that already exists. today i tried to set up a customer support saas for my startup — we're at the point where we need to reply to 100+ client messages a day. the onboarding was so insufferable (3 hours of manual integration or hop on a demo call) that it was easier to just vibe code the whole thing than actually use their product
Replies 26 · Reposts 1 · Likes 47 · Views 5.2K