Atharva Ajit Patkhedkar

2K posts

@atharva_ajit

i find the gap between where b2b work actually happens and where it gets tracked. then i build the thing that closes it. electronics @bitspilani

bengaluru · Joined June 2020
564 Following · 291 Followers
Atharva Ajit Patkhedkar (@atharva_ajit)
for claude pro subscribers: what do y'all do when your session limit is exhausted? personally, i've started writing by hand. just read and write.
0 replies · 0 reposts · 1 like · 20 views
Priyanshu Shukla (@damnsirius)
Finally left my high-paying job after pondering it for a while. Excited to see what's coming up next!
14 replies · 0 reposts · 40 likes · 1.5K views
Atharva Ajit Patkhedkar (@atharva_ajit)
every time I eat vidhyaarthi bhawan dosa I thank my mom for making me.
0 replies · 0 reposts · 1 like · 32 views
Liam (@iamliamsheridan)
cold outbound is dead.
the version that's dead: spray, pray, 7-step sequences, generic personalisation.
the version that prints $25M of pipeline a year for our clients: 100% TAM coverage, 45-day cycle, 5-minute reply window.
we wrote the full motion playbook. like + comment 'MOTION' and i'll DM you. (must be following)
240 replies · 2 reposts · 221 likes · 15.6K views
Ishan Vyas (@theishanvyas)
Hiring an AI-native developer intern (PMs who can vibe code are welcome).
• 90-day paid internship (might convert to full time)
• Build for global scale
• No token limits; push real boundaries
Looking for builders who ship fast and think in systems. Drop your GitHub or live project URLs in the replies and I'll DM you.
534 replies · 11 reposts · 560 likes · 39.6K views
Vatsal Sanghvi (@vatsal_sanghvi)
good speakeasies in blr?
7 replies · 1 repost · 12 likes · 2.4K views
yjxyz (@yjxyz101)
planning a goa trip for the long weekend, fully aware that the WHOLE of del bom blr will be there. fml
5 replies · 0 reposts · 48 likes · 1.8K views
Atharva Ajit Patkhedkar (@atharva_ajit)
for the last 5 days no one has talked to me about AI news, and I suddenly feel lighter, more energised to go into the black hole once again after the break.
0 replies · 0 reposts · 2 likes · 29 views
Sick (@sickdotdev)
Drop what you're building. Last time 50k people saw it. Consider this marketing.
632 replies · 4 reposts · 298 likes · 28K views
Atharva Ajit Patkhedkar (@atharva_ajit)
@shilpiagrawal55 it is, most of the time. from now on it will be about how you can skip the queue and catch attention. but domain specificity is still going to be a hindrance
0 replies · 0 reposts · 1 like · 25 views
Shilpi Agrawal (@shilpiagrawal55)
@atharva_ajit 💯 i find it so weird cuz I thought it would be the opposite because of AI advancements. Everyone can suddenly become an expert in 7 days of dedicated prompting, but it looks like it's not so. You get hired in finance only if you've done finance, hired at Uber only if you're at Rapido😅
1 reply · 0 reposts · 0 likes · 49 views
Shilpi Agrawal (@shilpiagrawal55)
There are two types of PMs: the PMs who apply for opportunities in the domain or experience they already have, AND the PMs who apply for the products they love even when they have zero experience. The first kind is strategic, the second passionate. It's the first kind's market rn.
1 reply · 0 reposts · 10 likes · 562 views
Atharva Ajit Patkhedkar (@atharva_ajit)
@GabbbarSingh “By the time you succeed, it doesn’t mean much” is such an L take. long-term games provide sustained happiness. are expensive cars and homes success to you? that's definitely an achievement, but not the goal; wealth can be a goal.
0 replies · 0 reposts · 0 likes · 131 views
Gabbar (@GabbbarSingh)
The only young people I see who are buying expensive cars, homes, etc. and dedicating them to their erstwhile poor parents are Instagram influencers & IPL cricketers. Everyone else is dealing with delayed gratification. By the time you succeed, it doesn’t mean much.
63 replies · 160 reposts · 2.9K likes · 109.7K views
Atharva Ajit Patkhedkar (@atharva_ajit)
@kevarmstech So true. I update my interview notes every time, and it's building a sense of where I often go wrong. Concepts are evolving too fast; we went from RAG to Context Graphs in far less time than earlier shifts took.
0 replies · 0 reposts · 0 likes · 8 views
Kevin Armstrong (@kevarmstech)
@atharva_ajit For AI PM prep, a KB beats every static guide on the market. The rubric shifts too fast — anything written six months ago misses half the bar. The real edge is a notes layer you re-query after every mock interview.
1 reply · 0 reposts · 1 like · 7 views
Atharva Ajit Patkhedkar (@atharva_ajit)
Karpathy described building a personal KB with LLMs as "room for an incredible product." I built one in a single session for AI PM interview prep. Zero installs. Zero code. Just Claude + Obsidian. The wiki knows who I am; that's what makes it special. Ask it a RAG question: it answers using my actual projects, my failure modes, my target companies.
Andrej Karpathy (@karpathy)

LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images to local so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I use directly (in a web ui), but more often I want to hand it off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.
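The "compile" step described above can be sketched deterministically. This is a minimal illustration under my own assumptions (a raw/ directory of .md notes, a wiki/index.md output, and first-non-empty-line-as-summary as the heuristic); it is not Karpathy's actual tooling, in whose workflow the LLM agent itself writes and maintains the index:

```python
# Minimal sketch: build wiki/index.md linking every note in raw/ with a
# one-line summary. Directory names and the summary heuristic are assumptions.
from pathlib import Path

def first_line(path: Path) -> str:
    """Use the first non-empty line of a note as its summary."""
    for line in path.read_text(encoding="utf-8").splitlines():
        if line.strip():
            return line.strip().lstrip("# ")
    return "(empty)"

def build_index(raw_dir: str = "raw", wiki_dir: str = "wiki") -> str:
    """Write an Obsidian-style [[wikilink]] index over all raw notes."""
    out = ["# Index", ""]
    for note in sorted(Path(raw_dir).glob("**/*.md")):
        out.append(f"- [[{note.stem}]]: {first_line(note)}")
    Path(wiki_dir).mkdir(exist_ok=True)
    index = Path(wiki_dir) / "index.md"
    index.write_text("\n".join(out) + "\n", encoding="utf-8")
    return str(index)
```

A script like this is only a baseline the agent could regenerate or extend; the post's point is that the LLM handles categorization, backlinks, and article writing on top of such an index.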

1 reply · 0 reposts · 0 likes · 185 views
Atharva Ajit Patkhedkar (@atharva_ajit)
we should remember one of the most influential men of post-independence India today.
0 replies · 0 reposts · 2 likes · 37 views
Atharva Ajit Patkhedkar (@atharva_ajit)
Built one for AI PM interview prep yesterday. The part your gist undersells: the schema layer is where it gets personal. Mine knows my specific projects and context, which means the same question returns different answers than it would for any other PM. That's what generic RAG can't replicate.
0 replies · 0 reposts · 0 likes · 491 views
Andrej Karpathy (@karpathy)
Wow, this tweet went very viral! I wanted to share a possibly slightly improved version of the tweet in an "idea file". The idea of the idea file is that in this era of LLM agents, there is less of a point/need to share the specific code/app; you just share the idea, then the other person's agent customizes & builds it for your specific needs. So here's the idea in gist format: gist.github.com/karpathy/442a6… You can give this to your agent and it can build you your own LLM wiki and guide you on how to use it, etc. It's intentionally kept a little bit abstract/vague because there are so many directions to take this in. And ofc, people can adjust the idea or contribute their own in the Discussion, which is cool.
Quoted: Andrej Karpathy (@karpathy), "LLM Knowledge Bases" (same post quoted above).
1.1K replies · 2.8K reposts · 26.6K likes · 6.9M views
I’m just a girl (@antigalacticc)
what do you guys think would be a great book for someone who’s an economist? please drop your recs!!!
17 replies · 1 repost · 65 likes · 2.9K views