Aman Prakash

424 posts

Aman Prakash
@apdotdev

LLMs @ LCS2 | Prev. research @UofSC @bitspilaniindia @reachnitt | CS'26 | Working on AI safety

Earth · Joined May 2023
291 Following · 47 Followers
Aman Prakash retweeted
Deedy @deedydas
The Ultimate List of Artificial Intelligence "Neolabs": May 2026. A Neolab is a pre-revenue scale startup working on long-term AI breakthroughs, usually with a $1B+ valuation. There are now 63 of them!
[image]
100 replies · 212 reposts · 1.9K likes · 440.6K views
Aman Prakash @apdotdev
@AbhinavXJ Not expected from YC. We get better snacks after each and every small event here at IIT D. Still, I may not have full context, as I didn't attend.
0 replies · 0 reposts · 0 likes · 1.7K views
abhinav @AbhinavXJ
lunch @ YC event 😂 Idk, I expected a proper lunch but nvm
[two images]
119 replies · 7 reposts · 566 likes · 192.2K views
Andrej Karpathy @karpathy
Wow, this tweet went very viral! I wanted to share a possibly slightly improved version of the tweet in an "idea file". The idea of the idea file is that in this era of LLM agents, there is less of a point/need of sharing the specific code/app; you just share the idea, then the other person's agent customizes & builds it for your specific needs. So here's the idea in a gist format: gist.github.com/karpathy/442a6… You can give this to your agent and it can build you your own LLM wiki and guide you on how to use it etc. It's intentionally kept a little bit abstract/vague because there are so many directions to take this in. And ofc, people can adjust the idea or contribute their own in the Discussion, which is cool.
Andrej Karpathy @karpathy
[quoted tweet: "LLM Knowledge Bases", reproduced in full below]

1.1K replies · 2.8K reposts · 26.6K likes · 7M views
Andrej Karpathy @karpathy
LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images to local so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I use directly (in a web UI), but more often hand off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.
2.9K replies · 7.1K reposts · 58.5K likes · 21M views
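The "Extra tools" step above is easy to picture concretely. Below is a minimal Python sketch of the kind of "small and naive search engine" over a .md wiki the post describes: a term-frequency index with TF-IDF-style scoring, exposed as a CLI so an agent can call it as a tool. Everything here (the wiki path, function names, CLI shape) is an assumption for illustration, not Karpathy's actual code.

import math
import re
import sys
from collections import Counter
from pathlib import Path

def tokenize(text: str) -> list[str]:
    # Lowercase alphanumeric tokens; good enough for a naive index.
    return re.findall(r"[a-z0-9]+", text.lower())

def build_index(wiki_dir: Path) -> dict[Path, Counter]:
    # One term-frequency Counter per .md file in the wiki.
    return {p: Counter(tokenize(p.read_text(errors="ignore")))
            for p in wiki_dir.rglob("*.md")}

def search(index: dict[Path, Counter], query: str, k: int = 5):
    # Score each file by a TF-IDF-ish sum over the query terms.
    n_docs = len(index)
    terms = tokenize(query)
    idf = {t: math.log(n_docs / (1 + sum(t in tf for tf in index.values())))
           for t in terms}
    scores = {p: sum(tf[t] * idf[t] for t in terms) for p, tf in index.items()}
    return sorted(scores.items(), key=lambda kv: -kv[1])[:k]

if __name__ == "__main__":
    # Usage: python wiki_search.py path/to/wiki "query terms"
    wiki_dir, query = Path(sys.argv[1]), " ".join(sys.argv[2:])
    for path, score in search(build_index(wiki_dir), query):
        if score > 0:
            print(f"{score:8.2f}  {path}")

The CLI shape is the point: an agent can invoke one tool per query and read plain-text output, which matches the "hand off to an LLM via CLI" usage in the tweet.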
siddhant. @ignoreandfly
Happy to share that our paper on VLMs for geolocalization has been accepted to EarthVision at CVPR.
9 replies · 0 reposts · 16 likes · 679 views
Aradhye Agarwal @AradhyeAgarwal
I'm at a crossroads. I just received an invitation from Entrepreneurs First, which is essentially a ticket straight into the heart of the startup ecosystem. EF gives you 3.6 L straight up, no strings attached, and after 3 months decides whether to invest USD 125k in your startup for 8% equity. EF is essentially the next best thing to YC and helps you go from -1 to 0 in your startup journey.

But I have other options on the plate too. On one hand, I have an amazing job at MSR where I get to work on the bleeding edge of AI agents. I also have an amazing group of co-workers, which is often a hard ask these days. And perhaps most importantly, I have crazy amounts of compute available at my hands, with almost no blockers.

On the other hand, I received an offer (two, actually) from CMU for master's programs in MSCV and MSR. CMU is well known for being one of the best places to do AI research, and a lot of great researchers and even startups come from CMU.

What do you think I should do?
7 replies · 0 reposts · 20 likes · 867 views
Aritra 🤗 @ariG23498
[Hugging Face Machine Learning Club India] We have locked in our first guest for the series. Some updates:
1. The talk is scheduled for the 10th of March.
2. It is going to be online (links will be made available).
3. The set time is 2000 HRS IST.
We will have more details tomorrow (or the day after). Brownie points for guessing the guest. Let's make it interesting: the title of the talk is "Teaching Tiny Models to Prove Hard Theorems". 🤗
[image]
40 replies · 26 reposts · 570 likes · 24.2K views
Sam Altman @sama
Great meeting with PM @narendramodi today to talk about the incredible energy around AI in India. India is our fastest growing market for codex globally, up 4x in weekly users in the past 2 weeks alone. 🇮🇳!
[image]
2.1K replies · 2.4K reposts · 27.7K likes · 4.8M views
Param Thakkar @Param23072004
Got 11 huge PRs merged into PyTorch RL last week!
[image]
1 reply · 0 reposts · 3 likes · 102 views
Aman Prakash @apdotdev
@suparnojit26 Do you want to deep dive into diffusion models in general, or into specific domains such as diffusion LLMs or diffusion image generation models?
0 replies · 0 reposts · 0 likes · 12 views
Suparnojit Sarkar @suparnojit26
I want to deep dive on diffusion models. Any resources? Thanks!
14 replies · 3 reposts · 94 likes · 9.2K views
Yajat Gulati @GulatiYajat
Yesterday, Anthropic released Cowork for $200/month. Today we're launching Shadow for FREE. Download now. @Shrivardhan29
92 replies · 96 reposts · 1.3K likes · 99.2K views
Aritra 🤗 @ariG23498
MoE routing algorithm by hand. Cooking this for the upcoming blog post. You all better read the post when it is out. 🔫
[image]
11 replies · 17 reposts · 204 likes · 8.9K views
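For readers who want the gist before that blog post lands: below is a minimal sketch of standard top-k MoE routing, the general technique the tweet names. This is generic illustration code in a NumPy setting, an assumption on my part; it is not the specific algorithm from the upcoming post.

import numpy as np

def topk_route(x: np.ndarray, w_gate: np.ndarray, k: int = 2):
    # x: (tokens, d_model) activations; w_gate: (d_model, n_experts) gating weights.
    logits = x @ w_gate                               # (tokens, n_experts)
    topk = np.argsort(logits, axis=-1)[:, -k:]        # ids of the k largest logits
    chosen = np.take_along_axis(logits, topk, axis=-1)
    # Softmax over only the selected experts, so weights sum to 1 per token.
    z = np.exp(chosen - chosen.max(axis=-1, keepdims=True))
    return topk, z / z.sum(axis=-1, keepdims=True)

# Example: 4 tokens, model dim 8, 6 experts, top-2 routing.
rng = np.random.default_rng(0)
ids, weights = topk_route(rng.normal(size=(4, 8)), rng.normal(size=(8, 6)))
print(ids)      # chosen expert indices per token, shape (4, 2)
print(weights)  # mixing weights per token; each row sums to 1

Each token's output is then the weighted sum of its k experts' outputs; load-balancing losses and capacity limits, which production routers add, are omitted here for brevity.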
Michelle Fang 🌁 @michelleefang
if you're vibe coding or building over the holidays, i want to gift one of you a 6-month subscription to Claude Pro to support <3 just drop a comment below. merry christmas!
7.5K replies · 180 reposts · 9K likes · 1.1M views
Dabs🩸 @DabsMalone
I got a reply from Y Combinator. They like what I'm working on, but I need a cofounder. Any ML engineers want to work for $0 a year until we make it big? 🥲
111 replies · 7 reposts · 450 likes · 47.3K views
Sayak Paul @RisingSayak
Closed-source model providers should provide scripts to reproduce the reported benchmark numbers. I thought this was standard, but apparently not?
3 replies · 1 repost · 21 likes · 1.6K views
Pranav @pranavcmadhukar
Today we're launching Meteor v2. Our new agent ranks #1 on WebVoyager (96.5%), beating OpenAI and Gemini. We can now run for 10x longer than before, allowing us to complete more complex tasks. Go to browse.dev to download!
125 replies · 122 reposts · 1.5K likes · 126.8K views
Kirk Borne @KirkDBorne
Infographic - 40 Essential Formulas for Data Scientists
[image]
6 replies · 198 reposts · 897 likes · 31.5K views
sync. labs @synclabs_so
today, sync. is excited to announce react-1, a ten-billion-parameter masked video diffusion model built for one purpose: giving you full creative control over performances in post.
33 replies · 54 reposts · 262 likes · 27.1K views