Aarjav Jain

1.1K posts


@aarjavvikas

Joined February 2023
670 Following · 60 Followers
Aarjav Jain reposted
Chamath Palihapitiya
Key to winning: Choose to be positive and grateful. Then, just keep at it. Time is the great compounder and will do the rest. So many people just don’t have the discipline to stay positive and grateful. Then time compounds the bitterness instead.
312 replies · 1.1K reposts · 10.6K likes · 392.2K views
Aditya
Aditya@AdityaShips·
My monthly spend living in Mumbai:
- House rent: $152
- EMI: $43
- Food: $100
- WiFi: $7
- Electricity: $21
- Cost of running my SaaS: $40
- Miscellaneous: $27
Total comes to around $390 per month. Is this good, or am I overspending?
63 replies · 1 repost · 175 likes · 15K views
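The arithmetic in the tweet above checks out; a quick sketch to verify the total (category names paraphrased from the tweet):

```python
# Sketch: summing the monthly budget from the tweet above.
expenses_usd = {
    "house_rent": 152,
    "emi": 43,
    "food": 100,
    "wifi": 7,
    "electricity": 21,
    "saas_hosting": 40,
    "miscellaneous": 27,
}

total = sum(expenses_usd.values())
print(total)  # 390, matching the "around $390 per month" in the tweet
```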
Vaibhav Sisinty
Vaibhav Sisinty@VaibhavSisinty·
xAI just made it possible to be in 50 sales calls at once. With your own voice. 🤯 Voice cloning is now live inside the Grok API. 4 minutes of reading a prompt in a quiet room. That's it. Your voice now answers customer calls. Closes sales. Runs support. While you sleep. Pick "Clone a Voice" in the console. Read the script. Your clone is ready. One call. One conversation at a time. Receptionists. BDRs. Support agents. The first layer of jobs going isn't coders. It's anyone whose work happens on a phone call. The bottleneck of being one person just stopped existing. vc: @XFreeze
19 replies · 20 reposts · 249 likes · 19.8K views
Aarjav Jain reposted
Kunal Shah
Kunal Shah@kunalb11·
GLPs are modern version of an ancient hack: reduce desire, reduce suffering. Religions spent centuries teaching detachment. Science just turned it into an injection.
49 replies · 46 reposts · 556 likes · 28.7K views
Aarjav Jain reposted
Viraj Sheth
Viraj Sheth@viraj_sheth·
my personal take on the youth immersing themselves too much in spiritual lectures from gurus and babas is that it softens you too much, makes you less ambitious, makes you detest material success. when you're young you should want to build things and contribute to your family and society, not think about hanging up your boots to seek nirvana.
294 replies · 133 reposts · 1.8K likes · 159.3K views
Aarjav Jain reposted
Camus
Camus@newstart_2024·
Elon Musk saw through Instagram’s illusion and deleted it. He told Joe Rogan he caught himself taking selfies for likes and thought, “What the hell is wrong with me?” That moment made him realize the app is engineered to make everyone look hotter, richer, and happier than they actually are. The endless comparison game quietly breeds unhappiness and insecurity. It’s especially toxic for young people growing up on it. Elon’s decision to step away shows real self-awareness and discipline — something Instagram actively works against. Have you quit or heavily reduced Instagram and felt better for it? Or do you think the platform does more harm than good overall?
274 replies · 714 reposts · 6.9K likes · 858.9K views
Aarjav Jain reposted
Paras Chopra
Paras Chopra@paraschopra·
Recently someone asked me: if I were to do a software startup today, what would it be about? My immediate reaction was that I probably wouldn't do a software startup at all. Instead, I'd perhaps choose an idea with heavy operational buildup that cannot be replicated easily.
118 replies · 52 reposts · 1.5K likes · 143K views
Aarjav Jain reposted
Cecilia D'Anastasio
Cecilia D'Anastasio@cecianasta·
New: Apple and Google have directed users to dozens of nudifying apps, per new report. The apps, found by searching "undress" or "nudify," have earned $122 million and were downloaded 483 million times. w/ @byJuliaLove bloomberg.com/news/articles/…
459 replies · 558 reposts · 2.8K likes · 63.8M views
Aarjav Jain reposted
Varun Mayya
Varun Mayya@waitin4agi_·
Most people believe they are their intentions: what they plan (in the future) to do. In reality you are what you are currently doing and the future is simply a projection of that. You are action, not intention. This is sort of a life hack because then you can just spend time on improving action without worrying about intention (that usually solves itself).
49 replies · 100 reposts · 1.1K likes · 27.2K views
Alex Olim
Alex Olim@alexolim_·
After taking dozens of apps to #1 in the App Store, our team went back and analyzed over 100,000 social media posts to trace what worked. We found 34 hooks that brought in hundreds of millions of views, and more than 300 others proven to go viral... and turned all of that into templates you can steal for free. If you run a consumer app and want:
- more installs
- higher App Store rankings
- content that lands with Gen Z
Comment "HOOKS" and I'll DM you the file.
1K replies · 27 reposts · 589 likes · 89.9K views
Aarjav Jain reposted
Paras Chopra
Paras Chopra@paraschopra·
One of the highest-ROI activities you can do in your life is to deeply internalize that building good habits is a short-term investment that compounds into lifelong gains. Any new good habit requires overcoming initial friction, but techniques like habit stacking and starting small help. The trick is to realise that after a while, the habit becomes effortless. So it's just that initial dip you have to overcome. After that, everything you're trying to do becomes automatic (that's why it's called a habit). So if you've been sitting on reading, programming, exercising, dieting or anything else, know that mastering the meta-skill of habit building will probably change your life forever.
51 replies · 187 reposts · 2K likes · 62K views
Aarjav Jain reposted
HeyGen
HeyGen@HeyGen·
We solved character consistency. Forever Avatar V captures you in 15 seconds and holds your identity across every video. Change the look, outfit, and setting to create unlimited versions of you. RT + comment "AvatarV" below and I'll DM 100 credits to test it out (must follow)
1.6K replies · 1K reposts · 2.3K likes · 783.3K views
Divya Ranjan
Divya Ranjan@divyaranjan_·
"AI made you faster. Your brain didn't get bigger"

@karpathy's recent post went viral for sharing how he uses LLMs to build personal knowledge bases. Interestingly, I've been building a more holistic version of what he described for the past few weeks.

Introducing Pattrns, an AI interface crafted for deep parallel work, with a partner called Dots that just knows you and grows with you from day one.

Why? A few months back, I realised I was working with so many tools, terminals, and windows. AI had made me 10x faster, but being efficient at all times required all my focus and constant attention. AI was creating 100x more output daily than my brain could process, and the constant context switching and reorienting myself again and again was killing me. Also, why is every AI chat so linear? The entire experience of using AI was disorienting me. Another agent wasn't the solution for me; an entire interface that connected all the dots for me automatically was. So I built Pattrns.

Here's what it actually is: Pattrns is a visual environment to think and do more knowledge work with AI. It keeps you oriented at all times and uses visual threads, kind of like how our brain works (think of your prefrontal cortex externalised). Your research, your references, and your thoughts for all your different threads live side by side as context for AI.

The interface gives you one view with infinite depth. You can run parallel sessions across projects, drop a massive question in one thread, and switch to another to keep working. Focus when you want depth, expand when you want the big picture. My early version was actually an infinite canvas with chat, but using it daily became a bottleneck: infinite canvases eventually just turn into noise, especially for boards that keep evolving.

Then there's Dots, the ambient intelligence underneath it all. It learns your taste and decisions by watching your actions.
It pays attention to what you care about, what you curate, and how much you engage with things (think PageRank). Over time, it just knows you. You never have to re-explain your thinking, your taste, or your decisions ever again. It does this by auto-organizing and constantly updating your memory graph into a board ("Me") for you to look at, edit, or chat with. You are always fully aware of what it knows.

The underlying rule is simple: organization is Dots' job, but thinking and creating is yours. So every chat just feels like you're talking to someone who already gets you.

This is how it feels to use:

Day one: During onboarding, you import your past AI chats (Claude / ChatGPT) and data (Apple Notes, Notion, Evernote). Dots reads through everything, starts creating your Me board with your entire memory graph, auto-resolves conflicts, and just knows you from the start.

Week one: You're working across three projects. You drop research into one board, brainstorm in another, execute in a third. Switch between them instantly. The AI already knows what each board is about because it sees your cards, your structure, your context. No re-explaining. You can just start chatting anywhere, and it stays updated at all times.

Month one: Dots knows you and has seen what you've been creating and doing: what you build on vs. what you explore and move on from. It's learned your taste through your actions, not your words, relative to the different boards. When you ask it to design something, it already knows you hate rounded corners in that exact project. When you're debugging, it remembers you prefer logs over breakpoints. Every correction you make teaches it. Every card you create sharpens its understanding.

The result? You stop maintaining tools and start using them. No tagging. No filing. No "I should organize this later" guilt. Conversations are JSONL you can grep, Git tracks everything. Zero lock-in.
Dots understands the context as the what, and the conversations that led to it as the why. And there's a lot more under the hood. Everything stays local (your brain is a folder you own). Privacy is a mission statement; nothing is stored online. You can literally just drop your entire Obsidian vault here and watch it get organized beautifully.

It's powered by Anthropic's Agent SDK, so Dots is as capable and agentic as it gets. You can bring all your MCPs, and if an API or skill doesn't exist, just dump things in and ask Dots to create it. Repeat something enough and Dots suggests turning it into a skill automatically. Every chat has reply threads (like Slack) so you can drill into any thought without losing the main conversation, and a TLDR button to catch up in seconds.

Who is it for? I believe there are two kinds of people doing major work with AI:
1. Those who want fully autonomous agents that take a prompt and do everything: OpenClaw, AI chief of staff, that whole wave.
2. Those who sit with it, plan, and execute step by step so their exact taste is translated into the output.

Pattrns is for the latter! You will soon even be able to use the browser extension and the Pattrns MCP to bring your own context to any chat agent you use daily, so it automatically starts thinking like you.

Anyway, Pattrns is a product I always wanted for myself, and I deeply care about this cause. My ultimate mission is to eventually have an interface that is as intuitive as pen and paper, along with an ambient AI that watches you and unifies everything you do in one place, constantly organizing your context so you keep coming back to it. What would that eventually feel like? That Pinterest image you keep going back to in your browser will soon be auto-organized in a space for you.

Early access for the beta is going live today (invite only; Mac only for now). Reply with what you're building right now and I'd love to send you an invite soon!
PS: No AI was used while crafting this entire product experience, just pen and paper. AI was only used to build it. Taste is human :')
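The thread above claims conversations are stored as JSONL you can grep. A minimal sketch of what that affords; the {"role", "text"} message schema here is a hypothetical stand-in, not Pattrns' actual format:

```python
import json

def grep_conversations(lines, keyword):
    """Return messages whose text contains keyword (case-insensitive).

    `lines` is an iterable of JSONL strings, one message per line.
    """
    hits = []
    for line in lines:
        msg = json.loads(line)
        if keyword.lower() in msg.get("text", "").lower():
            hits.append(msg)
    return hits

# Two messages in the assumed schema.
sample = [
    '{"role": "user", "text": "prefer logs over breakpoints"}',
    '{"role": "assistant", "text": "noted"}',
]
print(grep_conversations(sample, "logs"))  # only the user message matches
```

Plain-text storage like this is what makes the "zero lock-in" claim plausible: any line-oriented tool (grep, Git diffs) works on it directly.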
Andrej Karpathy@karpathy

LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images locally so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I use directly (in a web UI), but more often I want to hand it off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, with all of it viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.
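The data-ingest step above (raw/ documents compiled into a .md wiki with auto-maintained index files) can be scaffolded like this. All names here are illustrative assumptions, not Karpathy's actual scripts, and the `summarize` callable stands in for the LLM call that would write each entry's summary:

```python
from pathlib import Path

def compile_index(raw_dir, wiki_dir, summarize):
    """Rebuild wiki/index.md from the .md documents in raw/.

    Each entry gets an Obsidian-style [[backlink]] plus a one-line
    summary produced by `summarize` (an LLM call in the real workflow).
    """
    raw, wiki = Path(raw_dir), Path(wiki_dir)
    wiki.mkdir(parents=True, exist_ok=True)
    lines = ["# Index", ""]
    for doc in sorted(raw.glob("*.md")):
        lines.append(f"- [[{doc.stem}]]: {summarize(doc.read_text())}")
    (wiki / "index.md").write_text("\n".join(lines) + "\n")
```

Running this incrementally after each ingest keeps the index current, which is what lets the agent answer questions without a separate RAG pipeline at small scale.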

46 replies · 27 reposts · 296 likes · 31.7K views
Aarjav Jain
Aarjav Jain@aarjavvikas·
@divyaranjan_ @karpathy Bro, it would help me a lot. I have all the exports ready: WhatsApp, Notion, all AI LLMs, notes, LinkedIn, Twitter.
0 replies · 0 reposts · 0 likes · 110 views
Aarjav Jain
Aarjav Jain@aarjavvikas·
Are monopolies built intentionally, or do they just get built? 🤔
0 replies · 0 reposts · 0 likes · 9 views
Aarjav Jain
Aarjav Jain@aarjavvikas·
Now, normal doesn't go viral, and normal also changes with time.
0 replies · 0 reposts · 0 likes · 8 views
Aarjav Jain
Aarjav Jain@aarjavvikas·
A successful business requires a moat; a successful monopoly requires too many moats.
0 replies · 0 reposts · 0 likes · 9 views