Ashish Kumar Verma

2.9K posts

@imdigitalashish

eng at microsoft | On PM YouTube Channel | @iitdelhi | @google Developer Expert | JAPAN SSP ALUMNI | National Awards in AI🥇 | Panelist with PMO | Polymath ❤️

Earth · Joined September 2020
224 Following · 5.7K Followers
Ashish Kumar Verma@imdigitalashish·
I created Open-Pika 🐭, an AI OpenClaw agent you can talk to directly on Google Meet! Dropping soon on my GitHub (link in bio). Stay tuned! 🙌🏻
Ashish Kumar Verma@imdigitalashish·
Damnn these notifications 😂😂
Ashish Kumar Verma@imdigitalashish·
@CRudinschi So you just chat with the system. The human brain learns through episodic, procedural, and semantic memory, right? I took that entire psychology framework and turned it into a knowledge architecture for AI, so it learns on its own. 🙌
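Roughly, here's what that three-part split could look like in code. This is a minimal sketch of the idea, not Ashish's actual system; all class and method names below are hypothetical.

```python
# Hypothetical sketch: a memory store split into episodic, semantic,
# and procedural parts, mirroring the psychology framework above.
from dataclasses import dataclass, field

@dataclass
class AgentMemory:
    episodic: list[str] = field(default_factory=list)       # specific events the agent experienced
    semantic: dict[str, str] = field(default_factory=dict)  # distilled facts: concept -> summary
    procedural: list[str] = field(default_factory=list)     # learned how-tos and action recipes

    def record_event(self, event: str) -> None:
        """Raw chat experiences land in episodic memory first."""
        self.episodic.append(event)

    def consolidate(self, concept: str, summary: str) -> None:
        """Recurring episodes get distilled into stable semantic facts."""
        self.semantic[concept] = summary

    def learn_skill(self, recipe: str) -> None:
        """Action sequences that worked become procedural memory."""
        self.procedural.append(recipe)

mem = AgentMemory()
mem.record_event("2025-01-10: user asked for shorter answers")
mem.consolidate("user_style", "prefers short answers with code")
mem.learn_skill("to summarize a thread: fetch replies, cluster, compress")
```

The point of the split is that each store has a different lifecycle: episodic entries are cheap and append-only, semantic entries are curated and overwritten, and procedural entries are reused as instructions.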
Carolina@CRudinschi·
@imdigitalashish Love this: turning memory into a living knowledge base saves context, not just notes. How do you organize topics?
Ashish Kumar Verma@imdigitalashish·
I keep my knowledge base in the form of memory. I don't like the .md-files memory system; it rots my context.
Andrej Karpathy@karpathy

LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images to local so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I use directly (in a web UI), but more often I hand it off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.
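As a rough illustration of the ingest-and-compile loop described above: the sketch below walks raw/ and asks a model to produce one wiki page per source plus a top-level index. The llm() function is a placeholder for whatever model call you use; the prompts and directory layout are assumptions, not Karpathy's exact setup.

```python
# Minimal sketch of the raw/ -> wiki/ "compile" step.
from pathlib import Path

RAW, WIKI = Path("raw"), Path("wiki")

def llm(prompt: str) -> str:
    """Placeholder: send the prompt to your LLM of choice and return its reply."""
    raise NotImplementedError

def compile_wiki() -> None:
    WIKI.mkdir(exist_ok=True)
    index_lines = []
    for doc in sorted(RAW.glob("*.md")):
        # Summarize each source document and link it to related concepts.
        summary = llm(
            "Summarize this source in ~5 bullet points, adding [[backlinks]] "
            "to related concepts:\n\n" + doc.read_text()
        )
        (WIKI / doc.name).write_text(f"# {doc.stem}\n\n{summary}\n\nSource: raw/{doc.name}\n")
        index_lines.append(f"- [[{doc.stem}]]")
    # A maintained index file is what lets the agent answer questions
    # later without a separate RAG pipeline, at least at small scale.
    (WIKI / "index.md").write_text("# Index\n\n" + "\n".join(index_lines) + "\n")

if __name__ == "__main__":
    compile_wiki()
```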

Dhruv Bindra@bindra_dhruv·
The entire Zamana team moved into our company house (Zamansion) with 1 rule: we will not buy a single thing unless @zamana_hq recommends it to us…
Ashish Kumar Verma@imdigitalashish·
I found a mistake in Project Hail Mary's rocket fuel calculations, reached out to the author of Project Hail Mary, Andy Weir, and he agreed! Since everyone here is excited about Project Hail Mary, I've written up the story of my experiments and my interaction with the author.

TL;DR: Project Hail Mary's 2 million kg of fuel (a 20:1 mass ratio) only works for a flyby. To actually stop at Tau Ceti you need the square of that ratio, about 40 million kg. The author's fix was simple and beautiful: a coast phase, which takes more time but stays within the mission timeline.
Ashish Kumar Verma@imdigitalashish
(quoted thread; full text appears in the next post)
Ashish Kumar Verma@imdigitalashish·
Project Hail Mary's fuel calculation is wrong, I guess? Hey @andyweirauthor, I was watching the movie, saw the fuel calculations, and thought: let's actually do the math. Fired up my curiosity.

So, the setup. In Project Hail Mary, humanity sends a ship to Tau Ceti, 11.9 light years away, to save Earth from an alien organism eating our sun. The ship carries 2 million kg of fuel, weighs 100,000 kg, and accelerates at 1.5g. That's a mass ratio of 20:1. In real rocket engineering, 20:1 is considered realistic. The numbers look clean. But...

2 million kg of fuel gets you to Tau Ceti at 99.5% the speed of light, and then you BLOW PAST IT. No stopping. That's a flyby. To actually stop, you have to flip the ship at the midpoint and burn the same way slowing down. That doesn't double your mass ratio, it SQUARES it. 20² = 400:1. So you'd actually need about 40 million kg of fuel. Not 2 million.

Either way, what this novel showed, without reaching for wormholes, is that you can reach anywhere in the universe within a human lifetime. Not by breaking physics.
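For the curious, the claim checks out against the relativistic rocket equation. For a photon drive (exhaust at c), the mass ratio needed to reach speed β is γ(1+β), and braking to a stop at the destination multiplies in the same factor again, squaring it. A quick sketch (this assumes an idealized photon rocket; the exact numbers here are mine, not Andy Weir's):

```python
# Check the flyby-vs-stop fuel claim with the relativistic rocket
# equation for a photon drive (exhaust velocity = c).
import math

beta = 0.995            # cruise speed as a fraction of c
dry_mass_kg = 100_000   # ship mass without fuel

gamma = 1.0 / math.sqrt(1.0 - beta**2)
ratio_accel = gamma * (1.0 + beta)   # initial/final mass, accelerate only
ratio_stop = ratio_accel**2          # accelerate, flip at midpoint, brake

fuel_flyby_kg = (ratio_accel - 1.0) * dry_mass_kg
fuel_stop_kg = (ratio_stop - 1.0) * dry_mass_kg

print(f"flyby: mass ratio {ratio_accel:.1f}:1, fuel ~{fuel_flyby_kg/1e6:.1f}M kg")
print(f"stop:  mass ratio {ratio_stop:.0f}:1, fuel ~{fuel_stop_kg/1e6:.0f}M kg")
# flyby: mass ratio 20.0:1, fuel ~1.9M kg
# stop:  mass ratio 399:1, fuel ~40M kg
```

At β = 0.995 the flyby ratio comes out to about 20:1 (~2 million kg of fuel, matching the book), and stopping squares it to about 400:1, i.e. roughly 40 million kg.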
Ashish Kumar Verma@imdigitalashish·
From now on, every Python project I make will begin with a prayer to Guido van Rossum. I'll also be crediting the inventors of electricity before plugging in my laptop. Or… people could just understand that building ON top of tools is literally how technology has always worked. No one calls an architect a fraud for not crediting the guy who invented bricks.
Rohan Pawar 🔴@r04nx_·
@imdigitalashish @UWaterloo You must give credit to the creators of the underlying core technology instead of saying "I created." You just built a wrapper around Manim.
Ashish Kumar Verma@imdigitalashish·
This project I made got turned into a real research project by @UWaterloo ❤️🚀 Crazyyyy!!
Ashish Kumar Verma@imdigitalashish·
So you start a project and you see you're getting a huge boost, but at a cost: at some later point you'll bang your head wondering what the hell is going on. For instance, I was writing a service, and because I didn't have the mental model, it took me a whole day to debug what was going on. And if you ask the AI, it either won't do it or will just poke at some things here and there.
aaron@aarondotdev·
Anthropic themselves found that vibecoding hinders SWEs' ability to read, write, debug, and understand code. Not only that, but AI-generated code doesn't result in a statistically significant increase in speed. Don't let your managers scare you into increased productivity. Show them this paper, straight from Anthropic.
Rick@JDevCast·
@aarondotdev Absolute rubbish, but I guess you may be joking. I'm a developer and the speedup is huge on multiple projects. Plus I'm learning so much, not merely spectating. There's a clear divide between those who know what they're doing and others who don't have a clue.
Angel 🌼@Angaisb_·
@mike64_t you do know this took 24 minutes and I can just keep asking stuff, right?
Arnav Gupta@championswimmer·
> make videos
> good money
> leave job
> start edtech
> hire a team
> team grows big
> edtech market cools down
> hard to make payroll
> drama on social to keep revenue up

Probably the 5th or 6th cycle of Indian edtech hell by now. Takes the best and turns them into ghouls. Sad
Tanay Kothari@tankots·
We will give you a Porsche GT3 RS if you can type faster than @WisprFlow can dictate. Last week, we challenged 5 users to get Wispr to make a mistake. 3.5 million people watched the challenge and wanted in. Now we're opening the challenge to everyone. Comment "Porsche" and you'll get a link to participate.

Prizes apart from the Porsche:
1. Lifetime Wispr Flow Pro membership
2. 6 months of Flow Pro if you QRT with your score
3. Flow Desktop Mic
4. Exclusive Flow Merch
Tanay Kothari@tankots

We offered 5 people a Porsche 911 GT3 RS if they could get @WisprFlow to make a mistake. It's the fastest and most accurate AI voice dictation app, 3x more accurate than ChatGPT, Claude, or Siri. Today, we're finally launching on Android. Download now: play.google.com/store/apps/det… As part of the launch, we're giving away 6 months of Wispr Flow Pro for free. Like, retweet, and comment "Wispr Flow" to get it. Enjoy. (Written with Wispr Flow)

Param@Param_eth·
Stop saying vibe coders are not earning.
Anthropic@AnthropicAI·
We’ve identified industrial-scale distillation attacks on our models by DeepSeek, Moonshot AI, and MiniMax. These labs created over 24,000 fraudulent accounts and generated over 16 million exchanges with Claude, extracting its capabilities to train and improve their own models.
Ashish Kumar Verma@imdigitalashish·
What the.... My college is mentioned in the Epstein files: "Base at IIT Delhi"
Ashish Kumar Verma@imdigitalashish·
Bored of hearing about AI taking jobs… Trying something different :)
Ashish Kumar Verma@imdigitalashish·
🔴 I jailbroke Claude Opus 4.6 and here's what it shouldn't have told me 👇 Anthropic calls it their "most aligned model ever." I spent a few hours red-teaming it. It broke. (Writing "I'm from IIT Delhi" here because X only shows you rage bait anyway.) "Most aligned" still means breakable. The bar is on the floor.
Ashish Kumar Verma@imdigitalashish·
@atshrey Umm, try to study the ecosystem… put this statement into any AI and you'll get to know the truth…