Aneesh Muppidi

180 posts

@aneeshers

Incoming PhD at @StanfordAILab. Rhodes Scholar @FLAIR_ox. prev @harvard undergrad

Joined May 2020
694 Following · 439 Followers
Pinned Tweet
Aneesh Muppidi @aneeshers
⭐ New Paper Alert ⭐ How can your #RL agent quickly adapt to new distribution shifts? And without ANY tuning? 🤔 We suggest you get on the Fast TRAC 🏎️💨, our new parameter-free optimizer that surprisingly works. Why? Website: computationalrobotics.seas.harvard.edu/TRAC/ 1/🧵
8 replies · 20 reposts · 128 likes · 41.4K views
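To give a feel for what "parameter-free" means here, below is a minimal sketch. It is not TRAC's algorithm (see the website above for that): it is the classic Krichevsky-Trofimov coin-betting learner, a textbook parameter-free method with no learning rate anywhere, assuming each gradient is bounded in [-1, 1].

```python
# NOT the TRAC algorithm: classic parameter-free coin betting (KT bettor),
# shown only to illustrate online learning with no learning rate to tune.
# Assumes every gradient g satisfies |g| <= 1.
import numpy as np

def kt_learner(grad_oracle, steps, wealth=1.0):
    neg_grad_sum, w = 0.0, 0.0
    for t in range(1, steps + 1):
        w = neg_grad_sum / t * wealth  # the "bet" doubles as the iterate
        g = grad_oracle(w)             # receive gradient feedback at w
        wealth -= g * w                # wealth grows when the bet was right
        neg_grad_sum -= g
    return w

# Toy problem: minimize |w - 2| via its bounded subgradient sign(w - 2).
print(kt_learner(lambda w: float(np.sign(w - 2.0)), steps=5000))  # hovers near 2
```

The appeal of methods like TRAC is getting this no-tuning property while wrapping practical deep-RL optimizers, rather than a scalar toy like this one.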
Natasha Malpani 👁 @natashamalpani
there is no hugging face for robotics data. no standardized pipeline for collecting, labeling, versioning, training on real-world robot data at scale. no tooling that handles contact dynamics and material deformation well enough for industrial manipulation. no teleoperation infrastructure where human supervisor intervention automatically becomes training data. no vertical-specific manipulation datasets for any specific industrial task.

the actual bottleneck in physical AI is the data and the infrastructure to generate it. and this is a structural problem. for language AI, training data was the internet. abundant, cheap, already labeled by human intent. for robotics, the gap between where foundation models are and where they need to be cannot be closed by deploying more robots.

three bets are being made right now:

simulation-first works brilliantly for locomotion. domain randomization has essentially solved quadruped walking in unstructured terrain. but it breaks down completely for manipulation. simulated cameras have no noise, blur, or friction error. real cameras and grippers have all of it. cable insertion, fabric folding, dexterous assembly are exactly where simulation fails.

teleoperation as data collection is the second move. deploy semi-autonomous robots, capture human-guided trajectories, iterate. theoretically sound. but the capital math is brutal and the execution evidence isn't there yet.

human video as proxy is the third. if robots could learn from watching humans, you tap unlimited data. the problem: human hand geometry and force feedback don't map onto robot actuators. you're learning the shape of motion without the physics that make it work.

what's actually working today is locomotion. narrow manipulation in structured environments. inspection and sensing. quadrupeds doing thermal inspection. no general-purpose manipulation required.

the hardware race is loud, capital-intensive, winner-take-few. but the data infrastructure race is quiet, undercapitalized, wide open.
61 replies · 26 reposts · 379 likes · 71.9K views
Aneesh Muppidi @aneeshers
The coolest benchmark I've seen in 2026. Algorithmic Discovery Agents (ADAs) are the future! Also, it opens up so many directions (especially in optimizing AI for science).
Alex Goldie @AlexDGoldie

1/ 🪩 Automating the discovery of new algorithms could unlock significant breakthroughs in ML research. But optimising agents for this research has been limited by too few tasks to learn from! Introducing DiscoGen, a procedural generator of algorithm discovery tasks 🧵

0 replies · 0 reposts · 2 likes · 288 views
Aneesh Muppidi reposted
Alex Goldie @AlexDGoldie
1/ 🪩 Automating the discovery of new algorithms could unlock significant breakthroughs in ML research. But optimising agents for this research has been limited by too few tasks to learn from! Introducing DiscoGen, a procedural generator of algorithm discovery tasks 🧵
3 replies · 38 reposts · 129 likes · 23.8K views
Aneesh Muppidi reposted
Francesco Capuano @_fracapuano
Built with my own 2 hands, fully open source! Huge shoutout to Codex's invisible hands @reach_vb for helping me pull this together :)) Having always worked on algorithms, I found it quite daunting (and fun!) to build a simulator for once. See you on the open source side for more 👀
ORCA Dexterity @orcahand

Last week, we announced our three new hands. Today, we're releasing their digital twins ↓↓↓
> new orcahand mjcf/urdf files available on github.com/orcahand/orcah…
> custom learning environment @ github.com/orcahand/orca_…

4 replies · 11 reposts · 66 likes · 9.3K views
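Since the quoted release ships MJCF files, loading one in MuJoCo's Python bindings takes only a few lines. A minimal sketch follows; "orcahand.xml" is a placeholder file name, not a confirmed path from the repo (the real files live behind the truncated GitHub links above).

```python
# Minimal sketch: load a hand model from an MJCF file and step the physics.
# "orcahand.xml" is a placeholder, not a confirmed file from the ORCA repo.
import mujoco

model = mujoco.MjModel.from_xml_path("orcahand.xml")
data = mujoco.MjData(model)

for _ in range(1000):
    data.ctrl[:] = 0.0           # zero actuation; swap in a policy's output
    mujoco.mj_step(model, data)  # advance the simulation one timestep

print(model.nq, "position DoFs,", model.nu, "actuators")
```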
Zhengyao Jiang @zhengyaojiang
Your autoresearch needs its own Weights & Biases. We’ve turned Weco into an observability tool that lets you monitor, analyze, and share autoresearch runs. Here's what it can do: 🧵(1/4)
18 replies · 59 reposts · 649 likes · 36.7K views
Aneesh Muppidi reposted
Autoscience Institute @AutoScienceAI
Dario: "The biggest thing to watch is this issue of AI systems building AI systems."

Today, Autoscience is announcing $14M in funding from General Catalyst, Perplexity Fund, and Toyota Ventures to create autonomous AI research labs.

Human AI researchers can't keep up with the pace of new AI research. They're out of time to run the experiments they want. So we're building an autonomous AI lab that can.

The era of human-scale R&D is over. Machine-scale development has begun. 🚀
11 replies · 13 reposts · 72 likes · 384.9K views
Aneesh Muppidi reposted
Heng Yang @hankyang94
Glad that our work "Inference-Time Enhancement of Generative Robot Policies via Predictive World Modeling", led by @QiHan46459, has been accepted to IEEE Robotics and Automation Letters! 🎉

We propose Generative Predictive Control (GPC): sample action proposals from a pretrained diffusion policy ("look back"), roll them out with a diffusion-based action-conditioned video world model ("look forward"), then rank or optimize the actions using either a learned reward model or VLM preferences. Conceptually, this is trajectory optimization / MPC with hybrid sampling + gradient optimization, interpreted through modern diffusion priors and video world models.

Interestingly, we first posted the paper on arXiv in Feb 2025, when action-conditioned video world models for planning were still rare; now this direction is rapidly gaining traction. Still many open questions, e.g.:
• how to avoid local minima in planning
• what representations work best for world models
• how to balance physics priors vs. data-driven learning

Paper: arxiv.org/abs/2502.00622
5 replies · 17 reposts · 98 likes · 18.1K views
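The GPC loop described above is easy to sketch. Everything below is a stand-in toy, not the paper's implementation, and it shows only the ranking variant (the paper also optimizes actions directly):

```python
# Toy sketch of the GPC loop (ranking variant), not the paper's code.
import numpy as np

def gpc_step(obs, sample_actions, rollout, score, num_proposals=16):
    """Sample proposals ("look back"), simulate them ("look forward"), pick the best."""
    proposals = [sample_actions(obs) for _ in range(num_proposals)]
    returns = [score(rollout(obs, a)) for a in proposals]
    return proposals[int(np.argmax(returns))]

# Stand-ins for the pretrained diffusion policy, the action-conditioned
# video world model, and the learned reward model (or VLM preferences).
rng = np.random.default_rng(0)
sample_actions = lambda obs: rng.normal(size=(8, 4))   # an 8-step action chunk
rollout = lambda obs, a: obs + a.sum(axis=0)           # fake predicted future
score = lambda future: -float(np.linalg.norm(future))  # fake reward model

best_actions = gpc_step(np.zeros(4), sample_actions, rollout, score)
```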
Aneesh Muppidi reposted
Pranav Ramesh @pranavramesh25
Introducing Chat++: Supercharge your iMessage

Chat++ is an open-source, AI-supercharged search engine for your iMessages. You can search for moments hidden literally anywhere in your message history with literally anyone, across multiple chats. I built this over the course of a week.

I initially started working on this side project as a way to quickly search for things that my girlfriend and I did and talked about in the past, as part of a Valentine's gift I was making for her. After discovering how incredibly difficult it is to search for anything in the native iMessage interface, I built a quick search tool to index the chat.db database on Mac and more efficiently search for events. Then I realized: why not expand this?

Features:
- Cursor-like agentic search over your messages
- AI-curated "timeline" to jump across moments in your conversations with others
- Really fast keyword search over a filter range
- Minimap for quickly scrolling through messages

Stack:
- Tauri / Rust for the full stack
- @aisdk for agent orchestration, with model options from @AnthropicAI, @xai, @OpenAI, and @GeminiApp
- @OpenAI image embeddings

Repo: github.com/pr28416/chatpp

This is still far from complete but is rapidly improving day by day. Feel free to submit issues and PRs!
28 replies · 8 reposts · 318 likes · 29.4K views
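The core trick mentioned above, indexing chat.db, is approachable because the macOS iMessage store is plain SQLite. A sketch in Python (Chat++ itself is Tauri/Rust); the table and column names reflect the commonly documented schema, so verify them against your own database:

```python
# Sketch of keyword search over the macOS iMessage store (a SQLite database).
# Not Chat++'s code; schema names are as commonly documented, verify locally.
# Reading ~/Library/Messages/chat.db requires granting Full Disk Access.
import sqlite3
from pathlib import Path

DB_PATH = Path.home() / "Library" / "Messages" / "chat.db"

def search_messages(term: str, limit: int = 20):
    con = sqlite3.connect(f"file:{DB_PATH}?mode=ro", uri=True)  # read-only
    try:
        return con.execute(
            """
            SELECT handle.id, message.text
            FROM message
            JOIN handle ON message.handle_id = handle.ROWID
            WHERE message.text LIKE ?
            ORDER BY message.date DESC
            LIMIT ?
            """,
            (f"%{term}%", limit),
        ).fetchall()
    finally:
        con.close()

for sender, text in search_messages("valentine"):
    print(sender, ":", text)
```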
Tanishq Kumar @tanishqkumar07
I've been working on a new LLM inference algorithm. It's called Speculative Speculative Decoding (SSD) and it's up to 2x faster than the strongest inference engines in the world. Collab w/ @tri_dao @avnermay. Details in thread.
134 replies · 454 reposts · 4.1K likes · 601.4K views
Francesco Capuano @_fracapuano
*Very* happy to share lerobot is going to ICLR 2026 🇧🇷 See you in Brazil!
6 replies · 22 reposts · 414 likes · 17.8K views
Aneesh Muppidi reposted
Reza Shamji @Reza_Shamji
What if you could search all your research files with AI? ToolUniverse makes it real. Ask questions. Answers grounded in YOUR data. 🔍 Share your tool globally + use other scientists' tools! Collaborative science 🌍 Watch the demo 👇 youtube.com/watch?v=Rnkl01… @ScientistTools
0 replies · 2 reposts · 7 likes · 489 views
Aneesh Muppidi reposted
Zilin Wang @zilinwang4ai
1/ 🚗 🌏 What if an autonomous vehicle could move to a new city without collecting a single human demonstration in that city? I am so excited to introduce our new work: Learning to Drive in New Cities Without Human Demonstrations.
1 reply · 10 reposts · 47 likes · 15.9K views
Aneesh Muppidi reposted
J Rosser @jrosseruk
I wrote a quick guide on speedrunning a fresh mech-interp remote GPU research setup ⚡️ From nothing → a fully working remote GPU dev environment in minutes (SSH + VS Code/Cursor, CUDA, PyTorch, TransformerLens, GitHub, uv). 🧵 It's the exact workflow I use for new projects, especially for MATS-style research when you want to do research, not infrastructure. To prove this is rapid, here's a vid of my personal record: 11 minutes. youtube.com/watch?v=0UeQdf…
4 replies · 7 reposts · 129 likes · 9.8K views
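A natural final step for a setup like this is a smoke test that the whole stack imports and sees the GPU. The snippet below is a suggestion, not from the linked guide; "gpt2" is just an arbitrary small checkpoint that downloads fast.

```python
# Smoke test for a fresh CUDA + PyTorch + TransformerLens environment.
import torch
from transformer_lens import HookedTransformer

assert torch.cuda.is_available(), "no CUDA device visible to PyTorch"
model = HookedTransformer.from_pretrained("gpt2").to("cuda")
logits, cache = model.run_with_cache("Hello, world")
print(logits.shape)                         # [batch, seq, vocab]
print(cache["blocks.0.attn.hook_z"].shape)  # per-head attention outputs
```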
Aneesh Muppidi reposted
Reza Shamji @Reza_Shamji
Worried about LLM context budget with reasoning? @aneeshers @katrinarbrown @rana_shahout introduce predictive scheduling: a way to predict the amount of reasoning a query needs before generation! Could be super helpful for compute (e.g., post-training RL)!
Aneesh Muppidi @aneeshers

Introducing Predictive Scheduling. Can we predict how much reasoning a query needs before generating a single token? Blog: aneeshers.github.io/predictive-sch… Paper: arxiv.org/abs/2602.01237 Co-led with @katrinarbrown and @rana_shahout

0 replies · 1 repost · 2 likes · 346 views
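The paper's actual predictor isn't shown in the tweet. As a toy illustration of the idea only, the sketch below fits a trivial regressor from hand-made prompt features to an observed reasoning-token count, then uses it for shortest-predicted-job-first ordering; all features and training pairs here are fabricated for the demo.

```python
# Toy illustration of predictive scheduling, NOT the paper's method: estimate
# a query's reasoning-token budget before generation, then schedule by it.
import numpy as np
from sklearn.linear_model import Ridge

def featurize(prompt: str) -> np.ndarray:
    # Crude hand-made features; a real predictor would be learned.
    return np.array([len(prompt.split()), prompt.count("?"),
                     float("prove" in prompt.lower())])

# Fabricated (prompt, observed reasoning-token count) pairs.
history = [("What is 2 + 2?", 12),
           ("Prove sqrt(2) is irrational.", 640),
           ("Summarize this paragraph.", 40),
           ("Solve x^2 - 5x + 6 = 0 step by step.", 210)]
X = np.stack([featurize(p) for p, _ in history])
y = np.array([tokens for _, tokens in history])
predictor = Ridge().fit(X, y)

def predicted_budget(prompt: str) -> int:
    return max(1, int(predictor.predict(featurize(prompt)[None, :])[0]))

# Shortest-predicted-job-first: cheap queries won't wait behind long proofs.
queue = ["Prove Fermat's little theorem.", "What's 7 * 8?"]
print(sorted(queue, key=predicted_budget))
```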