Nihal

57 posts

@omnomnihal

CompE @UofIllinois, Research @daslab_uiuc, making cool stuff @sigrobotics

Joined November 2025
173 Following · 56 Followers
Nihal retweeted
underscore advait patel
underscore advait patel@_advaitpatel·
Mind has raised an additional $400M, bringing our total funding to over $1 billion. We are building dexterous, general-purpose robots and foundation models for industrial deployment, starting with the automotive industry. I’m super excited and grateful to be working on these problems with such a stellar team. We are uniquely positioned to tackle the general-purpose robotics problem given our deep partnership with @Rivian, who is a shareholder and our pilot customer, and who is also providing us with data from production lines to add to our training mixture. It’s still super early days, and we’re hiring across research, software, hardware, and more! (Link below)
underscore advait patel tweet media
Ville🤖
Ville🤖@VilleKuosmanen·
@omnomnihal @droyd_robotics I recommend investigating force/torque sensors and/or controllers too; to plug in cables you need to apply a specific amount of force, which is hard to do with position control only
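A minimal admittance-control sketch of the idea in that reply: instead of commanding positions alone, feed the force/torque reading back into the position setpoint so the end effector converges on a target insertion force. The 1-D spring "environment", the gains, and all names here are made up for illustration, not taken from any robot SDK.

```python
TARGET_FORCE = 5.0   # N, desired cable-insertion force (illustrative)
STIFFNESS = 1000.0   # N/m, simulated contact stiffness (illustrative)
K_ADM = 0.0005       # m/N, admittance gain: force error -> position step

def insert_with_force_control(steps: int = 200) -> float:
    """Drive a 1-D setpoint until the simulated contact force hits target."""
    x = -0.002                          # start 2 mm above the contact point
    measured = 0.0
    for _ in range(steps):
        # Simulated force/torque sensor: spring force once in contact.
        measured = STIFFNESS * max(0.0, x)
        # Admittance law: step the position along the force error,
        # rather than tracking a fixed pose as pure position control would.
        x += K_ADM * (TARGET_FORCE - measured)
    return measured

print(round(insert_with_force_control(), 3))  # settles near 5.0 N
```

On real hardware the same loop would read a wrist-mounted F/T sensor and command small Cartesian offsets, often mixed with position control on the unconstrained axes (hybrid force/position control).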
Nihal
Nihal@omnomnihal·
summer internship update: I’ll be working on VLAs at @droyd_robotics! Will be researching how different end effectors (to plug cables in) affect the policy. Might end up writing a paper on it. If you have any leads/have done similar work lmk!
Nihal retweeted
Generalist
Generalist@GeneralistAI·
GEN-1 delicately arranges potato chips, and lifts a heavy bag of potatoes — from a gentle touch to a strong grip. Read more about GEN-1 in our blog posts in the comments below ↓
Nihal
Nihal@omnomnihal·
@chetan_ Don't think that's a plague though. More people working on robots = more development. Pretty much everything will be automated by robots in the (not-so-far) future anyways
Nihal
Nihal@omnomnihal·
@k7agar Foundational VLAs and world models exist tho. Like a bajillion hours of pretraining data has been thrown at those things
atharva ☆
atharva ☆@k7agar·
true pre-training for robot learning has not been done yet
Nihal
Nihal@omnomnihal·
@sheriyuo I don’t think so. Now that scale is the limiting factor for model performance, people are realizing it’s not sustainable. So I feel like we’re shifting back to discovering more efficient model architectures, and we will optimize for lower compute
Xiuyu Li
Xiuyu Li@sheriyuo·
AI research is already falling into a death cycle. If you do not get an internship at a top lab/company, you cannot access the core techniques or gain real frontier engineering experience. But without those experiences, it becomes almost impossible to pass the resume screening and multiple interview rounds for those same internships.

People joke about using Macs for AI, but in reality they are often just better SSH terminals into remote GPU clusters.

In frontier labs, the most important thing about an internship is not the payout. What really matters is which team (foundations/data/infra/ToC/...) you are on and how much GPU cluster access you get (have you tried training on 64 GPUs?). That determines the actual value of the internship for your future research and career.

The most advanced models, datasets, and compute resources are increasingly concentrated inside a handful of companies. That concentration is quietly reshaping the entire field.
紫云@dviolettchan

CS used to be a relatively less toxic field because the tools were open and cheap. You could do meaningful research with a laptop, or maybe a single GPU. Those good old days are probably never coming back. (1/3)

Wesley Maa
Wesley Maa@wesleymaa·
I have chosen to kill myself over this class project to put minecraft on an fpga instead of studying for finals
Eric Zhang
Eric Zhang@ekzhang1·
I feel it’s really unhelpful that searching for “deep RL” sends you to Q-learning, MDPs, Bellman’s equation, etc., when it’s literally just: run LLM agent on data -> was it good? -> policy gradient +/- reward. Like that’s actually it! And LLMs are just stacks of attn+MLP
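The loop that tweet describes can be sketched in a few lines: run the policy, score the outcome, take a policy-gradient (REINFORCE) step against a running baseline. A 3-armed bandit stands in for "LLM agent on data" here; the rewards, learning rate, and names are illustrative, not from any RL library.

```python
import numpy as np

rng = np.random.default_rng(0)
rewards = np.array([0.1, 0.9, 0.3])   # arm 1 is the "good" behavior

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

logits = np.zeros(3)    # policy parameters
baseline = 0.0          # running reward baseline (the "+/-" part)
lr = 0.5

for _ in range(500):
    probs = softmax(logits)
    a = rng.choice(3, p=probs)             # run the "agent"
    r = rewards[a]                         # was it good?
    grad = -probs
    grad[a] += 1.0                         # grad of log pi(a) w.r.t. logits
    logits += lr * (r - baseline) * grad   # policy-gradient update
    baseline += 0.05 * (r - baseline)      # track average reward

print(softmax(logits).argmax())  # probability mass concentrates on arm 1
```

Swap the bandit for rollouts of an LLM agent and the scalar reward for a grader, and the same update rule is the core of the recipe; everything else (MDPs, Bellman equations) is the classical machinery around it.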
Stephen Xie
Stephen Xie@stephenx_·
Longer chain-of-thought = slower inference, more context rot, and ballooning compute. So what if the model could decide for itself when to go parallel? Our new BAIR blog breaks down Adaptive Parallel Reasoning (APR) — the next paradigm in inference-time scaling. 🧵
Stephen Xie tweet media
Nihal
Nihal@omnomnihal·
@weikaih04 this is super cool i've been wanting to try implementing this for a while
Weikai Huang
Weikai Huang@weikaih04·
Traditional VLAs perceive the world in 2D through a ViT, while humans perceive it in 3D. Introducing MolmoAct 2, a fully open-sourced VLA that can first reason in 3D space and then act, beating Pi0.5 on nearly all benchmarks. We open-sourced all the data/code/models, and a huge shout-out to proj leads: @hq_fang and @DJiafei
Ai2@allen_ai

Robotics models often struggle outside controlled environments. Ours is built to work in real ones. Today we're launching MolmoAct 2, which can assist with a host of chores & lab tasks, plus the MolmoAct 2-Bimanual YAM dataset—the largest open robotics dataset of its kind. 🧵

Nihal
Nihal@omnomnihal·
@AmitLeViAI Blacklist this guy from all conferences bro 😭
Amit LeVi
Amit LeVi@AmitLeViAI·
Such a great evening to start a brand-new research project for NeurIPS in 3.5 days. 🧘‍♂️
Day 1: planning.
Night 1: running experiments and sending the abstract.
Day 2: reading results, fighting with Claude, and sending again.
Night 2: sleep (optional).
Day 3: opening Codex and, finally, writing the paper in parallel.
Night 3: resolving the “beef” with Claude (temporary peace) and going to sleep.
Day 4: final reading, last-minute fixes, submission, then some relaxation, maybe a beach walk.
I’ll keep you posted on the results. This will be my only single-author paper, so I can’t hide behind other submissions if it gets rejected 😅
Amit LeVi tweet media
Ayush Sharma
Ayush Sharma@ayushsdev·
I want to read more research papers but it’s hard to keep up. I need an X-like platform where every post is a summary of a new research paper or discovery, and you can comment to ask questions and have an AI answer you
Nihal
Nihal@omnomnihal·
@VMises76153 @IlirAliu_ I tried implementing this with a mobile robot but honestly this is way more efficient. Cool stuff!
Nathaniel Nifong
Nathaniel Nifong@VMises76153·
This is Stringman. An open source room scale CDPR compatible with LeRobot and designed for picking up laundry. @IlirAliu_ $1235 assembled at neufangled.com
Nihal
Nihal@omnomnihal·
Tried making a video gen model from scratch (from the Wan paper). It kinda worked, but I'm gonna try implementing a world model backbone so it has some object perception m.youtube.com/watch?v=ES00rb…
Nihal
Nihal@omnomnihal·
@rodinrooh I’ve found leading with something about them works well too (like oh i saw u on this podcast, read ur blog post, etc)
rodin 🌇
rodin 🌇@rodinrooh·
Cold email tips that got me convos with Mark Cuban, Paul Graham, Sequoia, and more at 15:
1) Finding who to email matters more than what you say
2) The body of your email doesn't matter one bit if you have a killer subject
3) Stop asking everyone for calls
rodin 🌇 tweet media
Nihal
Nihal@omnomnihal·
@broodsugar I mean maybe if ur just fine tuning existing models