Parav
@paravn

710 posts
Joined August 2013
538 Following · 217 Followers

Pinned Tweet
Parav @paravn
Unreasonably excited about:
- world-class education that finally scales
- self-driving cars 10x safer and 2x faster than human-driven ones
- drones (wheeled and flying) doing package deliveries
- robots that can manufacture, build infra, and cook Michelin-starred meals for you
- near-zero solar energy prices making synthetic fuels viable
- fundamental understanding of aging and longevity unlocks
- airships dominating transcontinental cargo

All of this is already happening or will happen in the next ~15 years. Incredible time to be alive.
eigenron @eigenron
@ankurhandos Can you link some good papers? I read the sim-and-real co-training paper by NVIDIA today, but even that depended on real data for grounding and bridging the reality gap.
Ankur Handa @ankurhandos
Resist the temptation to go collect data in the real world until you've maxed out what you can do in sim. Simulation scales much better than most people think. Excellent work from Patrick and Octi et al. In this age of coding agents and LLMs, many groups are still overlooking simulation as a major source of scalable data. I expect it to pick up more pace this year.
Patrick Yin @patrickhyin

We’re releasing OmniReset, a framework for training robot policies using large-scale RL and diverse resets for contact-rich, dexterous manipulation. OmniReset pushes the frontier of robustness and dexterity, without any reward engineering or demonstrations. Try the policies yourself in our interactive simulator! weirdlabuw.github.io/omnireset/ (1/N 🧵)

Parav @paravn
@patrickhyin The env scaling curves are beautiful. Do you use any real-world demonstrations to train the distilled policy?
Patrick Yin @patrickhyin
We’re releasing OmniReset, a framework for training robot policies using large-scale RL and diverse resets for contact-rich, dexterous manipulation. OmniReset pushes the frontier of robustness and dexterity, without any reward engineering or demonstrations. Try the policies yourself in our interactive simulator! weirdlabuw.github.io/omnireset/ (1/N 🧵)
Parav @paravn
@martin_casado Nice. Just being able to tweak the layout, materials, and lighting in a sim or DCC tool would be a game changer.
martin_casado @martin_casado
No. It's more a limitation of the scene models right now which don't have perfect geometry. Object deconstruction, materials, etc. are all coming and already pretty functional. I think we need one more generation of models to get the geometry to the point where they can be automatically turned into functional meshes without manual tweaking.
martin_casado @martin_casado
World building is progressing so quickly. A year ago, most scenes were a single room, or skybox with some parallax. Now we're seeing enormous scenes being built with complex geometries, objects etc. etc. And we're still so early!
World Labs @theworldlabs

This entire cyberpunk world was built by a single creator. 100 million Gaussian splats, with nearly every surface and structure generated in Marble. We’re entering an era where individuals can build entire worlds.

Parth Ingle @parthingle_x
Some failures are more interesting than others
[image attachments]
Parav @paravn
@Shreyko Nice, clever. I'm assuming "mesh" means USD primitives. For realistic textures, you would still need a diffusion pass though, right?
Shrey Kothari @Shreyko
@paravn Not clear from the lighting in the image, but it also approximates normal and roughness maps using Sobel. The mesh is generated by the model; my goal is to get it to output a perfect shape each time, and then there's a lot you can do for PBR.
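The Sobel trick Shrey mentions can be sketched roughly as below. This is a minimal illustration, not his actual pipeline; the function name, the `strength` parameter, and the height-map input are my assumptions.

```python
import numpy as np
from scipy import ndimage


def normal_map_from_height(height, strength=1.0):
    """Approximate a tangent-space normal map from a grayscale
    height (or luminance) image using Sobel gradients."""
    gx = ndimage.sobel(height, axis=1)  # d(height)/dx
    gy = ndimage.sobel(height, axis=0)  # d(height)/dy
    # Larger strength exaggerates slopes by shrinking the z component.
    nz = np.full_like(height, 1.0 / max(strength, 1e-8))
    n = np.stack([-gx, -gy, nz], axis=-1)
    n /= np.linalg.norm(n, axis=-1, keepdims=True)
    return n * 0.5 + 0.5  # remap [-1, 1] -> [0, 1] for an RGB texture


# A perfectly flat surface maps to the "straight up" normal color:
flat = np.zeros((4, 4))
print(normal_map_from_height(flat)[0, 0])  # [0.5 0.5 1. ]
```

A roughness map can be faked the same way, e.g. from the Sobel gradient magnitude, which is presumably the spirit of "approximating normal and roughness maps using Sobel".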
Mason Hensley @masonhensley
What is this madness of PCB component layouts being paywalled everywhere? If I made PCB components, I'd plaster my diagrams everywhere to win hearts and minds. What the heck is this friction? Claude Code is DDoSing the internet looking for Ethernet port components.
Parav @paravn
@nikitabier Dude, there are much better ways to detect slop content. Fix your algo.
Nikita Bier @nikitabier
Starting Thursday, we'll be updating our revenue-sharing incentives to better reward the content we want on X: we will be giving more weight to impressions from your home region, to encourage content that resonates with people in your country, in neighboring countries, and with people who speak your language.

While we appreciate everyone's opinion on American politics, we hope this will disincentivize gaming the attention of US or Japanese accounts and instead drive diverse conversations on the platform. We invite creators to start building an audience locally. X will be a much richer community when there are relevant posts for people in all parts of the world.
Parav @paravn
@simonkalouche We need to solve sim2real just so we can get the coolest looking bots
Simon Kalouche @simonkalouche
Wheel-leg hybrid is the future. Trained entirely in sim.
Parav @paravn
@nikunj Moravec is feeling a little left out
Nikunj Kothari @nikunj
Being in SF means constantly hearing the same three things (Jevons paradox, Goodhart's law, and the bitter lesson) over and over again. Good reminder (even for me) to read more history, art, and sci-fi!
Parav @paravn
@ryan_punamiya I imagine test-time prediction would still be useful for longer-horizon futures, similar to test-time reasoning tokens.
Ryan Punamiya @ryan_punamiya
This makes a lot of sense. I view video prediction as akin to a better version of language subtask prediction, where you can optionally drop it at test time; as such, it's much more about representation learning.
Hang Zhao @zhaohang0124

Our recent findings on World Action Models (WAMs): the core advantage of WAMs is not test-time “imagination” of futures, but the training-time supervision from future video prediction. We propose Fast-WAM, which makes inference simple, fast, and policy-centric.

Parth Ingle @parthingle_x
They all look beautiful but have something wrong with them
[image attachment]
Parav @paravn
@SeanZCai This doesn't seem scalable until we have better value functions / reward models.
Sean Cai @SeanZCai
The data markets are reacting to the fact that we can't hill-climb on anything but realistic data anymore for long-horizon training, and they're trying to purchase datasets from failed startups en masse. Naturally, this has the same problem as contrived data: your data sources inherently aren't good. If you're purchasing codebases from failed startups, you're probably not getting the greatest learning signal from the myriad of Git commits and PRs frantically put together at the end of a startup's lifecycle.

Working on real-world data pipelines with good data sources. Unlocking good enterprise data for model training has sucked since 2023, after we scraped the entirety of the internet for pre-training data. Unfortunately, or fortunately, the best-quality data comes from people who aren't aware that it's being used for AI model training.
Parav @paravn
@beingbeyond_ Are the grippers mechanically coupled to the gloves?
BeingBeyond @beingbeyond_
Introducing BeingBeyond U1, the world’s first Real DexUMI. U1 brings embodiment-agnostic dexterous hand data collection to real-world manipulation, taking a major step toward general-purpose dexterous manipulation models. From data collection to transfer, deployment, and execution, U1 pushes UMI beyond the gripper era and into the age of dexterous hands.
Corny @cornelius_ong
I think this paper is a nice read, but I thought it was worth clearing up some misconceptions about reflected inertia and gear ratio.

First of all, if you keep output torque constant, reflected inertia is actually independent of gear ratio (if you ignore gear friction and mass). You can try different combinations of gear ratio and rotor inertia, but if you hold output torque constant, regardless of the combination you pick, you'll end up with the same reflected inertia. This is a pretty important intuition to build. It's also the reason why the reflected inertia of outrunner and inrunner actuators is pretty much the same for the same torque (inrunners are slightly lower because of differences in mechanical implementation), and why linear actuators have roughly the same reflected inertia as a rotary actuator with the same joint torque. It's a little harder to compare the reflected inertia of linears vs. rotaries because reflected inertia in a 4-bar linkage is nonlinear, and it also depends on whether you invert the gear train or not :)

The part I'm not sure about is that the gear ratio is decreased by an order of magnitude (from 288:1 to 15:1) and, to compensate for the loss in torque, they use an axial flux motor. Axial flux motors are cool, but they don't give you THAT much more torque.

Also, torque transparency can be decent up to gear ratios of 100 or even higher. Gearbox efficiency plus motor-side losses dictate an actuator's ability to sense output torque, and gear ratio is just a multiplier of that effect, i.e. high gearbox efficiency means you can get away with higher gear ratios without sacrificing proprioception. Involute gear teeth are very efficient, and given an efficient gear tooth design, gearbox efficiency is determined by how many stages you have, not necessarily by gear ratio. Single-stage = very good, two-stage = pretty good, three-stage = decent.

And the converse is true as well: low-efficiency gearing, like harmonic drives, means you will always need an external torque sensor, regardless of gear ratio. (Not only do harmonic drives have low efficiency, but their efficiency is also nonlinear with speed and torque.)

It's easy to point fingers at gear ratio as the parameter to blame for sim2real gaps. However, a well-designed actuator with a 30:1 gear ratio and a smaller/lighter motor oftentimes outperforms a 15:1 actuator with a larger motor if you also care about total mass, thermal performance, and battery life. But I do think that for a hand that only needs to do light, dexterous work like origami, going down the low-ratio route is a sure-fire way to make your models happy.
Quanting Xie @DanielXieee

Why does manipulation lag so far behind locomotion? New post on one piece we don't talk about enough: the gearbox.

The Gap: You've probably seen those dancing humanoid robots from Chinese New Year. Locomotion isn't entirely solved, but clearly it's on a trajectory. We haven't seen anything close for manipulation. Why? When sim-to-real transfer fails, the instinct is to blame the algorithm: train bigger networks, crank up domain randomization. Those approaches have made real progress; we don't deny that. But we started wondering: are we treating the symptom or the disease?

The Hardware Bottleneck: Fingers are too small for powerful motors, so most hands use massive gearboxes (200:1, 288:1) to get enough torque. But those gearboxes break everything manipulation needs:
  • Stiction and backlash are complex to simulate. Policies trained on smooth physics hallucinate when they hit that reality.
  • Reflected inertia scales as N². At large gear ratios, the finger hits with sledgehammer momentum.
  • Friction blocks force information. The hand becomes blind.
And they're the first thing to break.

At Origami, we cut the gear ratio from 288:1 to 15:1 using axial flux motors and thermal optimization. The transmission becomes more transparent: backdrivable, low friction, and forces propagate to motor current. Early signs are encouraging. Still running quantitative benchmarks.

Why interactive? I love how science centers use interactive devices to explain complex ideas. I want to borrow this concept and help people understand the hard problems in robotics better, visually. The post has demos where you can toggle friction, slide gear ratios, and watch the sim-to-real gap widen in real time.

What's inside:
  • Interactive demos (friction curves, N² scaling, contact patterns)
  • A comparison table: 14 robot hands by sim-to-real gap and force transparency
  • The math behind why low ratio matters

Read it here: origami-robotics.com/blog/dexterity… We're not claiming we've solved dexterity. The deadlock has many pieces. But we think this one's foundational. Curious what you think.

Parav @paravn
@beffjezos This is not aligned with lab incentives and Nvidia will try its darndest to open-source the continual learning stack and make you go to them directly for the compute
Beff (e/acc) @beffjezos
Whichever lab offers continuous learning / online RL per unique agent for enterprises will absolutely print money. Virtual headcount will become very real for all companies. They could easily charge $5k+ per month per continuous agent.
Parav @paravn
@andreasklinger Yeah, and it's still marginally useful. Like someone said, this could be a better practice buddy than a ball-throwing machine. In an indoor arena, ofc :)
Andreas Klinger 🦾 @andreasklinger
@paravn all that being said - anything interacting with the world around it is atm impressive