Tianyu Li EasyPaperSniper

55 posts

@SniperPaper

Founding Researcher @Dexmate | ex-RS @FAIR | GaTech PhD

Joined September 2019
180 Following · 188 Followers
Tianyu Li EasyPaperSniper
Interesting work. Enhancing the capability of low-cost robots will make robotics more accessible to everyone.
Zhiyang (Frank) Dou@frankzydou

Excited to share that our work NeuralActuator: Neural Actuation Modeling for Robot Dynamics and External Force Perception has been accepted to #RSS2026! Your robot — even a low-cost one — can feel external forces without torque or tactile sensors. TL;DR: NeuralActuator is a neural actuator model that jointly predicts
1️⃣ torque, capturing the nonlinear and time-varying current-to-torque relationship of low-cost servos,
2️⃣ external contact forces (and force-detection gates) for sensorless force perception,
3️⃣ and motor conditions that indicate each motor’s operating regime.
Here is a fast-forward video clip ⬇️ We are also covering more robots, like the LeRobot-S101 and Franka Panda. More details coming soon.

0
0
1
150
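The joint-prediction idea in the tweet above — one network reading motor signals and emitting torque, a gated external-force estimate, and a contact probability — can be sketched as a multi-head MLP. Everything below (the `TinyActuatorNet` name, the architecture, the choice of input signals) is a hypothetical illustration, not the NeuralActuator implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

class TinyActuatorNet:
    """Toy multi-head MLP: motor-signal history -> (torque, force, contact gate)."""

    def __init__(self, history=5, hidden=32):
        in_dim = history * 3                         # (pos_err, vel, current) per step
        self.W1 = rng.normal(0.0, 0.1, (in_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 0.1, (hidden, 3))  # heads: torque, force, gate
        self.b2 = np.zeros(3)

    def forward(self, signals):
        """signals: (history, 3) array of recent motor readings."""
        h = np.tanh(signals.reshape(-1) @ self.W1 + self.b1)
        torque, force, gate_logit = h @ self.W2 + self.b2
        gate = 1.0 / (1.0 + np.exp(-gate_logit))     # probability of external contact
        # Gate the force head so the estimate is suppressed when no contact is detected
        return torque, force * gate, gate

net = TinyActuatorNet()
tau, f_ext, p_contact = net.forward(rng.normal(size=(5, 3)))
```

The gating trick is the interesting design point: multiplying the force estimate by a detection probability lets the same network say "no contact" without forcing the force head to regress exactly zero.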
Xiaolong Wang
Xiaolong Wang@xiaolonw·
Excited to share that Assured Robot Intelligence (ARI) has joined @Meta to help build the future of humanoid intelligence! When we started ARI one year ago, our mission was clear: achieve physical AGI. Through deep customer engagements and real-world deployments, it became clear to us that serving the massive opportunity ahead requires training a truly general-purpose physical agent. We believe this agent will be humanoid — and that scaling will come from learning directly from human experience, not teleoperation alone. Meta’s ecosystem brings together the key components needed to make this vision possible. We will be joining Meta Superintelligence Labs (MSL) to help bring personal superintelligence into the physical world. We are incredibly grateful to the brilliant minds, robotics researchers, engineers, partners, and supporters who have worked with us on this journey. Thank you to our investors and angels, led by @aixventureshq , for believing in our mission. This is just the beginning.
Bloomberg@business

Meta Platforms Inc. has acquired Assured Robot Intelligence, a startup developing artificial intelligence models for robots, as part of a major initiative to build humanoid technology. bloomberg.com/news/articles/…

109
59
680
176.8K
Tianyu Li EasyPaperSniper reposted
Claude
Claude@claudeai·
Claude now connects to the tools creative professionals already use. With the new Blender connector, you can debug a scene, build new tools, or batch-apply changes across every object, directly from Claude.
1.6K
4.4K
46.6K
12.4M
Tianyu Li EasyPaperSniper
Tianyu Li EasyPaperSniper@SniperPaper·
Our perception stack today is a mosaic — MoGe for depth/normals, SAM3 for segmentation, Grounding DINO for open-vocab detection — each with its own preprocessing and failure modes. Long-horizon tasks compound this: every subpolicy wants a different input. Collapsing all of it into one model is a real shift in the engineering math. Can't wait to give it a try.
Saining Xie@sainingxie

vision🍌 is here vision-banana.github.io if you got into computer vision the way I did, starting with pixel-level labeling tasks like segmentation, edges, depth, or surface normals, you’ll probably feel the same seeing these results -- something big has quietly shifted, and it’s going to change how we approach these problems for good 🧵

0
0
0
103
Tianyu Li EasyPaperSniper
Tianyu Li EasyPaperSniper@SniperPaper·
We believe the best way to move forward is to bring others along. That's why we're launching our first @DexmateAI Research Grant Program! Selected research proposals will receive access to Vega U, our dual-arm manipulation platform, to explore and advance the frontiers of physical AI and embodied intelligence. Who's eligible: Full-time faculty at accredited universities or degree-granting research institutions with a US mailing address. 📅 Deadline: April 5, 11:59 PM PST 🔗 Apply: dexmate.ai/research-grant Not a PI? Pass this along to your lab lead — and tag anyone who should apply! #Dexmate #VegaU #Robotics #PhysicalAI #EmbodiedAI #ResearchGrant
Dexmate@DexmateAI

🤖 Robotics faculty and labs, we’re launching our first Dexmate Research Grant Program. The future of AI will not live only in software. It will move, interact, and operate in the physical world. At Dexmate, we’re building toward that future by supporting the researchers shaping it. This program is designed to accelerate breakthroughs in physical AI, embodied intelligence, and real-world robotics applications. Selected applicants will receive access to Vega U, our dual-arm manipulation platform, to explore, build, and push the boundaries of what robots can do. 🧪 Applications are now open. Apply here: dexmate.ai/research-grant ✨ Eligibility: Full-time faculty at accredited universities and degree-granting research institutions with a US mailing address. 📅 Deadline: April 5, 11:59 PM PST. Not a faculty PI? Share this with your PI or lab lead. Tag someone who should apply. Let’s see what your lab builds next 🚀 #Dexmate #VegaU #robotics #PhysicalAI #EmbodiedAI #ResearchGrant #Innovation

0
1
3
317
Tianyu Li EasyPaperSniper reposted
Zhiyang (Frank) Dou
Zhiyang (Frank) Dou@frankzydou·
We present EgoReAct: Real-time 3D human reaction generation from streaming egocentric video. 🌟Reacting to streaming egocentric video is something humans do every day. We hope EgoReAct makes human motion more human-like. 🔎 What we found: existing ego-reaction data can be spatially inconsistent (e.g., moving reactions paired with fixed-camera videos), which breaks 3D grounding. 📷 What we built: HRD, a spatially aligned egocentric video–reaction dataset (3,500 pairs, 32 categories), plus a spatially aligned ViMo fix for fair evaluation. (Instead of collecting expensive ground-truth motion, we employ VDM to generate the egocentric videos.) 👁️⚡🏃 Our simple yet effective pipeline: motion tokenization for compact discrete codes + an autoregressive Transformer for online, strictly-causal generation. Metric depth and head dynamics further improve 3D spatial consistency. Project Page: frank-zy-dou.github.io/projects/EgoRe… ArXiv: arxiv.org/abs/2512.22808 #HumanMotion #EgocentricVision #3D #ARVR #Animation #AIGC #DeepLearning #GenerativeAI #Graphics #ComputerVision #Motion
Cambridge, MA 🇺🇸
6
29
160
11.2K
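The "strictly-causal autoregressive generation over motion tokens" in the tweet above can be sketched as a loop that samples each new token conditioned only on past tokens. The stand-in `next_token_logits` below is a placeholder for the paper's Transformer (which would also condition on the streaming video); the codebook size and all names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = 16  # size of the motion-token codebook (hypothetical)

def next_token_logits(history):
    """Stand-in for the autoregressive Transformer: depends only on past tokens."""
    h = np.zeros(VOCAB)
    for i, t in enumerate(history[-4:]):      # short causal context window
        h[(t + i) % VOCAB] += 1.0
    return h

def generate(prompt, steps):
    """Strictly causal: token t is sampled before anything after t is known."""
    tokens = list(prompt)
    for _ in range(steps):
        logits = next_token_logits(tokens)
        probs = np.exp(logits) / np.exp(logits).sum()
        tokens.append(int(rng.choice(VOCAB, p=probs)))
    return tokens

motion = generate(prompt=[0, 3], steps=8)
```

Because each step consumes only the history, the same loop can run online against a live video stream — the property the tweet highlights for real-time reaction generation.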
Tianyu Li EasyPaperSniper reposted
AK
AK@_akhaliq·
MoCapAnything: Unified 3D Motion Capture for Arbitrary Skeletons from Monocular Videos
6
110
819
36.8K
Chenhao Li
Chenhao Li@breadli428·
🧠Model-Based RL shows promise but has seen limited success in real-world robotics. 🌎Introducing Robotic World Model, a black-box end-to-end neural dynamics model that bridges this gap, where policies are trained purely in imagination. @NeurIPSConf 🎯sites.google.com/view/roboticwo…
13
66
365
101.4K
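"Policies trained purely in imagination" means: fit a dynamics model on logged transitions, then roll the policy out inside that learned model instead of on the real robot. A minimal sketch of that loop, using a toy linear system and least-squares model fit (the real work uses a black-box neural model; everything here is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy real system: s' = 0.9*s + 0.1*a  (unknown to the learner)
S = rng.normal(size=(200, 1))
A = rng.normal(size=(200, 1))
S_next = 0.9 * S + 0.1 * A

# Fit a dynamics model s' ~ [s, a] @ theta from logged transitions
X = np.hstack([S, A])
theta, *_ = np.linalg.lstsq(X, S_next, rcond=None)

def imagined_rollout(policy, s0, horizon=20):
    """Roll the policy out in the learned model; no real-world samples used."""
    s, total = s0, 0.0
    for _ in range(horizon):
        a = policy(s)
        s = np.array([np.array([s[0], a]) @ theta[:, 0]])  # model step, not real step
        total += -s[0] ** 2                                # reward: drive state to zero
    return float(total)

ret = imagined_rollout(lambda s: -0.5 * s[0], np.array([1.0]))
```

Once the model is fit, policy evaluation and improvement become cheap simulated rollouts — the appeal of model-based RL that the tweet is pointing at.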
Tianyu Li EasyPaperSniper reposted
Alan Fern
Alan Fern@AlanPaulFern1·
Imagine moving a heavy object with a joystick—through a swarm of quadruped-arm robots. 🕹️ decPLM: decentralized RL for multi-robot pinch-lift-move. • No comms or rigid links • Hierarchical RL + constellation reward • 2 → N robots, sim → real 🔗 decplm.github.io
15
115
627
59.7K
Ziyu (Charlotte) Zhang
Ziyu (Charlotte) Zhang@ziyu_zhang73354·
Training RL agents often requires tedious reward engineering. ADD can help! ADD uses a differential discriminator to automatically turn raw errors into effective training rewards for a wide variety of tasks! 🚀 Excited to share our latest work: Physics-Based Motion Imitation with Adversarial Differential Discriminators ( @SIGGRAPHAsia 2025), with Sergey Bashkirov*, Dun Yang, @YiShi_333, Michael Taylor, and @xbpeng4. 🌟 Webpage: add-moo.github.io 🌟 Code: coming soon!
6
50
293
52.2K
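The core idea the tweet describes — a discriminator turning raw tracking errors into rewards, replacing hand-tuned reward engineering — can be illustrated with a GAIL-style sketch. To be clear, ADD's actual adversarial differential discriminator differs from this; the scorer below is a made-up stand-in, not the paper's formulation:

```python
import numpy as np

def discriminator_score(error, w=2.0):
    """Stand-in learned scorer: small tracking errors look 'demo-like' (score near 1)."""
    return 1.0 / (1.0 + np.exp(w * (np.linalg.norm(error) - 1.0)))

def reward(error, eps=1e-8):
    """Derive the policy reward from the discriminator instead of hand-tuned weights."""
    d = discriminator_score(error)
    return float(np.log(d + eps) - np.log(1.0 - d + eps))

small = reward(np.array([0.1, 0.0]))  # near-perfect tracking
large = reward(np.array([3.0, 0.0]))  # large tracking error
```

The point of the pattern: the mapping from error to reward is *learned* (here faked by `discriminator_score`), so the same machinery transfers across tasks without per-task reward shaping.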
Tianyu Li EasyPaperSniper
Tianyu Li EasyPaperSniper@SniperPaper·
Here’s an example validation on the Unitree G1 humanoid (averaged over three 2-star-level dances from Just Dance): 🤖 Robot avg. score: 5,707 🧑 Human avg. score: 9,361 Not the upper bound—future engineering can push robots much closer to matching human performance.
1
0
0
132
Tianyu Li EasyPaperSniper
Tianyu Li EasyPaperSniper@SniperPaper·
The Nintendo Switch is more than just fun 🎮—it can also advance humanoid research! @sehoonha @jeonghwankim0 @wontaek0820 @donghoonbaek @seungeun071 With Switch4EAI, we turn Just Dance into a benchmark for humanoid whole-body motion tracking: ✅ ~$400 teleoperation setup ✅ Built-in scoring ✅ Constantly updated motions ✅ Direct human-vs-robot comparison 📄 arXiv: arxiv.org/abs/2508.13444 🌐 website: easypapersniper.github.io/projects/Switc… 🕺🤖 #Robotics #EmbodiedAI #AI #Humanoid #Benchmarking #NintendoSwitch
1
2
10
436
C. Zhang
C. Zhang@ChongZitaZhang·
Why I keep saying quadrupeds are far more capable than bipeds (even most humans), and point feet can be better than wheeled/surface feet in complex terrains. youtu.be/QDU_FicBPDo?fe…
3
3
49
50.8K