Igor Vasiljevic
@vslevic

57 posts

ML @Woven_ToyotaJP; Senior Research Scientist @ToyotaResearch. Surf @ Santa Cruz and Ichinomiya. PhD @TTIC_Connect, @UChicago alum.

Tokyo, Japan · Joined August 2020
532 Following · 275 Followers
Igor Vasiljevic retweeted
Jean Mercat
Jean Mercat@MercatJean·
Releasing VLA Foundry: an open-source framework that unifies LLM, VLM, and VLA training in a single codebase. End-to-end control from language pretraining to action-expert fine-tuning — no more stitching together incompatible repos.
10 replies · 76 reposts · 490 likes · 73.5K views
Igor Vasiljevic
Igor Vasiljevic@vslevic·
My team at Woven by Toyota is hiring an ML intern (onsite in Tokyo) for this summer! Looking for experience with large-scale pre-training for perception models (bonus: 3D) and world models. Feel free to DM if interested. Apply: woven.toyota/en/careers/det…
0 replies · 4 reposts · 9 likes · 1.5K views
Igor Vasiljevic retweeted
Shun Iwase
Shun Iwase@s1wase·
My team is looking for highly motivated research interns this summer with strong backgrounds in 3D representations for robotics and scene understanding. If you’re interested, please feel free to DM me! jobs.lever.co/tri/95fba28c-9…
7 replies · 31 reposts · 243 likes · 28K views
Igor Vasiljevic retweeted
Zubair Irshad
Zubair Irshad@mzubairirshad·
🚀Thrilled to share what we’ve been building at TRI over the past several months: our first Large Behavior Models (LBMs) are here! I’m proud to have been a core contributor to the multi-task policy learning and post-training efforts. At TRI, we’ve been researching how LBMs can help robots learn faster, better, and more efficiently. The key takeaways:
✅ We built an evaluation pipeline to benchmark LBM performance with real statistical confidence
✅ Pre-training on hundreds of tasks makes models more robust—plus, we can teach new, complex tasks with 80% less data
✅ The bigger and more diverse the pre-training, the better the results
Check out our overview video, webpage and paper for more details:
✨ youtube.com/watch?v=DeLpnT…
🌎 toyotaresearchinstitute.github.io/lbm1/
📄 arxiv.org/pdf/2507.05331
We hope this work helps move the field of robotics forward!
Russ Tedrake@RussTedrake

TRI's latest Large Behavior Model (LBM) paper landed on arxiv last night! Check out our project website: toyotaresearchinstitute.github.io/lbm1/ One of our main goals for this paper was to put out a very careful and thorough study on the topic to help people understand the state of the technology, and to share a lot of details for how we're achieving it. youtube.com/watch?v=BEXFnr…

3 replies · 27 reposts · 185 likes · 20.2K views
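The LBM thread above highlights benchmarking policy performance "with real statistical confidence." As an illustrative sketch only (not TRI's actual evaluation pipeline), one standard way to attach confidence to a rollout success rate is a Wilson score interval on the binomial outcome; the function name `wilson_interval` and the example counts are my own:

```python
import math

def wilson_interval(successes, trials, z=1.96):
    """95% Wilson score interval for a binomial success rate.

    More reliable than the naive p +/- z*sqrt(p(1-p)/n) interval
    at the small trial counts typical of real-robot evaluation.
    """
    if trials == 0:
        return (0.0, 1.0)
    p = successes / trials
    denom = 1 + z**2 / trials
    center = (p + z**2 / (2 * trials)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2))
    return (max(0.0, center - half), min(1.0, center + half))

# e.g. a policy that succeeded on 42 of 50 rollouts
lo, hi = wilson_interval(42, 50)
```

With only 50 rollouts the interval is wide (roughly 0.71 to 0.92 here), which is exactly why claims like "80% less data" need interval-aware comparisons rather than point estimates.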
Igor Vasiljevic retweeted
Sergey Zakharov
Sergey Zakharov@ZakharovSergeyN·
Excited to share our new work on multi-object scene completion and grasp pose estimation from a single RGB-D image! Kudos to @s1wase and the incredible team from @ToyotaResearch, @WbyT_Tech, and @CarnegieMellon. Come chat with us at #CVPR2025 to learn more.
Shun Iwase@s1wase

#CVPR2025 starts in two days, and I can’t wait to share our new work! 🎉 We present ZeroGrasp, a unified framework for 3D reconstruction and grasp prediction that generalizes to unseen objects.
Paper📄: arxiv.org/abs/2504.10857
Webpage🌐: sh8.io/#/zerograsp
(1/4 🧵)

0 replies · 3 reposts · 8 likes · 1.1K views
Igor Vasiljevic retweeted
Zubair Irshad
Zubair Irshad@mzubairirshad·
Introducing ✨Posed DROID✨, results of our efforts at automatic post-hoc calibration of a large-scale robotics manipulation dataset. We provide:
🤖 ~36k calibrated episodes with good-quality extrinsic calibration
🦾 ~24k calibrated multi-view episodes with good-quality multi-view camera calibration
✅ Quality assessment metrics for all provided camera poses
To achieve this, we utilize:
1️⃣ Auto Segment Anything (SAM) based filtering (Camera-to-Base Calibration)
2️⃣ Tuned CtRNet-X for bringing in additional cams (Camera-to-Base Calibration)
3️⃣ Pretrained DUST3R with depth-based pose optimization (Camera-to-Camera Calibration)
Try it out at: droid-dataset.github.io
Learn more at:
🌐 arXiv: arxiv.org/pdf/2403.12945
📄 Blog: medium.com/p/4ddfc45361d3
🧵 1/n
3 replies · 26 reposts · 186 likes · 13.4K views
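The Posed DROID thread describes recovering camera-to-camera extrinsics. A minimal sketch of the core primitive behind that kind of calibration — a least-squares rigid alignment (Kabsch/Procrustes) between corresponding 3D points seen from two cameras — is below; the function name `rigid_align` and the toy data are my own, not the project's code:

```python
import numpy as np

def rigid_align(P, Q):
    """Find rotation R and translation t minimizing ||R P_i + t - Q_i||^2
    (Kabsch algorithm), a standard building block for extrinsic calibration."""
    cP, cQ = P.mean(0), Q.mean(0)
    H = (P - cP).T @ (Q - cQ)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # correct for a possible reflection in the SVD solution
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t

# toy check: recover a known rotation about z plus a translation
rng = np.random.default_rng(1)
P = rng.normal(size=(20, 3))                  # points in camera-A frame
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
Q = P @ R_true.T + np.array([0.5, -1.0, 2.0])  # same points in camera-B frame
R, t = rigid_align(P, Q)
```

In practice the correspondences would come from something like DUST3R's predicted point maps rather than noiseless synthetic points, and a robust loss would replace plain least squares.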
Igor Vasiljevic retweeted
Sedrick Keh
Sedrick Keh@sedrickkeh2·
1/ DeepSeek-VL is trained from DeepSeek LLM
Qwen-VL is trained from Qwen-7B
PaliGemma is trained from Gemma-2B
Is this really the best way to train a VLM? What if we had access to model checkpoints -- would it be better to train with images before the LLM fully converges? 🧵
5 replies · 10 reposts · 31 likes · 2.9K views
Igor Vasiljevic retweeted
Sedrick Keh
Sedrick Keh@sedrickkeh2·
We're seeing more and more that small models trained on high-quality datasets can perform very well. Together with our collaborators at DCLM, we trained strong 1B models and openly release everything! Check it out at huggingface.co/TRI-ML/DCLM-1B
Achal Dave@achalddave

Excited to share our new-and-improved 1B models trained with DataComp-LM!
- 1.4B model trained on 4.3T tokens
- 5-shot MMLU 47.5 (base model) => 51.4 (w/ instruction tuning)
- Fully open models: public code, weights, dataset!

0 replies · 2 reposts · 10 likes · 1.2K views
Igor Vasiljevic retweeted
Achal Dave
Achal Dave@achalddave·
As always, this wouldn't be possible without all the DataComp-LM collaborators and a special thanks to @ToyotaResearch, Apple, and UW!
0 replies · 1 repost · 7 likes · 721 views
Igor Vasiljevic retweeted
Achal Dave
Achal Dave@achalddave·
Excited to share our new-and-improved 1B models trained with DataComp-LM!
- 1.4B model trained on 4.3T tokens
- 5-shot MMLU 47.5 (base model) => 51.4 (w/ instruction tuning)
- Fully open models: public code, weights, dataset!
3 replies · 29 reposts · 114 likes · 30.6K views
Igor Vasiljevic retweeted
Achal Dave
Achal Dave@achalddave·
We've publicly released our DataComp-LM models: truly open 1B and 7B models that are competitive with state-of-the-art (llama3, qwen2, gemma, ...) on most benchmarks, but with a public training recipe, dataset, and code! (1/3)
1 reply · 14 reposts · 56 likes · 6.4K views
Igor Vasiljevic retweeted
Sedrick Keh
Sedrick Keh@sedrickkeh2·
- tons of new cool work on large rnns (@RWKV_AI, mamba2 @tri_dao @_albertgu, just read twice @simran_s_arora, etc)!
- but pretraining is expensive
- our recipe for linearizing llms into rnns was accepted to @COLM_conf! #COLM2024
- we train SOTA rnns & show limitations of rnns
1 reply · 8 reposts · 38 likes · 4.5K views
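The tweet above is about linearizing LLMs into RNNs. The core identity it relies on — causal linear attention admits an equivalent recurrent form with a fixed-size state — can be verified in a few lines. This is a generic sketch of that equivalence, not the paper's recipe; the feature map `phi` and all names here are my own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 6, 4
Q, K, V = rng.normal(size=(3, T, d))

def phi(x):
    # a positive feature map (assumption; papers use various choices)
    return np.maximum(x, 0) + 1e-3

def parallel(Q, K, V):
    """Attention-style form: out_t = sum_{s<=t} phi(q_t).phi(k_s) v_s / norm."""
    A = phi(Q) @ phi(K).T                    # T x T similarities
    A = A * np.tril(np.ones((T, T)))         # causal mask
    return (A @ V) / A.sum(-1, keepdims=True)

def recurrent(Q, K, V):
    """RNN form: carry a d x d running state S and a normalizer z."""
    S, z, out = np.zeros((d, d)), np.zeros(d), []
    for q, k, v in zip(phi(Q), phi(K), V):
        S += np.outer(k, v)                  # accumulate key-value memory
        z += k                               # accumulate normalizer
        out.append(q @ S / (q @ z))
    return np.array(out)

assert np.allclose(parallel(Q, K, V), recurrent(Q, K, V))
```

The recurrent form is what makes "llms into rnns" attractive: constant memory per step at inference, at the cost of the expressivity limitations the thread alludes to.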
Igor Vasiljevic retweeted
Vaishaal Shankar
Vaishaal Shankar@Vaishaal·
I am really excited to introduce DataComp for Language Models (DCLM), our new testbed for controlled dataset experiments aimed at improving language models. 1/x
7 replies · 79 reposts · 274 likes · 120.1K views
Igor Vasiljevic retweeted
Achal Dave
Achal Dave@achalddave·
Check out DataComp for language models! Open data, open code, open training recipe, and close to Llama3-8B performance. This has been a labor of love over the last year, a huge thanks to all the collaborators for helping make this happen!
Vaishaal Shankar@Vaishaal

I am really excited to introduce DataComp for Language Models (DCLM), our new testbed for controlled dataset experiments aimed at improving language models. 1/x

1 reply · 10 reposts · 27 likes · 4.4K views
Igor Vasiljevic retweeted
Zhenjun Zhao
Zhenjun Zhao@zhenjun_zhao·
Self-Supervised Geometry-Guided Initialization for Robust Monocular Visual Odometry
Takayuki Kanai, @vslevic, @vitorguizilini, Kazuhiro Shintani
tl;dr: zero-shot monocular depth estimation -> geometric prior -> initialize dense bundle adjustment
arxiv.org/pdf/2406.00929
0 replies · 4 reposts · 49 likes · 3.8K views
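The tl;dr above chains zero-shot depth into a geometric prior for bundle adjustment. A minimal sketch of the first link in that chain — backprojecting a predicted depth map through the pinhole model to get 3D points that could seed an optimizer — is below. This is an illustrative toy, not the paper's pipeline; the function name, intrinsics, and constant depth map are my own:

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Lift a depth map to a 3D point cloud in the camera frame
    via the inverse pinhole model: X = (u-cx) Z / fx, Y = (v-cy) Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)   # (h, w, 3)

# toy "network prediction": a flat plane 2m in front of the camera
depth = np.full((4, 4), 2.0)
pts = backproject(depth, fx=100.0, fy=100.0, cx=2.0, cy=2.0)
```

In the actual method these points would act as a geometric prior that initializes dense bundle adjustment, which then refines both structure and camera poses jointly.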
Igor Vasiljevic retweeted
Zubair Irshad
Zubair Irshad@mzubairirshad·
Starting the #RoboNeRF workshop at #icra2024 with our first speaker @leto__jean. Jeanette's talk is on Grasping with NeRF! Come check it out at Conference Center 419!
0 replies · 1 repost · 5 likes · 585 views