Muhammed Kocabas
@mkocab_
142 posts
Joined December 2016
667 Following · 365 Followers
Muhammed Kocabas retweeted
Xiaoming Zhao @xmzhao_
Most 3D representations capture shape or texture, but rarely both, especially view-dependent effects like reflections. Check out LiTo: a set of latent tokens that capture both geometry and appearance for high-quality image-to-3D generation. apple.github.io/ml-lito (1/n)
Muhammed Kocabas retweeted
Oncel Tuzel @OncelTuzel
LiTo: Surface Light Field Tokenization (ICLR 2026) — new work from Apple MLR. LiTo learns a unified 3D representation of geometry + view-dependent appearance, capturing effects like specular highlights & Fresnel reflections, enabling high-fidelity 3D generation from a single image.
Muhammed Kocabas retweeted
Yao Feng @YaoFeng1995
I recently had a spine fracture and spent about a month in the hospital. It was humbling (and a bit embarrassing) to ask the nurses for help every time, even for small tasks like getting up, reaching for things, or going to the restroom, especially knowing how busy they are. I couldn’t help but imagine how helpful it would be to have a robot that could assist in such moments.

This experience made me appreciate even more what we’ve built with GentleHumanoid — a robot that can interact gently, safely, and naturally, and hopefully one day help take care of us when we need it most.

Grateful for the collaboration with @Axell_wppr, Baiyu, @michaelpiseno_, @zhenanbao, and Karen Liu, and to everyone who supported me during this time 🙏 🩵

GentleHumanoid: Building Robots That Care 🤖 gentle-humanoid.axell.top
Qingzhou Lu @Axell_wppr

Excited to present GentleHumanoid: a whole-body control policy with upper-body compliance and tunable force limits for safe, natural human & object interaction. ⚡ONE policy for diverse tasks and compliance levels. 👉Website: gentle-humanoid.axell.top

Muhammed Kocabas retweeted
Michael Black @Michael_J_Black
The BEDLAM2.0 dataset (B2) is here, just in time to train your 3D human pose and shape estimation methods for CVPR. B2 goes beyond BEDLAM (B1) to include widely varied and natural camera motions and fields of view, more diverse body shapes, strand-based hair, more garments, shoes, more body motions, and more 3D scenes. Compared with B1, training on B2 produces more accurate 3D human pose, resulting in SOTA accuracy, particularly for estimates in world coordinates. B2 lets you jointly train camera motion and human motion regressors, and we also provide depth maps. Check out bedlam2.is.tuebingen.mpg.de for data, code, dataset statistics, and much more. BEDLAM2.0 will appear in the 2025 NeurIPS Datasets and Benchmarks Track. Joint work with Joachim Tesch, @gfgbec, Prerana Achar, @AYiannakidis, @mkocab_, @PriyankaP1201.
Kwang Moo Yi @kwangmoo_yi
Luo et al., "Self-diffusion for Solving Inverse Problems" Pretty much a deep image prior for denoising models. Without ANY data, with a single image, you can train a denoiser via diffusion training, and it just magically learns to solve inverse problems.
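The "deep image prior" reading above can be illustrated with a toy experiment: train a denoiser on nothing but re-noised copies of a single image, then check that it helps on a fresh noisy copy. This is a minimal numpy sketch with a plain linear model and one fixed noise level — a crude stand-in, not the actual self-diffusion training loop from Luo et al.:

```python
import numpy as np

rng = np.random.default_rng(0)

# One "clean" image (toy 8x8, flattened) -- the only data we ever see.
x = rng.random(64)

# Linear denoiser f(y) = W @ y, trained by repeatedly re-noising the
# single image and regressing back to it (denoising objective at a
# single noise level, rather than a full diffusion schedule).
W = np.zeros((64, 64))
sigma, lr = 0.3, 0.02
for _ in range(2000):
    y = x + sigma * rng.normal(size=64)   # fresh noisy copy each step
    err = W @ y - x
    W -= lr * np.outer(err, y)            # gradient of 0.5*||W y - x||^2

# A held-out noisy observation should land closer to x after denoising.
y_test = x + sigma * rng.normal(size=64)
err_noisy = float(np.linalg.norm(y_test - x))
err_denoised = float(np.linalg.norm(W @ y_test - x))
print(err_denoised, "<", err_noisy)
```

The learned W converges toward a rank-one Wiener-style filter for this particular image, which is the sense in which a single image can act as its own prior for inverse problems.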
Muhammed Kocabas retweeted
Michael Black @Michael_J_Black
SMPL is 10 years old and has done what we hoped — it changed the way the field estimates and models 3D humans and their motion. I’m delighted that the original team has been recognized today at @ICCVConference with the Mark Everingham Prize. The prize is given to individuals or teams who have worked to further progress in the computer vision community as a whole.

Mark Everingham understood that to have an impact, it is not enough to simply publish a paper. SMPL’s success is due to lots of hard work to provide the community with code, data, and support. My deepest thanks go out to all the members of the @PerceivingSys department who have supported SMPL and related technology over the years. It has been a team effort of many dedicated people and we share this award with you.

Mark understood that big changes require community effort. Consequently my big thanks go to all the users of SMPL and related tools. You have pushed the field forward as a community in ways that no small team could. I’m constantly inspired by your work.

Computer vision has changed a lot in 10 years but people keep finding new uses for SMPL, most recently in training humanoid robots. There are many more applications to come in games, interactive entertainment, sports, and biomechanics.

Congratulations to my coauthors Matt Loper, @naureenmahmood, Javier Romero and @GerardPonsMoll1. smpl.is.tue.mpg.de
Muhammed Kocabas retweeted
Simo Ryu @cloneofsimo
This is aging so well. We've reached the point where humans can make mind-drugs made out of bits that can (and will) hyper-optimize themselves across time. Society will collapse if we poison our children. Stop this.
will depue @willdepue

do not build Infinite Jest (V), do not build the infinite AI TikTok slop machine, do not build the P-zombie AI boy/girlfriend, do not build the child-eating short-form video blackhole, do not build the human-feedback-optimized diffusion transformer porn generator. save yourselves

Muhammed Kocabas retweeted
Meshcapade @meshcapade
✨ Did you know? Every time you download a motion from our Meshcapade platform, you also get the camera extracted 🎥 Perfect sync between movement and perspective—straight out of MoCapade! No extra setup, no guesswork, just plug & play for your 3D shots. Ready to give your animations cinematic flow? 🚀 #MotionCapture #3DAnimation #Cinematography #SMPL
Muhammed Kocabas retweeted
alerender @alerender_mocap
#stuntdanyramos These types of dynamic movements, like backflips, are precisely the ones that most often challenge motion capture systems, making them an excellent example. I also used some characters from the #iclone hashtag (Reallusion).
Muhammed Kocabas retweeted
Meshcapade @meshcapade
MoCapade 3.5 is officially live this week on our platform! 🚀
🎭 Facial expression tracking
👣 Foot locking
Capture full-body motion and facial expressions — no suits, no markers, just one camera. Any camera. Experience the next generation of markerless motion capture. 🎉 Come try it out live at #SIGGRAPH2025! #MotionCapture #3DBody #Animation #SMPL
Muhammed Kocabas @mkocab_
@StavrosDiol Super cool! Is there a timeline for the code release? The link in the paper is not a valid GitHub link.
Muhammed Kocabas retweeted
Humphrey Shi @humphrey_shi
10 years ago, I recruited 4 new PhD students—including Jiahui—with the late Prof Tom Huang, thanks to new industry funding. Proud to see them shaping AI’s frontier. Today, academia struggles to fund the next generation. Industry—your support & partnership matter more than ever!
World of Statistics @stats_feed

Real Madrid spent $80M to sign Ronaldo from Manchester United in 2009. Meta paid $100M to sign Jiahui Yu from OpenAI in 2025.

Muhammed Kocabas @mkocab_
@andrew_n_carr Those carving artifacts don't only happen with generated videos. They occur in real videos too. Here are results from real videos showing the same patterns. This is a failure of the point tracker. Assuming you used AllTracker, @AdamWHarley could probably speak to this better.
Andrew Carr 🤸 @andrew_n_carr
Synthetically generated videos have fewer and fewer visible artifacts. However, they still have substantial noise. One interesting way to detect this noise is to use dense pixel tracking. Here are some Veo 3 videos with each pixel tracked over time! Look at that pattern.
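The detection idea in this exchange — dense point tracks from generated video carry high-frequency temporal noise that smooth real motion lacks — can be sketched as a simple jitter statistic over tracked points. The data here is a hypothetical toy trajectory, not AllTracker or Veo output:

```python
import numpy as np

rng = np.random.default_rng(1)

def track_jitter(tracks):
    """tracks: (T, N, 2) array of per-frame 2D positions of N tracked points.
    Returns the mean deviation of each point from the midpoint of its
    temporal neighbors; smooth real motion keeps this residual small."""
    mid = 0.5 * (tracks[:-2] + tracks[2:])
    return float(np.abs(tracks[1:-1] - mid).mean())

# Smooth (real-video-like) circular tracks vs. the same tracks with
# per-frame jitter, a toy stand-in for generated-video temporal noise.
t = np.linspace(0, 1, 50)[:, None, None]
smooth = np.concatenate([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)], axis=2)
smooth = np.repeat(smooth, 10, axis=1)        # 10 points on the same path
noisy = smooth + 0.05 * rng.normal(size=smooth.shape)

print(track_jitter(smooth), "<", track_jitter(noisy))
```

The statistic is just a second-difference magnitude, so any per-frame positional noise inflates it while coherent motion (even fast motion) contributes only its curvature.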
Muhammed Kocabas retweeted
Michael Black @Michael_J_Black
Public service announcement -- if you're making a new dataset of human motions in SMPL-X format using a marker-based system, please use MoSh. If you first compute a skeleton and then transfer this to SMPL-X, you will lose a lot of realism. MoSh fits SMPL-X to the markers directly, giving optimal shape and pose parameters. This will make your dataset much more useful to the community. There's code online (github.com/nghorbani/mosh…) but, if you're having trouble doing it yourself, reach out and we'll help.
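The point of this PSA (solve for the body model's parameters against the markers directly, rather than through an intermediate skeleton) can be illustrated with a toy linear "body model". The basis B and parameter count below are hypothetical stand-ins, not the real SMPL-X blend shapes that MoSh optimizes:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy linear body model: marker positions are a linear function of shape
# parameters beta (a stand-in for how blend shapes move model vertices).
n_markers, n_betas = 30, 5
B = rng.normal(size=(n_markers * 3, n_betas))     # marker basis
mean_markers = rng.normal(size=n_markers * 3)     # template marker layout

# Synthetic mocap observation: markers generated from a ground-truth beta.
beta_true = rng.normal(size=n_betas)
observed = mean_markers + B @ beta_true + 0.01 * rng.normal(size=n_markers * 3)

# Direct fit: least-squares solve for the parameters that best explain the
# markers, with no intermediate skeleton representation in between.
beta_fit, *_ = np.linalg.lstsq(B, observed - mean_markers, rcond=None)
print(np.round(beta_fit - beta_true, 3))
```

In this linear toy the direct fit recovers the parameters up to marker noise; the real MoSh problem is nonlinear (pose and marker placement are optimized too), but the principle is the same — an intermediate skeleton stage discards shape information the markers actually contain.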
Muhammed Kocabas retweeted
Meshcapade @meshcapade
Next in our #CVPR2025 lineup: PromptHMR 👀✨ Drop a video and watch it blossom into crisp 3D people, even when limbs are hidden or several folks share the frame. Humans reconstructed in world coordinates with state-of-the-art accuracy 💯 By Yufu Wang, Yu Sun, Priyanka Patel, Kostas Daniilidis, Michael J. Black and Muhammed Kocabas.
Why it matters:
• One-click lifelike 3D bodies
• Keeps tracking when limbs slip behind objects
• Understands interactions in crowded scenes
• Anchors every person precisely in real-world space 🎯
Artists, animators, game devs and researchers can plug PromptHMR into their pipeline and generate digital humans in minutes. 🎥 Catch our DiffLocks video for hair magic, and visit booth 1333 at @CVPR to see PromptHMR live and chat with the team. 📄 Paper link in the thread/comment. #3D #DigitalHuman #ComputerVision #AI #MachineLearning #SMPL #MotionCapture
Muhammed Kocabas retweeted
Yan Zhang @cnsdqzyz
We (@meshcapade) have new internship positions on human-to-humanoid motion transfer. Recommendations are heartily appreciated! More details are ⬇️
Muhammed Kocabas retweeted
Meshcapade @meshcapade
Faces, Expressions, Hair—Brought to Life with Meshcapade. 🎭✨ From facial animations captured straight from video to realistic 3D hair strands reconstructed from a single image, bringing digital humans to life has never been this seamless. See it in action at #GDC2025! Stop by Booth C1821 and experience how motion, detail, and expression come together—fast, effortless, and ready for Unreal. #UnrealEngine #3DAnimation #MotionCapture #FacialAnimation #genAI #Meshcapade