Noam Rotstein
@NoamRot
170 posts · Joined Nisan 2009
624 Following · 114 Followers

Pinned Tweet
Noam Rotstein @NoamRot
📢📢📢 Check out our new paper! We demonstrate how to control motion in any image-to-video model with no extra training. Fast, simple, high-quality, and model-agnostic. 🔗 time-to-move.github.io
Assaf Singer @assaf_singer

We present Time-to-Move (TTM)! a training-free, plug-and-play method for precise motion control in video diffusion. Unlike prior training-based methods, TTM works with any backbone at no extra cost🔥 Page: time-to-move.github.io [1/4] @NoamRot @orlitany @mann_amir_

0 replies · 1 repost · 10 likes · 673 views
Noam Rotstein reposted
Saar Huberman @HubermanSaar
SemanticMoments - Semantic motion similarity How do you find videos with similar motion? It’s harder than it sounds. Models like VideoMAE and V-JEPA encode motion, but their embeddings are dominated by appearance. So how do we build a compact embedding for motion similarity? Joint work with @kfir99 @OPatashnik @BenaimSagie @MokadyRon
8 replies · 29 reposts · 180 likes · 26.2K views
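Huberman's point that encoders like VideoMAE and V-JEPA produce embeddings dominated by appearance can be illustrated with a toy sketch: embedding frame *differences* instead of frames cancels most static appearance. This is a hypothetical NumPy illustration, not the SemanticMoments method; `motion_descriptor` and its 4x4 pooling grid are invented for this example.

```python
import numpy as np

def motion_descriptor(frames: np.ndarray) -> np.ndarray:
    """Toy motion embedding: absolute frame differences (static
    appearance cancels out), pooled over a coarse 4x4 spatial grid.
    frames: (T, H, W) with H and W divisible by 4."""
    diffs = np.abs(np.diff(frames, axis=0))                     # (T-1, H, W)
    t, h, w = diffs.shape
    pooled = diffs.reshape(t, 4, h // 4, 4, w // 4).mean(axis=(0, 2, 4))
    v = pooled.ravel()                                          # 16-dim motion-energy map
    return v / (np.linalg.norm(v) + 1e-8)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b)

# Same appearance, different motion: a static clip vs a horizontal pan.
rng = np.random.default_rng(0)
base = rng.random((32, 32))
still = np.stack([base] * 8)
moving = np.stack([np.roll(base, s, axis=1) for s in range(8)])
```

The static clip maps to the zero vector while the panning clip gets a nonzero motion-energy signature, even though both share identical appearance; a real motion encoder would replace the hand-made descriptor.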
Noam Rotstein reposted
Mickmumpitz @mickmumpitz
The potential of this tech is just scratching the surface! Watch this Blender layout transform into a final render using my Time-to-Move (TTM) workflow:
2 replies · 2 reposts · 24 likes · 2.1K views
Noam Rotstein reposted
enigmatic_e @8bit_e
I wanted to try out @mickmumpitz ComfyUI workflow that lets you animate movement by manually shifting objects or images in the scene. Link to his tutorial below 👇
61 replies · 273 reposts · 2.3K likes · 323.9K views
Noam Rotstein reposted
Lumana @LumanaHQ
This just in: Lumana is now the world’s fastest-growing AI video security company! 50,000 cameras surpassed in just over a year since emerging from stealth. Read more: bit.ly/3XS50u2 #AIVideoSecurity #AI
0 replies · 4 reposts · 9 likes · 1.8K views
Noam Rotstein reposted
Navve wasserman @NavveW
Thanks @rohanpaul_ai for sharing our new work! Automatic Interpretability Pipeline + Human Brain Data = 🧠🔍🔥 See how we use a large-scale automatic interpretability pipeline to discover what concepts are represented in the human brain. Page & Demo: navvewas.github.io/BrainExplore/
Rohan Paul @rohanpaul_ai

This paper uses AI-style interpretability tools to map which images trigger which visual concepts in the human brain. It scales by adding about 120K extra images using predicted functional magnetic resonance imaging (fMRI) signals.

The problem is that fMRI data has about 40K voxels per person, each voxel is a tiny 3D pixel, and manual labeling does not scale. The pipeline first breaks each brain region’s activity into patterns that can be mixed to rebuild any response, and a sparse autoencoder pushes each response to use only a few patterns. For every pattern, it finds the top images that trigger it, captions those images, and has a text model suggest shared meanings like “kitchen” or “hands in action”. To avoid random labels, it builds a big concept list, marks each image as true or false for each concept, then keeps the concept that shows up most consistently in that pattern’s top images.

The payoff is a searchable map from image concepts to brain areas, plus a fair way to compare breakdown methods using held-out real scans.

Paper link: arxiv.org/abs/2512.08560
Paper title: "BrainExplore: Large-Scale Discovery of Interpretable Visual Representations in the Human Brain"

1 reply · 2 reposts · 25 likes · 3.7K views
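The decomposition step Paul describes (rebuild each response from only a few patterns) can be sketched as top-k sparse coding against a fixed pattern dictionary. This is a hypothetical NumPy illustration of the idea, not the paper's sparse autoencoder: the dictionary is random, and `topk_sparse_code` with its dimensions is invented for this example.

```python
import numpy as np

def topk_sparse_code(responses: np.ndarray, dictionary: np.ndarray, k: int = 3):
    """Encode each response vector as a sparse mix of patterns: keep
    only the k strongest pattern activations, zero out the rest.
    responses: (N, D); dictionary: (P, D), one pattern per row."""
    acts = responses @ dictionary.T                   # (N, P) pattern activations
    keep = np.argsort(-np.abs(acts), axis=1)[:, :k]   # indices of the top-k patterns
    sparse = np.zeros_like(acts)
    rows = np.arange(acts.shape[0])[:, None]
    sparse[rows, keep] = acts[rows, keep]
    recon = sparse @ dictionary                       # rebuild response from few patterns
    return sparse, recon

# A synthetic "voxel response" built from just two dictionary patterns.
rng = np.random.default_rng(1)
patterns = rng.standard_normal((16, 40))
patterns /= np.linalg.norm(patterns, axis=1, keepdims=True)
resp = 2.0 * patterns[0] + 0.5 * patterns[3]
sparse, recon = topk_sparse_code(resp[None], patterns, k=3)
```

The sparse code recovers the dominant pattern; in the paper's setting, the top-activating *images* for each such pattern are what get captioned and concept-labeled.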
Amit Linhard @AmitLinhard
Guy Melamed is basically Thierry Henry.
1 reply · 0 reposts · 1 like · 67 views
Noam Rotstein reposted
derrick has started yet another project
I think Time to Move might be my favorite recent AI video tool, and I haven't seen a lot of talk about it. Maybe because it requires a lot of up-front motion-path generation? But it makes controllable video so much easier. 1st video is the motion path provided, 2nd is the output.
1 reply · 1 repost · 9 likes · 810 views
Nathan Shipley @CitizenPlain
Playing with the Time To Move test monkey in ComfyUI. 🙃 Cool to see the little secondary and joint animation it adds #aivideo
1 reply · 2 reposts · 15 likes · 2.3K views
Noam Rotstein reposted
@alexgnewmedia
This is honestly so cool. Using one of the free @ComfyUI templates (wanvideo2_2_I2V_A14B_TimeToMove_example), you can turn a simple base animation (left video) into a fully realistic motion clip (on the right). I even built a small app with @Google Gemini 3 to generate the basic animation… and the result looks incredible. Just wow.
1 reply · 4 reposts · 37 likes · 4.2K views
Wildminder @wildmindai
WanVideoWrapper now supports TimeToMove for Wan 2.2. It’s fun to play with: move objects with Cut-and-Drag on the image to get a fully animated video. github.com/kijai/ComfyUI-…
8 replies · 69 reposts · 459 likes · 43.9K views
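The Cut-and-Drag idea (cut an object out of the still image, drag it along a path, and feed the resulting crude clip to the model as motion conditioning) can be sketched in a few lines. This is a toy NumPy version on a grayscale array; `cut_and_drag` and the mean-fill for the hole are invented for illustration and are not the ComfyUI node's actual logic.

```python
import numpy as np

def cut_and_drag(image: np.ndarray, box, path):
    """Build a crude motion-conditioning clip: cut a box out of the
    image and paste it at each (dy, dx) offset along `path`.
    image: (H, W); box: (y0, y1, x0, x1); path: list of offsets."""
    y0, y1, x0, x1 = box
    patch = image[y0:y1, x0:x1].copy()
    frames = []
    for dy, dx in path:
        f = image.copy()
        f[y0:y1, x0:x1] = image.mean()                # crude fill where the object was
        f[y0 + dy:y1 + dy, x0 + dx:x1 + dx] = patch   # object at its new position
        frames.append(f)
    return np.stack(frames)                           # (T, H, W) rough warp video

img = np.arange(64 * 64, dtype=float).reshape(64, 64)
clip = cut_and_drag(img, (10, 20, 10, 20), [(0, s) for s in range(0, 12, 4)])
```

The resulting clip is deliberately rough; the point of TTM-style conditioning is that the diffusion model repairs the artifacts while keeping the motion.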
Noam Rotstein reposted
Kwang Moo Yi @kwangmoo_yi
Singer and Rotstein et al., "Time-to-Move: Training-Free Motion Controlled Video Generation via Dual-Clock Denoising". Make a rough warp, push it through an image-to-video model, denoising jointly with it up to a chosen timestep, then let the model finish the rest without interference.
4 replies · 9 reposts · 95 likes · 25.8K views
Noam Rotstein reposted
Wildminder @wildmindai
Gen video by moving objects with Time-to-Move: precise video motion control; adds object and camera control to any diffusion model, such as Wan 2.2, and performs on par with training-based methods at zero extra cost. time-to-move.github.io
0 replies · 3 reposts · 19 likes · 1.5K views
Assaf Singer @assaf_singer
We present Time-to-Move (TTM)! a training-free, plug-and-play method for precise motion control in video diffusion. Unlike prior training-based methods, TTM works with any backbone at no extra cost🔥 Page: time-to-move.github.io [1/4] @NoamRot @orlitany @mann_amir_
5 replies · 15 reposts · 74 likes · 20K views
Noam Rotstein @NoamRot
@cocolitron @_akhaliq ‘Go With the Flow’ is a great paper, but it’s based on heavy training. TTM is training-free while also enabling appearance-based control!
1 reply · 0 reposts · 3 likes · 103 views
Ben @cocolitron
@_akhaliq Wasn't there a very similar paper by the Netflix team a while back?
1 reply · 0 reposts · 2 likes · 566 views
AK @_akhaliq
Time-to-Move: Training-Free Motion Controlled Video Generation via Dual-Clock Denoising
10 replies · 31 reposts · 218 likes · 19.4K views