Aseem Agarwala

223 posts

@aseemaa

Research Scientist, Head of the Adobe Video AI Lab

Seattle, WA · Joined August 2010
1.2K Following · 361 Followers
Aseem Agarwala retweeted
Adobe Video @AdobeVideo
Want to streamline how you rotoscope? We thought so. Meet Object Matte, now live in #AfterEffects. You can now isolate and track your subjects with this single-click selection tool, including the option to refine further. Say goodbye to manual bottlenecks and hello to intelligent, intuitive masking and rotoscoping. Try it today in After Effects! Learn more: adobe.ly/4mQzp7P
Aseem Agarwala retweeted
Yao-Chih Lee @YaoChihLee
Excited to share our new work: Generative Video Motion Editing with 3D Point Tracks. We propose a framework that uses 3D point tracks to precisely edit both camera and object motion in a video, unlocking a wide range of new editing applications.
Aseem Agarwala retweeted
Adobe Video @AdobeVideo
Objectively, the new Object Mask Tool in #Premiere (beta) is revolutionary. In just one click, Premiere identifies and tracks subject-matter in your footage, with the option to refine further yourself. The latest beta also features more efficiency gains like improved shape masks. Download the beta today! adobe.ly/4hxJseZ
Aseem Agarwala retweeted
Adobe Research @AdobeResearch
Summer 2026 Research internship applications are now open! Collaborate with world-class researchers, engineers, and designers to push the frontiers of AI and creativity! 🔗 Apply now: adobe.ly/42M36hr
Aseem Agarwala retweeted
Howard Pinsky @Pinsky
Okay, Object Mask in the Premiere beta is pretty damn legit! 🤯
Aseem Agarwala retweeted
Cusuh @cusuh_
MotionCanvas presents a method for scene-aware, decoupled object and camera motion control for image-to-video generation. Come by the #SIGGRAPH2025 Video Generation session tomorrow @ 10:45am in West Building, Rooms 118-120 to learn more! motion-canvas25.github.io
Aseem Agarwala retweeted
Adobe Research @AdobeResearch
Adobe researchers shared a groundbreaking publication at #CVPR2025, introducing an experimental generative video propagation framework that applies edits in the first frame of a video to all the following frames in a reliable and consistent manner, all while preserving areas the user hasn’t edited. adobe.ly/3Zvh0ms
Aseem Agarwala retweeted
Adobe Research @AdobeResearch
Meet Project Fast Mask—what began as a research prototype at @AdobeResearch is now powering tools like Roto Brush 3 in After Effects and Remove Video Background in Adobe Express. See how this AI breakthrough is changing the way we edit videos! adobe.ly/4jvsQEI
Aseem Agarwala retweeted
Bilawal Sidhu @bilawalsidhu
Adobe Premiere is FINALLY getting media intelligence and semantic search! I've wanted this ability to pull up the right clip based on a simple description since the moment I tried it in Google Photos. Huge when you're trying to find that *one* b-roll clip.
Aseem Agarwala @aseemaa
Check out this great advertisement for Premiere Pro's new AI tools, starting with two that my lab made major contributions to: Generative Extend and Media Intelligence. Congrats to everyone involved! youtube.com/watch?v=KTBtE8…
Aseem Agarwala @aseemaa
@CliffordAsness I wish it were available without a financial advisor! Direct indexing is democratized at this point, and I hope long/short goes down the same path.
Clifford Asness @CliffordAsness
This is a fair summary of critiques and I’m not shooting the messenger. In fact I encourage taxable investors to read them, and if they want to be fooled by lying long-only dinosaurs who are losing to better implementations and desperate to stop the bleeding, then more power to them. These are a set of either really dumb, or outright lying, nonsense critiques. We will be responding. Cry havoc and let slip the dogs of superior after tax returns.
Brent Sullivan @TaxAlphaInsider

Some large investment managers tell me tax-aware long/short (130/30, 250/150) ain't what it's cracked up to be. They see the strategies as useful for only a tiny audience. In my latest blog, I list critiques of this white-hot strategy. taxalphainsider.com/p/is-tax-aware…

Aseem Agarwala retweeted
Kris Kashtanova @icreatelife
🎉 Adobe Firefly Video (beta) model is released and here for everyone to use 🎉 Go to the website and generate AI videos and share with me. I animated my own photographs that I was taking of myself through the years. It feels magical. #CommunityXAdobe
Aseem Agarwala retweeted
AK @_akhaliq
MotionCanvas: Cinematic Shot Design with Controllable Image-to-Video Generation
Aseem Agarwala retweeted
Bilawal Sidhu @bilawalsidhu
Adobe's new AI paper is pretty cool, and it's open source for a change. TransPixar can go directly from text prompt --> video footage with an alpha channel, meaning your clip is already "cut out" and ready to composite in your tool of choice. Useful in video editing tools like CapCut or Premiere where you can conjure up stock elements on demand (e.g. smoke effects, magical portals, explosions) and drag them right in. Equally fun for making memes in DingBoard or animated stickers in social apps like Instagram.

Now let's go a bit deeper:

→ You know how text-to-video AI has gotten pretty good lately, but everything it makes is solid RGB -- meaning no transparency. That's a pain if you want to composite the videos into other scenes or do VFX work.

→ While you can use something like Meta's SAM2 to segment out any AI video, these segmentation models tend to fall apart with volumetric effects like explosions, or finer structures like fur, hair, and foliage -- giving you a flickery mess that's hard to work with.

What makes TransPixar clever is how they tackled this:

→ Instead of building a new model from scratch, they figured out how to extend existing text-to-video AI models to handle transparency (the alpha channel) alongside the regular RGB output.

→ They did this by using some smart attention mechanisms and LoRA fine-tuning. And a remarkably small dataset -- just 484 training videos with transparency. That's tiny by AI standards.

→ For a change, Adobe has released the code on GitHub and it's free for non-commercial use. I expect others to embrace this approach and reproduce these capabilities with open and closed source video models.

This could be huge for indie VFX artists and content creators who need quick, high-quality, ready-to-composite elements. And yeah, meme lords about to have a lot of fun with this :)
Aseem Agarwala retweeted
Bilawal Sidhu @bilawalsidhu
Woot! Two REALLY cool features coming to Adobe Premiere. 1) Generative Extend available today in beta. Take any clip and extend it — with the video and audio automagically extended :)
Aseem Agarwala retweeted
Kris Kashtanova @icreatelife
Object select in video in Premiere Pro (beta) is insane and coming soon. #AdobeMAX
Aseem Agarwala retweeted
Bilawal Sidhu @bilawalsidhu
BREAKING: Adobe Firefly Video is "the first commercially safe video generation model" -- it supports text-to-video and image-to-video, and is designed for immaculate prompt coherence. Those sample generations look quite impressive -- excited to go hands on! I'm at Adobe MAX this week, so stay tuned for more updates on my feed: @bilawalsidhu