Dan Kondratyuk

510 posts

Dan Kondratyuk

@hyperparticle

Research Scientist working on Video Generation @LumaLabsAI. Prev. #VideoPoet @GoogleAI. I'm a developer who enjoys solving puzzles, one piece at a time.

Seattle, WA · Joined March 2015
613 Following · 2.2K Followers
Pinned Tweet
Dan Kondratyuk @hyperparticle
Today we are launching Dream Machine, our first AI model that generates cinematic and fluid videos from text instructions and images. I generated this 1-minute, 60 fps video entirely with our model.
Try Dream Machine → lumalabs.ai/dream-machine
Join us → lumalabs.ai/join
Replies: 25 · Reposts: 51 · Likes: 413 · Views: 38.8K
Dan Kondratyuk @hyperparticle
Our team has developed a new diffusion distillation technique that is much simpler and more robust than prior methods and scales well to large model training. We make the code and paper freely available at github.com/lumalabs/tvm
Luma@LumaLabsAI

Introducing Terminal Velocity Matching: a scalable, single-stage generative training method that delivers diffusion-level quality with 25× fewer inference steps, now trained at 10B+ scale. lumalabs.ai/blog/engineeri…

Replies: 0 · Reposts: 0 · Likes: 9 · Views: 1.1K
Dan Kondratyuk @hyperparticle
It took an incredible amount of energy to get here, but now we're ready to unleash Ray3, our new frontier video model with reasoning capabilities. I especially love the HDR video generations; the colors and lighting pop in ways that make SDR look dull. Check it out!
Luma@LumaLabsAI

This is Ray3. The world’s first reasoning video model, and the first to generate studio-grade HDR. Now with an all-new Draft Mode for rapid iteration in creative workflows, and state of the art physics and consistency. Available now for free in Dream Machine.

Replies: 1 · Reposts: 3 · Likes: 18 · Views: 2K
Yiheng Li @Yiheng_Li_Cal
🎉 Introducing Improved Immiscible Diffusion – Accelerating Diffusion Training by Reducing Its Miscibility.
🔥 Supported by detailed feature analysis, we further clarify that the miscibility problem, i.e. the mixing of diffusion paths of different images during training, reduces training efficiency.
🤔 Based on this, we design a new KNN implementation that is not only efficient (its cost is independent of batch size) but also performs well across diverse baseline models, especially in flow matching.
🤩 We hope the miscibility problem lights the way toward further improving diffusion training efficiency.
✈️ arxiv.org/abs/2505.18521
Replies: 7 · Reposts: 27 · Likes: 106 · Views: 16.4K
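Going by the tweet's description alone (I have not checked the paper's actual implementation, so `knn_noise_assignment` and its details are my guess at the idea), a batch-size-independent KNN noise assignment could work per sample: for each training image, draw k candidate Gaussian noises and keep the nearest one, so the diffusion paths of different images overlap less than under a purely random image-noise pairing.

```python
import torch

def knn_noise_assignment(images: torch.Tensor, k: int = 8) -> torch.Tensor:
    """For each image, draw k candidate Gaussian noises and keep the nearest
    one in L2 distance. Because the selection is per-sample, the cost is
    independent of batch size, unlike batch-level linear assignment."""
    b = images.shape[0]
    flat = images.reshape(b, 1, -1)                # (B, 1, D)
    cand = torch.randn(b, k, flat.shape[-1])       # (B, K, D) noise candidates
    dists = torch.cdist(flat, cand).squeeze(1)     # (B, K) L2 distances
    idx = dists.argmin(dim=1)                      # nearest candidate per image
    noise = cand[torch.arange(b), idx]             # (B, D)
    return noise.reshape_as(images)

# The training step would then use this noise in the usual objective,
# e.g. x_t = (1 - t) * x0 + t * noise for flow matching.
```

With k = 1 this degrades to standard random pairing; larger k biases each image toward nearby noise, which is the "reduced miscibility" effect the tweet describes.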
Dan Kondratyuk retweeted
Luma @LumaLabsAI
Introducing Modify Video. Reimagine any video. Shoot it in post with director-grade control over style, character, and setting. Restyle expressive performances, swap entire worlds, or redesign the frame to your vision. Shoot once. Shape infinitely.
Replies: 180 · Reposts: 608 · Likes: 6.5K · Views: 3.6M
Dan Kondratyuk @hyperparticle
Anyone out there using LLMs/Cursor to build ML code effectively? Looking for helpful tips and tricks to write PyTorch code faster.
Replies: 1 · Reposts: 0 · Likes: 2 · Views: 322
Dan Kondratyuk retweeted
David J Morris
Even I am astounded at these results! Using @LumaLabsAI Dream Machine Camera controls with Ray2 text-to-video. And I'll teach anyone interested in learning how I do it over the coming months (it's not hard). Extends up to 30 seconds in this!
Replies: 16 · Reposts: 13 · Likes: 153 · Views: 11.3K
Dan Kondratyuk retweeted
Luma @LumaLabsAI
3D Chalk Art – A New Perspective
Step into the illusion with #DreamMachine, where flat images become dimensional scenes. Powered by #Ray2 Camera Motion Concepts.
Replies: 30 · Reposts: 70 · Likes: 487 · Views: 58.3K
Dan Kondratyuk @hyperparticle
A fun test of what's possible with camera control. Generated with Ray2 flash.
Replies: 0 · Reposts: 0 · Likes: 5 · Views: 253
Dan Kondratyuk retweeted
Christopher Fryant @cfryant
Camera Controls for @LumaLabsAI Ray2 AI video is out now and IT IS GLORIOUS! First impressions: Just look at these results! Are you not entertained!?
Replies: 38 · Reposts: 63 · Likes: 538 · Views: 57K
Dan Kondratyuk @hyperparticle
This particular release has me excited. I've been trying out the new camera motions with Ray2 in Dream Machine, and they make it so much more fun to use.
Luma@LumaLabsAI

Introducing #Ray2 Camera Motion Concepts in #DreamMachine — 20+ precision-tuned camera motions designed for smooth cinematic control and great reliability. Concepts compose with each other, making hundreds of previously impossible camera moves possible. Available now.

Replies: 0 · Reposts: 0 · Likes: 7 · Views: 385
Dan Kondratyuk @hyperparticle
@naveenmarrii @jon_barron It's similar, but here I'm guessing they lean very heavily on the LLM to do most of the work, and the diffusion decoder just adds very fine-grained detail on top, which is something diffusion models excel at.
Replies: 0 · Reposts: 0 · Likes: 2 · Views: 52
naveen @naveenmarrii
@hyperparticle @jon_barron Isn't it somewhat similar to the DALL-E 2 approach, with an AR/diffusion prior + diffusion decoder?
Replies: 1 · Reposts: 0 · Likes: 0 · Views: 68
Jon Barron @jon_barron
Okay my working hypothesis for 4o image generation is that it is jointly performing autoregressive inference (raster scanline order) on an image pyramid at all scales simultaneously.
Jon Barron@jon_barron

Very interesting how 4o image generation appears to be some sort of combination of multiscale and autoregressive. At first I thought it was generating a coarse image and then just filling in fine details, but the coarse image itself seems to change during generation (shown here)

Replies: 36 · Reposts: 37 · Likes: 698 · Views: 99.5K
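Barron's hypothesis, autoregressive generation in raster order over every level of an image pyramid simultaneously, can be sketched as a token ordering. The function below is purely illustrative of that hypothesis; `pyramid_token_order` and its crude interleaving rule are invented here, not anything OpenAI has published.

```python
import torch
import torch.nn.functional as F

def pyramid_token_order(img: torch.Tensor, levels: int = 3):
    """Build an average-pooled image pyramid and produce one joint token
    ordering over all scales: positions are (level, row, col), sorted by
    spatial position first so coarse and fine scales advance together,
    rather than coarse-first-then-fine."""
    scales = [img]
    for _ in range(levels - 1):
        scales.append(F.avg_pool2d(scales[-1], 2))  # halve H and W each level
    positions = []
    for lvl, s in enumerate(scales):
        h, w = s.shape[-2:]
        positions += [(lvl, r, c) for r in range(h) for c in range(w)]
    # Crude interleave: raster position first, then level, so every scale
    # is revisited as the scan sweeps down the image.
    positions.sort(key=lambda p: (p[1], p[2], p[0]))
    return scales, positions
```

Under such an ordering, tokens at coarse levels keep being emitted throughout the sequence, which would explain the quoted observation that the coarse image itself changes during generation instead of being fixed before the fine details are filled in.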