logtd
@logtdx

dreaming of machines 💤

Joined October 2024
49 Following · 614 Followers

24 posts

logtd retweeted
Luma @LumaLabsAI
Uni-1 is here! A new kind of model that thinks and generates pixels simultaneously. Less artificial. More intelligent.

logtd retweeted
Luma @LumaLabsAI
Introducing Uni-1, Luma’s first unified understanding and generation model, our next step on the path towards unified general intelligence. lumalabs.ai/uni-1
[image]

logtd retweeted
Luma @LumaLabsAI
Stop guessing. Start directing. Ray3 Modify is now in Dream Machine. Edit and reimagine videos with all-new precise keyframe and character reference controls. Your vision, reimagined. Supercharge your production with rapid retouching, precise element swapping, and scene redesign.

logtd retweeted
Luma @LumaLabsAI
Introducing Modify Video. Reimagine any video. Shoot it in post with director-grade control over style, character, and setting. Restyle expressive performances, swap entire worlds, or redesign the frame to your vision. Shoot once. Shape infinitely.

Roman szczesny @roman_szczesny
@logtdx Nice! How did you achieve it? Is there an implementation for ComfyUI?

logtd @logtdx
experimenting with regional prompting on the Hunyuan video model, giving some inception vibes
left side prompt: cyberpunk & pan left
right side prompt: steampunk & pan right
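A rough sketch of how regional prompting like this can work: run the denoiser once per regional prompt and blend the noise predictions with spatial masks. This is a toy illustration, not the actual Hunyuan implementation — `toy_denoise` stands in for a real video diffusion step, and all names here are hypothetical.

```python
import numpy as np

def make_side_masks(height, width):
    """Left/right half masks that sum to 1 everywhere."""
    left = np.zeros((height, width))
    left[:, : width // 2] = 1.0
    right = 1.0 - left
    return left, right

def regional_denoise(latent, denoise, prompts, masks):
    """Blend per-prompt noise predictions using per-region masks."""
    blended = np.zeros_like(latent)
    for prompt, mask in zip(prompts, masks):
        eps = denoise(latent, prompt)       # one denoiser pass per prompt
        blended += eps * mask[None, :, :]   # restrict it to its region
    return blended

# Toy denoiser: pretend each prompt produces a constant prediction.
def toy_denoise(latent, prompt):
    shift = {"cyberpunk & pan left": -1.0, "steampunk & pan right": 1.0}[prompt]
    return np.full_like(latent, shift)

latent = np.zeros((4, 8, 8))                # (channels, H, W) toy latent
left, right = make_side_masks(8, 8)
eps = regional_denoise(latent, toy_denoise,
                       ["cyberpunk & pan left", "steampunk & pan right"],
                       [left, right])
print(eps[0, 0, 0], eps[0, 0, 7])           # → -1.0 1.0 (left vs right region)
```

Because the masks partition the frame and sum to one, each region follows only its own prompt while the shared latent keeps the halves temporally coherent.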

logtd @logtdx
@estebs most of them support it, but i'm not sure if anyone's implemented it before

Esteban @estebs
@logtdx is this the first video model that supports regional prompting?

logtd @logtdx
And last but never least, Flux. FlowEdit really shines in that it can make precise edits while keeping the majority of the image intact (a bit more difficult to pull off in video though). github.com/logtd/ComfyUI-…
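The core FlowEdit trick is that it needs no inversion: at each step it re-noises the source, evaluates the flow model under both prompts on states sharing the same noise, and integrates only the *difference*. A hedged toy sketch under illustrative assumptions — `toy_velocity` stands in for the real Flux flow model, and the prompt strings are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

def flowedit(x_src, velocity, src_prompt, tgt_prompt, steps=10):
    """Toy FlowEdit loop: integrate only the delta between target-prompt
    and source-prompt velocity predictions. Where the prompts agree, the
    delta is zero and the image is left intact."""
    x_edit = x_src.copy()
    dt = 1.0 / steps
    for i in range(steps, 0, -1):
        t = i * dt
        noise = rng.standard_normal(x_src.shape)
        z_src = (1 - t) * x_src + t * noise    # re-noised source (no inversion)
        z_edit = z_src + (x_edit - x_src)      # edit state under the same noise
        dv = velocity(z_edit, t, tgt_prompt) - velocity(z_src, t, src_prompt)
        x_edit = x_edit - dt * dv
    return x_edit

# Toy velocity field: points away from a prompt-dependent target value.
def toy_velocity(z, t, prompt):
    target = {"a photo": 0.0, "a painting": 1.0}[prompt]
    return z - np.full_like(z, target)

x_src = np.zeros((4, 4))
same = flowedit(x_src, toy_velocity, "a photo", "a photo")      # no-op edit
edited = flowedit(x_src, toy_velocity, "a photo", "a painting") # drifts toward target
```

With identical prompts the delta is exactly zero at every step, so the output equals the source — which is why FlowEdit keeps the majority of the image intact.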
[image]

logtd @logtdx
@ostrisai not yet, but hopefully soon

Ostris @ostrisai
Has anyone had any luck converting FLUX LoRAs to SVDquant format? I have been trying to reverse engineer the process but keep hitting roadblocks.

Ai-ndmix @aindmix
@logtdx So basically open source Viggle???

logtd @logtdx
Just published a set of ComfyUI nodes to use Genmo's Mochi to edit videos. github.com/logtd/ComfyUI-… It uses rf-inversion, the gift that keeps on giving.
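The rf-inversion idea behind editing like this: run the rectified-flow ODE backwards (clean video → noise) under the source conditioning while recording the trajectory, then integrate forwards under the edit conditioning, pulling each step toward the recorded trajectory. A minimal Euler sketch with a toy velocity field standing in for the real flow model (names and prompts are illustrative):

```python
import numpy as np

def invert(x0, velocity, prompt, steps=10):
    """Euler-integrate from data (t=0) to noise (t=1), recording states."""
    x, traj = x0.copy(), [x0.copy()]
    dt = 1.0 / steps
    for i in range(steps):
        t = i * dt
        x = x + dt * velocity(x, t, prompt)
        traj.append(x.copy())
    return traj

def edit(traj, velocity, edit_prompt, eta=0.5, steps=10):
    """Integrate back to t=0 under the edit prompt, blending each step
    toward the inversion trajectory so the edit stays anchored to the
    original video's structure (eta=1 reproduces the source exactly)."""
    x = traj[-1].copy()
    dt = 1.0 / steps
    for i in range(steps, 0, -1):
        t = i * dt
        x = x - dt * velocity(x, t, edit_prompt)
        x = (1 - eta) * x + eta * traj[i - 1]
    return x

# Toy velocity: straight path toward a prompt-dependent endpoint.
def toy_velocity(x, t, prompt):
    target = 2.0 if prompt == "edited" else 0.0
    return np.full_like(x, target) - x

x0 = np.ones((2, 2))
traj = invert(x0, toy_velocity, "source")
out = edit(traj, toy_velocity, "edited", eta=0.5)
```

The `eta` knob is the whole game: 1.0 reconstructs the input, 0.0 ignores it, and values in between trade structure preservation against edit strength.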

logtd @logtdx
@wildfireworlds On a single 4090 it takes about 2 minutes to "warm up" a video, then 3 minutes to generate. So for the same video clip, it's about 3 minutes per generation after warming up.

logtd @logtdx
@StraughterG It might, I haven't been keeping up with Comfy's compatibility with apple silicon and the newer video models

Jay Guthrie @StraughterG
@logtdx Yea but does it work with apple silicon

logtd @logtdx
@nahbee80 if you're looking to reproduce content from one image into another, I don't think there is a good way right now. If you're just looking for something similar, or to remix an image, RF-Inversion is really good

lurktweet @nahbee80
@logtdx what's the best way for content transfer for flux in your opinion?

logtd @logtdx
Been revisiting Reference-Only Control for Flux. It uses the diffusion model as a pseudo image encoder on a reference image to influence the generation. Results are somewhere between style and content transfer.
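The mechanism can be sketched in a few lines: "encode" the reference by running it through the same attention layers and caching its keys/values, then let the generation's queries attend over its own tokens plus the cached reference tokens. A toy single-head version under illustrative shapes (not the actual Flux implementation):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(q, k, v):
    """Plain scaled dot-product attention over token rows."""
    return softmax(q @ k.T / np.sqrt(q.shape[-1])) @ v

def reference_attention(q_gen, k_gen, v_gen, k_ref, v_ref):
    """Generation queries attend over their own tokens *plus* keys/values
    cached from a pass over the reference image, letting the reference
    leak style and content into the generation."""
    k = np.concatenate([k_gen, k_ref], axis=0)
    v = np.concatenate([v_gen, v_ref], axis=0)
    return attention(q_gen, k, v)

rng = np.random.default_rng(0)
d = 8
q = rng.standard_normal((16, d))    # 16 generation tokens
k_gen, v_gen = rng.standard_normal((16, d)), rng.standard_normal((16, d))
k_ref, v_ref = rng.standard_normal((64, d)), rng.standard_normal((64, d))  # cached reference
out = reference_attention(q, k_gen, v_gen, k_ref, v_ref)
```

With an empty reference this reduces exactly to ordinary self-attention, which is why the effect scales smoothly with how much reference context you inject.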
[image]

logtd @logtdx
RAVE and FLATTEN were two of the papers that originally got me into diffusion models. They take inverse noise and apply consistency to image models. Now with RF-Inversion (thanks @litu_rout_ and @natanielruizg) I can try these on Flux. Not production quality, but still fun.
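The consistency trick RAVE uses can be sketched with its "grid" operation: tile video frames into one large image so an *image* diffusion model denoises them jointly under shared noise, then split the grid back into frames. A hedged toy version of just the packing/unpacking step (shapes and names are illustrative):

```python
import numpy as np

def frames_to_grid(frames, rows, cols):
    """Tile a (frames, H, W) stack into one (rows*H, cols*W) image."""
    f, h, w = frames.shape
    assert f == rows * cols
    return (frames.reshape(rows, cols, h, w)
                  .transpose(0, 2, 1, 3)
                  .reshape(rows * h, cols * w))

def grid_to_frames(grid, rows, cols):
    """Split the grid image back into individual frames."""
    gh, gw = grid.shape
    h, w = gh // rows, gw // cols
    return (grid.reshape(rows, h, cols, w)
                .transpose(0, 2, 1, 3)
                .reshape(rows * cols, h, w))

frames = np.random.default_rng(0).standard_normal((6, 4, 4))
grid = frames_to_grid(frames, 2, 3)       # 6 frames -> one 8x12 "image"
restored = grid_to_frames(grid, 2, 3)     # lossless round trip
```

The round trip is lossless, so the image model's denoising is the only thing that changes the frames — sharing one canvas (and, in RAVE, shuffling which frames share a grid each step) is what propagates consistency across time.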