Stable Diffusion Tutorials

2.4K posts


@SD_Tutorial

👉 AI models: local installation 👉 Comfy workflows 👉 Tutorials (image gen, video gen) FOLLOW WEBSITE 👇👇

Joined May 2018
89 Following · 3K Followers
Stable Diffusion Tutorials
LTX 2.3 😃 Greenscreen Avatar IC LoRA (vertical). This LoRA generates clean green-screen avatars for TikTok, Shorts, and Reels: -Optimized for vertical 9:16 formats -Clean green backgrounds for easy chroma keying -Expressive avatar generations HF repo: 👇 huggingface.co/OmerHagage/ltx…
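A clean green background like this makes compositing trivial: mask the green-dominant pixels and swap in a new backdrop. A minimal NumPy sketch of that chroma-key step (function name, RGB channel order, and thresholds are illustrative assumptions, not from the LoRA repo):

```python
import numpy as np

def chroma_key(frame, background, g_min=120, dominance=40):
    """Replace green-dominant pixels of `frame` with `background`.

    frame, background: HxWx3 uint8 RGB arrays of equal shape.
    Thresholds are assumptions; tune them for real footage.
    """
    r = frame[..., 0].astype(int)
    g = frame[..., 1].astype(int)
    b = frame[..., 2].astype(int)
    # A pixel counts as "green screen" when green is bright and clearly
    # dominates both other channels.
    green = (g >= g_min) & (g - r >= dominance) & (g - b >= dominance)
    out = frame.copy()
    out[green] = background[green]
    return out
```

Run per frame over the video; a median filter on the mask would clean up edge speckle in practice.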
Stable Diffusion Tutorials retweeted
Stable Diffusion Tutorials
Implemented 😃 NegPip on the Z-Image series. NegPip allows the use of prompts with negative effects within regular prompts, and prompts with positive effects within negative prompts. GitHub + workflow: 👇 github.com/BigStationW/Co…
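One common reading of the NegPip idea is that a prompt token can carry a negative weight: it still attends normally in cross-attention, but its value vector contributes with a flipped sign, steering generation away from that concept. A toy single-head sketch of that interpretation (this is an illustration, not the repo's code; function and variable names are invented):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_with_signed_tokens(q, k, v, weights):
    """Toy cross-attention where each text token has a signed weight.

    |w| scales the token's key (how strongly it attends); sign(w) flips
    its value vector, so w < 0 pushes *away* from the concept.
    Illustrative reading of the NegPip idea, not the node's actual code.
    """
    w = np.asarray(weights, dtype=float)          # shape: (tokens,)
    k_scaled = k * np.abs(w)[:, None]             # strength from |w|
    v_signed = v * np.sign(w)[:, None]            # direction from sign(w)
    scores = softmax(q @ k_scaled.T / np.sqrt(q.shape[-1]))
    return scores @ v_signed
```

With a single token of weight -1, the output is exactly the negated value vector, i.e. the concept is subtracted rather than added.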
Stable Diffusion Tutorials retweeted
Ostris @ostrisai
Left is Hidream O1 Base, 30 steps, CFG 4. Right is a finetune I am working on: only 4 steps, no CFG.

Back story: I have been finetuning two SigLIP2 NaFlex models this past week on datasets I have been preparing for a few months. One is a full-image (not cropped) person-identity model; the second is a style model. I am doing an ArcFace-type loss on both. Since they are NaFlex, they can take in varying sizes and aspect ratios, and I trained them on images up to 1MP.

I tested using the ID model to train a ZIT (no training adapter) LoRA on myself using only the new embedding output as a loss, with no flow-matching loss. It worked somewhat, though it wasn't perfect; more importantly, it didn't break down the turbo distillation at all. Which means you would be able to distill a model doing something similar.

So I started with just SigLIP2 So400m NaFlex training on Hidream O1. It was working, but was overfitting to the vision encoder. So I added CLIP-G as well, and it got better. Then I started multi-scaling into SigLIP2: even better. Then I added my style model and my new ID model, and it started cleaning up extremely fast.

Bear in mind, I am doing all of this with a batch size of 2 at 512 on a 5090, so even though it seems like a lot of vision compute, it is within reason.

It is continuing to clean up, but now I think I need to finetune more vision encoders for different knowledge. The model will learn to trick the vision encoders, but each one added makes it harder to trick. The NaFlex finetunes allow you to recover fine detail on large images. Each one acts as its own expert critic, focusing only on its own task. The more you have, the more knowledge you can transfer from the images to the model. Which is important, because the model never receives a loss on the real images, only the single-vector image embeddings.

I will keep you posted on progress.
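The training signal described above, where the student model never sees a pixel-space or flow-matching loss and is supervised only by embedding distances from several frozen vision encoders acting as expert critics, can be sketched roughly like this (encoder names and weights are illustrative assumptions):

```python
import numpy as np

def cosine_loss(a, b):
    """1 - cosine similarity: 0 when embeddings align, 2 when opposed."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return 1.0 - float(a @ b)

def critic_ensemble_loss(generated_embs, reference_embs, weights=None):
    """Sum per-critic embedding losses.

    Each critic is a frozen vision encoder (e.g. an ID model, a style
    model, a general SigLIP-style encoder). The model under training
    only ever receives these single-vector losses, never a loss on the
    real images. Critic names/weights here are invented for illustration.
    """
    names = list(generated_embs)
    if weights is None:
        weights = {n: 1.0 for n in names}
    return sum(
        weights[n] * cosine_loss(generated_embs[n], reference_embs[n])
        for n in names
    )
```

Adding more critics, as the post argues, makes the loss harder to game: the model would have to fool every encoder at once.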
Stable Diffusion Tutorials
FLUX.2 Klein + AsymFlow, with no VAE: -Builds hyper-realistic images directly in pixel space rather than compressed latents -Sharper textures and superior fidelity, 40% faster -Low-rank noise parameterization makes the high-dimensional pixel space tractable -Comfy support incoming 👇 hanshengchen.com/asymflow/
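To see what "low-rank noise parameterization" can mean in a pixel-space setting: instead of drawing an independent Gaussian value per pixel, sample two thin factor matrices and take their product, so only (h + w) * rank values are drawn for an h x w field. This is a generic low-rank construction for illustration, not necessarily AsymFlow's exact formulation:

```python
import numpy as np

def low_rank_noise(h, w, rank, rng=None):
    """Sample an h x w noise field of rank <= `rank`.

    Generic low-rank sketch (assumption, not AsymFlow's published math):
    draw Gaussian factors u (h x r) and v (w x r), return u @ v.T scaled
    so individual entries keep roughly unit variance.
    """
    rng = np.random.default_rng() if rng is None else rng
    u = rng.standard_normal((h, rank))
    v = rng.standard_normal((w, rank))
    return (u @ v.T) / np.sqrt(rank)
```

The payoff is that the noise lives in a much smaller subspace than the full h*w-dimensional pixel grid, which is the kind of dimensionality reduction the post alludes to.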
Stable Diffusion Tutorials
LTX Director 🎬 📹 Complete timeline editor for LTX 2.3 😃 Custom node for ComfyUI: -Prompt relay enabled -Training-free integration -Wan & LTX supported -Functional timeline editor -Multi-keyframe support -Custom audio support -Built-in T2V & I2V 👇 github: github.com/WhatDreamsCost…
Stable Diffusion Tutorials retweeted
Tongyi Lab @Ali_TongyiLab
Meet Z-Anime by @seesee. Built on the powerful Z-Image Base architecture, this model brings flagship-level diversity and precise prompt control to anime generation. It inherits full support for complex negative prompts and extreme customizability. Ready to push your anime workflows to the next level? Grab the weights on Hugging Face:👇 huggingface.co/SeeSee21/Z-Ani…