
Runflow
@runflow_io
The missing layer between ComfyUI and production. Visual workflows + developer API + automated quality scoring. Deploy your pipeline, not your weekends.

Seedance 2.0 is allowing us to enter a new era of music video creation. Here is how I created HONEY. It was a quick test to see how well this workflow holds up. 🐝
1 - Write your song and generate the music with Suno 5.5.
2 - Use an image generator of your choice. For HONEY I combined Grok Imagine for aesthetics and Nano Banana Pro for refined editing.
3 - In CapCut, import your audio and save out a blank video containing the audio. This step is important: this audio-bearing video file will now be used with Seedance 2.0 as a video reference with Omni, which lets the AI apply automatic, realistic lip-sync and movement to the music. It's extremely powerful!
4 - Once I have both my image and my video-with-audio reference, I use Seedance 2.0 Omni and upload my starting image, then the video reference with the audio.
5 - From here I'm simply prompting like normal: specifying what's happening in my scene with detailed instructions, mentioning multi-shots and camera angle changes, and specifying that the person is singing along to the song. I type out the lyrics that are present for better lip-sync accuracy.
6 - Once I've generated a video and like the result, I do video-to-video: I upload the video that just got generated, type "The scene continues", and prompt new actions to take place. This allows you to expand on a narrative. These new shots can be used as B-roll, and since I uploaded my video as reference, I have full consistency of everything it saw in that video. This is also extremely powerful.
7 - This is actually the most difficult part: edit in CapCut. This is where you need to understand pacing and shot selection from all the scenes you generated to bring it all together. You must be strategic with the editing. Good luck! I'll probably record a video tutorial at some point, as it's easier to show what is being done.
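Steps 4-5 above can be sketched as a single request payload that combines the starting image, the audio-bearing reference video, and a prompt that spells out the lyrics. This is a minimal sketch only: the model id, field names, and `build_omni_payload` helper are assumptions for illustration, not the documented Seedance 2.0 API.

```python
# Sketch of the HONEY workflow (steps 4-5) as one request payload.
# All field names and the model id are hypothetical placeholders,
# NOT the documented Seedance 2.0 / Omni API.

def build_omni_payload(image_path: str, reference_video_path: str,
                       scene_direction: str, lyrics: str) -> dict:
    """Combine the anchor image, the audio-bearing reference video,
    and a prompt that includes the lyrics for lip-sync accuracy."""
    prompt = (
        f"{scene_direction} "
        "Multiple shots with camera angle changes. "
        f"The person is singing along to the song. Lyrics: {lyrics}"
    )
    return {
        "model": "seedance-2.0-omni",             # hypothetical model id
        "first_frame_image": image_path,          # step 4: starting image
        "reference_video": reference_video_path,  # step 3: blank video carrying the audio
        "prompt": prompt,                         # step 5: detailed direction + lyrics
    }

payload = build_omni_payload(
    "honey_start.png",
    "honey_audio_blank.mp4",
    "A singer in a golden field at sunset.",
    "Sweet as honey...",
)
print(payload["prompt"])
```

The point of the sketch is the shape of the inputs: one image anchor, one audio-bearing video reference, and one prompt carrying both scene direction and lyrics.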

I loved creating this cartoon. Adobe added Kling 3 and Kling 3 Omni to Firefly. For the first time in a while, I had that feeling of “okay… this is getting very real.” Consistent characters across shots used to be the hardest part. This made it feel… easy. Workflow below:

Most people struggle to generate human faces consistently on Seedance 2.0, especially when relying on character sheets. A lot of the time the system rejects them, or the face breaks across scenes. So I built a workflow that uses a base image as the anchor instead. This keeps the face clean, natural, and consistent while still giving you full control for cinematic shots. It's a small change, but it makes a huge difference when working with human characters. Another thing I've learned: instead of generating scene by scene (which burns a lot of credits), it's much more efficient to build multiple moments inside one 15s generation. You can guide each second with slightly different prompt directions, then refine the final result with manual cut-to-cut editing. This way you get more usable footage in one go and still keep control in post. Try it here: app.flora.ai/techniques/see… @floraai #FLORATechnique
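The "multiple moments in one 15s generation" idea above can be sketched as a small prompt builder that gives each time slice its own direction. The `[Ns-Ms]` timestamp convention and the helper name are assumptions for illustration; the post does not document how Seedance parses per-second guidance.

```python
# Sketch: pack several "moments" into one 15 s generation by giving
# each time slice its own direction inside a single prompt.
# The [Ns-Ms] timestamp convention is an assumption, not a documented format.

def build_segmented_prompt(moments: list[str], total_seconds: int = 15) -> str:
    """Divide the clip evenly among the moments and tag each with its slice."""
    if not moments:
        raise ValueError("need at least one moment")
    slice_len = total_seconds / len(moments)
    parts = []
    for i, moment in enumerate(moments):
        start = round(i * slice_len)
        end = round((i + 1) * slice_len)
        parts.append(f"[{start}s-{end}s] {moment}")
    return " ".join(parts)

prompt = build_segmented_prompt([
    "close-up on the character's face, soft window light",
    "slow dolly out revealing the room",
    "cut to profile shot, the character turns toward camera",
])
print(prompt)  # three 5-second slices: [0s-5s], [5s-10s], [10s-15s]
```

One 15s generation with three guided moments yields three usable shots for the price of one, which is the credit-saving point the post makes.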

The AI video upgrade we’ve been waiting for just dropped Dreamina Seedance 2.0 is now LIVE in the US 🇺🇸 Stronger model + unified platform = real workflow improvement. From idea → video, all in one place. AI video is getting practical.

💥BREAKING: Seedance 2.0 has officially launched on GlobalGPT — now available at 50% OFF! Experience advanced AI video with realistic physics, built-in audio-video generation, and top-tier image control. Accessible worldwide — no limits, no restrictions, no invite codes. 👇

The era of "experimental" AI video is over. Production-grade AI video is HERE. 🚀 Introducing Seedance 2.0 by BytePlus — a massive leap forward in controllable, high-fidelity video generation. And the API is now officially live. You can now direct AI video. Not just generate it. 🎬

BREAKING: Seedance 2.0 and Seedance 2.0 Fast by @BytePlusGlobal are #1 and #2 on Video Arena, Image-to-Video Arena, and Multi-Image-to-Video Arena! These are defining a new frontier of video generation models. Huge congrats to the @BytePlusGlobal team on the launch!

New video models just dropped in Recraft Studio! We’ve added support for: • Seedance 2.0 • Seedance 2.0 Fast • PixVerse V6.0 • PixVerse V5.6 • Wan V2.7 Explore more ways to create, experiment, and bring your ideas to life with the latest video generation tools.

Seedance 2.0 + my AI UGC prompting system = insane results.
Generated 200+ videos in 24 hours to refine the framework.
This video = 1 prompt, 1 tool, zero editing.
Easily the best model I've used so far. Fully automatable workflow.
First time high-quality AI UGC can be automated like this.
If you want me to share the full setup, just comment "UGC"
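The "200+ videos in 24 hours, fully automatable" claim boils down to a batch loop over prompt variants. A minimal sketch of that loop, assuming a stubbed `generate_video` in place of whatever real client the author uses (the hook/angle lists and the function are hypothetical):

```python
# Sketch of a batch UGC loop: iterate prompt variants and queue a
# generation for each. generate_video is a stub; a real run would
# call whichever video API you use (the actual client is an assumption).
import itertools

HOOKS = ["POV:", "Stop scrolling:", "Honest review:"]
ANGLES = ["unboxing the product on a kitchen counter",
          "trying the product outdoors"]

def generate_video(prompt: str) -> dict:
    # Placeholder for the real API call.
    return {"prompt": prompt, "status": "queued"}

jobs = [generate_video(f"{hook} {angle}")
        for hook, angle in itertools.product(HOOKS, ANGLES)]
print(len(jobs))  # 6 prompt variants queued (3 hooks x 2 angles)
```

Scaling the hook and angle lists is what turns this into hundreds of videos per day; the refinement the post describes is pruning which variants survive.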

You can really test 50 product videos on TikTok Shop in one afternoon, all generated from Marketing Studio on Higgsfield. The workflow:
- find a product
- paste the link
- get UGC and product review formats back
All TikTok-native. The videos look like a real creator filmed them. What used to cost $6-11 per video on other platforms, or $500 per video hiring a real UGC creator, costs $0.347 per generation now.

Covering the whole workflow from prompt to delivery... Seedance 2.0 is a lifesaver for music video concepts. 🔄