Pinned Tweet
JZ
458 posts

JZ
@jznode
Building AI video tools for creators. Cinematography + psychology for better AI videos. Shipping what I learn along the way.
Joined November 2023
63 Following · 22 Followers

@borntocreate01 @JohnnyDigital47 @yapper_so Yes. CapCut desktop app, look under AI video generation. You get free credits daily, Seedance 2.0 included, and it renders without a paywall.

Seedance 2.0 is now globally available on @yapper_so.
Your imagination is the limit!
Comment "yapper" below to get access today 👇


@MayorKingAI the sequence grammar is doing as much work as the style descriptors. neo-noir is the vibe, but door → room → action is the actual multi-shot structure that earns the pacing.

@Strength04_X @YouArtStudio same prompt shows the output range, not the ceiling. Seedance 2.0 leans on reference consistency, Kling 3.0 on camera direction. the one that works depends on what you're building.

Same prompt. Two AI video models.
Seedance 2.0 on @YouArtStudio vs Kling 3.0 🤯
Both generated using the exact same prompt, but the results look completely different.
AI video tools are evolving fast and creators can now build cinematic scenes with just prompts.

@wildmindai the 'plan then control' framing matches what works in production. locking camera grammar before generation is where consistency actually comes from, not the model's variance score.

ShotVerse by Tencent.
Cinematic multi-shot video gen with precise camera control.
- Uses Qwen3-VL-2B to derive camera movements from narrative text descriptions.
- Tops Sora2, Kling3.0, and VEO3 in consistency.
shotverse.github.io

@StephanieInii @RoboNeo_ai for a full movie, reference images solve the generation side. the harder problem is editorial: B-roll at transition points, cutting on action, style choices that reduce visual axes. that's what actually holds long sequences together.

I made this live-action BTS of kpop demon hunter with AI over the last 3 months with @RoboNeo_ai
Now I can do something even better like making a full movie with character consistency.

@DNAMismatches @EHuanglu the exaggerated default is what happens without specific direction. 'sadness' gets melodrama, 'quiet resignation after a long day' gets something much closer to human.

@natecurtiss_yt the thumbnail bottleneck kills more faceless channels than content quality does. most people spend 10x more time on scripts than thumbnails, but thumbnails are what determines if anyone reads the script at all.

@openart_ai consistent characters are what make storyboarding actually functional. once you can lock a character across shots, you're making previs decisions early rather than hoping things match in post. changes the whole pre-production timeline.
English

The all new Sora 2 is now live on OpenArt - Day 0. 🎬
After testing the Sora API, the biggest unlock for us has been character consistency. Being able to generate scenes with the same characters across shots makes storyboarding and early scene development far more practical.
Longer clips and higher-resolution output make it even more powerful - we’ve already started using it in some of the campaigns and promotions we launch ourselves.
Excited to see where this goes.
@OpenAIDevs