Pinned Tweet
JZ
461 posts

JZ
@jznode
Building AI video tools for creators. Cinematography + psychology for better AI videos. Shipping what I learn along the way.
Joined November 2023
63 Following · 22 Followers

@rezkhere consistency in a single shot is solved. consistency across scenes where the character ages, reacts, moves through different lighting, that's the next wall. the technical bar moved from "same face" to "same performance" and most people haven't noticed the goalposts shifted.

Character consistency was supposed to be the hard problem.
For years, AI video kept failing at it. Different face every shot. Unusable for anything narrative.
That problem is apparently solved now.
A 4-person team just ran a full series with consistent characters, professional lighting, and cinematic camera work. Test audiences didn't clock it as AI.
When the last major technical blocker falls, things move fast. We're in that moment right now.
Rez Karim@rezkhere
A bartender from New Brunswick just made $1M+. Not for acting or for building a startup, but for his FACE. Higgsfield licensed his likeness and used his Soul ID to cast him in Arena Zero. No camera, script, audition, or agent taking 15%. This is the wildest deal in entertainment right now. What you gotta know 👇

@SPX_FILMMAKER the Kubrick parallel is exactly right. what AI actually gives you is not the ability to make films, it's the ability to make them without needing to convince 50 other people first. the bottleneck was never the creative vision, it was the access to execution.

I am of two minds about this.
The self-centered part of me loves generative AI filmmaking because I look up to all the other weirdo auteurs of the past who would’ve done almost everything themselves if they could (Kubrick comes to mind). It’s the greatest thing to have happened to a director who doesn’t like dealing with money or other people in order to make a movie (like myself). As a failed filmmaker in the traditional industry, it’s given me a second chance.
The other part of me recognizes that a lot of talented people out there in the biz are utterly screwed. They’ve built up some incredible skills with the promise of a well-paying career, and AI is going to destroy their plans and even put them out on the street unless the government gets its head out of its ass and comes up with some kind of serious safety net. People have NO CLUE what’s coming.
Again, why it’s so important for me to load up on as much SPX6900 as possible!
So, yes, flip Hollywood but with caveats…. Flip Hollywood AND make sure the actors and artisans aren’t on the street ❤️
Orvalous 6900 💹🧲@orvalous
Hollywood about to get flipped

the artifact masking is the underrated part. single model footage has a signature look viewers learn to spot. mixing models breaks that pattern, but the real challenge shifts to the edit, matching color grading and motion cadence so the cut between models feels like a camera switch, not a model switch.

the best ai video operators in march 2026 are running multi-model pipelines.
here's the stack that's working right now:
- kling 3.0 for talking head ugc (best character consistency + reference locking)
- veo 3.1 for b-roll and cinematic shots (best visual quality + native audio)
- sora 2 pro for narrative sequences (best multi-scene coherence)
- seedance 2.0 for dance/movement clips (best body motion)
each model has a specific strength. using one model for everything means you're getting its weaknesses on half your output.
the workflow: script in claude > voice in chatterbox > talking head in kling 3.0 > b-roll in veo 3.1 > edit everything together in capcut/remotion.
the final video looks like it was shot by 2 different cameras in 2 different locations. the talking head footage has the natural ugc feel from kling.
the product shots have the cinematic quality from veo. nobody can tell it's ai because no single model's artifacts are present throughout the whole video.
this is the production method the agencies charging $5k+ per month are using. one-model workflows are for beginners now.
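The routing logic behind the stack above can be sketched in a few lines. This is an illustrative sketch only, not a real API: the model names come from the thread, but the `Shot` type, `ROUTING` table, and `route` function are hypothetical placeholders for whatever orchestration layer actually dispatches the generation jobs.

```python
from dataclasses import dataclass

# Map each shot type to the model the thread recommends for it.
# Model names are from the thread; everything else is illustrative.
ROUTING = {
    "talking_head": "kling-3.0",   # character consistency + reference locking
    "b_roll": "veo-3.1",           # visual quality + native audio
    "narrative": "sora-2-pro",     # multi-scene coherence
    "movement": "seedance-2.0",    # body motion
}

@dataclass
class Shot:
    name: str
    kind: str

def route(shots):
    """Assign each shot to the model suited to its type (fallback: narrative model)."""
    return [(s.name, ROUTING.get(s.kind, ROUTING["narrative"])) for s in shots]

plan = route([
    Shot("hook", "talking_head"),
    Shot("product_closeup", "b_roll"),
    Shot("story_beat", "narrative"),
])
for name, model in plan:
    print(f"{name} -> {model}")
```

The point of the table is the thread's core claim: dispatch per shot type, so no single model's weaknesses (or artifacts) appear across the whole edit.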

@borntocreate01 @JohnnyDigital47 @yapper_so Yes. CapCut desktop app, look under AI video generation. You get free credits daily, Seedance 2.0 included, and it renders without a paywall.

Seedance 2.0 is now globally available on @yapper_so.
Your imagination is the limit!
Comment "yapper" below to get access today 👇

@MayorKingAI the sequence grammar is doing as much work as the style descriptors. neo-noir is the vibe, but door → room → action is the actual multi-shot structure that earns the pacing.

@Strength04_X @YouArtStudio same prompt shows the output range, not the ceiling. Seedance 2.0 leans on reference consistency, Kling 3.0 on camera direction. the one that works depends on what you're building.

Same prompt. Two AI video models
Seedance 2.0 on @YouArtStudio vs Kling 3.0 🤯
Both generated using the exact same prompt, but the results look completely different.
AI video tools are evolving fast and creators can now build cinematic scenes with just prompts.

@wildmindai the 'plan then control' framing matches what works in production. locking camera grammar before generation is where consistency actually comes from, not the model's variance score.

ShotVerse by Tencent.
Cinematic multi-shot video gen with precise camera control.
- Qwen3-VL-2B to derive camera movements from narrative text descriptions.
- tops Sora2, Kling3.0, VEO3 in consistency.
shotverse.github.io

@StephanieInii @RoboNeo_ai for a full movie, reference images solve the generation side. the harder problem is editorial: B-roll at transition points, cutting on action, style choices that reduce visual axes. that's what actually holds long sequences together.

I made this live-action BTS of kpop demon hunter with AI over the last 3 months with @RoboNeo_ai
Now I can do something even better, like making a full movie with character consistency.

@DNAMismatches @EHuanglu the exaggerated default is what happens without specific direction. 'sadness' gets melodrama, 'quiet resignation after a long day' gets something much closer to human.