JZ
@jznode

461 posts

Building AI video tools for creators. Cinematography + psychology for better AI videos. Shipping what I learn along the way.

Joined November 2023
63 Following · 22 Followers
Pinned Tweet
JZ
JZ@jznode·
seedance 2.0 v2v. real footage + one style prompt. no editing.
JZ
JZ@jznode·
@rezkhere consistency in a single shot is solved. consistency across scenes where the character ages, reacts, moves through different lighting, that's the next wall. the technical bar moved from "same face" to "same performance" and most people haven't noticed the goalposts shifted.
Rez Karim
Rez Karim@rezkhere·
Character consistency was supposed to be the hard problem. For years, AI video kept failing at it. Different face every shot. Unusable for anything narrative. That problem is apparently solved now. A 4-person team just ran a full series with consistent characters, professional lighting, and cinematic camera work. Test audiences didn't clock it as AI. When the last major technical blocker falls, things move fast. We're in that moment right now.
Rez Karim@rezkhere

A bartender from New Brunswick just made $1M+. Not for acting or for building a startup. But for his FACE. Higgsfield licensed his likeness and used his Soul ID to cast him in Arena Zero. No camera, script, audition, or agent taking 15%. This is the wildest deal in entertainment right now. What you gotta know 👇

JZ
JZ@jznode·
@SPX_FILMMAKER the Kubrick parallel is exactly right. what AI actually gives you is not the ability to make films, it's the ability to make them without needing to convince 50 other people first. the bottleneck was never the creative vision, it was the access to execution.
SPX FILMMAKER
SPX FILMMAKER@SPX_FILMMAKER·
I am of two minds about this. The self-centered part of me loves generative AI filmmaking because I look up to all the other weirdo auteurs of the past who would’ve done almost everything themselves if they could (Kubrick comes to mind). It’s the greatest thing to have happened to a director who doesn’t like dealing with money or other people in order to make a movie (like myself). As a failed filmmaker in the traditional industry, it’s given me a second chance.

The other part of me recognizes that a lot of talented people out there in the biz are utterly screwed. They’ve built up some incredible skills with the promise of having a well-paying career, and AI is going to destroy their plans and even put them out on the street unless the government gets its head out of its ass and comes up with some kind of serious safety net. People have NO CLUE what’s coming. Again, why it’s so important for me to load up on as much SPX6900 as possible!

So, yes, flip Hollywood but with caveats…. Flip Hollywood AND make sure the actors and artisans aren’t on the street ❤️
Orvalous 6900 💹🧲@orvalous

Hollywood about to get flipped

JZ
JZ@jznode·
the artifact masking is the underrated part. single model footage has a signature look viewers learn to spot. mixing models breaks that pattern, but the real challenge shifts to the edit, matching color grading and motion cadence so the cut between models feels like a camera switch, not a model switch.
KNOX
KNOX@knoxtwts·
the best ai video operators in march 2026 are running multi-model pipelines. here's the stack that's working right now:
- kling 3.0 for talking head ugc (best character consistency + reference locking)
- veo 3.1 for b-roll and cinematic shots (best visual quality + native audio)
- sora 2 pro for narrative sequences (best multi-scene coherence)
- seedance 2.0 for dance/movement clips (best body motion)
each model has a specific strength. using one model for everything means you're getting its weaknesses on half your output.
the workflow: script in claude > voice in chatterbox > talking head in kling 3.0 > b-roll in veo 3.1 > edit everything together in capcut/remotion.
the final video looks like it was shot by 2 different cameras in 2 different locations. the talking head footage has the natural ugc feel from kling. the product shots have the cinematic quality from veo. nobody can tell it's ai because no single model's artifacts are present throughout the whole video.
this is the production method the agencies charging $5k+ per month are using. one-model workflows are for beginners now.
Yapper
Yapper@yapper_so·
Seedance 2.0 is now globally available on @yapper_so. Your imagination is the limit! Comment "yapper" below to get access today 👇
JZ
JZ@jznode·
Wrong. “sadness” gets melodrama. “quiet resignation after a long day” gets something closer to human. Same model.
JZ tweet media
JZ
JZ@jznode·
@EHuanglu replace the execution, not the eye. knowing what looks right in context is the part that doesn't prompt well.
JZ
JZ@jznode·
@MayorKingAI the sequence grammar is doing as much work as the style descriptors. neo-noir is the vibe, but door → room → action is the actual multi-shot structure that earns the pacing.
MayorkingAI
MayorkingAI@MayorKingAI·
Kling 3.0 Multi-shot Prompt below👇
JZ
JZ@jznode·
@Strength04_X @YouArtStudio same prompt shows the output range, not the ceiling. Seedance 2.0 leans on reference consistency, Kling 3.0 on camera direction. the one that works depends on what you're building.
𝐌
𝐌@Strength04_X·
Same prompt. Two AI video models: Seedance 2.0 on @YouArtStudio vs Kling 3.0 🤯 Both generated using the exact same prompt, but the results look completely different. AI video tools are evolving fast, and creators can now build cinematic scenes with just prompts.
JZ
JZ@jznode·
@wildmindai the 'plan then control' framing matches what works in production. locking camera grammar before generation is where consistency actually comes from, not the model's variance score.
Wildminder
Wildminder@wildmindai·
ShotVerse by Tencent. Cinematic multi-shot video gen with precise camera control.
- Qwen3-VL-2B to derive camera movements from narrative text descriptions.
- tops Sora 2, Kling 3.0, VEO 3 in consistency.
shotverse.github.io
JZ
JZ@jznode·
The anchor: one locked style prompt block, one character reference sheet, one fixed parameter set. Nothing changes after episode 1. End every session by generating a consistency check. Start the next session by comparing against it.
JZ
JZ@jznode·
Getting the Ghibli look took 1 session. Keeping it consistent across 19 episodes took a system. Most AI video creators solve the wrong problem.
JZ tweet media
JZ
JZ@jznode·
@StephanieInii @RoboNeo_ai for a full movie, reference images solve the generation side. the harder problem is editorial: B-roll at transition points, cutting on action, style choices that reduce visual axes. that's what actually holds long sequences together.
Stephy Designs
Stephy Designs@StephanieInii·
I made this live action BTS of KPop Demon Hunters with AI over the last 3 months with @RoboNeo_ai. Now I can do something even better, like making a full movie with character consistency.
JZ
JZ@jznode·
@DNAMismatches @EHuanglu the exaggerated default is what happens without specific direction. 'sadness' gets melodrama, 'quiet resignation after a long day' gets something much closer to human.
DNA 🧬
DNA 🧬@DNAMismatches·
@jznode @EHuanglu it's not soulless, it's exaggerated emotion. Even some actors can't fake emotion and get labeled as bad. Some AI stuff is good, some is bad.
el.cine
el.cine@EHuanglu·
AI is not soulless, grok btw
JZ
JZ@jznode·
The bigger shift: treating it like a design API instead of describing what you want to see. Shadow direction, typography hierarchy, color relationships. The model executes specifications rather than interpreting descriptions.
JZ
JZ@jznode·
Interior designers are using Nano Banana 2 as a floor plan reader, not an image generator. Input a floor plan, get a rendered room. Iterate furniture placement without re-rendering from scratch. The spatial reasoning holds across edits.
JZ tweet media
JZ
JZ@jznode·
The most practical technique: place B-roll cutaways at clip transitions, not for visual interest but as consistency cover. Cut away from the character, cut back. The viewer's eye resets during the cutaway. The seam between clips disappears.
JZ
JZ@jznode·
People spend hours optimizing AI video prompts trying to make their work look cinematic. The filmmakers actually making cinematic AI video spend that time on post-production instead. Color grading, B-roll strategy, strategic cuts. That's where the quality gets made, not in the prompt.
JZ tweet media