JZ
@jznode
458 posts

Building AI video tools for creators. Cinematography + psychology for better AI videos. Shipping what I learn along the way.

Joined November 2023
63 Following · 22 Followers

Pinned Tweet
JZ @jznode
seedance 2.0 v2v. real footage + one style prompt. no editing.
Yapper @yapper_so
Seedance 2.0 is now globally available on @yapper_so. Your imagination is the limit! Comment "yapper" below to get access today 👇
JZ @jznode
Wrong. “sadness” gets melodrama. “quiet resignation after a long day” gets something closer to human. Same model.
[image]
JZ @jznode
@EHuanglu replace the execution, not the eye. knowing what looks right in context is the part that doesn't prompt well.
JZ @jznode
@MayorKingAI the sequence grammar is doing as much work as the style descriptors. neo-noir is the vibe, but door → room → action is the actual multi-shot structure that earns the pacing.
MayorkingAI @MayorKingAI
Kling 3.0 Multi-shot Prompt below👇
JZ @jznode
@Strength04_X @YouArtStudio same prompt shows the output range, not the ceiling. Seedance 2.0 leans on reference consistency, Kling 3.0 on camera direction. the one that works depends on what you're building.
𝐌 @Strength04_X
Same prompt. Two AI video models: Seedance 2.0 on @YouArtStudio vs Kling 3.0 🤯 Both generated using the exact same prompt, but the results look completely different. AI video tools are evolving fast and creators can now build cinematic scenes with just prompts.
JZ @jznode
@wildmindai the 'plan then control' framing matches what works in production. locking camera grammar before generation is where consistency actually comes from, not the model's variance score.
Wildminder @wildmindai
ShotVerse by Tencent. Cinematic multi-shot video gen with precise camera control.
- Qwen3-VL-2B to derive camera movements from narrative text descriptions.
- tops Sora2, Kling3.0, VEO3 in consistency.
shotverse.github.io
JZ @jznode
The anchor: one locked style prompt block, one character reference sheet, one fixed parameter set. Nothing changes after episode 1. End every session by generating a consistency check. Start the next session by comparing against it.
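A minimal sketch of this locked-anchor workflow, assuming the style prompt, character reference, and parameters can live in one JSON-serializable dict. All names and values below are illustrative, not from any actual tool:

```python
import hashlib
import json

# Hypothetical anchor: locked after episode 1, never edited again.
ANCHOR = {
    "style_prompt": "soft watercolor backgrounds, warm rim light, 24fps film grain",
    "character_ref": "character_sheet_v1.png",
    "params": {"cfg_scale": 7.0, "seed": 424242, "resolution": "1920x1080"},
}

def anchor_hash(anchor: dict) -> str:
    """Stable fingerprint of the locked settings (key order doesn't matter)."""
    return hashlib.sha256(
        json.dumps(anchor, sort_keys=True).encode()
    ).hexdigest()

def start_session(saved_hash: str, anchor: dict) -> bool:
    """Refuse to generate if anything drifted since the last session."""
    return anchor_hash(anchor) == saved_hash

# End of session 1: record the fingerprint alongside the consistency check.
locked = anchor_hash(ANCHOR)

# Start of session 2: compare before generating anything new.
assert start_session(locked, ANCHOR)
```

The hash stands in for the visual consistency check: it catches silent drift in the settings, while the generated check frame catches drift in the model's output.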
JZ @jznode
Getting the Ghibli look took 1 session. Keeping it consistent across 19 episodes took a system. Most AI video creators solve the wrong problem.
[image]
JZ @jznode
@StephanieInii @RoboNeo_ai for a full movie, reference images solve the generation side. the harder problem is editorial: B-roll at transition points, cutting on action, style choices that reduce visual axes. that's what actually holds long sequences together.
Stephy Designs @StephanieInii
I made this live-action BTS of kpop demon hunter with AI over the last 3 months with @RoboNeo_ai. Now I can do something even better, like making a full movie with character consistency.
JZ @jznode
@DNAMismatches @EHuanglu the exaggerated default is what happens without specific direction. 'sadness' gets melodrama, 'quiet resignation after a long day' gets something much closer to human.
DNA 🧬 @DNAMismatches
@jznode @EHuanglu it's not soulless, it's exaggerated emotions. Even some actors can't fake emotion and get labeled as bad. Some AI stuff is good, some is bad.
el.cine @EHuanglu
AI is not soulless, grok btw
JZ @jznode
The bigger shift: treating it like a design API instead of describing what you want to see. Shadow direction, typography hierarchy, color relationships. The model executes specifications rather than interpreting descriptions.
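The specification-vs-description contrast can be sketched as structured fields flattened into an explicit prompt block. Every field name here is hypothetical, not any real model's API:

```python
# Descriptive prompting: the model interprets.
descriptive = "a moody product shot that looks premium"

# Specification prompting: the model executes. Field names are illustrative.
specification = {
    "shadow_direction": "45 degrees upper-left, soft falloff",
    "typography": {"h1": "72pt serif", "caption": "14pt sans, 60% gray"},
    "color": {"primary": "#1a1a2e", "accent": "#e94560", "relationship": "complementary"},
}

def to_prompt(spec: dict, prefix: str = "") -> list[str]:
    """Flatten a nested spec dict into explicit 'key: value' lines."""
    lines = []
    for key, value in spec.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            lines.extend(to_prompt(value, prefix=f"{name}."))
        else:
            lines.append(f"{name}: {value}")
    return lines

prompt_block = "\n".join(to_prompt(specification))
```

The point of the structure is that every visual decision becomes a named, editable field rather than an adjective the model has to decode.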
JZ @jznode
Interior designers are using Nano Banana 2 as a floor plan reader, not an image generator. Input a floor plan, get a rendered room. Iterate furniture placement without re-rendering from scratch. The spatial reasoning holds across edits.
[image]
JZ @jznode
The most practical technique: place B-roll cutaways at clip transitions, not for visual interest but as consistency cover. Cut away from the character, cut back. The viewer's eye resets during the cutaway. The seam between clips disappears.
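The seam-cover placement rule is mechanical enough to sketch: interleave one B-roll cutaway at every clip boundary. Clip names are placeholders, not real files:

```python
def cover_seams(clips: list[str], broll: list[str]) -> list[str]:
    """Insert one B-roll cutaway between each pair of adjacent character clips,
    so every generated-clip seam is hidden behind an eye-resetting cutaway."""
    timeline = []
    for i, clip in enumerate(clips):
        timeline.append(clip)
        if i < len(clips) - 1:                      # a seam follows this clip
            timeline.append(broll[i % len(broll)])  # cycle through the B-roll pool
    return timeline

sequence = cover_seams(
    ["char_shot_1", "char_shot_2", "char_shot_3"],
    ["city_broll", "hands_broll"],
)
# → char_shot_1, city_broll, char_shot_2, hands_broll, char_shot_3
```

Each character-clip boundary now lands inside a cutaway, so continuity errors between generations never appear on screen back-to-back.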
JZ @jznode
People spend hours optimizing AI video prompts trying to make their work look cinematic. The filmmakers actually making cinematic AI video spend that time on post-production instead. Color grading, B-roll strategy, strategic cuts. That's where the quality gets made, not in the prompt.
[image]
JZ @jznode
@natecurtiss_yt the thumbnail bottleneck kills more faceless channels than content quality does. most people spend 10x more time on scripts than thumbnails, but thumbnails are what determines if anyone reads the script at all.
Nate Curtiss @natecurtiss_yt
How to make AI thumbnails in 2026 (full tutorial). This tool is INSANE for faceless YouTube channels.
JZ @jznode
@openart_ai consistent characters is what makes storyboarding actually functional. once you can lock a character across shots, you're making previs decisions early rather than hoping things match in post. changes the whole pre-production timeline.
OpenArt @openart_ai
The all new Sora 2 is now live on OpenArt - Day 0. 🎬 After testing the Sora API, the biggest unlock for us has been character consistency. Being able to generate scenes with the same characters across shots makes storyboarding and early scene development far more practical. Longer clips and higher-resolution output make it even more powerful - we’ve already started using it in some of the campaigns and promotions we launch ourselves. Excited to see where this goes. @OpenAIDevs