László Gaál
240 posts



@VORTEX_Promos @javilopen I think with V6-7 everybody felt something was off, but everybody hoped it would get better.

@laszlogaal_ @javilopen Why there is so much hate on V8 is surprising. It hasn't been good since v6 and has always been falling behind. It remains a nice niche artsy project, but we can't be comparing it to Nano Banana-level models in 2026.

Yesterday @javilopen asked what happened to Midjourney. Usually if a tool disappoints me I move on and keep it to myself, but I was scrolling through Midjourney's official Twitter account and something stopped me. They were implying that long prompts are usually "gptslop".
I had to sit with that for a second, because they are at version 8 of their model: prompt length and adherence are still limited, faces are still distorted, there is no Omni feature in V8, text accuracy is still a problem, and they removed relax mode, so you are spending your credits on bad outputs. And still, the problem is the users who write prompts that are too long?
It is 2026. We have Nano Banana Pro and NB2 with enormous context length, we have Black Forest Labs' Flux2 Klein with text rendering running on consumer hardware, we have image-editing models capable of taking 14 input images, and now the problem is the user who wants more than 1,300 characters to control the output?
I'm not writing against Midjourney itself, but about this tweet. It's a great tool for quick visualization, but since V6 they have been falling behind while even free, open-source models are catching up.
Some examples from V8 and the same prompts in Nano Banana Pro/2 4K are in the gallery.


@laszlogaal_ @javilopen It's kinda sad because they still generate some aesthetics and interesting styles you can't get with NB alone

@maxescu @javilopen Even if V8 were the best model out there, these responses don't belong on an official channel!

Holy shit, this response is insane tbh.
Their replies on X since last night have been unhinged, both from the official account and David, who told Javi he's prompting wrong.
The 1,300-character limit is also bonkers. My NB2 prompts have reached thousands of characters, and everything is taken into consideration.

@henrydaubrez @zoom_will Let me double-check the math: $62 gives you 15,000 credits, and one 15-second video is 45 credits, so roughly $0.18 per generation (even with image-input OMNI). If you upload a depth map or motion-control video, it doubles (roughly $0.36).
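A quick sanity check of that math, using only the figures from the reply above (the $62 plan, 15,000 credits, 45 credits per 15-second clip, and the 2x multiplier for control inputs):

```python
# Back-of-envelope check of the credit pricing quoted above.
PLAN_PRICE_USD = 62           # plan price from the reply
PLAN_CREDITS = 15_000         # credits included in that plan
CREDITS_PER_15S_CLIP = 45     # one 15-second generation

usd_per_credit = PLAN_PRICE_USD / PLAN_CREDITS        # ~$0.0041 per credit
clip_cost = CREDITS_PER_15S_CLIP * usd_per_credit      # ~$0.186 per clip
controlled_clip_cost = 2 * clip_cost                   # depth map / motion control doubles it

print(f"per credit: ${usd_per_credit:.4f}")
print(f"per 15 s clip: ${clip_cost:.3f}")
print(f"with depth map / motion control: ${controlled_clip_cost:.3f}")
```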

Spent about $1,000 in credits on Seedance 2.0 over the last few weeks, and here are a few thoughts:
First, the main thing that strikes me using a state-of-the-art model from this new generation is how hard it still is to scale beyond short form.
Getting great animation is fast.
Getting multi-cut sequences that make sense is possible.
Consistency with Omnireference is actually very good.
But the moment you move into real narrative work, things change.
Multi-character exchanges, long sequences, maintaining visual continuity across shots, keeping tone, pacing, and staging consistent… it’s not impossible, but it is still a lot of work. And with generation costing somewhere between $2 and $7 per ~15 seconds, it adds up very quickly.
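To make "it adds up very quickly" concrete, a purely illustrative calculation: the $2 to $7 per ~15-second range is from this post, while the shot count and retake rate below are invented assumptions, not production data.

```python
# Illustrative only: how per-clip pricing compounds on a narrative project.
# The $2-$7 range per ~15 s clip is from the post; shots and retakes are assumptions.
LOW_USD, HIGH_USD = 2.0, 7.0   # cost per ~15-second generation
shots = 60                     # hypothetical: a ~5-minute short cut into 15 s shots
retakes_per_shot = 4           # hypothetical: iterations for continuity, acting, staging

generations = shots * retakes_per_shot
print(f"low estimate:  ${generations * LOW_USD:,.0f}")    # $480
print(f"high estimate: ${generations * HIGH_USD:,.0f}")   # $1,680
```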
As models improve, producing good-looking short content is becoming trivial.
Building something that holds together as a story is still not.
Continue Video in Seedance is clearly trying to address part of this, but in my case it has been broken for the last couple of weeks, so none of my longer attempts would go through.
In theory, you could imagine a small team of 5–10 people generating all day from the same storyboard, using a shared visual reference as a single source of truth. That alone shows how close we are to something that starts looking like a real production pipeline.
But we are not fully there yet.
Right now it still feels like we can touch the future with the tip of our fingers, while at the same time struggling to precisely steer a model using mostly words, references, and iterations when the narrative becomes complex.
Short clips are easy.
Worldbuilding is not.
And storytelling is still the hardest part.

@henrydaubrez @zoom_will On the Chinese site it's priced much lower: $0.20-0.40 per 15 s.

@zoom_will Ah no, it's on Dreamina. I calculated the price by roughly assuming 100 credits is 1 dollar.

✨Honored to receive 3rd place for the Artisan Award at the Escape [esc] @escapeaimedia Awards 2026, the annual Oscars of AI filmmaking.
The Artisan Award celebrates creators who push the boundaries of art through evocative, original and emotional imagery. Being recognized by a community of filmmakers exploring the edges of this new cinematic language means a lot to me.
Congratulations to the other artists in this category:
🥇 Dustin Hollywood @dustinhollywood
🥈 (TrashCan) Roxanne Ducharme
Both doing incredible work.
What makes these awards special is that they come from a community of peers, artists experimenting with the same new tools, languages and possibilities of what some of us call Neo Cinema.
A big thank you to ESCAPE AI MEDIA, Loucaros (Luke) Eleftheriou, and John Gaeta for their incredible work curating and building such a strong and inspiring creative space.
And congratulations to all the nominees and winners this year, the level of work across the platform is seriously impressive.
Diane Laidlaw Francesca Fini floriane Bont @Noiteeluar1 @theblkspectrum Stephane Benini @BeniniStep9801 @BLVCKLIGHTai @Uncanny_Harry @AzeAlter Lion El Aton @JeffSynthesized @Diesol @DavideBianca Francesca Fini @JunieLauX Paola Rocchetti Paola @Kavanthekid @MetaPuppet @Magiermogul @laszlogaal_ @gossipgoblin @PsyopAnime @NeuralViz @KNGMKRlabs among others.
The exploration continues.




@laszlogaal_ Got it! I used an API and configured keywords to retrieve posts sorted by popularity and then manually filtered them

Update: Added 57 prompts today. Total: 3,000+
The best AI image generation prompt collection site👇
meigen.ai

@meigen7982 more about just the organizing (so I can host my own library, expand it, etc)

@laszlogaal_ Are you referring to obtaining prompts? We provide an open-source database in JSON format. github.com/jau123/nanoban…
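For the self-hosting question above, here is a minimal sketch of loading and indexing a JSON export like that one into a local library. The file name ("prompts.json") and the "tags" field are assumptions for illustration, not the repo's actual schema.

```python
# Minimal sketch of self-hosting a prompt library from a JSON dump.
# File name and field names are assumptions, not the repo's actual schema.
import json
from collections import defaultdict
from pathlib import Path

def load_prompts(path: str) -> list[dict]:
    """Load the exported JSON database into a list of prompt records."""
    return json.loads(Path(path).read_text(encoding="utf-8"))

def index_by_tag(prompts: list[dict]) -> dict[str, list[dict]]:
    """Group prompts by tag so the library can be browsed and extended locally."""
    index = defaultdict(list)
    for record in prompts:
        for tag in record.get("tags", []):
            index[tag].append(record)
    return index

if __name__ == "__main__":
    prompts = load_prompts("prompts.json")   # hypothetical local export of the database
    by_tag = index_by_tag(prompts)
    print(f"{len(prompts)} prompts across {len(by_tag)} tags")
```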

my OpenClaw bot works as my assistant editor now
- digs through my script and comes up with b-roll ideas
- researches X, YouTube, and Google for tasteful, undiscovered b-roll
- downloads the clips and puts them in organized folders
- gives me a list of timestamps with high-quality moments (a rough sketch of this output is below)
i used to pay an assistant editor to do this exact task...
RT this + reply "skill" and I'll send you a free skill file that researches, scrapes, and organizes b-roll for your videos (must be following)
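For anyone wondering what that "organized folders plus timestamp list" output could look like, here is a minimal sketch. It is not the actual OpenClaw skill; the folder layout, file names, and fields are illustrative assumptions.

```python
# Sketch of the b-roll organization described above (not the actual OpenClaw skill).
# Folder layout and field names are illustrative assumptions.
import json
import shutil
from pathlib import Path

BROLL_ROOT = Path("broll")  # hypothetical project folder

def file_clip(clip_path: str, topic: str) -> str:
    """Move a downloaded clip into a per-topic subfolder."""
    dest_dir = BROLL_ROOT / topic
    dest_dir.mkdir(parents=True, exist_ok=True)
    return shutil.move(clip_path, dest_dir / Path(clip_path).name)

def save_highlights(highlights: list[dict]) -> None:
    """Write the list of notable moments (clip, timestamp, note) for the edit."""
    BROLL_ROOT.mkdir(exist_ok=True)
    (BROLL_ROOT / "highlights.json").write_text(json.dumps(highlights, indent=2))

# Hypothetical example entry, just to show the shape of the timestamp list.
save_highlights([
    {"clip": "city_drone_01.mp4", "timestamp": "00:00:12", "note": "clean skyline pan"},
])
```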


@markgadala This is the best Seedance transformation clip ever

What happens when an AI model starts understanding not only images, but the subtext of a dialogue, the meaning of a silence, or the small hesitation before saying something painful?
For the global launch of Kling 3.0 @kling_ai, the Labyrinth Studio team and I wanted to build a small film.
That’s how Looking for Bianca was born.
A short film set in Hong Kong that follows a young woman searching for Bianca, her missing husky.
Behind these five minutes there was a great deal of writing, staging, and experimentation with the new model. Even when a film is fully AI, the process remains surprisingly close to traditional filmmaking.
And that is exactly where this language will need to grow, in the writing of characters and stories.
It took two weeks and a team of four people to make Looking for Bianca.
Once again, Giacomo Cannelli @norealframe, @ARisuleo, Aurora Cecchini, and I adapted our workflow.
With Kling 3, spatial control inside a scene has become much more precise, and we often chose to start from a wide shot and generate the coverage from there, as we did in the three-character dialogue scene between Sophie and the truck drivers.
A thank you to the entire @Kling_ai team, and especially to @tonypu_klingai , who has a rare sensitivity when it comes to supporting projects like this.
Watch Looking for Bianca here:

If you watch one short video today, please give this one a chance. Creating an action scene, a vignette commercial, or a sci-fi space battle has always been much easier with AI, while a chamber play is extremely hard. But Kling 3.0 made it a bit easier, and it's a really, really good model for micro-expressions and dialogue shots; I think that last shot really speaks for itself. I had no early access, so I started this as a one-clip test last week, and it quickly grew into a five-minute short. Finally, all the small acting details you put into the prompt actually show up in the generations. All clips and dialogue were generated with Kling 3.0 from an i2v workflow. Multi-shot was only used when I forgot to turn it off.
László Gaál retweeted

Watch our Creative Partner László Gaál @laszlogaal_ bring human interactions to life with Kling 3.0!