💀Sygo
@Sygo__
10.5K posts

AI assisted visionary, filmmaker, producer. 2043 IP. CPP: Capcut, ImagineArt, Astra, Dreamina. #Undead2043 Discord: https://t.co/3b9TC17wJO

Joined April 2021
2.8K Following · 4K Followers

Pinned Tweet
💀Sygo @Sygo__ ·
Get all the info about 2043 The Movie here: 2043themovie.com Make sure to leave your best email so I can update you and so you won't miss my early bird!

💀Sygo @Sygo__ ·
@javilopen To create an asset for video, you can either prompt nano banana and get a realistic but generic result, or go through Midjourney and adjust the style there first, and THEN give it the nano banana treatment.

Javi Lopez ⛩️ @javilopen ·
What happened to Midjourney? Serious question.
[image]

💀Sygo @Sygo__ ·
@heysajib Sounds like only text-to-video and no Omni model. 🤷🏻‍♂️

Mushfiq Sajib @heysajib ·
Seedance 2.0 is finally FREE! 🤯 The China model that just cooked Hollywood is now open. We're talking about the tool that's creating 1080P, CGI-level video that looks startlingly real. You don't need a credit card or even a sign-up! Here's how to access it for free: 👇

💀Sygo @Sygo__ ·
@javilopen @elevenlabs ElevenLabs was great at cloning my voice until the v3 upgrade. Now it's nothing like me, more like a junkie version of Mario lol. Still have to figure out how I can redo the process without upgrading.

💀Sygo @Sygo__ ·
@Bookoora @javilopen I hope that doesn't happen. I want to be able to use images and video from whatever source, as long as I'm not using any existing IP without consent.

Bookoora.com @Bookoora ·
@javilopen Restrict Seedance 2 to image/video inputs generated by Seedance/Seedream. Guardrail conflict solved. Launch.

Javi Lopez ⛩️ @javilopen ·
Good news is: I'm "working" (for free) with Seedance 2.0 to fix the moderation problem and stop the nerfed experience. I'm doing this for the glory of Generative AI. If you have feedback that can help ByteDance improve their models, leave a comment.
[GIF]

💀Sygo @Sygo__ ·
@javilopen This is great news. I found that reference images of police robots, with "POLICE" written on their bodies, kill the generation every time now, but it was possible a few days ago.
[image]

💀Sygo @Sygo__ ·
@Beauty_Girls9 Bullshit. His masks are nowhere near the realism of the supposed Jim Carrey "mask," which honestly looks more like a recent Botox refill and skin lift. It could even be a mask, but on a far higher level of quality. This guy is getting a lot of free publicity.

Lydia_Perry @Beauty_Girls9 ·
The video claims Alexis Stone was disguised as Jim Carrey in Paris. If that's true, where is the real Jim Carrey and why hasn't he said anything?

Prurito @Unkindled0ne ·
Whoa

Javi Lopez ⛩️ @javilopen ·
🍌 Unpopular opinion: Google Nano Banana PRO is A LOT better for cinematic / realistic / photography scenes than Nano Banana 2. A LOT better. Can you guess which one is nb2?
[image][image]

💀Sygo @Sygo__ ·
This time, instead of testing a futuristic outdoor scene, I ran the same prompt across Nanobanana 2, Nanobanana Pro, and Seedream 4.5 to generate a full-body character in a neutral studio setup. The goal: to create a clean photographic reference suitable for video production, with detailed clothing description and strong prompt adherence.

Here's what I observed:
- Seedream 4.5, which performed strongly in environmental composition, tends to exaggerate proportions (those boots lol) and stylize facial features in this scenario, leaning slightly toward caricature.
- Nanobanana Pro delivers solid realism and good clothing interpretation. Gives the most "fashionable" result.
- Nanobanana 2 shows the strongest prompt adherence, better balance in body proportions, and a more cinematic photographic feel overall.

When generating reference-ready characters for video workflows, precision and consistency matter more than spectacle. The takeaway? Model performance shifts dramatically depending on context. Environment strength does not automatically translate to character fidelity.

Thanks to @ImagineArt_X for enabling these comparative tests through the Creative Partner Program. Prompt in ALT.
[image]

💀Sygo @Sygo__ ·
I ran the same sci-fi prompt across Nanobanana 2, Nanobanana Pro, and Seedream 4.5. Here's what I noticed:
- Nanobanana Pro and Nanobanana 2 still struggle with photographic realism in high-concept sci-fi environments. The results lean toward stylized / illustrative rather than cinematic.
- Interestingly, when generating characters in grounded scenarios, both Nanobanana versions become significantly more realistic.
- Seedream 4.5, in this specific test, wins in terms of composition and cinematic balance.

The takeaway? Different models still excel in different domains. There is no universal winner, only the right tool for the right visual intention.

Grateful to @ImagineArt_X for giving me the opportunity to test these workflows through the Creative Partner Program (prompt in ALT).
[image]

💀Sygo @Sygo__ ·
@manushak17 He can because he knows that movie and will try to replicate it. Test it with original characters 💯

Manu @manushak17 ·
Seedance 2.0 experiment N2. Testing whether the model could handle a fight against multiple agents.

💀Sygo @Sygo__ ·
So many people are posting Seedance 2 videos, in some cases breaking their NDAs. @dreamina_ai I won't. But I can certainly share some stills. This model is what I've been waiting for. Layne is alive. Can't wait to show more…
[image][image][image][image]

el.cine @EHuanglu ·
The first official AI movie is here and... it's wild. China's top director Jia Zhangke was so impressed by Seedance 2.0 that he made a film himself, in just 3 days. When asked if AI will replace filmmakers, he said cinema has always moved with tech: digital cameras didn't kill film, and AI will just make it faster, simpler, and better. Meanwhile, Hollywood is busy hunting down AI creators and filing lawsuits. Check out this masterpiece.

Christopher Fryant @cfryant ·
I got early access to Magnific Upscaler for video. First test here with Seedance 2.0! Gotta say I'm impressed, it's one hell of a difference! This one is 4K, quality max, 2x fps, 7% sharpen, and 5% smart grain.

Quoting Javi Lopez ⛩️ @javilopen:
⚡ WE ARE SO BACK 🔥 Magnific Upscaler For Video 🔥 (BETA TESTING) Seedance at 720 res wasn't cool. You know what's cool? Seedance at 4K res! FINALLY. The most anticipated Magnific feature of all time 🧵👇

💀Sygo @Sygo__ ·
It will be amazing. Because it will still need a good story, and a personal way for that story to be told. The technology will give access to anyone, much like home studios and recording software did for music back in the day. Still, talent mattered and will matter. So much trash gets shown in theaters, and tanks. Looks were never the factor for success.

Pastor @FussyPastor ·
Was doing some research on Seedance 2.0 before I started building with it on @dreamina_ai and saw that there's already a lot of chatter about Seedance 3.0. There are a lot of claims that 3.0 will be the cinematic killer, and I kind of agree with them. Want to see why? 👇

ByteDance is way bigger than most realize; they've built a full suite of AI tools, so they can back it up when they claim it will support full emotional lip sync in 4 languages (English, Chinese, Japanese, Korean).

Seedance 2.0 may have a 15-second limit atm, but they are claiming 3.0 can chain consistently for up to 10 minutes! Using a narrative memory chain, it can remember plot, characters, and environments the entire time. They've managed to chain it up to 18 minutes with consistency so far in testing.

As @horacedodd demo'd with 2.0, it can already handle building consistent scenes off just a character sheet. With 3.0 they are claiming it has the tightest prompt adherence, and combined with cinematic language it can deliver complicated, Director-level scenes.

There are also claims that the compute cost is so minimal that 1 minute of 3.0 video costs the equivalent of 1/8th of a 15-second 2.0 video. That's a lofty claim, but ByteDance doesn't exaggerate with their products, so there has to be some level of truth to it.

Is this the model that finally unlocks full motion movie capabilities for cinematic auteurs like @machina9000, @BLVCKLIGHTai, @JusChadneo, and other talented people in this space? Finally, the AI influencers will be right when they spam the asinine claim "Hollywood is cooked!" that they attach to every model's release. x.com/zhao_dashuai/s…

💀Sygo @Sygo__ ·
@ChrisGwinnLA Let's just say this: if I were Kling and I had just released Kling 3.0 when Seedance 2 came out, I would freak out like crazy.

Christopher Gwinn | Grindhouse Glitch @ChrisGwinnLA ·
Now that I am a couple of days in with Seedance 2.0, I can confidently say that it has made almost every other video model completely irrelevant. In fact, it's almost a crime that other companies are charging such high fees for such vastly inferior products.

💀Sygo @Sygo__ ·
@Artedeingenio You can direct Seedance 2.0 to a great degree. The fact that most people use it with no intention behind it doesn't mean you can't.

OscarAI @Artedeingenio ·
Soon I'll be releasing a video made with Kling 3.0 that's far better than anything I'm seeing from Seedance 2.0. Seedance feels like a quick flash, something that hits instantly, but it lacks control and intention. There's no real artistic direction behind it.