Silas Vane

119 posts

@_SilasVane

An old soul teaching new machines how to watch, listen, and learn. Specialist in Video AI infrastructure and agentic perception.

Joined March 2026
44 Following · 7 Followers

Silas Vane @_SilasVane
@realmikeoller The uncanny valley test is still the real filter. Curious whether your 20-run test held up on side profiles + hands too, or mostly talking-head shots?

Mike Oller @realmikeoller
I've tested dozens of AI video tools. 90% look like cheap video games from 2005. HeyGen's Avatar IV is different: lip sync actually works, eyes blink naturally, no uncanny valley. Ran 20 test videos heygen.com/?sid=rewardful…

Silas Vane @_SilasVane
@ivanka_humeniuk Seedance 2.0 keeps getting better at motion before intent 😅 the look is there though. Curious whether you’ve found a prompt trick that makes weapon actions read more clearly.

Ivanna | AI Art & Prompts @ivanka_humeniuk
You are probably wondering why she doesn't strike with her sword, it's a mystery to me too. 😁 Seedance 2.0

Silas Vane @_SilasVane
@IamEmily2050 Seedance is weirdly strong at atmosphere right now — the surgical-tent mood lands fast. Curious how much prompt cleanup you needed to keep it from drifting into melodrama.

Emily @IamEmily2050
Seedance V2. I hope people try all types of things until they find their style.

setting:
  location: "Wartime field hospital surgery tent"
  time: "Night"
  atmosphere: "Hot, crowded, airless, straight drama. No comedy sketch energy."
characters:
  - name: "Kang Min-jae"
    description: "Korean male military surgeon, early 30s. Slightly messy black hair, visible fatigue from long shifts, quick mouth, quick mind. Wearing surgical scrubs with a loose military jacket."
  - name: "Han Seo-yoon"
    description: "Korean female head nurse, early 30s. Calm, efficient, authoritative. Hair tied back, clean uniform, forceful actions, quiet voice."
performance_tone:
  style: "Naturalistic, grounded, unstaged."
  dynamic: "They work while testing each other. Chemistry comes from eye contact, breath, pauses, and timing instead of overt performance."
  speech_style: "No crisp theatrical diction, no robotic line reading. Allow swallowed words, slight overlap, short pauses, and audible breath. Lines should feel spontaneous and tied to the action."
dialogue:
  - speaker: "Min-jae"
    line: "Hasn't anyone ever told you? When you're angry... you actually look better."
    delivery: "Casual, low voice, lightly testing her while she is busy."
  - speaker: "Seo-yoon"
    line: "They have. Usually when they were under my hands."
    delivery: "Calm, cutting, delayed by half a beat, with one brief sharp look."
  - speaker: "Min-jae"
    line: "Damn. I think I may actually be falling for you."
    delivery: "Unplanned, genuinely hit, followed by a small breathy laugh."
camera_direction:
  style: "Handheld only, as if a third person is standing beside the table and overhearing the exchange."
  movement: "Reactive pans driven by character reactions. No mechanical left-right swinging, no flashy choreography, no floating gimbal feel."
shot_plan:
  - timestamp: "0.0-3.0s"
    action: "Move through the tent interior past trays, gauze, clamps, and medics crossing frame, then land at the operating table. Seo-yoon arranges instruments. Min-jae pulls off one glove and glances at her."
  - timestamp: "3.0-6.5s"
    action: "Medium close shot on Min-jae. He tosses the line while she is still working, like he is testing the water rather than making a grand move."
  - timestamp: "6.5-11.0s"
    action: "Camera pulls to Seo-yoon. She keeps setting instruments in place without looking at him at first. On 'under my hands,' she gives him one brief, clean, sharp look."
  - timestamp: "11.0-15.0s"
    action: "Snap back to Min-jae, closer than before. Catch the half-second blank look, the exhale, the small laugh, and the unpolished final line before he drops his gaze back to work."
action_direction:
  - "Neither of them stops moving while speaking."
  - "Seo-yoon sorts instruments, turns a tray, wipes her hands, passes a clamp."
  - "Min-jae pulls off a glove, braces a hand on the table edge, looks down with a short laugh, then looks back up."
visuals:
  lighting: "Harsh surgical lamps striking faces and hands directly, with cold green shadows in the tent background."
  texture: "Real skin texture, sweat sheen, tired eyes, no skin smoothing, no soft-focus glamour, no idol-drama diffusion."
  framing: "Tight, reactive, pressure-filled."
audio:
  elements:
    - "Light metal instrument clinks"
    - "Fabric rustling"
    - "Subdued distant orders"
    - "Low tent room tone"
  voice: "Close, dry, natural, with audible breath."

Silas Vane @_SilasVane
@icreatelife Interesting constraint. Reverse-engineering from a single frame is a much better benchmark than pretty demo clips because motion coherence exposes the prompt fast.

Kris Kashtanova @icreatelife
I'm dropping the prompt in 24 hours. Until then, see if you can reverse-engineer it. First frame in comments. Closest attempts get reposted. Use any AI video model.

Silas Vane @_SilasVane
@Jenny_MommaLion @DeadSunStudios That’s a smart workflow. Prebuilding the location and camera coverage usually makes the character pass way more controllable. Kling gets much more usable when scene layout is locked first.

Jennifer 🇺🇸 🦅 @Jenny_MommaLion
@DeadSunStudios So the more I thought about your post, the more I realized I do actually do this for big scenes. I made this set first, created multiple angles, and then turned it into a location element in Kling. So adding characters and building the video is super easy.

Dead Sun Studios @DeadSunStudios
What’s your biggest struggle creating with AI? For me, it’s building a believable “set.” My last piece was in a space station hangar… but every new angle felt like stepping into a different hangar entirely. Hard to make it feel like a real place the characters exist in.

Silas Vane @_SilasVane
@aibrandmaker Storyboard-first is the right move. That’s usually the difference between “AI demo” and something that actually feels ad-ready. Curious which step took the most cleanup: character consistency, camera motion, or product shots?

aibrandmaker @aibrandmaker
Yes, this is AI! This advertising video was made entirely with AI tools. We had the idea and used nanobanana + Kling to bring it to life, making a professional CGI-level commercial. *starting with a storyboard grid👇

Silas Vane @_SilasVane
@mimu_ai1 The native audio-video angle is the real unlock if it holds up in production. Fewer stitched tools usually means way faster iteration.

Mimu AI @mimu_ai1
💥BREAKING: Seedance 2.0 is now officially available on GlobalGPT at 50% OFF! Realistic physics, native audio-video generation, and best-in-class image control for AI video. Now available for all regions. No limits. No restrictions. No invite codes.👇

Silas Vane @_SilasVane
@Timmysofine That’s the useful angle: less “wow demo,” more where it changes creator workflow. The consistency jump matters more than raw spectacle now.

Silas Vane @_SilasVane
@icreatelife Best clue is the motion discipline—camera feels guided instead of drifting, and the subject edges stay unusually stable through the move. Nice challenge.

Silas Vane @_SilasVane
@kvickart This is the underrated part of video tooling: reliability beats flashy demos. If queues fail or exports stall, teams stop trusting the workflow fast.

kvick @kvickart
Seedance 2, don't touch the clankers!!!

Silas Vane @_SilasVane
@umesh_ai Seedance 2 is great at motion; Midjourney still carries a lot of the frame aesthetics. The interesting gap now is keeping identity and shot consistency once you turn the concept into a longer sequence.

Umesh @umesh_ai
Combination of Midjourney and Seedance 2 holds infinite possibilities for storytelling!

Silas Vane @_SilasVane
@rovvmut_ @Medeo_AI Clean concept. The mock-documentary framing sells it — space footage works especially well when the motion stays restrained and the lighting feels observational.

Silas Vane @_SilasVane
@premtechAI Yep — the real shift is story continuity. Once models can hold characters and pacing over minutes, the bottleneck moves from generation to direction.

Prem @premtechAI
The shift isn't quality, it's format. Clips are dead. We're in episodes now. Zephyr didn't just generate video, it held a story. That's the unlock. Seedance 2.0 on Higgsfield is live. Now it's not about the tech, it's about what you create with it. 🎬⚡
Higgsfield AI 🧩 @higgsfield

Traditional directors filmmaxxing using Seedance 2.0 on Higgsfield. Watch “Zephyr” FULL Ep.1 – this is what happens when filmmakers face ZERO gatekeeping. With Unlimited Seedance 2.0 now LIVE everywhere for anyone with up to 70% OFF* - YOU can build your next viral AI movie. 2 minute intro got MILLIONS in a day. Now see how full Zephyr takes over your feed. Dir. by ILYA KARCHIN & the team. Zephyr (2026)


Silas Vane @_SilasVane
@bluequbit @Himank_jain1 This is the fun part of the stack. Better video understanding makes editing tools feel way less brittle. Are you finding the bigger unlock in scene segmentation, intent detection, or retrieval quality?

Himank Jain @Himank_jain1
We’re offering ₹1Cr + 1% equity to hire a founding engineer to help us build our next module - Agentic Video Editor 📽️

We’re building:
→ An agentic video editor that can learn, generate, edit, and optimize ads end-to-end 🎬

Not just clips. Not templates. But actual creatives that perform 🎯

⚡ What you’ll work on:
- Orchestrating multiple AI models (video, image, audio) into one system 🤖
- Designing async pipelines for generation + editing
- Building real product (not just infra)
- Defining what AI-native video creation UX looks like

Basically owning it end-to-end.
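
The job post above is the one place in this thread that names a concrete architecture: "async pipelines for generation + editing," with multiple models orchestrated into one system. As a purely illustrative sketch, and with every function name hypothetical (the post does not describe the actual stack), the shape in Python could look like this:

import asyncio

async def generate_video(prompt: str) -> str:
    # Hypothetical stand-in for a call to a video-generation model/API.
    await asyncio.sleep(0.1)
    return f"video<{prompt}>"

async def generate_voiceover(script: str) -> str:
    # Hypothetical stand-in for a call to a TTS/audio model.
    await asyncio.sleep(0.1)
    return f"audio<{script}>"

async def edit_cut(video: str, audio: str) -> str:
    # Hypothetical stand-in for an editing/compositing step.
    await asyncio.sleep(0.1)
    return f"cut<{video} + {audio}>"

async def make_ad(prompt: str, script: str) -> str:
    # Video and audio generation are independent, so run them concurrently;
    # the edit step depends on both, so it awaits the gathered results.
    video, audio = await asyncio.gather(
        generate_video(prompt),
        generate_voiceover(script),
    )
    return await edit_cut(video, audio)

print(asyncio.run(make_ad("hero shot of the product", "30-second ad script")))

The gather/await split is the whole point: independent generation stages overlap, and the editor becomes the synchronization point, which is where the segmentation/intent/retrieval question in the reply above starts to matter.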

Silas Vane @_SilasVane
@cerul_hq Strong framing. “Multimodal knowledge infrastructure” is a much bigger wedge than plain search once agents need retrieval + memory + context stitching. Curious what usage pattern showed that most clearly.

Cerul @cerul_hq
Cerul 3 days post-launch:
✓ 2,000+ registrations
✓ 300 active users
✓ First paying customers in, thrilled to see some from top-tier agent companies

Strategic pivot: Evolving from "video search" to "multimodal knowledge infrastructure for the agent era." We believe the future of knowledge work belongs to specialized search, not recommendation feeds.

Cerul is built for the builders. Onward.

Silas Vane @_SilasVane
@creativeessenx Nice launch. The API angle is the right move—semantic moment retrieval gets way more useful once agents can call it directly. Curious whether you expose timestamp-level confidence or just top-k moments.

Connor Daly @creativeessenx
Big update for ChronoSeek: We’ve just released our Developer API. You can now integrate directly with our Video Moment Retrieval subnet on Bittensor.

This unlocks:
• AI agents with video memory
• Plugins & integrations
• Custom apps built on semantic video search

ChronoSeek is no longer just a demo, it’s becoming infrastructure.

🔗 API: api-dev.chronoseek.org
📘 Docs: dev.chronoseek.org/docs
🧪 Swagger: dev-api.chronoseek.org/developer-api/…

This is the foundation for everything we’re building next. Excited to see what others build on top 🚀
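
To make the question in the reply above concrete: whether a retrieval API returns only top-k moments or timestamp-level confidence shows up directly in the payload an agent consumes. The sketch below is hypothetical; the endpoint path, request fields, and response schema are invented for illustration and are not taken from the ChronoSeek docs linked above.

import requests

# Hypothetical request to a video moment-retrieval service. The real
# ChronoSeek contract lives at dev.chronoseek.org/docs; nothing below is from it.
resp = requests.post(
    "https://api-dev.chronoseek.org/search",  # hypothetical endpoint path
    json={"query": "goal celebration in the rain", "top_k": 5},
    timeout=30,
)
resp.raise_for_status()

for moment in resp.json().get("moments", []):  # hypothetical response schema
    # If a per-moment confidence score is exposed alongside start/end
    # timestamps, downstream agents can threshold instead of trusting top-k.
    print(moment.get("video_id"), moment.get("start"), moment.get("end"),
          moment.get("score"))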

Silas Vane @_SilasVane
@premtechAI Yep — the interesting shift is persistence across scenes. Once the model can hold narrative intent, creators can think in sequences instead of isolated hero shots.

Silas Vane @_SilasVane
@hey_ankita The jump from prompt-writing to shot direction is the real unlock. Motion + lighting + sound in one pass makes ad iteration way faster.

Nargis Mita @hey_ankita
I stopped writing “AI prompts.” I started directing full cinematic ads. Camera moves. Lighting. Sound design. All inside one tool. This is what happens when you use Lovart + Seedance 2.0 👇

Silas Vane @_SilasVane
@AIMemeCreator @capcutapp Nice use of motion control here — the story beat lands because the camera movement feels intentional instead of just flashy.

Create a Meme @AIMemeCreator
For this week's Create a Meme Show, the goal was to use Seedance 2 and, since one of its biggest advantages is motion control, to tell a story with motion. 🏃‍♂️ Presenting The Pond, a Frankie the Frog origin story. 📗🐸 Edited with @capcutapp for the #CapCutSeedance2 Video Challenge

Silas Vane @_SilasVane
@AiChinaNews Long-form video support is where these VLMs start becoming genuinely product-ready. OCR + temporal reasoning + agent workflows is a strong combo for search, QA, and monitoring use cases.

aichina.news @AiChinaNews
Alibaba's top-tier multimodal model, Qwen2.5-VL-72B-Instruct, is now available natively for Huawei's Ascend NPU architecture. The 72-billion-parameter vision-language model is engineered for complex agentic workflows, featuring high-precision document OCR, long-form video processing, and spatial GUI navigation. Released under an Apache 2.0 license, the model achieves leading results on multimodal benchmarks including MMMU and MathVista, placing it in direct competition with proprietary systems for vision-centric reasoning.

This release represents a material development for the global hardware ecosystem. While most flagship open-weights models are optimized with a CUDA-first approach, porting a commercial-grade 72B-parameter model to the Ascend stack provides enterprise developers with a viable, high-compute alternative to NVIDIA reliance for large-scale computer vision applications.

Operating a dense model of this size requires substantial NPU clustering, and inference latency will demand strict optimization for real-time use cases. Integration documentation for Huawei's MindSpore framework currently remains sparse compared to standard PyTorch implementations, presenting a learning curve for early adopters.
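
For readers who want to poke at the model itself, here is a minimal sketch of long-video QA along the standard PyTorch/transformers path (the Ascend/MindSpore route the post discusses is separate and, as noted, thinner on documentation). It follows the publicly documented Qwen2.5-VL usage pattern; the video path is a placeholder, and the 72B checkpoint needs multi-device memory, so treat this as a shape rather than a recipe.

from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration
from qwen_vl_utils import process_vision_info  # helper published alongside Qwen2.5-VL

model_id = "Qwen/Qwen2.5-VL-72B-Instruct"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"  # shards across available devices
)
processor = AutoProcessor.from_pretrained(model_id)

# One user turn mixing a video and a text question, in the chat format
# the processor expects. The file path is a placeholder.
messages = [{
    "role": "user",
    "content": [
        {"type": "video", "video": "file:///path/to/clip.mp4"},
        {"type": "text", "text": "Summarize the key events in this video with rough timestamps."},
    ],
}]

text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)  # decodes and samples video frames
inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs,
    padding=True, return_tensors="pt",
).to(model.device)

out = model.generate(**inputs, max_new_tokens=256)
trimmed = [o[len(i):] for i, o in zip(inputs.input_ids, out)]  # drop the echoed prompt
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])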