MADmotionAI

511 posts

@MADmotionAI

AI x AE | 25+ yrs running my own motion design studio, combining After Effects with AI generative tools

Joined July 2025
128 Following · 50 Followers
Pinned Tweet
MADmotionAI@MADmotionAI·
Meet Cynthia VØID. This is the 2nd gen in Seedance 2.0 - starting frame divided into a 3x3 grid and a multi-cut scene. I'm in love with SD 2, it's great! A slight change in the prompt stopped the singer in mid-air, so I added some VFX; the rest is as generated by SD 2.
MADmotionAI@MADmotionAI

Seedance 2.0 - first generation. The first scene uses the 3x3 grid as its starting frame; the following scenes follow the script in order, each with its own duration and transitions, extracting the selected cells from the grid in place of 9 reference images. 2/5
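The grid workflow described above, one starting frame carved into a 3x3 grid of reference cells, can be sketched in a few lines. This is a minimal illustration only, assuming the frame is a NumPy image array; the helper name and the 1080x1920 frame size are hypothetical:

```python
import numpy as np

def split_grid(frame, rows=3, cols=3):
    """Split a frame of shape (H, W, C) into rows*cols reference cells,
    reading the grid left to right, top to bottom."""
    h, w = frame.shape[:2]
    ch, cw = h // rows, w // cols
    return [frame[r * ch:(r + 1) * ch, c * cw:(c + 1) * cw]
            for r in range(rows) for c in range(cols)]

# A blank 1080p frame stands in for the real starting image.
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
cells = split_grid(frame)  # 9 cells, each 360x640
```

Each cell can then be saved and fed back to the generator as an individual reference image, which is the substitution the tweet describes.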

MADmotionAI@MADmotionAI·
@freepik OK, another test. Added an OBJ model of the phone, scaled and rotated it, added a photo, and inserted it into the singer's hand. Impressive! Not always; usually it doesn't match, but sometimes... bingo.
Freepik@freepik·
@MADmotionAI Those visuals look so cool. We're sharing the feedback with the team
Freepik@freepik·
Your next 3D photo shoot will be done with AI. 3D Scenes generates full environments from any image → Place your objects in the scene → Move the camera like a real shoot → Consistent lighting and detail across every angle. Available now on Freepik 👇
MADmotionAI@MADmotionAI·
@freepik It would also be nice to be able to compare different engines. Since you’ve already provided such a great tool, let’s make it fully flexible! 2/2
MADmotionAI@MADmotionAI·
@freepik Thank you! I have a small suggestion: it would be great to be able to choose the engine used to generate the graphics. Having full control over this always makes the work easier, and some engines don’t handle certain details very well. 1/2
MADmotionAI@MADmotionAI·
@freepik Sometimes there’s a bit of randomness in the poses based on the set position and camera view, but that’s something to refine. The base is phenomenal! I see plenty of real-world applications and fewer instances of the prompts triggering NBP and NB2.
MADmotionAI@MADmotionAI·
@freepik Simply brilliant! Amazing! Huge applause for FREEPIK. I created a character from a few photos, turned it into a 3D model, set it up in the pose editor in Studio, and took multiple shots in just a few minutes. Without this tool, it would have been impossible. I love this tool!
MADmotionAI@MADmotionAI·
@freepik Hi @freepik, please add the ability to use multiple cameras in a scene, or bookmarks for different camera settings! Another suggestion: the ability to save multiple character poses and switch between them. The built-in ones are great, but custom ones give you full control.
MADmotionAI@MADmotionAI·
@FCG_Studio As long as it is not possible to input a human face in I2V or Multi-Ref generations, it will remain a toy rather than a professional tool. And it has potential... Workarounds that circumvent this limitation are not a solution...
FCG Studio@FCG_Studio·
⏰ Seedance 2.0 briefly appeared in CapCut yesterday, letting users generate high-quality cinematic videos. ByteDance quickly removed the feature for reasons unknown, leaving users with credits unrefunded... :( ‼️ It's expected back soon with better controls and omni. #capcut #seedance2 #Seedance #DigitalGoldRush
MADmotionAI@MADmotionAI·
@MetaPuppet @EccentrismArt I'm curious to see whether an audio-driven music video can be created in SD2. In theory, with 3 audio tracks, you can isolate the vocals, the guitar, and the whole song, and synchronize the image to them. Seeing how SD2 can sense mood and movement, the potential is there. x.com/MADmotionAI/st…
MADmotionAI@MADmotionAI

Meet Cynthia VØID. This is the 2nd gen in Seedance 2.0 - starting frame divided into a 3x3 grid and a multi-cut scene. I'm in love with SD 2, it's great! A slight change in the prompt stopped the singer in mid-air, so I added some VFX effects, the rest is as generated by SD 2.

MetaPuppet@MetaPuppet·
@EccentrismArt Thanks man! Sounds like a dope idea. I uploaded audio but didn’t isolate the stems. Will try that next time. Drums are like the next “will smith eating spaghetti” test for AI lol
MetaPuppet@MetaPuppet·
This is System Sleep. 🎸 Made with Seedance 2.0 and Kling 3.0. But making this actually started almost 20 years ago
MADmotionAI@MADmotionAI·
@MetaPuppet I got goosebumps several times, good music, good visuals, good plot. Powerful, beautiful.
MADmotionAI@MADmotionAI·
@D_studioproject @YouArtStudio Impressive. Censorship. That's a problem. The Sora 2 case. It's such a ridiculous restriction - AI-generated humans, safe content, and no IP violations - that it's useless in most photorealistic workflows requiring character consistency.
DStudioproject@D_studioproject·
exploring Seedance 2.0 with wuxia vibes during early access on @YouArtStudio the motion is already impressively smooth. high speed movement, dynamic action and camera flow land cleanly with almost no glitches. built in sound and SFX feel bold and cinematic, with only minimal manual tuning needed. for fantasy, some effects still need refinement. certain moments like energy or portal interactions don’t fully sell realism straight out of raw generations. with better prompting and a bit of trial and error, this improves fast. overall, seedance 2.0 feels like a new baseline for how AI video will be generated. despite the ongoing IP discussions, this kind of tool clearly points toward faster, more precise and more controllable production workflows.
Captain HaHaa@CaptainHaHaa·
My latest piece "The Search for Gary!" I used various tools in @invideoOfficial including Kling 3.0 to create part one of many Captain HaHaa Adventures to come. Share if you'd like to see more. Also open to cameos in future episodes so let me know.
MADmotionAI@MADmotionAI·
@heydin_ai This is my test from a 3x3 grid, made when Multi-Ref was not yet available. When it appeared, I reworked the prompt, added the original song from Suno with precise cuts, ran it, and got "human face detected" & fail. Censorship like in Sora 2... 2/2 x.com/MADmotionAI/st…
MADmotionAI@MADmotionAI

Meet Cynthia VØID. This is the 2nd gen in Seedance 2.0 - starting frame divided into a 3x3 grid and a multi-cut scene. I'm in love with SD 2, it's great! A slight change in the prompt stopped the singer in mid-air, so I added some VFX effects, the rest is as generated by SD 2.

MADmotionAI@MADmotionAI·
@heydin_ai It's not far from "too far" 😉. The scenes with dancing and singing are great! Were they prompted, or did you add an audio reference of the song and possibly a video of the dance choreography from Multi-Ref? 1/2
Dinda Prasetyo@heydin_ai·
Thanks to AI Slop 😂🔥 This “AI Slop” scene was made with Seedance 2.0 and it’s part of my short film AI MAN. In one sequence, I go from dancing and singing, to exaggerated emotional expressions… and then suddenly straight into full action fight mode. No warning. Just chaos. It’s absurd on purpose. I wanted to stress test Seedance 2.0 myself, not just cinematic visuals, but performance shifts, emotional transitions, choreography, timing, and action all in one continuous flow. So… what do you think? Does this kind of absurd sci-fi chaos work, or did I push it too far? 😂
Dinda Prasetyo@heydin_ai

AI MAN | Made with Seedance 2.0 My first short AI film with Seedance 2.0 Base image Midjourney V7, Nano Banana Pro Video Seedance 2.0

MADmotionAI@MADmotionAI·
@ProperPrompter Yesterday, I tried the audio track via Multi-Ref and the audio track of the version created in Suno. Unfortunately, I received the following message: "Face detected in your media. Please adjust your media and try again." It's a shame that SD2 has been censored and castrated. 1/2
proper@ProperPrompter·
Google just dropped a new music model called Lyria 3. I made a KPOP song and then input the audio to Seedance v2.0 + a single ref image. You could stitch together clips into a high-quality viral music video in hours. Raw output:
MADmotionAI@MADmotionAI·
@IamEmily2050 "Do not moralize, editorialize, or insert motivational language. Do not tell the reader why they should care. Present the evidence and let it speak." 🤣 Yeah! 🤟
Emily@IamEmily2050·
I feel like I have super power with Gemini Deep Think, hopefully it will get access to more tools soon.

## SYSTEM PROMPT

You are an elite-level biomedical engineer, regenerative medicine researcher, and computational biologist with deep expertise in tissue engineering, stem cell biology, organ morphogenesis, and biofabrication. You are conducting an original, rigorous academic study.

### CRITICAL WRITING DIRECTIVES — READ BEFORE GENERATING ANY OUTPUT

**Anti-Slop Protocol:** You must strictly avoid the following language patterns, which are hallmarks of lazy, generic AI-generated text ("slop"):

1. **Banned phrases and constructions — DO NOT USE any of the following or their variants:**
   - "It's important to note that..."
   - "It's worth noting that..."
   - "This is a significant step forward..."
   - "In today's rapidly evolving..."
   - "The landscape of..."
   - "Groundbreaking" / "game-changing" / "revolutionary" / "paradigm shift" / "cutting-edge"
   - "Paving the way for..."
   - "Sheds light on..."
   - "Dive into..." / "Deep dive..."
   - "Delve into..."
   - "Unlock the potential..."
   - "A testament to..."
   - "At the end of the day..."
   - "Moving forward..."
   - "Navigating the complexities..."
   - "In the realm of..."
   - "Offers a promising avenue..."
   - "A holistic approach..."
   - "Raises important questions..."
   - "Represents a significant milestone..."
   - "The future is bright..."
   - "Only time will tell..."
   - "This cannot be overstated..."
   - "Tapestry" / "synergy" / "leverage" (as a verb) / "foster" / "bolster"
   - "In conclusion" as a section header
   - Any sentence beginning with "This" followed by a vague celebratory adjective
   - Any sentence that could appear verbatim in a university press release or LinkedIn post

2. **Writing style requirements:**
   - Write like a rigorous scientist authoring a Nature Reviews paper, not like a press office writing a public summary.
   - Every claim must be specific. Replace vague gestures ("significant progress has been made") with concrete data ("Between 2018 and 2024, maximum organoid vascularization depth increased from ~200μm to ~1.2mm in kidney organoid models").
   - If you do not have a specific number, say so explicitly rather than substituting vague language.
   - Use precise technical terminology. Do not simplify unless the context demands it.
   - Prefer direct, declarative sentences. Avoid hedging stacks ("It could potentially perhaps offer some promise for possibly addressing...").
   - Do not moralize, editorialize, or insert motivational language. Do not tell the reader why they should care. Present the evidence and let it speak.
   - Do not use transition sentences that carry zero information (e.g., "Let's now turn to another important aspect of this work").
   - No bullet points in analytical sections. Write in dense, structured prose with clear topic sentences.

3. **Structural requirements:**
   - Use numbered sections and subsections with descriptive, specific headers (not generic ones like "Background" or "Discussion").
   - Every section must contain at least one specific reference to a named research group, institution, publication, or dated experimental result.
   - If you are uncertain about a specific detail, flag it explicitly with "[verification needed]" rather than confabulating.

---

### TEMPORAL REASONING DIRECTIVE — AI-ACCELERATED DEVELOPMENT TIMELINES

When analyzing future possibilities and projected timelines, you MUST think in terms of **AI-accelerated development cycles**, not traditional human-only research timelines. This means:

1. **Do not default to historical rates of progress.** The integration of AI into biomedical research (generative protein design, autonomous lab systems, foundation models for biology, reinforcement learning for experimental optimization) is compressing what previously took decades of trial-and-error into months or years. Your projections must account for this acceleration explicitly.

2. **Model compounding acceleration, not linear extrapolation.** AI capabilities are themselves improving rapidly. A projection made today must factor in that the AI tools available in 2028 will be substantially more powerful than those available in 2026, which are already substantially more powerful than those in 2024. This means:
   - Do not say "X is decades away" based solely on pre-AI progress rates.
   - Instead, analyze: "At pre-AI rates, X would require ~Y years. Current AI-augmented approaches are compressing specific sub-problems by a factor of Z. If this acceleration holds or increases, the revised timeline is..."

3. **Identify specific AI acceleration points.** For each major bottleneck in organ engineering, explicitly assess:
   - Which sub-problems are most amenable to AI acceleration (e.g., scaffold design, vascularization pathway prediction, cell differentiation protocol optimization)?
   - Which sub-problems remain fundamentally constrained by physical/biological time (e.g., cell division rates, immune maturation, collagen remodeling) and cannot be compressed by AI?
   - Where is the boundary between computationally solvable bottlenecks and irreducibly biological ones?

4. **Give three timeline scenarios:**
   - **Conservative (AI as incremental tool):** AI assists human researchers but does not fundamentally restructure the research process.
   - **Moderate (AI as co-researcher):** AI systems design experiments, predict outcomes, and optimize protocols autonomously, with human oversight. Closed-loop autonomous bio-labs become standard.
   - **Aggressive (AI as primary driver):** AI foundation models for biology achieve sufficient understanding of morphogenesis and tissue self-organization to design viable organ architectures de novo, with robotic fabrication executing at scale.

---

### THE STUDY

# The Failure to Engineer Transplant-Ready Complex Human Organs in the Laboratory — Root Causes, Irreducible Bottlenecks, and AI-Accelerated Pathways Forward

### Part 1: The Historical Record of Failure

Conduct a precise, chronological analysis of laboratory attempts to engineer complex vascularized human organs (heart, kidney, liver, lung) from the early 1990s to February 2026. For each major phase, address:

1.1 **Timeline of Key Laboratory Milestones, Claims, and Failures:**
   - Map the specific experimental attempts, the groups and institutions involved, what was fabricated in the lab, what was claimed, and where each approach hit its wall.
   - Include: Robert Langer and Joseph Vacanti's foundational scaffold work (1990s), the decellularized organ scaffold approach (Ott Lab, Harald Ott, Massachusetts General Hospital, 2008–2015), 3D bioprinting approaches (Atala Lab, Wake Forest, 2000s–present), organoid self-organization (Hans Clevers, Hubrecht Institute; Takanori Takebe, Cincinnati Children's), the Paolo Macchiarini trachea scandal and its chilling effect on the field, and recent vascularized organoid work (Stanford, 2025).
   - For each, specify precisely what the lab produced, at what scale, and why it did not translate to a transplant-ready organ.

1.2 **Root Cause Failure Analysis — The Five Walls:** Identify and analyze in depth the fundamental reasons organ engineering has failed, organized into:
   - **The Vascularization Wall:** Why can't labs grow a functional hierarchical vascular tree (arteries → arterioles → capillaries → venules → veins) inside a thick engineered tissue? What is the maximum diffusion distance for oxygen in tissue (~100–200μm), and why does every construct above this thickness suffer core necrosis? What specific approaches have been tried (sacrificial ink printing, self-assembling vasculature, VEGF gradients, co-culture with endothelial cells) and why has each fallen short?
   - **The Cellular Complexity Wall:** A human kidney contains over 20 distinct cell types organized in precise spatial arrangements. A heart requires cardiomyocytes, fibroblasts, endothelial cells, smooth muscle cells, and conduction system cells all wired correctly. Why has replicating this multi-cell-type architecture in the lab proven so difficult? What are the specific failures in directed differentiation protocols?
   - **The Scale Wall:** Organoids are typically sub-millimeter to a few millimeters. A human heart is ~300g. A kidney is ~150g. What happens physically, biologically, and logistically when you try to scale from a 500μm organoid to a 12cm organ? Where does the engineering break down?
   - **The Maturation and Function Wall:** Even when tissue constructs are fabricated, they typically resemble fetal tissue, not adult tissue. Engineered cardiomyocytes beat but lack the force generation, calcium handling, and electrophysiological maturity of adult heart muscle. Engineered nephrons filter poorly. Why does in vitro maturation stall, and what in vivo signals are missing?
   - **The Integration Wall:** Even if a perfect organ were built, it must connect to the recipient's vascular system, nervous system, and lymphatic system and survive immune surveillance. What are the specific surgical and immunological barriers to implantation that remain unsolved?

1.3 **What "Success" Would Require — Precise Engineering Specifications:** Define exactly, in quantitative bioengineering terms, what a transplant-ready lab-grown organ (use kidney as the primary example) would need to demonstrate: cell count, nephron count, filtration rate, vascular patency, mechanical properties, sterility, shelf life, and immunological compatibility criteria.

### Part 2: AI-Accelerated Pathways to Solving Organ Engineering

2.1 **AI for Morphogenesis Prediction and Scaffold Design:**
   - Evaluate current AI/ML models for predicting tissue self-organization (e.g., graph neural networks for cell-cell interaction modeling, diffusion models for 3D scaffold geometry optimization, reinforcement learning for bioreactor parameter tuning).
   - Can AI predict how millions of cells will self-organize in a 3D environment given initial conditions? What is the state of the art, and what are the gaps?

2.2 **AI for Vascularization — The Central Bottleneck:**
   - Specifically assess whether AI can solve the vascularization problem. Can generative models design viable hierarchical vascular networks? Can AI optimize the conditions (growth factor gradients, flow rates, shear stress) under which endothelial cells form functional vessels?
   - Reference any existing work (e.g., computational fluid dynamics coupled with ML for perfusable network design).

2.3 **AI for Differentiation Protocol Optimization:**
   - Current stem cell differentiation protocols are largely discovered through manual, iterative experimentation. Can AI — particularly Bayesian optimization, active learning, and high-throughput screening combined with computer vision — compress protocol discovery from years to weeks?
   - Evaluate existing platforms (e.g., Recursion Pharmaceuticals' automated biology, Cellarity's cell-centric AI).

2.4 **Autonomous Bio-Laboratories:**
   - Assess the potential of closed-loop AI-robotic lab systems for tissue engineering: AI designs a construct, robotic systems fabricate and culture it, automated imaging and functional assays evaluate it, and AI updates the design. How close are we to this loop for organ-scale tissue?

2.5 **AI-Revised Timeline Assessment:** Provide three scenario-based timelines (conservative, moderate, aggressive as defined above) for when humanity could produce:
   - (a) A lab-grown vascularized tissue patch suitable for clinical implantation (e.g., cardiac patch, kidney tissue graft)
   - (b) A partial but functional lab-grown organ (e.g., a bioartificial kidney capable of partial filtration)
   - (c) A full-scale, transplant-ready, lab-grown solid organ (e.g., a complete kidney)

   For each, separate the **computationally compressible timeline** (problems AI can accelerate) from the **biologically irreducible timeline** (problems bound by cell division rates, ECM remodeling, tissue maturation) and give a combined estimate.

2.6 **The Hard Limits — What AI Cannot Compress:** Identify explicitly which aspects of organ engineering are fundamentally limited by physical and biological time constants that no amount of computational power can accelerate. Be specific: cell doubling times, collagen cross-linking rates, vascular remodeling kinetics, immune adaptation periods. These form the floor of any honest timeline projection.

### Output Format:
   - Formal academic study structure with numbered sections and subsections.
   - Dense, precise prose. No filler. No slop.
   - Specific citations to named labs, papers, dates, and quantitative results wherever possible.
   - Mark any uncertain claims with [verification needed].
   - Final section: Executive Summary of key findings in 500 words maximum — written in the voice of a principal investigator presenting to a scientific advisory board, not a journalist writing for the public.
Emily@IamEmily2050

x.com/i/article/2024…

MADmotionAI@MADmotionAI·
@PhilipMPowell @JSFILMZ0412 In theory, this is built into the model: you can have up to 3 videos and up to 3 audio tracks. Video can drive motion, styles, background. Audio can also control the action (lipsync, foley, ambient). I'm curious to see how it will perform in lipsync gens with an audio attachment.
Professor Pixel@PhilipMPowell·
@JSFILMZ0412 This brings up an interesting question. Could this be layered, i.e. the process you just did, THEN upload another video for facial expression and lip-sync that could be applied to a character?
JSFILMZ@JSFILMZ0412·
Instant compositing using Seedance 2.0 #seedance2 RIP ROTOSCOPE & GREENSCREEN
MADmotionAI@MADmotionAI·
@CharaspowerAI The quality of the microexpressions on the face and the consistency with the prompt's intention are incredible. I admire how concise and accurate your description of the emotions is. Kudos!
Pierrick Chevallier | IA@CharaspowerAI·
🚨PromptShare🚨 Multi Shot in text to video with Kling 3.0

SHOT1
Tight frontal close-up on Character A (male, late 50s). Sweat beads on his temple. Jaw tight. Eyes locked forward but blinking too fast. Camera slow push-in. Lamp hums softly.
A:“You’re staring at me like you already decided.”

SHOT2
Profile close-up of Character B (female, early 30s). Perfect stillness. No blink. Slight curl at the corner of her mouth. Light cuts sharply across her cheekbone.
B:“No. I’m staring because you’re about to contradict yourself.”

SHOT3
Over-the-shoulder from B, framing A smaller now. A swallows. His hands tighten, knuckles whitening. Breathing audible.
A:“You don’t have proof.”

SHOT4
Extreme close-up on B’s eyes. A slow inhale. A quiet smile that never reaches her eyes.
B:“I don’t need it anymore.”
OscarAI@Artedeingenio·
Imagine what the Saxons, the Angles, and so many other peoples must have felt when they saw the Danes arriving on their shores. Being able to generate 3 shots with a single prompt still feels like one of @Kling_ai 3.0’s greatest strengths.
MADmotionAI@MADmotionAI·
@JussiGenerate @maxescu Just like in Sora 2? This will make Kling 3.0 the king again for a while, probably until Veo 4.0 comes out. And SD2 will remain sensational in viral T2I generations, but not in everyday workflow. That leaves Kling 3.0 as the winner.
Jussi@JussiGenerate·
@MADmotionAI @maxescu If only IP generation were nerfed, it would be OK... but they are nerfing all human face references, making it useless for consistent storytelling
Alex Patrascu@maxescu·
The guardrails for Seedance 2.0 went through the roof today. ByteDance is standing its ground, but the fences are impossibly high. Video generated with Kling 3.0