Cheer Cheung
400 posts

Cheer Cheung
@learnwithcheer
CMO @evolinkai · 12K followers on TikTok · Helping devs access AI APIs faster & cheaper · 11K+ AI audience | daily tools & workflows ↓ Get API credits
Hong Kong · Joined July 2023
90 Following · 140 Followers

@hAru_mAki_ch wow, how did you use HappyHorse-1.0? can you share the platform?

🐴 Tried the mysterious AI video model "HappyHorse-1.0" that beats Seedance2.0 ❻
~Outdoor talking scene~
No weird background noise and no camera shake, isn't this good??!!
Audio might be HappyHorse-1.0's strong suit!!!
The behavior right after the speech ends bothers me a bit, but... if you call it natural, maybe it is natural??!!
Maki@Sunwood AI Labs.@hAru_mAki_ch
🐴 Tried the mysterious AI video model "HappyHorse-1.0" that beats Seedance2.0 ❺ ~Talking scene~ Even with dialogue it's perfectly solid!! The slight pauses bother me, but that may just be a matter of the lines and the clip length??!!

@romainhuet @ashebytes looking forward to it. The conversation with Peter is interesting

Really loved this conversation with @ashebytes.
Ashe is one of the most original and creative builders in the community.
We talked about building in public, relational intelligence, second brains, and how you can just build things with Codex.
Full episode out later this week!

hey my beautiful @fal friends - we would very much like seedance 2.0 access as well
some of our biggest competitors, all of whom have raised insane amounts of money and are therefore already at an advantage, have gotten access to the model
meanwhile, us poor bootstrapped founders have to wait while companies with questionable marketing ethics get to already deploy and make money from this powerful model
i think it would only be fair if you grant access to @genviral_, too
otherwise, how are we supposed to escape the permanent underclass introduced by the very same models you guys are distributing???


@seedance2_news That $2m is agreed to upfront, AND the only added perk is for the EARLY access version of Chinese Seedance 2.0
Imagine paying $2m, and the API releases next month or something 🤣

@xoMushinxo @samayousakadaru EvoLink's Seedance 2.0 API is optimized for proxy integration and supports async polling, giving you smooth parallel processing. For pricing on generated videos, see x.com/EvoLinkAi/stat…
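The async-polling pattern mentioned above is easy to sketch: submit a job, then poll its status until it completes, instead of holding a request open for the whole render. A minimal sketch follows; the status values ("pending", "processing", "completed", "failed") and the job-dict shape are my assumptions, not EvoLink's documented API.

```python
import time

def poll_until_done(get_status, interval=2.0, timeout=600.0):
    """Poll an async video-generation job until it finishes.

    get_status: callable returning a dict like {"status": ..., "video_url": ...}.
    The status strings used here are assumed, not taken from any provider docs.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        job = get_status()
        if job["status"] == "completed":
            return job
        if job["status"] == "failed":
            raise RuntimeError(f"generation failed: {job.get('error')}")
        time.sleep(interval)  # back off between polls
    raise TimeoutError("video generation did not finish in time")
```

In practice `get_status` would wrap an HTTP GET to the provider's job endpoint; because each poll is a short independent request, you can run many jobs in parallel with threads or asyncio tasks.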

@samayousakadaru Thank you 🙇♂️
Seedance 2.0 is a monster, right? That said, if I recall correctly it can't take audio input, and because of clip-length limits video generation is basically agent-driven on the assumption of API operation, so it's tough 😆 Video APIs are expensive, after all.
I'm hoping a distilled model comes out.

@Mentor EvoLink's Seedance 2.0 API also covers reference-to-video, solid for production workflows. Hope you can try it: evolink.ai/seedance-2-0

@koraybirand Add EvoLink to that list, check the price guide x.com/EvoLinkAi/stat…

Don't fall for ridiculous subscription traps just to use Seedance 2. There are places offering API service. For example wavespeed.ai and kie.ai are the ones I use often; among them, kie has added Seedance 2 support. You pay only for what you use.
480P
$0.0575/s, with video input
$0.095/s, no video input
720P
$0.125/s, with video input
$0.205/s, no video input
Kie only provides an endpoint service, while wavespeed offers an endpoint plus generation through a UI.
Strongly recommended, don't let the scammers make money off you.
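At those per-second rates, cost scales linearly with clip length and jumps sharply with resolution. A quick sketch of the arithmetic, using only the rates quoted above (the function itself is mine, not any provider's SDK):

```python
# Per-second Seedance 2 rates quoted above (USD).
# Keys are (resolution, has_video_input).
RATES = {
    ("480p", True): 0.0575,
    ("480p", False): 0.095,
    ("720p", True): 0.125,
    ("720p", False): 0.205,
}

def clip_cost(seconds, resolution="720p", video_input=False):
    """Estimated pay-as-you-go cost of one generated clip."""
    return round(seconds * RATES[(resolution, video_input)], 4)
```

For example, a 15-second 720p text-to-video clip comes to 15 × $0.205 ≈ $3.08, while the same clip at 480p with a video input is 15 × $0.0575 ≈ $0.86.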

@atomtanstudio thanks for your feedback. we will add this feature soon!!!

@learnwithcheer I just wish you were using the same endpoint as dzine.ai. They are able to use real human faces in Seedance 2.0 without any issues, but they are three times more expensive than you are.

I am ready to publish my first open source app that I feel really addresses some Twitter/X user pain points, but I am struggling to get a good quality video from the Seedance 2.0 API provider I'm using. Is it common for screen elements like you'd normally see in After Effects to come out horribly compressed? I feel like this provider is not giving me their best quality.

@0xikkun The official version of Seedance 2.0 is available on EvoLink. If you're building something, please give it a try 👉 evolink.ai/seedance-2-0

@karim_yourself Hello. If it's not a secret, could you please tell me how you got access to the Seedance 2 API?

@Rich_ard_Roe EvoLink runs on the official Seedance 2.0 endpoint; it is not a wrapper or a reverse-engineered version. It supports 15-second video generation, reference video input, and multimodal prompts. You can try it directly here: 👉 evolink.ai/seedance-2-0

@BTC_JMS @Astria_AI @SkyThomasGidge For Seedance 2.0 API, LoRA-style syntax isn't used — instead you pass reference images directly via the `reference_images` param alongside your text prompt. Both inputs work together. Docs + code examples: 👉 evolink.ai/seedance-2-0
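The reply above describes combining a text prompt with a `reference_images` list in one request. A minimal sketch of that request shape; the `reference_images` parameter name comes from the post, while the `model` and `prompt` field names and the overall payload layout are my assumptions, not EvoLink's documented schema.

```python
def build_generation_payload(prompt, reference_images):
    """Combine a text prompt with reference images for one generation request.

    `reference_images` is a list of image URLs passed alongside the prompt,
    as described above; the other field names here are assumptions.
    """
    return {
        "model": "seedance-2.0",
        "prompt": prompt,
        "reference_images": list(reference_images),
    }
```

The payload would then be POSTed as JSON to the provider's generation endpoint with your API key in an auth header, and the returned job id polled until the video is ready.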

Seedance 2.0 has officially landed on Astria! 🚀
Transform custom AI models into high end fashion videos in seconds. Click Video, select your model, set duration (up to 15s) and create.
Pro tips for the perfect E-commerce prompt:
1️⃣ The Foundation: Use Image 1 + Full Body Reference. Vibrant colors only (No B&W).
2️⃣ Shot Breakdown:
• Shot 1: Close-up portrait with a direct gaze.
• Shot 2: Peaceful scene, lying on back, arms stretched.
• Shot 3: Macro close-up of socks on wrinkled bedding.
3️⃣ Global Settings: High-end lighting + 1.5X speed for that perfect energetic rhythm.
The future of digital storefronts is here. 🎬✨
#Astria #Seedance #AIVideo #Ecommerce #AIArt

@learnwithcheer It is funny that you respond, because I am using EvoLink for this project. 🤣

@Lukealexxander Best direct API access I've found is here 👉 evolink.ai/seedance-2-0

@boochi_dot_dev @JSFILMZ0412 @robertdoleary seedance2 access is available here 👉 evolink.ai/seedance-2-0
no enterprise approval needed.

@JSFILMZ0412 @robertdoleary Where did you get access to seedance 2.0? Via cap cut or API?

@mhdfaran @Flovaai If you want raw API access to plug into your own workflow, this works as well 👉 evolink.ai/seedance-2-0

So @Flovaai just plugged in Seedance 2.0 and i need to talk about why this matters if you're making AI video.
try here: flova.ai/?refCode=E7CGS…
most platforms that support Seedance make you go through setup hell. API keys. config files. waiting. Flova skipped all of that. you click one button and Seedance 2.0 is running. same thing for NanoBanana. nobody else has this. i checked.
the actual output is where it gets interesting. the motion looks natural. not that weird AI drift where people slide across the floor like they're on ice. real movement. cinematic framing. the kind of stuff you'd actually put in a short drama without feeling embarrassed.
generation is stable too. no random crashes mid-render. no 45-minute queues. PRO subscribers can run 50 generations at the same time. fifty. for context, most platforms cap you at 10 and act like they're doing you a favor.
but what actually sold me is the workflow. you storyboard inside Flova. you generate inside Flova. you edit inside Flova. you're not bouncing between 4 tabs and 3 tools and a discord server hoping someone answers your question. the whole pipeline lives in one place.
if you're making AI short dramas or cinematic content and you're still stitching together workflows from 5 different tools you're spending more time managing tools than making videos.
#Flovaai #Flovaseedance #Seedance2_0 #AIShorts #VideoCreation
Flova now integrates Seedance 2.0 — unlocking next-level AI video creation. With Seedance 2.0, you get: • High-quality, long-form video generation • Strong motion consistency and cinematic output • Faster generation with significantly improved efficiency Flova also introduces











