Future Thinker - Benji

448 posts


@AIfutureBenji

We create videos and tutorials on using AI software for productivity and animation.

Joined November 2023
114 Following · 621 Followers
ACE Music @acemusicAI
ACE-Step-1.5-xl is out now. We scaled the DiT decoder to 4B, and it shows better audio quality, better prompt following, and better musicality. It's still fast: 8 steps with turbo distillation.

What didn't change:
- Same generation API, same LoRA training code, same everything
- All LM models (0.6B / 1.7B / 4B) fully compatible with all 3 variants
- Your existing projects work with XL, no changes needed

Try it for free at acemusic.ai
Weights and code:
GitHub: github.com/ace-step/ACE-S…
Hugging Face: huggingface.co/ACE-Step/Ace-S… (3 variants: xl-base, xl-sft, xl-turbo)
XL demos: ace-step.github.io/ace-step-v1.5.…
13 replies · 73 reposts · 364 likes · 31.4K views
Future Thinker - Benji @AIfutureBenji
@TeamYouTube Thanks, you guys. I don't often post emotional things publicly, but this case isn't about that few hundred in ad revenue; I'd spend that on one night out for drinks. It's about the truth and the justice.
0 replies · 1 repost · 16 likes · 434 views
Future Thinker - Benji @AIfutureBenji
Hi @TeamYouTube, my channel @BenjisAIPlayground has been flagged as inauthentic content, claiming I mass-produce automated content. I submitted an appeal explaining my content creation process. Can you guys take a look? Or ask my audience whether I automate my content or not.
95 replies · 11 reposts · 88 likes · 1.9K views
Future Thinker - Benji @AIfutureBenji
@TeamYouTube Got it. If you take that ad revenue for your salaries, it's okay; inflation is making living hard where you are, I understand. But you guys use AI to scan channels and then make false claims about my work, so I have to clarify that in public. Usually I don't like to speak about things publicly. Thx.
1 reply · 1 repost · 6 likes · 64 views
TeamYouTube @TeamYouTube
Hi there. If you've already submitted an appeal, our teams will respond within 14 days. If your appeal is successful, we’ll reapprove your channel for YPP & you’ll be able to monetize your channel again. If your appeal is rejected, you’ll need to wait 90 days from the suspension to make changes to your content & reapply: goo.gle/4t11jAa
20 replies · 1 repost · 1 like · 642 views
Future Thinker - Benji @AIfutureBenji
@TeamYouTube And can you guys stop hiring copy-and-pasters to work on YouTube Studio chat support? Even Google Gemini works better than those staff.
2 replies · 1 repost · 21 likes · 479 views
Future Thinker - Benji reposted
Wan @Alibaba_Wan
🎬 Meet the Speakers: Wan2.7 Creator Webinar
Next-Gen Workflows: Automating Creativity with Wan2.7 + AI Agents
April 8, 2026, 13:00 UTC / 06:00 PDT
YouTube | X | LinkedIn
Tongyi Lab & Alibaba Cloud

We're thrilled to announce the incredible lineup joining us for the English session!

Guest Spotlight:
- Benjamin Law, Managing Director, Touchmobi Media Co. Ltd. Topic: "Using OpenClaw to Create a Wan 2.7 API Client"
- David Gyori, CTO, vide8.com, Host of AI Agents A-Z (YouTube). Topic: "Creating Viral Slow-Motion Construction Videos with n8n + Wan 2.7"
- Zachary Huang, Researcher, Microsoft Research AI Frontiers. Topic: "Markdown to Cartoon Video: An Automated Pipeline Powered by Wan2.7"
- Ryan Chu, Wan Model Tech Team
- Roxanne Peng (Host), Marketing Manager, Tongyi Lab

Don't miss this chance to learn from builders who are shaping the future of AIGC automation.
1 reply · 7 reposts · 31 likes · 3.3K views
Future Thinker - Benji @AIfutureBenji
@AzeAlter Hi, how do you appeal the demonetization? I have a channel that talks about AI tech, not automated content, but it still got flagged.
0 replies · 0 reposts · 0 likes · 68 views
Aze Λlter @AzeAlter
I believe this will pass, as Google is promoting their own AI tools, so this is definitely an automated demonetization issue. They're definitely handling this all wrong. Imagine you pay for Veo, use it instead of stock footage for a narration, then get demonetized for it.
3 replies · 4 reposts · 30 likes · 2.1K views
Aze Λlter @AzeAlter
Seems there is a mass YouTube demonetization wave, with channels being flagged as 'Inauthentic Content'. It's affecting both AI and non-AI creators. YT is not handling this AI wave well. I understand demonetizing fully AI-generated content farms that upload multiple times a day, but why take down animators, AI-assisted storytellers, and even some completely non-AI channels? Is this another YouTube apocalypse?
42 replies · 25 reposts · 178 likes · 14.9K views
Future Thinker - Benji reposted
Wan @Alibaba_Wan
🚀 The future of creation isn't just prompts. It's pipelines.

Join us for the Wan2.7 Creator Online Webinar, Next-Gen Workflows: Automating Creativity with Wan2.7 + AI Agents. Learn how to chain AI agents with Wan2.7 APIs to build automated video & image production pipelines, from concept to final output.

- Agent-Driven Workflows
- Wan2.7 Image & Video APIs
- Scale Your Creativity

English Session: April 8, 2026, 13:00 UTC / 06:00 PDT
Japanese Session: April 8, 2026, 16:00 JST
YouTube | X | LinkedIn
Tongyi Lab & Alibaba Cloud

Featuring special guests from the AI creator community showcasing real-world production cases.
5 replies · 12 reposts · 66 likes · 5.5K views
Future Thinker - Benji reposted
Google @Google
New AI capabilities are coming to Google Vids, including high-quality video generation powered by Veo 3.1, available at no cost. Now, anyone with a Google account can bring stories to life from just a simple prompt or photo. Explore more of the new features 🧵
117 replies · 280 reposts · 2.4K likes · 253.9K views
Future Thinker - Benji reposted
Tongyi Lab @Ali_TongyiLab
1/10 🚀 Qwen3.5-Omni is here! Scaling up to a native omni-modal AGI.

Meet the next generation of Qwen, designed for native text, image, audio, and video understanding, with major advances in both intelligence and real-time interaction. A standout feature, Audio-Visual Vibe Coding: describe your vision to the camera, and Qwen3.5-Omni instantly builds a functional website or game for you.

Highlights:
- Script-Level Captioning: generate detailed video scripts with timestamps, scene cuts & speaker mapping.
- SOTA Performance: Qwen3.5-Omni has secured 215 SOTA scores across various sub-tasks, matching the top-tier text/vision capabilities of the Qwen3.5 series.
- Audio-Visual Understanding: from auto-segmentation to fine-grained script generation, it understands the relationship between characters and their environment like never before.
- Seamless Interaction: with native API support for Semantic Interruption, voice conversations feel human-like and background-noise resistant.
- Global Multilingual Mastery: pioneering support for 74 languages in speech recognition and 29 languages in expressive speech generation, breaking down global communication barriers.
- Autonomous Intelligence: native support for WebSearch and complex Function Calling; the model now independently decides when to pull real-time data.

Qwen3.5-Omni is built to be the backbone of next-gen AI applications, empowering developers and users alike with true multimodal reasoning.
77 replies · 288 reposts · 2.3K likes · 11.4M views
Future Thinker - Benji reposted
ModelScope @ModelScope2022
🎧 Fish Audio S2 Pro is open source: a 4B+400M Dual-AR TTS model with free-form inline prosody and emotion control, trained on 10M+ hours of audio across 80+ languages. 💬

🏗️ Dual-AR architecture: 4B Slow AR for semantics + 400M Fast AR for 9 residual codebooks, for quality without inference overhead
🎭 Inline control via free-form tags: [whisper], [laughing], [professional broadcast tone]; 15,000+ unique tags, word-level precision
🌐 80+ languages; Tier 1: Japanese, English, Chinese
⚡ SGLang-native: continuous batching, paged KV cache, RadixAttention prefix caching, all inherited from the LLM serving stack
📊 RTF: 0.195 on H200, ~100ms time-to-first-audio, 3,000+ acoustic tokens/s
🔓 Weights + fine-tuning code + streaming inference engine all released

🌍 Model: modelscope.ai/models/fishaud…
🤖 Model: modelscope.cn/models/fishaud…
🔧 GitHub: github.com/fishaudio/fish…
2 replies · 8 reposts · 102 likes · 5.6K views
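The inline-tag format described in the post above can be illustrated with a small sketch. This is a hypothetical parser for bracketed control tags, not the actual Fish Audio S2 Pro API; the tag names and text format are assumptions based only on the examples in the announcement:

```python
import re

# Hypothetical sketch: split bracketed control tags like [whisper] or
# [professional broadcast tone] from the text to be spoken. This mimics
# the inline-tag style shown in the announcement; it is NOT the real
# Fish Audio S2 Pro API.
TAG_RE = re.compile(r"\[([^\[\]]+)\]")

def parse_inline_tags(prompt: str) -> tuple[list[str], str]:
    """Return (control_tags, plain_text) for a tagged TTS prompt."""
    tags = TAG_RE.findall(prompt)          # collect tag contents in order
    text = TAG_RE.sub("", prompt)          # strip the tags from the text
    text = re.sub(r"\s+", " ", text).strip()  # collapse leftover whitespace
    return tags, text

tags, text = parse_inline_tags("[whisper] Keep this quiet. [laughing] Just kidding!")
# tags == ["whisper", "laughing"]
# text == "Keep this quiet. Just kidding!"
```

A real engine would condition generation on the tag positions word-by-word; this sketch only shows how free-form tags can coexist with plain text in one prompt string.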
Future Thinker - Benji reposted
ModelScope @ModelScope2022
Style transfer with Qwen-Image-Edit-2511 + LoRA 🤩 Feed it any style reference and watch your artwork transform completely; color, mood, and atmosphere all carry over beautifully! Download the LoRA here 👉 modelscope.ai/models/daniel8…
大雄 @dx8152

This time we're showcasing a fun "migrate everything" LoRA for the Qwen-Image-Edit-2511 model, entered in the LoRA training competition. The download link and more examples are in the comments. @Ali_TongyiLab @Alibaba_Qwen @ModelScope2022 #HappyQwensday #QwenImageLoRA

2 replies · 5 reposts · 33 likes · 3K views
Future Thinker - Benji reposted
ModelScope @ModelScope2022
14B, faster than 1.3B. Helios is here 🚀 a 14B real-time long-video generation model running at 19.5 FPS on a single H100, with native T2V, I2V, and V2V support.

🌟 The breakthroughs:
- No anti-drifting heuristics: no self-forcing, no keyframe sampling; drift is simulated during training instead
- No standard acceleration: no KV-cache, no sparse/linear attention, no quantization
- Compute cost matches 1.3B models via heavy context compression + reduced sampling steps
- Four 14B models fit in 80GB during training, no parallelism framework required

Outperforms prior methods on both short- and long-video benchmarks. Base + distilled models both released.

🤖 Models: modelscope.cn/collections/Be…
🌍 Models: modelscope.ai/profile/BestWi…
📄 Paper: modelscope.cn/papers/2603.04…
🔧 GitHub: github.com/PKU-YuanGroup/…
1 reply · 13 reposts · 104 likes · 8.3K views
Future Thinker - Benji reposted
Sayak Paul @RisingSayak
Diffusers 0.37.0 is out 🔥 New models, including LTX-2, Helios, GLM-Image, and more. We're proud to be shipping the wild hot RAEs in this release, too! New CP backends, caching methods, etc., are in too! Check out the release notes for more details 🧨
4 replies · 19 reposts · 131 likes · 17.9K views