

GPT-image-2 just landed on MuleRun🎉 Architecture diagrams, product posters, anime character sheets — one prompt, production-ready output. Check the replies for image cases & a link to grab every prompt👇
Thomas Mark

@Thomwithai
AI Enthusiast | Ghostwriter | Empowering business models & personal brands to grow with the latest tech trends. DM for collab




Turn Claude into the best creative agent in the world! Our users generate over 1 billion images and videos every year. Now Claude can too. RT and comment "Pixa" for free access!

The last stronghold of coding has just been conquered by AI. In the three most recent Codeforces live competitions — Rounds 1087, 1088, and 1089 — GrandCode, our agentic AI system, ranked first in all of them, beating every human participant, including legendary grandmasters. GrandCode is a multi-agent reinforcement learning system designed for competitive programming. It orchestrates a variety of agentic modules (hypothesis proposal, solver, test generator, summarization, etc.) and jointly improves them through post-training and online test-time RL. GrandCode is built on Qwen. Huge respect to the Qwen @Alibaba_Qwen team for their contributions to the community. It is hard to imagine how quickly AI has advanced in just one year:
1st — GrandCode (March 2026)
8th — Gemini 3.1 Pro (February 2026)
175th — OpenAI o3 (April 2025)
We can’t wait to see what happens over the next year.


The ChatGPT moment for voice AI is here. Meet Willow Voice for Teams. The AI voice copilot quietly used every day by 10% of the Fortune 500. Now it's here for you. RT, like, and comment "Willow" to get 1 month free. Must be following so we can DM you.

🎬 Seedance 2.0 is coming. Elser AI is the world's first studio-grade AI film engine and Seedance 2.0 makes it even more powerful. ⚙️ 🎞 True multi-shot storytelling 🎥 Built for real AI filmmaking 🚀 Coming soon. 🎟 Early access available — link in replies 👇 #Seedance2 #ElserAI #AIFilmmaking #AIVideo

Introducing the fastest way to vibe code. Karpathy said English is the hottest new programming language. We took that seriously. Today we're launching Willow for Developers — voice dictation built for vibe coding. Your keyboard is the bottleneck. Kill it.

"Sometimes I'll start a sentence, and I don't even know where it's going. I just hope I find it along the way." — Michael Scott & every AI video generator (until now). Today, we’re launching the greatest storytelling tool for video: Story Mode. To celebrate we're giving everyone FREE CREDITS. Just sign up to get started on your next viral masterpiece. Try it while it's free 👇

The biggest problem in AI video is style inconsistency: every scene has different styles and transitions are "jumpy". That's why we're launching Story Mode in Agent Opus. Generate style-consistent videos up to 4 mins long in just one click.
✅ Your original audio or script
✅ Your custom style reference
✅ 16:9 | 9:16 | 1:1 formats ready for monetization
No more random, glitchy scenes. Just one continuous, high-retention shot. Try it today for free 👇

KLING 3.0 is LIVE UNLIMITED on Higgsfield. Today marks the MOST advanced AI video model release EVER. Exclusive partnership and Day 0 access. Multi-shot sequences. Macro close-ups through dynamic camera movement. Native audio, real lip-sync and spatial sound. Up to 15 seconds of continuous generation. Get @Kling_ai KLING 3.0 UNLIMITED today with 70% OFF on Premium Plans👇

🚨 memU bot is live. A better alternative to @openclaw (formerly Moltbot / Clawdbot)
👉 Get instant access to the memU bot: memu.bot
🕒 A 24/7 proactive assistant
memU bot runs continuously on your machine and works as a proactive assistant. It takes action based on your behavior and context — instead of waiting for explicit commands.
🧠 Highly personal, built for you
memU bot learns from your long-term usage and memory, and gradually adapts to your work style and preferences. It becomes your assistant — not a generic AI.
⚡ Very easy to use — download and run
No complex setup. No configuration. Even non-technical users can simply download and run memU bot.
🔒 Local-first and secure, with no server dependency
memU bot runs locally on your device. Your data never needs to be uploaded to public networks or third-party servers.
💸 Lower LLM token cost (more efficient than @openclaw)
While supporting always-on and proactive behavior, memU bot is designed to reduce LLM calls and token usage — so it runs cheaper than OpenClaw, without sacrificing performance.
🧠 "Always-on" is the real key to a proactive agent. And memory is what gives it true proactivity. With memory, an agent is no longer generic. It becomes personal — shaped by who you are. This is how a user-intention-driven proactive agent is born: before you even issue a command, it can already anticipate what kind of help you’ll need, based on your past, your habits, your context.
🔮 A 24/7 process that can observe 👀, remember 📝, and act ⚡ — not just wait for prompts.
🤖 memU bot is our attempt at a user-intention-driven proactive agent — one that lives beyond the chat box.

Your kids will ask why people used to type with their fingers. You won't have a good answer. Introducing Willow Voice - the all new voice typing experience. Fast. Personal. Inevitable. Now on Mac, Windows, and iOS.

Voice agents are awkward, and everyone notices: You ask a question. The agent thinks. You wait. And wait... Nobody wants this. I'd rather talk to a person. If your model's response time is over 300ms, you won't make it. Unfortunately, most text-to-speech models can't get anywhere close to that. I want you to take a look at the latest model released by @inworld_ai: TTS-1.5. I built a simple voice agent using the model so you can see it in action and test it on your computer. You'll find the repository link below. The latency numbers of this model are wild:
• Max model → under 250ms
• Mini model → under 130ms
That's 4x faster than prior generations and faster than human response times!
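The latency that matters for a voice agent is time-to-first-audio, and it's easy to measure yourself. A minimal, library-agnostic sketch — `fake_stream` here is a made-up stand-in for whatever streaming TTS client you actually use, not Inworld's API:

```python
import time

def first_chunk_latency_ms(stream):
    """Time from request start to the first audio chunk, in milliseconds."""
    start = time.perf_counter()
    next(iter(stream))  # block until the first chunk arrives
    return (time.perf_counter() - start) * 1000.0

# Stand-in for a real streaming TTS client: a generator yielding audio chunks.
def fake_stream():
    time.sleep(0.05)      # simulate ~50 ms to first audio
    yield b"\x00" * 320   # first audio chunk
    yield b"\x00" * 320   # rest of the audio keeps streaming

latency = first_chunk_latency_ms(fake_stream())
print(f"first audio after {latency:.0f} ms")
```

Swap `fake_stream()` for a real streaming client and the same timer tells you whether you're under the 300ms bar.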

We're Einsia.ai — a team of researchers and students who got tired of losing hours to formatting, plotting, debugging, and endless LaTeX errors. So we built an AI academic partner that lives inside your Overleaf. When @OpenAI dropped Prism yesterday, we panicked. Thought we were cooked. Then we ran a head-to-head test. Turns out... we might actually be better? 👀 Don't take our word for it. Here's the receipts — same paper, same prompts, side by side. Einsia.ai goes fully live in 3 days. 👉 einsia.ai to join the waitlist. #EinsiaAI #Overleaf #AcademicTwitter

ANGLES v2 is LIVE! 🧩 TAKE A SHOT FROM ANY ANGLE. FULL 360° camera control, REDESIGNED interface with 3D cube & sliders, EXPANDED behind subject perspectives, UPGRADED project management. Retweet & reply & follow & like for 10 credits in DM

VEO 3.1 in 4K is LIVE on Higgsfield with 85% OFF! 🧩 Last 8 hours. Google's SOTA VEO 3.1 in MAX quality turning your ideas into cinema. Native 4K, portrait and landscape support, stronger prompt adherence, native audio, and clean transitions. Retweet & reply & follow & like for 220 credits in DM

ONE CLICK TO CRAZY VIRAL AI INFLUENCERS 🧩 FOR FREE. 85% OFF. LAST 10 HOURS. UNLIMITED Nano Banana Pro + ALL Kling models. Higgsfield AI Influencer Studio lets you merge ethnicities. Hybridize species. Build twins. Create bodies that don't exist in nature. Retweet & reply & follow & like for 220 credits in DM

Inworld TTS-1.5 releases today. The #1 TTS on Artificial Analysis now offers realtime latency under 250ms, expression and stability optimized for user engagement, and pricing of half a cent per minute. Some voice models are fast, some are expressive, some are affordable. We outperform them all across the board.
Production-grade realtime latency: <250ms for the Max model, <130ms for Mini (P90 first audio) - 4x faster than before. Voice agents now respond before users notice any delay.
Engagement-optimized quality: 30% more expressive to serve a wider range of personalities, and 40% lower word error rates for fewer hallucinations, word cutoffs, and audio artifacts.
Built for consumer scale: radically affordable, with enhanced multilingual support (15 languages including Hindi) and enhanced voice cloning, now via API. On-prem options now available for enterprises.
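"P90 first audio" is a tail-latency percentile: 90% of requests produce their first audio chunk within the quoted time. A minimal sketch of computing it from measured first-chunk timings — the sample numbers below are made up for illustration:

```python
def percentile(samples, p):
    """Nearest-rank percentile: smallest sample >= p% of all samples."""
    ordered = sorted(samples)
    # ceil(p/100 * n) - 1, using integer arithmetic; clamp for p=0
    k = max(0, -(-p * len(ordered) // 100) - 1)
    return ordered[k]

# Hypothetical time-to-first-audio measurements, in milliseconds.
first_audio_ms = [98, 110, 115, 120, 122, 125, 127, 130, 180, 240]

p90 = percentile(first_audio_ms, 90)
print(f"P90 first audio: {p90} ms")  # -> 180 ms with these samples
```

Note how two slow outliers drag P90 well above the median: that's why vendors quote P90 rather than the (flattering) average.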