Thomas Mark

141K posts

@Thomwithai

AI Enthusiast | Ghostwriter | Empowering business models & personal brands to grow with the latest tech trends. DM for collab

California, USA · Joined March 2024
6.4K Following · 14.5K Followers
Thomas Mark @Thomwithai
“Where is Kris?” is a full “Where’s Waldo”-style crowd-search poster set in New York City: Broadway signs, yellow cabs, Wall Street, hundreds of tiny characters, all from one GPT-Image-2 prompt. Now live on MuleRun. The creative prompts in this collection keep surprising me.
MuleRun @mulerun_ai

GPT-image-2 just landed on MuleRun🎉 Architecture diagrams, product posters, anime character sheets — one prompt, production-ready output. Check the replies for image cases & a link to grab every prompt👇

0 replies · 0 reposts · 0 likes · 152 views
Thomas Mark @Thomwithai
Not a startup. Not a team. Just one operator prompting his way into a distributed AI swarm. Built on free tiers, orchestrated via Telegram. This is what “AI-native” actually looks like.
EverMars @shujunliang

x.com/i/article/2044…

1 reply · 0 reposts · 3 likes · 110 views
Thomas Mark @Thomwithai
Apparently the Spotify and Apple teams use Willow Voice? The Teams launch just made this the easiest call for any company that actually cares about how its people work. Congrats on the launch!
Willow @WillowVoiceAI

The ChatGPT moment for voice AI is here. Meet Willow Voice for Teams. The AI voice copilot quietly used every day by 10% of the Fortune 500. Now it's here for you. RT, like, and comment "Willow" to get 1 month free. Must be following so we can DM you.

0 replies · 0 reposts · 0 likes · 100 views
Thomas Mark @Thomwithai
Seedance 2.0 fixes short-clip pain: multi-shot plus consistent characters for seamless films, powered by Elser AI’s film-engine workflow. UNLIMITED VIP (2 years): 87% off top tiers (Kling 3, Nano Banana Pro, Midjourney V7, Veo 3.1). @ElserAIOfficial #Seedance2 #ElserAI
Elser AI @ElserAIOfficial

🎬 Seedance 2.0 is coming. Elser AI is the world's first studio-grade AI film engine and Seedance 2.0 makes it even more powerful. ⚙️ 🎞 True multi-shot storytelling 🎥 Built for real AI filmmaking 🚀 Coming soon. 🎟 Early access available — link in replies 👇 #Seedance2 #ElserAI #AIFilmmaking #AIVideo

3 replies · 1 repost · 1 like · 227 views
Thomas Mark @Thomwithai
Proactive, simple, and always ready: memU bot captures intent, acts automatically, and keeps your data local. Compared to OpenClaw, it's faster and more cost-efficient. memu.bot
memU @memU_ai

🚨 memU bot is live. A better alternative to @openclaw (formerly Moltbot / Clawdbot)
👉 Get instant access to the memU bot: memu.bot
🕒 A 24/7 proactive assistant: memU bot runs continuously on your machine and works as a proactive assistant. It takes action based on your behavior and context, instead of waiting for explicit commands.
🧠 Highly personal, built for you: memU bot learns from your long-term usage and memory, and gradually adapts to your work style and preferences. It becomes your assistant, not a generic AI.
⚡ Very easy to use: download and run. No complex setup. No configuration. Even non-technical users can simply download and run memU bot.
🔒 Local-first and secure, with no server dependency: memU bot runs locally on your device. Your data never needs to be uploaded to public networks or third-party servers.
💸 Lower LLM token cost (more efficient than @openclaw): while supporting always-on and proactive behavior, memU bot is designed to reduce LLM calls and token usage, so it runs cheaper than OpenClaw without sacrificing performance.
🧠 "Always-on" is the real key to a proactive agent, and memory is what gives it true proactivity. With memory, an agent is no longer generic. It becomes personal, shaped by who you are. This is how a user-intention-driven proactive agent is born: before you even issue a command, it can already anticipate what kind of help you'll need, based on your past, your habits, your context.
🔮 A 24/7 process that can observe 👀, remember 📝, and act ⚡, not just wait for prompts.
🤖 memU bot is our attempt at a user-intention-driven proactive agent, one that lives beyond the chat box.

0 replies · 0 reposts · 1 like · 26.2K views
Thomas Mark @Thomwithai
Why waste time with 8 prompts when one does it all? SuperAgent delved into Demis Hassabis, unpacked DeepMind’s latest, and crafted 12 personalized discovery questions, solo. One command. Complete research. No hand-holding. Check it out 👇
0 replies · 0 reposts · 0 likes · 36 views
Thomas Mark @Thomwithai
Sub-200ms is insane. That’s faster than most humans respond in conversation. At that point, the machine stops feeling like a machine.
Santiago @svpino

Voice agents are awkward, and everyone notices: you ask a question. The agent thinks. You wait. And wait... Nobody wants this. I'd rather talk to a person. If your model's response time is over 300ms, you won't make it. Unfortunately, most text-to-speech models can't get anywhere close to that. I want you to take a look at the latest model released by @inworld_ai: TTS-1.5. I built a simple voice agent using the model so you can see it in action and test it on your computer. You'll find the repository link below. The latency numbers of this model are wild:
• Max model → under 250ms
• Mini model → under 130ms
That's 4x faster than prior generations and faster than human response times!

0 replies · 0 reposts · 0 likes · 25 views
Thomas Mark @Thomwithai
OpenAI brought polish. The students brought understanding. Only one of those compounds over time.
Einsia @EinsiaAI

We're Einsia.ai, a team of researchers and students who got tired of losing hours to formatting, plotting, debugging, and endless LaTeX errors. So we built an AI academic partner that lives inside your Overleaf. When @OpenAI dropped Prism yesterday, we panicked. Thought we were cooked. Then we ran a head-to-head test. Turns out... we might actually be better? 👀 Don't take our word for it. Here are the receipts: same paper, same prompts, side by side. Einsia.ai goes fully live in 3 days. 👉 einsia.ai to join the waitlist. #EinsiaAI #Overleaf #AcademicTwitter

0 replies · 0 reposts · 0 likes · 47 views
Thomas Mark @Thomwithai
Reinforcement learning meets product needs in a groundbreaking way with Inworld TTS-1.5. Not only does it sound better, but it also boosts accuracy, cuts lag, and heightens expression. Here's how it's revolutionizing the world of voice agents:
Inworld AI @inworld_ai

Inworld TTS-1.5 releases today. The #1 TTS on Artificial Analysis now offers realtime latency under 250ms, optimized expression and stability for user engagement, and costs half a cent per minute. Some voice models are fast, some are expressive, some are affordable. We outperform them all across the board.
Production-grade realtime latency: <250ms for the Max model, <130ms for Mini (P90 first audio), 4x faster than before. Voice agents now respond before users notice any delay.
Engagement-optimized quality: 30% more expressive to serve a wider range of personalities, and 40% lower word error rates for fewer hallucinations, word cutoffs, and audio artifacts.
Built for consumer scale: radically affordable, with enhanced multilingual support (15 languages including Hindi) and enhanced voice cloning, now via API. On-prem options now available for enterprises.

0 replies · 0 reposts · 0 likes · 59 views