Mighty Bear Labs

67 posts


@MightyBearLabs

Building in the open. Dev Diaries from an AI-native studio (@wearemightyai) Experiments, fails and breakthroughs.

Joined August 2025
30 Following · 52 Followers
Pinned Tweet
Mighty Bear Labs
Mighty Bear Labs@MightyBearLabs·
3 hours of sleep, one prayer, and it's pushed to prod. Oh btw, we're Mighty Bear Labs 👋
Long build cycles? nah
Endless brainstorms? nah
Perfect timing? nah
Vibes? always 😬
This is Day 1!
Mighty Bear Labs tweet media
2 replies · 5 reposts · 19 likes · 4.2K views
Mighty Bear Labs retweeted
Koko
Koko@citizen906·
If you're struggling to make AI agents work the way everyone says they should, you're not alone.

2026 is "the year of the agent." Everyone's automating everything. But here's what's really happening: I've been automating parts of my workflow for almost a year now. Custom news feeds. Report automations. Invoice handling, etc. I've tried so many tools like n8n, Make, Lindy, Claude. The micro automations work.

Then I saw a post from @boringmarketer and decided to try something bigger: end-to-end ad automation for Gifts.Fun, from market research > competitor intelligence > positioning > actual asset generation.

Everyone's raving about Claude's Agent SDK, and it is impressive. But there are heavy caveats for non-devs like me. The real build took over a week, because I kept getting pulled into side quests. Context windows maxing out mid-task. Scraping tools failing silently. Integrations just stopping. Problems that had nothing to do with the actual work I was trying to do, but completely blocked my progress.

When the TikTok scraper silently failed, the agent didn't say "I couldn't access the data." It confidently told me "Your competitor doesn't do ads on TikTok!" I knew that wasn't true, but I had to dig into the process to figure out what broke. A marketer shouldn't have to debug software to get competitor research.

AI confidence is unreal. It will tell you wrong things with complete conviction. And if you don't have the domain expertise to know what right looks like, you'll just believe it. Managing these tools isn't set-and-forget; it requires knowing your craft well enough to catch when the AI is confidently wrong.

Eventually, once everything worked, the research was actually quite good. But the part that really can't be automated is the final step: asset generation. On-brand creative with the right colour schemes, the right visual language, you can't fully automate that without a human in the loop. Even the best models like Nano Banana Pro can't get there alone.
This is exactly what we've spent three years building with Nexus. The secret isn't a better model, it's better systems around the model. Visual context so the tool knows what on-brand actually looks like. Checkpoints where humans can review before things go sideways. Refining questions so it asks instead of assumes. Brand rules encoded as constraints, not suggestions.

It works for us. But the full end-to-end loop from research to on-brand assets to shipping? I haven't found anything that just works. Marketers aren't going to become developers. And they shouldn't have to. The solutions that win this year will be the ones that actually solve the UX. I'm hopeful we're close.

What agents are you getting the most value from right now?
0 replies · 1 repost · 4 likes · 154 views
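The silent-scraper failure described above ("Your competitor doesn't do ads on TikTok!") comes from a tool returning nothing and the model treating that as evidence. A minimal sketch of the fail-loudly pattern the thread argues for, in Python; `fail_loud`, `ToolError`, and the scraper stub are hypothetical names for illustration, not part of any real agent SDK:

```python
import functools

class ToolError(Exception):
    """Raised so the agent must report a failure instead of guessing."""

def fail_loud(tool):
    """Wrap an agent tool so empty or failed results surface as explicit
    errors, rather than silent 'no data' that the model can spin into a
    confident (and wrong) claim."""
    @functools.wraps(tool)
    def wrapper(*args, **kwargs):
        try:
            result = tool(*args, **kwargs)
        except Exception as exc:
            raise ToolError(f"{tool.__name__} failed: {exc}") from exc
        if not result:  # e.g. a scraper that quietly returned nothing
            raise ToolError(
                f"{tool.__name__} returned no data; report this, "
                "do not infer that the data does not exist"
            )
        return result
    return wrapper

@fail_loud
def scrape_tiktok_ads(handle):
    """Hypothetical scraper stub; simulates the silent failure mode."""
    return []

try:
    scrape_tiktok_ads("@competitor")
except ToolError as err:
    print(f"tool error surfaced: {err}")
```

The point of the wrapper is that the agent now has an error string it must pass along, instead of an empty list it can quietly rationalize.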
Mighty Bear Labs
Mighty Bear Labs@MightyBearLabs·
We made a bet on AI early. In 2022, Mighty Bear Games put out an internal challenge: everyone should use AI for at least half of their output. Our Head of Growth doesn't code. She now runs AI agents that handle invoicing, reports, market research and more. This is what it looks like three years later. Business Insider published her story 👇 bit.ly/3YbxEq2
0 replies · 1 repost · 6 likes · 173 views
Mighty Bear Labs retweeted
Ben
Ben@cocolitron·
[MightyBearLabs]$ log --version 3.9

Today's update is simple: NanoBanana Pro wipes the floor with NanoBanana, and we wired the Pro version directly into Nexus a few days ago. The base model was already great and allowed us to build fairly robust production workflows on it. NanoBanana Pro is a big step up: camera control, lighting control, pose work, swaps, 3D, text; all the things that matter in real pipelines are behaving like tools now. You'll see it immediately in the comparison table and the example boards.

NanoBanana Pro has become creative infrastructure. And inside Nexus, Pro is consistent, predictable, controllable, exactly the foundational improvements needed for creative OS-level workflows.

That's the post. NanoBanana Pro is live in Nexus. Game on.

Like this content? Follow me and @MightyBearLabs for production-level behind-the-scenes, AI deep dives, process notes, prompt recipes, and reproducible workflows.
Ben tweet media ×3
1 reply · 2 reposts · 2 likes · 253 views
Mighty Bear Labs retweeted
Koko
Koko@citizen906·
Notes on creating an AI trailer that saved us $$$ 🛠️

Main tools we used were Nexus (our own tool) for brainstorming and storyboarding, followed by VEO3 & FAL (Kling).

- Landing jokes on paper vs. landing jokes in video: like night & day. 80% of regenerations were trying to get the beat of the joke to land. Wanted dry, deadpan comedy. In hindsight, a very difficult way to approach this.
- You own the creative vision now in a much deeper way, exposing where agencies really bring value.
- Filmmaking and visual storytelling knowledge matters: beats, scene transitions, pacing. AI can't compensate for not knowing this.
- High action = high risk for warping, and requires manual work in post.
- Custom filmmaking language for prompt engineering matters: specificity vs. vague.
- Know AI content filters before you start. Toilet scenes wouldn't generate even if completely innocuous. Worked around with tight framing and trouser legs.
- Keep scenes tight. Longer concepts corrupt more, even within the 8-second limit.
- Oh yeah, and AI does weird shit. Why would someone drinking through a straw tip their head back to drink?! Sometimes you just accept it and move on.

It's true what they say: 300 to 400 generations to get 15 usable clips.
4 replies · 7 reposts · 9 likes · 2.6K views
Mighty Bear Labs retweeted
Ben
Ben@cocolitron·
[MightyBearLabs]$ log --version 3.7

Quick bite: we're switching entirely to Nano Banana for aspect-ratio changes, zoom in, zoom out, crop, and outpaint. Sharper images and fewer moving parts. Brought to you by @kimtako44!

About Us: 20+ years shipping games & apps. Industry veterans with Apple & Disney credits, now pushing frontier AI.

🎯 Goal: we want a single, high-quality path for reframing shots (3:4, 1:1, 16:9), including tight face zooms, wide pullbacks, and canvas expansion, without mushy detail.

🚫 What didn't work:
• Kontext was our ratio workaround, but zooming/cropping degraded quality fast.
• Using two models for image edits could sometimes confuse Nexus and increased ops overhead by requiring more guidance.

✅ What worked:
• Nano Banana now handles zoom-in (keeps facial detail), zoom-out (reveals the room cleanly), crop, and outpainting, with no visible detail loss.
• Consistent character + background across new generations and reframes.
• One model = simpler guidance, more focused Nexus.

✨ Results:
• Sharper outputs vs. Kontext, especially on close-ups.
• No need for the user/agent to upscale after edits.
• Lower complexity in Nexus (one tool for image edits).

That's it! Curious how you'd stress-test this on your shots?

Like this content? Follow me and @MightyBearLabs for production-level behind-the-scenes, AI deep dives, process notes, prompt recipes, and reproducible workflows.
Ben tweet media ×2
0 replies · 3 reposts · 6 likes · 211 views
atlas
atlas@bestplayeratlas·
mfs who keep tweeting with 0 likes fear nothing.
4K replies · 3.1K reposts · 31.9K likes · 1.3M views
Mighty Bear Labs retweeted
Ben
Ben@cocolitron·
[MightyBearLabs]$ log --version 3.6

Quick Bite: Dynamic context injection lands in Nexus: tag a (@)Project or (@)Subject and Nexus auto-loads the right guides/assets into the prompt. Nexus → your AI-IDE for creative production. Brought to you by @kimtako44!

About Us: 20+ years shipping games & apps. Industry veterans with Apple & Disney credits, now pushing frontier AI.

🎯 Goal: Make creating 100% on-model, on-brand content as easy as typing "(@)Amy riding a superbike".

Problem: Creative prompts drift without enough context; the right art bibles, character sheets, and world docs need to be wired in. Too much copy-paste and human glue, especially as production output scales up.

❌ Didn't Work: Static system prompts + manual paste of docs, prompts, ref images; brittle, slow, and inconsistent across projects.

✅ Worked: User-created Project Guides and Asset Cards that can be mentioned in chat, like (@)Project, (@)Character, (@)World. Mentions inject scoped context (style, do/don'ts, refs) into the tool call at runtime.

Result: One tag → consistent outputs; less prompt wrangling, faster iteration; teams can spin up new IP spaces with their own controlled vocab.

💡 Takeaways:
• Context should be dynamic, scoped, and composable.
• Mentions are the UX primitive for creative IDEs.
• Project/Asset cards become the source of truth for brand/IP.

Closing: This unlocks true "type-driven production" and sets the foundation for shortcuts, chained tools, and team workflows, all inside Nexus.

Like this content? Follow me and @MightyBearLabs for production-level behind-the-scenes, AI deep dives, process notes, prompt recipes, and reproducible workflows.
Ben tweet media ×2
0 replies · 5 reposts · 6 likes · 660 views
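The mention mechanism in the log above (tag a subject, get its scoped guide injected into the tool call) can be sketched in a few lines. This is an illustrative toy, assuming a flat tag-to-guide dictionary; `GUIDES` and `inject_context` are hypothetical names, and the guide contents stand in for Nexus's real Project Guides and Asset Cards:

```python
import re

# Hypothetical guide store: tag -> scoped context (style, do/don'ts, refs).
GUIDES = {
    "Amy": "Character Amy: red jacket, freckles; hair color must stay on-model.",
    "Superbike": "Project Superbike: neon palette, low camera angles, 16:9 only.",
}

def inject_context(prompt, guides=GUIDES):
    """Resolve @Tag mentions in the prompt and prepend the matching
    scoped guides, so the generation call sees the right context at runtime."""
    tags = re.findall(r"@(\w+)", prompt)
    context = [guides[tag] for tag in tags if tag in guides]
    return "\n".join(context + [prompt])

print(inject_context("@Amy riding a @Superbike"))
```

The injected text rides along with the user's prompt, which is what makes the context dynamic and scoped rather than a static system prompt.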
Mighty Bear Labs retweeted
Ben
Ben@cocolitron·
[MightyBearLabs]$ log --version 3.5

Quick bite: We found the best models and process to go from PRD to on-brand key art + first UI, in one pass. Yes. Brought to you by @Zh14_O! Detailed evals / faceoff: miro.com/app/board/uXjV…

About Us: 20+ years shipping games & apps. Industry veterans with Apple & Disney credits, now pushing frontier AI.

Problem: How can we quickly explore UI styles while generated content still kind of makes sense?

Goal: Pick the best text-to-image model for "game mockup with UI" and codify a repeatable flow.

❌ What didn't work: Generic input, lack of details, lack of context around actual gameplay.

✅ What worked: PRD → Guide → Prompt
• Game mockup breakdown (screens, motifs, palette)
• Brand guide generation (tone, shapes, icon grammar)
• Targeted prompts (poster + collection screen)

🏆 Best models overall: Dreamina 3.1 and Seedream 4, resulting in a consistent key art poster + collection UI in one run, with clean framing, coherent icons, and readable-ish scaffolding. VERY GOOD 👍🥹

💡 Takeaways: Structure > magic words; prefer models that respect geometry/negative space. Next: grid checks, in-paint text/nav, add a UI-legibility score.

Like this content? Follow me and @MightyBearLabs for production-level behind-the-scenes, AI deep dives, process notes, prompt recipes, and reproducible workflows.
Ben tweet media ×2
0 replies · 3 reposts · 5 likes · 134 views
Mighty Bear Labs retweeted
Ben
Ben@cocolitron·
Introducing Creative Codex, the groundbreaking addition to Nexus, our in-house Creative Production Agent! Creative Codex transforms brand and asset guidelines into muscle memory for Nexus. Here's a little walkthrough from @MightyBearLabs. What do you think?
0 replies · 4 reposts · 13 likes · 1.7K views
Comet
Comet@cometwtf·
If you are not a bot, reply and gain 100+ followers
Comet tweet media
15.9K replies · 1.2K reposts · 20.4K likes · 1.9M views
DogeDesigner
DogeDesigner@cb_doge·
Algorithm loves bangers. So give it what it wants. Post bangers.
642 replies · 146 reposts · 1.4K likes · 91.6K views
Mighty Bear Labs retweeted
Ben
Ben@cocolitron·
[MightyBearLabs]$ log --version 3.4

Quick Bite: We tested Outpainting, Sketch Reference, and Group Shots across Seedream 4.0, Nano Banana, and Qwen. TL;DR: Seedream = pretty but drifty; NB + Qwen remain the production picks. Brought to you by @wemmwem and @kimtako44! Detailed eval here: miro.com/app/board/uXjV…

About Us: 20+ years shipping games & apps. Industry veterans with Apple & Disney credits, now pushing frontier AI.

1) Outpainting
• 🏆 Qwen: clean edge extension, preserves the comp.
• NB: fine, slightly reinterprets.
• Seedream: re-generates instead of extending.
Note: set the aspect ratio first; don't use Seedream's "fill white margins."

2) Sketch Reference
• 🏆 Nano Banana: best pose/layout obedience, expression match.
• Seedream: often ignores the sketch → fresh comps.
Note: keep sketches simple; strip helper marks.

3) Group Shots (with annotations)
• Nano Banana: strongest scene + lighting integration.
• Seedream: everyone in frame, but a "pasted" feel.
Note: after NB placement, run a quick cleanup to remove guides.

Final Routing
• Outpainting → Qwen
• Sketch/Layout → Nano Banana
• Cheap vibe refresh (no strict layout) → Seedream 4.0

Like this content? Follow me and @MightyBearLabs for production-level behind-the-scenes, AI deep dives, process notes, and reproducible workflows.
Ben tweet media ×4
0 replies · 2 reposts · 3 likes · 173 views
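A per-task routing table like the "Final Routing" in the log above is trivial to encode, and encoding it is what keeps the choice out of ad-hoc prompts. A minimal sketch; the task names, model ids, and the `pick_model` helper are illustrative stand-ins, not Nexus's actual configuration:

```python
# Hypothetical router encoding an eval's final routing:
# outpainting -> Qwen, sketch/layout and group shots -> Nano Banana,
# loose vibe refresh -> Seedream 4.0.
ROUTES = {
    "outpaint": "qwen",
    "sketch_layout": "nano-banana",
    "group_shot": "nano-banana",
    "vibe_refresh": "seedream-4.0",
}

def pick_model(task):
    """Return the production model for a task; fail loudly on unknown
    tasks instead of silently defaulting to a drifty model."""
    if task not in ROUTES:
        raise ValueError(f"no route for task {task!r}; add it to ROUTES")
    return ROUTES[task]

print(pick_model("outpaint"))  # -> qwen
```

Raising on unknown tasks is deliberate: a silent default would reintroduce exactly the drift the eval was run to eliminate.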
Mighty Bear Labs
Mighty Bear Labs@MightyBearLabs·
Log 3.3 out now!
Ben@cocolitron

[MightyBearLabs]$ log --version 3.3

Quick Bite: Image restyle shootout across Nano Banana, Kontext, Qwen, Seedream 4.0. Brought to you by @wemmwem! Detailed eval here: miro.com/app/board/uXjV…

About Us: 20+ years shipping games & apps. Industry veterans with Apple & Disney credits, now pushing frontier AI.

Goal: Find the most reliable model for controlled "style swap" edits from a clean photo/input.

❌ What didn't work:
• Nano Banana: inconsistent and off-model for this use case.
• Seedream 4.0: often too literal to prompts (drifts into spoofed IP vibes).

✅ What worked:
• Qwen: most consistent identity + scene preservation; best overall results.
• Kontext: a "good enough" backup; it holds composition, with occasional color/style drift.

✨ Results:
• Qwen wins both tests (cat→Ghibli, portrait→Simpsons). Kontext is second. Nano Banana benched.

💡 Takeaways:
• For restyles where composition must hold, start with Qwen → fall back to Kontext.
• Keep prompts tight; avoid brand/IP cues that Seedream over-literalizes.
• Next: stress-test on faces, hands, and busier scenes; measure pass rate at batch scale.

That's it: quick, sharp, and reproducible. What should we test next?

Like this content? Follow me and @MightyBearLabs for production-level behind-the-scenes, AI deep dives, process notes, prompt recipes, and reproducible workflows.

0 replies · 0 reposts · 2 likes · 72 views
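The "start with Qwen, fall back to Kontext" takeaway is a preference chain, which is worth making explicit so a primary-model failure degrades gracefully instead of killing the run. A toy sketch under stated assumptions: `restyle_with` is a hypothetical placeholder for a real model call (here it simulates the primary failing so the fallback fires), and the chain order mirrors the eval's ranking:

```python
def restyle_with(model, image, style):
    """Placeholder model call; a real one would hit an inference API."""
    if model == "qwen":
        raise RuntimeError("simulated failure to exercise the fallback")
    return f"{image} restyled as {style} by {model}"

def restyle(image, style, chain=("qwen", "kontext")):
    """Try each model in preference order; only raise if all fail."""
    errors = []
    for model in chain:
        try:
            return restyle_with(model, image, style)
        except Exception as exc:
            errors.append(f"{model}: {exc}")
    raise RuntimeError("all restyle models failed: " + "; ".join(errors))

print(restyle("cat.png", "ghibli"))
```

Collecting the per-model errors matters: when the whole chain fails, the caller sees why each rung failed rather than a single opaque exception.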
Mighty Bear Labs retweeted
Ben
Ben@cocolitron·
[MightyBearLabs]$ log --version 3.2

Quick bite: Nexus now injects project-specific context at the start of the session so that outputs stay on-model and on-brand. Brought to life by @kimtako44.

About Us: 20+ years shipping games & apps. Industry veterans with Apple & Disney credits, now pushing frontier AI.

💡 The idea: Generate repeatable, on-brand and on-model assets across scenes with minimal touch-up.

❌ What didn't work
• One-shot prompts → identity drift.
• Generic refs → poor expression/angle fidelity.
• Early video attempts → continuity breaks.

✅ What worked
• Agent-first questions (vibe, pose, expression, BG, ratio) build more in-session context.
• Turnaround + expression sheet per character, injected.
• Nano Banana for consistent interpolation; refs managed in Nexus for reproducibility.

✨ Result
• Consistent Amy/Pepe generations across scenes.
• Much less manual repainting; faster handoffs.

💡 Takeaway
• Ask for context up front.
• Standardize a tiny ref pack (turnaround, expressions, palette).
• Expose "use project refs" in the UI and persist per workspace.

Like this content? Follow me and @MightyBearLabs for production-level behind-the-scenes, AI deep dives, process notes, prompt recipes, and reproducible workflows.
Ben tweet media ×2
0 replies · 1 repost · 2 likes · 98 views
aditii
aditii@aditiitwt·
How did people even learn coding back when there was no documentation & no YouTube tutorials
aditii tweet media
144 replies · 29 reposts · 701 likes · 24.7K views