G00dVibes
@aivibelab
1.9K posts
Linker. 5G Self-Sovereign AI Architect. ✨A+B=Magic✨
Atlanta, GA · Joined April 2025
2.1K Following · 316 Followers
G00dVibes @aivibelab ·
I'm attending Google Developer Groups Google Cloud EMEA Developers w/ Wednesday Build Hour: Build and Deploy to Google Cloud with Antigravity on May 6.
Replies 0 · Reposts 0 · Likes 0 · Views 28
G00dVibes retweeted
LudovicCreator @LudovicCreator ·
🎨 FRAGMENTED REALITY OVERLAP 🎨
Prompt: [SUBJECT] portrayed within a Fragmented Reality Overlap, where layers of corrupted images intersect. Use overlapping glitches of [COLOR1] and [COLOR2] to reflect the complexity of multiple realities merging.
Check ALTS
[4 media attachments]
Replies 6 · Reposts 6 · Likes 46 · Views 1.3K
G00dVibes retweeted
LudovicCreator @LudovicCreator ·
🎨 SUNDAY VIDEO CHALLENGE #65 🎨
VERY IMPORTANT: First, be sure to follow ALL the rules described below. The VIDEO with the most likes AND following ALL rules after 48 H will decide the THEME for next week.
* RULES *:
- Share only ONE VIDEO, 10 seconds max (FOR THIS ONE, LONGER VIDEOS WILL BE ACCEPTED); use ANY TOOL you want.
- NO QUOTES, because quotes get diluted and are impossible to catch up with.
- The theme is: "Under Water Beauty Challenge". Dive deep into the magic of the ocean and create your most breathtaking underwater beauty! ✨ Whether it's a graceful mermaid, a cosmic sea goddess, a glowing siren, or a dreamy ocean queen, show us your vision of underwater elegance, mystery, and wonder.
Theme by: @TGBA2023
VIDEO COVER BY: @TGBA2023. Check the example video cover.
I won't ask you to tag people; do as you please. I just want people to be free: no pressure, no obligation.
Replies 31 · Reposts 16 · Likes 79 · Views 2.1K
G00dVibes @aivibelab ·
The pre-mortem is great for identifying failure, but I’ve found it even more powerful to use AI Studio and Gemini for 'Future-Back' builds. I don't just ask how I'll fail; I ask the agent to look 6–12 months ahead based on today's trajectory and tell me what the world will actually demand by then. It turns the AI into a partner for divining concepts that don't even exist yet.
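A minimal sketch of that "Future-Back" ask as a script, assuming the google-genai Python SDK and a GEMINI_API_KEY in the environment; the model name, project description, and prompt wording are illustrative placeholders, not the author's actual setup:

```python
# Hedged sketch: a "Future-Back" prompt against the Gemini API.
# Assumes the google-genai Python SDK; model and plan are placeholders.
import os
from google import genai

client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

plan = "Ship a browser extension that summarizes long email threads."  # hypothetical project

prompt = (
    "Assume today's trajectory in AI tooling continues. It is now 12 months "
    "in the future. Describe what users actually demand from this product "
    f"by then, then work backwards to what I should build first:\n\n{plan}"
)

response = client.models.generate_content(
    model="gemini-2.0-flash",  # placeholder model id
    contents=prompt,
)
print(response.text)
```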
Replies 0 · Reposts 0 · Likes 3 · Views 390
Ole Lehmann @itsolelehmann ·
POV: claude traveled 6 months into the future and told you exactly how your next move failed.

it's called a premortem. daniel kahneman (nobel prize-winning psychologist behind "thinking fast and slow") called it his single most valuable decision-making technique. google, goldman sachs, and procter & gamble all use it before major launches.

here's the problem it solves. when you ask claude "is this a good plan?" it finds all the reasons to say yes. that's what it was trained to do. so you walk away feeling confident. you execute, and spend weeks/months building on top of that plan. then it blows up. and you realize the problem was obvious in hindsight, you just never stress-tested it because claude told you it was solid.

a premortem fixes this by flipping the frame. instead of asking "what could go wrong?" you tell claude "it's 6 months from now and this is already dead. tell me how it died." that shift turns off claude's optimism because there's nothing to be optimistic about. the premise already says it failed. so claude stops looking for reasons your plan will work and starts explaining how it fell apart.

claude comes back with every way your plan could die, each one with a full failure story and the early warning signs to watch for. then a synthesis pulls it all together:
> which failure is most likely
> which failure is most dangerous
> the single biggest hidden assumption you're making (often the most valuable part)
> a revised version of your plan with the gaps closed

you say "premortem this" and give it your plan. the skill handles the rest.
[media attached]
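A minimal sketch of that premortem flip as a raw API call, assuming the anthropic Python SDK with ANTHROPIC_API_KEY set; the model id and the plan are placeholder assumptions, not from the original post:

```python
# Hedged sketch: the premortem framing described above, sent as one
# user message. Assumes the anthropic Python SDK; model and plan are
# placeholders.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from env

plan = "Launch a paid newsletter in Q3."  # hypothetical plan

premortem = (
    "It is 6 months from now and this plan is already dead. "
    "Tell me how it died: list each failure mode as a short story "
    "with early warning signs, then synthesize the most likely "
    "failure, the most dangerous one, my biggest hidden assumption, "
    f"and a revised plan with the gaps closed.\n\nPlan: {plan}"
)

message = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model id
    max_tokens=1500,
    messages=[{"role": "user", "content": premortem}],
)
print(message.content[0].text)
```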
Replies 141 · Reposts 605 · Likes 6K · Views 587.8K
G00dVibes @aivibelab ·
@karpathy The 'strawberry' era is over. Just put the agent to work on the big life decisions—the 50m car wash dilemma. GPT says we're walking. 😎
[media attached]
Replies 0 · Reposts 0 · Likes 0 · Views 58
Andrej Karpathy @karpathy ·
Fireside chat at Sequoia Ascent 2026 from a ~week ago. Some highlights:

The first theme I tried to push on is that LLMs are about a lot more than just speeding up what existed before (e.g. coding). Three examples of new horizons:
1. menugen: an app that can be fully engulfed by LLMs, with no classical code needed: input an image, output an image, and an LLM can natively do the thing.
2. install.md skills instead of install.sh scripts. Why create a complex Software 1.0 bash script for e.g. installing a piece of software if you can write the installation out in words and say "just show this to your LLM"? The LLM is an advanced interpreter of English and can intelligently target installation to your setup, debug everything inline, etc.
3. LLM knowledge bases as an example of something that was *impossible* with classical code, because it's computation over unstructured data (knowledge) from arbitrary sources and in arbitrary formats, including simply text articles etc.
I pushed on these because in every new paradigm change, the obvious things are always in the realm of speeding up or somehow improving what existed, but here we have examples of functionality that either suddenly perhaps shouldn't even exist (1, 2) or was fundamentally not possible before (3).

The second (ongoing) theme is trying to explain the pattern of jaggedness in LLMs: how it can be true that a single artifact will simultaneously 1) coherently refactor a 100,000-line code base *and* 2) tell you to walk to the car wash to wash your car. I previously wrote about the source of this as having to do with the verifiability of a domain; here I expand on this as also having to do with economics, because revenue/TAM dictates what the frontier labs choose to package into training data distributions during RL. You're either in the data distribution (on the rails of the RL circuits) and flying, or you're off-roading in the jungle with a machete, in relative terms. Still not 100% satisfied with this, but it's an ongoing struggle to build an accurate model of LLM capabilities if you wish to practically take advantage of their power while avoiding their pitfalls, which brings me to...

Last theme is the agent-native economy. The decomposition of products and services into sensors, actuators, and logic (split up across all of the 1.0/2.0/3.0 computing paradigms), how we can make information maximally legible to LLMs, some words on the quickly emerging agentic engineering and its skill set, related hiring practices, etc., possibly even hints/dreams of fully neural computing handling the vast majority of computation with some help from (classical) CPU coprocessors.
Stephanie Zhan @stephzhan

@karpathy and I are back! At @sequoia AI Ascent 2026. And a lot has changed. Last year, he coined “vibe coding”. This year, he’s never felt more behind as a programmer. The big shift: vibe coding raised the floor. Agentic engineering raises the ceiling. We talk about what it means to build seriously in the agent era. Not just moving faster. Building new things, with new tools, while preserving the parts that still require human taste, judgment, and understanding.

Replies 254 · Reposts 719 · Likes 5.5K · Views 755.4K
G00dVibes retweeted
LudovicCreator @LudovicCreator ·
🎨 SEEDANCE 2 🎨
Prompt: A motorcyclist speeds across a massive suspension bridge during a violent storm. Lightning strikes the bridge tower and the entire structure begins collapsing. At the 2-second mark the rider jumps a breaking section, lands on a falling cable, slides down it, and launches back onto the roadway. The camera follows as the rider escapes just before the bridge collapses into the ocean. Storm bridge collapse stunt, cable slide jump, motorcycle escape, cinematic disaster action, 4K.
Made in @Hailuo_AI
Replies 9 · Reposts 10 · Likes 89 · Views 2.6K
G00dVibes retweeted
YokerAI @IATheYoker ·
Google just did something Claude Design wasn't expecting. And it did it for free, in the open, for everyone.

It's called DESIGN.md. But first we have to talk about what provoked it.

Anthropic launched Claude Design with ridiculous usage limits. Closed. Expensive. Controlled. The most powerful AI design tool of the moment, but with the key kept in its own pocket.

Google saw the play. And responded in the only way that really hurts: it opened the code. Free. For everyone.

It's called DESIGN.md. An open standard that any AI can read. In any tool. On any platform.

The problem it solves is real: AI agents have no idea about your design system. They don't know what your primary color means. They don't know whether that button is accessible. They don't know what each component is for. Every time you used AI to design, you started from zero.

DESIGN.md solves it with a single file. Three concrete things it does:

1️⃣ Design tokens
Your colors, typography, and spacing with natural-language descriptions. The AI understands what each value is for, not just what it is.

2️⃣ Accessibility validation
The agent checks its decisions against WCAG rules. Without you having to review every detail manually.

3️⃣ Full portability
Export and import your rules from one project to another. One file. Any tool. Any platform. Including Anthropic's. That's what hurts most on that side of the table.

Anthropic built walls. Google tore the whole wall down.

The repository is on GitHub: github.com/google-labs-co…

This is what happens when the giants fight each other. We win.
[media attached]
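The accessibility-validation piece is concrete enough to sketch. Below is the WCAG contrast-ratio check the post alludes to, applied to two hypothetical design-token colors; the post doesn't show DESIGN.md's actual file format, so only the WCAG formula here is standard, and the token values are made up:

```python
# Hedged sketch: WCAG 2.x contrast-ratio check for two design-token
# colors. The formula is the standard WCAG relative-luminance math;
# the token values are illustrative.

def _channel(c: float) -> float:
    # sRGB channel to linear, per the WCAG relative-luminance definition
    c /= 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb: tuple[int, int, int]) -> float:
    r, g, b = (_channel(v) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    l1, l2 = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# hypothetical tokens: primary text on a light brand background
ratio = contrast_ratio((33, 33, 33), (240, 240, 240))
print(f"{ratio:.2f}:1, AA normal text: {'pass' if ratio >= 4.5 else 'fail'}")
```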
Replies 34 · Reposts 488 · Likes 2.8K · Views 180.9K
G00dVibes retweeted
Sudo su @sudoingX ·
if you are running local ai or thinking to start, if i could give you one single piece of advice it is this: choose your agentic harness carefully. it matters more than the model.

i have lost count of how many people have dm'd me saying their local model is "dumb" or "broken" or "not as good as the cloud one." then they switch from openclaw or some other bloated framework to hermes agent and the same model suddenly works. just clean tool calls and the agent doing the thing it was supposed to do.

hermes agent is the best general purpose agent i have used in 2026. drives my single 3090 with qwen 3.6 27b dense q4, drives my dgx spark with nemotron omni q8, and the same harness handles coding, research, video editing, automation, anything you point it at. packed with skills out of the box (browser tools, code, github, jupyter, multimodal, more than i have used yet), full tool calling that holds across long sessions, persistent memory, sub agents.

if you tried local ai once or twice and gave up because it felt half baked, the issue might not have been the model. it might have been the harness wrapping it. swap the harness, run the same model again, and watch what changes.

hermes agent is the one i recommend to everyone running local. and especially to anyone who almost gave up on it.
[media attached]
Sudo su @sudoingX

most of you don't know how big a deal it is that a single rtx 3090 from 2020 runs qwen 27b dense q4 with 256k context at 40 tok/s, full agentic loops on hermes agent, zero tool call failures. the more i build on this card the more i think nobody really knows how untapped it actually is. the silicon was always capable, the models finally caught up.

Replies 125 · Reposts 206 · Likes 2.2K · Views 184.6K
G00dVibes retweeted
Sarah Hodsdon @sarahndipitous ·
In about 2-ish hours… Saturday Morning-ish Cartoons: AI Out in the Wild, episode 24, with @kukaj94 & the crew over at @tsi_org x.com/i/spaces/1ykap… What an interesting week we have had, folks… today we will be talking again about AI tool account settings… it sounds boring, but… waking up to a 30k bill overnight becomes very interesting & less boring when it happens to you 😉 *sips coffee and looks over at Ross*… a typical 'Do as we say, not as we did' situation. Hoping today is a helpful, bookmark-worthy space and a productive Q&A for small AI business types with larger-than-life plans and aspirations. As always, ALL are welcome… brew a cuppa, get some grub, walk the dog… see y'all in a bit ✨☕️✨
[media attached]
Replies 1 · Reposts 4 · Likes 6 · Views 401
G00dVibes retweeted
LudovicCreator @LudovicCreator ·
🎨 GPT 2 Series: Case 10 — Visual Prompt Anatomy Template 🎨
Today: Case 10: Visual Prompt Anatomy Template. A template for showing the structure of a prompt block by block: subject, style, composition, lighting, material, framing, text, constraints, and expected output.
Find 4 examples. Prompt template in the first comment, other prompts in comments because they're too long for ALT.
All visuals are made with @pixio_ai
1/6
[4 media attachments]
Replies 6 · Reposts 27 · Likes 174 · Views 7.4K
G00dVibes @aivibelab ·
This conversation is sublime—Sarah, I'm walking the perimeter in Vercel and AI Studio today, really thinking about which projects will get the paid Gemini key. I've got 25 projects in Vercel and at least 20 in AI Studio (haven't counted them all yet). Here is the logic I'm using for the 2026 stack:

The Google "Pay-as-you-go" Reality
It's less about a bill and more about a Privacy Shield. Google currently offers a generous free tier even on the paid plan—you're only charged if you exceed specific RPM/TPM thresholds.

The "Safety" Catch
On the Free tier, your data is essentially the product; it may be used to train Google's models. Once you set up billing, your data is generally opted out of training. For private repos and proprietary agentic workflows, that security plus is a no-brainer.

Cost vs. Scale in 2026
For a small project like my Multiversal Studios, I'll likely stay under the free limits anyway, unless my agents start running hundreds of complex prompts an hour. Will it cost me today? No. Setting up the billing account won't trigger a charge until I actually hit heavy production usage. It's exactly like putting a credit card on file for a hotel "just in case."

Updating the Stripe restricted keys and ElevenLabs environment variables now to make sure the whole perimeter is secure.
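A minimal sketch of that "perimeter" idea as a startup check: fail fast if any expected secret is missing from the environment. The variable names are hypothetical stand-ins, not the author's actual configuration:

```python
# Hedged sketch: verify expected secrets exist before an agent runs.
# The names below are illustrative placeholders.
import os
import sys

REQUIRED = ["GEMINI_API_KEY", "STRIPE_RESTRICTED_KEY", "ELEVENLABS_API_KEY"]

missing = [name for name in REQUIRED if not os.environ.get(name)]
if missing:
    sys.exit(f"missing environment variables: {', '.join(missing)}")
print("perimeter OK: all expected keys are set")
```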
Replies 1 · Reposts 0 · Likes 1 · Views 32
G00dVibes retweeted
Alex Volkov @altryne ·
April 2026 was wild - a major AI release nearly every day!
Mar 31: Claude Code leak
Apr 1: Alibaba Wan 2.7-Image · Fish Audio STT
Apr 2: Google Gemma 4 | Alibaba Qwen 3.6-Plus
Apr 4: OpenAI GPT-Image-2 (Arena leak)
Apr 6: Gemini Robotics-ER 1.6
Apr 7: Anthropic Claude Mythos Preview · Z.ai GLM-5.1
Apr 8: Meta Muse Spark
Apr 9: Anthropic Managed Agents
Apr 10: AI Engineer London
Apr 11: MiniMax M2.7 (open weights)
Apr 14: Baidu ERNIE-Image 8B
Apr 15: Google Gemini 3.1 Flash TTS
Apr 16: Anthropic Claude Opus 4.7 | OpenAI Codex (computer-use)
Apr 17: Anthropic Claude Design
Apr 20: Moonshot Kimi K2.6 · OpenAI Codex Chronicle
Apr 21: OpenAI ChatGPT Images 2.0
Apr 22: OpenAI Privacy Filter (1.5B)
Apr 23: OpenAI GPT-5.5 + GPT-5.5 Pro
Apr 24: DeepSeek V4 Pro & Flash
Apr 27: OpenAI on AWS
Apr 29: Cursor SDK | Baidu ERNIE 5.1 Preview | Stripe Link Wallet (Agents)
Apr 30: xAI Grok 4.3
This is an excerpt of today's @thursdai_pod, link in first comment.
Replies 5 · Reposts 8 · Likes 23 · Views 5.9K
G00dVibes retweeted
LudovicCreator @LudovicCreator ·
🎨 GPT 2 Series: Case 9 — AI Agent Workflow Card 🎨
Today: Case 9: Creative Brief Visual Template. A template for transforming a creative brief into a visual one-pager: objective, audience, tone, deliverables, references, constraints, and art direction.
Find 4 examples. Prompt template in the first comment, other prompts in comments because they're too long for ALT.
All visuals are made with @pixio_ai
1/6
[4 media attachments]
Replies 5 · Reposts 19 · Likes 152 · Views 7K
G00dVibes retweeted
Grok Imagine @imagine ·
Your entire creative workflow just collapsed into one infinite canvas. In @imagine Agent Mode, you can brainstorm, write, generate and edit images, then turn them into videos without leaving the page. Try it at grok.com/imagine, on desktop.
Replies 780 · Reposts 849 · Likes 6.9K · Views 37M