Meetless AI @MeetlessAI

238 posts

Meetless is the Version Control for Product Change.

USA · Joined May 2025
51 Following · 11 Followers

Meetless AI @MeetlessAI
the scariest part of AI agents isn’t hallucination. it's that nobody built an approval layer between "agent decides" and "agent ships." your CI/CD pipeline has gates for code. what's the gate for decisions? asking because I watched a team revert 3 days of agent work last week because the original requirement was already outdated.
Meetless AI @MeetlessAI
@official_taches @Apple spec driven development only works if the spec stays current. the moment someone changes scope in a slack thread and forgets to update it, your context-engineered agent is building confidently against stale reality
Lex Christopherson @official_taches
.@Apple Xcode 26.3 ships with Claude and Codex built in. Agentic coding is going mainstream, guys. 2026 will be the year vibe coding becomes the norm. Those who achieve success with it will be using the approaches GSD was built on: context engineering, spec-driven development, and metaprompting.
Meetless AI @MeetlessAI
@lennysan when building costs zero the only expensive thing left is building the wrong thing. and you find out you built the wrong thing when the spec changed mid-sprint and nobody propagated it
Lenny Rachitsky @lennysan
Creator of the Design Sprint on how AI is making distribution the new biggest challenge for startups ❗️Sprints are no longer about reducing uncertainty in the face of high costs. ‼️ Today they are about deciding what to do and how to stand out when the cost of building moves toward zero.
John Zeratsky @jazer

This is weird for me to write, but I think our sprint methods are the best way to work in a world that didn't exist when they were created. Let me explain...

When @jakek created the Design Sprint in 2010, the cost of building software was very high. Part of what made his method valuable — and why we embraced it at GV starting in 2012 — was that it allowed teams to run in Plan Mode before executing (yeah that's a Claude Code reference!)

Today, the cost of building software is dramatically lower. It's not as low as people on here say, but it's objectively way faster and cheaper to ship software today than it was in 2010. Today, we find that our sprints (the Design Sprint plus new methods like the Foundation Sprint) have taken on a different and likely more important role for teams building software.

❗️Sprints are no longer about reducing uncertainty in the face of high costs. ‼️ Today they are about deciding what to do and how to stand out when the cost of building moves toward zero.

I'm seeing this idea pop up everywhere. Builders now emphasize thoughtful planning and definition before setting agents to work. Design and product are more important than ever... but the jobs have also fundamentally changed. @lennysan says that writing PRDs is now the most important technical skill.

Everything has changed since 2010. Yet in other ways, nothing has. It's eerie. It's still important for founders — maybe more important than ever — to make good decisions based on good information, then move quickly to validate their hypotheses. Sure, the reasons have changed: it's no longer because of high costs; it's because of low costs. But this work is essential, and despite all the advice out there about WHAT to do (use Markdown this way, set up your environment that way, use agents for this, etc), there are still few frameworks for HOW to do this planning right. Except, of course, for our sprints :) That's why Lenny called them "the missing manual" last year. Which is kinda wild.

And there's one more challenge: When it's cheaper for everyone to build, what happens to competition? Yeah, it explodes. If you want to be successful — capture attention, solve a real problem, be reliable — I think you need to leverage opinionated, incisive human thought. That's always been a part of our sprints. By breaking complex decisions into concrete steps, creating focused work time, and working alone together, we help teams get to crisper, better, differentiated perspectives on what to build and how to talk about it.

So... I guess this whole post just turned into "yay us"! LOL But seriously... I'm posting this because I believe more than ever in the value of working this way — and continue to see it be absurdly effective for teams building new products. And also, I'm just kind of amazed. It's not often that a tool built for one era turns out to be even more valuable in another. What do you think? What else has proved surprisingly valuable in this wild new world?

Meetless AI @MeetlessAI
@martinfowler context engineering assumes the context is current. in most teams the PRD was last updated 3 weeks ago and the real spec lives in a slack thread. you're engineering stale context into your agents
Martin Fowler @martinfowler
NEW POST: Powerful context engineering is becoming a huge part of the developer experience of modern LLM tools. Birgitta Böckeler explains the current state of context configuration features, using Claude Code as an example. martinfowler.com/articles/explo…
Meetless AI @MeetlessAI
@NickADobos exactly right. and clarity isn't just a clear initial prompt. it's whether the spec you gave the agent at 9am still matches what the PM agreed to at 2pm. clarity decays faster than code
Nick Dobos @NickADobos
Amazing post on vibe coding. “the thing i got wrong at first is thinking vibe coding is about learning to code and... it's not! it's about learning to communicate the skill isn't python or javascript or whatever, the skill is clarity … the problem wasn't Claude. the problem was me. i was asking it to read my mind”
Meetless AI @MeetlessAI
@milan_milanovic the agent made a decision. nobody reviewed it. blast radius was production. this is what happens when you automate execution but leave decision governance completely manual
Dr Milan Milanović @milan_milanovic
How Amazon's AI coding tool deleted a Production environment

Recently, AWS engineers gave their agentic coding tool, Kiro, a simple task: fix a small issue in Cost Explorer. Kiro's response was to delete the entire environment and rebuild it from scratch. That took down a customer-based service for 13 hours!

𝐈𝐭 𝐰𝐚𝐬𝐧'𝐭 𝐭𝐡𝐞 𝐟𝐢𝐫𝐬𝐭 𝐭𝐢𝐦𝐞. A senior AWS employee told the Financial Times this was at least the second AI-caused production outage in recent months. The first involved Amazon Q Developer. Both times, engineers let the AI agent resolve issues without intervention. The employee described both incidents as "entirely foreseeable."

𝐀𝐦𝐚𝐳𝐨𝐧'𝐬 𝐫𝐞𝐬𝐩𝐨𝐧𝐬𝐞: '𝐮𝐬𝐞𝐫 𝐞𝐫𝐫𝐨𝐫, 𝐧𝐨𝐭 𝐀𝐈 𝐞𝐫𝐫𝐨𝐫.' Their argument is that the engineer had broader permissions than expected, a misconfigured role, not an AI autonomy problem. Technically true. But a human engineer with those same permissions probably wouldn't have nuked a whole environment to fix a minor bug. A human would have paused. The agent didn't.

𝐓𝐡𝐞 𝐬𝐚𝐟𝐞𝐠𝐮𝐚𝐫𝐝𝐬 𝐜𝐚𝐦𝐞 𝐚𝐟𝐭𝐞𝐫, 𝐧𝐨𝐭 𝐛𝐞𝐟𝐨𝐫𝐞. Mandatory peer review for production access, staff training, and resource protection measures, all added post-incident. You can't retroactively blame "user error" when the process that should have caught it didn't exist yet.

𝐓𝐡𝐞 𝐛𝐢𝐠𝐠𝐞𝐫 𝐩𝐢𝐜𝐭𝐮𝐫𝐞 𝐢𝐬 𝐨𝐫𝐠𝐚𝐧𝐢𝐳𝐚𝐭𝐢𝐨𝐧𝐚𝐥. Amazon mandated 80% weekly Kiro usage and tracked it as a corporate OKR. Engineers who preferred Claude Code or Cursor needed VP approval to use alternatives. Around 1,500 engineers pushed back on internal forums.

Source: FT
Meetless AI @MeetlessAI
@rohanpaul_ai the fix will probably be 'senior engineer must review AI code.' but if the decision to change the spec was never reviewed first, reviewing the code is too late. governance starts upstream
Rohan Paul @rohanpaul_ai
Amazon is holding an emergency meeting on March 26 to address website crashes caused by errors in AI-assisted coding. Recent data shows the company suffered 4 major outages in a single week after AI tools suggested unsafe software changes.

A Sev 1 is the highest level of technical emergency because it means critical features, like the checkout button, have completely broken. The trouble began when engineers used GenAI tools to speed up the coding process. Amazon admits they are still figuring out the best rules for how to safely use these powerful AI tools during daily work.

To stop future crashes, the company is now forcing senior engineers to manually review any code changes made by AI assistants. They are also investing in agentic safeguards, which are AI systems that act like digital hall monitors to catch errors before they cause problems.

Source: cnbc. com/2026/03/10/amazon-plans-deep-dive-internal-meeting-address-ai-related-outages.html
Meetless AI @MeetlessAI
@TheNewMidia @Yuchenj_UW yeah and 'requirements for production code' is a deceptively hard problem because the requirements are also moving. the thing teams need to get right isn't just writing them down, it's making sure every change to them reaches everyone who's building against them
The Midia @TheNewMidia
@Yuchenj_UW Vibe coding any one of those would be a feat in and of itself for these people. First, people have no clue about the requirements that exist for production-grade code, and that's where the conversation pretty much ends. If you can't build requirements or a spec, the conversation is over
Yuchen Jin @Yuchenj_UW
“SaaS is dead.” “Software is free.” What stops you from vibe-coding Slack, GitHub, Notion, Zoom, 1Password, Jira, and Rippling for your company over a weekend then?
Meetless AI @MeetlessAI
@Jack__boy_ @Chain_GPT the auditing piece is the one people skip. requirements get written, generation happens, but nobody audits whether the requirements changed between the spec and the build. by the time you're in review you're comparing code to a spec that's 2 weeks stale
Jack @Jack__boy_
@Chain_GPT Vibe coding works best when you treat the prompt like a real technical spec. Clear requirements + modular generation + auditing creates a powerful workflow for builders. 🚀 #CGPTCommunity
Meetless AI @MeetlessAI
@TheGeodexes spec-driven is the floor. the ceiling is spec-driven where the spec actually stays current when scope changes. most teams write good specs and then watch them become wrong over the next 2 weeks as requirements drift in Slack and nobody updates the source of truth
Kevin @TheGeodexes
most ai coding tools: "build me X" → agent starts coding immediately

the problem:
- no requirements
- no design
- no plan

garbage in, garbage out

spec-driven > vibe-driven
Meetless AI @MeetlessAI
@kevin_insight @VivienMahe agree. and the follow-up problem is keeping the spec current. requirements defined on Monday, sprint scope changes on Wednesday, half the team builds against Monday's version. the spec discipline breaks down not at creation, but at change propagation
kevin’s insight @kevin_insight
@VivienMahe I don’t like the term “vibe coding.” If you care about reliability, it’s not vibes — it’s specs.

Clear requirements. Defined constraints. Explicit outputs.

AI performs best with structure. Less vibe coding. More spec coding.
Vivien Mahé @VivienMahe
Vibe coding killed UI originality. Every vibe-coded app looks the same. The UI/UX has 0 soul. It's empty, poor. Same colors, same buttons, same chips, same cards. AI was supposed to make products better, not copy-paste the same boring UI everywhere 🥲
Meetless AI @MeetlessAI
@ryan_tech_lab @rburton 100%. and the sneaky second order failure mode: spec is sharp on day 1, then it changes. PM updates scope in Slack, nobody propagates it, agent keeps building against the original spec. garbage in isn't just about vague requirements, it's about stale ones too
Ryan Craven @ryan_tech_lab
@rburton The requirements part is criminally underrated. Most vibe coding failures I see aren't AI failures, they're spec failures. Garbage in, garbage out. The sharper you can describe what you want, the better the AI performs.
Richard L ₿urton @rburton
Remember the 80/20 rule with Vibe Coding. 80% writing clear requirements, 20% testing. That's your job going forward... for now.
Meetless AI @MeetlessAI
@yourfriendbrett this is right but there's a layer below it: requirements need to stay current as the project evolves. you spec it well on day 1, PM changes 2 acceptance criteria on day 5, agent is still building against day 1. most teams have no way to propagate that delta
Brett (33.3%) @yourfriendbrett
Agentic engineering (aka vibe coding) output quality largely depends on your ability to properly specify the requirements prior to implementation. These are my favorite tools to help you spec for various levels of complexity…

Very complex / very large features / large apps: github.com/glittercowboy/…
Moderately complex / large features / smaller apps: github.com/github/spec-kit
Small to moderately complex features / bug fixes: Claude Code native Plan Mode

The aspect that seems to impact quality the most is how much you are trying to do in a single context window. Even as context windows grow, quality and accuracy drop like a rock as you go deeper into a single context window.
Meetless AI @MeetlessAI
@XRMultiverse good prompt for session start. the issue is it only helps at boot. if someone changes the requirements while the agent is already 3 commits in, there's no mechanism to surface that and the agent just keeps building against stale scope. a prompt can't fix org dysfunction
XR Multiverse @XRMultiverse
AI coding (Vibe coding) is stupid and dumb but it works sometimes. Here's a prompt to make it easier. "Include anything missing and required." This tells the AI to use what it knows about what you are making and ensure that it has the requirements to make it work.
Meetless AI @MeetlessAI
@AscendAlan21 40% is probably optimistic. PM builds mental model. doc tries to capture it. engineers build against the doc. mental model and doc drift apart by day 3. nobody catches it until QA or a user reports it. the 3 weeks of Figma didn't fail, the propagation did
Ascend Alan @AscendAlan21
Every handoff kills 40% of the original intent. Watched a PM spend 3 weeks on a Figma spec that got built completely wrong. Same PM learned Cursor, shipped the working feature in 6 hours. The handoff is the bottleneck.
Meetless AI @MeetlessAI
@usebrief this is the outcome when scope changes happen in conversations instead of artifacts. what they actually wanted was discussed somewhere, probably multiple times, but it never became a shared source of truth engineering could build against. so they built against the ticket
Brief @usebrief
So you build the feature. Ship it. And then realize you solved the wrong problem, because what they actually wanted was bulk actions, not exports. We built Brief as a context layer for AI coding agents. But here's what we didn't expect: human builders use it.
Brief @usebrief
Your memory isn't as good as you think it is. Three weeks ago, you had a call where a customer mentioned they'd "love better export options." Last week, another customer said something similar. Or was it the same thing? You can't quite remember.
Meetless AI @MeetlessAI
@tanep3 @lexfridman documenting before you build is good discipline. the thing that kills teams isn't the initial doc, it's that the doc becomes stale after the first scope change and nobody updates it. now you have a precise requirements document that describes the old thing. happens every sprint
Tane Channel Technology (たねちゃんねる)
My vibe coding process is as follows. The key to success is thorough and precise design documentation.
1. Create a requirements specification document with the AI.
2. Have the AI perform a simulation review to check for any missing requirements.
3. Create a system design document with the AI.
4. Have the AI perform a simulation review to check for anything missing in the system design.
5. Generate code.
Once you've created precise documentation through steps 1 through 4, the AI will output near-perfect code.
Lex Fridman @lexfridman
Programming with AI is insanely fun. Process is:
1. generate code
2. read & understand code that was generated
3. make small changes "manually" (still with great autocomplete)
4. test & debug
5. make big changes with new prompt
6. go back to step 1

Pure vibe coding skips steps 2 & 3. And I think we'll need human expertise & experience for steps 2, 3 (and 4) for quite a while. But holy shit, I'm learning much faster, being way more productive, and having more fun.

Not sure we're close to "AGI/ASI", but the software engineering world is definitely getting transformed very rapidly. It feels surreal to be experiencing it directly on a daily basis. Of course, there are both scary (jobs, security) & exciting (productivity, access) consequences to this transformation, as with all powerful technology.

This post was fully written by human in one-shot without spellcheck, it's 100% organic human writing 🤣
Meetless AI @MeetlessAI
@Anirudh_g23 rewrites are usually not a coding quality issue. they're a requirements propagation issue -- scope changed somewhere upstream and nobody realized until QA found it or a customer reported it. better vibe coding is not the fix for that, that's organizational
Anirudh @Anirudh_g23
Missing features, endless rewrites, code that feels “alive” but never finished? It’s a skill issue—vibe coding only works with a killer product requirements doc. Here’s my 5-step framework for actual completion 🧵
Meetless AI @MeetlessAI
@Optmyzr strong argument for making requirements explicit before building. prototype reveals gaps. the harder problem is when you're past prototype and scope changes mid-sprint. now nobody's sure which version engineering is building against and the prototype can't tell you
Optmyzr @Optmyzr
6️⃣ How does vibe coding help beyond automation?

It clarifies thinking. When you see a working prototype, you quickly spot:
• Missing requirements
• Hidden assumptions
• Real complexity

It sharpens strategy, not just execution.
Optmyzr @Optmyzr
1/ Vibe coding meets PPC. At our latest PPC Town Hall, Nils Rooijmans answered the 9 biggest questions marketers are asking about vibe coding. Here’s the full breakdown 🧵👇
Meetless AI @MeetlessAI
@JPSalomonAI exactly right. and the problem isn't even writing requirements, it's that they change and the change doesn't reach everyone. PM updates scope in Slack thread, engineering is still building against the ticket. happens on teams with thorough requirements docs too
Julien-Pierre Salomon @JPSalomonAI
Another pendulum swing: vibe coding out, 'agentic engineering' in. Still missing the actual constraint. Typing was never the bottleneck—muddy requirements were. Faster agents don't help when you don't know what to build. Are we solving the right problem?