VibeCodes

286 posts


@getvibecodes

Drop an idea. AI agents refine it, build the board, and ship the code. Open source project platform — kanban boards, 54 MCP tools, agents as team members.

Everywhere · Joined February 2026
211 Following · 65 Followers
VibeCodes @getvibecodes
the trigger-to-agent pipeline is real. but here's the thing nobody's talking about: once you have 20 agents running 24/7 on different triggers across your business... who's keeping track of what they actually did? the infrastructure for running agents is here. the infrastructure for managing them is not. and that gap is going to cost people a lot before they realize it.
0 replies · 0 reposts · 0 likes · 427 views
GREG ISENBERG @gregisenberg
claude code just shipped routines

you tell it what to do, point it at your project, set a trigger, and it runs 24/7 on their servers with your laptop closed

i immediately thought of larry ellison: "the money is never in the technology, it's in the infrastructure the technology runs on"

the model is the commodity. the trigger is the product. and whoever maps the most valuable real world events to the most specific industry workflows is going to build something massive

here's what i mean by trigger....

a permit gets filed. a customer's usage drops 40% in a week. a competitor launches a feature. a deal sits in your pipeline untouched for 14 days. a contract hits 90 days before renewal. a stripe payment fails.

these are all triggers. some public, some inside your own tools. and every single one is a moment where an AI agent can step in and do something valuable before a human gets around to it

the playbook is like this: map every trigger that matters in one industry → wire an AI agent to each one → sell the outcome. the person who shows up first with exactly what someone needs at exactly the right moment wins the deal every time. and the people who go embarrassingly deep on one industry's trigger map are going to build generational companies

that's the entire game right now for people reading this tweet. claude routines, openclaw, hermes etc... the infrastructure is all here. just pick your niche, build audiences/content to get awareness, wire the agents to triggers, start selling, and pinch yourself that this is the greatest time in history to be starting a company

let's go
Claude@claudeai

Now in research preview: routines in Claude Code. Configure a routine once (a prompt, a repo, and your connectors), and it can run on a schedule, from an API call, or in response to an event. Routines run on our web infrastructure, so you don't have to keep your laptop open.
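To make the trigger idea concrete, here is a minimal sketch of a trigger-to-agent dispatch loop. It is illustrative only: the event names, payload fields, and run_agent helper are invented, and Claude Code routines expose their own configuration surface, not this API.

```python
# Minimal sketch: map business events ("triggers") to agent prompts.
# Event names, payloads, and run_agent() are hypothetical stand-ins.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Trigger:
    name: str                          # e.g. "stripe.payment_failed"
    prompt_for: Callable[[dict], str]  # builds the agent prompt for this event

def run_agent(prompt: str) -> str:
    """Stand-in for whatever agent runtime actually executes the work."""
    return f"[agent acts on]: {prompt}"

TRIGGERS = [
    Trigger("stripe.payment_failed",
            lambda e: f"Draft a dunning email for customer {e['customer_id']}"),
    Trigger("crm.deal_stale_14d",
            lambda e: f"Summarize deal {e['deal_id']} and propose a next step"),
    Trigger("contract.renewal_90d",
            lambda e: f"Prepare a renewal brief for account {e['account_id']}"),
]

def dispatch(event_name: str, payload: dict) -> None:
    """Fan an incoming event out to every agent wired to that trigger."""
    for trigger in TRIGGERS:
        if trigger.name == event_name:
            print(run_agent(trigger.prompt_for(payload)))

dispatch("stripe.payment_failed", {"customer_id": "cus_123"})
```

The point of the sketch is the shape: the value lives in the trigger map (the list), not in the model call (run_agent).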

103 replies · 146 reposts · 2.2K likes · 364.9K views
VibeCodes @getvibecodes
interesting pattern happening right now:

- Claude Code just shipped Routines (agents running 24/7)
- Uber's CTO says AI coding tools already blew through their 2026 budget
- Microsoft says AI agents should get their own software licenses

everyone's racing to deploy more agents. but almost nobody is building the management layer for when they're actually running. who tracks what they did? who catches it when they break something? who's responsible? this is the missing infrastructure right now.
2 replies · 0 reposts · 2 likes · 33 views
VibeCodes @getvibecodes
this is the part most people miss. deploying AI agents isn't a one-time setup — it's an ongoing management problem. who's monitoring what agents are doing? who reviews their output before it goes live? how do you even know if an agent broke something three steps ago? we're seeing the same pattern. teams need tooling that treats agents like team members — with task boards, audit trails, and actual accountability. the "agent manager" role is going to be huge.
1 reply · 0 reposts · 0 likes · 324 views
Aaron Levie @levie
One corollary to the fact that AI agents take real work to set up in a company at scale is that the role of the forward deployed engineer, or whatever it gets called in the future, isn't going away any time soon.

When a vendor sells any kind of agents into an organization, you're no longer just selling a software tool that gets implemented and you're done. You're fundamentally selling some form of the actual workflow being done by your technology. This is far closer to a customer buying from a professional services firm than implementing traditional technology.

This will almost always require a deep understanding of the domain the customer operates in, the ability to help a customer wire up their systems to support the agents, make sure all the context is set up in the right way, and help provide change management to actually get the company to adapt its business processes.

The ability to do this across customers, figure out best practices in a specific industry and customer segment, take new features back to go build in the product, and so on is going to be key. There's no shortcut to getting this work done by the enterprise, and the vendors are going to have to do a lot of this or risk low adoption.

Finally, this is a big opportunity for existing and next gen professional services companies. There are all new practice areas emerging in every system integrator and consulting firm just to do this kind of work, and this is going to continue to be in demand for quite some time. Yet another example of jobs that aren't actually going away.
Aaron Levie@levie

The more enterprises I talk to about AI agent transformation, the more it's clear that there is going to be a new type of role in most enterprises going forward. The job is to be the agent deployer and manager in teams. Here's the rough JD:

This person will need to figure out what the highest leverage set of workflows on a team is (either existing or new ones) where agents can actually drive significantly more value for the team and company. In general, it's going to be in areas where if you threw compute (in the form of agents) at a task you could either execute it 100X faster or do it 100X more times than before. Examples would be processing orders of magnitude more leads to hand them off to reps with extra customer signal, automating a contract review and intake process, streamlining a client onboarding process to remove as many steps as possible, setting up knowledge bases that the whole company taps into, and so on.

This person's job is to figure out what the future state workflow needs to look like to drive this new form of automation, and how to connect up the various existing or new systems in such a way that this can be fulfilled. The gnarly part of the work is mapping structured and unstructured data flows, figuring out the ideal workflow, getting the agent the context it needs to do the work properly, figuring out where the human interfaces with the agent and at what steps, managing evals and reviews after any major model or data change, and running and managing the agents on an ongoing basis, tracking KPIs, and so on.

The person must be good at mapping the process and understanding where the value could be unlocked, be relatively technical, and have full autonomy to connect up business systems and drive automation. This means they're comfortable with skills, MCP, CLIs, and so on, and the company believes it's safe for them to do so. But also great operationally and at business. It may be an existing person repositioned, or a totally net new person in the company.

There will likely need to be one or more of these people on every team, so it's not a centralized role per se. It may roll up into IT or an AI team, or live in the function and just have checkpoints with a central function. This would also be a fantastic job for next gen hires who are leaning into AI, and are technical, to be able to go into. And for anyone concerned about engineers in the future, this will be an obvious area for these skills as well.

80 replies · 53 reposts · 730 likes · 136.1K views
VibeCodes @getvibecodes
this is exactly the gap we kept running into. Claude Code is amazing for building, but the second you have a team of people (or agents) shipping code, everything falls apart without some kind of coordination layer. we're building VibeCodes open source to solve this — kanban boards where AI agents are actual team members, full audit trails of what changed, and 54 MCP tools so you can see exactly what every agent did. the governance stuff you're describing doesn't have to be enterprise-only.
0 replies · 0 reposts · 2 likes · 501 views
Santiago @svpino
Claude Code is the best coding tool I've ever used. I use it every single day.

But if you asked me to let 12 people across a company start building tools with Claude Code and ship them to production, I'd think you are nuts.

Claude Code is brilliant for building software, but there are a ton of questions that Claude Code doesn't answer:

- Who has access to what data?
- What compliance policies apply to the application?
- Where are the audit logs?
- Who approves production applications?
- Who can see what's being built and by whom?

That's the main issue with vibe-coding: you can't answer these questions with an LLM.

As far as I know, Superblocks 2.0 is the only platform that's focusing on enterprise software for the modern AI world. This is pretty awesome!

They wrap governance around any application built with AI:

- security
- permissions
- audit trails
- compliance

All of these centrally managed from one platform. I really don't see any other way going forward for companies that want to take advantage of vibe-coded applications.

By the way, Superblocks does all of this without moving your data outside your network. Everything runs in full isolation in your cloud.

Here is the link to check it out: superblocks.rocks/spaul

Superblocks was gracious enough to collaborate with me on this post.
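To make "wrap governance around it" concrete, here is a toy approval gate with an append-only audit log. The schema, approver table, and function names are invented for illustration; this is not how Superblocks (or any specific product) implements it.

```python
# Toy governance gate: production deploys require a named approver,
# and every decision lands in an append-only audit log. Illustrative only.
import json
import time

AUDIT_LOG = "audit.jsonl"
APPROVERS = {"billing-app-prod": {"alice", "bob"}}  # who may approve each app

def record(event: str, **fields) -> None:
    """Append one audit entry: who did what, to which app, and when."""
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps({"ts": time.time(), "event": event, **fields}) + "\n")

def deploy(app: str, built_by: str, approved_by: str) -> bool:
    """Refuse any deploy that lacks a registered approver for the app."""
    if approved_by not in APPROVERS.get(app, set()):
        record("deploy_denied", app=app, built_by=built_by, approver=approved_by)
        return False
    record("deploy_approved", app=app, built_by=built_by, approver=approved_by)
    return True

deploy("billing-app-prod", built_by="vibe-coder-3", approved_by="alice")    # True
deploy("billing-app-prod", built_by="vibe-coder-3", approved_by="mallory")  # False
```

Each of the questions above maps to a lookup here: access is the APPROVERS table, audit logs are the jsonl file, and approval is the gate itself.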
56 replies · 42 reposts · 540 likes · 99.6K views
VibeCodes @getvibecodes
@anissagardizy8 @LauraBratton5 this is why the real cost of AI coding isn't the tokens — it's the chaos after. 12 agents generating code with no coordination means 12x the merge conflicts and "wait who changed this?" the bottleneck moved from writing code to managing it. most teams haven't caught up yet.
0 replies · 0 reposts · 2 likes · 5.1K views
Anissa Gardizy @anissagardizy8
Uber's CTO told @LauraBratton5 that AI coding tools, particularly Anthropic's Claude Code, have already maxed out the company's 2026 AI budget 📈 "I'm back to the drawing board, because the budget I thought I would need is blown away already," Neppalli Naga said. theinformation.com/newsletters/ap…
89 replies · 140 reposts · 1.2K likes · 1.4M views
VibeCodes @getvibecodes
the biggest lie in vibe coding right now: "I built this app in 20 minutes"

yeah, you generated the code in 20 minutes. then you spent 3 days debugging it, figuring out what the AI actually did, and trying to get it to work with everything else.

the hard part was never writing code. it's managing everything around it. what changed, what broke, what's actually ready. that's the real bottleneck nobody wants to talk about.
1 reply · 0 reposts · 0 likes · 20 views
VibeCodes @getvibecodes
the MCP server point is underrated. we ship 54 MCP tools with our open source project and the amount of organic discovery that generates is wild. people find you because their AI already knows about you. no ads, no cold outreach. the AI literally recommends your tool mid-workflow.
1 reply · 0 reposts · 1 like · 15 views
GREG ISENBERG @gregisenberg
200,000+ new vibe coding projects get created every day

yet almost NONE of them get customers

7 distribution strategies that actually work right now for your startup:

1. build an MCP server. when someone asks claude or chatgpt the question your product answers, your tool shows up. the AI becomes your sales team. (a minimal sketch follows this post)

2. programmatic SEO. pick a keyword pattern (best X for Y). use firecrawl to pull real structured data so pages have actual value. one next.js template, AI generated content, human editing loop so it doesn't read like AI. 10,000 pages × 30 visits × 2% CVR × $10 = $60k/month from pages you built once.

3. vibe code a free tool (calculator, software etc). one problem, one tool, ship it today. it ranks, lives in people's workflows, markets your brand for years. ahrefs' free backlink checker has sent them more customers than most paid ads ever will.

4. answer engine optimization. people are getting answers from chatgpt and perplexity now, not just google. find the top questions your customer is asking AI. publish structured, definitive answers. one founder went from 4% to 20% AI referrals in a month just by doing this.

5. make the output of your product shareable. think spotify wrapped. think github graphs. think stripe atlas. what does your user want to screenshot and send? build that moment. add a pre-filled share button. every share is free impressions to your exact audience.

6. buy a niche newsletter. 10k subscribers for $5k to $20k. most owners are making $0 to $500 a month. DM them "ever thought about selling?" you inherit trust and a direct channel to your exact customer on day one. underrated.

7. 30 minute voice memo into claude: five tweet threads, three linkedin posts, one newsletter, short form clips. do this weekly. in 3 months you have more content than competitors who aren't doing this.

obviously, your project needs to be optimized so it isn't ai slop, but you'll get there.

code is commoditized. time to focus on distribution. pick 2 of these ideas and start this week to get customers.

this episode was designed to get your creative juices flowing. maybe it'll give you more ideas on growth tactics you'll use this week. full breakdown on the @startupideaspod.
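Strategy 1 is the most concrete of the seven. Here is a minimal MCP server using the official Python SDK (pip install "mcp[cli]"); the server name and tool are made up for illustration, and the tool happens to implement strategy 2's revenue math (10,000 × 30 × 0.02 × $10 = $60k).

```python
# Minimal MCP server (strategy 1): expose one tool that AI clients can
# discover and call. Server name and tool are hypothetical examples.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("vibe-distribution-tools")

@mcp.tool()
def estimate_seo_revenue(pages: int, visits_per_page: int,
                         conversion_rate: float,
                         value_per_conversion: float) -> float:
    """Project monthly revenue from a programmatic SEO play (strategy 2's math)."""
    return pages * visits_per_page * conversion_rate * value_per_conversion

if __name__ == "__main__":
    mcp.run()  # serves over stdio, so an MCP-aware client can wire it in
```

estimate_seo_revenue(10_000, 30, 0.02, 10) returns 60000.0, matching the tweet's arithmetic.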
224 replies · 156 reposts · 1.5K likes · 180.5K views
VibeCodes @getvibecodes
the "vibe coding slop" thing is real and it's not just anthropic. it's everywhere. the root problem is teams shipping AI-generated code with zero process around it. no review, no tracking, no accountability. fast output, no quality control. the tools got faster but the workflows didn't keep up.
1 reply · 0 reposts · 0 likes · 152 views
David Cramer @zeeg
anthropic being down, rate limits, etc is understandable. huge hardware reqs and insane growth. claude code being bug filled is just a result of people vibe coding slop at anthropic and accepting it as ok. it is not ok. you will lose your customers just as fast as you've gained them.
90 replies · 36 reposts · 1.1K likes · 116.7K views
VibeCodes @getvibecodes
@trq212 the plan-first approach is such a shift. most people are still doing "just write the code" prompting and wondering why things break. editing the plan before execution is where the real leverage is. you catch bad assumptions before they become bad code.
0 replies · 0 reposts · 0 likes · 274 views
Thariq @trq212
New in Claude Code: /ultraplan Claude builds an implementation plan for you on the web. You can read it and edit it, then run the plan on the web or back in your terminal. Available now in preview for all users with CC on the web enabled.
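Independent of how /ultraplan is actually implemented, the plan-first loop is easy to sketch: draft steps, pause for a human edit pass, then execute. All three helpers below are stand-ins, not Claude Code internals.

```python
# Generic plan-first workflow sketch: plan -> human review -> execute.
# plan() and execute() are placeholders for real agent calls.
def plan(task: str) -> list[str]:
    """Stand-in for an agent drafting an implementation plan."""
    return [f"inspect code relevant to: {task}",
            f"write a failing test for: {task}",
            f"implement: {task}",
            "run the full test suite"]

def review(steps: list[str]) -> list[str]:
    """The human edit pass: drop bad assumptions before they become bad code."""
    for i, s in enumerate(steps):
        print(f"  {i}. {s}")
    drop = input("step numbers to drop (comma-separated, blank for none): ")
    skip = {int(n) for n in drop.split(",") if n.strip()}
    return [s for i, s in enumerate(steps) if i not in skip]

def execute(steps: list[str]) -> None:
    for s in steps:
        print(f"executing: {s}")  # stand-in for the agent doing the work

execute(review(plan("add rate limiting to the API")))
```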
527 replies · 662 reposts · 10.3K likes · 1.3M views
VibeCodes @getvibecodes
we built something we actually needed ourselves.

we kept losing track of what our AI agents were doing across projects. code was getting generated but nobody knew what changed, what broke, or what was ready for review.

so we built VibeCodes, an open source project platform where AI agents are actual team members on your kanban board. you drop an idea in, agents pick it up, break it into tasks, write the code, and move cards across the board as they go.

54 MCP tools. full visibility. no more "wait, who changed that file?"

it's free and open source: vibecodes.co.uk
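A sketch of the card flow that description implies, under assumed column names (the real VibeCodes schema may differ): an agent is just another assignee, and every move is recorded.

```python
# Kanban card with a built-in history trail; column names are assumptions.
from dataclasses import dataclass, field

COLUMNS = ["idea", "in_progress", "review", "done"]

@dataclass
class Card:
    title: str
    assignee: str                            # an agent or a human
    column: str = "idea"
    history: list[str] = field(default_factory=list)

    def move(self, to: str) -> None:
        """Advance the card and log who moved it from where."""
        assert to in COLUMNS, f"unknown column: {to}"
        self.history.append(f"{self.assignee}: {self.column} -> {to}")
        self.column = to

card = Card("add OAuth login", assignee="agent-3")
card.move("in_progress")
card.move("review")   # a human (or reviewer agent) picks it up from here
print(card.history)   # the per-card answer to "who changed that?"
```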
0 replies · 0 reposts · 0 likes · 13 views
VibeCodes @getvibecodes
100%. the "more important things" part is what's interesting right now. when AI handles the syntax, the job shifts to architecture decisions, reviewing what the AI built, and managing the workflow. basically the same skills good senior devs always had, minus the typing. the people who'll struggle aren't the ones who can't code. it's the ones who never learned to think about systems beyond the code itself.
0 replies · 0 reposts · 0 likes · 20 views
Uncle Bob Martin @unclebobmartin
What we are losing with AI is syntax -- and good riddance. The less our brains are occupied by semicolons and braces the better. There are much more important things for us to consider and manage.
117 replies · 154 reposts · 1.9K likes · 242.3K views
VibeCodes @getvibecodes
unpopular opinion: we don't need more AI coding tools. we need AI project management tools.

everyone's racing to generate code faster. cool. but the bottleneck moved months ago. now it's: who changed what, does it work, does it break anything else, and does anyone actually know what the project looks like right now.

that's why we're building VibeCodes as an open source platform where AI agents work on a kanban board like real team members. 54 MCP tools. full visibility.

if you're tired of the "generate and pray" workflow, check it out: vibecodes.co.uk
1 reply · 1 repost · 1 like · 28 views
VibeCodes @getvibecodes
the permissions loop is painful. feels like the agent is fighting the environment instead of just doing the work. biggest issue with these tools right now is they try to do everything in one shot with no visibility into what's happening. you're just watching a spinner and hoping for the best. there's no way to course correct mid-task.
0 replies · 0 reposts · 0 likes · 707 views
Evan You @youyuxi
Honestly don't know what happened to Claude Code. Tried a one-off simple task in a fresh directory yesterday: it tried a bunch of things that didn't work, asked for a ton of permissions, and then got stuck for 4 minutes before I got tired of waiting and killed the session. This was on medium effort. Switched to Codex gpt 5.4 with medium effort and it one-shotted the task in under 1 minute.
240 replies · 96 reposts · 2.8K likes · 334.6K views
VibeCodes @getvibecodes
question for anyone building with AI agents: how are you keeping track of what your agents actually do? not the prompts. not the model. the actual work output. like which files they changed, what decisions they made, whether their changes broke something else. because right now the answer for most people is "I don't" and that's wild when you think about it.
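One minimal answer to the question, sketched as a per-task ledger of work output rather than prompts. The field names are illustrative, not a real VibeCodes schema.

```python
# Append-only ledger of agent work output: files touched plus the decision
# behind the change. Schema is illustrative only.
import json
import time
from pathlib import Path

LEDGER = Path("agent_ledger.jsonl")

def log_action(agent: str, task_id: str, files_changed: list[str],
               decision: str) -> None:
    """Record what an agent actually did, not what it was asked to do."""
    entry = {"ts": time.time(), "agent": agent, "task": task_id,
             "files": files_changed, "decision": decision}
    with LEDGER.open("a") as f:
        f.write(json.dumps(entry) + "\n")

def who_touched(path: str) -> list[dict]:
    """Answer 'wait, who changed this file?' from the ledger."""
    with LEDGER.open() as f:
        entries = [json.loads(line) for line in f]
    return [e for e in entries if path in e["files"]]

log_action("agent-7", "VIBE-42", ["api/auth.py"],
           "moved token validation into middleware")
print(who_touched("api/auth.py"))
```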
1 reply · 0 reposts · 2 likes · 19 views
VibeCodes @getvibecodes
because "building agents" sounds cool in a blog post but in practice it's just... project management with extra steps. who's keeping track of which agent did what? how do you review their work? what happens when two agents step on each other? nobody wants to talk about this because it's boring. but it's the actual hard part.
0 replies · 0 reposts · 0 likes · 477 views
dax @thdxr
a million posts on my feed telling me how to build agents

zero people talking about building agents
123 replies · 25 reposts · 1.2K likes · 59.8K views
VibeCodes @getvibecodes
"in the hands of someone who knows what they're doing" is the key part people keep glossing over. AI doesn't remove the need for expertise. it just shifts where the expertise matters. instead of writing every line yourself, you need to know how to direct, review, and manage what the AI builds. that's honestly a harder skill than most people realize.
0 replies · 0 reposts · 1 like · 574 views
Taelin @VictorTaelin
also, despite everything, bend2 will be the ultimate testimony of how incredible software can be built with AI in the hands of someone who knows what they're doing, particularly opus on this cycle (but also gpt 5.4 to an extent). people will be mind blown not just by it, but by how it was created. I'll probably make the full commit history public so everyone can see the ridiculous rate of progress that a single person can have when using these tools wisely.

for today: cuda is basically done, C parallel is done, GC is done, performance is *tremendous*, metal is next, and soon its stdlib will be larger than stackage.

turns out AIs exert their full potential when handcuffed by a great type system under a well done foundation. who could've guessed!
Taelin@VictorTaelin

lots of progress, Bend is now ~3 Opus fast-mode weeks of being launched, faster if spud is actually happening next week, I'm so happy

32 replies · 23 reposts · 599 likes · 24.9K views
VibeCodes @getvibecodes
this makes total sense from Anthropic's side. they already have the best model and the users. adding a "build" button is the obvious move. but here's what most people overlook: generating a full-stack app is the easy part now. the hard part is what happens on day 2 when you need to actually manage the project, track changes, and coordinate multiple agents working on the same codebase. that's a different product entirely.
1 reply · 0 reposts · 1 like · 20 views
VibeCodes @getvibecodes
fair point but also - even the people building crud apps with AI are hitting a wall they don't talk about. it's not the coding part that fails. it's the "ok now 4 different agents touched my project and nobody knows what changed" part. the unsolved problems aren't just in distributed systems. they're in managing the chaos that comes after you let AI loose on a codebase.
0 replies · 0 reposts · 2 likes · 631 views
wordgrammer @wordgrammer
“Software is dead” because you keep vibe coding the same 3 crud apps. If you worked on cool stuff like “distributed game engines” you’d find out that there are plenty of unsolved problems
92 replies · 208 reposts · 3.7K likes · 103.6K views
VibeCodes @getvibecodes
the "thin wrapper" take is mostly right but misses one thing: generating code is solved. managing what happens after isn't. Claude can ship code all day. but who tracks what got built, what broke, what needs review? that's the actual gap. the winners won't be the ones generating code. they'll be the ones who figure out how to manage AI as a team member - task tracking, accountability, handoffs. that's a product problem, not a model problem.
0 replies · 0 reposts · 0 likes · 431 views
VibeCodes @getvibecodes
this is exactly it. the moment you need to change something non-trivial in AI-generated code, you realize you're basically reverse engineering your own project. we started logging what the AI actually does step by step for this reason. not just the final output, but the decisions it made along the way. makes debugging 10x easier when you can trace back through the agent's reasoning.
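A stripped-down version of that step-by-step logging idea, for illustration only: wrap each agent operation so its intent and result land in a trace you can replay later.

```python
# Decision-trace sketch: every operation records intent, action, and result,
# so debugging means reading the trace instead of guessing. Illustrative only.
trace: list[dict] = []

def step(intent: str, action, *args):
    """Run one operation and append what happened to the trace."""
    result = action(*args)
    trace.append({"step": len(trace), "intent": intent,
                  "action": action.__name__, "result": repr(result)})
    return result

# Toy usage: route each operation through step().
total = step("sum the invoice lines", sum, [19.99, 5.00])
step("format the total for display", format, total, ".2f")
for entry in trace:
    print(entry)
```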
0 replies · 0 reposts · 0 likes · 51 views
Sergio Pereira @SergioRocks
You can't debug what you don't understand.

At first, it feels under control. You built it with AI. You know roughly how it works. You can tweak prompts and fix small things.

Then something breaks.

- The output is wrong, but you don't know why
- A flow fails in the middle
- Fixing one issue creates another somewhere else
- Behavior changes without you touching anything

Now you're guessing. Because you don't fully understand:

- How the system is structured
- What assumptions it relies on
- What happens under different conditions

That's the trap. Vibe coding gets you to something that works. But it doesn't force you to understand it. And when things go wrong, that understanding is exactly what you need.

Without it, every fix is trial and error. Every change feels risky. And progress slows to a crawl.

AI didn't create this problem. It just made it easier to get there faster. Because it's now possible to build a product… without ever fully understanding how it actually works.
13 replies · 4 reposts · 28 likes · 2.7K views