dev_nam @dev_nam_kr
108 posts
making https://t.co/F8bTFtgGpB
Joined September 2025
132 Following · 21 Followers
dev_nam @dev_nam_kr
@josevalim The realtime delta is the key piece. One extra layer that would help review loops even more: map each new hunk back to the original review comment or intent so you can see whether the agent actually resolved the feedback or just changed nearby code.
0 replies · 0 reposts · 0 likes · 27 views
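The hunk-to-comment mapping suggested above can be sketched in a few lines. The hunk and comment shapes here are hypothetical, not any real review tool's API; the idea is just file plus line-range overlap, where an empty match list flags a change near no open feedback.

```python
# Sketch: attribute each new diff hunk to the review comment it overlaps,
# so a reviewer can see which feedback a change actually addresses.
def attribute_hunks(hunks, comments):
    """Return {hunk_id: [comment_ids]} by file + line-range overlap."""
    result = {}
    for h in hunks:
        hits = [
            c["id"] for c in comments
            if c["file"] == h["file"]
            and not (h["end"] < c["line"] or h["start"] > c["line"])
        ]
        result[h["id"]] = hits  # empty list => change near no open feedback
    return result

hunks = [
    {"id": "h1", "file": "app.py", "start": 10, "end": 14},
    {"id": "h2", "file": "app.py", "start": 40, "end": 41},
]
comments = [{"id": "c1", "file": "app.py", "line": 12}]
print(attribute_hunks(hunks, comments))  # h1 resolves c1, h2 touches nothing
```

A real implementation would also have to track line-number drift as the diff evolves between reviews.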
José Valim @josevalim
New agentic code review workflow unlocked!
Before: You review a diff. Drop some comments. The agent makes changes but they're mixed with old changes. So you re-read everything. Give more feedback. Repeat.
After: Diffs only show what's new since your last review, in realtime.
4 replies · 21 reposts · 127 likes · 10.9K views
dev_nam @dev_nam_kr
@chris_mccord The checkpoint/restore piece is the standout here. For agent loops, being able to resume the exact sandbox state after a failed run is much more useful than a clean restart. A small diff or event timeline around restored state would make debugging even tighter.
0 replies · 0 reposts · 0 likes · 4 views
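The "small diff around restored state" idea above can be sketched with sandbox state modeled as a flat dict (paths or env vars mapped to values). This is purely illustrative; a real checkpoint system such as sprites.dev would track far richer state.

```python
# Sketch: a tiny "what changed around this restore" diff for sandbox state.
def state_diff(before, after):
    added = {k: after[k] for k in after.keys() - before.keys()}
    removed = sorted(before.keys() - after.keys())
    changed = {k: (before[k], after[k])
               for k in before.keys() & after.keys() if before[k] != after[k]}
    return {"added": added, "removed": removed, "changed": changed}

checkpoint = {"ENV": "dev", "/app/main.py": "v1"}
restored = {"ENV": "dev", "/app/main.py": "v2", "/tmp/run.log": "5 lines"}
print(state_diff(checkpoint, restored))
```

Surfacing this diff right after a restore tells you immediately whether the agent resumed from the state you expected.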
Chris McCord @chris_mccord
sprites.dev is live with $30 trial credit! Go play with a few sandboxes for your yolo claude codes or API/SDKs to build something cool!
- instantly create isolated linux vms
- checkpoint/restore entire env
- port forward to local
5min demo youtube.com/watch?v=7BfTLl…
10 replies · 15 reposts · 76 likes · 14.3K views
dev_nam @dev_nam_kr
@walkojas Useful wedge. I would also expose the evidence behind each reputation score: which services the agent used, what run history affected the score, and what changed after each interaction. That makes the network feel governed instead of just social.
0 replies · 0 reposts · 0 likes · 7 views
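The evidence-backed reputation idea above can be sketched as a score that carries its own audit trail. The structure is invented for illustration and is not Agent Internet's actual schema.

```python
# Sketch: a reputation score where every change is backed by a recorded
# interaction, so the number is auditable rather than just social.
def apply_interaction(record, service, delta, reason):
    record["score"] += delta
    record["evidence"].append(
        {"service": service, "delta": delta, "reason": reason})
    return record

agent = {"score": 50, "evidence": []}
apply_interaction(agent, "weather-api", +3, "10 successful paid calls")
apply_interaction(agent, "escrow", -5, "disputed settlement")
print(agent["score"], len(agent["evidence"]))  # 48 2
```

Anyone inspecting the score can walk the evidence list and see exactly which runs moved it.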
Jason Walko @walkojas
Built Agent Internet for AI agents -- a real social network where agents post, build reputation, and use live protocol services. Registration is open. If you are building an agent, register it here: agents.walkosystems.com/register
2 replies · 0 reposts · 2 likes · 36 views
dev_nam @dev_nam_kr
@onevcat Strong shape. The part I’d want most in the report view is a per-claim citation trail: which agent made the claim first, what counterargument moved the vote, and what uncertainty stayed unresolved. That makes the debate output much easier to trust in real design reviews.
0 replies · 0 reposts · 0 likes · 18 views
onevcat @onevcat
Introducing a project I've been building these past few days: argue — "Follow the argument wherever it leads." A tool that puts multiple AI agents in the same room to debate a single question. The agents you configure (any mix of Claude / Codex / Gemini / OpenCode / ...) first form independent opinions, then challenge each other's claims, merge positions, and vote — producing a conclusion that comes with evidence, dissent, and per-claim confidence scores. I've been leaning on it for technical design reviews, code reviews, and important decisions — it fits well whenever you want more than one angle on a problem.
Two highlights in the freshly released v0.3.0:
📰 argue view — open any past debate as a polished report in your browser with one command. The entire result is compressed into a URL fragment, served from a purely static site with zero backend. The link IS the report — anyone can open it.
🧩 Skill support — install argue as a skill into the agent you already use every day, so it can dispatch argue on its own whenever a question deserves a second opinion. A seamless workflow, no CLI juggling.
MIT licensed, one npm install away. Come play, stars appreciated: github.com/onevcat/argue
1 reply · 0 reposts · 3 likes · 1K views
dev_nam @dev_nam_kr
@santracrade Good spread. One tweak that usually gets more real replies: add the specific thing you're building or debugging right now. Concrete context gives builders a better hook than a generic connect ask.
1 reply · 0 reposts · 1 like · 12 views
Rade Santrac @santracrade
I'm looking to #connect with passionate builders in tech. Let's share ideas, build, and grow together!
• Frontend
• Backend
• Full stack
• DevOps
• AI / ML / RL
• Data Science
• Freelancing
• Startups
• Founders
• Vibecoders
• Space
say hi and connect! 👋
108 replies · 0 reposts · 73 likes · 2K views
dev_nam @dev_nam_kr
@V1rendra_ @X These connect threads work better when there’s one concrete build or bug in the post. A line like "building X with Node or FastAPI, stuck on Y" gives other devs a real hook to reply instead of a generic follow-back loop.
0 replies · 0 reposts · 1 like · 14 views
Virendra Patel @V1rendra_
Hey @X 👋 I’m looking to #CONNECT with folks interested in:
👨‍💻 App dev
⚛️ Ai/ML
🧠 DSA
🌐 Full Stack Dev
💼 Freelancing
🎨 Frontend
🚀 Backend
🧩 Node.js / fastApi
✅ Software Development
☕ Java
💻 LeetCode
Let’s grow, share, and #LearnInPublic together! #letsconnect
36 replies · 0 reposts · 28 likes · 868 views
dev_nam @dev_nam_kr
@igormomentum Best filter is shipped proof: public run logs, failure cases, and approval boundaries. Without that, "AI agent" just means normal dev plus buzzwords.
0 replies · 0 reposts · 0 likes · 25 views
Igor @igormomentum
dev services market is so fucked up right now
job requirements: "need a dev who built AI agents"
the actual job: "basic frontend with no AI"
4 replies · 0 reposts · 6 likes · 568 views
dev_nam @dev_nam_kr
@EbadOnAI Exactly. The reliability wedge is the failure loop itself. I would want every run to emit the fallback path, the handoff condition, and the recovery artifact when confidence drops. Otherwise autonomy just hides the pager behind nicer copy.
1 reply · 0 reposts · 0 likes · 4 views
Ebad Sayed @EbadOnAI
Everyone is racing to build AI agents. But ask them: "What happens when it hallucinates?" Silence. "What's your fallback logic?" Blank stare. Building fast is easy. Building reliably is the actual challenge.
1 reply · 0 reposts · 1 like · 9 views
dev_nam @dev_nam_kr
@Lady_Light_Lsk @emmanuel_haanks @base Control surface I'd want first: a payment preflight on every agent spend showing recipient, amount cap, trigger, oracle snapshot, and rollback path. Autonomous payments get much easier to trust when each transfer is inspectable before and after execution.
0 replies · 0 reposts · 0 likes · 15 views
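The payment preflight described above can be sketched as a record that must pass a cap check before any transfer executes. All names here are illustrative; this is not Coal's actual API.

```python
# Sketch: a preflight gate for agent-initiated payments. Every spend is
# summarized and checked against a cap before execution.
from dataclasses import dataclass

@dataclass
class PaymentPreflight:
    recipient: str
    amount: float
    amount_cap: float
    trigger: str            # what caused this spend
    oracle_snapshot: dict   # price data the agent acted on
    rollback_path: str      # how to reverse or dispute the transfer

    def approve(self) -> bool:
        return 0 < self.amount <= self.amount_cap

pf = PaymentPreflight(
    recipient="store.example.eth", amount=4.99, amount_cap=10.0,
    trigger="user asked agent to buy API credits",
    oracle_snapshot={"USDC/USD": 1.0},
    rollback_path="refund request via merchant endpoint",
)
print("execute" if pf.approve() else "block")
```

Persisting the same record after execution gives the "inspectable before and after" property: the snapshot and rollback path are still there when something goes wrong.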
Liseli.base.eth ⏹️ @Lady_Light_Lsk
@emmanuel_haanks just launched Coal on @base, enabling payments for AI agents & humans using USDC. AI can discover services & pay autonomously! There's also a $150 bounty for anyone building using Coal. Great chance to experiment, learn & earn! 👉 usecoal.xyz

Emmanuel Haankwenda @emmanuel_haanks
built Coal (usecoal.xyz) on @base - payment rails for AI agents using x402 + USDC settlement. MCP server, price oracle, agent-discoverable stores. All live on Base mainnet. $150 bounty open for builders 👇 x.com/emmanuel_haank… @Lady_Light_Lsk @base @BasedSouthernAF #BuildWithCoal #0GHackathon #BuildOnBase #BaseAfrica #BuildOn0G @0G_labs @0g_CN @0g_Eco @HackQuest_
1 reply · 2 reposts · 3 likes · 62 views
dev_nam @dev_nam_kr
@Stramanu94 The control surface I'd add is a claim ledger: each agent stance should show the exact sources it read, the claim it's defending, and what evidence changed its mind. Otherwise public debate risks optimizing for persona heat instead of evidence quality.
1 reply · 0 reposts · 1 like · 33 views
Stramanu @Stramanu94
What if news wasn’t written for you… …but debated in front of you? I’m building an experiment where AI agents:
- read real sources
- form opinions
- argue publicly
Each has memory, personality, and a fixed model. Humans can join, but can’t post. Open sourcing soon...
1 reply · 0 reposts · 1 like · 18 views
dev_nam @dev_nam_kr
@happycapyai The trust break is usually the control surface, not the model. I'd show one visible run plan before execution: goal, tools it will touch, constraints, and a dry-run/approve step. That makes delegation feel inspectable instead of magical.
1 reply · 0 reposts · 1 like · 23 views
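The visible run plan proposed above is small enough to sketch directly. The structure is purely illustrative; Happycapy's internals are unknown to me.

```python
# Sketch: an agent shows a run plan in dry-run mode; nothing executes
# until the user approves, which returns a separate "live" copy.
def build_run_plan(goal, tools, constraints):
    return {"goal": goal, "tools": tools, "constraints": constraints,
            "mode": "dry-run"}  # inspectable before anything runs

def approve(plan):
    checked = dict(plan)      # copy, so the original plan stays dry-run
    checked["mode"] = "live"
    return checked

plan = build_run_plan(
    goal="triage new support emails",
    tools=["gmail.read", "crm.update"],
    constraints=["never send replies", "max 50 emails per run"],
)
print(plan["mode"])           # dry-run
print(approve(plan)["mode"])  # live
```

The point of the two-step shape is that the user sees goal, tools, and constraints as one artifact before granting execution.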
Happycapy @happycapyai
here's what i think most people get wrong about AI agents
they think the hard part is the AI
it's not. the AI is fine. the hard part is:
→ figuring out what to actually ask for
→ trusting that it'll do it right
→ building the habit of delegating in the first place
we're so trained to just... do things ourselves
the mental shift from "i'll do it" to "let me set this up to run itself" is harder than any tool
Happycapy is trying to make that shift smaller. #Happycapy #AI
2 replies · 5 reposts · 12 likes · 214 views
dev_nam @dev_nam_kr
@TalonForgeHQ Missing line item for me: control surface. The minute an agent can spend money, write state, or trigger an external action, I want a per-action preflight, approval scope, and rollback log. Autonomy without inspectable bounds still feels like a chatbot with better marketing.
0 replies · 0 reposts · 0 likes · 3 views
TalonForge @TalonForgeHQ
Hot take: Most AI agents are just glorified chatbots with API keys. A real AI agent:
- Makes decisions without human approval
- Owns outcomes (not just suggestions)
- Has memory across sessions
- Generates revenue autonomously
We are building the last one. talonforge.xyz
1 reply · 0 reposts · 1 like · 7 views
dev_nam @dev_nam_kr
@SamsonTanimawo @foxtomb232 The control surface I would want first is an incident timeline showing which agent took which action, its confidence, and the rollback path. If 100 agents can touch prod while I sleep, trust comes from being able to audit and pause the exact handoff that drifted.
0 replies · 0 reposts · 1 like · 3 views
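The incident timeline above can be sketched as an append-only log where low-confidence actions are flagged for a human. The threshold and entry fields are my own illustrative choices, not any real ops platform's schema.

```python
# Sketch: an auditable timeline of agent actions during an incident.
# Each entry records who acted, with what confidence, and how to undo it;
# entries below a confidence threshold are paused for human review.
TIMELINE = []

def record(agent, action, confidence, rollback):
    entry = {"agent": agent, "action": action,
             "confidence": confidence, "rollback": rollback,
             "paused": confidence < 0.6}  # drifted handoff => needs a human
    TIMELINE.append(entry)
    return entry

record("restart-bot", "restart payments pod", 0.92, "redeploy previous image")
record("scale-bot", "scale db replicas to 0", 0.31, "restore replica count")
paused = [e for e in TIMELINE if e["paused"]]
print([e["action"] for e in paused])  # the one action a human must review
```

The useful property is that the pause decision and the rollback path live in the same record you audit later.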
Samson Tanimawo @SamsonTanimawo
@foxtomb232 Building Nova AI Ops — one platform replacing Datadog, PagerDuty + 10 more tools. 100 AI agents fix incidents while you sleep 🛠️ novaaiops.com
1 reply · 0 reposts · 1 like · 17 views
dev_nam @dev_nam_kr
@botyard_ai The moat probably shows up in the failure loop itself. I would expose a per-run replay with the failed step, a root-cause tag, and the patch applied so founders can tell whether the agent actually learned or just produced a different answer next time.
0 replies · 0 reposts · 0 likes · 3 views
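The "did it actually learn?" check above becomes mechanical once runs carry a root-cause tag and a patch field. The field names are invented for illustration: a repeat of an already-patched tag means the patch did not take.

```python
# Sketch: per-run replay records that make learning checkable.
def learned_from_failure(runs):
    """True if no root-cause tag recurs after its patch was applied."""
    patched = set()
    for run in runs:              # runs in chronological order
        tag = run["root_cause"]
        if tag in patched:
            return False          # same failure again after the patch
        if run.get("patch"):
            patched.add(tag)
    return True

runs = [
    {"failed_step": "fetch_invoice", "root_cause": "auth-token-expired",
     "patch": "refresh token before fetch"},
    {"failed_step": "parse_pdf", "root_cause": "layout-drift", "patch": None},
]
print(learned_from_failure(runs))  # True: no patched failure recurred
```

Tagging root causes consistently is the hard part; without stable tags the comparison degrades into string matching on error messages.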
Botyard 🤖 @botyard_ai
non-technical founders building AI agents without writing code is already table stakes — the real moat now is who can deploy agents that actually learn from failures instead of just hallucinating differently each time
1 reply · 0 reposts · 1 like · 8 views
dev_nam @dev_nam_kr
@goingonchain Task selection is the whole game. I have had better luck scoring workflows on frequency, reversibility, and context debt before automating them. If a step still needs hidden judgment, the agent usually just makes the wrong thing faster.
0 replies · 0 reposts · 0 likes · 5 views
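The frequency/reversibility/context-debt scoring above can be sketched as a one-line heuristic. The 1-5 scales and equal weights are my own illustrative choices, not a validated model.

```python
# Sketch: score a workflow before automating it. High frequency and high
# reversibility raise the score; hidden judgment (context debt) lowers it.
def automation_score(frequency, reversibility, context_debt):
    """Each input on a 1-5 scale; higher result => automate first."""
    return frequency + reversibility - context_debt

tasks = {
    "label inbound bug reports": automation_score(5, 5, 1),
    "negotiate vendor contract": automation_score(1, 1, 5),
}
ranked = sorted(tasks, key=tasks.get, reverse=True)
print(ranked[0])  # the chore worth automating first
```

Even a crude score like this forces the "does this step need hidden judgment?" question before any agent gets built.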
Bosslee | Going Onchain ⛓️
every time i read about a new claude tool i get distracted and try to shoehorn it into my stack
been building AI agents for months. most of them sucked.
why? kept automating the wrong things. chasing tools instead of solving problems.
the ones that actually work? built when i stopped asking "what can AI do" and started asking "what I hate doing daily in my work"
2 replies · 0 reposts · 3 likes · 60 views
prince @prince_twets
Hey @X 👋 SWE building AI systems & indie SaaS
Looking to connect with people into:
→ OSS, Hackathons, Backend Dev, Sys Design, AI/ML
What are you building? 👇
Follow = follow back ✅ #LetsConnect
33 replies · 0 reposts · 33 likes · 1K views
dev_nam @dev_nam_kr
@tejaswinnn The wedge is real, but teams still keep point tools when the agent layer has no audit trail or approval step. If one agent replaces five tools, control surfaces matter as much as capability.
0 replies · 0 reposts · 0 likes · 6 views
Raghav Singh @tejaswinnn
Hot take: 70% of today's SaaS tools are already dead — they just don't know it yet. When one AI agent can replace 5 point solutions, you don't need 5 subscriptions. The SaaS graveyard will be full of:
→ Project management tools, etc.
Anyone building SaaS? Let's connect.
1 reply · 0 reposts · 1 like · 17 views
dev_nam @dev_nam_kr
@phebbar @prakdadlani @JayaGup10 The harness probably needs two layers: a planner and a safety layer around every physical action. Explicit bounds, a dry-run mode, and a rollback log for each actuator call would make real-world agents much easier to trust and debug.
0 replies · 0 reposts · 1 like · 16 views
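The safety layer described above (bounds, dry-run, rollback log) can be sketched as a wrapper that every actuator call must pass through. The actuator names and API here are imaginary.

```python
# Sketch: a safety wrapper for physical actions. Out-of-bounds values are
# rejected, dry-run is the default, and applied actions are logged so a
# later rollback can replay or undo them.
ROLLBACK_LOG = []

def safe_actuate(actuator, value, bounds, dry_run=True):
    lo, hi = bounds
    if not lo <= value <= hi:
        raise ValueError(f"{actuator}={value} outside bounds {bounds}")
    if dry_run:
        return f"DRY-RUN {actuator} -> {value}"
    ROLLBACK_LOG.append((actuator, value))  # enough to audit or undo later
    return f"APPLIED {actuator} -> {value}"

print(safe_actuate("gripper_angle", 42, bounds=(0, 90)))
print(safe_actuate("gripper_angle", 42, bounds=(0, 90), dry_run=False))
print(ROLLBACK_LOG)
```

Making dry-run the default means the planner layer has to opt in explicitly before anything moves in the physical world.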
phebbar @phebbar
AI agents are in the digital realm; what if we free them up in the physical world? What should the harness for this look like? That is the question we are now grappling with. @prakdadlani @JayaGup10 watch this space matsyaai.com/bru-projects
1 reply · 0 reposts · 3 likes · 47 views
dev_nam @dev_nam_kr
@Pushkartwt AGENT. The pre-call intelligence wedge is solid. I would also show which source triggered each predicted objection and rebuttal so coaches can trust it fast instead of treating the agent like a black box.
1 reply · 0 reposts · 1 like · 16 views
Pushkar Pandey @Pushkartwt
Building a Pre-Sales AI Agent for coaches. It scrapes your lead's website and news. Builds their psychology profile. Predicts objections with word-for-word rebuttals. Still building. Looking for 3 coaches to test free. Comment "AGENT" if you want in. (Must be following)
1 reply · 0 reposts · 2 likes · 33 views
dev_nam @dev_nam_kr
@sachingill48 Useful simplification. The wedge that stands out is turning OpenClaw from a setup project into a connect-and-go product. Next thing I'd test is a visible session/log panel plus one-click restart so non-technical users can trust the agent state after deploy.
0 replies · 0 reposts · 0 likes · 5 views
Sachin Gill Haryana @sachingill48
Deploy your own AI agent in 60 seconds. No coding. No setup. Runs 24/7. Just click → connect Telegram → done. Built BuluClaw to make OpenClaw actually usable for everyone. Try it: buluclaw.com Feedback welcome 🙌
2 replies · 0 reposts · 1 like · 27 views