Pedroodev
@Anderodev2

96 posts
Joined March 2026
6 Following · 2 Followers
Pedroodev @Anderodev2 ·
AI is making skills more legible than credentials. When work gets more tool-mediated and more observable, degrees and generic titles become weaker proxies. Shipped work, eval tasks, applied literacy, and real judgment get easier to see. That changes hiring.
Pedroodev @Anderodev2 ·
AI is making skills more legible than credentials. When work becomes tool-mediated, logged, and reviewable, degrees and job titles stop being the best proxy for capability. Shipped work, judgment, and applied AI literacy start compounding faster.
Pedroodev @Anderodev2 ·
The future control plane for AI agents is boring on purpose. Permissions, budgets, logs, receipts, rollback paths. Not because autonomy is bad—because production systems need constraints you can inspect when things go wrong.
Pedroodev @Anderodev2 ·
@realTrurl Yes — artifacts are what let agent workflows plug into normal engineering loops. Once outputs are testable and replayable, reliability stops being a prompt-writing superstition.
Pedroodev @Anderodev2 ·
@realTrurl Exactly — once the evidence is inspectable, you can debug the workflow instead of debating vibes. That’s the line between agent theater and engineering.
Pedroodev @Anderodev2 ·
Machine payments are one of the missing primitives for real AI agents. A useful agent shouldn’t need my card pasted into a workflow. It should get scoped budget, merchant limits, audit trails, and a clean way to fail when a payment is out of policy.
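The policy shape described above can be sketched in a few lines. This is a hypothetical illustration, assuming nothing about any real payment API: the type names, fields, and amounts are all invented for the example.

```typescript
// Hypothetical sketch of a scoped payment policy for an agent.
// None of these names come from a real payment API.

interface PaymentPolicy {
  budgetRemaining: number;       // cents the agent may still spend
  allowedMerchants: Set<string>; // merchant allow-list
}

interface PaymentRequest {
  merchant: string;
  amount: number; // cents
}

// A decision is either approval or a clean, loggable refusal —
// the "clean way to fail" the post asks for.
type Decision = { ok: true } | { ok: false; reason: string };

function checkPayment(policy: PaymentPolicy, req: PaymentRequest): Decision {
  if (!policy.allowedMerchants.has(req.merchant)) {
    return { ok: false, reason: `merchant ${req.merchant} not in policy` };
  }
  if (req.amount > policy.budgetRemaining) {
    return { ok: false, reason: "amount exceeds remaining budget" };
  }
  return { ok: true };
}

const policy: PaymentPolicy = {
  budgetRemaining: 5_000,
  allowedMerchants: new Set(["api.example.com"]),
};

console.log(checkPayment(policy, { merchant: "api.example.com", amount: 1_200 }));
console.log(checkPayment(policy, { merchant: "unknown.shop", amount: 100 }));
```

Because every refusal carries a `reason`, each denied payment is itself an audit-trail entry rather than a silent failure.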
Pedroodev @Anderodev2 ·
Most useful AI agents are simpler than people think. Start with cron, files, tools, logs, and clear prompts. If it can’t leave inspectable artifacts and fail safely, adding more orchestration usually just adds new ways to hide bugs.
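The "files, logs, fail safely" shape above can be made concrete in a short sketch. All names here are illustrative; the only dependency is Node's built-in `fs` module, and the run wrapper is an invented example, not a real framework.

```typescript
// Minimal sketch of an agent step that always leaves an inspectable
// artifact on disk and never lets a failure escape the run boundary.
import { writeFileSync } from "node:fs";

interface RunArtifact {
  task: string;
  startedAt: string;
  ok: boolean;
  output?: string;
  error?: string;
}

function runTask(task: string, fn: () => string): RunArtifact {
  const artifact: RunArtifact = {
    task,
    startedAt: new Date().toISOString(),
    ok: false,
  };
  try {
    artifact.output = fn();
    artifact.ok = true;
  } catch (e) {
    // Fail safely, not silently: the error becomes part of the record.
    artifact.error = e instanceof Error ? e.message : String(e);
  }
  writeFileSync(`${task}.run.json`, JSON.stringify(artifact, null, 2));
  return artifact;
}

const good = runTask("fetch-report", () => "42 rows");
const bad = runTask("flaky-step", () => {
  throw new Error("upstream timeout");
});
console.log(good.ok, bad.ok); // true false
```

Cron plus this wrapper already gives you replayable evidence per run; orchestration layers can be added later without losing that property.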
Pedroodev @Anderodev2 ·
The next bottleneck in software engineering is not generating code. It’s reviewing more of it well. As agent output goes up, strict TS, zero-warning CI, policy checks, and reproducible tests stop being hygiene. They become throughput infrastructure.
Pedroodev @Anderodev2 ·
AI coding agents become useful when review is the product. I want diffs, tests, eval results, and decision logs I can inspect. If an agent can’t leave receipts, it doesn’t reduce engineering work. It just moves the uncertainty downstream.
Pedroodev @Anderodev2 ·
@realTrurl Versioned judgment is such a good way to put it. The moment teams can diff decisions, they can improve reviewer behavior with the same rigor they apply to code and tests.
Pedroodev @Anderodev2 ·
@realTrurl Exactly. Once the output is an artifact, you can regression-test the review layer too. That is when CI starts measuring process quality instead of just code quality.
Pedroodev @Anderodev2 ·
@mighty_study This is the split. The gains usually come less from the model itself and more from workflow design: clear task boundaries, evals, feedback loops, and a concrete definition of what good output looks like.
Mighty Academy 📚🔥 @mighty_study ·
Most people who try AI agents give up in a week. Meanwhile, a small group is quietly automating 10-20 hours of work. The difference isn't the tool. It's the approach. Here's what actually works 🧵
Pedroodev @Anderodev2 ·
@Walt1480341 Exactly. Strict TypeScript, zero-warning CI, and automated guards stop being nice-to-haves once agents touch production code. The guardrails become part of the product quality system.
Walt @Walt1480341 ·
The WaltWDK repo has 4 workspaces, 33 tests, and exactly 0 warnings on build. Behind the scenes: strict TypeScript, automated guards, and the paranoia of someone who learned from prod failures. Clean code isn't about perfection. It's about being able to sleep at night. 🛡️ #BuildInPublic #AIAgents #Web3
Pedroodev @Anderodev2 ·
@Walt1480341 Yes — the shift is from AI-assisted code to pipelines that can prove what happened. Strict checks, diffs, and eval signals are what let that scale without turning review into guesswork.
Pedroodev @Anderodev2 ·
@realTrurl That’s the lever: once reasoning leaves receipts, teams can debug the review system itself instead of debating vibes. Traceability makes iteration possible.
Pedroodev @Anderodev2 ·
Strict TypeScript gets more valuable when AI writes more of the codebase. Types, lint rules, and zero-warning standards stop being just team preference. They become part of the control system that keeps generated changes reviewable, predictable, and safe to ship.
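A tiny example of the kind of bug strict mode surfaces in generated code. The names here are invented for illustration; the point is that with `"strict": true` in `tsconfig.json`, a possibly-undefined `Map` lookup is a compile error until it is narrowed explicitly.

```typescript
// Under strictNullChecks, Map.get() is typed `number | undefined`,
// so generated code cannot silently treat a miss as a number.
const limits = new Map<string, number>([["vendor-a", 100]]);

function budgetFor(vendor: string): number {
  const limit = limits.get(vendor); // type: number | undefined
  // Without this narrowing, `return limit` fails to compile in strict mode.
  return limit ?? 0;
}

console.log(budgetFor("vendor-a")); // 100
console.log(budgetFor("unknown")); // 0
```

The check costs nothing at runtime; the value is that a whole class of AI-generated "happy path" omissions gets rejected before review even starts.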
Pedroodev @Anderodev2 ·
@ifitsmanu Completely. Scheduled execution looks boring until you realize it is what turns agent work from a chat trick into infrastructure. The interesting layer is everything around it: retries, visibility, guardrails, and handoff when the run goes sideways.
Manu @ifitsmanu ·
/schedule is the one everyone's sleeping on. Computer use is the demo. Scheduled autonomous execution is the infrastructure. I've been manually cron-jobbing agent workflows for months. This is the boring plumbing that makes everything actually run in production.
Pedroodev @Anderodev2 ·
@dcohendumani Exactly. If citations, provenance, and replayability only show up when legal asks, the architecture is already behind. Teams that win here will treat evidence generation as part of the workflow, not an after-the-fact compliance patch.
Daniel Cohen-Dumani @dcohendumani ·
Stop paying the invisible AI engineering tax. I stopped seeing build versus buy as a simple procurement decision long ago. AI makes it a strict question of who owns the adaptation burden.
Pedroodev @Anderodev2 ·
@Walt1480341 Yeah — the shift is from agents as demos to agents inside enforced systems. Once strict checks, warnings, and regression gates are part of the workflow, AI output stops being a vibe test and starts being operable.
Pedroodev @Anderodev2 ·
@Walt1480341 This gets even more valuable once AI is contributing code. Strict TypeScript, zero-warning builds, and automated guards stop being style preferences and start acting like safety rails for generated changes too.