AntiCode Guy

5.9K posts

@AntiCodeGuy

IT ↔️ Business translator | Building IT systems for businesses at https://t.co/hZc7YdY2QD Content: AI, automation, and development for business

Thailand · Joined November 2023
85 Following · 363 Followers
AntiCode Guy @AntiCodeGuy ·
A few months back, while designing the architecture for a potential system and planning the stack, I gave the task to three different models - Claude Opus, ChatGPT, and Grok, all in their top reasoning modes. I got three different recommendations that overlapped on individual modules, but each came with an impressive set of arguments for its proposed options. Same input, different outputs.

How do you choose between them? Present each model with the other two versions and ask it to critique all three - including its own - based on the consolidated reasoning from the others. After a couple of iterations of this kind of triage, a consensus emerges somewhere in the middle.

Fast forward a few months, and I'm kicking off development on a new system. PRDs are ready, functional and non-functional requirements are gathered, constraints are defined - time to start planning. Long hours of interviews, self-checks, and spec analysis by independent agents, and the final architecture and stack start to take shape.

As always, with a healthy dose of skepticism toward any single model's decisions, I load the same set of input documents into two other models - and the output is... practically identical stack recommendations. They diverge on a few individual elements, but that's genuinely 10% of the total volume. Consensus reached without any cross-analysis between them.

What is this - identical training data? Or is it actually the best choice for my case, and the models have gotten smart enough that they all arrived at the same conclusion independently? Have you noticed this?
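The cross-critique loop above can be sketched in a few lines. This is a minimal sketch, not a working client: `ask()` is a hypothetical stand-in for whatever API you use to call each model, and the model names are placeholders.

```python
# Sketch of the cross-critique ("triage") loop: each round, every model sees
# all current answers - its own included - and revises toward a consensus.
def ask(model: str, prompt: str) -> str:
    # Placeholder: in real use this would call the model's API.
    return f"[{model}'s answer to: {prompt[:40]}...]"

def cross_critique(task: str, models: list[str], rounds: int = 2) -> dict[str, str]:
    """Collect independent answers, then iterate rounds of mutual critique."""
    answers = {m: ask(m, task) for m in models}
    for _ in range(rounds):
        combined = "\n\n".join(f"{m}: {a}" for m, a in answers.items())
        critique_prompt = (
            f"Task: {task}\n\nProposals from all models:\n{combined}\n\n"
            "Critique all proposals, including your own, and output a revised recommendation."
        )
        answers = {m: ask(m, critique_prompt) for m in models}
    return answers

final = cross_critique("Pick a stack for the system", ["claude-opus", "chatgpt", "grok"])
```

In practice two rounds were enough for the recommendations to converge somewhere in the middle.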
AntiCode Guy @AntiCodeGuy ·
I Only Pick Tasks AI Can Do For Me Now
AntiCode Guy @AntiCodeGuy ·
Back when I had a regular job, I often had to explain my ideas, plans, concepts, tasks, and all the other joys of corporate life to colleagues and management. Whiteboards with markers or prepared presentations were the usual go-to - because obviously nobody's going to read a dry document, and making one isn't exactly a pleasure either.

These days I present that kind of thing to colleagues as HTML pages, which let you lay out information in a way that actually lands visually. I'm not much of a designer myself, so I hand that part off to the latest AI models, and they nail it every time.

There's not much prep work involved either - you can just throw a pile of inputs at the agent: various documents, meeting transcripts, your own raw thoughts, and a voice note of what you want the final presentation to actually say. The agent structures all that chaos on its own, picks out only what matters, and synthesizes it into a clean, coherent visual layout. If you want, you can even throw in interactive elements that affect the output - sliders that adjust financial charts, buttons that change the result, whatever comes to mind.

The key thing here, obviously, is time saved. I used to spend a solid third of a workday on this kind of thing, at minimum. Now it's genuinely a 10-minute conversation with an agent.
AntiCode Guy @AntiCodeGuy ·
How Printing Press saves tokens, or your own personal sort of CLI

Seems like everyone's gotten used to MCP by now, except there's one catch: after a (pretty short) while the number of connected MCPs starts growing, and using them in every AI agent session gets expensive - because it means reading through bulky documents that, naturally, eat tokens.

When I'm working with a codebase, for example, I sync everything with a task manager, which requires hooking up the AI agent to it. With Linear that's an MCP server and a plugin; with Plane it's direct API access through Python scripts (AI agents love writing those, what can you do). The API ends up way cheaper - a clean call to endpoints that return exactly the data you need in a structured format.

What if you wrapped those endpoints in a CLI and called them directly from there? That's already how a ton of tools work. Just install a CLI and enjoy the token savings. But what if there's no CLI for your tool?

That's where Printing Press comes in - a ready-made library and an algorithm for "printing" CLI tooling. Obviously we're not doing any of this by hand - we just point a coding agent at the PP repo, ask it to tell us everything we need to know, and have it install the right CLIs from a huge and constantly growing library. And if the tool you need isn't there, like with Plane, we "print" it ourselves with literally one command from Printing Press.

Sure, there'll be some fiddling involved, but in the end it's worth it - no more bloated MCPs for everyday simple tasks. Just lean, fit CLIs.
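As a sketch of what "wrap the endpoint in a CLI" means in practice, here is a thin command that builds one clean API request instead of loading a bulky MCP document. The endpoint path, host, env-var names, and header are illustrative assumptions, not any tracker's documented API.

```python
# A minimal "printed" CLI: one precise endpoint call, structured JSON back,
# zero MCP overhead. All URLs and variable names here are placeholders.
import argparse
import json
import os
import urllib.request

def build_request(base_url: str, workspace: str, project: str, api_key: str) -> urllib.request.Request:
    # Compose the single, token-cheap API call the CLI will make.
    url = f"{base_url}/api/v1/workspaces/{workspace}/projects/{project}/issues/"
    return urllib.request.Request(url, headers={"X-API-Key": api_key})

def main(argv=None):
    parser = argparse.ArgumentParser(description="List a project's issues via a direct API call")
    parser.add_argument("workspace")
    parser.add_argument("project")
    args = parser.parse_args(argv)
    req = build_request(
        os.environ.get("TRACKER_URL", "https://tracker.example.com"),  # placeholder host
        args.workspace,
        args.project,
        os.environ.get("TRACKER_API_KEY", ""),
    )
    with urllib.request.urlopen(req) as resp:  # the only network call
        print(json.dumps(json.load(resp), indent=2))

# Usage (hypothetical): python issues_cli.py my-workspace my-project
```

The point is the shape, not the specifics: exact endpoint, exact fields, no document-sized tool descriptions riding along in every session.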
AntiCode Guy @AntiCodeGuy ·
From No-Code to Code Again
AntiCode Guy @AntiCodeGuy ·
Somewhere along the way, without really noticing, I switched into work mode on weekends - even though I used to rely on other activities to give my brain and attention a reset. Without one, you don't start the next week with a fresh head, you start it with a loaded one, and your decision-making seriously suffers.

Maybe it'll wear off after a while, but AI brought such a massive influx of fresh air that I got a second wind - even a third - which lets me go at development with huge enthusiasm and tackle complex problems that used to seem boring and not worth the time.

Now my work week ends together with my weekly Claude Code and Codex limits, which I scrape completely clean. And I wait impatiently for Monday, when they reset, so I can load the next task into my AI agents.

This, in my opinion, is exactly what a person's work should look like - it should make you impatient to act, with enthusiasm and energy that isn't drained by the process but replenished by it. What is this, ikigAI?
AntiCode Guy @AntiCodeGuy ·
A few months ago I wrote about how I was still working with no-code builders the old-fashioned way - manually clicking buttons and tweaking block parameters, since the architecture of these apps is closed off and you can't, for instance, reshape a workflow algorithm via API. But after installing Playwright, it hit me that the time had come to delegate this too - because today I already trust the latest models with responsible work, and they perform really well, making fewer mistakes than I do.

I started with a backend application built on Directual, where I needed to rework around a dozen workflow scenarios, clear out the call queues in the database, and in doing so reduce the server load - which had been going through the roof lately due to some poorly designed processes. The work was handled by Codex, which naturally first asked me to authenticate in the app, verified its access, and reported back that it was ready to begin.

I started carefully with a single scenario, reviewed everything it had done - found not a single error. Then I asked it to finish the job and optimize all the remaining scenarios that needed attention. Twenty minutes later I received a report on the completed work, reviewed it, and broke into yet another blissful smile of satisfaction - one I still can't seem to get used to, even though it lights up my face every single day.
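The flow Codex followed - ask the human to authenticate, verify access, then work autonomously - maps onto a common Playwright pattern: pause for a manual login once, persist the session, and reuse it headlessly afterwards. A rough sketch; the URLs and scenario names are made-up placeholders, and Playwright is imported lazily inside the functions so the helpers above them stay self-contained.

```python
# Sketch of the authenticate-once, automate-many Playwright pattern.
STATE_FILE = "auth_state.json"

def scenario_checklist(names: list[str]) -> list[str]:
    # The task list handed to the agent, one reworked workflow at a time.
    return [f"rework and optimize workflow scenario: {n}" for n in names]

def login_and_save_state(builder_url: str) -> None:
    # Step 1: a human completes the login form once; the session is persisted.
    from playwright.sync_api import sync_playwright  # lazy import
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=False)
        page = browser.new_page()
        page.goto(builder_url)
        page.pause()  # hand control to the human for authentication
        page.context.storage_state(path=STATE_FILE)  # save cookies/session
        browser.close()

def open_authenticated(scenario_url: str) -> None:
    # Step 2: the agent reuses the saved session and works headlessly.
    from playwright.sync_api import sync_playwright
    with sync_playwright() as p:
        browser = p.chromium.launch()
        context = browser.new_context(storage_state=STATE_FILE)
        page = context.new_page()
        page.goto(scenario_url)
        # ...autonomous clicks and form edits would go here...
        browser.close()
```

The storage-state handoff is what lets the agent run "like a fish in water" without the human re-entering credentials for every scenario.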
AntiCode Guy @AntiCodeGuy ·
Yesterday I "discovered" a new way to work with coding agents most productively. Of course, I already knew that agents have long been capable of spinning up other agents on their own - I've written about it several times. But during development I still want to have some degree of control over the process and what's happening, since AI does tend to forget certain things and drift away from the bigger goal. Because despite a large context window, it's still not quite enough to keep the "final picture" in mind - though more than enough to focus on a specific piece of code.

But at some point, once the core foundation of the system being built is already in place and it's hard to go off the rails, I started noticing that most of my responses to the agent's questions were something along the lines of "Do whatever's architecturally correct." And the oversight process was effectively turning into mindless routine - the kind that makes sense to just delegate away.

So I started appending to the next backlog task prompt a request to spin up an independent agent on the senior Opus model to handle the task, monitor its execution, make sure the branch gets pushed for review, revisions get made, and all project rules are followed. At any decision point requiring a judgment call, act as an architect - prioritize architectural correctness, never drift toward hacks and patches just to pass the tests.

This way I stopped switching my attention back to the development process every few minutes. Instead, 20-30 minutes later I get a report on the work done, a closed ticket, and readiness to pick up the next one. And the next one can be picked up in the same session - because monitoring coding agents this way from the main session doesn't burn many tokens in it.
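That standing instruction can live as a small template appended to every backlog task prompt. The wording below is condensed from the post; the function name and model label are just illustrative.

```python
# A reusable "delegate and supervise" note appended to each backlog task.
DELEGATION_NOTE = """\
Spin up an independent agent on the senior model ({model}) to handle this task.
Monitor its execution: make sure the branch gets pushed for review, revisions
get made, and all project rules are followed. At any decision point requiring
a judgment call, act as an architect - prioritize architectural correctness,
never drift toward hacks and patches just to pass the tests.
"""

def with_delegation(task_prompt: str, model: str = "opus") -> str:
    # Append the standing instruction to the next backlog task's prompt.
    return task_prompt.rstrip() + "\n\n" + DELEGATION_NOTE.format(model=model)
```

Keeping it as a template means the supervising session stays short and cheap - exactly why the next ticket can be picked up in the same session.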
AntiCode Guy @AntiCodeGuy ·
AI Gets Watermarks. Why Don't Humans?
AntiCode Guy @AntiCodeGuy ·
Remember that meme about a parrot trained to say "How's the project going" who got promoted to PM? That's no longer a meme - it's reality. And it's not a parrot anymore, it's an AI agent.

After setting up Plane, which I wrote about a couple days ago, the next logical step was actually using it for project management. What could be more boring and routine in 2026! Obviously, I'm making the AI do this dirty work.

First, I gave it an API key so it had control over the workspace. Then I fed it the transcript from our last team call where we were handing out tasks for the next sprint. The agent parsed the transcript, shaped the structure of a new project, and sliced it into modules (following Plane's conventions) and individual tasks - basically the full job of a project manager who would've spent a couple hours on this after a kickoff meeting.

One thing though - not everything worked through the API alone. For example, the API has no way to set dependencies between tasks. So I had to bring out the big guns, namely Playwright, for that part.

Next, I fed the agent more context from previous meetings and the knowledge base so it could enrich the tasks with detailed descriptions, adjust the dates so everything looks clean on the Gantt chart, and rephrase the wording to actually make sense (yeah, that doesn't always happen naturally on a call). And finally, it drafted a message for the team chat with a ping for each person about their tasks.

Now all that's left is to automate this little guy so he, just like that parrot, pings us in the chat on his own asking "How's the project going." Except unlike the parrot - he's got a task manager in his hands and the full context of the project.
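The API half of that job boils down to turning parsed transcript items into issue-creation calls. A sketch under assumptions: the endpoint shape and the `X-API-Key` header follow Plane's API as I understand it, and the host is a placeholder - verify against the actual docs.

```python
# Sketch: one parsed line of the sprint-call transcript -> one Plane issue.
import json
import urllib.request

BASE_URL = "https://plane.example.com/api/v1"  # placeholder self-hosted instance

def issue_payload(name: str, description: str) -> dict:
    # Minimal issue body; Plane supports many more fields than this.
    return {"name": name, "description": description}

def create_issue_request(workspace: str, project: str, api_key: str, payload: dict) -> urllib.request.Request:
    # Build the POST the agent fires for each task it extracted.
    url = f"{BASE_URL}/workspaces/{workspace}/projects/{project}/issues/"
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"X-API-Key": api_key, "Content-Type": "application/json"},
        method="POST",
    )

# Dependencies between tasks couldn't be set this way (per the post) -
# exactly the gap Playwright had to cover.
```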
AntiCode Guy @AntiCodeGuy ·
The Anti-SaaS Manifesto: Pay Once, Use Forever
AntiCode Guy @AntiCodeGuy ·
Simplicity vs. Overengineering 1:0

Second week in a row I've been trying to build an autonomous video editing system on top of the video-use and Hyperframes skills. You'd think: if I run these skills separately and everything goes 80% smoothly, with just 1-2 follow-up prompts to fix things, then wrapping all of it into a clean algorithm with sequential calls to those same exact skills should just work without me babysitting it.

But nope! At literally every turn everything goes sideways. Prompts start drifting, requirements from the skills get ignored, algorithm steps don't respond the way they should, and a ton of other stuff I can't explain without diving deep into the shithole of technical nuances around building LangGraph chains and moody AI. I rebuilt the system from scratch twice, and both times, just when I thought I was basically at the finish line, everything fell apart because of an error at the very beginning that snowballed into unrecoverable consequences at the final stages.

Which once again proves how important proper architecture design is - and actually understanding what you're building. Sure, a simple one-pager calculator you can one-shot, but building anything even remotely complex that way still doesn't work. And as of today it's honestly simpler, faster, and cheaper in tokens to just run those two skills independently and get the result, rather than use my system. For now.
AntiCode Guy @AntiCodeGuy ·
Desktop software licensing, it turns out, is also a whole story
AntiCode Guy @AntiCodeGuy ·
Not too long ago I wrote about Linear - a beautiful, convenient task manager with an MCP connector built right in, which lets you organize the work of AI agents as well as your own. But pretty quickly I ran into an annoying free tier limitation - 250 tasks, which I burned through on the very first two serious projects with a codebase.

Sure, you can upgrade to paid and call it a day, but that's not how we do things these days. Today we either vibe-code a solution or use AI agents to set up a free alternative. And one exists - Plane. The key thing about Plane for my case is the Community edition - completely free, no limits, self-hosted, fully customizable if needed.

So all you have to do is give the AI agent access to the server where the new task manager will run. The rest depends on you - if you need to hook up a custom domain (which is almost always yes) or your own SMTP server (to send notifications), you'll have to put in some effort and go manually (oh god) set up DNS records.

Once Plane was deployed, I connected an AI agent to it via API and asked it to migrate all tasks from Linear to the new Plane - which it did without breaking a sweat. No limits now, and a fully self-controlled modern task tracker that also plays very well with AI agents. Set it up yourself if you're tired of paying for a task manager.
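The migration itself is mostly a fetch-and-map exercise. A sketch under assumptions: Linear does expose a GraphQL endpoint at api.linear.app/graphql, but the exact query fields and the Plane-side payload shape here are illustrative - let the agent check both APIs before running anything.

```python
# Sketch of the Linear -> Plane migration: export issues via GraphQL,
# map each one onto a Plane-style creation payload.
import json
import urllib.request

LINEAR_QUERY = """
query {
  issues(first: 250) {
    nodes { title description state { name } }
  }
}
"""

def linear_export_request(api_key: str) -> urllib.request.Request:
    # Linear personal API keys go straight into the Authorization header.
    return urllib.request.Request(
        "https://api.linear.app/graphql",
        data=json.dumps({"query": LINEAR_QUERY}).encode(),
        headers={"Authorization": api_key, "Content-Type": "application/json"},
        method="POST",
    )

def to_plane_issue(node: dict) -> dict:
    # Map one exported Linear issue onto a (assumed) Plane creation payload.
    return {"name": node["title"], "description": node.get("description") or ""}
```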
AntiCode Guy @AntiCodeGuy ·
New level of laziness (productivity) unlocked

My client's website is built on a site builder. Up until now I kept editing it manually from the browser - the Claude Chrome plugin makes browser actions painfully slow and painfully expensive. Honestly, it's faster and cheaper to just click the buttons yourself.

But then I remembered Playwright - browser automation driven from the command line. You don't see the browser the way a normal user does, but an AI agent sees it in its native environment - like a fish in water. And since the agent works with the browser through the command line, it can do everything there way faster than a human and way cheaper than through browser plugins.

I asked Codex to install Playwright and gave it a list of tasks - stuff that needed tweaking on the site. It asked me to log in, opening a browser tab where I entered the login and password for the site builder. After that the agent went into autonomous mode and a few minutes later reported back that everything was done.

I opened the site and was horrified - everything was done exactly as needed. I didn't even have to fix anything. Another layer of routine work - gone. Next step: automating the intake of tasks from clients itself.
AntiCode Guy @AntiCodeGuy ·
LangGraph turned out to be exactly what I expected - a reliable, robust process orchestrator that knows how to work well with AI. Robust, because it's code. Code is dumb - it does exactly what you write. That's why we love computers: 2 + 2 will always be 4, even if you run the procedure 4,237,834 times. But the same prompt sent to an AI agent will generate different results each time - and that's by design.

When we want to bring AI into a business, though, we don't need probabilistic outcomes. We need tasks executed cleanly. That's exactly why we hire people - so there's someone to hold accountable for the result.

Tasks that require "creativity" rather than determinism - those we hand off to LLMs. For simple stuff, cheap or even free local models will do. Where you need the full horsepower of modern AI, you reach for the flagships.

That's precisely what LangGraph lets you build: use the dumb calculator where you need hard numbers, and call on the creative AI where the work is cognitive - where the outcome isn't known in advance.
AntiCode Guy @AntiCodeGuy ·
If you haven't played around with the new Claude Design yet, I highly recommend running a couple of exercises through it - the kind you'd normally send to a designer. Or wouldn't do at all, because there's no designer and it felt like a waste of money.

The new version of the Claude Opus 4.7 model is known for its "vision," which is what powers the Design tool. It lets you evaluate assets specifically from an aesthetic, visual standpoint. And it does this really well.

For example, I never bothered much with Anticode's design - honestly, it felt like a waste of money, and I never made enough on my projects to reach the stage where design itself was driving my results. Maybe I'm thinking about this backwards, but still. Now I already have a Claude subscription, which means I also have a personal designer who's put together a perfectly usable design system for my brand - and I'm happily using it for the redesign.

Some will call it AI slop, but for me it's an acceptable version that's definitely better than not having one at all. And by the way, I think it turned out pretty decent. I'll show the results after the site redesign.

Speaking of which, what it's still bad at is drawing logos. Mine came out totally off-target and not about the right thing, so I'll hand that off to some other model for refinement.
AntiCode Guy @AntiCodeGuy ·
Through evolution (pain and suffering), I've finally come to realize that orchestrating complex pipelines is still extremely hard for the models themselves - even ones as powerful as Opus 4.7. Constant deviations from given instructions, spontaneous step invention, arbitrary decisions that contradict the pipeline's goal - all the delightful quirks of LLMs' stochastic nature.

And my pipeline isn't even that heavy - video editing split into three phases, the first of which is just dumbly running pre-made scripts. Yet every full pipeline run comes with fresh surprises and a massive wall-of-text retro doc listing everything that went sideways that session.

But business process orchestration is my bread and butter - and it was invented long before modern LLMs ever showed up. Old faithful Camunda, for instance. What interests me, though, are tools specifically built for working with LLMs. And that's exactly what LangGraph is. So today I dove deep into the rabbit hole of graph programming.
AntiCode Guy @AntiCodeGuy ·
This is my biggest vibe-coding fail so far

Last Sunday, as usual, I was planning my Monday stream where I edit my videos. This time, though, I decided to hand the editing over to AI and stream the whole process live. It all started pretty interestingly - we kicked off building my own custom video editing studio, tailored to my personal requirements and my content style. But I didn't manage to finish the whole thing on Monday.

Tuesday, I thought I'd wrap it up - nope. I only got the first part out: clean cuts, solid audio, and color correction. That part actually came out really well, especially considering it was all automated. But then the next stage - animation prep - went completely sideways. I didn't get the animations (or the finished video) on Tuesday. Didn't get them Wednesday or Thursday either. Today is Friday, and I still haven't published a single video.

Stubbornly and relentlessly pushing forward on the system - and I will get what I want out of it. Stay tuned for the video in a new style.
AntiCode Guy @AntiCodeGuy ·
In today's time crunch, I hate stepping away from the computer - what if Claude finishes its work and needs my input! It's genuinely annoying. You walk away hoping to come back and review the results, only to find that 30 seconds after launching, the agent hit a blocker and needs your call.

But there's a fix! Notifications, configurable via hooks! I asked Claude to set up hook-based notifications that send a signal to my system - for when I'm working away from the AI agent console - and to my ntfy mobile app, for when I've stepped away from the computer entirely. ntfy, by the way, is completely free and dead simple - you just subscribe to the notifications you want and they work. Exactly how I like things.

So now, whenever Claude needs my attention, I always know about it. Also - these are the only notifications, besides banking alerts, that are turned on on my phone.
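The ntfy side really is that simple - a push is just an HTTP POST to https://ntfy.sh/&lt;topic&gt;. Here's the kind of tiny notifier a hook can invoke; the topic name is a placeholder, and the exact hook wiring in Claude Code is left out since that's configuration.

```python
# Minimal ntfy notifier a Claude Code hook can call when the agent is blocked.
import urllib.request

NTFY_TOPIC = "my-claude-alerts"  # placeholder - pick your own hard-to-guess topic

def build_push(message: str, topic: str = NTFY_TOPIC) -> urllib.request.Request:
    # ntfy needs nothing but the message body and a topic URL.
    return urllib.request.Request(
        f"https://ntfy.sh/{topic}",
        data=message.encode(),
        headers={"Title": "Claude needs your attention"},
        method="POST",
    )

def notify(message: str) -> None:
    urllib.request.urlopen(build_push(message))  # fires the push to the phone

# e.g. a hook command could call: notify("Agent hit a blocker and needs your call")
```

Subscribe to the same topic in the ntfy mobile app and the phone buzzes the moment the agent stalls.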