Andrey S.


@SPolozovv @TheFigen_ Not fake. Fully modernized after the 2010 fire and currently operational

@weswinder I am also looking for an ultra-compact laptop, but I am surprised that nothing like that is available on the market in 2026. The closest option is the GPD Pocket 4 Modular Laptop, but it is ridiculously expensive


@nalinrajput23 I remember accidentally spilling Red Bull on the screen and replacing it in one evening with no repair experience

2026 is the year of orchestration
I have been experimenting with a tool called Trigger.dev lately. But what interests me is not the tool itself. It is what it represents.
What is happening to the profession
The question developers ask has shifted. Not dramatically, not all at once. But it has shifted.
Before: "How do I write this code?" Now: "How do I make sure this process does not crash at 3am when an external API hangs for 40 seconds?"
These are very different questions. The first is about syntax, the second is about the lifecycle of long-running tasks. And this is exactly where a whole class of tools called orchestrators comes in.
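To make the second question concrete, here is a minimal sketch in plain TypeScript of what you end up hand-rolling without an orchestrator: a per-attempt timeout plus retries around a flaky external call. Function names are illustrative, and a real orchestrator would also persist state between attempts rather than keep it in memory.

```typescript
// Wrap a promise so it rejects if the external API hangs too long.
async function withTimeout<T>(p: Promise<T>, ms: number): Promise<T> {
  let timer: ReturnType<typeof setTimeout>;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error("timed out")), ms);
  });
  try {
    return await Promise.race([p, timeout]);
  } finally {
    clearTimeout(timer!);
  }
}

// Retry the call a few times; each attempt gets its own timeout budget.
async function callWithRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  timeoutMs = 5_000,
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await withTimeout(fn(), timeoutMs);
    } catch (err) {
      lastError = err; // an orchestrator would checkpoint state here too
    }
  }
  throw lastError;
}
```

This covers one slow API call. Once you have dozens of such steps, some waiting minutes or hours, stitching these wrappers together by hand stops scaling, which is the gap orchestrators fill.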
The problem everyone pretended did not exist
When an AI agent runs a complex chain of tasks (parsing data, calling multiple APIs, waiting on an LLM response), the whole thing can take minutes or even hours. Traditional serverless platforms are not built for this: timeouts hit, state is lost, and the job fails silently.
This is not a hypothetical problem. It is something anyone who has tried to ship an #AI agent to production has run into.
There is another side to this that gets less attention: task isolation and containerization. Every run in Trigger.dev executes in its own MicroVM, an isolated environment that spins up in milliseconds and shares no state with other processes. So one failing agent does not bring down the rest, and every task gets a clean, predictable environment.
The snapshot mechanism goes further. When a task pauses waiting for a human-in-the-loop approval or an external event, Trigger.dev physically terminates the container. CPU and RAM are fully released. This is different from the classic checkpoint approach, where state gets serialized and written to a database, but the server process stays alive and keeps holding resources. For agentic workflows where a task waits hours for a human response, the difference on your cloud bill is real.
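The pause-and-release idea can be sketched without the SDK. This is a conceptual illustration, not the actual Trigger.dev API: task state is serialized to durable storage, the process can exit entirely, and a later invocation resumes from the snapshot. The in-memory Map here stands in for real durable storage.

```typescript
// Minimal snapshot/resume sketch: suspend persists state so the process
// can be killed; resume lets a fresh process pick up where it left off.
type Snapshot = { step: number; data: Record<string, unknown> };

const store = new Map<string, Snapshot>(); // stand-in for a database

function suspend(taskId: string, snapshot: Snapshot): void {
  store.set(taskId, snapshot); // persist, then the container can terminate
}

function resume(taskId: string): Snapshot | undefined {
  return store.get(taskId); // a new container restores the saved state
}

// A task reaches step 2, then waits hours for a human approval:
suspend("approval-123", { step: 2, data: { invoice: "INV-42" } });
// Later, possibly in a different process:
const restored = resume("approval-123");
```

The point of the design is what happens between `suspend` and `resume`: no process is alive and no CPU or RAM is held while the task waits.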
Here is a practical scenario that shows this well. Imagine you have 500 active tasks in the queue and need to deploy a new version of your service. With traditional long-running workers, this is painful: you either wait for everything to finish or kill processes and lose progress. Trigger.dev handles this cleanly: active tasks complete on the old version using their saved snapshot, new tasks spin up on the updated one. A deploy with no queue drain and no data loss.
Why now
Several things aligned at the same time.
The #mcp standard (Model Context Protocol) made it possible for agents from different vendors to talk to each other without custom integration layers. That removed one of the biggest barriers to production deployments. At the same time, tools like Cursor made generating code almost trivial, which shifted the real challenge: writing code is easy now, but making it run reliably across external APIs, agents, and async processes is still hard.
That is where orchestration moved from "nice to have" to a required layer of any serious architecture.
Google Cloud's 2026 reports show that nearly 90% of companies that adopted agentic systems are already seeing positive ROI. The Comp AI case is a good example: automating evidence collection for SOC 2 certification let startups go through audits at a fraction of the traditional cost, and over 2,500 companies adopted the platform in a short period.
The economics are shifting. Developers who can build reliable orchestration layers are becoming more valuable than those who just write clean code.
The main options and where Trigger.dev fits
Here is how the main options compare:
temporal.io has years of proven use in enterprise environments, but the learning curve is steep. It requires strictly deterministic code and a deep understanding of the workflow model.
inngest.com is built around a pure event-driven approach. Clean for serverless, no long-running workers needed.
Trigger.dev is positioned as "@vercel for the backend." TypeScript-native, MicroVM under the hood, official MCP server for AI agent integration. Well suited for tasks where an agent needs to survive long pauses.
There is no universally better choice. But there is a right choice for a given task, and knowing how to make that call is becoming a key skill.
What this means for developers
2026 is when #agentic systems moved from demos into real production. And as that happened, orchestration quietly became the connective tissue of the stack: the layer that handles task delegation, retries, state, and API coordination across the whole system.
Developers are moving away from writing code toward designing processes.
This is not a threat to the profession. It is an upgrade.
I am still experimenting with this stack. If you are already running orchestrators in production, I would be curious to hear what you chose and why.
@triggerdotdev @inngest @temporalio


There’s been discussion about different formats for representing structured data in AI workflows: JSON, TOON and VSC.
Same data example in three formats.
JSON
{ "users": [{ "id": 1, "name": "Alice", "role": "admin" }, { "id": 2, "name": "Bob", "role": "user" }] }
TOON
users[2]{id,name,role}:
1,Alice,admin
2,Bob,user
VSC
schema:id,name,role
1,Alice,admin
2,Bob,user
JSON is the standard for the web.
TOON tries to reduce syntax noise and token usage for LLM prompts.
VSC goes further and keeps only values when the schema is known.
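For illustration, here is how the values-only encoding could be produced from the JSON data above. I am inferring the VSC shape purely from the example (a schema line followed by value rows); this is not based on a published spec, and the function name is made up.

```typescript
// Serialize rows into a schema-plus-values layout like the example above.
type Row = Record<string, string | number>;

function toValuesWithSchema(rows: Row[], fields: string[]): string {
  const header = `schema:${fields.join(",")}`;
  const lines = rows.map((row) => fields.map((f) => String(row[f])).join(","));
  return [header, ...lines].join("\n");
}

const users = [
  { id: 1, name: "Alice", role: "admin" },
  { id: 2, name: "Bob", role: "user" },
];

const encoded = toValuesWithSchema(users, ["id", "name", "role"]);
// → "schema:id,name,role\n1,Alice,admin\n2,Bob,user"
```

The token savings are easy to see: the field names appear once instead of once per object, which is exactly the trade-off these LLM-oriented formats are chasing.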
Will one format win, or will each occupy its own niche in AI systems?



4/9 🧩 Self Hosted AI Agents
A lot of teams want full control over their agent stack.
These projects are designed to run on your own infrastructure with full ownership of data and execution.
• OpenClaw
openclaw.ai
• ZeroClaw
zeroclaw.dev
• NanoClaw
nanoclaw.net
Common direction:
local deployment, custom tooling, private data handling, and infrastructure-level control.

3/9 🧱 Enterprise AI Agents
Another direction is enterprise-ready agent platforms built for teams and production environments.
These solutions focus on governance, reliability, and control at scale.
• IronClaw
ironclaw.tech
• OpenClaw
openclaw.ai
• ControlClaw
controlclaw.io
Common direction:
policy management, audit trails, access control, and structured workflows for real production use.

2/9 ⚡ Lightweight AI Agents
Another clear trend is smaller and simpler agents.
These projects focus on minimal runtime, fast startup, and low resource usage.
• PicoClaw
picoclaw.dev
• NullClaw
nullclaw.ai
• ZeptoClaw
zeptoclaw.io
Common direction:
smaller binaries, fewer moving parts, easier control, reduced attack surface.

1/9 Security First AI Agents
A growing group of AI agents is built around one idea: security first.
Instead of adding more automation, these projects focus on safe execution and strict control.
• ZeroClaw
zeroclaw.dev
• NanoClaw
nanoclaw.net
• IronClaw
ironclaw.tech
Common direction:
sandboxing, isolation, policy control, and limiting what the agent can access.

Congratulations to the Tesla team on making the first production Cybercab!
Tesla (@Tesla):
First Cybercab off the production line at Giga Texas

@ch3njus @cursor_ai $3, but the cost doubles when the input exceeds 200k tokens
