Mike Christensen

105 posts

@christensencode

Staff Distributed Systems Engineer @ablyrealtime

London · Joined September 2014
503 Following · 39 Followers
Mike Christensen @christensencode
@FUCORY @Dayhaysoos What's stopping you from using Flue with whatever durable execution platform you want? Temporal, Vercel WDK, etc. I prefer it when a library/framework is hyper-focused on a specific need and can be composed with other solutions for adjacent needs.
fucory @FUCORY
@Dayhaysoos The point of any framework is to give you maximal programmability and flexibility while removing as much of the burden of work unrelated to your actual goals as possible. The features missing from Flue, like durable execution, mean these are now problems you have to solve.
fucory @FUCORY
Flue is similar vibes to Smithers: agentic orchestration as code, in the form of a framework. What it's missing:

1. Durable execution (the biggest thing) - Smithers is built from the ground up to handle any issues in long-running execution by breaking all work into durable tasks. These tasks can even be distributed to different machines.
2. Declarative workflows - Smithers workflows are built from the ground up to be debuggable frame by frame. Flue is purely imperative JavaScript.
3. Hot mode - Smithers supports changing the prompts while the code is running, without having to restart.
4. Observability - Smithers comes with built-in OpenTelemetry and Prometheus metrics.
5. Composability - Smithers is built on a component architecture that more easily allows you to build up abstractions.

On the library-to-framework scale, Flue is more on the library side, where I believe Smithers is what a real agentic framework looks like.

In general, Flue looks super cool, and as someone who was a huge fan of Astro and Snowpack, I'm not surprised. I'd love to see it steal some ideas from Smithers. I'll definitely be digging into Flue for some design inspiration.
fks @FredKSchott

Introducing Flue — The First Agent Harness Framework

Flue is a TypeScript framework for building the next generation of agents, designed around a built-in agent harness. Flue is like Claude Code, but 100% headless and programmable. There's no baked-in assumption like requiring a human operator to function. No TUI. No GUI. Just TypeScript.

But using Flue feels like using Claude Code. The agents you build act autonomously to solve problems and complete tasks. They require very little code to run. Most of the "logic" lives in Markdown: skills, context, and AGENTS.md.

Flue is like Astro or Next.js for agents (not surprising, given my background 🙃). It's not another AI SDK. It's a proper runtime-agnostic framework. Write once, build, and deploy your agents anywhere (Node.js, Cloudflare, GitHub Actions, GitLab CI/CD, etc.).

We originally built Flue to power AI workflows inside the Astro GitHub repo. But then @_bgiori got his hands on it, and we realized that every agent needs a framework like Flue, not just us.

Check it out! It's early, but I'm curious to hear what people think. Are agents ready for their library -> framework moment?

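The durable-execution idea in the thread above (persisting each task's result so a crashed or restarted run resumes instead of redoing completed work) can be sketched minimally. This is an illustration only: the `durable` helper and `TaskStore` shape are hypothetical, not Smithers', Flue's, or Temporal's actual API.

```typescript
// Minimal durable-task sketch: each task's result is persisted by key,
// so re-running the workflow after a crash skips already-completed tasks.
// All names here are illustrative, not any real framework's API.
type TaskStore = Map<string, unknown>;

async function durable<T>(
  store: TaskStore,
  key: string,
  fn: () => Promise<T>,
): Promise<T> {
  if (store.has(key)) return store.get(key) as T; // resume: reuse saved result
  const result = await fn();
  store.set(key, result); // checkpoint before moving on
  return result;
}

async function workflow(store: TaskStore): Promise<string> {
  const a = await durable(store, "fetch", async () => "data");
  const b = await durable(store, "summarize", async () => a.toUpperCase());
  return b;
}
```

A real system would back the store with durable storage and distribute task execution; the replay-and-skip shape stays the same.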
Mike Christensen @christensencode
I'll be in SF next week for @temporalio Replay. Already added some great-looking talks to my schedule:

- @samuelcolvin (Pydantic) on durable agents and long-running AI workflows
- @cursor_ai's Jeremy Stribling on building agentic consumer products
- @GradientLabsAI's Eliot Miller on real-time voice agents

If you're at Replay or in SF next week and working on customer/consumer-facing AI, send me a DM. Would be great to meet.
Matt Pocock @mattpocockuk
Any London-based AI events looking for a conference speaker, and planning to professionally film their talks? I have a banger of a talk lined up that I need to let loose somewhere
Mike Christensen @christensencode
Just a few weeks after @karpathy's LLM Wiki post (gist.github.com/karpathy/442a6…), it seems everyone is building their Obsidian "second brain". Here's what mine looks like.

At a high level: it's an Obsidian vault, where the agent maintains a Wiki/ folder of entities, concepts, source summaries, and syntheses. I have skills that run every day and automatically ingest data into the wiki: posts on X and LinkedIn, articles and docs I read, GitHub repos and activity, team meetings, sales calls, Slack action items, etc.

The wiki is a graph that the agent walks as easily as it navigates code. Every Claude session I run has access to it, whether I'm writing code, drafting content, working on internal docs, or thinking through product strategy.

Before this, I relied on hit-or-miss MCP context retrieval, repeated deep-research agent session artefacts, and context that lived in 30 tabs or a massive context-folder dump. Now everything is captured once, automatically structured and maintained, and always available to my coding agent, queryable on demand. Every morning, the ingest cycle surfaces what my team worked on, action items from meetings or Slack, insights from customer calls, and new learnings from any materials I've read. Knowledge compounds instead of evaporating.

There are about 40 skills under .claude/skills/. The biggest family is ingest, with two patterns:

- One-off (drop something in mid-day, get it indexed): /obs-ingest-{url|file|x-tweet|li-post|gh-repo|gong-call|fellow-call}
- Daily morning digest (fires from /obs-plan-day-open, also runnable solo): /obs-ingest-{x-bookmarks|x-likes|x-digest|li-likes|li-digest|gh-activity|gong-digest}

After every ingest, a Python script scans new content for external URLs and can recursively ingest linked content if needed.

Some of what I can do with the data:

- Generate new ideas grounded in current projects and interests (/obs-idea-generate)
- Argue against my current thinking using my own vault as evidence (/obs-idea-challenge)
- Find places where I hold contradictory beliefs across notes (/obs-idea-contradict)
- Surface unexpected bridges between unrelated domains (/obs-idea-connect)
- Trace how my view on a topic has evolved across daily notes (/obs-view-trace)
- See which topics I've gone quiet on, by absence of mentions (/obs-view-drift)
- Get a topological view of the vault: clusters and how ideas relate (/obs-view-map)
- Reshape the coming week around what's most alive in the vault (/obs-plan-week-open)

And the wiki itself stays maintained automatically: /obs-wiki-backlinks wires bare mentions of new entities across the vault, /obs-wiki-graduate promotes ideas from daily notes into standalone pages, and /obs-wiki-lint surfaces contradictions and stale claims.

Next I want to run the agent in a @vercel Sandbox, with Pi as the agent, a headless Obsidian for sync, and a UI to use these skills from anywhere.

Curious how others are doing this. DM or reply with what your skill set looks like, particularly if you've taken it to the cloud-hosted side. Especially if you've used headless Obsidian!
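The link-scanning pass described above (extract external URLs from newly ingested notes, queue unseen ones for recursive ingest) is a small pattern worth sketching. The author's version is a Python script; this is a TypeScript equivalent of the idea, with all names illustrative.

```typescript
// Sketch of the post-ingest link scan: pull external URLs out of new notes
// and queue any not seen before, so linked content can be ingested recursively.
// The `seen` set both dedupes and guarantees the recursion terminates.
const URL_RE = /https?:\/\/[^\s)\]"']+/g;

function extractUrls(note: string): string[] {
  return note.match(URL_RE) ?? [];
}

function queueNewUrls(notes: string[], seen: Set<string>): string[] {
  const queue: string[] = [];
  for (const note of notes) {
    for (const url of extractUrls(note)) {
      if (!seen.has(url)) {
        seen.add(url); // never re-ingest the same URL
        queue.push(url); // hand off to the one-off ingest skill
      }
    }
  }
  return queue;
}
```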
Mike Christensen @christensencode
It was also great to chat with @nicoalbanese10 about some interesting UX challenges builders need to deal with.

Long agent sessions need client-side list virtualization. Thousands of events get flushed from the server, and each one triggers a re-render. It would be cool if the client could window through any portion of the session history without agent/server coordination.

And in many apps (especially non-coding agents), you don't even want to surface the whole agent trace. The durable-execution context (every tool call, thought, and intermediate LLM response) can be distinct from what you pipe to the client. You want deliberate curation of what reaches the user. This gets more important as agents move beyond internal tooling into customer-facing products.

I think durable execution + durable sessions is an interesting combo to tackle these problems. Going to play around with this stack and see how far I can push things!
Mike Christensen @christensencode

Loved @vercel's Workflows workshop in London yesterday. Their agent-era stack (AI SDK + Workflow DevKit + Sandbox) is one of the most coherent I've seen for building AI products.

My take: Workflows essentially makes serverless stateful. Serverless used to mean pushing state into an external store and rehydrating on each invocation. Fine for request handlers. Painful for agents, which carry a lot of it: tool history, in-flight work, subagent exchanges. With Fluid Compute, the time your agent is suspended waiting on I/O doesn't burn cloud credits.

Durable execution is quietly becoming the default primitive for agents. I love this stack, but there are still a few gotchas:

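The curation and windowing ideas above can be sketched in a few lines. The event shapes here are illustrative, not any real SDK's types: the full durable-execution trace stays server-side, a curated subset reaches the client, and the client can window any slice of what it holds.

```typescript
// Illustrative event shape: the trace holds every step the agent took,
// but only user-facing messages are piped to the client.
type AgentEvent = { type: "thought" | "tool_call" | "message"; payload: string };

function curateForClient(trace: AgentEvent[]): AgentEvent[] {
  return trace.filter((e) => e.type === "message"); // hide internal steps
}

// Client-side virtualization slice: render only the visible window of events.
function windowSlice<T>(items: T[], offset: number, limit: number): T[] {
  return items.slice(offset, offset + limit);
}
```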
Mike Christensen @christensencode
I've been thinking about this problem with @ablyrealtime's AI Transport. The idea: session-layer writes are scoped to the step that produced them. On successful step completion, writes remain visible to clients. On failure, writes associated with the failed step can be excluded.

We define a total order over step identifiers, which gives unambiguous precedence across step invocations. The durable session layer does the step-boundary accounting so the agent and UI code doesn't have to.
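A minimal sketch of the step-scoped writes described above, with hypothetical names (not AI Transport's actual API): every write carries the id of the step that produced it, failed steps' writes are excluded from the visible view, and the total order over step ids resolves precedence across retry attempts.

```typescript
// Step-scoped session log sketch. A retry of a failed step uses a new,
// later step id, so the total order over step ids gives unambiguous
// precedence between attempts. All names are illustrative.
type Write = { step: number; data: string };

class StepScopedLog {
  private writes: Write[] = [];
  private failed = new Set<number>();

  append(step: number, data: string): void {
    this.writes.push({ step, data });
  }
  markFailed(step: number): void {
    this.failed.add(step); // this step's writes disappear from the visible view
  }
  visible(): string[] {
    return this.writes
      .filter((w) => !this.failed.has(w.step))
      .sort((a, b) => a.step - b.step)
      .map((w) => w.data);
  }
}
```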
Mike Christensen @christensencode
But one kind of side effect has no rollback primitive: data you've already sent to the client. Streaming is the obvious case. Half a response lands in the browser, the step fails, and the retry produces another attempt. The client now has two partial responses interleaved. There's no way to un-stream tokens.
Mike Christensen @christensencode
Loved @vercel's Workflows workshop in London yesterday. Their agent-era stack (AI SDK + Workflow DevKit + Sandbox) is one of the most coherent I've seen for building AI products.

My take: Workflows essentially makes serverless stateful. Serverless used to mean pushing state into an external store and rehydrating on each invocation. Fine for request handlers. Painful for agents, which carry a lot of it: tool history, in-flight work, subagent exchanges. With Fluid Compute, the time your agent is suspended waiting on I/O doesn't burn cloud credits.

Durable execution is quietly becoming the default primitive for agents. I love this stack, but there are still a few gotchas:
Aaron Epstein @aaron_epstein
Over the next decade, it’s likely that new AI interfaces will emerge beyond the common chat UI that many products are using today. What are some of the most interesting ones you’ve built or use today?
Mike Christensen @christensencode
This is an emerging pattern many developers are using to build resumable streaming and multi-client experiences in their AI products. Find out more: durablesessions.ai
Mike Christensen @christensencode
Pub/sub flips it. The session is the shared resource; clients subscribe to it and resume from it independently, from any device. Multi-tab, multi-device, reconnect stop being special cases.
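The pub/sub model above can be sketched with a generic in-memory channel (illustrative only, not a specific provider's API): the session is an ordered log, and each client resumes independently by replaying the log from its last-seen position before receiving live messages. Multi-tab and reconnect are just different resume positions.

```typescript
// Generic pub/sub session sketch: the session is the shared resource,
// clients subscribe to it and resume from it independently.
class SessionChannel {
  private log: string[] = [];
  private subs: Array<(msg: string) => void> = [];

  publish(msg: string): void {
    this.log.push(msg);
    this.subs.forEach((fn) => fn(msg)); // fan out live to every subscriber
  }
  // Resume: replay everything after `from`, then receive live messages.
  subscribe(from: number, onMsg: (msg: string) => void): void {
    this.log.slice(from).forEach(onMsg);
    this.subs.push(onMsg);
  }
}
```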
Mike Christensen @christensencode
Open a second tab on your AI chat app. The in-flight response isn't there. That's *not* a frontend bug you'll fix with clever state. It's a transport choice.
Mike Christensen @christensencode
This is an incredible story. It was great to hear @brian_scanlan talk about this at @aiDotEngineer Europe last week. Increasing output with AI tooling does *not* mean lower quality output. If you do it right, it means more output and *higher* quality.
Darragh Curran @darraghcurran

9 months ago we publicly committed to 2x the productivity of our R&D org at @intercom. It was scary. It wasn't always clear we'd pull it off. We hit it with 3 months to spare. In fact, looking back 16 months - we've 3x'd. Here's what actually happened (with receipts): 🧵

Mike Christensen @christensencode
@jonas @aiDotEngineer I love this idea of agents as stream processors. We've been working on something similar on top of pub/sub channels: the channel is a multiplexed oplog of events from agent event streams, the session is materialised from the stream, and it's event-format agnostic through codecs. github.com/ably/ably-ai-t…
Jonas Templestein @jonas
I had such a great time at @aiDotEngineer Europe last week! Except for one thing: my workshop went terribly because I vibe-slopped a little bit too hard half an hour before it started and was too frazzled to recover.

So I promised everyone I'd make a video recording of what it was meant to be. Here's that video! This video is relatively long and aimed at the workshop audience. But if there's any interest, I'll make a 3-minute version.

In short, I believe that (maybe):

1. Agent harnesses should be modelled as stream processors that append({ event }) to, and stream() from, an append-only event log
2. All state in agent harnesses should be event-sourced
3. Harness plugins are just stream processors, too, and can run on other machines from the "harness" itself
4. All agents should have a public URL that events can be posted to
5. You should be able to append the source code of a stream processor to a stream, and then it magically runs on that stream

To prove it, we make a "coding agent" built on @tan_stack AI and @cramforce's just-bash and deploy it to a real stream at events.iterate.com. And because @badlogicgames told us to use our brains more, I thought I'd try to actually write the code by hand.

You can try this yourself using this repo: github.com/iterate/ai-eng…
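The append/stream model in the post above can be sketched in a few lines (all names illustrative): everything is an append-only event log, state is whatever you fold out of stream(), and a plugin is just another stream processor attached to the same log.

```typescript
// Append-only event log sketch: the harness and its plugins are all
// stream processors over the same log. State is rederived by folding
// over stream(); nothing is mutated in place.
type LogEvent = { type: string; data?: unknown };

class EventLog {
  private events: LogEvent[] = [];
  private processors: Array<(e: LogEvent) => void> = [];

  append(event: LogEvent): void {
    this.events.push(event);
    this.processors.forEach((p) => p(event));
  }
  stream(): readonly LogEvent[] {
    return this.events;
  }
  // A plugin is just another stream processor attached to the same log;
  // in a distributed version it could run on a different machine.
  attach(processor: (e: LogEvent) => void): void {
    this.processors.push(processor);
  }
}
```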
Mike Christensen @christensencode
We've been building ably.com/docs/ai-transp… as our implementation of this. Had a go at plugging it into Open Agents - multi-client just worked. Two browsers on the same agent session, disconnect one, reconnect, it catches right up. Will share a fork soon!
Mike Christensen @christensencode
The session is shared and provides full, live visibility of all activity across multiple clients. You can route to, cancel or steer the agent from any tab or device.
Mike Christensen @christensencode
Open Agents by @vercel is a great reference for cloud agents - durable workflows, sandboxed execution, multi-model gateway. Awesome platform. As @karthikkalyan90 points out, if you're building this there are still some hard problems to solve though...
Karthik Kalyan @karthikkalyan90

While the naive approach works for a demo, it breaks down in production:

1. Crash recovery - If the server dies mid-response (OOM, deploy, cold start), the LLM context, partial response, and completed tool calls are all lost. You restart from scratch, burning tokens and time.
2. Reconnection - If the WebSocket disconnects (user switches devices, network blip), you need to keep the agent running server-side, buffer all streamed chunks, and replay them on reconnect. You're essentially building a message queue on top of WebSockets.
3. Idle resource management - Sandboxes have a 5-hour hard timeout, but most sessions are idle after 10 minutes. Without an inactivity-based hibernation system, you're paying for VMs nobody is using. Building one means polling all sessions, checking activity timestamps, and handling race conditions if a user returns mid-shutdown.
4. Duplicate prevention - A user double-clicks send, or the client auto-retries on a network blip. Now two connections are driving the same agent: race conditions, duplicate commits, corrupted sandbox state.

On top of all this, you'd need retry logic for transient tool failures, observability into run status, and cancellation support (the user clicks "Stop" while the agent is running npm install). You're building a workflow engine inside your app.

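One common fix for the duplicate-prevention problem listed above is an idempotency key checked at the session boundary: the client attaches a key to each command, and the session layer drops commands it has already seen, so a double-click or auto-retry drives the agent exactly once. A minimal sketch with hypothetical names:

```typescript
// Idempotency gate sketch: dedupe commands by client-supplied key so that
// two connections (or a retry) driving the same agent can't double-execute.
class CommandGate {
  private seen = new Set<string>();

  accept(idempotencyKey: string): boolean {
    if (this.seen.has(idempotencyKey)) return false; // duplicate: ignore
    this.seen.add(idempotencyKey);
    return true;
  }
}
```

A production version would persist the seen keys alongside the session and expire them, but the accept-once shape is the same.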