aulneau

4.1K posts

@aulneau

with an affinity for toast, clouds, and slow moments. staff engineer @uniswap labs, leading frontend infra. i love react native

Joined October 2012
4.4K Following · 1.6K Followers
Pinned Tweet
aulneau @aulneau
@danielkauppi It's this: toast, butter substance (vegan or dairy), peanut butter, sriracha, and brown sugar. Optional side pickle.
3 replies · 2 reposts · 23 likes · 0 views
aulneau retweeted
Cheng Lou @_chenglou
My dear front-end developers (and anyone who’s interested in the future of interfaces): I have crawled through depths of hell to bring you, for the foreseeable years, one of the more important foundational pieces of UI engineering (if not in implementation then certainly at least in concept): Fast, accurate and comprehensive userland text measurement algorithm in pure TypeScript, usable for laying out entire web pages without CSS, bypassing DOM measurements and reflow
1.2K replies · 7.8K reposts · 61.8K likes · 21.2M views
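The tweet above describes the concept without code, so here is a toy sketch of what "userland text measurement without the DOM" can mean: given per-glyph advance widths, you can measure strings and compute line breaks in pure TypeScript. Everything here is invented for illustration (the `metrics` table, the `wrap` helper); it is not the library's API, and a real implementation would read metrics from the font file itself.

```typescript
// glyph -> advance width in px (normally parsed from font tables;
// these values are made up for the example)
type Metrics = Record<string, number>;

const metrics: Metrics = { a: 7, b: 8, " ": 4 };

// Sum per-glyph advances; fall back to a default width for
// glyphs missing from the table.
function measure(text: string, m: Metrics, fallback = 8): number {
  let w = 0;
  for (const ch of text) w += m[ch] ?? fallback;
  return w;
}

// Greedy word wrap driven purely by the measured widths --
// no DOM, no reflow, no CSS.
function wrap(text: string, m: Metrics, maxWidth: number): string[] {
  const lines: string[] = [];
  let line = "";
  for (const word of text.split(" ")) {
    const candidate = line ? line + " " + word : word;
    if (measure(candidate, m) <= maxWidth || !line) {
      line = candidate;
    } else {
      lines.push(line);
      line = word;
    }
  }
  if (line) lines.push(line);
  return lines;
}
```

A real library would also need kerning, ligatures, and bidi handling, which is presumably where the "depths of hell" in the tweet come in.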
aulneau retweeted
Peter Pistorius @appfactory
TLDR: GH Actions, but for agents. ~0ms cache, retry-on-failure, insanely fast.

Agents need validation. CI is the last line of defense. They shouldn't bother you unless everything is green!

GH Actions is usually in the top-5 expenses for dev teams. Add agents to that mix and it'll easily double. It's the wrong tool for the right job: slow boot, slow cache, retrieving logs is token-expensive for agents; the list goes on...

So I built a tool with one amazing feature: live-reload for failures.

Agent-CI is a local CI runner. I tweaked the control plane and mounts to provide 0ms caching and insanely fast boots. When a step fails, it pauses, provides the agent with the failure, and waits for the agent to fix and retry just that step.

It uses the standard GH Actions image (via Docker) but emulates the control plane via a local HTTP server, so you don't have to change any of your existing GH workflows.

Tighter loops. Greener builds. Less babysitting. (Demo below.)
15 replies · 19 reposts · 140 likes · 14.6K views
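The "pause on failure, hand it to the agent, retry just that step" loop described above can be sketched in a few lines. Note that `Step`, `runPipeline`, and `askAgentToFix` are hypothetical stand-ins invented for this illustration; Agent-CI's actual API is not shown in the post.

```typescript
// A CI step: a name plus an async action that throws on failure.
type Step = { name: string; run: () => Promise<void> };

// Run steps in order. On failure, pause, hand the error to the
// agent callback, then re-run ONLY the failed step. Earlier steps
// are never re-executed (their results stay "cached").
async function runPipeline(
  steps: Step[],
  askAgentToFix: (step: string, error: unknown) => Promise<void>,
  maxRetries = 3,
): Promise<boolean> {
  for (const step of steps) {
    let attempt = 0;
    while (true) {
      try {
        await step.run();
        break; // step is green, move on to the next one
      } catch (err) {
        if (++attempt > maxRetries) return false; // give up
        // "Live-reload for failures": wait for the agent's fix,
        // then retry just this step.
        await askAgentToFix(step.name, err);
      }
    }
  }
  return true; // everything green
}
```

The real tool adds the Docker image, workflow parsing, and the emulated control plane on top of a loop shaped like this.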
aulneau retweeted
Nathan Flurry 🔩 @NathanFlurry
🦀 Introducing Antiox: Rust- and Tokio-like async primitives for TypeScript. Channels, streams, mutex, select, time, and 12 more modules.

$ npm install antiox

Code snippets & GitHub below

---

We did an assessment at @rivet_dev of the bugs in our TypeScript codebases. The #1 issue, by far, was async concurrency bugs. Every time we use a Promise, AbortController, or setTimeout, an exponential number of edge cases is created. Reasoning about async code becomes incredibly difficult very quickly.

But here's the catch: these classes of errors are completely absent from our Rust codebases. And it's not for the reasons you usually hear about "Rust safety." Why? Tokio (the popular async Rust runtime) provides S-tier async primitives that make handling concurrency clean and simple.

So we rebuilt them all in TypeScript.

---

Concurrency: JavaScript is a single-threaded runtime, but the second you start running multiple promises in parallel, your potential bugs increase exponentially.

How Antiox helps: the most common pattern is pairing a channel (aka stream) with a task (background Promise) to build an actor-like system. All communication is done via channels. This helps us manage concurrency control, setup/teardown race conditions, and observability. Almost everything we do in Rivet's Rust code follows this model 1:1 using Tokio. See the screenshot in the thread for an example.

Other primitives that we use frequently:
- Select: switch, but for async promises
- Mutex & RwLock: control concurrent access to a resource
- OnceCell: initialize something async globally, once
- Unreachable: type-safe error on switch-statement fallthroughs
- Watch: notify on value change
- Time: interval, sleep, timeout, etc.
- A bunch more

---

Comparable libraries: Effect is a lightweight runtime that already does a great job solving this problem. I recommend evaluating Effect, as it is a more comprehensive library for error handling, concurrency, and all things TypeScript.

However, for our use case it was still too heavy: we ship inside our library and want to stay lean with minimal overhead. It's also (personally) very hard to reason about memory allocations in Effect, so we prefer vanilla TS whenever possible. We looked at effect-smol too, but it doesn't give us the required functionality, so we'd have to ship the full Effect runtime as a dependency of RivetKit & co if we used it.

Antiox does not tackle error handling like Rust. Consider better-result or Effect for this; we personally prefer the native JS runtime's error handling.

There are other libraries that try to make TypeScript more Rust-y, but those focus on things like Result, ADTs, and match. Antiox focuses on minimal memory allocations and overhead, e.g. we do not provide a `match({ ... })` handler that requires allocating an object for a fancy switch statement.

There are other libraries for async primitives in TypeScript, but we know Rust like the back of our hand, and its APIs are incredibly well designed thanks to the hard work of many WGs and RFCs. Other async libraries tend to have learning curves and huge gaps in their APIs that we don't find with Rust's. Plus, LLMs know Rust/Tokio very well, and we're finding this translates to Antiox.

We recommend pairing Antiox with:
- @dillon_mulroy's better-result for Rust-like error handling
- Pino for Tracing-like logging (but it lacks spans)
- Zod for Serde-like serialization (duh)
- Need to find: a thiserror replacement

---

Quite frankly, an LLM can usually one-shot most of these modules. We're not doing anything hard here. But having this all in one package has removed significant duplicate code within our codebases, and we hope it can help you too.

---

Currently supported modules:
- antiox/panic (199 B)
- antiox/sync/mpsc (1.4 KB)
- antiox/sync/oneshot (625 B)
- antiox/sync/watch (677 B)
- antiox/sync/broadcast (936 B)
- antiox/sync/semaphore (845 B)
- antiox/sync/notify (466 B)
- antiox/sync/mutex (606 B)
- antiox/sync/rwlock (778 B)
- antiox/sync/barrier (528 B)
- antiox/sync/select (260 B)
- antiox/sync/once_cell (355 B)
- antiox/sync/cancellation_token (357 B)
- antiox/sync/drop_guard (169 B)
- antiox/sync/priority_channel (1.0 KB)
- antiox/task (932 B)
- antiox/time (530 B)
- antiox/stream (3.0 KB)
- antiox/collections/deque (493 B)
- antiox/collections/binary_heap (492 B)

"Antiox" = "Anti Oxide", and short for antioxidant. (And let's be honest, we usually wish we were writing Rust instead of TypeScript. But the world runs on JS.)
20 replies · 27 reposts · 355 likes · 21.6K views
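The "channel paired with a background task" pattern the thread describes can be illustrated in vanilla TypeScript. This sketch does not use Antiox's actual API (which is only shown in the thread's screenshot); the `Channel` class and `spawnCounter` actor are invented here purely to show the shape of the pattern.

```typescript
// A minimal unbounded mpsc-style channel: senders enqueue values,
// a receiver awaits them. (Toy version; assumes T never contains
// `undefined`.)
class Channel<T> {
  private queue: T[] = [];
  private waiters: ((v: T) => void)[] = [];

  send(value: T): void {
    const waiter = this.waiters.shift();
    if (waiter) waiter(value); // hand directly to a pending recv()
    else this.queue.push(value); // otherwise buffer it
  }

  recv(): Promise<T> {
    const v = this.queue.shift();
    if (v !== undefined) return Promise.resolve(v);
    return new Promise((resolve) => this.waiters.push(resolve));
  }
}

// An "actor": a background task that owns its state and only
// communicates through channels, mirroring the Tokio pattern.
function spawnCounter(
  rx: Channel<number>,
  done: Channel<number>,
  n: number,
): void {
  (async () => {
    let total = 0;
    for (let i = 0; i < n; i++) total += await rx.recv();
    done.send(total); // report the result back over a channel
  })();
}
```

Because all state lives inside the task and every interaction goes through `send`/`recv`, the setup/teardown races the thread complains about have one well-defined choke point.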
aulneau retweeted
Ramp Labs @RampLabs
Today, we're releasing Ramp CLI to let agents manage your company's finances. 50+ tools across cards, bills, expenses, travel, and approvals. Fewer tokens than MCP, and comes with pre-built skills like receipt compliance and agentic purchasing.
101 replies · 105 reposts · 2.5K likes · 564.1K views
aulneau retweeted
rahul @rahulgs
seems obvious but:

things that are changing rapidly:
1. context windows
2. intelligence / ability to reason within context
3. performance on any given benchmark
4. cost per token

things that are not changing much:
1. humans
2. human behavior, preferences, affinities
3. tools, integrations, infrastructure
4. single-core cpu performance

therefore, ngmi:
1. "i found this method to cut 15% context"
2. "our method improves retrieval performance 10% by using hybrid search"
3. "our finetuned model is cheaper than opus at this benchmark"
4. "our harness does this better because we invented this multi agent system"
5. "we're building a memory system"
6. "context graphs"
7. "we trained an in house specialized rl model to improve task performance in X benchmark at Y% cost reduction"

wagmi:
1. product/ui
2. customer acquisition
3. integrations
4. fast linting, ci, skills, feedback for agents
5. background agent infra to parallelize more work
6. speed up your agent verification loops
7. training your users, connecting to their systems and working with their data, meeting them where they are
111 replies · 227 reposts · 3.2K likes · 392K views
sebastian @scspeier
Over the past two months, my most impactful move as a design manager has been encouraging and supporting every designer to start building with Claude Code.

Designers used to create a "source of truth" in Figma that was used for QA and accountability: if anything was wrong, you'd point to the designs and say, "This is how it's supposed to look." The designers would take this artifact and hand it off to engineers to build another "source of truth" on GitHub, the repo that other engineers could fork and build on top of.

Now the designer creates the source of truth on GitHub, and it's closer to "the designs" because it is the designs. Less gets lost in translation, and everyone speaks in code. Figma remains useful for napkin sketches and quick visual experiments, but we are clearly moving beyond Figma as the primary document.

The hardest part is getting set up; it's daunting and scary leaving the things you love behind. People just need a little bit of emotional support to get started. But after taking the leap, everyone is feeling empowered and excited. They are all literally 10x'ing their productivity. The worst part is that engineers are a bit overwhelmed because they have way more PRs to review.
32 replies · 26 reposts · 529 likes · 45.1K views
aulneau retweeted
Aaron Boodman @aboodman
Zero to 1.0 After two years of work, 50+ releases, thousands of commits, and hundreds of bugfixes, we are officially declaring Zero stable and ready for production workloads. zero.rocicorp.dev/docs/release-n…
53 replies · 89 reposts · 877 likes · 58.8K views
aulneau retweeted
Linear @linear
Issue tracking is dead. We are building what comes next. linear.app/next
200 replies · 248 reposts · 3.9K likes · 1.9M views
aulneau @aulneau
what did they do to my boy claude
2 replies · 0 reposts · 3 likes · 424 views
aulneau retweeted
shadcn @shadcn
@joebell_ Remember how these used to take hundreds of lines of code and were sold as premium plugins for WordPress?
12 replies · 2 reposts · 763 likes · 25.1K views
Boshen @boshen_c
@voidzerodev will be releasing a lot of new stuff this week. Follow us if you don’t want to FOMO.
9 replies · 15 reposts · 377 likes · 38K views