Pavlo @fxposter
7.7K posts

Systems engineer @WixEng, husband and father
Ukraine · Joined September 2007
964 Following · 393 Followers

Pavlo @fxposter
@shipilev are you sure it is not mythos already? :)

Aleksey Shipilëv @shipilev
I honestly would enjoy either outcome, whether it runs into the ground, or takes off brilliantly. But I suspect the real outcome would be limping through the lingering issues for quite a while. The timing is just right (chef's kiss) for Mythos to come out of closed preview.

Armin Ronacher ⇌ @mitsuhiko
Let's see how this /goal thing in codex' harness is doing.
[tweet media]

Tanishq (tk) 🌊 @tanishqk
I built pi-live-terminal - a live view of any persistent or long-running commands, powered by Just™ tmux:
> you or the agent can start and manage panes
> shortcuts to focus, kill or close panes
> LLMs already know tmux and can interact with panes natively
Here's Pi running inside of itself. cc @badlogicgames
Use it for:
- dev servers
- interactive commands like publishing to npm
- running tests in the background
- having Pi test its extensions, visible to you

Mario Zechner @badlogicgames
People of pi.dev. I'm heads down in the big refactor (see bigrefactor branch). To have any chance of finishing it, I must pause issue triage for the next 2 weeks. All issues filed during that period will not be reviewed. In case of emergency -> Discord

Armin Ronacher ⇌ @mitsuhiko
@fxposter Codex uses permessage-deflate for messages to the server. The websocket library in node doesn’t support that. It also forces max size of 4MB post decompression.

fks @FredKSchott
Introducing Flue — The First Agent Harness Framework

Flue is a TypeScript framework for building the next generation of agents, designed around a built-in agent harness. Flue is like Claude Code, but 100% headless and programmable. There's no baked in assumption like requiring a human operator to function. No TUI. No GUI. Just TypeScript.

But using Flue feels like using Claude Code. The agents you build act autonomously to solve problems and complete tasks. They require very little code to run. Most of the "logic" lives in Markdown: skills and context and AGENTS.md.

Flue is like Astro or Next.js for agents (not surprising, given my background 🙃). It's not another AI SDK. It's a proper runtime-agnostic framework. Write once, build, and deploy your agents anywhere (Node.js, Cloudflare, GitHub Actions, GitLab CI/CD, etc).

We originally built Flue to power AI workflows inside of the Astro GitHub repo. But then @_bgiori got his hands on it, and we realized that every agent needs a framework like Flue, not just us.

Check it out! It's early, but I'm curious to hear what people think. Are agents ready for their library -> framework moment?
[tweet media]

Jeremy Howard @jeremyphoward
What search providers are you all using with openclaw/pi/opencode/etc? Brave; serpapi; gemini; ...? Got any favorites?

Sam Saffron @samsaffron
TIL ... unlike 5.4, the 5.5 endpoint on codex is limited to a 272k context. Which is fine, just something I need to update.
[tweet media]

Pavlo @fxposter
@theo I actually think that 5.5 would not have "cheaper short context" at all, and that all requests up to 1M would be priced the same. We'll see when they update the pricing page. But yeah, for most existing flows it would be 2x more expensive per token.

Theo - t3.gg @theo
$5 per mil in, $30 per mil out. GPT-5.5 is smart. I've been using it for a bit. It's also weird, hard to wrangle, and too expensive IMO. Double the price of GPT-5.4. 20% more expensive than Opus 4.7.
OpenAI @OpenAI

Introducing GPT-5.5. A new class of intelligence for real work and powering agents, built to understand complex goals, use tools, check its work, and carry more tasks through to completion. It marks a new way of getting computer work done. Now available in ChatGPT and Codex.
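Theo's numbers translate into a simple per-request cost calculation; a quick sketch (the token counts are made-up examples, not real measurements):

```typescript
// Quoted GPT-5.5 rates: $5 per million input tokens, $30 per million output tokens.
const INPUT_USD_PER_MTOK = 5;
const OUTPUT_USD_PER_MTOK = 30;

function requestCostUSD(inputTokens: number, outputTokens: number): number {
  return (
    (inputTokens / 1_000_000) * INPUT_USD_PER_MTOK +
    (outputTokens / 1_000_000) * OUTPUT_USD_PER_MTOK
  );
}

// Example: a 200k-token context producing a 4k-token reply.
console.log(requestCostUSD(200_000, 4_000).toFixed(2)); // "1.12"
```

At half those rates (per the "double the price of GPT-5.4" claim), the same request would cost $0.56, which is the 2x-per-token difference for long-context flows.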

Pavlo @fxposter
@mitchellh ah, so you're not using it via the harness LSP integration, but instead querying "directly" by the LLM. Didn't think about it. Though "go doc" is what gpt models know how to work with, too.

Mitchell Hashimoto @mitchellh
@fxposter Saving a ton of tokens for finding things like interface implementations, API callers, etc. You don't even need an agent skill; a one-paragrapher in your AGENTS.md that's basically "for finding APIs or callers use gopls" works.
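A minimal sketch of the kind of AGENTS.md paragraph Mitchell describes (wording is illustrative, not his exact text):

```markdown
## Finding APIs and callers

Don't grep the tree for definitions or call sites. Use the Go tooling:
`go doc <pkg>.<Symbol>` for any exported symbol's documentation, and the
`gopls` CLI (e.g. `gopls references`, `gopls implementation`) for callers
and interface implementations. These return precise results in a few lines
and save a lot of context compared to reading whole files.
```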

Mitchell Hashimoto @mitchellh
I'm writing Go again (for what, you'll see later...). `go doc` and `gopls` are like agent superpowers, and it's shocking how productive agents are out of the box at writing [good] Go code versus other languages I've used (including the JS ecosystem).

Also, Go + Zig is a good mix. Go for the higher-level and concurrent stuff, and then no-libc Zig code plus the Zig compiler for zero-dependency cross-compiled cgo with high-performance characteristics (minimize cgo boundary crosses). Chef's kiss.

It's funny because a lot of the shitty ergonomics of Go CLIs like `go doc` and `gopls` (prev. stuff like `go oracle` or `guru`) are totally obviated by agents, and not just that, but in a twist of irony they're excellent for agents.

Don't worry, it's not Ghostty. Ghostty and libghostty will remain pure Zig; it's a fantastic fit and a perfect pairing. This is for something else.

"Wait, I thought you said Go has no place anymore?" I was wrong, mostly because agents are so productive at Go. I won't bring other languages into this discussion because I don't want to feed the crabs, so to speak. lol.

Shayan @ImSh4yy
Wait, Cloudflare is now offering Kimi K2.6 at the same price as Moonshot AI but with much lower latency and higher throughput?
[tweet media]

JP (Zenoware) @Zenoware
@ImSh4yy They're also the only provider right now that works out of the box with OpenCode.

Armin Ronacher ⇌ @mitsuhiko
There is VoiceInk and there is Handy. Is there more? What's your favorite Open Source dictation app and why?

Pavlo retweeted
Filip Jerzy Pizło @filpizlo
"Stop" might be the right word for concurrent collectors that aren't on-the-fly, and even then it's kind of sketchy. Folks often end up with concurrent GCs where the time the mutator is "stopped" is less than the time that a C or Rust mutator would be "stopped" waiting for mmap to return the next available page for an allocation that is growing the heap.

For example, in FUGC (which is on the fly), the closest thing to "stopping" is a thread scanning its own stack. But what if C++ exceptions have to scan the stack? We don't say C++ "stops" for that, even though C++ exception unwinding is slower than FUGC stack scans.

The discussion of "stopping" and "pausing" is detrimental to a good understanding of GC overheads and *why* GCs are often a bad idea. The reasons why they're a bad idea:

- GCs will use more memory than a program that uses malloc/free.
- GCs cause memory leaks. Relatedly, GCs make it impractical to estimate max memory footprint in systems where this needs to be known statically.
- GCs increase page demand (the rate at which a program asks for a page that isn't mapped).
- GCs cause a tremendous amount of memory traffic. (A concurrent on-the-fly GC will have an effect on latency that looks like a pause, because while it's running it's slamming the memory subsystem.)
- GCs have positive feedback. They are likely to be triggered when the system is under stress. Hence, they make stress conditions worse.
- GCs tend to be inelastic. That is, programs that suddenly free up a bunch of memory will tend to sit on it because the GC hadn't run yet.
- GCs play poorly with real-time locking and scheduling (even real-time GCs that have a story for this have a bad story, in the sense that schedulability analysis gets way nastier than it would be otherwise).

These are all good reasons to avoid GCs. Folks should know these reasons. Folks working on GCs should try to address these reasons directly. Talking about "stopping" and "pausing" just confuses the situation by making it harder to see the real problems.

Pavlo @fxposter
@thsottiaux Is it only for the codex app? If so, any plans on making it available in the CLI?

Tibo @thsottiaux
Codex
Compute efficient ✅
Always up, never down ✅
Best at hardcore engineering ✅
Crazy good app, first to escape the terminal ✅

Pavlo @fxposter
@0xSero I feel that this will finally be resolved only when fixed-price subscriptions are over :)

0xSero @0xSero
@fxposter Why is it a gray area? It's been a year. Wouldn't it be professional to give out a clear, concise description of what usage is and isn't allowed?

0xSero @0xSero
I figured out how to get Claude working anywhere without extra usage, and technically in line with Anthropic's TOS. I don't recommend you adopt this (it might get you banned), but as you can see: no errors, no extra usage consumed. The point of this is to demonstrate futility.