Eric Fredine

7.1K posts

Eric Fredine

@fredine

Principal engineer writing code for fun and profit. Lapsed photographer. Husband. Father. Aspiring fly fisher.

Vancouver, British Columbia · Joined July 2008
3.1K Following · 1.3K Followers
dax @thdxr
They said Cursor's data flywheel would make them unstoppable, but then Claude Code came out. They said Claude Code's data flywheel would make them unstoppable, but then Codex came out. They said Codex's data flywheel would make them unstoppable. Then Composer 2 came out.
32
5
268
13.3K
Eric Fredine @fredine
I've started using Codex with JetBrains and I'm liking it better than Cursor, using the same GPT-5.4 medium model in both. Two main reasons:
- Cursor is a CPU hog, which often makes it unusable.
- The explicit plan mode in Cursor adds friction and confusion.
0
0
0
139
Eric Fredine @fredine
It's mind-bogglingly bad that Cursor becomes unusable while the agent is working through any non-trivial task.
0
0
2
117
Eric Fredine reposted
Stephen King @StephenKing
My fave Chuck Norris joke: Chuck doesn't flush the toilet, he scares the shit out of it.
720
6.9K
55.3K
1.7M
Eric Fredine reposted
Robert Balicki (👀 @IsographLabs)
Something you're missing is having the agent do less work. So, for example, instead of having the agent run ESLint, you instead have a function that runs ESLint and only invokes an agent to fix the issues if issues are actually found. That's the premise of Barnum. x.com/StatisticsFTW/… This also makes it extremely easy to prevent context bloat.
0
1
2
137
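Balicki's gating idea above is easy to sketch: run the linter deterministically, and only pay for an agent call when there are actually issues to fix. This is a minimal illustration, not Barnum's actual interface; the `invoke_agent` callback and the `npx eslint` invocation are placeholders for whatever agent and lint setup you use.

```python
import json
import subprocess

def collect_issues(eslint_json: str):
    """Flatten ESLint's `--format json` output into a single list of messages."""
    return [m for f in json.loads(eslint_json or "[]")
            for m in f.get("messages", [])]

def lint_then_maybe_fix(paths, invoke_agent):
    """Run ESLint deterministically; invoke the agent only when issues exist.

    `invoke_agent` is a hypothetical callback standing in for your agent
    CLI or API; nothing here is Barnum's real configuration.
    """
    result = subprocess.run(
        ["npx", "eslint", "--format", "json", *paths],
        capture_output=True, text=True,
    )
    issues = collect_issues(result.stdout)
    if not issues:
        return "clean"  # zero agent calls, zero tokens spent
    # Hand only the (small) issue list to the agent, keeping context lean.
    return invoke_agent(f"Fix these ESLint issues:\n{json.dumps(issues)}")
```

The key property is that the common case (a clean lint run) never touches the model at all, which is also what keeps context from bloating.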
Charlie Marsh @charliermarsh
We've entered into an agreement to join OpenAI as part of the Codex team. I'm incredibly proud of the work we've done so far, incredibly grateful to everyone that's supported us, and incredibly excited to keep building tools that make programming feel different.
285
146
3.1K
465.5K
Eric Fredine @fredine
The AIs are better at UI code than at writing efficient data-pipeline-style code. They can write efficient code, but won't do it without being walked through how.
0
0
2
166
Eric Fredine @fredine
@JustDeezGuy Hmm, yes. Learn the true network structure rather than just the weights in a static network.
1
0
2
46
Eric Fredine @fredine
@jamonholmgren I think this is how much we should be spending if we are taking best advantage of it.
0
0
1
12
Jamon @jamonholmgren
@fredine I have the Claude Code premium plan, Codex (not exactly sure what plan offhand, but it's robust), and a token-usage Cursor plan I'm willing to spend a decent amount on. Probably like $500/mo, but with this system I'm actually burning fewer tokens than when I was working interactively.
2
0
2
96
Jamon @jamonholmgren
Allow yourself to feel the pain. Nothing will motivate you to improve your workflow, docs, and specs like waking up to a mess.
Aryan Agal @aryanagxl

@jamonholmgren Great ideas all round. I'm just not confident about keeping agents running around the clock; what if I miss something before I sleep? This is expensive.

8
0
50
5K
Eric Fredine reposted
Robert Balicki (👀 @IsographLabs)
Introducing Barnum, or... how I ship hundreds of PRs per week, burn through backlogs, and automatically fact-check documentation.

LLMs are incredibly powerful tools. But when we try to use them to drive more complicated refactors or more intricate workflows, their shortcomings are quickly revealed. When their context gets full, they get forgetful, and they can't be relied upon to do the steps that you ask. They often cut corners. Put simply, having an inherently probabilistic process perform what should be deterministic work necessarily comes at the cost of reliability. And you can't build a complicated workflow on unreliable foundations.

That's where Barnum comes in. Barnum is the missing workflow engine for agents. Rather than having agents be responsible for upholding guarantees (e.g., always lint and commit your changes atomically), agents instead do just what they're good at: reading text and reasoning. Everything else is done deterministically, on the outside, by Barnum.

This means you can build bigger, more involved workflows without sacrificing reliability. Because you can intersperse bash scripts, you save on token usage. The agents performing a micro-task receive only the instructions for that specific task, so their context does not get overwhelmed and they don't get forgetful. And because all inputs, outputs, and transitions are validated, the agents can't wriggle out of doing the work.

The workflow is essentially a state machine described in a config file. And the best part? The configuration has a JSON schema, so agents are actually really good at writing the workflow!

It's already been used to ship hundreds of PRs, run automated refactors, burn through various backlogs, fact-check every statement in documentation, and build a deep-research clone! The attached image is a representation of the workflow I use to identify and implement automated refactors. I follow this up with a separate workflow that splits each commit into a separate PR, judges the refactor, and potentially completes the refactoring (for example, by modifying call sites if the refactor changed some public API).

So go on, give it a try. Check out barnum-circus.github.io, star the repository, and join the Discord! I can't wait to see what you build with it! And I'd love for you to get involved!
[attached image: workflow diagram]
5
12
89
15.8K
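The "state machine described in a config file" idea above can be sketched in a few lines. This is a hypothetical illustration of the pattern, not Barnum's real config format: step names, the `agent` callback, and the workflow shape are all invented. Agent steps see only their own instructions; everything else runs deterministically.

```python
# A workflow as plain data: agent steps do the reading and reasoning,
# script steps run deterministically, and the runner owns all transitions.
WORKFLOW = {
    "start": "find_candidates",
    "steps": {
        "find_candidates": {"kind": "agent",
                            "prompt": "List refactor candidates.",
                            "next": "run_checks"},
        "run_checks":      {"kind": "script",
                            "run": lambda ctx: ctx | {"checks": "passed"},
                            "next": "implement"},
        "implement":       {"kind": "agent",
                            "prompt": "Apply the refactor.",
                            "next": None},
    },
}

def run_workflow(workflow, agent, ctx=None):
    """Walk the state machine; the LLM never controls the transitions."""
    ctx = dict(ctx or {})
    name = workflow["start"]
    while name is not None:
        step = workflow["steps"][name]
        if step["kind"] == "agent":
            # Each agent call sees only this step's prompt: no context bloat.
            ctx[name] = agent(step["prompt"], ctx)
        else:
            ctx = step["run"](ctx)
        name = step["next"]
    return ctx
```

Because the workflow is data, it can live in a config file with a schema, which is what makes it easy for agents themselves to author.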
Eric Fredine reposted
Armin Ronacher ⇌ @mitsuhiko
Great things are for stealing! I love Oxide's RFDs. We tried something similar at Sentry, but it was just tricky to do it with GitHub issues and pull requests. Earendil's attempt is now based on Google docs synced to a repo and website. rfc.earendil.com
9
8
151
17.2K
Eric Fredine @fredine
@zombodb They could have a checkbox: “avoid left turns because of course I hate them”.
0
0
0
7
ZomboDB @zombodb
@fredine My mother goes out of her way to avoid left turns. Google could consult with her!
1
0
1
18
Eric Fredine @fredine
The Google Maps routing algorithm needs a bigger penalty for the cognitive load of needing to make a left turn at busy intersections.
1
0
1
150
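The routing complaint above amounts to adding a turn cost on top of edge cost. A minimal sketch, assuming a toy directed graph where each edge carries a compass heading and left turns at "busy" nodes incur an extra penalty; the graph shape, headings, and penalty value are all invented for illustration, not Google Maps' actual model.

```python
import heapq

LEFT_PENALTY = 50  # hypothetical extra cost for a left turn at a busy node

def turn_cost(prev_heading, heading, busy):
    """Headings are 0=N, 1=E, 2=S, 3=W; a left turn is a -90 degree change."""
    if prev_heading is None or not busy:
        return 0
    return LEFT_PENALTY if (heading - prev_heading) % 4 == 3 else 0

def route(graph, busy, src, dst):
    """Dijkstra where the search state is (node, incoming heading),
    so the cost of an edge can depend on how you arrived."""
    pq = [(0, src, None)]
    best = {}
    while pq:
        cost, node, heading = heapq.heappop(pq)
        if node == dst:
            return cost
        if best.get((node, heading), float("inf")) <= cost:
            continue
        best[(node, heading)] = cost
        for nxt, weight, new_heading in graph.get(node, []):
            extra = turn_cost(heading, new_heading, node in busy)
            heapq.heappush(pq, (cost + weight + extra, nxt, new_heading))
    return None
```

With a high enough penalty, the search naturally prefers a longer route that avoids the left turn, which is exactly the behavior the tweet is asking for.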
Eric Fredine reposted
Brandur @brandur
Ironically, hypermedia (HATEOAS) has accidentally become a plausible API design scheme again. LLMs will robustly follow API links, just as its designers hoped.
14
19
303
33.8K
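The hypermedia idea above is that a response advertises its own next actions as links, so a client (human, script, or LLM) discovers URLs from the response rather than hard-coding them. A minimal HAL-style sketch; the resource shape and link relations are invented for illustration.

```python
# A hypothetical API response that embeds its available actions as links.
ORDER = {
    "id": 42,
    "status": "open",
    "_links": {
        "self":   {"href": "/orders/42"},
        "cancel": {"href": "/orders/42/cancel"},
        "items":  {"href": "/orders/42/items"},
    },
}

def follow(resource, rel):
    """Resolve the href for a link relation, as a link-following client would."""
    link = resource["_links"].get(rel)
    if link is None:
        raise KeyError(f"no '{rel}' link; available: {sorted(resource['_links'])}")
    return link["href"]
```

An LLM agent handed this response can pick the `cancel` relation by reading the keys, which is the "robustly follow API links" behavior Brandur describes.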
Eric Fredine reposted
Jonathan Gorard @getjonwithit
I think one of the conclusions we should draw from the tremendous success of LLMs is how much of human knowledge and society exists at very low levels of Kolmogorov complexity. We are entering an era where the minimal representation of a human cultural artifact... (1/12)
189
495
4.5K
749K
Lewis Campbell @LewisCTech
@fredine This has been a good communication exercise for me, because I never thought of merely having an ID that points to something else as a foreign key: for me it was the constraint itself that made it foreign. I will be more precise in the future.
1
0
1
49
Lewis Campbell @LewisCTech
Foreign keys - do we really even need them? Discuss.
26
0
19
3.8K
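The distinction in this thread, a bare ID column that merely points at another table versus a declared FOREIGN KEY constraint the database enforces, can be demonstrated with SQLite in a few lines. Table and column names are invented; note SQLite only enforces foreign keys when the pragma is on.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # without this, SQLite ignores FKs
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY)")
conn.execute("""
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        user_id INTEGER REFERENCES users(id)  -- the constraint itself
    )
""")
conn.execute("INSERT INTO users (id) VALUES (1)")
conn.execute("INSERT INTO orders VALUES (10, 1)")  # valid reference

dangling_rejected = False
try:
    # With the constraint enforced, a dangling user_id is refused outright.
    conn.execute("INSERT INTO orders VALUES (11, 999)")
except sqlite3.IntegrityError:
    dangling_rejected = True
```

Drop the `REFERENCES` clause (or leave the pragma off) and the dangling insert succeeds silently, which is Campbell's point: the ID alone is just a number, and it's the enforced constraint that makes the key "foreign".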