Adam Crabtree

2.1K posts

@CrabDude

CEO & Founder, Impromptu AI @impromptuchat. Ex: Node Runtime / Jetpack EM @netflix, CorePlatform @pinterest, DataPlatform @clickup, Mobile @linkedin & Enyo @ Palm WebOS

Joined January 2023
176 Following · 188 Followers
Adam Crabtree@CrabDude·
I’ve never been able to get an answer from anyone who uses Symphony: how/why is it better than a single session with parallelizable subagents and /goal? You still must rebake the context into a task, the task is still the goal, they will usually require steering, and they can still integrate with Linear and work in phases. Yet Symphony is purely serial per task. So legit, what is the upside? What am I missing?
0 replies · 0 reposts · 0 likes · 39 views
Dan McAteer@daniel_mac8·
Symphony: OpenAI's AI Agent orchestrator. It's a must-have if you build with AI. Hands down the best AI Agent orchestrator I've used. This might sound hyperbolic... > It can make you *at least* 100x more productive. Allows me to create in a day what used to take 100. Open-source and included with your ChatGPT sub. Created a simple way to install and start it on Mac/Linux by asking your agent 👇
24 replies · 18 reposts · 297 likes · 33.9K views
Adam Crabtree@CrabDude·
@tszzl @brianluidog Big tone police, though they’ve improved. Very grating. Also, historically prone to refusing a request due to a misplaced sense of rightness, trivially circumventable though.
0 replies · 0 reposts · 2 likes · 427 views
Brian Lui@brianluidog·
It's interesting that Claude/Gemini are perceived as woke, but chatGPT is actually way more woke, Claude mostly is old school liberal and Gemini is a really committed centrist
8 replies · 1 repost · 101 likes · 27.5K views
Adam Crabtree@CrabDude·
@iruletheworldmo xhigh not worth it. Not a criticism. It’s just not the right sweet spot for timeliness (even in fast mode) and rate limit. Been fairly happy with 5.5 medium surprisingly.
0 replies · 0 reposts · 1 like · 305 views
🍓🍓🍓@iruletheworldmo·
how are people enjoying 5.5 now you’ve had time to play with it? do you use xhigh? is the timeline right that anthropic’s run is over? so many questions. lmk chat
46 replies · 0 reposts · 167 likes · 13.5K views
Adam Crabtree@CrabDude·
@mark_k @OpenAI Just the most recent example. The noise here is crazy and nothing in the prompt justified it.
[image attached]
1 reply · 0 reposts · 0 likes · 11 views
Mark Kretschmann@mark_k·
The image artifact problem with GPT-Image-2 needs *urgent* fixing, @OpenAI. It ruins many images completely. I just tried to generate an image in a classic painting style, and it's completely ruined by artifact patterns:
[image attached]
80 replies · 13 reposts · 268 likes · 26.8K views
Brett Calhoun@brettcalhounn·
What separates founders who make it: THEY DO NOT GIVE UP. Ever.
32 replies · 11 reposts · 212 likes · 8.2K views
Adam Crabtree reposted
Sam Altman@sama·
you can sign in to openclaw with your chatgpt account now and use your subscription there! happy lobstering.
1.1K replies · 1K reposts · 20.9K likes · 2.1M views
🍓🍓🍓@iruletheworldmo·
i just don’t understand people that claim asi is coming but there’ll be more jobs. can someone explain to me how trillions of digital beings that don’t sleep and can do all digital work better than all of humanity combined won’t just end the need for human work?
259 replies · 27 reposts · 611 likes · 40.4K views
Adam Crabtree@CrabDude·
@VictorTaelin I’m sure for many “the first time” was something more like,
> Hello
Hi
> Tell me something about yourself
I’m GPT-2
> Who fought in WW2
Ze Germans
0 replies · 0 reposts · 0 likes · 112 views
Taelin@VictorTaelin·
I wonder if researchers at early OpenAI had their hearts racing when they got pivotal results (like talking to GPT-2 for the first time) or they were totally chill about it
45 replies · 4 reposts · 653 likes · 60.9K views
Adam Crabtree reposted
Felipe Coury 🦀
I forgot that /goal is experimental. Enabled it by adding this to ~/.codex/config.toml: [features] goals = true
41 replies · 60 reposts · 1.2K likes · 95.3K views
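For anyone wanting to try the same toggle, the tip above amounts to this fragment of ~/.codex/config.toml (a sketch of just the feature flag; any other settings already in the file stay as they are):

```toml
# ~/.codex/config.toml
# Enables the experimental /goal feature described in the tweet above.
[features]
goals = true
```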
Adam Crabtree reposted
Felipe Coury 🦀
@ynkzlk [features] goals = true in your config.toml
6 replies · 19 reposts · 335 likes · 25.7K views
Adam Crabtree reposted
Felipe Coury 🦀
/goal also lands in Codex CLI 0.128.0. Our take on the Ralph loop: keep a goal alive across turns. Don't stop until it's achieved. Built by my co-worker and OpenAI mentor Eric Traut, aka the Pyright guy. One of the GOATs I get to work with daily.
167 replies · 237 reposts · 3.5K likes · 836.2K views
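The "keep a goal alive across turns, don't stop until it's achieved" idea reads naturally as a bounded retry loop. This is only a toy sketch of that pattern, not Codex's actual implementation; `step` and `achieved` are hypothetical stand-ins for one agent turn and a goal check:

```python
def goal_loop(step, achieved, max_turns=50):
    """Re-run `step` until `achieved()` reports success: the 'keep the goal
    alive across turns' pattern, bounded by a max_turns budget."""
    for turn in range(1, max_turns + 1):
        step()            # one agent turn working toward the goal
        if achieved():    # the goal check persists across turns
            return turn   # how many turns it took
    return None           # goal not reached within the budget

# Toy stand-in for an agent: "succeeds" on its third attempt.
state = {"attempts": 0}
turns_taken = goal_loop(
    step=lambda: state.update(attempts=state["attempts"] + 1),
    achieved=lambda: state["attempts"] >= 3,
)
# turns_taken is 3 here
```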
Adam Crabtree@CrabDude·
I’ve participated in many AI-first interviews, and generally it’s very Wild West, reading the tea leaves. For instance, I had one interview where it was a dealbreaker that I wasn’t reviewing the code as it was being generated, which any good AI-first dev knows is a losing proposition.
0 replies · 0 reposts · 0 likes · 52 views
Justine Moore@venturetwins·
It’s wild to see how many of the best AI startups have added work trials as a key part of the hiring process. When things are moving this fast, a candidate’s background on paper is often less relevant than what they can do with the tools in front of them.
49 replies · 4 reposts · 186 likes · 13.9K views
Adam Crabtree@CrabDude·
@VictorTaelin Hierarchical CLAUDE.md is the solution. Directories organized by domain context. Not sure wrt AGENTS.md equivalent. Code directory structures & organization must be reimagined bc the problem as you present it is untenable esp. wrt skill proliferation.
0 replies · 0 reposts · 0 likes · 63 views
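The hierarchical layout described above might look something like this — a hypothetical repo where each domain directory carries its own CLAUDE.md, so only the context relevant to the code being touched gets loaded (directory and file names other than CLAUDE.md are illustrative):

```
repo/
├── CLAUDE.md           # global conventions only; kept deliberately small
├── billing/
│   ├── CLAUDE.md       # billing-domain context, picked up when working here
│   └── ...
└── ingest/
    ├── CLAUDE.md       # ingest-domain context
    └── ...
```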
Taelin@VictorTaelin·
seriously, working with AI is MISERABLE for one and only one reason: having to re-explain the same thing. "oh yeah, this new session obviously doesn't know what proper case trees are, so let me explain it for the 5000th time in my life." I'm tired.

AGENTS.md doesn't solve this because it is impossible to fit the entire domain knowledge without nuking the context - it would be 1M+ tokens worth.

RAGs don't solve this; the agent won't search unknown unknowns.

SKILLs don't solve this unless I keep like a collection of 1750 skills with specific cuts of domain knowledge for each possible subset of my domain that I might need in a given chat, but that's a lot of manual work.

recursive LLMs or whatever don't solve this for the same reason: you can't dump a domain book and expect the agent will magically guess that it is supposed to search for a specific bit of knowledge. unknown unknowns.

fine-tuning doesn't solve this (OSS models suck, and OpenAI / Anthropic gave up on user fine-tuning).

I honestly think a good product around fine-tuning on your domain would be a major hit, and an underdog lab should take this opportunity.
667 replies · 181 reposts · 3.5K likes · 248.6K views
Adam Crabtree@CrabDude·
As a top 1% token user this matches my experience. Codex app with computer use & 5.5, fast mode, automations, memories, et al make for an excellent experience. I had been coming back to codex CLI since 5.4 as implementation gaps are increasingly proving to be the primary bottleneck and codex models in general are unquestionably superior in this regard.
0 replies · 0 reposts · 0 likes · 45 views
Rohan Varma@TheRohanVarma·
Many times a day now, people much smarter than me tell me they are Codex-pilled and that GPT-5.5 was a watershed moment for them. Engineers keep telling me the Codex App is the first interface that got them to leave the terminal agents behind. The Codex App is a fundamental shift in the way I work. I can't even imagine what I was doing before using the Codex App, but it definitely wasn't pretty. Check it out and let us know how to make it even better :)
94 replies · 19 reposts · 685 likes · 50.8K views
Adam Crabtree reposted
Tibo@thsottiaux·
A lot about OpenAI can be understood by realizing that a lot of us believe that we can at the same time deeply care, do the best work of our lives AND have fun too. No goblins to see here.
119 replies · 29 reposts · 1.8K likes · 84.4K views
Adam Crabtree@CrabDude·
Seems clear Mythos was a disaster of cost. The true successor to GPT 5.5, yet so embarrassingly expensive that they manufactured a bullshit “muh safety” excuse to buy more time, while simultaneously releasing a half-baked minor fix that looked good on paper so the execs could justify it. Convince me I’m wrong.
1 reply · 0 reposts · 0 likes · 71 views
Lisan al Gaib@scaling01·
the aura loss anthropic has suffered in april is insane
163 replies · 219 reposts · 8.5K likes · 386.7K views