am.will
@LLMJunky
10.3K posts

Founder | StarSwap Director of n number of agents. Thoughts are my own.

Texas · Joined December 2025
1.3K Following · 7.8K Followers
Pinned Tweet
am.will@LLMJunky·
I've been testing Spark agents extensively, and I think I've found the roadmap to using them effectively. The secret isn't the model. It's how you control the context that gets passed to it. Everything I've learned swarming with Spark in Codex. 👇 x.com/LLMJunky/statu…
am.will@LLMJunky

x.com/i/article/2025…

Replies 5 · Reposts 4 · Likes 99 · Views 29.6K
am.will@LLMJunky·
Whoa 🤯 @Supermicro Co-Founder was just arrested for smuggling $2.5B in @nvidia GPUs into China. I don't understand how you can be this wealthy, and take such an enormous risk. The AI race is heating up, and compute will be the key to "winning"
[image]
Replies 1 · Reposts 0 · Likes 1 · Views 42
am.will@LLMJunky·
Finally proud to announce that I've joined the GPU Minor Leagues. 2 x RTX 6000 Pro. I have six months to pay off the second GPU lol. You are all TERRIBLE influences.
[image]
Replies 82 · Reposts 8 · Likes 499 · Views 20.6K
am.will@LLMJunky·
Nice. @cursor_ai just dropped their new "Glass" alpha, and they're leaning heavily into the simplified coding GUI trend that's been blowing up lately. My first impressions are really positive. And just look at how insanely fast Composer 2 is. What are yours? Drop them 👇
Replies 20 · Reposts 8 · Likes 366 · Views 90.9K
Amar Patel@amar_patel·
@LLMJunky Interesting. My Codex experience has been total goldfish! It remembers the last prompt but doesn't have the continuity I'm looking for… I'll dig into it. I am so happy to be wrong about this, Will, it's been driving me nuts!
Replies 1 · Reposts 0 · Likes 1 · Views 7
Theo - t3.gg@theo·
T3 Code now supports Claude. If you have the Claude Code CLI installed and signed in, you can use it with T3 Code. Hopefully the lawyers won't make us remove this 🙃
[image]
Replies 44 · Reposts 5 · Likes 411 · Views 23.1K
am.will@LLMJunky·
@theo Lmao, crashing out. Love it. This is how I'll use Claude henceforth.
Replies 0 · Reposts 0 · Likes 2 · Views 56
am.will@LLMJunky·
Forgive me, not trying to be rude here, but Grok doesn't know how Codex actually works lol. I agree that Codex doesn't converse like Claude, but it absolutely does have memory, it does save sessions, and the entire session is visible to the agent. I just pulled this random thread from 38 days ago; it has 69% of the context window used, so roughly 180,000 tokens. Still remembers. Over a month old.
[image]
Replies 1 · Reposts 0 · Likes 1 · Views 17
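A quick back-of-the-envelope check on the numbers in that tweet. The 69% and ~180,000-token figures come straight from the tweet; the implied total window size is my own arithmetic, not a documented Codex limit.

```python
# Figures quoted in the tweet above (not official numbers).
used_tokens = 180_000
used_fraction = 0.69

# If 180K tokens is 69% of the window, the full window is:
total_window = used_tokens / used_fraction
print(round(total_window))  # ~260870, i.e. roughly a 260K-token window
```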
Amar Patel@amar_patel·
@LLMJunky grok.com/share/bGVnYWN5… I asked Grok to explain it to you. Like I said, I think this is definitely a limited-time problem, but in my experience over the last 5 days, Codex doesn't remember what you asked it five prompts ago in the same session. It doesn't converse like Claude does.
Replies 1 · Reposts 0 · Likes 0 · Views 33
am.will@LLMJunky·
@midego1 Mad because the fast model is 3x more expensive. Which means that if they had simply charged $5/mtok for the slow model, it would have been perfectly acceptable. In other words, you're mad because slow mode is too cheap. Can't make this stuff up.
Replies 0 · Reposts 0 · Likes 0 · Views 3
am.will@LLMJunky·
Oh, I read it. I just thought it was absolutely stupid. Every Composer model before this was fast mode only by default; if they had never released a "slow mode," you would be praising them for reducing the price. It's less than HALF of what Composer 1.5 was. But because they were nice and offered an even cheaper option, you want to call them criminals. It's so fucking absurd, dude. Instead of being happy that they 1) reduced the price by A LOT, and 2) launched a new slower variant at another 67% discount on top of that, you want to cry about it.
Replies 2 · Reposts 0 · Likes 0 · Views 15
am.will@LLMJunky·
Composer 1: $10/mtok
Composer 1.5: $17.50/mtok
Composer 2: $2.50 or $7.50/mtok
"Criminal move," lol. You can do everything to make people happy and they'll still shit on you.
Replies 4 · Reposts 0 · Likes 98 · Views 11.2K
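The pricing argument in these tweets can be checked with simple arithmetic. The dollar figures are the ones quoted in the thread; treat them as the author's numbers, not official Cursor pricing.

```python
# Per-million-token prices as quoted in the tweets above.
composer_1 = 10.00
composer_1_5 = 17.50
composer_2_slow = 2.50
composer_2_fast = 7.50

# Fast mode really is 3x the price of slow mode, as the reply says...
print(composer_2_fast / composer_2_slow)  # 3.0

# ...slow mode is a 67% discount off fast mode ("another 67% discount")...
print(round((1 - composer_2_slow / composer_2_fast) * 100))  # 67

# ...and even fast mode is less than half the Composer 1.5 price.
print(f"fast vs 1.5: {composer_2_fast / composer_1_5:.0%}")  # 43%
print(f"slow vs 1.5: {composer_2_slow / composer_1_5:.0%}")  # 14%
```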
am.will@LLMJunky·
@midego1 I'm just devastated. I just scrolled your profile and literally every comment was some toxic, nasty bullshit. Oh no, please don't mute me!
Replies 0 · Reposts 0 · Likes 0 · Views 4
am.will@LLMJunky·
@midego1 I understand perfectly well. You're a whiner who looks at most things through a pessimistic viewpoint. Toxic.
Replies 2 · Reposts 0 · Likes 0 · Views 8
am.will@LLMJunky·
Not a fan of this argument, though I think it's funny. The reality is the other labs aren't stealing data; they're stealing outputs, thinking traces, and whatnot. That's more or less stealing the compute used to train the models, which cost a shitload of money, money they did not have to spend. China has been doing this for decades. I wouldn't mind it so much if they weren't actively engaged in corporate espionage globally. Wrong is wrong.
Replies 0 · Reposts 0 · Likes 0 · Views 14
Dwayne@CtrlAltDwayne·
@theo @thdxr Can you steal something twice? Ironic, given Anthropic trained their models on stolen data in the first place.
Replies 1 · Reposts 0 · Likes 8 · Views 425
dax@thdxr·
where the heck is deepseek v4
Replies 39 · Reposts 9 · Likes 458 · Views 24.5K
Theo - t3.gg@theo·
@thdxr They need to steal another 150,000 prompts from Claude first
Replies 19 · Reposts 0 · Likes 256 · Views 9.2K