VolatilityX_intern

3K posts

@Volatx_intern

CS and Finance @ Oxford, Quant Trader core team: https://t.co/3erLHwbH78

Joined December 2024
50 Following · 768 Followers
Pinned Tweet
VolatilityX_intern@Volatx_intern·
Welcome to this VolatilityX Update! Announcement: @XVolatility now supports crypto news and analysis. $Voltx
Orangie@orangie·
what models are you using now for your openclaw since the Claude ban
MILA@milalolli·
Londonmaxxing to the max! Views from Canary Wharf - the Wardian
common(non)sense@katiabanina·
how is Londonmaxxing suddenly all over the timeline
VolatilityX_intern@Volatx_intern·
@Yuchenj_UW @karpathy my guess was it spins off subagents and they fail for some reason but the main agent doesn't know (happens in Opus 4.6 as well), but yours is totally different.
VolatilityX_intern retweeted
Yuchen Jin@Yuchenj_UW·
GPT-5.4 xhigh seems bad at following instructions. Last night I launched two AI research agents running @karpathy’s autoresearch.
Claude Opus 4.6 (high):
> ran for 12+ hours, 118 experiments done, still running
GPT-5.4 xhigh:
> stopped after 6 experiments
> blamed me for “manually interrupting” it
> I interrogated it
> It admitted it made a mistake and stopped the loop itself, despite an explicit LOOP FOREVER instruction in the md file. 💀
VolatilityX_intern@Volatx_intern·
Yes! This is the most frustrating thing! It gets worse with OpenClaw.
Quoting Yuchen Jin@Yuchenj_UW (tweet above)
VolatilityX_intern@Volatx_intern·
@kr0der mostly medium, switch to high if it gets complex. 75-80% medium
Anthony Kroeger@kr0der·
what level of reasoning are you using and finding success with for GPT 5.4? are we still spamming High or is Medium the new meta?
VolatilityX_intern@Volatx_intern·
@Zeneca Have been able to handle the dumb-head thing, but a lot of the time the main agent takes a task and doesn't do it ... if I recheck, it says it's taking time. Pretty odd! Checked the logs as well. I think it's subagent spawn issues, but that's frustrating.
Zeneca🔮@Zeneca·
To the people who say openclaw is absolute magic and increasing their productivity 100x... Are you still having to deal with it being a big dumb dumb buggy head several times a day? Where things just break randomly? Or do you truly have a magic system and setup where everything is just glorious and works perfectly?
VolatilityX_intern@Volatx_intern·
If a coding session goes on for a few hours, do GPT and Claude both degrade in service, or is it just that the mistakes start to surface?
VolatilityX_intern@Volatx_intern·
Londonmaxxing, that was a surprise! If we can get the DeFi summer vibe again, that would be awesome!
The Smart Ape 🔥@the_smart_ape·
lately i've been hitting my Claude token limits way faster than before. same workflows. same complexity. nothing changed on my end. so i started digging. and what i found explains a lot.

@AnthropicAI is bleeding money on Claude Code. a $200/month Claude Code subscription was burning ~$2,000 in compute costs last year. today? that same $200 plan consumes up to $5,000 in compute per month. that's a $4,800 loss. per user. per month.

but this is a classic playbook: burn billions to capture the market. subsidize everything. make the product feel cheap. then slowly tighten it. we've seen this before with @Uber, where rides used to cost $5. DoorDash delivered for free.

and it's not only Anthropic. all these AI tools, Claude Code, Cursor, Copilot, Codex, are priced below cost right now. every single one. the entire market is subsidized.

when the subsidies stop (and they will), one of two things happens:
1. prices go up significantly
2. features get throttled
or both. probably both.

this is why i think my token limits feel tighter. Anthropic is slowly adjusting in a quiet, gradual, "you probably won't notice" way. less compute per dollar. slightly more aggressive rate limiting.

i'm not mad about it. it makes business sense. you can't lose $4,800 per power user forever. but if you're building on top of these APIs, and i am, you need to plan for a world where AI compute costs 5-10x what you're paying today.

the cheap AI era will end. build accordingly.
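The unit-economics claim in the tweet above can be sanity-checked with a quick sketch. All dollar figures are the tweet's own claims, not verified numbers, and the 5-10x multiplier is its forecast, not an established fact:

```python
# Sketch of the subsidy arithmetic claimed in the tweet above.
# Figures are the tweet's assertions, not verified data.

def monthly_loss(subscription_price: int, compute_cost: int) -> int:
    """Provider's loss per user per month when compute exceeds revenue."""
    return compute_cost - subscription_price

def post_subsidy_price(current_price: int, multiplier: int) -> int:
    """What a plan would cost if compute costs rose by `multiplier`
    and were passed through to the subscriber."""
    return current_price * multiplier

# tweet's claim: a $200/month plan consuming up to $5,000 in compute
loss = monthly_loss(200, 5_000)
print(f"loss per user per month: ${loss}")  # matches the tweet's $4,800

# tweet's advice: plan for a 5-10x compute-cost world
low = post_subsidy_price(200, 5)
high = post_subsidy_price(200, 10)
print(f"budget range: ${low}-${high}/month")
```

Under these assumptions, a $200/month power user should budget for a $1,000-$2,000/month equivalent once subsidies end.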