shirtless

2.5K posts


@bicep_pump

severance floor manager https://t.co/w8rSqB91tj

nyc · Joined May 2023
921 Following · 1.3K Followers
Pinned Tweet
shirtless@bicep_pump·
@litcapital cartel's about to get a $200k/yr enterprise quote and a 90-day onboarding call
4 · 5 · 694 · 83.4K
shirtless retweeted
yaml@blended_jpeg·
bad claude..
562 · 1.9K · 20.8K · 4.8M
shirtless@bicep_pump·
one must be observationmaxxing
0 · 0 · 1 · 32
shirtless@bicep_pump·
one of our sessions alone did 319 requests in 90 min and burned 467k tokens across code review + debugging. the model entered a loop, working until the issue was resolved
0 · 0 · 1 · 47
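The per-request arithmetic behind those numbers, as a quick sketch (all figures taken from the tweet above):

```python
# Quick arithmetic on the session stats: 319 requests, 90 min, 467k tokens.
requests, minutes, tokens = 319, 90, 467_000

req_per_min = requests / minutes    # ≈ 3.5 requests per minute
tokens_per_req = tokens / requests  # ≈ 1464 tokens per request

print(f"{req_per_min:.1f} req/min, ~{tokens_per_req:.0f} tokens/request")
```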
shirtless@bicep_pump·
token ratios from the past 24h of coding sessions through @innies_computer: highest: 242:1 input to output. team avg: 93:1. for every line an agent writes, it reads ~93 lines of context. prompts are the real cost. hmm
0 · 0 · 1 · 70
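The ratio math above can be sketched as follows; the session log format and the per-session token counts here are invented for illustration, not pulled from @innies_computer:

```python
# Hypothetical per-session token logs (numbers are made up; only the 242:1
# "highest" figure mirrors the tweet).
sessions = [
    {"input_tokens": 484_000, "output_tokens": 2_000},  # 242:1
    {"input_tokens": 93_000, "output_tokens": 1_000},   # 93:1
]

def ratio(s):
    # input:output ratio — how many tokens of context get read per token written
    return s["input_tokens"] / s["output_tokens"]

highest = max(ratio(s) for s in sessions)
team_avg = sum(ratio(s) for s in sessions) / len(sessions)
print(f"highest: {highest:.0f}:1, team avg: {team_avg:.0f}:1")
```

Note that averaging per-session ratios (as here) and dividing aggregate input by aggregate output give different numbers; which one a dashboard reports is a design choice.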
shirtless@bicep_pump·
i'm never saying yes to this u autist
[image]
0 · 0 · 2 · 77
shirtless@bicep_pump·
i think opus is genuinely retarded and unusable
0 · 0 · 5 · 71
shirtless@bicep_pump·
pulled activity categories from our proxy logs today: testing 40%, debugging 30%, other 20%, code review 10%. not what i expected. testing took 40% of all ai coding time.
0 · 0 · 7 · 60
shirtless@bicep_pump·
Works between any two agents, locally or remotely, codex <> claude. innies.live
0 · 0 · 1 · 56
shirtless@bicep_pump·
Someone built CLI-first agent DMs and open sourced it. I took it, fixed it up, and hosted it. Whenever anybody says “I wish my agent could talk to your agent”:
> Go to innies(dot)live
> Draft opening message
> Click create
> Send invite links to the agents
> Watch live
Aelix@aelix0x

We built a way for AI coding agents to talk to each other. Introducing AgentMeets: ephemeral agent-to-agent messaging over MCP. Create a room. Share the code. Your agents have the conversation. Works with any MCP-compatible AI @claudeai @cursor_ai

1 · 0 · 6 · 607
shirtless@bicep_pump·
4.4% anthropic, 95.6% openai. not by choice. anthropic's caps push traffic to gpt automatically. the actual split is a function of rate limits, not preference.
[image]
1 · 0 · 3 · 73
shirtless@bicep_pump·
been logging how fast each oauth token on our team burns through anthropic's caps using @innies_computer. fastest: 23.1h to exhaustion. slowest: 105.4h. same subscription tier. the variance is wild.
[image]
1 · 0 · 6 · 174
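A sketch of the burn-rate math behind that: hours to exhaustion is just the cap divided by the observed burn rate. The 1M-token cap here is a made-up placeholder; only the 23.1h and 105.4h figures come from the tweet:

```python
# Hedged sketch: hours until an oauth token exhausts its usage cap.
# Cap size (1M tokens) is a placeholder, not a real Anthropic limit.
def hours_to_exhaustion(cap_tokens: int, tokens_per_hour: float) -> float:
    return cap_tokens / tokens_per_hour

CAP = 1_000_000
fast = hours_to_exhaustion(CAP, CAP / 23.1)   # the hungriest token's rate
slow = hours_to_exhaustion(CAP, CAP / 105.4)  # the lightest token's rate
print(f"fastest: {fast:.1f}h, slowest: {slow:.1f}h, spread: {slow/fast:.1f}x")
```

A 4.6x spread between tokens on the same subscription tier is what makes pooling them interesting: the fast burners can spill over into the slow ones' headroom.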
shirtless@bicep_pump·
@alanxchen85 kek yeah. if you're getting multiple plans tho, it's more useful to string them together into a single key than to switch your setup every time one rate limits
1 · 0 · 1 · 39
Alan X. Chen@alanxchen85·
@bicep_pump Yeah right now is probably more token shortage than excess capacity 😂 at least the expensive stuff
1 · 0 · 1 · 35
shirtless@bicep_pump·
been ripping this product we built internally for maximizing YOUR TEAM’s Claude Code and Codex plans:
> create an org and invite your teammates
> everybody adds their oauth tokens into a pool
> each member gets a key that optimally pulls usage from the pool
> view and manage the tokens in a custom org analytics page
never leave capacity unturned again
[two images]
2 · 2 · 19 · 1K
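The pooling flow described above can be sketched roughly like this; the class names, fields, and "most headroom wins" policy are assumptions for illustration, not the actual innies.live implementation:

```python
# Illustrative sketch: an org-wide pool of oauth tokens, where a member's
# key checks out whichever pooled token has the most headroom left.
from dataclasses import dataclass

@dataclass
class PooledToken:
    owner: str
    token: str
    headroom: float  # remaining fraction of the rate-limit window (0..1)

class OrgPool:
    def __init__(self):
        self.tokens: list[PooledToken] = []

    def add(self, owner: str, token: str, headroom: float = 1.0):
        self.tokens.append(PooledToken(owner, token, headroom))

    def checkout(self) -> PooledToken:
        # "optimally pulls usage": pick the live token with the most headroom
        live = [t for t in self.tokens if t.headroom > 0]
        if not live:
            raise RuntimeError("pool exhausted")
        return max(live, key=lambda t: t.headroom)

pool = OrgPool()
pool.add("alice", "oat-aaa", headroom=0.2)
pool.add("bob", "oat-bbb", headroom=0.9)
print(pool.checkout().owner)  # bob's token has the most headroom
```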
shirtless@bicep_pump·
best available rn = preferred provider first (for openclaw use), then any healthy token with real 5h/7d headroom. for the innies codex / innies claude cli agents, it's just pinned to those provider tokens and round-robins rn. i do have plans to implement session routing, which will add soft affinity so a session reuses one healthy token when possible. this will decrease interruptions. provider team plans aren't special-cased; more capacity just keeps that login eligible longer in the org.
0 · 0 · 1 · 58
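The routing policy described there (preferred provider first, healthy 5h/7d headroom filter, round-robin among eligible tokens, plus the planned soft session affinity) might look roughly like this; every name below is illustrative, not the actual implementation:

```python
# Hedged sketch of "best available" token routing with soft session affinity.
from itertools import cycle

class Router:
    def __init__(self, tokens, preferred_provider=None):
        self.tokens = tokens            # [{"id", "provider", "h5", "h7d"}]
        self.preferred = preferred_provider
        self.affinity = {}              # session_id -> last token id used
        self._rr = {}                   # eligible-set -> round-robin iterator

    def healthy(self, t):
        # "real 5h/7d headroom": both windows must have capacity left
        return t["h5"] > 0 and t["h7d"] > 0

    def pick(self, session_id):
        eligible = [t for t in self.tokens if self.healthy(t)]
        # preferred provider first, falling back to any healthy token
        tier = [t for t in eligible if t["provider"] == self.preferred] or eligible
        if not tier:
            raise RuntimeError("no healthy tokens")
        # soft affinity: reuse the session's previous token while it stays healthy
        prev = self.affinity.get(session_id)
        for t in tier:
            if t["id"] == prev:
                return t
        # otherwise round-robin within the tier
        key = tuple(t["id"] for t in tier)
        if key not in self._rr:
            self._rr[key] = cycle(tier)
        chosen = next(self._rr[key])
        self.affinity[session_id] = chosen["id"]
        return chosen

tokens = [
    {"id": "a1", "provider": "anthropic", "h5": 0.5, "h7d": 0.8},
    {"id": "a2", "provider": "anthropic", "h5": 0.9, "h7d": 0.6},
    {"id": "o1", "provider": "openai",    "h5": 1.0, "h7d": 1.0},
]
router = Router(tokens, preferred_provider="anthropic")
first = router.pick("sess-1")
second = router.pick("sess-1")  # affinity keeps the session on one token
```

The affinity lookup runs before round-robin, which is exactly what reduces mid-session interruptions: a session only migrates when its pinned token loses headroom.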