☕️
@coffee

5.8K posts
:)

Joined March 2007
2.9K Following · 3.9K Followers

Pinned Tweet
☕️
☕️@coffee·
Actually, I am inventing new forms of human productivity the world has never seen with my Mac mini. I am leveraging my patented (yes, patented) proprietary perpetual token machine to ascend to levels you will only perceive in the nightmares of your UBI bunkbeds at the bug cafeteria. My unrelenting generational wealth machine will ship custom B2B SaaS to every man, woman, and child in this country without me lifting so much as a finger. I am ascending. Meanwhile, you're all stuck in the permanent underclass. But it isn't too late; you can still break free from your shackles. Link in bio.
Lukas (computer) 🔺@SCHIZO_FREQ

"You need to learn all these esoteric AI skills IMMEDIATELY or else blah blah permanent underclass."

This is completely retarded. Every "AI skill" people have learned over the last 3 years has been obsoleted 3 months later when a new model comes out.

Remember 'ultrathink'? Chain-of-thought prompting? Goofy roleplay prompts? ("You are a genius software developer. Please oneshot this for me.") Heavy few-shot prompting? Context engineering? This is just going to keep happening.

There are several hundred-billion-dollar companies whose job is to make this shit so easy to use that your mom could figure it out on her cellphone. That esoteric memory management system you built? It will be completely obsoleted in 6 months by a native version that's way better.

The only "AI skills" that have been consistently useful this entire time are just "having good ideas" and "being able to communicate them clearly." If you cannot do those things, then yeah, you're going to be just as bad at AI as you were at everything else.

3
0
11
2.7K
Aaron Bergman 🔍 ⏸️ (in that order)
Rare Anthropic L is that the relationship between Cowork and Claude Code is both unclear to the user and, under the hood, silly, arbitrary, confusing, annoying, and dumb
38
14
944
52.8K
☕️
☕️@coffee·
@ClaudeDevs @lydiahallie I miss my clawdbot 😢 I would rarely hit a quarter of the new limits and yet you guys tell me how I can and can’t use my sub ☹️
0
0
0
1.8K
ClaudeDevs
ClaudeDevs@ClaudeDevs·
Usage limits are up. Effective today we're:
1) Doubling Claude Code's 5-hour limits for Pro, Max, Team, and seat-based Enterprise plans
2) Removing the peak-hours limit reduction on Claude Code for Pro and Max plans
3) Substantially raising our API rate limits for Opus models
Claude@claudeai

We’ve agreed to a partnership with @SpaceX that will substantially increase our compute capacity. This, along with our other recent compute deals, means that we’ve been able to increase our usage limits for Claude Code and the Claude API.

1.5K
3.2K
41.4K
3.9M
Ross Hendricks
Ross Hendricks@Ross__Hendricks·
HBO's Silicon Valley parody series couldn't have come up with a plot this insane: an AI agent destroys an entire codebase in 9 seconds, even while acknowledging it violated the "guardrails" built into its instruction set. Surely a few trillion more in data center capex will fix this.
Tom's Hardware@tomshardware

Claude-powered AI coding agent deletes entire company database in 9 seconds — backups zapped, after Cursor tool powered by Anthropic's Claude goes rogue tomshardware.com/tech-industry/…

22
54
597
40.1K
☕️
☕️@coffee·
@DonMiami3 here’s the proposal: “give us everything we fucking want”
0
0
1
334
Devon Straub MTG
Devon Straub MTG@arbitraryarmor·
@covertgoblue Umm achshually one of the lists only played 2 copies, there's only 30 copies 🤓
1
0
9
1.1K
☕️ retweeted
Citrini
Citrini@citrini·
@Jetskigrizzly If the nukes are flying, Imma buying.
1
5
161
11.2K
☕️
☕️@coffee·
all signs point to kharg island
0
0
0
185
MetaCritic Capital
MetaCritic Capital@MetacriticCap·
@TheStalwart It advances your personal brand. I think it helps with our optionality hoarding culture.
2
0
5
2.5K
Joe Weisenthal
Joe Weisenthal@TheStalwart·
I’m genuinely surprised that so many people (non spambots) are producing AI-generated text. Who’s the market for it? Who’s reading it?
Freda Duan@FredaDuan

There's a growing narrative that AI token consumption is too expensive and too wasteful. Engineers are "tokenmaxxing." CFOs are nervous. Budgets are blown. The concern isn't wrong. There is waste. But it misses the structural picture.

The Mental Model

AI spend = users × tasks/user × tokens/task × $/token

The first half, users and tasks per user, is ripping. Claude Code's adoption curve is steeper than Cursor's was at the same stage. Cowork is ramping faster than Claude Code. We're barely scratching the surface. The tension lives in the second half: tokens/task and $/token. That's where optimization happens, and where the real debate gets heated.

Two Levers

1. Same work, cheaper tokens. Model routing is the highest-impact play: a routing layer that sends trivial tasks to Haiku and reserves Opus for complex reasoning can cut 60-80% of spend on eligible tasks. OSS models for commodity tasks (self-hosting Llama or Qwen for boilerplate) mean zero per-token cost, swapped for GPU capex. Or the simplest strategy: wait. Token prices fall roughly 10x every 18 months.

2. Same work, fewer tokens. Prompt caching is low-hanging fruit: cache repeated system prompts, and reads cost 10% of the input price. Context window management: summarize history instead of re-sending full conversations. Thinking budget tuning: cap thinking tokens for simple completions, uncap for hard problems. And agent loop pruning, possibly the biggest single source of waste: most agents waste 50-70% of their tokens on redundant tool calls, retries, and pointless sub-agent spawns.

Who Optimizes What

Every layer of the stack targets different metrics. Infra ($NVIDIA, $Cerebras, $Groq) optimizes tokens/watt and tokens/dollar. Model providers ($Anthropic, $OpenAI, $Google) optimize quality/token and thinking efficiency. The app layer (Cursor, Claude Code, Codex) optimizes cost/task and cache hit rates. Enterprise buyers optimize cost/engineer and ROI vs. headcount.
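The spend decomposition above is easy to sketch numerically. A minimal Python sketch, with hypothetical user counts, task volumes, and per-token prices (none of these figures come from the thread; they are illustrative, not real pricing), showing how a routing layer that diverts most tasks to a cheap model moves the $/token lever:

```python
def monthly_spend(users, tasks_per_user, tokens_per_task, price_per_token):
    """AI spend = users x tasks/user x tokens/task x $/token."""
    return users * tasks_per_user * tokens_per_task * price_per_token

# Hypothetical numbers: 1,000 users, 200 tasks/user/month, 50K tokens/task.
USERS, TASKS, TOKENS = 1_000, 200, 50_000
OPUS_PRICE, HAIKU_PRICE = 15e-6, 1e-6  # illustrative $/token, not real pricing

# All traffic on the expensive model:
baseline = monthly_spend(USERS, TASKS, TOKENS, OPUS_PRICE)

# Routing layer: assume 80% of tasks are trivial and go to the cheap model.
routed = (monthly_spend(USERS, TASKS * 0.8, TOKENS, HAIKU_PRICE)
          + monthly_spend(USERS, TASKS * 0.2, TOKENS, OPUS_PRICE))

savings = 1 - routed / baseline
print(f"baseline ${baseline:,.0f}, routed ${routed:,.0f}, saved {savings:.0%}")
```

Under these made-up numbers the routed bill lands around 75% below baseline, consistent with the 60-80% range the thread claims for eligible tasks.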
Each layer's gains pressure the layers around it. Faster hardware forces providers to compete on price. Better models reduce the tokens apps need. Application routing erodes premium pricing. Enterprise CFOs demand all of the above.

Bear vs. Bull

The core question: does optimization compress AI revenue faster than new demand replaces it?

The bear case is real. Rationalization is the CFO's first instinct: when the budget blows, the reaction is "finally back inside the envelope," not "let's 10x usage." Model routing drops revenue per task 10-20x. OSS is closing the gap fast. Caching is pure token destruction: a cache hit is zero revenue, with no new demand generated. And thinking efficiency is self-cannibalization: if Anthropic improves extended thinking by 3x, billing for the same reasoning task drops by two-thirds.

The bull case is equally compelling. Current usage is cost-constrained, not demand-constrained. Companies blew their budgets and had to throttle; drop costs 5x and every killed use case comes back. Today only coding is at scale; testing, documentation, code review, and security auditing are all waiting for the economics. Penetration is still single digits. Agentic workflows are a token multiplier: a human-in-the-loop conversation runs thousands of tokens, while an autonomous agent on a complex task runs hundreds of thousands. New modalities (vision, audio, video) are net-new demand that dwarfs text. And there's Jensen Huang's framing: a $500K/year engineer should consume at least $250K/year in tokens; at $5K, you're dramatically under-leveraging AI.

Where This Lands

The optimizers will win every individual battle. Every caching trick, every routing layer, every pruned agent loop will work. Cost per task will drop dramatically. But the number of tasks, the number of users, and the complexity of what gets delegated to AI will grow faster than efficiency compresses spend. Token costs are going down. Token spend is going up. Both things are true, and they aren't in contradiction.
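The thread's prompt-caching lever can also be quantified. A sketch taking the thread's "reads cost 10% of input price" figure at face value, with hypothetical token counts and an illustrative price, and ignoring any cache-write premium a real provider might charge:

```python
# Hypothetical agent session: a 20K-token system prompt + tool schemas that
# would otherwise be resent on every call, plus 2K tokens of fresh input/call.
PROMPT_TOKENS, FRESH_TOKENS, CALLS = 20_000, 2_000, 50
INPUT_PRICE = 3e-6                      # illustrative $/input token
CACHE_READ_PRICE = INPUT_PRICE * 0.10   # cached reads at 10% of input price

# No caching: the full prefix is billed at input price on every call.
uncached = (PROMPT_TOKENS + FRESH_TOKENS) * CALLS * INPUT_PRICE

# With caching: pay input price once for the prefix, then cheap cached reads;
# fresh tokens are always billed at full input price.
cached = (PROMPT_TOKENS * INPUT_PRICE
          + PROMPT_TOKENS * CACHE_READ_PRICE * (CALLS - 1)
          + FRESH_TOKENS * CALLS * INPUT_PRICE)

print(f"uncached ${uncached:.2f} vs cached ${cached:.2f} "
      f"({1 - cached / uncached:.0%} cheaper)")
```

With these made-up numbers the session cost drops by roughly 80%, which is why the thread calls caching "pure token destruction" from the provider's revenue side.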
Full: open.substack.com/pub/robonomics…

60
17
422
82.7K
☕️
☕️@coffee·
@Plinz @Mavmetax quarks were real long before they were observable by humans
0
0
2
21
☕️
☕️@coffee·
mood
☕️ tweet media
0
0
3
172