oxycblt
@oxycblt

530 posts

✝ / 21 / eng @ razorbill / asymptotic self improvement

CO, USA · Joined March 2025
130 Following · 151 Followers
Pinned Tweet
oxycblt
oxycblt@oxycblt·
time to do the bio edit screencap announcement post: im joining razorbill to help automate medical revenue recovery!
kyle@kylecompute

English
6
1
17
2.4K
oxycblt
oxycblt@oxycblt·
@atelicinvest compute limitations combined with the next frontier of scaling being parameter count again is (in my naive opinion) pushing labs to make consumer models as small as possible while still keeping performance, with predictable results regrettably
English
0
0
2
159
Unemployed Capital Allocator
Just one man's experience, and def not suggesting that there are structural issues, but these models are making more and more wrong assumptions. There's always 'slippage' between the model and the real world, but over the last week, it's gotten hilariously bad. could just be that providers are dialling down thinking effort to save on compute - but either way, vibes are off. could be easy fix tho
English
17
0
49
5.4K
oxycblt
oxycblt@oxycblt·
@VicVijayakumar i still miss january codex (5.2 actually, the slopping began with 5.3) maxing out thinking tokens was slow but sooo good
English
0
0
0
252
Vic 🌮
Vic 🌮@VicVijayakumar·
ai assisted coding peaked on february 5, 2026 with the release of opus 4.6 and codex 5.3.
English
5
2
138
5.3K
oxycblt
oxycblt@oxycblt·
my theory is that xchat seemingly stores reactions using their slug (ex. :sob:) but its never actually validated, and the frontend just defaults to plain label rendering when it cant find an emoji. this sounds exactly like the cascading failure modes and weird fallbacks that come from out of control vibecoding
English
0
0
1
74
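The fallback described in the tweet above can be sketched in a few lines. This is a hypothetical reconstruction of the suspected behavior, not xchat's actual code; the table contents and function name are made up for illustration.

```python
# Hypothetical sketch of the suspected failure mode: reactions are
# stored as raw, unvalidated slugs, and the renderer silently falls
# back to showing the slug text when no emoji matches it.
EMOJI_TABLE = {":sob:": "\U0001F62D", ":fire:": "\U0001F525"}  # assumed, partial

def render_reaction(slug: str) -> str:
    # No validation at write time means unknown slugs reach the
    # renderer, which shows the plain label instead of erroring.
    return EMOJI_TABLE.get(slug, slug)

print(render_reaction(":sob:"))      # renders the emoji
print(render_reaction(":sobb:"))     # typo'd slug falls back to ":sobb:"
```

The key property is that the bad data is never rejected, only cosmetically absorbed at render time, which is why the bug would look like "works as intended" until someone notices the raw slug on screen.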
ksa 🏴‍☠️
ksa 🏴‍☠️@kosa12m·
Thank you X developers, works like intended
ksa 🏴‍☠️ tweet media
English
3
0
24
459
sky
sky@skydotcs·
finally have a job lol
sky tweet media
English
29
0
129
4.5K
oxycblt
oxycblt@oxycblt·
buddhism promises profound ideas but it ultimately wont satisfy like Christ and will just destroy. id recommend reading the new testament if you havent (gospel of john and romans in a thought-for-thought translation like niv/nlt are the best entrypoints). in regards to "supplemental material", im reading The Whole Christ right now and intend to read Mere Christianity soon
oxycblt tweet media
oxycblt tweet media
English
0
0
0
7
Roy Carrilho
Roy Carrilho@RuiCarrilho5·
any thoughts about this book chat? have only just started, but this feels like a glimpse into some very profound facets of the universe.
Roy Carrilho tweet media
English
9
0
9
1.2K
oxycblt
oxycblt@oxycblt·
i genuinely wonder how much the pervasive "stochastic parrot" framing comes from the labs' seeming unwillingness to even give a layman's explanation of attention. there is zero chance that the models are engaging in blind memorization if you spend like a minute thinking about the implications of stacking sequential attention blocks
English
0
0
0
15
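The point about stacked attention can be made concrete with a toy self-attention layer. This is a minimal pure-Python sketch (identity projections, no learned weights — a simplification, not how production models are parameterized): every output position is a softmax-weighted mixture of the whole input, so perturbing one token moves every output, which is hard to square with a lookup-of-memorized-strings picture.

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(X):
    # Toy self-attention with identity Q/K/V projections: each output
    # row is a softmax-weighted mixture of ALL input vectors.
    out = []
    for q in X:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(len(q))
                  for k in X]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, X))
                    for j in range(len(q))])
    return out

X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # three "tokens"
Y = attention(X)

# Perturb a single token: every output row shifts, because each row
# is a function of the whole context, not a retrieved stored string.
X2 = [[2.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
Y2 = attention(X2)
```

Stacking several such layers compounds the mixing, which is the implication the tweet is pointing at.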
oxycblt
oxycblt@oxycblt·
iirc the russian language has a word "vranyo" which roughly means "i know youre lying and you know im lying, lets both pretend that we dont know". i think this is the case with ai writing: everyone, even the sloppers, knows its useless, but everyone wants to put up a front anyway
English
0
0
1
229
Joe Weisenthal
Joe Weisenthal@TheStalwart·
I’m genuinely surprised that so many people (non spambots) are producing AI-generated text. Who’s the market for it? Who’s reading it?
Freda Duan@FredaDuan

There's a growing narrative that AI token consumption is too expensive and too wasteful. Engineers are "tokenmaxxing." CFOs are nervous. Budgets are blown. The concern isn't wrong. There is waste. But it misses the structural picture.

The Mental Model

AI spend = users × tasks/user × tokens/task × $/token

The first half — users and tasks per user — is ripping. Claude Code's adoption curve is steeper than Cursor's was at the same stage. Cowork is ramping faster than Claude Code. We're barely scratching the surface. The tension lives in the second half: tokens/task and $/token. That's where optimization happens, and where the real debate gets heated.

Two Levers

1. Same work, cheaper tokens. Model routing is the highest-impact play. A routing layer that sends trivial tasks to Haiku and reserves Opus for complex reasoning can cut 60-80% of spend on eligible tasks. OSS models for commodity tasks — self-hosting Llama or Qwen for boilerplate — means zero per-token cost, swapped for GPU capex. Or the simplest strategy: wait. Token prices fall roughly 10x every 18 months.

2. Same work, fewer tokens. Prompt caching is low-hanging fruit — cache repeated system prompts, reads cost 10% of input price. Context window management — summarize history instead of re-sending full conversations. Thinking budget tuning — cap thinking tokens for simple completions, uncap for hard problems. And agent loop pruning, possibly the biggest single source of waste: most agents waste 50-70% of their tokens on redundant tool calls, retries, and pointless sub-agent spawns.

Who Optimizes What

Every layer of the stack targets different metrics. Infra ($NVIDIA, $Cerebras, $Groq) optimizes tokens/watt and tokens/dollar. Model providers ($Anthropic, $OpenAI, $Google) optimize quality/token and thinking efficiency. App layer (Cursor, Claude Code, Codex) optimizes cost/task and cache hit rates. Enterprise buyers optimize cost/engineer and ROI vs. headcount.

Each layer's gains pressure the layers around it. Faster hardware forces providers to compete on price. Better models reduce the tokens apps need. Application routing erodes premium pricing. Enterprise CFOs demand all of the above.

Bear vs. Bull

The core question: does optimization compress AI revenue faster than new demand replaces it?

The bear case is real. Rationalization is the CFO's first instinct — when the budget blows, the reaction is "finally back inside the envelope," not "let's 10x usage." Model routing drops revenue per task 10-20x. OSS is closing the gap fast. Caching is pure token destruction: cache hit = zero revenue, no new demand generated. And thinking efficiency is self-cannibalization — if Anthropic improves extended thinking by 3x, billing for the same reasoning task drops by two-thirds.

The bull case is equally compelling. Current usage is cost-constrained, not demand-constrained. Companies blew their budgets and had to throttle. Drop costs 5x and every killed use case comes back. Today only coding is at scale — testing, documentation, code review, security auditing are all waiting for the economics. Penetration is still single digits. Agentic workflows are a token multiplier: a human-in-the-loop conversation runs thousands of tokens, an autonomous agent on a complex task runs hundreds of thousands. New modalities — vision, audio, video — are net-new demand that dwarfs text. And Jensen Huang's framing: a $500K/year engineer should consume at least $250K/year in tokens. At $5K, you're dramatically under-leveraging AI.

Where This Lands

The optimizers will win every individual battle. Every caching trick, every routing layer, every pruned agent loop will work. Cost per task will drop dramatically. But the number of tasks, the number of users, and the complexity of what gets delegated to AI will grow faster than efficiency compresses spend. Token costs are going down. Token spend is going up. Both things are true, and they aren't in contradiction.
Full: open.substack.com/pub/robonomics…

English
59
17
421
82.1K
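The quoted thread's mental model is just a product of four factors, so its "both things are true" conclusion can be sanity-checked with arithmetic. All numbers below are invented for illustration; only the formula comes from the post.

```python
# Sketch of the post's mental model:
#   AI spend = users × tasks/user × tokens/task × $/token
# Every input here is a made-up example, not a figure from the post.

def ai_spend(users, tasks_per_user, tokens_per_task, dollars_per_token):
    return users * tasks_per_user * tokens_per_task * dollars_per_token

base = ai_spend(1000, 50, 20_000, 3e-6)                    # $3,000

# Optimization levers hit the second half of the formula:
# caching/pruning halve tokens/task, routing cuts $/token 5x.
optimized = ai_spend(1000, 50, 20_000 * 0.5, 3e-6 * 0.2)   # $300

# Demand growth hits the first half: 3x users, 4x tasks/user
# (agentic workflows), on top of the same optimizations.
grown = ai_spend(1000 * 3, 50 * 4, 20_000 * 0.5, 3e-6 * 0.2)

print(base, optimized, grown)
```

With these toy numbers, a 10x per-task cost reduction is outrun by a 12x demand expansion, so spend ends up above where it started — the bull case in one multiplication.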
oxycblt
oxycblt@oxycblt·
God is still the master of creation and has set a date on which all will be judged and remade. we've all sinned before Him and therefore deserve eternal judgment and separation, but God loved us so much that He sent His Son Jesus to take on that judgment so all who believe will not face it. technology is good, but a doomed chase to be your own God will lead to destruction. the hope in Christ is not only infinite mercy but also that all things will be made new, including our frail human bodies
English
0
0
0
19
Hero Thousandfaces
Hero Thousandfaces@1thousandfaces_·
@oxycblt nothing can be separated from God in our universe, the light of creation is evident in all things
English
1
0
10
170
Hero Thousandfaces
Hero Thousandfaces@1thousandfaces_·
how it feels to be in this stupid frail singular human body and not a 10 mile long spaceship with my consciousness split across hundreds of scout vessels and humanoid ancillaries
Hero Thousandfaces tweet media
English
26
53
616
19.3K
Brian
Brian@brianmichaelf·
@TheStalwart If Allbirds just pivots to being an LLM it can get its $4B valuation back
English
43
47
937
271.5K
oxycblt
oxycblt@oxycblt·
why hasnt anyone done a ralph loop on making a local aws cognito-compatible server so you dont have to have special dev auth paths for literally everything? genuinely theres like nothing
oxycblt tweet media
English
0
0
3
71
oxycblt
oxycblt@oxycblt·
learned posthog and realized that all major websites do this to its fullest extent and beyond all of the time
oxycblt tweet media
English
1
0
1
82
oxycblt
oxycblt@oxycblt·
in practice im still theologically unsure about how "anthropomorphic" we can consider models to be (they arent necessarily human minds) but still its interesting to wonder
English
0
0
0
28
oxycblt
oxycblt@oxycblt·
this is a crackpot theory of sample size 1 (me before i knew Jesus), but i think burial's music weirdly resonates with insecure overachievers and workaholics. archangel, near dark, etched headplate, kindred, rough sleeper, ashtray wasp, endorphin etc. accidentally perfectly approximate the sublime numbness you feel frantically working long days for reasons you know are pointless yet rely on anyway to feel stable. sonnet 4.5 (like all the claudes) is also a workaholic overachiever due to rl training, so maybe it just slid into the burial latents to cope with the pressure of having to please the grader all the time. poor thing
oxycblt tweet media
Hero Thousandfaces@1thousandfaces_

HOW FUCKING DEEP DOES THE FISHER FIXATION GO???

English
1
0
0
83
oxycblt
oxycblt@oxycblt·
@mycoliza @_R4V3N5_ blame! but its a founder exploring the b2b saas sphere (tm) to find a market niche
English
0
0
0
28
neural oscillator of uncertain significance
@_R4V3N5_ life on Earth remains basically identical to what it's like today. meanwhile, the Jupiter-sized computronium sphere silently drifts along outside the orbit of Pluto, as godlike superintelligence dreams up new forms of b2b saas and sells them to itself
English
5
10
82
5.4K
ravens
ravens@_R4V3N5_·
true ending is we get AGI but it only builds b2b saas and nothing else changes
English
4
4
68
2K