0range Crush, CMT

29K posts


@0rangeCru5h

Perception deviating from reality creates opportunity. Bull or Bear, timeframes & risk mgmt matter most. Looking for Delta. Not advice, it may be sarcasm

Atlanta, GA · Joined February 2012
1.2K Following · 1.4K Followers
0range Crush, CMT
0range Crush, CMT@0rangeCru5h·
@TukiFromKL "AI will replace everything" is sounding like the self-driving Class 8 trucks debacle. Self-driving trucks? Easy. It turns out truck drivers spend 80% of their time driving, the easy part for both driver and computer. But the computer can't do the 20% of the job required before and after you drive the truck.
0
0
0
129
Tuki
Tuki@TukiFromKL·
🚨 do you understand what Karpathy just said.. the guy who co-founded OpenAI.. led AI at Tesla.. one of the best engineers alive.. built an app with AI.. and said the code was the easy part.. the hard part was Stripe.. auth.. DNS.. databases.. deploying it.. connecting 15 different services that all have different dashboards and different docs and different billing pages.. AI can write your entire app in 20 minutes.. but it still can't click "confirm email" on Vercel.. so the thing that's "replacing developers" can't do the thing developers actually spend 80% of their time doing.. vibe coding didn't kill software engineering.. it just proved that coding was never the job.. the job was dealing with the mess around the code.. and that mess is still 100% human.
Andrej Karpathy@karpathy

When I built menugen ~1 year ago, I observed that the hardest part by far was not the code itself, it was the plethora of services you have to assemble like IKEA furniture to make it real, the DevOps: services, payments, auth, database, security, domain names, etc... I am really looking forward to a day where I could simply tell my agent: "build menugen" (referencing the post) and it would just work. The whole thing up to the deployed web page. The agent would have to browse a number of services, read the docs, get all the api keys, make everything work, debug it in dev, and deploy to prod. This is the actually hard part, not the code itself. Or rather, the better way to think about it is that the entire DevOps lifecycle has to become code, in addition to the necessary sensors/actuators of the CLIs/APIs with agent-native ergonomics. And there should be no need to visit web pages, click buttons, or anything like that for the human. It's easy to state, it's now just barely technically possible and expected to work maybe, but it definitely requires from-scratch re-design, work and thought. Very exciting direction!

112
215
2.7K
472K
0range Crush, CMT
0range Crush, CMT@0rangeCru5h·
@CRUDEOIL231 @someBerserker It's a bit deceptive. Today it is storage. Once the strait opens, it becomes effectively a half-full pipeline. Short-term problem solved, but the pipeline, the tankers moving product, must be refilled.
1
0
1
104
JH
JH@CRUDEOIL231·
@someBerserker Iraq and Kuwait will take several months to recover to normal levels. Saudis and the UAE will recover a bit faster. However to be fair, once transit resumes, over 100mb of floating storage will immediately flood the market.
2
2
17
1.3K
JH
JH@CRUDEOIL231·
In February, I liked hearing "Nothing will happen," but now, I love hearing "Everything is fine." Even while everyone is optimistic, actual supply losses will continue to accumulate. The more ppl choose to see only what they want to see, the more delayed the recognition of the crisis becomes, pushing any fundamental solution further away. So oil longs, stop being so angry. Don't you realize yet all of this is for your own good? Time is on your side this time. #oott #iran
JH tweet media
29
76
473
39.7K
Markets & Mayhem
Markets & Mayhem@Mayhem4Markets·
If this oil shock is anything like 1990, it may still have more upside, could contribute to inflation accelerating and may even help to trigger a recession. It's all about how long the Strait of Hormuz remains closed and how much more infrastructure is damaged. Chart: TS Lombard
Markets & Mayhem tweet media
8
8
64
6.7K
0range Crush, CMT
0range Crush, CMT@0rangeCru5h·
@aakashgupta A stranger doesn't know you but knows everything else. You ask a question. The stranger goes through all your trash from the past 2 months, finds one unique box for some fishing thing, and decides fishing must be important and incorporates it into the answer. This is LLM personalization.
0
0
0
152
Aakash Gupta
Aakash Gupta@aakashgupta·
Karpathy is describing the exact same failure mode that killed every search engine before Google. Early search ranked by keyword frequency. Ask about "python" once, and every result was python forever. Google's insight was relevance to the current query, not historical frequency.

LLM memory systems in 2026 are running the AltaVista playbook. Store facts about the user, retrieve what's semantically close, inject it into context. The retrieval is getting better. The problem Karpathy is pointing at is downstream of retrieval.

Wang et al. presented "Contextual Distraction Vulnerability" at ICLR and NeurIPS last year: adding semantically coherent but task-irrelevant context to a prompt causes performance drops over 45% in mainstream models. Yang et al. at EMNLP 2025 confirmed it with GSM-DC: irrelevant context corrupts reasoning path selection and arithmetic accuracy. The distraction isn't a vague qualitative annoyance. It's measurable, reproducible degradation in capability.

OpenAI published the Instruction Hierarchy to train models to ignore untrusted context from third parties. They followed it with IH-Challenge, which improved prompt injection robustness on GPT-5-Mini. The architecture can now deprioritize malicious instructions from adversaries. But memory is trusted context injected by the system itself. It arrives in the same position as system instructions, wrapped in the same formatting, with the same implicit authority. The instruction hierarchy doesn't help when the distraction is coming from inside the house.

Liu et al.'s "Lost in the Middle" showed models over-attend to content at the beginning and end of context. Memory gets placed at the beginning. Positionally, it's indistinguishable from instructions.

RLHF penalizes wrong answers, refusals, tone failures. It does not penalize "incorporated a correctly retrieved memory when the query didn't call for it." Until the reward model includes examples where correct behavior is generating as if the memory isn't there, models will keep trying too hard with whatever context they're given.
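The store/retrieve/inject loop the thread criticizes can be sketched in a few lines. Everything here is a toy illustration: `MemoryStore`, `embed`, and `build_prompt` are hypothetical names, and the bag-of-words "embedding" stands in for the neural encoder a real system would use.

```python
from collections import Counter
import math

def embed(text):
    # Toy bag-of-words "embedding"; real systems use a neural encoder.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse token-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class MemoryStore:
    """The 'AltaVista playbook': store user facts, retrieve whatever is
    semantically close to the current query, inject it into context."""
    def __init__(self):
        self.memories = []

    def add(self, fact):
        self.memories.append((fact, embed(fact)))

    def retrieve(self, query, k=2):
        q = embed(query)
        ranked = sorted(self.memories,
                        key=lambda m: cosine(q, m[1]), reverse=True)
        return [fact for fact, _ in ranked[:k]]

def build_prompt(query, store, k=1):
    # Memory is injected at the top of the context -- the same position,
    # and the same implicit authority, as system instructions.
    header = "\n".join(f"[memory] {m}" for m in store.retrieve(query, k))
    return f"{header}\n\nUser: {query}"

store = MemoryStore()
store.add("User asked about python fishing rod brands")  # one-off, 2 months ago
store.add("User works as a data engineer")

# A coding question retrieves the stale fishing memory because it shares
# the token "python": semantically close, task-irrelevant.
print(build_prompt("Which python library parses CSV fastest?", store))
```

Nothing in this loop asks whether the retrieved fact is relevant to the current task, only whether it is similar, which is exactly the keyword-frequency failure described above.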
Andrej Karpathy@karpathy

One common issue with personalization in all LLMs is how distracting memory seems to be for the models. A single question from 2 months ago about some topic can keep coming up as some kind of a deep interest of mine with undue mentions in perpetuity. Some kind of trying too hard.

19
21
267
49.4K
0range Crush, CMT
0range Crush, CMT@0rangeCru5h·
#Tokenmaxing AKA let's waste company money & look good doing it
Allen Holub@allenholub· https://linkedIn.com/in/allenholub

I've just read about "tokenmaxing," which uses the number of LLM tokens you blow through in a month as a sort of sick productivity metric. This is insane.

One of the posts I read inadvertently characterized it perfectly: "Engineers are starting to compare token spending the way they used to compare GitHub commits." GitHub commits are and always have been a crappy metric. They measure your ability to game the system, not your productivity, and they measure the wrong thing: time coding (or asking an LLM to code) as compared to time thinking. Both metrics reward output volume without ever considering if you're building the right thing or providing actual value to your customers.

A great engineer thinks (and talks to customers and assesses value and other things) for at least an hour before writing 10 lines of code. A crappy engineer writes 500 lines of crappy code in that hour, all of which will have to be thrown out, and some of which will actively damage a formerly good system. Both tokenmaxing and commits reward the latter behavior. Sure, you can vibe up a sh*t load of code in no time at all, and spend a fortune in tokens doing it, but do your customers want or need the result?

In the AI world, I should also add that tokenmaxing is incentivizing people to waste vast amounts of money by ignoring context engineering and using the largest contexts possible. Need to fix a minor bug? Let's throw 100,000 lines of code into the context! Also, the natural compression that happens when you bang up against context limits degrades quality, increases bugs, and generally makes the LLM less effective.

Sure, I can waste lots of your money on unnecessary tokens if it makes my bonus bigger. If that's the game, I'll play it.

0
0
0
42
Hedgie
Hedgie@HedgieMarkets·
🦔 OpenAI is discontinuing Sora, its AI video generation app launched in September 2025. The app allowed users to create videos from text prompts and share them to a social feed, but struggled to gain traction. OpenAI tightened restrictions around intellectual property shortly after launch, which significantly limited what users could do with it. The Wall Street Journal reports OpenAI is stepping back from video AI efforts entirely as it shifts focus toward building a super app combining ChatGPT, its Codex development tools, and its web browser.

My Take

Sora launched with genuinely impressive demos and then OpenAI almost immediately restricted the intellectual property use cases that made people want it in the first place. Once you remove the ability to do the thing that drove interest, you're left with a text-to-video tool in a category that Runway, Pika, and Google are competing in aggressively with products that have been iterating longer. Sora never had a real answer to that.

What I can't get past is the Disney situation. Three months ago Disney invested $1 billion in OpenAI and signed a three-year licensing deal specifically to bring Marvel, Pixar, and Star Wars characters to Sora, with users supposed to start generating videos in early 2026. OpenAI just shut the whole thing down. That is not a small footnote, that is a billion dollar partnership built around a product that no longer exists, and whatever conversation is happening between Bob Iger and Sam Altman right now is one I'd very much like to hear. Hedgie🤗
Hedgie tweet media
11
16
128
7.7K
0range Crush, CMT
0range Crush, CMT@0rangeCru5h·
I don't think people understand this is the first time since the early 70s that there has been a major production shortfall. It will have a long-term impact on inventories. There is a massive difference between a supply disruption, the typical crisis, and a production disruption, where the oil is not coming out of the ground.
0
0
0
304
Eric Nuttall
Eric Nuttall@ericnuttall·
While the worst energy crisis of our lifetimes remains, the Strait of Hormuz was always going to reopen, eventually. I'm more interested in "the day after": 🛢️while noisy, oil-on-water inventories are falling by over 5MM Bbl/d with less than 35MM bbls of Russian and Iranian floating storage to unsanction. Production is down ~10MM Bbl/d and will take time to be restored even when the Strait fully opens. The "oil glut" narrative is officially dead as those barrels are gone forever. Too, look for SPR refilling and product hoarding. Biggest bear thesis is no more. 🛢️We expect there to be an enduring political risk premium of at least $10/bbl and a renewed focus on "security of supply." We are only one drone away from halting 20MM bbl/d of flows all over again and spare capacity is only valid if you can get it to your customers. 🛢️Energy stocks have greatly lagged the rise in the oil price and are reflecting on average ~$70WTI. 2026/2027 strip = $76WTI. Stock picking matters! 🛢️Geopolitical spikes are never good for energy stock price performance..."we remain bullish"™️ given the twilight of US shale, the peaking of non-OPEC production, the lack of meaningful OPEC spare capacity, and the decades ahead of oil demand growth.
Eric Nuttall tweet media
6
69
350
49.2K
0range Crush, CMT
0range Crush, CMT@0rangeCru5h·
@KobeissiLetter I need to re-watch that Netflix show, the Diplomat Think there's any shred of operational reality in there? 🤔
0
0
0
173
The Kobeissi Letter
The Kobeissi Letter@KobeissiLetter·
BREAKING: Turkey, Egypt, and Pakistan have been "passing messages" between the US and Iran "in an effort to de-escalate," per Axios.
189
624
4.8K
628.3K
Simon Dixon
Simon Dixon@SimonDixonTwitt·
Everybody told me I don’t understand this war. Many told me it was WW3 & a nuclear bomb will be fired. I said the market disagrees and the outcome has already been agreed. Trump now says the US and Iran “have had very good and productive conversations regarding a complete and total resolution” of the Iran War. Trump has ordered the Department of War to postpone “any and all strikes” against Iranian power plants. Let’s see what comes next.
Simon Dixon tweet media
Simon Dixon@SimonDixonTwitt

🌮 Trump will have to TACO. The 10Y Note Yield is now up ~45 basis points since the war began on February 28th. With the 10Y Note Yield now up to 4.40%, the US economy cannot handle a 5% 10Y Note Yield. He has no choice but to crash oil and bond yields by announcing a deal.

308
78
795
191.9K
Brent aka Blacklion
Brent aka Blacklion@BlacklionCTA·
That energy is still green tells me what I need to know about this latest 🌮tweet.
13
3
76
26K
0range Crush, CMT
0range Crush, CMT@0rangeCru5h·
@LukeGromen @BobEUnlimited Total cumulative production shut-in is the long-term factor. It's like the '70s embargo, when they cut production slowly, except this is a rapid production cut. Restart ramp: 2 to 3 months to reach ~90%, assuming Strait flows return to normal.
0
0
1
167
Luke Gromen
Luke Gromen@LukeGromen·
@BobEUnlimited Bessent’s oil curve management lowered front costs for a moment but now the whole curve is shifting up
7
7
235
13.2K
Bob Elliott
Bob Elliott@BobEUnlimited·
Brent curve is pricing in a $100 average oil price for the rest of the year.
Bob Elliott tweet media
22
74
680
130.9K
0range Crush, CMT
0range Crush, CMT@0rangeCru5h·
Can we say it: DLSS 5 is not just for video game graphics. DLSS 5 uses AI to redraw and replace video game graphics with what the AI thinks they should be. Sounds like Google is using its AI to do effectively the same thing with text. How long before they rework entire articles too? Do print newspapers need to come back so we can see the real news?
1
0
31
1.7K
Hedgie
Hedgie@HedgieMarkets·
🦔 Google is testing AI-generated replacements for headlines and website titles in search results. The Verge noticed their headlines were being rewritten without their input. One example: "I used the 'cheat on everything' AI tool and it didn't help me cheat on anything" became "Cheat on everything AI tool." Google says the goal is to better match titles to user queries and facilitate engagement. They claim if this rolls out widely it won't use generative AI, though they didn't explain what other kind of AI would be rewriting headlines. Google Discover already does this and apparently it "performs well for user satisfaction."

My Take

Google is now rewriting other people's work and putting it in front of users as if that's what the publisher wrote. The headline is part of the article. It's an editorial choice that conveys tone, angle, and intent. When Google changes "Microsoft is rebranding Copilot in the most Microsoft way possible" to "Copilot Changes: Marketing Teams at it Again," they're not clarifying anything, they're replacing the author's voice with generic slop.

The legal question here is interesting. Google has traditionally claimed protection as a platform that indexes and displays content rather than creating it. Once you start rewriting headlines, you're arguably developing content, which gets into Section 230 territory. Publishers whose work gets misrepresented might have defamation claims if an AI-rewritten headline changes the meaning of their article.

Google is already driving less traffic to publishers and now they want to edit what little representation those publishers have left in search results. This feels like another step toward a web where Google just tells you what it thinks you should know instead of connecting you to sources. Hedgie🤗
Hedgie tweet media
43
359
1.8K
59.7K
Daniel Lacalle
Daniel Lacalle@dlacalle_IA·
Today’s energy crisis is very different from 2008. In 2008, the US pumped around 5 million barrels a day of oil; today it’s 13.8 mb/d. US natural gas output has doubled to over 1,000 bcm a year. The US is structurally less exposed to energy shocks, becoming more of a shock absorber for the rest of the world. dlacalle.com/en/short-term-…
11
68
304
35.3K
0range Crush, CMT
0range Crush, CMT@0rangeCru5h·
This is the correct way to understand the announcement. What is really annoying is when people inject barrels-per-day numbers into the discussion; IMO that is done to downplay the acute character of what is happening. Think of global oil flows like a dam. The lake is all the oil that is out of the ground. Water over the dam is oil being used for power. The lake was steady, up a little, down a little. Now the lake is draining at 20% of daily flow per day. If the lake level falls below the intake level, boom! No more power.
1
0
0
67
Balance Sheet BS
Balance Sheet BS@BalanceSheetBS·
So 140 million barrels "freed up" is actually 40 million stranded plus 100 million already headed to China. 40 million barrels is roughly 10 hours of global consumption. The world is missing 400 million barrels from 20 days of Hormuz closure and the relief package is 10 hours of supply that still needs to be refined. Trust the balance sheet.
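The post's arithmetic can be checked back-of-envelope. The two flow figures below are assumptions for illustration (roughly 100 million barrels/day of global consumption and roughly 20 million barrels/day of Hormuz transit), not numbers from the post itself:

```python
# Back-of-envelope check of the barrel math in the post above.
# Assumed (not from the post): ~100 mb/d global demand, ~20 mb/d Hormuz flow.
GLOBAL_DEMAND_MBD = 100.0   # million barrels per day, rough
HORMUZ_FLOW_MBD = 20.0      # million barrels per day through the strait
CLOSURE_DAYS = 20

stranded_mb = 40.0          # million barrels of genuinely "freed up" floating storage

# 40 million barrels expressed as hours of global consumption.
hours_of_demand = stranded_mb / GLOBAL_DEMAND_MBD * 24

# Supply missed while the strait was closed.
missing_mb = HORMUZ_FLOW_MBD * CLOSURE_DAYS

print(f"Stranded barrels cover ~{hours_of_demand:.0f} hours of global demand")
print(f"Supply missed during the closure: ~{missing_mb:.0f} million barrels")
```

Under those assumptions the numbers match the post: about 10 hours of demand in the relief package versus about 400 million barrels missing.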
2
0
1
2.6K
Javier Blas
Javier Blas@JavierBlas·
US Treasury says there are 140 million barrels of Iranian oil "stranded" on the water. That's misleading -- if not false. There are about 100 million barrels of Iranian oil on their way to China, and probably another 40 million in floating storage. Only the latter is "stranded."
39
241
1.1K
128.8K
0range Crush, CMT
0range Crush, CMT@0rangeCru5h·
@Aarihaan_Indus @HayekAndKeynes On Bloomberg the metrics show a negative number for rate cuts, and when expectations change from cut to hike the metric goes from negative to positive. The absolute value is the possibility of a change: positive for a hike, negative for a cut.
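The sign convention described in that reply can be formalized as a tiny helper. This is a hypothetical function just to pin down the convention, not an actual Bloomberg field or API:

```python
def describe_rate_expectation(value):
    """Interpret a signed rate-move metric under the convention above:
    the sign gives direction (negative = cut, positive = hike) and the
    absolute value gives the magnitude of the expected change."""
    if value == 0:
        return "no change priced in"
    direction = "hike" if value > 0 else "cut"
    return f"{direction} priced with magnitude {abs(value)}"

print(describe_rate_expectation(-0.25))  # a cut priced in
print(describe_rate_expectation(0.25))   # expectations flipped to a hike
print(describe_rate_expectation(0))      # nothing priced either way
```

So a metric moving from -0.25 to +0.25 is read as cut odds being replaced by hike odds, with the same implied size of move.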
1
0
0
11
Gaurav Jain 🐦
Gaurav Jain 🐦@Aarihaan_Indus·
@HayekAndKeynes Nice summary 👍 Just 1 small correction that you need to make. In Line 3, Higher Inflation = Lower Rate Cut odds (not Higher). I guess it was a typo from your side.
2
1
1
104
HFI Research
HFI Research@HFI_Research·
By narrowly lifting the sanctions, Chinese oil traders will buy everything Iran has in transit with no enforcement worries. Nicely done.
7
22
265
77.5K
0range Crush, CMT
0range Crush, CMT@0rangeCru5h·
They are playing with fire. Reminder: the global economy runs on oil, and it is losing 20% of daily flows, daily. Think of it this way: the radiator in your car springs a leak. Some fluid can leak out without issue, but there is a fine line between enough and not enough. Not enough isn't empty; it's the point where it won't work anymore. I do not want to find this fine line.
1
0
3
170
GregTheAnalyst
GregTheAnalyst@Analyst_G·
And at the same time:
*The U.S. is preparing to deploy elements of the 82nd Airborne Division into the Middle East region -CBS
*"We Need $200 Billion Dollars For A Ground Invasion"
Trump tries to save the market... The truth is he can't get out of the Middle East easily... The weekend is in Iran's hands.
The Spectator Index@spectatorindex

BREAKING: Trump says 'we are getting very close to meeting our objectives as we consider winding down our great Military efforts in the Middle East with respect to the Terrorist Regime of Iran'

5
4
31
3.6K
0range Crush, CMT
0range Crush, CMT@0rangeCru5h·
Encouraging: the US SPR holds more sour crude than sweet. That's a good thing.
0range Crush, CMT tweet media
0
0
0
42