Robin Gattis
@ColdShalamov
Crypto-Anarchist; Lead developer for Helix blockchain.
Joined April 2025
46 Following · 28 Followers

@ick_real There was an app on GitHub back in the day that hacked Gmail to abuse its unlimited space as infinite cloud storage, fragmenting files and encoding them into emails.
That's largely why they put any cap at all: people had terabytes of files saved in emails.
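
For the curious, the idea was simple. Here's a rough illustrative sketch, not the actual app's code; the chunk size and function names are my assumptions, and actually sending/fetching the emails over SMTP/IMAP is omitted:

```python
# Hypothetical sketch of the Gmail-as-storage trick described above:
# split a file into chunks, base64-encode each chunk into an email body,
# then rebuild the file by decoding the bodies in order.
import base64

CHUNK_SIZE = 20 * 1024 * 1024  # stay under Gmail's ~25 MB message limit

def file_to_email_bodies(path: str) -> list[str]:
    """Fragment a file into base64 text blobs, one per email."""
    bodies = []
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK_SIZE):
            bodies.append(base64.b64encode(chunk).decode("ascii"))
    return bodies

def email_bodies_to_file(bodies: list[str], path: str) -> None:
    """Reassemble the original file from the ordered email bodies."""
    with open(path, "wb") as f:
        for body in bodies:
            f.write(base64.b64decode(body))
```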

@mark_k I have the Agentify Desktop MCP that lets Codex query my ChatGPT subscription in Pro or heavy, like a high-level guide. Otherwise I wouldn't be using it either.
It's like arguing with a very autistic person

@mark_k When I saw this on Reddit it made me realize the future of A-list films isn't even actors; it's literally just public figures getting their image ripped off and put into shitty action films.
I can't wait
Robin Gattis retweeted

@Comrade_Brandon @WallStreetApes Marxists don’t have a monopoly on fire

@ColdShalamov @WallStreetApes The so-called libertarian is advocating Marxism lmao

Young Americans are losing it and having mental breakdowns because they just can’t afford to live
“I'm f*cking stressed out. I'm f*cking stressed. We should not be working like this. I work my ass off and I can't even f*cking pay bills — all of this sh*t is just pointless”
The cost of living in America has to come drastically down. Americans can’t afford bills, they can’t afford a home, they certainly can’t afford to start a family
An estimated 70% of Gen Z struggle to pay rent, an estimated 53% of Millennials also struggle just to pay rent
This is not the America we want to live in

@rezoundous My $25 annual z.ai sub was the best $25 I ever spent

a lil gust of wind and this nigga flying away 😭
𝘾𝙈@charlesmore25
Jennifer Lopez with her 17 year old son, Max Muniz, during an outing in Beverly Hills 😍

I actually believe context length is often hype.
Now, in AI models that have fantastic long-context handling, yes, it's amazing, but very expensive.
In GPT 5.4 it's set to 400k in Codex anyway, and while you can override it, the accumulation of irrelevant tool-call outputs degrades quality (empirically), which matches my experience that you get better results from the standard 400k than from overriding to 1M.
5.4's compaction is SOTA and I'd bet there's a better sweet spot for the compaction point (400k seems arbitrary), but I haven't experimented since Codex's override is global and it basically nukes your ability to use spark.
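
To make the compaction idea concrete, here's a hedged sketch of the general technique, not Codex's actual logic; the token heuristic, the 400k budget, and all names are illustrative. When the transcript nears the budget, elide the oldest tool outputs first, since they're usually the least relevant:

```python
# Illustrative only: NOT Codex's real compaction, just the general idea of
# trimming stale tool-call output when a conversation nears its context budget.
from dataclasses import dataclass

@dataclass
class Message:
    role: str  # "user", "assistant", or "tool"
    text: str

def estimate_tokens(msg: Message) -> int:
    # Crude heuristic: roughly 4 characters per token.
    return max(1, len(msg.text) // 4)

def compact(history: list[Message], budget: int = 400_000) -> list[Message]:
    """Elide the oldest tool outputs until the history fits the budget."""
    kept = list(history)
    total = sum(estimate_tokens(m) for m in kept)
    for i, msg in enumerate(kept):
        if total <= budget:
            break
        if msg.role == "tool":
            total -= estimate_tokens(msg)
            kept[i] = Message("tool", "[tool output elided by compaction]")
            total += estimate_tokens(kept[i])
    return kept
```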

@kobayashizeytin @ColdShalamov @rezoundous Yeah, any serious codebase and you might get yourself into trouble. 400k context too (as opposed to 1M). Ironically, you need very little to get to the moon, as per Grok haha:
