Mark Kwong
@M3kw9

18.6K posts

iOS Engineer (DM open for help) - I love predicting things. Follow for future insights.

HK-T.O.-SF · Joined February 2009
619 Following · 224 Followers
Mark Kwong@M3kw9·
@rsuyoy I’ve been using it the same way for months and I haven’t seen a noticeable difference.
Yousr@rsuyoy·
Yeah man OpenAI’s absolutely bullshitting us with the rate limits, there’s no way I just burned through 1/5 of the weekly limit in one light session
[image attached]
Daniele Franceschi@Daniele_Media·
Some drama between the #BlueJays and #Dodgers. George Springer approached HP umpire Dan Bellino to inquire about the amount of warmup time for Shohei Ohtani. Dave Roberts was visibly annoyed in the dugout. (📹: @sportsnet) #BlueJays50
sampson ireman@IremanSamp32901·
@M3kw9 @iTalkStudiosYT Not at all, the best World Series ever. I respected the Dodgers, and hats off to them. But the gloating over winning against a team so injured, especially their starters? That’s pathetic, and I hope they get the same injury bug, but closer to the playoffs. Karma’s a bitch.
Isaac@iTalkStudiosYT·
Nah this is a nasty humiliation ritual for the Blue Jays...they get destroyed and then have the man that busted their championship months ago just dicking around on the mound to close the game
laxman@llmluthor·
@ricburton @paulg stop bullying me pal, it's not that deep, is it? btw yap is kino, keep crying about it...
Paul Graham@paulg·
I got tired of hearing that YC fired Sam, so here's what actually happened:
[image attached]
Parzival - ∞/89@whyarethis·
This is insane. Gemma 4 26B running at 13GB on my MacBook M1, full context window. 20-40 tokens a second. This was a REAP model by @0xseraph further optimized through coherence physics. Dead heads were pruned and replaced by SVD rotations. Weights were quantized, and the KV cache was optimized to be negligible. I am now working to get the speed up higher. Wild to be talking to a local LLM which has been shrunk through the oscillator physics I have been working on for 6+ months now. #project89
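(Aside: the post above shares no code, but two of the standard techniques it name-drops are easy to sketch. Below is a toy numpy illustration of SVD low-rank truncation and symmetric int8 weight quantization; the names, shapes, and sizes are hypothetical, not @whyarethis's actual pipeline.)

```python
# Toy sketch of two compression steps named in the post above:
# SVD low-rank truncation and symmetric int8 quantization.
# Illustrative only -- not the author's actual pipeline.
import numpy as np

def svd_truncate(W: np.ndarray, rank: int) -> np.ndarray:
    """Best rank-`rank` approximation of W (Eckart-Young theorem)."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    return (U[:, :rank] * S[:rank]) @ Vt[:rank, :]

def quantize_int8(W: np.ndarray) -> tuple[np.ndarray, float]:
    """Symmetric per-tensor int8 quantization: returns ints plus a scale."""
    scale = float(np.abs(W).max()) / 127.0
    q = np.clip(np.round(W / scale), -127, 127).astype(np.int8)
    return q, scale

W = np.random.randn(1024, 1024).astype(np.float32)
W_low = svd_truncate(W, rank=128)           # keep 1/8 of the directions
q, scale = quantize_int8(W_low)             # 4x smaller than float32 storage
W_restored = q.astype(np.float32) * scale   # dequantize before matmuls
print("relative error:", np.linalg.norm(W - W_restored) / np.linalg.norm(W))
```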
sampson ireman@IremanSamp32901·
@iTalkStudiosYT Gloating over beating a team with 4 starters injured, plus their catcher and right fielder injured, and last night another starter having to leave in the 3rd with an injury, is wild. I hope the Dodgers don't go through this, but you know how karma works.
[image attached]
Alex Kehr@alexkehr·
the lack of color consistency with apple products drives me crazy. am I weird for wanting all my tech to perfectly match?
[image attached]
Mark Kwong@M3kw9·
@jxnlco No point over-optimizing, just use stock Codex/Claude/Gemini.
jason liu@jxnlco·
What’s everyone’s thoughts on oh my codex?
The Long View@HayekAndKeynes·
Trump is doing a great job so far, really sticking to the message. No weave. Just talking points. Projecting strength (Iran’s military is toast) and making his case.

Did a good job highlighting that Iran is to blame for higher fuel prices and clearly portraying them as a terrorist “cancer” attacking innocent civilian ships. Stressed they can’t get nukes, and that they have prevented that. Lastly, regime change was not the goal, but they did eliminate the leaders responsible for terrorist attacks.

Also highlighted that Hormuz is not a strategic priority of the US. If NATO and others rely on it, they need to take it. If they need oil, the US has oil; buy it. Even talked about how OBBBA will offset fuel price impacts.

Market was weaving aimlessly on low liquidity as Trump hadn’t given his plan yet. Sold off once he said he would hit power infrastructure.
Mark Kwong@M3kw9·
@DodgerBlue1958 Pitchers are fresh and they all have extra motivation to beat the Dodgers.
Dodger Blue@DodgerBlue1958·
Shohei Ohtani - .176
Kyle Tucker - .182
Mookie Betts - .143
Freddie Freeman - .174
Will Smith - .211
Teoscar Hernández - .200
Not great! The offense will remain flat until a few of them start hitting.
GBX@GBX_Press·
🚨 BREAKING: Iran announced that the drone used to destroy the US-made Boeing E-3 Sentry AWACS aircraft—valued at $300 million—was a Shahed-136. The cost of a Shahed-136 kamikaze drone is estimated at approximately $20,000.
borovik@3orovik·
Trump is playing 4D chess:
- War ends -> oil drops
- Inflation collapses
- Powell out -> Kevin Warsh cuts rates
- Liquidity floods into the economy
- Markets giga pump (including crypto)
Right into midterms. Republicans sweep. That’s the plan.
Pankaj Kumar@pankajkumar_dev·
Google AI Pro Needs More AI Power, Not More Storage

Google AI Pro moving from 2TB → 5TB is nice, but honestly, that’s not what most users signed up for. Most people bought Google AI Pro for AI usage, not storage.

What Google AI Pro already does well:
- 5TB cloud storage (great, but not the main reason people buy it)
- Access to Gemini CLI and tools like Jules
- 1000 AI credits for usage
- NotebookLM, etc.
- Antigravity (but low quota and still feels unfinished)

We don’t need more storage. We need a plan focused purely on AI. Give us:
- Higher AI usage limits
- Better Antigravity quota
- A more polished experience

A dedicated “AI-only” plan would make way more sense for actual users.
[image attached]
Mark Kwong@M3kw9·
@ai_for_success Why do they have both Antigravity and Code Assist? They serve the same function.
AshutoshShrivastava@ai_for_success·
Name me a better subscription than Gemini for 20 dollars. Now 5TB storage, up from 2TB. You get:
1. Gemini latest model access
2. NotebookLM
3. Antigravity
4. 5TB cloud storage
5. Gemini CLI, Jules, and Code Assist
6. Gemini in Gmail, Docs, Vids, and more
7. 1,000 monthly AI credits
I must have missed a few things here...
[image attached]
Mark Kwong@M3kw9·
@PaulSolt Usually high; medium to save time. But you need to use it enough to know when to use which.
Paul Solt@PaulSolt·
What level of reasoning works best for Codex? How do you get better results?
Paul Solt@PaulSolt·
Medium or High reasoning?
Mark Kwong@M3kw9·
@LLMJunky To avoid that, I spawn agents and have the main controller agent auto-collect updates.
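(A minimal sketch of the fan-out/fan-in pattern Mark describes, assuming a Python asyncio stack; the tweet names no framework, so the agent internals here are stand-ins.)

```python
# Minimal sketch of the pattern described above: spawn sub-agents and let a
# controller auto-collect their updates as they arrive. The agent body is a
# stand-in -- the tweet doesn't say what stack is actually used.
import asyncio

async def sub_agent(name: str, task: str, updates: asyncio.Queue) -> None:
    # Placeholder for a real agent run (model call, tool use, etc.).
    await asyncio.sleep(0.1)
    await updates.put((name, f"done: {task}"))

async def controller(tasks: list[str]) -> list[tuple[str, str]]:
    updates: asyncio.Queue = asyncio.Queue()
    workers = [
        asyncio.create_task(sub_agent(f"agent-{i}", t, updates))
        for i, t in enumerate(tasks)
    ]
    # Auto-collect one update per task instead of polling each agent.
    collected = [await updates.get() for _ in tasks]
    await asyncio.gather(*workers)
    return collected

print(asyncio.run(controller(["refactor auth", "write tests", "update docs"])))
```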
am.will@LLMJunky·
Codex compaction is truly the #1 killer feature for me, and has been ever since 5.2.

I used to also have context window anxiety, and built an entire orchestration system to complete tasks within ~40% of the context window. It was a huge amount of manual work, and very time consuming. That is, until @steipete made me aware of their new compaction endpoint.

It was a difficult habit to break, especially coming from Claude models, where you absolutely have to stay out of the "dumb zone." But I started to trust it more and more. Now, I literally don't worry about compaction at all. It doesn't matter if it compacts 7-8 times.

That is why, aside from very large docs or codebases, I don't really care that much about large context windows. Why should I? If you're still obsessing about resetting your context window, I would encourage you to try GPT 5.4 and just let it ride.

I do recommend you write your spec to a markdown file so that the agents can manage state and track progress through compactions. This helps keep it on track. My strategy is to have the orchestrator update the spec after every task is complete with a concise log of its work.

To me, this is the biggest difference maker between Codex and literally every other product.
dominik kundel@dkundel

I haven't had context window anxiety since GPT-5.1-Codex-Max when the model got natively trained on compaction. I let a thread go on until the feature is done and rely on auto compaction! You can even bring that same compaction into your own apps 👇

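(The spec-file habit am.will describes is easy to sketch. A minimal version, assuming Python and a made-up SPEC.md entry format; nothing here is from their actual setup.)

```python
# Hedged sketch of the spec-file habit described above: persist task state in
# a markdown file so progress survives context compaction. The file name and
# entry format are made up for illustration.
from datetime import datetime, timezone
from pathlib import Path

SPEC = Path("SPEC.md")  # hypothetical spec/progress file

def log_task_done(task: str, summary: str) -> None:
    """Append a concise progress entry a post-compaction turn can re-read."""
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M")
    with SPEC.open("a", encoding="utf-8") as f:
        f.write(f"\n- [x] {stamp} {task}: {summary}")

log_task_done("add login endpoint", "handler + tests passing, see auth.py")
```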
Derya Unutmaz, MD@DeryaTR_·
Presumably Anthropic had fixed the Claude Code bug, so I wanted to give them the benefit of the doubt one more time & compare its rate limits with Codex. I used this simple prompt for planning and building: “Build a cute pixel RPG to play on the iPhone.” Both my Claude Max account and OpenAI Pro account started with fresh 5-hour resets.

After several follow-up prompts, Codex completed a decent, simple, but fully playable RPG with exploration, crafting, collecting, fighting enemies & NPC interactivity features for the iPhone. I still had 93% of my 5-hour rate limit remaining with Codex using 5.4 xhigh & had used only 2% of my weekly limit.

Claude Code, by contrast, still had not produced even a minimally viable playable version, with most of the above features missing, after already consuming 80% of the 5-hour limit. I asked for one more revision, but ran out of the 5-hour limit before it could even complete that. I then waited for a new reset and tried again to see whether it could finish the minimal game. It used up another 12% of the 5-hour limit and claimed it had added all the missing features. Yet the game was exactly the same, still unplayable, and not a single one of those missing elements had been added. Claude had also already used 8% of my weekly limit, still without delivering a product that Codex had already completed.

Given that Anthropic also did not even bother to reset the limits from the token-eating bug, I have now decided to cancel my Claude Max subscription. It has been a very frustrating waste of time dealing with it, and it has consumed all of my goodwill toward Anthropic.
[two images attached]
Chris@chatgpt21·
🚨 GREG BROCKMAN JUST EXPLAINED THE NEXT LEAP WITH SPUD (GPT 5.5)

Greg Brockman: "I think of Spud as a new base, as a new pre-train... I'd say it's like we have maybe two years worth of research that is coming to fruition in this model."

Greg says: "There's this thing called 'big model smell'... when these models are just actually much smarter, much more capable, that they bend to you much more, and you feel it."

Here is exactly what we are getting with the upcoming GPT 5.5 rollout:
• "Big Model Smell": A massive qualitative shift. The models stop being rigid and start intuitively bending to what you actually want them to do.
• Unlocking New Abilities: It can just do things it wasn't able to before. The frustrating moments where the AI "doesn't quite get it" and needs you to over-explain are going away.
• Longer Time Horizons: The ceiling is being completely raised. The new models will be able to autonomously solve complex, open-ended problems over much longer periods of time.
• A New Pre-Train Base: This is not an incremental fine-tune. Spud is a completely new foundation built to accelerate the entire economy.
Tibo@thsottiaux·
Codex growth goes weeee. And it's not even next week. Who even works on April 1st.
Pedro Domingos@pmddomingos·
If LLMs are so smart, why do they need all these prompts, harnesses, post-training, scaffolding, etc.?