Cameron Thacker @CameronMThacker
337 posts

PhD | Co-founder @playmythical | 0-to-1 builder | recovering physicist | AI since ML | explore gt exploit | type II fun

Los Angeles, CA · Joined August 2013
176 Following · 35.2K Followers
Cameron Thacker @CameronMThacker
@arb8020 Interesting, because I find the residual stream so unsatisfying: each layer's output just gets added in. To me it seems like it's missing something important, and it's surprising that it works as well as it does.
0 replies · 0 reposts · 1 like · 106 views
arb8020 @arb8020
aesthetically i hate every architecture that’s been fucking with the residual stream. get your filthy hands off my beautiful information highway
5 replies · 0 reposts · 29 likes · 2.9K views
Cameron Thacker @CameronMThacker
If you don’t make time, you won’t find it.
0 replies · 0 reposts · 1 like · 61 views
Cameron Thacker @CameronMThacker
@badlogicgames The only real abstraction layer that matters is the one I stop at, obviously. Meanwhile we can all just read the equations and get at the essence without implementing anything lol
0 replies · 0 reposts · 0 likes · 7 views
Mario Zechner @badlogicgames
> or copied and pasted into PyTorch rather than writing bare Python; banger
3 replies · 0 reposts · 41 likes · 5.3K views
Mario Zechner @badlogicgames
@CameronMThacker please don't nerd snipe me. i'd kill for a non-python version of numpy and pytorch.
1 reply · 0 reposts · 2 likes · 140 views
Mario Zechner @badlogicgames
guess it's time to build my own model with spit and duct tape as well now. what a time to be alive ... ridonculous.
12 replies · 1 repost · 309 likes · 18.7K views
Cameron Thacker @CameronMThacker
@michellechen @badlogicgames I would never have expected Cloudflare to be pushing boundaries in these areas like they are, but it's really awesome. You guys are moving so quickly, I can't even keep up!
0 replies · 0 reposts · 1 like · 28 views
Cameron Thacker @CameronMThacker
@zeeg I like it! What ended up as the most useful piece for you? I’m going to give this a try when it’s ready
0 replies · 0 reposts · 0 likes · 475 views
Cameron Thacker @CameronMThacker
Cool idea, but your results don't really show what you imply in your post. Sec. 6: "post-hoc mode runs all layers on every step... does not achieve wall-clock layer skipping", so you aren't actually skipping layers. Thus the only speedup you're getting is from your fused kernels, right? Looking forward to seeing if you can nail the true skip mode.
1 reply · 0 reposts · 10 likes · 673 views
Jaber @Akashi203
been thinking about how wasteful LLM inference is at the token level. every token goes through every layer: "the" gets 32 matmuls, a hard reasoning step also gets 32 matmuls. same compute for wildly different information content.

always a bit silly, but now it's actually expensive: reasoning models emit thousands of thinking tokens per query, and most are "ok", "so", "wait", "let me".

the fix is sitting right there in the representations. for most tokens the hidden state at ~layer 11 is already nearly identical to the final layer; the rest barely moves the output. you just need a cheap per-token signal to notice.

so we built TIDE: tiny MLP routers (~4MB) that sit on a frozen model and predict "has this token converged yet". post-training, no retraining; bolt it onto any HF causal LM. calibration is 2000 wikitext samples, under 3 min on one GPU.

deepseek r1 distill 8B on A100: 100% prefill exit rate, 7.2% lower latency, 99% of decode tokens exit early on a multi-step math problem with the answer unchanged.

8B is the floor. the methodology compounds with depth and output length: 70B+ has ~80 layers of redundancy, and inference-time scaling models emit 10 to 100x more tokens per query. opus class + long chain of thought is where the lever gets real.

paper: arxiv.org/abs/2603.21365
code: github.com/RightNow-AI/TI…

(this kind of kernel-level stuff is what we bake into @runinfrai by default, check it out runinfra.ai)
18 replies · 49 reposts · 480 likes · 28.3K views
Toni Sagayaraj @tonis_a_gayaraj
@sedielem @CSProfKGD I can’t believe CDCD tried so hard to diffuse on embeddings and actually the solution was just to throw one-hots at modern diffusion architectures and let them figure it out
2 replies · 0 reposts · 5 likes · 280 views
Sander Dieleman @sedielem
Continuous language diffusion strikes back! Flow maps are really starting to come into their own as a viable method for language modelling with very fast inference. FMLMs produce good results even with just a _single_ forward pass!
Nicholas Boffi @nmboffi

🤯 big update to our flow map language models paper! we believe this is the future of non-autoregressive text generation.

read about it in the blog: one-step-lm.github.io/blog/
full details in the paper: arxiv.org/abs/2602.16813

we introduce a new class of continuous flow-based language models and distill them into their corresponding flow map for one-step text generation. we beat all discrete diffusion baselines at ~8x speed!

v2 gives a complete theory of the flow map over discrete data, with three equivalent ways to learn it (semigroup, lagrangian, eulerian). it turns out you can train these with cross-entropy objectives that look very similar to standard discrete diffusion — but without the factorization error that kills discrete methods at few steps.

beyond improving results across the board, we showcase properties that are unique to continuous flows. in particular, inference-time steering and guidance become straightforward. autoguidance brings generative perplexity down to 51.6 on LM1B, while discrete baselines completely collapse at the same guidance scale. we also show reward-guided generation for steering topic, sentiment, grammaticality, and safety at inference time — and it works even at 1-2 steps with our flow map model.

simple, well-understood techniques from continuous flows just work incredibly well in practice for language. we're extremely excited about the future of this class of models. stay tuned for results on scaling, reasoning, and reinforcement learning-based fine-tuning. 🚀

2 replies · 27 reposts · 197 likes · 26.1K views
Jack Cole @MindsAI_Jack
What happened to all the AI/ML papers being announced on x? They seem to have disappeared for me. Are others noticing the same?
30 replies · 4 reposts · 424 likes · 45.1K views
Cameron Thacker @CameronMThacker
@PaulNeverovsky It's honestly really nice. Maybe a tad complicated for the average user, but I want this mode lol.
0 replies · 0 reposts · 3 likes · 226 views
Paul Never @PaulNeverovsky
Anthropic just leaked a new Claude app design, and it’s crazy good
119 replies · 176 reposts · 5.2K likes · 550.6K views
Cameron Thacker @CameronMThacker
I'm surprised this is getting a lot of traction. This has been a thing for a long time already. You don't need a heavy plugin: you can just tell your agent to use `codex exec`, or build a simple skill from that like I do. Just make sure to send stderr to /dev/null (2>/dev/null) so thinking tokens don't pollute your context.
0 replies · 0 reposts · 0 likes · 199 views
Romain Huet @romainhuet
We’ve seen Claude Code users bring in Codex for code review and use GPT-5.4 for more complex tasks, so we thought: why not make that easier? Today we’re open sourcing a plugin for it! You can call Codex from Claude Code with your ChatGPT subscription. We love an open ecosystem!
dominik kundel @dkundel

I built a new plugin! You can now trigger Codex from Claude Code! Use the Codex plugin for Claude Code to delegate tasks to Codex or have Codex review your changes using your ChatGPT subscription. Start by installing the plugin: github.com/openai/codex-p…

288 replies · 351 reposts · 5.4K likes · 924.8K views
Cameron Thacker @CameronMThacker
@varunneal Thanks for sharing, that is not intuitive. For me, this competition is just fun to experiment with, and I'm actually only interested in novel solutions and architectures, not treating it like some Kaggle competition 😂
0 replies · 0 reposts · 1 like · 67 views
Cameron Thacker @CameronMThacker
@atulit_gaur Lots of botted content. Post this type of content because you learn when you teach, or because you just want to. Don't do it for external validation.
0 replies · 0 reposts · 0 likes · 37 views
Cameron Thacker @CameronMThacker
@EastlondonDev I think this is a super interesting direction that would also be cool to combine with recursive language models... the REPL is the model?? lol
0 replies · 0 reposts · 1 like · 84 views