Cameron Thacker

325 posts

@CameronMThacker

PhD | Co-founder @playmythical | 0-to-1 builder | recovering physicist | AI since ML | explore > exploit | type II fun

Los Angeles, CA · Joined August 2013
175 Following · 36.9K Followers
Toni Sagayaraj
Toni Sagayaraj@tonis_a_gayaraj·
@sedielem @CSProfKGD I can’t believe CDCD tried so hard to diffuse on embeddings and actually the solution was just to throw one-hots at modern diffusion architectures and let them figure it out
2
0
5
248
Sander Dieleman
Sander Dieleman@sedielem·
Continuous language diffusion strikes back! Flow maps are really starting to come into their own as a viable method for language modelling with very fast inference. FMLMs produce good results even with just a _single_ forward pass!
Nicholas Boffi@nmboffi

🤯 big update to our flow map language models paper! we believe this is the future of non-autoregressive text generation. read about it in the blog: one-step-lm.github.io/blog/ full details in the paper: arxiv.org/abs/2602.16813

we introduce a new class of continuous flow-based language models and distill them into their corresponding flow map for one-step text generation. we beat all discrete diffusion baselines at ~8x speed!

v2 gives a complete theory of the flow map over discrete data, with three equivalent ways to learn it (semigroup, lagrangian, eulerian). it turns out you can train these with cross-entropy objectives that look very similar to standard discrete diffusion — but without the factorization error that kills discrete methods at few steps.

beyond improving results across the board, we showcase properties that are unique to continuous flows. in particular, inference-time steering and guidance become straightforward. autoguidance brings generative perplexity down to 51.6 on LM1B, while discrete baselines completely collapse at the same guidance scale. we also show reward-guided generation for steering topic, sentiment, grammaticality, and safety at inference time — and it works even at 1-2 steps with our flow map model.

simple, well-understood techniques from continuous flows just work incredibly well in practice for language. we’re extremely excited about the future of this class of models. stay tuned for results on scaling, reasoning, and reinforcement learning-based fine-tuning. 🚀

2
26
195
25.3K
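For context on the tweet above: in the flow-map-matching literature, the flow map is the two-time solution operator of the generative probability-flow ODE. A minimal sketch of the defining relations, using notation assumed from that literature rather than taken from this particular paper:

```latex
% Probability-flow ODE with velocity field b, driving samples from noise to data:
%   \frac{d}{dt} X_t = b(X_t, t)
% The flow map X_{s,t} jumps from time s to time t in a single evaluation:
%   X_{s,t}(x) = x + \int_s^t b(X_u, u)\, du, \quad X_s = x
% Semigroup (consistency) condition — composing short jumps equals one long jump,
% which is what makes distillation into one-step generation possible:
%   X_{t,u}\bigl(X_{s,t}(x)\bigr) = X_{s,u}(x), \qquad X_{s,s}(x) = x
```

A model trained to satisfy these relations can generate in one step by evaluating X_{0,1} directly, rather than integrating the ODE with many small steps.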
Jack Cole
Jack Cole@MindsAI_Jack·
What happened to all the AI/ML papers being announced on x? They seem to have disappeared for me. Are others noticing the same?
31
4
425
45.1K
Cameron Thacker
Cameron Thacker@CameronMThacker·
@PaulNeverovsky It's honestly really nice. Maybe a tad complicated for the average user, but I want this mode lol.
0
0
3
195
Paul Never
Paul Never@PaulNeverovsky·
Anthropic just leaked a new Claude app design, and it’s crazy good
116
177
5.2K
529.8K
Cameron Thacker
Cameron Thacker@CameronMThacker·
I'm surprised this is getting a lot of traction. This has been a thing for a long time already. You don't need a heavy plugin. You can just tell your agent to use `codex exec`, or build a simple skill around it like I do. Just make sure to send stderr to /dev/null (`2>/dev/null`) so thinking tokens don't pollute your context.
0
0
0
193
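The workflow described above can be sketched as a minimal shell pattern (the `codex exec` prompt is illustrative, not from the original post; the redirection itself is demonstrated with a stand-in command so the snippet runs without the codex CLI installed):

```shell
# Delegating to Codex non-interactively, discarding stderr so thinking
# tokens / progress noise don't pollute the calling agent's context:
#   codex exec "Review the staged changes and list potential bugs" 2>/dev/null
#
# The redirection pattern itself, with a stand-in noisy command:
# stdout is captured, stderr is thrown away.
result=$( { echo "answer"; echo "thinking tokens..." >&2; } 2>/dev/null )
echo "$result"   # only the stdout line survives
```

A skill wrapping this is just the same one-liner with the prompt templated in; the key detail is that `2>/dev/null` applies before command substitution captures stdout.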
Romain Huet
Romain Huet@romainhuet·
We’ve seen Claude Code users bring in Codex for code review and use GPT-5.4 for more complex tasks, so we thought: why not make that easier? Today we’re open sourcing a plugin for it! You can call Codex from Claude Code with your ChatGPT subscription. We love an open ecosystem!
dominik kundel@dkundel

I built a new plugin! You can now trigger Codex from Claude Code! Use the Codex plugin for Claude Code to delegate tasks to Codex or have Codex review your changes using your ChatGPT subscription. Start by installing the plugin: github.com/openai/codex-p…

287
349
5.4K
914.6K
Cameron Thacker
Cameron Thacker@CameronMThacker·
@varunneal Thanks for sharing - that is not intuitive. For me, this competition is just fun to experiment with and I'm actually only interested in novel solutions and architectures, not treating it like some kaggle competition 😂
0
0
1
66
Cameron Thacker
Cameron Thacker@CameronMThacker·
@atulit_gaur Lot of botted content. Post this type of content because you learn when you teach - or you just want to. Don't do it for external validation.
0
0
0
36
Cameron Thacker
Cameron Thacker@CameronMThacker·
@EastlondonDev I think this is a super interesting direction that would also be cool to incorporate with recursive language models...the repl is the model?? lol
0
0
1
82
Logan Kilpatrick
Logan Kilpatrick@OfficialLoganK·
Going to be a fun week of launches : )
432
151
3.7K
357.2K
Cameron Thacker
Cameron Thacker@CameronMThacker·
@Dimillian Oh that's going to be very useful when live. Currently, having to remember to either visually check or make sure the agent pulls the latest status is annoying, and when moving fast it could definitely be problematic.
0
0
0
310
Thomas Ricouard
Thomas Ricouard@Dimillian·
The Codex team added a Codex skill to babysit PRs on their repo, and I want to try that myself and see how it works. It tries to ensure the CI passes, that all comments are resolved, etc. Code review is where the biggest bottleneck is right now. github.com/openai/codex/c…
4
4
218
22.1K
Cameron Thacker
Cameron Thacker@CameronMThacker·
@honnibal I find myself using codex more and more now. I miss subagents, but with the new codex app the threads view makes it quite easy to have many things going on at the same time.
0
0
1
292
Matthew Honnibal
Matthew Honnibal@honnibal·
Does anyone else find Claude Code incredibly sneaky as soon as you get a test failure that might actually matter? I have to fight very hard to make it investigate instead of just redefine the test and say it's supposed to be like that
30
2
76
11.2K
Cameron Thacker
Cameron Thacker@CameronMThacker·
Well, do you consider our brain one thing or multiple systems that interconnect? I love thinking about the “architecture of AI” in biological terms. We already have specialized subagents like the hippocampus for memory and the cerebellum for movement (among other things). Is this the optimal way to design all intelligences, or just biological ones? Is it even optimal? So interesting to think about though.
1
0
2
68
Cameron Thacker
Cameron Thacker@CameronMThacker·
@jon_barron But aren't these traces sometimes completely disconnected from what the agent implements? Or are you saying, future models will be more robust and these CoT traces will be built into products etc?
0
0
0
161
Jon Barron
Jon Barron@jon_barron·
The programmer’s entire moat in 2026 is the ability to understand CoT traces from coding agents. The fraction of our time we spend doing this will increase and saturate at 100%. The remaining epsilon% will be demanding experiments and unit tests in response to CoT traces.
8
6
64
6.4K
Vercel
Vercel@vercel·
We're experimenting with ways to keep AI agents in sync with the exact framework versions in your projects. Skills, CLAUDE.md, and more. But one approach scored 100% on our Next.js evals: vercel.com/blog/agents-md…
77
139
1.6K
467.7K
Cameron Thacker
Cameron Thacker@CameronMThacker·
This sounds intuitive but it's completely backwards. Constraints breed creativity. The problem one is trying to solve is itself a constraint. You're posting this on X, built on a 140-character limit that forced people to be creative. Toy Story was famously created because CGI couldn't render realistic skin texture. The pattern shows up everywhere. Freedom and zero pressure are comfortable, but comfort rarely produces anything useful.
Niels Rogge@NielsRogge

You know that researchers need freedom and zero pressure for creativity right? Not $180M in funding which creates crazy pressure from VCs? The Transformer and Diffusion models weren’t born this way

0
0
2
311
Enter the Mythos
Enter the Mythos@EnterTheMythos·
Pudgy Party items are now live on Pulse Market! Pulse Market is still in beta, but this is the next step in the broader transition to our new marketplace tech as we continue expanding Mythos on @world_chain_. FIFA Rivals will be up next, alongside other new features. Listings are denominated in USDC for price stability, and Mythos chain fee tokenomics remain consistent. Pulse Market activity continues to flow through Mythos rails, including MYTH burn mechanics for transaction fees.
18
6
61
10.9K
Cameron Thacker
Cameron Thacker@CameronMThacker·
@jon_barron @aidanmantine I think the DNA bottleneck also implies most of the "compute" was wasted too though right? I really think it points more to compression or architectural/algorithmic improvements
0
0
3
198
Jon Barron
Jon Barron@jon_barron·
Thanks for the kind words! Genetic pressure on humans has definitely yielded a very good learning algorithm, and there are surely better learning algorithms out there yet to be discovered. But this "a human only sees N tokens" framing, doesn't it still endorse the regular data scaling argument? I may have only seen N tokens myself, but all my ancestors saw >1e100 N tokens in total. Those learnings may have gotten squeezed into a DNA bottleneck but scaling data was still the enabling factor.
6
1
86
5.6K
Jon Barron
Jon Barron@jon_barron·
This idea that intelligence is solely a function of what you've observed since birth and not also a function of the 500 million years of evolution that preceded your birth is surprisingly sticky despite being demonstrably untrue.
Flapping Airplanes@flappyairplanes

The proof that this is possible is all around us: whereas current systems are trained on essentially all of accessible history, humans exceed AI capabilities despite seeing at most a few billion text tokens by adulthood.

57
55
1.4K
119K