Cameron Thacker

317 posts

@CameronMThacker

PhD | Co-founder @playmythical | 0-to-1 builder | recovering physicist | AI since ML | explore gt exploit | type II fun

Los Angeles, CA · Joined August 2013
174 Following · 37.9K Followers
Logan Kilpatrick@OfficialLoganK·
Going to be a fun week of launches : )
Cameron Thacker@CameronMThacker·
@Dimillian Oh, that's going to be very useful when it's live. Currently, having to remember to either visually check or make sure the agent pulls the latest status is annoying, and when moving fast that could definitely be problematic.
Thomas Ricouard@Dimillian·
The Codex team added a Codex skill to babysit PRs on their repo; I want to try that myself and see how it works. It tries to ensure that CI passes, that all comments are resolved, etc. Code review is where the biggest bottleneck is right now. github.com/openai/codex/c…
Cameron Thacker@CameronMThacker·
@honnibal I find myself using Codex more and more now. I miss subagents, but with the new Codex app, the threads view makes it quite easy to have many things going on at the same time.
Matthew Honnibal@honnibal·
Does anyone else find Claude Code incredibly sneaky as soon as you get a test failure that might actually matter? I have to fight very hard to make it investigate instead of just redefining the test and saying it's supposed to be like that.
Cameron Thacker@CameronMThacker·
Well, do you consider our brain one thing, or multiple systems that interconnect? I love thinking about the “architecture of AI” in biological terms. We already have specialized subagents like the hippocampus for memory and the cerebellum for movement (among other things). Is this the optimal way to design all intelligences, or just biological ones? Is it even optimal? So interesting to think about, though.
Cameron Thacker@CameronMThacker·
@jon_barron But aren't these traces sometimes completely disconnected from what the agent implements? Or are you saying future models will be more robust and these CoT traces will be built into products, etc.?
Jon Barron@jon_barron·
The programmer’s entire moat in 2026 is the ability to understand CoT traces from coding agents. The fraction of our time we spend doing this will increase and saturate at 100%. The remaining epsilon% will be demanding experiments and unit tests in response to CoT traces.
Vercel@vercel·
We're experimenting with ways to keep AI agents in sync with the exact framework versions in your projects. Skills, CLAUDE.md, and more. But one approach scored 100% on our Next.js evals: vercel.com/blog/agents-md…
Cameron Thacker@CameronMThacker·
This sounds intuitive, but it's completely backwards. Constraints breed creativity. The problem one is trying to solve is itself a constraint. You're posting this on X, built on a 140-character limit that forced people to be creative. Toy Story was famously created because CGI couldn't render realistic skin texture. The pattern shows up everywhere. Freedom and zero pressure are comfortable, but comfort rarely produces anything useful.
Niels Rogge@NielsRogge

You know that researchers need freedom and zero pressure for creativity right? Not $180M in funding which creates crazy pressure from VCs? The Transformer and Diffusion models weren’t born this way

Enter the Mythos@EnterTheMythos·
Pudgy Party items are now live on Pulse Market! Pulse Market is still in beta, but this is the next step in the broader transition to our new marketplace tech as we continue expanding Mythos on @world_chain_. FIFA Rivals will be up next, alongside other new features. Listings are denominated in USDC for price stability, and Mythos chain fee tokenomics remain consistent. Pulse Market activity continues to flow through Mythos rails, including MYTH burn mechanics for transaction fees.
Cameron Thacker@CameronMThacker·
@jon_barron @aidanmantine I think the DNA bottleneck also implies most of the "compute" was wasted too though right? I really think it points more to compression or architectural/algorithmic improvements
Jon Barron@jon_barron·
Thanks for the kind words! Genetic pressure on humans has definitely yielded a very good learning algorithm, and there are surely better learning algorithms out there yet to be discovered. But this "a human only sees N tokens" framing, doesn't it still endorse the regular data scaling argument? I may have only seen N tokens myself, but all my ancestors saw >1e100 N tokens in total. Those learnings may have gotten squeezed into a DNA bottleneck but scaling data was still the enabling factor.
Jon Barron@jon_barron·
This idea that intelligence is solely a function of what you've observed since birth and not also a function of the 500 million years of evolution that preceded your birth is surprisingly sticky despite being demonstrably untrue.
Flapping Airplanes@flappyairplanes

The proof that this is possible is all around us: whereas current systems are trained on essentially all of accessible history, humans exceed AI capabilities despite seeing at most a few billion text tokens by adulthood.

Cameron Thacker@CameronMThacker·
@flappyairplanes Not only very interesting, but also necessary imo. Though I do think your premise is not correct. I might send an email as to why 😆
Flapping Airplanes@flappyairplanes·
Announcing Flapping Airplanes! We’ve raised $180M from GV, Sequoia, and Index to assemble a new guard in AI: one that imagines a world where models can think at human level without ingesting half the internet.
Malte Ubl@cramforce·
The core insight here:
- We know agents are great at filesystems, because the models have been trained on coding tasks which operate on large filesystems
- So, we all migrated our agent inputs to have a filesystem representation
- And this *also* extends to past context:
  - Previous conversations: go in the filesystem
  - The whole frickin' context pre-compaction: into the filesystem, so it's still there after compaction
  - Agent todo list: filesystem

This may not be how we build agents forever, but it's the right starting point now
Cursor@cursor_ai

Learn about how we use the filesystem to improve context efficiency for tools, MCP servers, skills, terminals, chat history, and more. cursor.com/blog/dynamic-c…

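A minimal sketch of the filesystem-as-context pattern the tweet describes, assuming a made-up directory layout; the paths and helper names here are my own illustration, not Cursor's actual implementation:

```python
import os

# Hypothetical layout: anything the agent may need after compaction is
# written to disk, where it can be re-read on demand instead of living
# only in the context window.
def save_agent_state(root: str, history: list[str], todos: list[str]) -> None:
    os.makedirs(os.path.join(root, "conversations"), exist_ok=True)
    # Previous conversations: one markdown file per session.
    with open(os.path.join(root, "conversations", "session-001.md"), "w") as f:
        f.write("\n\n".join(history))
    # Agent todo list: a plain checklist the model can read back and rewrite.
    with open(os.path.join(root, "TODO.md"), "w") as f:
        f.write("\n".join(f"- [ ] {item}" for item in todos))

def load_todos(root: str) -> list[str]:
    # After compaction, the agent recovers its plan from disk.
    with open(os.path.join(root, "TODO.md")) as f:
        return [line.removeprefix("- [ ] ") for line in f.read().splitlines()]
```

The design choice being illustrated: state stored as plain files survives context compaction and is legible to a coding-trained model "for free".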
Cameron Thacker@CameronMThacker·
@garybasin @veggie_eric The more experience I get, the more I realize technical problems are one of the simpler classes of problems to solve 🤣
Eric Jiang@veggie_eric·
Every company should hire an internal AI transformation person. No need for a fancy title like Head of AI. Just give them full latitude to clean up inefficiencies across sales, HR, finance, etc. There are so many manual workflows and so much arcane BS that can easily be fixed with LLMs
Cameron Thacker@CameronMThacker·
@karanjagtiani04 The subcalls just receive their slice of the context + query + subtask. The nice thing about this approach is that it is all managed by the root LLM.
Karan Jagtiani@karanjagtiani04·
@CameronMThacker Interesting approach. Slicing focuses attention and reduces noise, definitely useful for handling large datasets like support tickets. Curious about the implementation details for those subcalls. How do you manage the context in those scenarios?
Cameron Thacker@CameronMThacker·
RLMs in one sentence: MapReduce, but the LLM decides the splits - and what to do with each split. LLMs all the way down. 🧵👇
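That one-sentence definition can be sketched as a skeleton. Everything here is illustrative: `call_llm` is a hypothetical stand-in that returns canned replies so the sketch runs offline, and the prompt formats are invented, not taken from any actual RLM implementation.

```python
import json

def call_llm(prompt: str) -> str:
    """Stand-in for a real chat-completion call (hypothetical; no network).

    Returns deterministic canned replies so the skeleton runs offline."""
    if prompt.startswith("SPLIT"):
        # Pretend the root LLM chose two slices, each with its own subtask.
        return json.dumps([
            {"slice": "first half of the tickets", "subtask": "extract complaints"},
            {"slice": "second half of the tickets", "subtask": "extract complaints"},
        ])
    if prompt.startswith("MAP"):
        return "partial: " + prompt.splitlines()[1]
    return "combined answer"

def rlm(context: str, query: str) -> str:
    # 1. The root LLM decides the splits *and* what to do with each split.
    plan = json.loads(call_llm(f"SPLIT the context for query {query!r}:\n{context}"))
    # 2. Each subcall receives only its slice of context + the query + its subtask.
    partials = [
        call_llm(f"MAP\nslice={p['slice']} subtask={p['subtask']} query={query}")
        for p in plan
    ]
    # 3. The root LLM also performs the reduce over the partial answers.
    return call_llm("REDUCE:\n" + "\n".join(partials))
```

Calling `rlm(tickets, query)` fans out one model call per slice plus a final reduce call, with the split plan itself produced by the root model rather than fixed code.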
Cameron Thacker@CameronMThacker·
Polymarket over/under on Claude Code shipping this by Q2?
Cameron Thacker@CameronMThacker·
Where it shines: decomposable tasks — evidence gathering, fact extraction, contradiction reconciliation, cited summaries. Still hard: "global vibe" questions (tone/sentiment across everything) unless you engineer good aggregation.