Sergio Charles
@eigentopology

666 posts

co-founder @thesis_labs | accelerating ai r&d | prev. @google, @nvidia, @stanford

San Francisco, CA · Joined November 2018
440 Following · 362 Followers
dax @thdxr
you're probably underestimating how crazy things are
[image]
169 replies · 430 reposts · 5.4K likes · 346.8K views
Brad @Brad08414464
I wonder if we'll ever see a robotics company that takes off as fast as OpenAI and Anthropic did. Surely there will be a startup that challenges Tesla the same way OpenAI challenged Google.
36 replies · 1 repost · 81 likes · 6.9K views
François Chollet @fchollet
Current AI is a librarian of existing knowledge. Science requires an explorer of the unknown. You don't win a Nobel Prize by staying in the library.
205 replies · 235 reposts · 1.8K likes · 101.8K views
Sergio Charles @eigentopology
@paulg Robotics is what takes us to a future of abundance. AGI in a bottle alone is not sufficient. The notion of value also becomes tenuous in such a timeline.
0 replies · 0 reposts · 1 like · 2.2K views
Paul Graham @paulg
"Anything made before 2028 is going to be valuable." — an OpenAI employee implicitly discloses their timetable
289 replies · 343 reposts · 8.1K likes · 1.1M views
Sergio Charles @eigentopology
@tszzl @rhydhimma How so? AlphaFold seems quite revolutionary. Not trying to downplay the raw power of coding agents, as they could in theory help develop an AlphaFold, but what AlphaFold represented at the time was decades of effort, much like coding agents do now.
0 replies · 0 reposts · 7 likes · 1.6K views
roon @tszzl
@rhydhimma they are far more significant than alphafold and it's not close imo
21 replies · 9 reposts · 537 likes · 25K views
Rhydhimma (sci/acc) @rhydhimma
Codex and Claude Code are probably the most revolutionary products of this century. For now. Maybe not as significant as AlphaFold, and all the PhDs who slogged to get protein structure data. Data is the keyword.
6 replies · 2 reposts · 167 likes · 23.6K views
andrew gao @itsandrewgao
nooo how am i supposed to parameter golf when there are no 8xh100s help @willdepue
[image]
14 replies · 2 reposts · 155 likes · 17.8K views
Elon Musk @elonmusk
@demishassabis 𝑝(simulation) ≈ 1. However, within the simulation, hardware is extremely hard to do. Only those who have bled on a production line can understand.
238 replies · 137 reposts · 1.6K likes · 166.4K views
Elon Musk @elonmusk
Matter, Energy & Intelligence
12.7K replies · 13.4K reposts · 111.5K likes · 53.4M views
Yuchen Jin @Yuchenj_UW
"Dobby runs my entire house over WhatsApp." Andrej Karpathy is running OpenClaw like Tony Stark runs JARVIS.
[image]
31 replies · 55 reposts · 991 likes · 42.9K views
Sergio Charles @eigentopology
Absolutely. At @thesis_labs, we're accelerating AI R&D by delegating the tasks you don't want to do to a swarm of research-assistant agents. Per Jevons' paradox, this should only accelerate the rate at which you do research itself. Think things like tinkering with hparams, writing experiment code for new hypotheses, configuring multi-GPU runs, etc.
0 replies · 0 reposts · 0 likes · 20 views
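The delegation pattern described above can be framed as a queue of independent, self-contained task specs that agents pick up and run. A minimal sketch of the "tinkering with hparams" case, assuming a plain random sweep over a small search space (all names and the search space itself are illustrative, not Thesis Labs' actual system):

```python
import itertools
import random

# Illustrative hyperparameter search space.
SPACE = {
    "lr": [1e-4, 3e-4, 1e-3],
    "batch_size": [32, 64, 128],
    "warmup_steps": [100, 500],
}

def make_tasks(n, seed=0):
    """Sample n distinct hyperparameter configs from SPACE.

    Each returned dict is a self-contained task spec that an agent
    could pick up and execute independently of the others.
    """
    rng = random.Random(seed)
    grid = list(itertools.product(*SPACE.values()))
    picks = rng.sample(grid, min(n, len(grid)))
    return [dict(zip(SPACE.keys(), p)) for p in picks]

tasks = make_tasks(4)
for t in tasks:
    print(t)
```

Because each spec carries everything needed to run one experiment, the researcher only reviews results, which is the Jevons-style efficiency gain the tweet points at.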
Paul Calcraft @paul_cal
Most science isn't hill climbing. ML research that people *want* to do also isn't usually hill climbing. Publish or perish means a lot of ML researchers are (regretfully) doing hill climbing. If we can delegate that to agents, ML/AI research itself should flourish.

Research taste/high-level strategy will presumably also get gobbled up by AI, but that (post)training loop is *way* harder than any current pre- or post-training objective that the field has nailed. AlphaEvolve & other LLM + selection loops are also still v short-sighted. Science is hard! I think people will have an important role in AI research for at least another decade, & not just due to institutional inertia.

Recursive self-improvement is obviously going to ramp up before then, but I think it's much more likely that RSI is localized to certain regions of the skill tree. Most people that talk about RSI seem to have *foom* AGI in mind? Scoped RSI (e.g. on coding or data analysis) seems much more plausible.

Quoting Georgia Channing @cgeorgiaw:
I've been at a small conference this week, one where the AI people have been presenting early in the week and the domain science people will be presenting later in the week. At the end of the talks last night, the conversation turned very doomer, with all the AI people talking about how well Claude Code or Codex can do hill-climbing AI research and how we (the AI people) are maybe all about to lose our jobs!

The domain science people expressed their shock at this attitude because, though Claude Code can be let loose to complete lots of banal hill-climbing AI research projects, basically no experimental science is hill-climbing or even metric-driven. Most scientific fields are about much more taste-driven exploration that is incredibly difficult to make metrics for or to parameterize, and this misunderstanding from the AI community is one of the most damaging things to the realization of great science with AI. It seems like we're actually pretty far from having AI models do that…

Over the summer, @evijit and I wrote about this (and some other things hindering AI for science) at a bit more length, and today that work is out in Patterns! So, if you care about these problems and the real challenges in bringing AI to science in the real world, I recommend giving it a read!

1 reply · 0 reposts · 12 likes · 2.1K views
Brian Anderson @braindersnn
My team at NVIDIA partners with lots of startups. I see the opposite happening. RL and post-training are really enabling so many companies to take proven ML methods and open models and specialize those models for specific use cases and domains. Always using the biggest model is very slow and expensive: important when you need it, inefficient when you don't. Specialization will be an important trend and differentiator in the next wave of AI startups. We are entering a new Cambrian era for lean teams.

Quoting Yuchen Jin @Yuchenj_UW:
Some people at frontier AI labs told me they believe startups are over. OpenAI, Anthropic, Google, xAI will absorb every industry as AGI nears. Coding today. Science, medicine, and finance next. Then everything else. If they're right, that's a pretty boring end of the world.

2 replies · 3 reposts · 28 likes · 2.4K views
Sergio Charles @eigentopology
@Yuchenj_UW LLMs are not the only form of AI. Even with very strong base models, medicine, robotics, climate, manufacturing, and finance still require domain-specific data, workflows, integrations, validation, and distribution.
0 replies · 0 reposts · 0 likes · 191 views
Yuchen Jin @Yuchenj_UW
Some people at frontier AI labs told me they believe startups are over. OpenAI, Anthropic, Google, xAI will absorb every industry as AGI nears. Coding today. Science, medicine, and finance next. Then everything else. If they're right, that's a pretty boring end of the world.
542 replies · 163 reposts · 3K likes · 938.3K views
Sergio Charles @eigentopology
On a 48B MoE model trained on 1.4T tokens, vs. a standard residual baseline:
- Same compute budget, 1.25× lower loss
- Gains across every benchmark: +7.5 GPQA-Diamond, +3.6 Math, +3.1 HumanEval
- Less than 2% inference latency overhead
1 reply · 0 reposts · 1 like · 20 views
Sergio Charles @eigentopology
For 8 years, attention only looked sideways. Kimi just rotated it 90 degrees! A thread on Attention Residuals (AttnRes). arxiv.org/pdf/2603.15031
[image]
2 replies · 0 reposts · 4 likes · 118 views
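The "rotated 90 degrees" framing above suggests attention applied along the depth (layer) axis of one token rather than along the token axis. A minimal NumPy sketch of that reading, assuming a single token's per-layer hidden states are mixed by a softmax over depth and added back as a residual (this is only an interpretation of the tweet; the actual AttnRes mechanism is defined in the linked paper, and every name here is illustrative):

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def depth_attention_residual(layer_states, w_q, w_k):
    """Mix one token's hidden states across *layers* instead of across
    token positions ("vertical" rather than "sideways" attention).

    layer_states: (L, d) -- one hidden vector per layer for a single token.
    w_q, w_k:     (d, d_k) -- illustrative query/key projections.
    """
    q = layer_states[-1] @ w_q                 # query from the top layer, (d_k,)
    k = layer_states @ w_k                     # keys from every layer, (L, d_k)
    scores = k @ q / np.sqrt(k.shape[-1])      # scaled dot products, (L,)
    weights = softmax(scores)                  # attention over depth, sums to 1
    residual = weights @ layer_states          # weighted mix of layer outputs, (d,)
    return layer_states[-1] + residual         # added back as a residual

rng = np.random.default_rng(0)
L, d, d_k = 6, 8, 4
states = rng.normal(size=(L, d))
out = depth_attention_residual(states,
                               rng.normal(size=(d, d_k)),
                               rng.normal(size=(d, d_k)))
print(out.shape)
```

Under this reading, the "+7.5 GPQA-Diamond at <2% latency overhead" claim in the previous tweet is plausible because the extra mixing is a single (L,)-sized softmax per token, tiny next to the quadratic token-axis attention.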