Sid
@sidgraph

733 posts

Energy, Taste and Intelligence Founder @aifunctor, @synthAGI | e/acc @lossfunk, @Basethesislabs, @GenesisAILabs

Earth · Joined September 2019
1.9K Following · 1.1K Followers
Sid retweeted
sasaki@engineer@rsasaki0109·
SimScale [CVPR 2026 Oral] Learning to Drive via Real-World Simulation at Scale
🏗️ A scalable simulation pipeline that synthesizes diverse and high-fidelity reactive driving scenarios with pseudo-expert demonstrations.
🚀 An effective sim-real co-training strategy that improves robustness and generalization synergistically across various end-to-end planners.
🔬 A comprehensive recipe that reveals crucial insights into the underlying scaling properties of sim-real learning systems for end-to-end autonomy.
sasaki@engineer tweet media
1 reply · 20 reposts · 82 likes · 5.8K views
Sid@sidgraph·
the locality argument has units. Horowitz ISSCC'14, 45nm: 8-bit add 0.03 pJ, SRAM read 5 pJ, DRAM read 640–2,600 pJ. arithmetic is free; data movement is not.
HERMES @IBMResearch: 12.4 TOPS/W in 14nm. NorthPole: ~5× H100. cortex sits ~500× above the best silicon (Beniaguev, Neuron 109:2727: one pyramidal neuron ≈ a 7-layer TCN). Fig 7.
orchestration still wins benchmarks today: @SakanaAILabs AB-MCTS (arXiv:2503.04412) ~30% on ARC-AGI-2 vs 23% best single model; Gemini 3 Pro + Poetiq 54% at 37× cost; humans ~100%.
internalization is the path where closing this gap means reasoning rather than routing.
Read More: x.com/sidgraph/statu…
Sid tweet media
0 replies · 0 reposts · 0 likes · 35 views
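The energy ratios in the tweet above follow from a few lines of arithmetic. A minimal sketch; all constants are the Horowitz ISSCC'14 45nm figures quoted in the tweet, not new measurements.

```python
# Back-of-envelope ratios from the Horowitz ISSCC'14 numbers cited above
# (45 nm process; values in picojoules, as quoted in the tweet).
ADD_8BIT_PJ = 0.03               # 8-bit integer add
SRAM_READ_PJ = 5.0               # on-chip SRAM read
DRAM_READ_PJ = (640.0, 2600.0)   # off-chip DRAM read, low/high estimates

sram_vs_add = SRAM_READ_PJ / ADD_8BIT_PJ
dram_vs_add = tuple(d / ADD_8BIT_PJ for d in DRAM_READ_PJ)

print(f"SRAM read costs {sram_vs_add:.0f}x an 8-bit add")
print(f"DRAM read costs {dram_vs_add[0]:.0f}x-{dram_vs_add[1]:.0f}x an 8-bit add")
```

So a single DRAM fetch buys tens of thousands of adds, which is the quantitative content of "arithmetic is free; data movement is not."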
Sid@sidgraph·
four-condition test for any candidate intelligent system: (1) Turing-complete (2) universally programmable (3) behaviorally consistent at T=0 (4) machine-native semantics. the same four were arrived at independently by @MingchenZhuge @SchmidhuberAI in Neural Computers (arXiv:2604.06425).
similar fractional works:
- Huginn-3.5B (@jonasgeiping, NeurIPS'25): 132 unrolls, 50B virtual params
- MoR (NeurIPS'25): per-token ρ, 2× throughput
- ATLAS (Behrouz et al.): +80% on 10M-ctx BABILong
- batch-invariant kernels (@thinkymachines): 1000/1000 bit-identical at T=0
- Mukherjee et al. (arXiv:2505.11711): RL moves 5–30% of params; subnet alone reproduces the effect
existence proof: DeepSeek-R1 @deepseek_ai pure-RL on AIME 2024, 15.6%→77.9% (Nature 645:633). Gemini Deep Think IMO'25 gold, 35/42, NL end-to-end, no tools.
Sid tweet media
1 reply · 0 reposts · 1 like · 52 views
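Condition (3), behavioral consistency at T=0, can be sketched as a concrete test: decode greedily twice and require bit-identical output. `toy_logits` here is a hypothetical stand-in for a model forward pass (names are illustrative, not from any real library); a real check would hash an actual model's greedy decode across repeated runs, as in the batch-invariant-kernel result cited above.

```python
# Sketch of a T=0 consistency check: same input, greedy (argmax) decoding,
# repeated runs must produce bit-identical token sequences.
import hashlib

def toy_logits(token: int) -> list[float]:
    # Deterministic stand-in for a model forward pass over an 8-token vocab.
    return [((token * 31 + i * 17) % 97) / 97.0 for i in range(8)]

def greedy_decode(seed_token: int, steps: int = 16) -> list[int]:
    out, tok = [], seed_token
    for _ in range(steps):
        logits = toy_logits(tok)
        tok = max(range(len(logits)), key=logits.__getitem__)  # argmax = T=0
        out.append(tok)
    return out

def run_hash(seed: int) -> str:
    # Hash the decoded sequence so "bit-identical" is a single comparison.
    return hashlib.sha256(bytes(greedy_decode(seed))).hexdigest()

# Consistency criterion: repeated runs must agree exactly.
assert run_hash(3) == run_hash(3)
```

On real hardware this is the hard part: nondeterministic reduction orders in batched kernels can break the equality even at temperature 0, which is what batch-invariant kernels address.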
Sid@sidgraph·
🚨 new essay: On the Non-Compositionality of Intelligence and CLAs! every frontier model today reasons by orchestrating: tool calls, retrieval, scratchpads, search loops. the cost is non-compositional, and it shows on every benchmark that bans the scaffolding: ASyMOB zero-tool 77%→33%; ARC-AGI-3 frontier models <1%; CharXiv tool Δ +9 pts under perturbation. the "intelligence" lives in the assemblage. the essay proposes four conditions a system must meet to actually be intelligent.
Sid tweet media
1 reply · 0 reposts · 2 likes · 51 views
Sid retweeted
Vivek@vivek_2332·
worked on self compaction at @lossfunk with @sidgraph earlier this year. wanted to see if compact as a tool actually works. let the model decide when to compress its own context, see how far that gets you. a lot of learnings and failures along the way. wrote it all up here!! 👇👇 [co-written with claude opus 4.7.]
Vivek@vivek_2332

x.com/i/article/2051…

0 replies · 1 repost · 2 likes · 250 views
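A minimal sketch of the "compaction as a tool" idea from the thread above, assuming a simple trigger policy (compact when the context exceeds a message limit). `summarize` is a hypothetical placeholder; in the actual setup described, the model itself decides when to compress and writes the summary.

```python
# Speculative sketch of self-compaction: fold the middle of an over-long
# context into a summary, keeping the newest and oldest messages verbatim.

def summarize(messages: list[str]) -> str:
    # Placeholder: in practice the model generates this summary itself.
    return f"[summary of {len(messages)} earlier messages]"

def maybe_compact(context: list[str], limit: int = 8, keep: int = 2) -> list[str]:
    """If context exceeds `limit` messages, replace the middle with a summary,
    keeping the first and last `keep` messages verbatim."""
    if len(context) <= limit:
        return context
    head, tail = context[:keep], context[-keep:]
    middle = context[keep:-keep]
    return head + [summarize(middle)] + tail

ctx = [f"msg {i}" for i in range(12)]
ctx = maybe_compact(ctx)
print(len(ctx))  # 5: 2 head + 1 summary + 2 tail
```

The interesting design question the thread raises is the trigger: a fixed `limit` is the trivial policy, whereas letting the model invoke compaction as a tool moves that decision into the policy itself.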
Sid@sidgraph·
blr has the most beautiful office sunset views :)
Sid tweet media
1 reply · 0 reposts · 1 like · 210 views
Sid@sidgraph·
you are ngmi if you are still stuck in some ai wrapper startup w/ esop promises etc etc. if you relate to the above and want to work on something substantial, contributing to something significant in intelligence, come work w/ us :) DM to know more! basethesis.com
Brad@Brad08414464

I’m beginning to think that people don’t really want to work at companies. what they really want is to work at a research lab or a creative studio or a think tank or some other communal setup where likeminded people can do interesting things together

5 replies · 0 reposts · 26 likes · 2.7K views
Sid@sidgraph·
@PranavVajreshw1 Hey Pranav, sure, would love to have you join us and discuss code world models <3
0 replies · 0 reposts · 0 likes · 48 views
Pranav Vajreshwari@PranavVajreshw1·
@sidgraph Hi Siddhant, I'm a Microsoft Research Fellow with the AI Frontiers Team, currently focused on code world models. I had mistakenly applied with an empty Research Areas and Work section :) I have reapplied, do consider!
1 reply · 0 reposts · 1 like · 89 views
Sid@sidgraph·
We’re hosting an invitation-only gathering of researchers, primarily in AI (but not limited to it), over dinner in Indiranagar BLR! Just thoughtful conversations over good food, about seminal papers, emerging fields, and your group's research / papers you've recently published 🫶
If you are interested in / doing research in:
- Coding Agents
- RL (synthetic RL envs, RL post-training, alignment, on-policy distillation)
- World Models
- Neural Computers / self-adaptive neural systems
We would love to have you over for dinner!
PS: Amazing venue, chill vibes, good food, and the most intellectual people from BLR all together. Hosted by me and @thesisofsarthak from @Basethesislabs <3!
Register through the link below 👇
luma.com/zndszwdp
4 replies · 3 reposts · 42 likes · 4.3K views
signüll@signulll·
intelligence is the most important attribute in a man.
109 replies · 33 reposts · 769 likes · 44.5K views
Sid retweeted
Sarthak@thesisofsarthak·
being brutally selective is such a simple yet rare quality.
1 reply · 1 repost · 5 likes · 152 views
Vivek@vivek_2332·
woke up to this. let's gooo!! #icml2026
Vivek tweet media
3 replies · 0 reposts · 25 likes · 934 views