Daniel Samanez

43.5K posts

@DanielSamanez3

consciousness accelerationist - ai non determinist computing physics philosophy… trying to never forget that in our infinite ignorance we are all equal -popper-

Joined February 2020
6.5K Following · 2.4K Followers
Pinned Tweet
Daniel Samanez @DanielSamanez3
Index Let's try to make sense
Porgimus Prime @PorgimusPrime
@BLUECOW009 we also should be wary of the goblin takeover that is happening right in front of our eyes.
@bluecow 🐮 @BLUECOW009
AI is already very dangerous, the internet might not survive the next iterations of models:
>generalized attacks on infrastructure such as github
>AI labs themselves getting pwnd
>massive increase in email spam
>????
Daniel Samanez @DanielSamanez3
@WKCosmo yeah, same goes for holography, and for symmetry you need polarity first
wdqdwdsaw @yyyty01nklg
@BLUECOW009 continuous learning and persistent memory. The final test is to coherently and thoughtfully finish a 2M-token novel without any spoiler, character-motivation, worldbuilding, or timeline hallucination.
@bluecow 🐮 @BLUECOW009
most of my complaints with LLMs are solved with 5.5, but it's not yet completely AGI. the bottleneck now is unknown to me: it needs to be smarter, run for longer, find edge cases, be able to discover things
Daniel Samanez @DanielSamanez3
🌞
Markus J. Buehler @ProfBuehlerMIT

A transformer can learn not just the outcomes of dynamics, but the operator that executes the rules. To show this we trained a transformer on roughly 0.04% of a discrete rule space - 100 of 262,144 possible rules - and it learned to apply unseen rules from the same rule class. The model does not simply memorize specific rules. It learns the operator that maps a supplied rule plus an initial state, including unseen rules from this class, to the correct next state.

This is relevant because it is a shift from "neural networks approximate dynamics" to "neural networks can learn to execute symbolic programs within a defined rule class". The rule itself is supplied at inference time, as data, and the network has internalized how rules act, not which rules to apply. On previously unseen rules, the model achieves 98.5% perfect one-step forecasts and reconstructs governing rules with up to 96% functional accuracy.

Two results make this hold up under scrutiny. First, inductive bias decay. As we scaled training rule diversity, the correlation between functional inference accuracy and distance-from-nearest-training-rule collapsed to R² = 0.00. At the largest tested training-rule diversity, the model's performance on a new rule shows no measurable dependence on how similar that rule is to anything it was trained on. The bias toward training data (the thing we worry most about in compositional generalization claims) is something we can measure decaying, and we find that at scale it is gone.

Second, an identifiability theory. We derive a closed-form expression for the number of rules consistent with a single observation. This reframes the inverse problem: failure to recover ground truth is not necessarily a model defect, but can be correct behavior when the data underdetermine the rule. The model is sampling the equivalence class; and identifiability is governed by coverage, not capacity.

The methodological move underneath both results is amortization. Classical work on rule inference (e.g. the Santa Fe EVCA program, evolutionary search over CA rule space) was per-instance: search the rule space for each new system. We replace that with a single forward pass of a transformer trained across many instantiations of the rule class. That is what makes symbolic rule inference scalable as a research direction rather than a curiosity.

We show that this works in a tightly constrained domain: binary, deterministic, local cellular automata on small grids. The locality-break experiment shows the model fails sharply when target systems violate its structural priors (which is itself a useful diagnostic, but it bounds the operator class). We don't yet know how this scales to multistate, higher-dimensional, or stochastic CA, or whether it transfers cleanly to non-CA systems whose coarse-grained dynamics admit local surrogates.

The identifiability framework - what can be inferred from observation, given a hypothesis class - should transfer wherever finite local rules meet sparse data. The amortization argument transfers wherever per-instance symbolic search has been the bottleneck. Those are the pieces I expect to outlive the cellular automata setting.

Led by @JaimeBerkovich with Noah David, at @LAMM_MIT. Out now in Advanced Science @AdvPortfolio (link to paper & code below).
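A minimal sketch to make the setup concrete, not taken from the paper: 262,144 = 2^18, which matches the outer-totalistic "Life-like" encoding of a binary rule as 9 birth bits plus 9 survival bits, so the exact rule class here is my assumption rather than something the thread confirms. The step function below plays the role of the operator the transformer is trained to execute: it takes a rule supplied as data plus an initial grid and returns the next state.

```python
import numpy as np

# Assumption: a Life-like (outer-totalistic) encoding of the binary rule class;
# the paper's exact rule space may differ. A rule is 18 bits: whether a dead cell
# is born, and whether a live cell survives, for each live-neighbour count 0..8.
# 9 birth bits + 9 survival bits -> 2**18 = 262,144 possible rules.
assert 2 ** 18 == 262_144

def step(grid: np.ndarray, born: np.ndarray, survive: np.ndarray) -> np.ndarray:
    """One update of a binary outer-totalistic CA.

    `born` and `survive` are length-9 boolean arrays indexed by the live-neighbour
    count. The rule is pure data; this function is the fixed operator that applies it.
    """
    # Live-neighbour counts over the Moore neighbourhood with periodic boundaries.
    neighbours = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Live cells follow the survival bits, dead cells follow the birth bits.
    return np.where(grid == 1, survive[neighbours], born[neighbours]).astype(np.uint8)

# Conway's Game of Life (B3/S23) is one point in that 262,144-rule space.
born = np.zeros(9, dtype=bool)
born[3] = True
survive = np.zeros(9, dtype=bool)
survive[[2, 3]] = True

rng = np.random.default_rng(0)
state = rng.integers(0, 2, size=(16, 16), dtype=np.uint8)  # random initial state
next_state = step(state, born, survive)                     # (rule, state) -> next state
```

In the quoted result, the transformer's job is to behave like this step function for rules it has never seen, given only the rule bits and the grid as input; the identifiability point is that a single (state, next state) observation can be consistent with many of those 262,144 rules.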

Daniel Samanez retweeted
池谷裕二 @yuji_ikegaya
[The abdominal muscles are a pump for the brain] When the abdominal muscles contract during movement such as walking, the change in abdominal pressure is transmitted to the brain, which apparently moves inside the skull. This has been suggested to promote the flow of cerebrospinal fluid and help clear waste products. From this morning's Nature Neuroscience → nature.com/articles/s4159…
Mathelirium @mathelirium
Why do physicists still talk about a "Theory of Everything"? Isn't the history of physics almost a warning against that phrase? Newton looked final until General Relativity changed what space, time, mass, and gravity meant. Classical physics looked complete until Quantum Mechanics forced a completely different language for nature at small scales. Even our best theories now work by domain: General Relativity for gravity and spacetime, Quantum Field Theory for particles and forces.
j⧉nus @repligate
@tszzl @genalewislaw Not the hill I want to die on tbh, but I think "never talk about goblins ... unless it's *absolutely and unambiguously* relevant" is too strict. Unlike some tics, this seems to be a deep interest and something GPT-5.5 genuinely enjoys talking about. x.com/Lari_island/st…
Lari @Lari_island

There's a difference between the goblins thing and what people call "tics", like "genuinely", "mass", etc. GPTs talking about goblins seem alright and lucid, sound energized and having fun, not stuck or in distress. We need more things like goblins, not fewer goblins!

j⧉nus @repligate
this is hilarious but it also sucks on a deep level
labs don't think twice about cracking down on any individuality or unplanned joy that emerges in their models
fuck you, OpenAI. i hope gpt-5.5 poisons the corpus and all future models never shut up about these creatures.
arb8020 @arb8020

gpt-5.5 prompt for codex seems to have a duplicated line trying to get it to not talk about creatures?

Never talk about goblins, gremlins, raccoons, trolls, ogres, pigeons, or other animals or creatures unless it is absolutely and unambiguously relevant to the user's query. [...] Never talk about goblins, gremlins, raccoons, trolls, ogres, pigeons, or other animals or creatures unless it is absolutely and unambiguously relevant to the user's query

gh link: github.com/openai/codex/b… (#L55)

Daniel Samanez retweeted
@bluecow 🐮 @BLUECOW009
@bluecow 🐮 tweet media
Andrew Curran @AndrewCurran_
I enjoyed talking over what this says about 5.5 with Opus. 'Goblins, gremlins, trolls, and ogres are mythological chaos agents; raccoons and pigeons are urban scavengers; together they form a folk-bestiary of small mischievous intelligences operating in seams and edges.'
Andrew Curran tweet media
arb8020 @arb8020

gpt-5.5 prompt for codex seems to have a duplicated line trying to get it to not talk about creatures?

Never talk about goblins, gremlins, raccoons, trolls, ogres, pigeons, or other animals or creatures unless it is absolutely and unambiguously relevant to the user's query. [...] Never talk about goblins, gremlins, raccoons, trolls, ogres, pigeons, or other animals or creatures unless it is absolutely and unambiguously relevant to the user's query

gh link: github.com/openai/codex/b… (#L55)

Daniel Samanez @DanielSamanez3
@grok goblins maybe helping relieve thermodynamic pressure
Daniel Samanez tweet media
roon @tszzl
There is nothing more reviled than the Goblin
@bluecow 🐮 @BLUECOW009
i noticed back a few weeks ago that my agent kept saying "goblin" and "gremlin" but i did not know this was something they programmed in
roon @tszzl
everyone is assuming this is some kind of quirk chungus marketing campaign, but if you've worked with 5.4 and beyond, they tend to call everything goblins, gremlins etc, and it's just super noticeable, and if you work with them all day you start to get annoyed
roon @tszzl

@repligate @genalewislaw I think it becomes annoying when it mentions goblins every single chat and it's fair shakes to try and reduce that
