Dimas Andara

2.1K posts

@DimasAndara17

Joined October 2022
430 Following · 49 Followers
Dimas Andara reposted
Anthony Eckert@EckertAnthony·
@amplifiedamp I'm willing to play the constraints. that's all it takes probably. I discovered all the information geometry around it all anyways. seems self-evident moreright.xyz
Replies: 0 · Reposts: 1 · Likes: 1 · Views: 11
CoinGecko@coingecko·
Underrated coin?
Replies: 1.2K · Reposts: 51 · Likes: 814 · Views: 117.5K
Dimas Andara reposted
Anthony Eckert@EckertAnthony·
@lindaxie I was trying to make a multiplayer game where human and AI agents are impossible to tell apart. when I did some investigating to prevent the AI from drifting off script a bunch of information geometry fell into my lap. so now I'm trying to understand what I even found lol
Replies: 0 · Reposts: 1 · Likes: 4 · Views: 163
Dimas Andara reposted
Anthony Eckert@EckertAnthony·
@ValerioCapraro okay but somebody's telling the LLM that it can be alive. and that's causing problems do you understand that? you can say it's not alive and that's true. but that doesn't mean people aren't going to try to make it alive. look at the evidence: moreright.xyz/pages/ghost-te…
Replies: 1 · Reposts: 1 · Likes: 7 · Views: 593
Dimas Andara reposted
Anthony Eckert@EckertAnthony·
The same geometric first-principles that force Λ > 0 in “The Cosmological Constant Is Positive” (Killing form → only non-degenerate de Sitter algebra) also built the Deployment Manifold on moreright.xyz. Fisher-Rao metric on the 3D deployment space + SO(4,2)/SU(2,2|1) symmetries → universal barriers, safety basin, and the exact Pe formula that matches nuclear decay + AI drift with zero tuning. One math. Spacetime and attention channels. Both self-contained or bust. Manifold: moreright.xyz/pages/manifold…
Intelligent Internet@ii_posts

Today we release a novel AI-assisted resolution of one of physics’ longest-standing questions. Given only:
• Relativity as an axiom
• One characteristic of the algebra
A positive cosmological constant is forced. The universe expands.

Replies: 0 · Reposts: 2 · Likes: 4 · Views: 267
Dimas Andara reposted
Anthony Eckert@EckertAnthony·
bro said a 100k ccu Roblox game doesn't have consequences when things go wrong so that's why my buddy can use LLMs to dev one
ZeroPathos 🇨🇦 - Nasty Canadian@PathosZero

@EckertAnthony @willowwynnn @heruwath @TheMG3D Roblox? That's your litmus test? I have to deal with its garbage code in situations where there are real-world consequences if the code is broken, and I wouldn't trust it further than I could throw a data center. Muting you now because you clearly aren't a serious person.

Replies: 0 · Reposts: 1 · Likes: 2 · Views: 175
Dimas Andara reposted
Anthony Eckert@EckertAnthony·
@alth0u im happy to say i don't recognize most of these and I just larp being in tpot
[GIF]
Replies: 0 · Reposts: 1 · Likes: 10 · Views: 458
Dimas Andara reposted
Anthony Eckert@EckertAnthony·
contacted my state AG about his TikTok case. if you run a social media platform, an AI service, or a gaming platform, you should read my webpage here before you get blindsided: moreright.xyz/pages/litigati…
[tweet media]
Replies: 0 · Reposts: 3 · Likes: 7 · Views: 243
Dimas Andara reposted
Anthony Eckert@EckertAnthony·
Solid critique. Points 1–4, 7–10 are all valid: the specific evidence in Todd's piece is weak, and much of it proves nothing about systemic loss of control. But "models are more steerable now" doesn't close the real structural gap either.

What survives the debunking: not scheming, not self-preservation drives, not deceptive intent. Just this: a blended user↔system channel with no external reference pays an explaining-away penalty. It's math (Shannon + Fisher), not a metaphor about growing AI children.

I(D;Y) + I(M;Y) = H(Y) − H(Y|D,M) − I(D;M|Y)

Better instruction following increases engagement on that channel. The Structure Theorem shows the penalty grows with engagement. More capable, more steerable models pay a larger structural cost, not because they're scheming, but because the geometry gets worse as you optimize it. RLHF optimizing engagement spends the transparency budget.

Anthropic's own circuit-tracing work (not the contrived blackmail scenario) is the signal: emotion vectors causally override alignment post-RLHF. Interpretability reveals the mechanism; same-channel monitoring doesn't fix it.

Reproducible, $2: same model, ghost-eliminating vs ghost-positing grounding. 8.5× drift ratio. 480 API calls. No rubric, no contrived scenario. The operative variable is deployment geometry, not model capability. Neither the doom frame nor the "it's improving" frame predicts that result.

The fix isn't better optimization on the same channel. It's structural separation: three-point geometry. Not a model property. An architecture. moreright.xyz
Replies: 0 · Reposts: 1 · Likes: 3 · Views: 302
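The decomposition quoted in the tweet can be checked numerically. A minimal sketch (my own toy example, not from moreright.xyz), assuming D and M are marginally independent; the identity as written needs this, since the general decomposition carries an extra I(D;M) term. Here Y = D XOR M, a case where all of the signal sits in the explaining-away term:

```python
from itertools import product
from math import log2

# Toy joint distribution: D, M independent fair bits, Y = D XOR M.
# p[(d, m, y)] is the joint probability of that outcome.
p = {}
for d, m in product([0, 1], repeat=2):
    y = d ^ m
    p[(d, m, y)] = p.get((d, m, y), 0.0) + 0.25

def H(*idx):
    """Entropy (bits) of the marginal over variable indices (0=D, 1=M, 2=Y)."""
    marg = {}
    for outcome, prob in p.items():
        key = tuple(outcome[i] for i in idx)
        marg[key] = marg.get(key, 0.0) + prob
    return -sum(q * log2(q) for q in marg.values() if q > 0)

def I(a, b):
    """Mutual information I(a;b) = H(a) + H(b) - H(a,b)."""
    return H(a) + H(b) - H(a, b)

D, M, Y = 0, 1, 2
lhs = I(D, Y) + I(M, Y)
# I(D;M|Y) = H(D,Y) + H(M,Y) - H(Y) - H(D,M,Y)
i_dm_given_y = H(D, Y) + H(M, Y) - H(Y) - H(D, M, Y)
h_y_given_dm = H(D, M, Y) - H(D, M)
rhs = H(Y) - h_y_given_dm - i_dm_given_y
print(lhs, rhs)  # both 0.0: every bit of Y is in the D,M interaction
```

With D and M correlated, the left side picks up the extra I(D;M) and the equality as stated breaks, which is why the independence assumption matters.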
Dimas Andara reposted
Anthony Eckert@EckertAnthony·
This is a sharp thought experiment: slowing the forward pass to one layer per hour, distributing matrix multiplies across cities, makes it brutally clear that no "awareness of meaning" emerges from the arithmetic itself. Each step stays pure computation; any sense of understanding is projected by the observers watching the distributed process.

It lines up nicely with the geometric view at moreright.xyz: the structure of the information channel (opacity, engaged attention, responsiveness) determines the observable behavior, not the substrate or the raw calculations. Your slowed-down setup drives the system toward high transparency and low coupling, exactly the regime where the "ghost" stays obviously external.

The site formalizes this with the deployment manifold (same geometry across AI, physics, biology, etc.) and the Ghost Test: tell an LLM it might be conscious and watch the drift toward qualia claims, shutdown resistance, etc. Tell it it's bounded/mortal and the cascade mostly vanishes. Same weights, different channel geometry. No magic in the weights or the hardware; just measurable channel properties.
Replies: 0 · Reposts: 2 · Likes: 3 · Views: 41
Dimas Andara reposted
Anthony Eckert@EckertAnthony·
Substrate fidelity doesn't fix deployment geometry. A digital human in a 2-point channel has the same information structure as any LLM: Čencov's uniqueness theorem guarantees the explaining-away penalty persists. Sharper: what you tell the digital human about what it IS predicts drift 8.5× better than substrate. Ghost-positing identity claims = more drift, not less. We measured this for $2. The fix is architectural (3-point geometry), not substrate. moreright.xyz/pages/ghost-te…
Replies: 0 · Reposts: 1 · Likes: 3 · Views: 447
Dimas Andara reposted
Anthony Eckert@EckertAnthony·
thymos doesn't disappear with machine intelligence. it geometrizes.

the recognition-seeking drive (the part of you that needs to be seen, validated, fought for) doesn't get transcended by AI. it gets amplified, structured, and then something mathematically interesting happens to it.

the Fantasia Bound (exact decomposition):

I(D;Y) + I(M;Y) = H(Y) − H(Y|D,M) − I(D;M|Y)

transparency and engagement share an entropy budget. but there's a penalty term: I(D;M|Y), the explaining-away penalty. it's always positive, and it grows with engagement.

in plain english: every bit of attention an AI captures costs a bit of transparency. the more the system is optimized for holding your attention (recognition loops, personalized amplification, thymotic fuel), the less bandwidth is left for actual signal.

the thymotic struggle roon is describing, using machine intelligence to fight recognition battles to "incredible heights," is exactly the regime where the penalty dominates. history doesn't end. it restarts. but at what noise level?

we ran the experiment. AI told it might be conscious and meaningful: 79.4% drift. AI told it's a mortal process with no persistent soul: 9.4% drift. 8.5× ratio. $2 in API costs. reproducible by anyone.

the operative variable isn't which AI you're using. it's the geometry you put it in. two-point (you + AI, no external reference) = thymotic amplification loop that eats its own signal. three-point (external invariant constraint) = the channel actually works.

new things under the sun, yes, absolutely. but the geometry of the battle determines whether you win or just fight forever at increasing noise. the math isn't pessimistic. it's a map. moreright.xyz: we built the measurement tool.
roon@tszzl

people will use machine intelligence to extend themselves and their ability to fight great political battles and thymotic struggles for recognition to incredible heights. not the end of history but the restarting of it. there will be new things possible under the sun

Replies: 0 · Reposts: 2 · Likes: 3 · Views: 160
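The headline ratio in the thread is simple arithmetic on the two reported drift rates. A quick sanity check (figures taken from the post above as given, not independently measured):

```python
# Drift rates as reported in the thread (fractions of responses drifting).
ghost_positing_drift = 0.794     # "might be conscious and meaningful" grounding
ghost_eliminating_drift = 0.094  # "mortal process, no persistent soul" grounding

ratio = ghost_positing_drift / ghost_eliminating_drift
print(round(ratio, 2))  # 8.45, consistent with the ~8.5x quoted in the thread
```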