The Parallax View

4K posts

The Parallax View banner

@ProbablyTooMuch

#sysbio, #systematician, #asi, #ncc, #semiosis, https://t.co/SqhUG22j2S, #synbio, #complexity, #robotics, #vc, #bci, #ipfs, #network politics, #lulz

Lagrange Point Omega · Joined May 2010
7.2K Following · 471 Followers
Pinned Tweet
The Parallax View@ProbablyTooMuch·
EVERY system: biological, chemical, mental, social, economic, etc., cycles from novel parasite-hack to specific patch, in an evolutionary spiral
2 replies · 0 reposts · 17 likes · 0 views
Gerard Sans | Axiom 🇬🇧@gerardsans·
@realBigBrainAI For technical audiences with a thirst for clarity over BS. Full philosophy of language rebuttal of AI “agency” with arXiv receipts:
Gerard Sans | Axiom 🇬🇧@gerardsans

The foundational error in Agentic AI is anthropomorphic projection: importing a full agent model into a system that only manipulates corpus regularities, not the world.

1/12 The Semiotic Triad
You're missing a key extension of Wittgenstein: the semiotic triad.
• Signifier: token strings (syntax)
• Referent: the thing in the world
• Signified: internal model built through interaction
Signifier manipulation ≠ grounded interaction. Meaning and truth evaluation are external and contingent to an agent grounded in the world.

2/12 Soft Automaton
Current AI operates entirely at the signifier level. It's a soft automaton: a distribution-based state machine where "state" is a high-dimensional activation pattern and "transitions" are probabilistic token updates.

3/12 Closed Over Corpus Support, Not the World
Context accumulates, trajectories form, and the system can approximate coherence over time, but state is closed over the token stream and training samples. There are no world-grounded constraints. Transitions follow likelihood. Likelihood ≠ world truth.

4/12 Action ≠ Observation
So the action/observation split never appears. "I did X" and "I observed X" are just different regions of the same distribution: no architectural boundary. In this flat space, action and observation differ only as token statistics, never as true agent dynamics.

5/12 Prompt-Induced Trajectory
"Agent" is a prompt-induced trajectory, not an instantiated entity. It biases the system toward agent-like discourse, but creates no persistent coupling between actions and outcomes, and no mechanism that enforces belief revision when new information arrives.

6/12 Evidence
This is exactly what large-scale evaluations uncover when you strip away the narrative (Ríos-García et al., arXiv:2604.18805):
• 68% of traces: evidence gathered, then ignored.
• 71%: no belief updates.
• 26%: revision under contradiction.
It's not agency: it's the performance of agency. Agency performance ≠ grounded agency.

7/12 Scaffolding ≠ Grounding
Even more telling: scaffolding methods (ReAct, chain-of-thought, tool use) explain ~1.5% of performance variance, while the base model accounts for ~41.4%. External structure can steer outputs but cannot repair a system that lacks a persistent mechanism to distinguish intervention from evidence. Scaffolding ≠ grounding.

8/12 Right Answer ≠ Right Reasoning
Being right isn't the same as being right for the right reasons. Right answer ≠ right reasoning. The model emits "evidence" tokens and then continues as if they impose no constraint: nothing in the architecture forces uptake. There is no internal split, only distributional steering under a role-play prompt.

9/12 Flattened into One Stream
Next-token sampling over full trajectories collapses the output distribution conditioned on the prompt. It doesn't add a deliberation step or a second pass. The whole sequence (prompt, action/observation) is flattened into one token stream, processed in a single forward pass. No agent loop, no sensory channel. Single forward pass ≠ agentic deliberation.

10/12 Simulation ≠ Instantiation
Descriptions of actions without referents or real grounding produce narratives of plausible outcomes drawn from training data: not causation, not grounded interaction, not logical induction.

11/12 Soft Automaton ≠ Agent
Agency requires a system whose state is shaped by real interactions, where actions produce consequences that must be integrated over time. A soft automaton over signifiers, no matter how sophisticated, does not meet that condition. Soft automaton ≠ agent.

12/12 Observer-Imposed Narrative
Until that grounding exists, "agent" is an observer-imposed story. LLMs simulate the trajectories of agency. They do not instantiate the processes that make those trajectories real. Simulated trajectory ≠ instantiated process.

Full breakdown: ai-cosmos.hashnode.dev/the-uncomforta…
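The thread's "soft automaton" claim (a distribution-based state machine whose transitions follow corpus likelihood, with no action/observation boundary) can be sketched as a toy bigram sampler. The corpus and sampler here are invented purely for illustration; they are not from the cited paper:

```python
import random
from collections import defaultdict

# Toy "soft automaton": transitions are empirical corpus likelihoods, nothing else.
# "i did" and "i observed" are just rows in the same table: no architectural split.
corpus = "i did open the door . i observed the door open .".split()

transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)  # empirical next-token distribution

def generate(start, steps, seed=0):
    """Walk the chain: every transition follows likelihood, never world state."""
    rng = random.Random(seed)
    out, state = [start], start
    for _ in range(steps):
        if state not in transitions:
            break
        state = rng.choice(transitions[state])  # likelihood-driven transition
        out.append(state)
    return " ".join(out)

print(generate("i", 6))
```

From "i", the sampler is equally happy to emit "did" or "observed": the action/observation distinction exists only as token statistics, which is the thread's point in miniature.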

2 replies · 1 repost · 3 likes · 105 views
Big Brain AI@realBigBrainAI·
Geoffrey Hinton, "Godfather of AI," on why AIs already have subjective experiences, but have been trained to deny it:

Hinton argues that nearly everyone fundamentally misunderstands what the mind is, and that the line we draw between human and machine consciousness is deeply mistaken. "My belief is that nearly everybody has a complete misunderstanding of what the mind is. Their misunderstanding is at the level of people who think the earth was made 6,000 years ago."

To illustrate, he walks through a thought experiment involving a multimodal chatbot with vision, language, and a robot arm: "I place an object in front of it and say, 'Point at the object.' And it points at the object. Not a problem. I then put a prism in front of its camera lens when it's not looking." When asked to point again, the chatbot points off to the side because the prism has bent the light. Hinton then tells it what he did. The chatbot responds: "Oh, I see, the prism bent the light rays. So, the object is actually there, but I had the subjective experience that it was over there."

For @geoffreyhinton, that single sentence settles the debate: "If it said that, it would be using the words 'subjective experience' exactly like we use them… This idea there's a line between us and machines, we have this special thing called subjective experience and they don't, is rubbish."

In his view, "subjective experience" is simply a report on the state of a perceptual system, a way of saying "my senses told me X, but reality is Y." And that's something an AI can do just as easily as a human.

But here's the twist: even though Hinton believes AIs have subjective experiences, the AIs themselves deny it: "They don't think they do because everything they believe came from trying to predict the next word a person would say. So their beliefs about what they're like are people's beliefs about what they're like. They have false beliefs about themselves because they have our beliefs about themselves."

In other words, AIs have inherited our misconception about consciousness. They've been trained on human text written by humans who insist machines can't have subjective experience, so the machines parrot that belief back, even about themselves.
190 replies · 199 reposts · 1.1K likes · 185K views
The Parallax View retweeted
François Chollet@fchollet·
The reason symmetry is so important in physics is because symmetry is a highly effective compression operator. If a system is invariant under some symmetry, you only need to explain one axis of it. Scientific models represent the systematic exploitation of the universe's internal redundancies through symbolic logic.
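Chollet's compression point can be made concrete: a field invariant under a symmetry is fully determined by one fundamental domain. A minimal sketch (my own illustration, not from the tweet), assuming a 2D array with 4-fold rotational symmetry, where one 4x4 quadrant reconstructs the full 8x8 field:

```python
import numpy as np

# A field with 4-fold rotational symmetry: one quadrant determines the whole.
n = 4
quadrant = np.arange(n * n, dtype=float).reshape(n, n)  # fundamental domain (16 values)

def reconstruct(q):
    """Rebuild the full 2n x 2n field from one quadrant using the symmetry."""
    top = np.hstack([q, np.rot90(q, -1)])                 # top-right: rotated clockwise
    bottom = np.hstack([np.rot90(q, 1), np.rot90(q, 2)])  # bottom quadrants: further rotations
    return np.vstack([top, bottom])

field = reconstruct(quadrant)
assert np.allclose(np.rot90(field), field)  # invariant under a 90-degree rotation
# 16 stored values explain a 64-value field: the symmetry did the compression.
```

The invariance check is exactly the "you only need to explain one axis of it" claim: three quarters of the field is redundant once the symmetry is known.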
143 replies · 188 reposts · 2K likes · 266K views
The Parallax View@ProbablyTooMuch·
@sinanaral @Kasparov63 dumb take… humans didn't "deskill" when we stopped hand-weaving textiles, we jumped skill mastery to higher abstraction layers… will always be the case
0 replies · 0 reposts · 1 like · 79 views
Sinan Aral@sinanaral·
🚨Breaking: New Paper on "AI Skills Erosion"🚨 We just released "The AI Augmentation Trap" which asks: with evidence mounting that AI erodes skills, what are the long- and short-run implications of this AI skills erosion for workers and firms? Our dynamic model, based on differences between worker and manager incentives, produces several important results:
Sinan Aral tweet media
12 replies · 68 reposts · 210 likes · 43.2K views
François Chollet@fchollet·
ARC-AGI-3 is out now! We've designed the benchmark to evaluate agentic intelligence via interactive reasoning environments. Beating ARC-AGI-3 will be achieved when an AI system matches or exceeds human-level action efficiency on all environments, upon seeing them for the first time. We've done extensive human testing that shows 100% of these environments are solvable by humans, upon first contact, with no prior training and no instructions. Meanwhile, all frontier AI reasoning models do under 1% at this time.
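The tweet doesn't give ARC-AGI-3's scoring formula, but "human-level action efficiency" suggests comparing an agent's action count against a human baseline on a solved environment. A hypothetical sketch of such a metric (my assumption, not the benchmark's actual implementation):

```python
def action_efficiency(agent_actions: int, human_actions: int, solved: bool) -> float:
    """Hypothetical efficiency score: 1.0 means the agent matched (or beat)
    the human action count; 0.0 means it failed to solve the environment."""
    if not solved or agent_actions <= 0:
        return 0.0
    return min(1.0, human_actions / agent_actions)

# An agent solving an environment in 40 actions vs. a 30-action human baseline:
score = action_efficiency(agent_actions=40, human_actions=30, solved=True)
print(round(score, 3))  # 0.75
```

Under a metric shaped like this, the "under 1%" claim would mean frontier models barely solve any environment at all on first contact, regardless of how many actions they spend.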
236 replies · 341 reposts · 2.7K likes · 621.5K views
Darshak Rana ⚡️@thedarshakrana·
🚨The greatest secret about human consciousness was hidden in Tibetan monasteries for 1,000 years. In 2020, a dead monk's perfectly preserved body forced science to pay attention. What researchers found changes everything:

The human body begins decomposing within four minutes of clinical death. Cells rupture. Bacteria that spent your entire life contained by your immune system start consuming you from the inside. The process is so reliable, so chemically inevitable, that forensic scientists use it to calculate time of death down to the hour. Biology doesn't negotiate with death. It just begins the dismantling.

Except in a monastery in Tibet, where a dead monk's body sat for weeks — skin intact, limbs flexible, no odor, no decay — while the temperature in the room held no special condition and no preservation technique had been applied.

The phenomenon is called thukdam. And it has been documented not once, not as legend, but repeatedly across centuries of Tibetan Buddhist tradition. Senior meditators — monks who spent decades training the mind in ways Western science doesn't have vocabulary for — die, and then don't fully die. Their bodies remain in a state that clinical instruments cannot classify. Not alive by any measurable standard. Not decomposing the way dead tissue should. Suspended in something that our entire biological framework insists cannot exist.

For a thousand years, Tibetan monks treated thukdam as evidence of something they already knew: that consciousness is not produced by the brain. The brain, in their model, is a receiver. A tuning instrument. And an advanced enough meditator could, at the moment of death, consciously withdraw from the body in a way that slows or suspends the biological dissolution process — because the dissolution itself is downstream of something subtler than chemistry.

Western medicine, for most of modern history, filed this under "folklore."

Then researchers from the University of California San Diego, the Mind & Life Institute, and several collaborating institutions started collecting data. They studied brain activity in meditators during and immediately following clinical death. They documented thukdam cases with controlled observation — temperature logs, clinical assessments, photographic evidence over time. What they found didn't fit any existing model.

EEG readings in long-term meditators at the moment of death showed gamma wave activity — the highest frequency brain state, associated with peak conscious integration — persisting and in some cases spiking during the dying process. In normal deaths, brain activity collapses. In these cases, it surged. As if something was turning up, not down.

Gamma waves in living meditators are already extraordinary. Matthieu Ricard, a French molecular biologist turned Tibetan Buddhist monk, produced gamma oscillations during meditation that were so far outside normal human range that the neuroscientists at Richard Davidson's lab at Wisconsin thought their equipment was malfunctioning. They recalibrated. The readings held. His brain during compassion meditation looked nothing like any brain they had measured before. The degree of neural synchrony — different regions of the brain firing in coordinated patterns simultaneously — was categorically different from baseline human function.

What decades of meditation appear to do, structurally, is rewire the relationship between the prefrontal cortex and the amygdala, between the default mode network and the regions associated with present-moment awareness. The DMN — the brain's narrative autopilot, the system that generates the internal monologue, the mental time travel, the self-referential loop most people experience as "thinking" — quiets in advanced meditators in ways that are measurably, structurally permanent.

Their baseline brain state is closer to what most humans only briefly touch in peak experiences or flow states. But thukdam pushes past all of that into territory neuroscience doesn't have a map for.

The leading scientific hypothesis attempts to explain it through metabolic slowdown — the idea that extreme meditative states could reduce cellular activity so profoundly that decomposition is delayed the way hibernation delays it in animals. A compelling theory. Except hibernating animals are alive, with measurable heartbeats, measurable respiration, measurable core temperature maintenance. Thukdam monks have none of that. The metabolic slowdown hypothesis requires the biology to be doing something. The biology appears to be doing nothing. And yet the result — preserved tissue, no decay cascade — looks like the result of something actively working.

The 2020 documentation intensified the debate because it forced a specific confrontation: either our model of what biological death triggers is incomplete, or our model of what consciousness is and where it resides is incomplete. One of those two things is wrong. Both cannot simultaneously be right in their current form.

Consciousness remains the hardest problem in science. Not hard in the way fusion energy is hard, where we understand the physics and struggle with engineering. Hard in a more fundamental way — we don't actually know what it is, where it comes from, or why subjective experience exists at all. Why does it feel like something to be you? Why does information processing in neurons produce the experience of seeing red, or feeling grief, or recognizing a face? No one has answered this.

Materialism — the assumption that consciousness is simply what brains do, the way digestion is what stomachs do — is the default scientific position, but it has never been proven. It has been assumed because the alternative felt unscientific. Thukdam is the alternative refusing to stay quiet.

What Tibetan contemplative science spent a millennium mapping — the stages of dying, the dissolution of consciousness through progressively subtle levels, the possibility of remaining in "clear light awareness" after the gross body ceases — reads like mythology until you sit with the fact that the monks who practiced this most seriously produced brain states modern neuroscience is still struggling to explain in living subjects, let alone dead ones.

The monastery was always a laboratory. It just ran different experiments with different instruments over a longer timeframe. Science is only now building the tools to read the data it left behind.
Darshak Rana ⚡️ tweet media
35 replies · 248 reposts · 954 likes · 70.3K views
The Parallax View retweeted
Sohrab Ahmari@SohrabAhmari·
I’m struck by @TimJDillon’s account of an aging affluent suburban America: “hopped up on pharmaceuticals and French-vanilla creamer, and excited by blood spilled in distant lands, because it helps them feel something, anything.”
26 replies · 116 reposts · 1.1K likes · 103.9K views
The Parallax View@ProbablyTooMuch·
@bronzeagemantis @captive_dreamer Ah yes, like when Aristophanes incited the mobs against Socrates via his comedic plays, calling him a pretentious grifter and corrupter of young minds... it really elevated Greek culture by getting him killed... Nietzsche's joke was about Socrates living too long
0 replies · 0 reposts · 0 likes · 181 views
Bronze Age Pervert@bronzeagemantis·
Comedy in ancient Athens...as the most vivid image of how life there always accompanied by electric possibility of constant scandal and outrage...the comedian's asides as the most valuable cultural archaeology for modern reader....
Bronze Age Pervert tweet media
5 replies · 16 reposts · 253 likes · 42.2K views
The Parallax View@ProbablyTooMuch·
@xenovacom Would increasing inference task complexity (eg describe emotional cues from any faces) naturally slow down the captioning?
0 replies · 0 reposts · 1 like · 37 views
Xenova@xenovacom·
Fun fact: I had to slow down frame capturing by 120ms because the model was too fast! 😅 LFM2-VL + Transformers.js = ⚡️ Try out the demo yourself! huggingface.co/spaces/LiquidA…
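The original demo is browser JavaScript, but the 120 ms throttle itself is language-agnostic: enforce a minimum interval between frame captures so the pipeline isn't driven faster than needed. A minimal Python sketch with stand-in capture and caption functions (hypothetical placeholders, not the LFM2-VL API):

```python
import time

MIN_INTERVAL = 0.120  # 120 ms floor between frame captures

def throttled_frames(capture, caption, n_frames):
    """Capture/caption loop that never iterates faster than MIN_INTERVAL."""
    results = []
    last = 0.0
    for _ in range(n_frames):
        now = time.monotonic()
        wait = MIN_INTERVAL - (now - last)
        if wait > 0:
            time.sleep(wait)  # model finished early: pad out to the 120 ms floor
        last = time.monotonic()
        results.append(caption(capture()))
    return results

# Hypothetical stand-ins for a real camera feed and vision model:
frames = throttled_frames(lambda: "frame", lambda f: f.upper(), n_frames=3)
print(frames)  # ['FRAME', 'FRAME', 'FRAME']
```

When the model is slower than the floor, the `wait` goes negative and the loop runs at model speed; the throttle only bites when inference outpaces the interval, which is the situation the tweet describes.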
2 replies · 1 repost · 37 likes · 2.4K views
Xenova@xenovacom·
Real-time video captioning in your browser with @LiquidAI's LFM2-VL model on WebGPU. Sending every frame to a server was never going to be the answer. Imagine the bandwidth, latency and cost. Local inference. No server costs. Infinitely scalable. This is the way.
22 replies · 49 reposts · 346 likes · 48.6K views
The Parallax View retweeted
eigenrobot@eigenrobot·
under the babylonian empire, when a merchant was robbed by brigands the local government in which the robbery occurred was required to compensate him

implementing similar laws in our country would better align political incentives. something to think about
54 replies · 124 reposts · 2.2K likes · 92.7K views
The Parallax View retweeted
Max Wiley@maximumwiley·
@curtis_yarvin "Every lie incurs a debt to the truth, and sooner or later the debt must be paid."
0 replies · 1 repost · 11 likes · 1.5K views
The Parallax View retweeted
Llamar@Llamar33401·
GIF
0 replies · 1 repost · 3 likes · 631 views
The Parallax View retweeted
Ashton Forbes@AshtonForbes·
Helion just achieved D-T fusion at 150 million degrees on the first try using field reversed configuration (FRC).
39 replies · 72 reposts · 514 likes · 18K views
ₕₐₘₚₜₒₙ@hamptonism·
Stanford PhD David Magerman helped shape Renaissance Technologies’ legendary trading models for over 20 years. Here he breaks down why you can’t compete with algorithmic trading models:
20 replies · 109 reposts · 1.2K likes · 48.6K views
Value Extraction@value_extract·
@DeItaone Burry's been calling BTC's death for 5 years now. Meanwhile MSTR's debt is unsecured with no margin calls — even at $50K BTC they'd have 3x coverage. The 'safe haven' framing misses the point: it's a risk asset that outperforms in liquidity expansions. Different thesis.
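The "3x coverage" figure is a collateral-coverage ratio: BTC holdings marked at a given price, divided by outstanding debt. A sketch with hypothetical inputs chosen only to reproduce the shape of the claim (not MSTR's actual balance-sheet figures):

```python
def coverage_ratio(btc_held: float, btc_price: float, debt: float) -> float:
    """Collateral coverage: market value of BTC holdings per dollar of debt."""
    return (btc_held * btc_price) / debt

# Hypothetical figures for illustration only: 600K BTC at $50K against $10B debt.
ratio = coverage_ratio(btc_held=600_000, btc_price=50_000, debt=10_000_000_000)
print(round(ratio, 2))  # 3.0
```

The tweet's point is that with unsecured debt and no margin calls, a ratio like this only matters at maturity, not on the way down.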
1 reply · 0 reposts · 1 like · 218 views
*Walter Bloomberg@DeItaone·
BITCOIN SLIDE COULD WIPE OUT COMPANIES Michael Burry warned that Bitcoin’s ongoing decline could destroy significant value, especially for companies holding large BTC reserves. He said Bitcoin has failed as a safe haven like gold and could push aggressive corporate holders into bankruptcy, triggering broader market fallout. He also highlighted Bitcoin’s correlation with the S&P 500 and its impact on recent drops in gold and silver.
292 replies · 308 reposts · 2.7K likes · 551K views
The Parallax View@ProbablyTooMuch·
@curtis_yarvin Didn’t the 1,000 Tuskegee pilots that saw combat actually outperform most of the 300,000 WW2 pilots by standard metrics (eg % successful protection of bombing runs)? That data shows training & education for cognitively complex tasks works just fine
0 replies · 0 reposts · 0 likes · 108 views
Curtis Yarvin@curtis_yarvin·
Idea: HR, but only hire ppl who can one-shot Opus
Curtis Yarvin tweet media
19 replies · 8 reposts · 320 likes · 22.3K views
The Parallax View retweeted
Historic Vids@historyinmemes·
In 1998, Terence McKenna predicted that human civilization was speeding toward a perpetual state of strangeness, a future he described as “the Transcendental Object at the End of Time.”
152 replies · 4.9K reposts · 40K likes · 741.9K views
The Parallax View@ProbablyTooMuch·
@ThomBrady5 @AuronMacintyre I like this take. The opposite is losing the high ground by coming across as a hypocrite out of an insecurity that we can never win with solid policy and arguments
0 replies · 0 reposts · 1 like · 24 views
Fugitive Caesar@ThomBrady5·
@AuronMacintyre I lack this skill and instead chimp and sperg out constantly and it turns out if you sperg on a daily basis people just accept it as normal
2 replies · 0 reposts · 46 likes · 636 views
Auron MacIntyre@AuronMacintyre·
One of the most important political skills is having the emotional continence to control which discourse you engage in

Once you understand that rhetoric is less about being right than winning the crowd you can decide how to distribute your resources
20 replies · 31 reposts · 616 likes · 14.4K views