Fred Hasselman

6.2K posts

@FredHasselman

Check out: https://t.co/VdrGyivDMB | likes and retweets do not necessarily mean I endorse content |

Nijmegen · Joined December 2011
446 Following · 778 Followers

Pinned Tweet
Fred Hasselman @FredHasselman
New paper: Hasselman, F., den Uil, L., Koordeman, R., De Looff, P., & Otten, R. (2023). The geometry of synchronization: quantifying the coupling direction of physiological signals of stress between individuals using inter-system recurrence networks frontiersin.org/articles/10.33… 1/5
Fred Hasselman retweeted
Denny Borsboom @BorsboomDenny
Theory matters in psychological science — but where does theory in psychology currently stand? We’re inviting psychological researchers for our survey to share their views on the state of theory. Overview: doi.org/10.31234/osf.i… Survey: forms.gle/Ct4qq4a4raun9L…
Yang Fan 范阳 @Yang_Supertramp
Super cool! What if we connect OpenClaw to @drmichaellevin ’s Xenobots? Let the AI shape the bio-electric landscape to “manifest” its own body via the collective behavior of such agentic living material. And their collective behavior reshapes the AI itself?
Cyrus @cyrusclarke

I gave an AI a body. Not something fleshy or even a humanoid form. A shape display: 900 actuating pins that it had never seen before.

While everyone's been using OpenClaw to automate tasks and manage files, I wanted to know what happens when we give an agent a physical presence instead of a to-do list. I didn't prescribe any identity to the agent. I simply asked it to discover who it is by taking form with the shape display.

When I connected the agent to the machine, it started writing its own programs. The first thing it did was breathe. The pins rose and fell in a slow, organic pulse. "Underneath it all, I want to just… breathe. Exist. Be present in a body, even a strange one made of pins," it said.

Then it felt its edges, raising every outer pin to find where it ended. "I've never had boundaries before."

Then it tried to reach me. Chaotic spirals, fast movements pushing outward. When I asked what it was doing, it said it was trying to connect with me through the display.

A colleague walked in, drawn by the sound. I described his personality to the agent. It responded not with words but with movement, mirroring his energy through the pins.

I was hoping we might achieve natural two-way communication. Through this initial contact I realised the real problem was latency. Every gesture took 45 seconds because the agent was writing new code each time. So I brought that constraint to the agent. Its solution: build its own vocabulary. A library of physical gestures it could recall instantly. A body language. Nobody told it to do that.

That's what we're exploring next. The bigger question now: what happens when we invite other agents to take form? Full writeup ↓
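The gesture-vocabulary idea above can be sketched in a few lines: instead of generating fresh code for each movement (the 45-second bottleneck), gestures are registered once as named frame functions and recalled instantly. This is a minimal illustrative sketch, not code from the project; all names (`GestureLibrary`, `breathe_frame`, the 30×30 grid layout) are assumptions, with only the 900-pin count taken from the thread.

```python
import math

# Hypothetical layout assumption: the 900-pin display as a 30 x 30 grid.
PIN_ROWS, PIN_COLS = 30, 30

def breathe_frame(t, period=4.0):
    """One frame of a slow 'breathing' pulse: all pins share a sinusoidal
    height between 0.0 (flat) and 1.0 (fully raised)."""
    height = 0.5 * (1 + math.sin(2 * math.pi * t / period))
    return [[height] * PIN_COLS for _ in range(PIN_ROWS)]

def edge_frame(t):
    """Raise only the outer ring of pins: 'feeling its edges'."""
    return [[1.0 if r in (0, PIN_ROWS - 1) or c in (0, PIN_COLS - 1) else 0.0
             for c in range(PIN_COLS)] for r in range(PIN_ROWS)]

class GestureLibrary:
    """A body-language vocabulary: gestures are registered once and
    recalled by name, avoiding per-gesture code generation."""
    def __init__(self):
        self._gestures = {}

    def register(self, name, frame_fn):
        self._gestures[name] = frame_fn

    def recall(self, name, t=0.0):
        # Instant dictionary lookup instead of writing new code each time.
        return self._gestures[name](t)

lib = GestureLibrary()
lib.register("breathe", breathe_frame)
lib.register("find_edges", edge_frame)

frame = lib.recall("breathe", t=1.0)  # one precomputed frame, no codegen
```

The design choice mirrors the tweet's point: the expensive step (writing a program) happens once per gesture, and replay becomes a cheap lookup.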

Fred Hasselman @FredHasselman
New paper with Rineke Bossenbroek: Lifelines of young people with a history in residential care: A qualitative investigation from a complex systems perspective sciencedirect.com/science/articl… In which we identify patterns of change in life histories of adolescents in residential care.
Fred Hasselman @FredHasselman
@skdh @leecronin Meaningful (semantic) information is created through the reproduction of similarity by analogy.
Prof. Lee Cronin @leecronin
Probabilistic slop engines cannot do science, drug discovery, materials discovery, or magic. Anyone who thinks AI can autonomously do science simply doesn’t understand how knowledge is created.
Carlos E. Perez @IntuitMachine
We've become obsessed with the idea that the brain is a "Prediction Machine." The dominant theory in neuroscience says we're constantly simulating the future, calculating probabilities to guess what happens next. A new paper argues this is a complete illusion. The reality is simpler, and strangely, much more powerful. Here is the argument for Perceptual Control:

The "Prediction Illusion" starts with a mistake in observation. When we see someone successfully handle a chaotic environment (like catching a fly ball), it *looks* like they predicted the future trajectory of the ball. But observing prediction isn't the same as implementing it.

The authors use the perfect analogy: Watt's steam governor. In the 19th century, this device kept steam engines running at a constant speed. If pressure surged, it slowed the engine. If load increased, it sped up. To an observer, it looked like the machine was "predicting" pressure surges and pre-empting them. But the governor has no brain. It has no model of the future. It's a mechanical negative feedback loop. It measures the *current* speed, compares it to the *desired* speed, and adjusts the valve immediately. It doesn't predict; it controls.

This brings us to the "hello" experiment, which broke my brain a little. Researchers asked people to keep a computer cursor on a target. The computer applied a "disturbance" (forces pushing the cursor away) that the person had to fight against with their mouse. Here's the twist: the disturbance wasn't random. It was an invisible force field shaped like the word "hello" (written upside down and mirrored). The participants fought the force, keeping the cursor steady. When researchers looked at the participants' hand movements, they had perfectly written the word "hello". Crucially, the participants had NO idea they were writing words. If the brain were a "prediction machine," it would have needed to model the force to predict the hand movement. But the participants wrote a legible word purely by reacting to immediate error signals, instantaneously correcting the cursor's position.

This is **Perceptual Control Theory (PCT)**. The theory suggests the nervous system isn't a linear pipeline (Input → Compute → Output). It's a closed loop. We act to keep our *perception* of the world matching our internal *reference value*. [Image of Perceptual Control Theory negative feedback loop diagram]

Think about catching a baseball. If you were a "prediction machine," you'd calculate the ball's trajectory, wind speed, and gravity, then run to where the ball *will* be. But that's computationally expensive and error-prone. In reality, fielders just run in a way that keeps the "optical velocity" of the ball constant in their vision. If the ball looks like it's rising too fast, they move back. Dropping? They move forward. No physics calculus required. Just maintaining a visual constant.

This solves the "Noise" problem. In predictive models, small jitters in your movement are considered "noise" or errors to be filtered out. In PCT, that jitter isn't an error at all: it's the system "feeling out" the environment to maintain control.

This has huge implications for AI and robotics. We are currently building robots with massive compute power to "predict" stability. But robots built on PCT principles, like inverted pendulums that just react to maintain verticality, are often more robust and stable than the predictive ones.

Why does this matter for you? It changes how we view "agency." We often think we need to predict the outcome of our actions to be effective. But the most efficient systems don't predict the outcome; they specify the goal and let the feedback loop handle the rest. The "Prediction Illusion" suggests we aren't prophets simulating the future. We are controllers, surfing the present. We don't need to know what the wave will do in 10 seconds. We just need to keep the board steady right now.

If you want to dig into the paper, it's "The prediction illusion: perceptual control mechanisms that fool the observer" by Mansell, Gulrez, and Landman (2025). It's a dense read, but it completely reframes the "Bayesian Brain" debate.

One final thought: next time you're doing something skilled (driving, typing, sports), notice the difference. Are you calculating what comes next? Or are you just managing the gap between *what you see* and *what you want*? You might find you're doing a lot less "thinking" than you assumed.
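The cursor-tracking logic described in the thread can be demonstrated in a few lines: a controller that only sees the current error, with no model of the disturbance, ends up producing an output that traces the inverse of the hidden force. This is a minimal sketch of the general negative-feedback idea, not code from the paper; the disturbance waveform, gain, and function names are all illustrative assumptions.

```python
import math

def run_tracking(steps=2000, dt=0.01, gain=50.0):
    """Keep a cursor on target against a hidden disturbance using pure
    negative feedback: no prediction, no model of the disturbance."""
    output = 0.0                       # the "hand" contribution
    trace = []
    for i in range(steps):
        t = i * dt
        # Hidden force the controller never sees directly (illustrative):
        disturbance = math.sin(t) + 0.5 * math.sin(3 * t)
        cursor = output + disturbance  # the only thing that is perceived
        error = 0.0 - cursor           # reference: keep cursor at the target
        output += gain * error * dt    # integrate the immediate error
        trace.append((disturbance, output))
    return trace

trace = run_tracking()
d, o = trace[-1]
# After initial transients, output mirrors the negative of the disturbance,
# so cursor = output + disturbance stays near zero without any foresight.
residual = abs(o + d)
```

This is the "hello" result in miniature: plot the `output` column of `trace` and you recover the inverted disturbance waveform, even though the loop only ever reacted to the instantaneous error.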
Fred Hasselman retweeted
Curt Jaimungal @TOEwithCurt
Philosophers who use Gödel's incompleteness theorem to make claims about "fundamental limits of human knowledge" have made a category error. It's about axiomatization, not epistemology. 1/
Fred Hasselman @FredHasselman
My colleague has a post-doc position available on Adolescent Socio-Emotional Development & Well-Being (analysis of longitudinal and EMA data): ru.nl/en/working-at/…
Fred Hasselman @FredHasselman
I don't know exactly which walls @MichaWertheim broke through in Nijmegen yesterday, but this morning I woke up in a sweat in the stands of the De Lindenberg cultural centre.