Mark Howell

5.5K posts


@MNWH

research engineer; control systems; maker; innovator; programmer; dad. Uganda - UK - USA - NZ - ... If you follow me & have 0 posts I will remove and block you

Ōtautahi, Aotearoa · Joined February 2011
2.7K Following · 651 Followers
Pinned Tweet
Mark Howell
Mark Howell@MNWH·
Parameter identification in a changing environment: variable action space, continuous reinforcement learning automata. pic.twitter.com/xUdNVKUIqX
Christchurch City, New Zealand 🇳🇿 English
1
4
16
0
Mark Howell
Mark Howell@MNWH·
Reading J. Bronowski's "The Ascent of Man", a book from his BBC TV programme of the same name, from the 1970s I think.
Mark Howell tweet media
English
0
0
0
31
Mark Howell
Mark Howell@MNWH·
My ChatGPT award
Mark Howell tweet media
English
0
0
0
75
Mark Howell
Mark Howell@MNWH·
Pumpkin or courgette/zucchini
Mark Howell tweet media
English
0
0
0
52
Mark Howell
Mark Howell@MNWH·
@JacklouisP I did a summer job at a pen factory where they had a machine like this. It orientated metal bits so that they could be pushed onto the plastic pen. It's fascinating to watch. Great engineering
English
1
0
1
1.6K
Jack 🤖
Jack 🤖@JacklouisP·
The vibratory bowl feeder. Patented in 1950. Here is the physics behind this ubiquitous tool.

It solves a universal factory problem: you have a bin of randomly oriented parts and need them single-file, perfectly aligned, feeding into the next machine. No vision. No sensors. No code. Just physics.

The bowl vibrates with an asymmetric waveform - part vertical, part rotational. During the slow phase, static friction grips the part. During the fast phase, the part slips or micro-jumps. Net result: parts climb the spiral.

At the top, geometry takes over. Slots, ledges, and narrowing tracks are machined for one specific part shape. Wrong orientation? Fall back in. Correct? Exit single-file.

Every bowl is custom tooled. Change the part, change the bowl. Inflexible? Completely. But at high volume, nothing beats it on cost, speed, or reliability. Running 24/7 since 1950.
English
24
55
726
379.4K
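The stick-slip transport described in the post can be sketched in a few lines. Below is a minimal 1-D toy model, assuming a sawtooth track velocity (slow advance, fast snap-back) and simple Coulomb friction; all parameters (frequency, amplitude, friction coefficient) are invented for illustration, not taken from any real feeder.

```python
def feeder_displacement(cycles=50, dt=1e-5, mu_k=0.3, g=9.81,
                        f=60.0, slow_frac=0.8, amp=2e-3):
    """Toy 1-D stick-slip model of a vibratory feeder track.

    The track creeps forward for most of each cycle (the part sticks and
    rides along), then snaps back so fast that friction cannot drag the
    part with it (the part slips). Net result: the part walks forward.
    """
    period = 1.0 / f
    t_slow = slow_frac * period
    v_fwd = amp / t_slow               # slow forward track velocity
    v_back = -amp / (period - t_slow)  # fast return track velocity
    x, v, t = 0.0, 0.0, 0.0
    for _ in range(int(cycles * period / dt)):
        v_track = v_fwd if (t % period) < t_slow else v_back
        rel = v - v_track
        if rel != 0.0:
            # slipping: kinetic friction pushes the part toward the
            # track velocity, but only with acceleration mu_k * g
            a = -mu_k * g if rel > 0 else mu_k * g
            v += a * dt
            if (v - v_track) * rel < 0:  # crossed track speed: re-stick
                v = v_track
        # if rel == 0 the part is stuck and simply rides with the track
        x += v * dt
        t += dt
    return x
```

Running it shows a net forward displacement per cycle even though the track itself returns to where it started, which is the whole trick.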
Mark Howell reposted
IFAC_Control
IFAC_Control@IFAC_Control·
It's that time of year again - Welcome to the Control Advent Calendar 🎄 A unique way to explore the world of automatic control. Each day, a new question opens the door to real-world challenges — and highlights how control engineers help solve them. 🔗 buff.ly/9p5iYw8
IFAC_Control tweet media
English
0
2
6
268
Mark Howell
Mark Howell@MNWH·
The control systems advent calendar looks like it is back for another year. control-advent.com Happy holidays
English
0
0
0
36
Mark Howell reposted
Carlos E. Perez
Carlos E. Perez@IntuitMachine·
We've become obsessed with the idea that the brain is a "Prediction Machine." The dominant theory in neuroscience says we're constantly simulating the future, calculating probabilities to guess what happens next. A new paper argues this is a complete illusion. The reality is simpler, and strangely, much more powerful.

Here is the argument for Perceptual Control: The "Prediction Illusion" starts with a mistake in observation. When we see someone successfully handle a chaotic environment (like catching a fly ball), it *looks* like they predicted the future trajectory of the ball. But observing prediction isn't the same as implementing it.

The authors use the perfect analogy: Watt's steam governor. In the 19th century, this device kept steam engines running at a constant speed. If pressure surged, it slowed the engine. If load increased, it sped up. To an observer, it looked like the machine was "predicting" pressure surges and pre-empting them. But the governor has no brain. It has no model of the future. It's a mechanical negative feedback loop. It measures the *current* speed, compares it to the *desired* speed, and adjusts the valve immediately. It doesn't predict; it controls.

This brings us to the "hello" experiment, which broke my brain a little. Researchers asked people to keep a computer cursor on a target. The computer applied a "disturbance" (forces pushing the cursor away) that the person had to fight against with their mouse. Here's the twist: the disturbance wasn't random. It was an invisible force field shaped like the word "hello" (written upside down and mirrored). The participants fought the force, keeping the cursor steady. When researchers looked at the participants' hand movements, they had perfectly written the word "hello". Crucially, the participants had NO idea they were writing words. If the brain were a "prediction machine," it would have needed to model the force to predict the hand movement.
But the participants wrote a legible word purely by reacting to immediate error signals, instantaneously correcting the cursor's position.

This is **Perceptual Control Theory (PCT)**. The theory suggests the nervous system isn't a linear pipeline (Input → Compute → Output). It's a closed loop. We act to keep our *perception* of the world matching our internal *reference value*. [Image of Perceptual Control Theory negative feedback loop diagram]

Think about catching a baseball. If you were a "prediction machine," you'd calculate the ball's trajectory, wind speed, and gravity, then run to where the ball *will* be. But that's computationally expensive and error-prone. In reality, fielders just run in a way that keeps the "optical velocity" of the ball constant in their vision. If the ball looks like it's rising too fast, they move back. Dropping? They move forward. No physics calculus required. Just maintaining a visual constant.

This solves the "noise" problem. In predictive models, small jitters in your movement are considered "noise" or errors to be filtered out. In PCT, those jitters aren't errors at all: they're the system "feeling out" the environment to maintain control.

This has huge implications for AI and robotics. We are currently building robots with massive compute power to "predict" stability. But robots built on PCT principles, like inverted pendulums that just react to maintain verticality, are often more robust and stable than the predictive ones.

Why does this matter for you? It changes how we view "agency." We often think we need to predict the outcome of our actions to be effective. But the most efficient systems don't predict the outcome; they specify the goal and let the feedback loop handle the rest.

The "Prediction Illusion" suggests we aren't prophets simulating the future. We are controllers, surfing the present. We don't need to know what the wave will do in 10 seconds. We just need to keep the board steady right now.
If you want to dig into the paper, it’s "The prediction illusion: perceptual control mechanisms that fool the observer" by Mansell, Gulrez, and Landman (2025). It’s a dense read, but it completely reframes the "Bayesian Brain" debate. One final thought: Next time you're doing something skilled—driving, typing, sports—notice the difference. Are you calculating what comes next? Or are you just managing the gap between *what you see* and *what you want*? You might find you're doing a lot less "thinking" than you assumed.
Carlos E. Perez tweet media
English
139
223
1.2K
80.6K
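The closed-loop idea in the thread is easy to demonstrate. Below is a minimal sketch (my own toy model, not code from the paper): a pure error-driven controller holds a cursor at a reference while a hidden, structured disturbance pushes it around. The controller contains no model or forecast of the disturbance; it only integrates the current error, yet the cursor stays near the target, and the action history ends up being an inverted copy of the hidden disturbance, which is the "hello" effect in miniature.

```python
import math

def track_cursor(disturbance, reference=0.0, gain=8.0, dt=0.01):
    """PCT-style controller: the action depends only on the *current*
    gap between perceived cursor position and the reference."""
    action, trace = 0.0, []
    for d in disturbance:
        pos = action + d              # perception: our action plus disturbance
        error = reference - pos       # gap between what we want and what we see
        action += gain * error * dt   # integrate the error to act on it
        trace.append(pos)
    return trace

# a hidden, slowly varying, structured disturbance
dist = [0.5 * math.sin(0.01 * k) for k in range(3000)]
positions = track_cursor(dist)
# after the transient, |positions| stays far below the 0.5 disturbance amplitude
```

Design note: the loop never "knows" the disturbance is a sine wave; good rejection falls out of negative feedback alone, as long as the loop is fast relative to the disturbance.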
Mark Howell reposted
inControl podcast
inControl podcast@inControlpdcst·
🎙️ New episode! What is feedback, really? We go back to its prehistory, revisit Black’s negative-feedback amplifier, and trace the idea through biology, strategy, behaviour, and even our assumptions about causality. Link: incontrolpodcast.com Thanks: NCCR Automation
inControl podcast tweet media
English
0
2
8
408
Math Cafe
Math Cafe@Riazi_Cafe_en·
Tell us a math joke
English
24
4
51
12K
Mark Howell reposted
Richard Sutton
Richard Sutton@RichardSSutton·
To learn more about temporal difference learning, you could read the original paper (incompleteideas.net/papers/sutton-…) or watch this video (videolectures.net/videos/deeplea…).
Khurram Javed@kjaved_

The Dwarkesh/Andrej interview is worth watching. Like many others in the field, my introduction to deep learning was Andrej's CS231n. In this era when many are involved in wishful thinking driven by simple pattern matching (e.g., extrapolating scaling laws without nuance), it's refreshing to hear an influential voice that is tethered to reality.

One clarification for the podcast is that when Andrej says humans don't use reinforcement learning, he is really saying humans don't use returns as learning targets. His example of LLMs struggling to learn to solve math problems from outcome-based rewards also elucidates the problem with learning directly from returns. Fortunately for RL, this exact problem is solved by temporal difference (TD) learning. All sample-efficient RL algorithms that show human-like learning (e.g., sample-efficient learning on Atari, and our work on learning from experience directly on a robot) rely on TD learning.

Now Andrej is not primarily an RL person; he is looking at RL through the lens of LLMs these days, and all RL done in LLMs uses returns as targets, so it's understandable that he is assuming that RL is all about learning from observed returns. But this assumption leads him to the incorrect conclusion that we need process-based dense rewards for RL to work.

If you embrace TD learning, then you don't necessarily need a dense reward. Once you have learned a value function that encodes useful knowledge about the world, you can learn on the fly in the absence of rewards, just like humans and animals. This is possible because in TD learning there is no difference between learning from an unexpected reward and learning from an unexpected change in perceived value.

English
19
119
1.1K
159.3K
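The point that TD learning never needs a full return is easy to see on the classic 5-state random walk, a standard textbook example. In the sketch below every update uses only the one-step TD error, and the reward is sparse (nonzero only at the right terminal), yet the value estimates converge toward the true values k/6. Step size and episode count are chosen arbitrarily for illustration.

```python
import random

def td0_random_walk(episodes=8000, alpha=0.05, seed=1):
    """TD(0) on the 5-state random walk.

    States 1..5; stepping left of state 1 terminates with reward 0,
    stepping right of state 5 terminates with reward 1. The agent never
    sees a full return: each update uses only r + V(s') - V(s).
    """
    random.seed(seed)
    V = [0.0] * 7                    # V[0] and V[6] are terminal (stay 0)
    for _ in range(episodes):
        s = 3                        # every episode starts in the middle
        while 0 < s < 6:
            s2 = s + random.choice((-1, 1))
            r = 1.0 if s2 == 6 else 0.0
            V[s] += alpha * (r + V[s2] - V[s])   # one-step TD(0) update
            s = s2
    return V[1:6]                    # estimates for states 1..5
```

Note how states far from the rewarding terminal still acquire accurate values: value information propagates backward through the bootstrapped V(s') term, not through observed returns.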
Mark Howell
Mark Howell@MNWH·
@chrishipkins Safeguards need to be added to prevent future governments from raiding the fund the same way the current government raided the Climate Emergency Response Fund.
English
0
0
7
282
Chris Hipkins
Chris Hipkins@chrishipkins·
#BREAKING: Labour will launch the New Zealand Future Fund to create good, well-paid jobs and keep wealth and talent here at home. Our plan is about a future made in NZ.
English
312
67
480
48.2K
Mark Howell
Mark Howell@MNWH·
A little cloudy
Mark Howell tweet media
English
0
0
0
62
Satnam Singh
Satnam Singh@satnam6502·
I have landed my dream job. I’ve just accepted a position at Harmonic, a Palo Alto startup applying AI to formal mathematical reasoning. Harmonic’s Aristotle formal reasoning model achieved Gold Medal level performance at this year’s International Mathematical Olympiad (IMO). I will work on exploring applications of Aristotle to the formal verification of hardware. This job is a perfect intersection of hardware design and verification, functional programming, formal methods and machine learning, bringing together several threads of my career so far. The beauty of asking an AI to generate a proof for a lemma (e.g. a formal property about a circuit) is that it can be checked by an external interactive theorem prover (like Lean) to establish whether the AI’s output is actually correct. This is an awesome superpower! harmonic.fun @HarmonicMath
English
177
95
3K
240.1K
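The "external checker" point scales down to tiny examples. Here is a toy, hardware-flavoured lemma in Lean 4 (my own illustration; the name and statement are not from Harmonic or Aristotle): however a proof is produced, by a human or by a model, Lean's kernel verifies it independently, so a generated proof can be trusted once it type-checks.

```lean
-- Toy hardware fact: a wire XOR'd with itself always outputs false.
-- `cases` splits on the two Boolean values; `rfl` checks each case
-- by computation, and the kernel certifies the whole proof.
theorem xor_self_false (b : Bool) : Bool.xor b b = false := by
  cases b <;> rfl
```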
Mark Howell
Mark Howell@MNWH·
Installed an older printer that work was throwing out. It came with a new cartridge rated for 25,000 pages, so it should last me a while.
Mark Howell tweet media
English
0
0
2
64