Join us for the next MorphoTalks with Professor Thomas Gorochowski @chofski (University of Bristol - England 🇬🇧) for a deep dive into programming biological systems on 25 February 2026.
With Kieran Nazareth and @MorphComp #Robotics #Robot #SoftRobotics
📣New blog post! 📣 "75 Ideas to Organise your Research Community" - Organising your research community not only lets you give back, but also demonstrates leadership, which is vital for job applications. Organising also helps you to rapidly build a career-boosting network. No seniority or budget is needed. Here are 75 inspiring ideas!
worksmartandberemarkable.com/blog/2025/75-i…
Fundamental limits of reservoir computing: stability vs. reach
Reservoir computing (RC) uses the natural dynamics of a large recurrent network to generate time-varying signals. You only train a simple readout, not the whole network, which makes RC appealing both for neuroscience models and for physical implementations (photonic chips, mechanical oscillators, cultured neurons). Yet RC often behaves unpredictably: sometimes it learns a target sequence easily, sometimes it cannot hold the pattern or never gets close.
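The "train only a simple readout" idea can be shown in a few lines. Below is a minimal echo state network sketch (all names and hyperparameters here are illustrative assumptions, not taken from the paper): a fixed random recurrent reservoir is driven with a target signal, and only a linear readout is fit by ridge regression to predict the next target value.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200            # reservoir size (assumed, for illustration)
T = 1000           # number of training steps

# Fixed random recurrent weights, rescaled to spectral radius ~0.9
# (a common heuristic for keeping the reservoir dynamics stable)
W = rng.normal(0, 1, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.normal(0, 1, (N, 1))   # input weights, also fixed

# Target: a simple sine wave the readout should learn to reproduce
target = np.sin(0.1 * np.arange(T))[:, None]

# Drive the reservoir with the target (teacher forcing); the
# recurrent weights W and input weights W_in are never trained
x = np.zeros(N)
states = np.zeros((T, N))
for t in range(T):
    x = np.tanh(W @ x + W_in @ target[t])
    states[t] = x

# Train ONLY the linear readout, via ridge regression, to map the
# current reservoir state to the next target value
X, Y = states[:-1], target[1:]
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ Y)

pred = X @ W_out
mse = np.mean((pred - Y) ** 2)
print(f"readout training MSE: {mse:.2e}")
```

Because the reservoir weights stay fixed, training reduces to a single linear regression, which is what makes RC attractive for physical substrates where the internal dynamics cannot be reprogrammed.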
Daoyuan Qian and Ila Fiete show that these failures come from two distinct causes. First is stability—the network must have a feedback-stabilized orbit that actually sustains the target output. If the orbit is unstable, the network can match the target during training but drifts away as soon as training stops. Second is reach—even if a stable orbit exists, the learning rule must be able to steer the system close enough to it. FORCE learning generally has greater reach than teacher-forcing, and adding controlled “forgetting” can improve reach further.
The key insight is that stability and reach are separate. Increasing reservoir size helps reach (more expressive dynamics), but also makes stability harder (more modes near instability). Simply scaling up a network is not always better. Designing reservoirs with multiple neuron types can improve this trade-off—retaining expressive power while keeping the dynamics stable.
The result is a clearer, more practical view of how to design and train reservoir systems. If the target lies outside the stability region, change the reservoir. If it lies inside but training stalls, change the learning rule. This framework connects engineered RC systems with ideas from neural dynamics in the brain, where biological circuits must also balance flexibility with stability.
Paper: journals.aps.org/pre/abstract/1…
@LindauWest @Procreate Thank you, Linda! We work on bio-inspired and growing robots, though not quite in the style depicted. The drawing is mostly for fun.
@MorphComp @Procreate Love the creativity; sci-fi sketches always spark wild ideas ✨ What inspired this mix of robotics and organic growth in your drawing?
Almost exactly 10 years after joining @imperialcollege as a Postdoc, I am honoured to announce that I am now Professor in Machine Learning and Robotics! 👨🎓 🤖
My fantastic team found the best gift to celebrate this special occasion!
🚨 Hiring! We are looking for a postdoc researcher with expertise in computational fluid dynamics and structural simulations to investigate the biomechanical and mechanosensory basis of insect flight.
Extreme agility ✅
Morphological computing✅
Meshes!✅
jobs.rvc.ac.uk/vacancy.aspx?r…
Humanoid robots tend to be designed around software that controls everything centrally. This "brain-first" approach results in physically unnatural machines. @harajabi_ sciencealert.com/humanoid-robot…
Colleagues in Tokyo are proposing an interesting take on "informational embodiment" - the highlights are very much in line with the scalability and attunement theses of irruption theory! 🤓