Pinned Tweet
Mitchell B Slapik
465 posts

Mitchell B Slapik
@mslapik
MD/PhD Candidate 🥼 @McGovernMed | F30/TL1 Fellow 🏥 @NIH | Computational Neuroscience 🧠 @DragoiLab | Jazz Sax 🎷 @TheChirpChirps
Houston, TX · Joined November 2022
997 Following · 3.1K Followers
Mitchell B Slapik retweeted

"Before we name, we touch. We propose that the roots of language lie not in abstract, amodal symbols but in early bodily experience." Our Opinion paper is out in TICS. Congrats Luca Sergey Rinaldi !!
sciencedirect.com/science/articl…





@Here4Many2 Yes this is an important question. One answer is that these processes may happen in different brain regions or networks: for example, top-down dominance in cognitive areas but bottom-up dominance in sensory areas.

@mslapik How would this account for autism–schizophrenia comorbidities? I can imagine one might be dominant over the other (a 60/40 autism/schizophrenia split, 70/30, etc.; a vast oversimplification since we can't really quantify it like this, but you know what I mean).
Do we have data on people diagnosed with both autism and schizophrenia?

Sources:
Tarasi et al. (2021): pubmed.ncbi.nlm.nih.gov/34774901/
Pellicano et al. (2012): pubmed.ncbi.nlm.nih.gov/22959875/
Fletcher et al. (2008): nature.com/articles/nrn25…
Figure created using BioRender: biorender.com.
Mitchell B Slapik retweeted

In linguistics studies across cultures, humans have been shown to associate certain nonwords with shapes; the iconic examples are “bouba,” which people associate with a round shape, and “kiki,” which they associate with a spiky one.
In a new Science study, researchers report that the bouba-kiki effect is also exhibited by newly hatched chickens.
The authors propose that such sound-shape correspondences may belong to a set of innate cross-modal associations that are shared across species, rather than being a speech-related phenomenon that is distinctive to humans.
📄: t.co/JC6GvitgEu
#SciencePerspective: t.co/csaa2BiTG5

Mitchell B Slapik retweeted

Online now: Linking neural manifolds to circuit structure in recurrent networks dlvr.it/TRL6b7

Mitchell B Slapik retweeted

Until very recently, one of the major open problems was the existence of a single tile that tiles the plane, but only non-periodically: the so-called einstein problem. A demo app:
bnaskrecki.faculty.wmi.amu.edu.pl/spectre/
In 2023, Smith, Myers, Kaplan, and Goodman-Strauss published a paper introducing the first example of this mythological creature of tilings.
It is a notoriously hard puzzle, so I decided to build an interactive interface where you can test your patience and skill by trying to tile the entire plane with the Spectre.
Technically, it took me about four hours to vibe-code the whole app, starting from an initial seed based on a Jupyter Notebook. I used Codex CLI with the model gpt-5.2 (reasoning x-high, summaries auto). This is a project that, before agents, would have taken a few weeks to finish.



Excited to announce the 2nd paper of my PhD, which investigates how edge detectors contribute to representational untangling, making object categories more linearly separable than they are in pixel space. Now out in Neural Computation: direct.mit.edu/neco/article/3…
Mitchell B Slapik retweeted

How does an embryo reliably "compute" its form - "cell by cell" - using only local interactions and mechanics, yet produce a precise global body plan? I’m excited to share our Nature Methods paper "MultiCell: geometric learning in multicellular development", presenting #AIxBiology research led by @HaiqianYang and the result of a great collaboration with Ming Guo, George Roy, Tomer Stern, Anh Nguyen and Dapeng Bi.
A long-standing challenge in developmental biology is to predict how thousands of cells collectively self-organize as tissues fold, divide, and rearrange. In MultiCell, we represent a developing embryo as a dual graph that unifies two complementary views of tissue mechanics with single-cell resolution: cells as moving points (granular) and cells as a connected foam (junction network). This lets the model learn dynamics from both geometry and cell–cell connectivity.
On whole-embryo 4D light-sheet movies of Drosophila gastrulation (~5,000 cells), our model predicts key cell behaviors and the timing of events, including junction loss, rearrangements, and divisions with high accuracy, at single-cell resolution. Beyond prediction, the same representation supports robust time alignment across embryos and offers interpretable activation maps that highlight the morphogenetic "drivers" of development. The broader goal is a foundation for cell-by-cell forecasting in more complex tissues, and eventually for detecting subtle dynamical signatures of disease.
Kudos to the team for this inspiring collaboration with brilliant researchers to push the boundary of AI for biology!
Citation: Yang, H., Roy, G., Nguyen, A.Q., Buehler, M.J., et al. MultiCell: geometric learning in multicellular development. Nature Methods (2025), DOI: 10.1038/s41592-025-02983-x
Code/data links are in the manuscript.
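As a rough illustration of the dual-graph idea described above (cells as moving points plus a junction network), here is a minimal sketch; the structure and field names are my own guesses for exposition, not the paper's actual representation:

```python
# Illustrative dual-graph tissue: each cell is both a centroid (granular
# view) and a node in a junction/adjacency network (foam view).
from dataclasses import dataclass, field

@dataclass
class DualGraphTissue:
    positions: dict                               # cell id -> (x, y, z) centroid
    junctions: set = field(default_factory=set)   # frozenset({i, j}) shared-junction pairs

    def neighbors(self, cell):
        """Cells sharing a junction with `cell` (the foam view)."""
        return {other for e in self.junctions if cell in e for other in e - {cell}}

tissue = DualGraphTissue(
    positions={0: (0.0, 0.0, 0.0), 1: (1.0, 0.0, 0.0), 2: (0.5, 0.9, 0.0)},
    junctions={frozenset({0, 1}), frozenset({1, 2})},
)
# In this picture, a "junction loss" event is simply an edge removal,
# while cell motion updates the positions dict independently.
tissue.junctions.discard(frozenset({0, 1}))
```

The point of the two coupled views is that geometry (positions) and connectivity (junctions) can change on different events, which is what lets a model learn from both at once.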
Mitchell B Slapik retweeted

Now out in Nature Human Behaviour! 🚀🚀
Over the past decades, research on collective human behaviour has relied heavily on networks. This is intuitive: people interact with other people.
However, we argue that this dominant framework misses a crucial ingredient.
Traditional networks represent agents as nodes and pairwise relations as edges. As a result, they fundamentally assume that social interactions can be decomposed into pairs.
Yet many social processes are irreducibly group-based.
A simple example: a group of three coauthors writing a paper cannot be reduced to three independent pairs of coauthors. The group itself matters.
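The irreducibility of the three-coauthor example can be sketched in a few lines; this is an illustrative toy, not code from the paper, and all names are made up:

```python
# Toy contrast between the hypergraph view and its pairwise projection.
from itertools import combinations

# One paper written jointly by three coauthors: a single 3-node hyperedge.
joint_paper = [{"alice", "bob", "carol"}]

# Three separate two-author papers among the same people.
pair_papers = [{"alice", "bob"}, {"bob", "carol"}, {"alice", "carol"}]

def project(hyperedges):
    """Clique-project hyperedges onto a set of pairwise edges."""
    return {frozenset(p) for e in hyperedges for p in combinations(sorted(e), 2)}

# Both scenarios collapse to the same pairwise network...
assert project(joint_paper) == project(pair_papers)
# ...but they remain distinct as hypergraphs: the group is the unit.
assert joint_paper != pair_papers
```

The projection discards exactly the group-level information that the article argues shapes collective behaviour beyond dyadic ties.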
In this article, we review a wide range of empirical and theoretical cases where group interactions cannot be decomposed into pairwise ones, and show that higher-order interactions shape collective behaviour above and beyond dyadic ties.
We advocate studying collective behaviour on hypergraphs, where interactions can involve multiple agents simultaneously.
We review how hypergraphs provide new insights across domains, including affiliation and collaboration networks, high-frequency contact settings (families, friends), and key social processes such as social contagion, cooperation, truth-telling, and moral behaviour.
Finally, we outline promising directions for future research: addressing computational challenges of higher-order models; studying bias and inequality in group dynamics; combining hypergraphs and large language models to investigate the coevolution of language and behaviour; and using higher-order networks to simulate the impact of policies before implementation, among others.
We are very excited about this work and hope it will inspire further research in a rapidly growing and fundamental area with broad real-world implications.
Link to the paper in the first reply
This work was brilliantly led by Federico Battiston (@fede7j), with an outstanding team of co-authors: Fariba Karimi (@fariba_k), Sune Lehmann, Andrea Bamberg Migliano, Onkar Sadekar (@OnkarSadekar), Angel Sanchez, & Matjaz Perc (@matjazperc)

Mitchell B Slapik retweeted

Mapping the genetic landscape across 14 psychiatric disorders | Nature
nature.com/articles/s4158…
Mitchell B Slapik retweeted

Kinetic Hopfield networks: When memories live in dynamics, not energy minima
Content-addressable memory—the ability to recall a stored pattern from a partial or noisy cue—is usually framed in terms of energy landscapes. In Hopfield-style networks, memories sit at the bottom of energy wells: the system relaxes downhill until it lands in the right minimum. But what if memories didn’t live in special low-energy states at all—and were encoded purely in the kinetics of how the system moves?
Félix Benoist, Luca Peliti and Pablo Sartori tackle exactly this in their new work. Instead of carving patterns into the energy landscape, they keep the energy function almost blind to the stored content and encode memories in the transition rates between states. Patterns become kinetic traps: not the deepest valleys, but the states that are reached fastest and escaped from very slowly.
What’s striking is that this “kinetic encoding” performs comparably to classical Hopfield networks on core metrics: it supports extensive capacity (scaling with system size), can be boosted with higher-order couplings (up to ∼N² patterns), and even exhibits glassy, ageing-like dynamics where correlation functions depend on the system’s history. The patterns are transient—eventually the system escapes—but their lifetime can be exponentially larger than the retrieval time, making them effectively robust memories over operational timescales.
Beyond a neat theoretical twist, this work suggests a different way to think about computation in physical and synthetic systems. In biochemical networks, self-assembly, or neuromorphic hardware, it may be more natural (or cheaper) to sculpt pathways and kinetic barriers than to engineer carefully tuned energy minima. Kinetics, not just thermodynamics, can carry the load of associative memory.
Paper: journals.aps.org/prl/abstract/1…
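For contrast, the classical energy-based picture the paper departs from can be sketched with a standard Hopfield network; this illustrates only the textbook scheme (Hebbian weights, downhill relaxation into an energy minimum), not the kinetic encoding itself:

```python
# Minimal classical Hopfield recall: memories sit at energy minima and the
# dynamics relax downhill. All sizes and names here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N = 64                                          # number of +/-1 units
patterns = rng.choice([-1, 1], size=(3, N))     # three random memories

# Hebbian weights: W = (1/N) * sum_mu xi^mu (xi^mu)^T, zero diagonal.
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0.0)

def recall(cue, steps=20):
    """Asynchronous updates that monotonically lower E = -0.5 s^T W s."""
    s = cue.copy()
    for _ in range(steps):
        for i in rng.permutation(N):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Corrupt 10 bits of pattern 0, then recover it from the noisy cue.
cue = patterns[0].copy()
flip = rng.choice(N, size=10, replace=False)
cue[flip] *= -1
out = recall(cue)
overlap = (out @ patterns[0]) / N   # 1.0 means perfect recall
```

In the kinetic variant described above, recall would instead be governed by transition rates into and out of pattern states, with an energy landscape that is nearly blind to the stored content.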

Mitchell B Slapik retweeted

🚨Our November issue is now live, and it includes a study on how LLMs align with the reading brain, a generative model for structure-based molecular design, and much more! Check it out! nature.com/natcomputsci/v…

Mitchell B Slapik retweeted

During the colloquium dinner, my colleague David Weinberg reminded us that one of the earlier indirect claims for dark matter came from Jeremy Ostriker and Jim Peebles (1973)—the simple argument that without dark matter, disks would naturally buckle, which does not match observations. So I attempted to reproduce this result with Gemini 3 for fun.
Here is the live website
tingyuansen.github.io/Ostriker_Peebl…
Mitchell B Slapik retweeted

Excited to share our work with @EngelTatiana
now published in Nature Machine Intelligence!
nature.com/articles/s4225…

Tolmachev Pavel@TolmachevPavel2
RNNs are often used as a model for exploring how the brain may solve specific tasks. In a new preprint, we show that, depending on the architecture, RNNs find different circuit solutions, behaving differently when exposed to novel stimuli. biorxiv.org/content/10.110… @EngelTatiana
Mitchell B Slapik retweeted

In "On Growth and Form," published in 1917, the scientist D’Arcy Thompson highlighted similarities between living and nonliving matter. His thesis — that physical and mechanical forces shape organisms — is coming back into vogue. quantamagazine.org/genes-have-har…
