Magdalena Kachlicka @mkachlicka.bsky.social

2.6K posts

@mkachlicka

Postdoctoral Researcher https://t.co/F8yfkW93pZ & HRF @bbkpsychology @audioneurolab | speech+sounds+brains 🧠 cogsci, audio, neuroimaging, language, methods

London, England · Joined February 2012
2.9K Following · 961 Followers
Magdalena Kachlicka @mkachlicka.bsky.social retweeted
tyler bonnen @tylerraye
excited to share some recent work! tl;dr: models trained on multi-view sensory data are the first to match human-level 3D shape perception, all zero-shot, with no training on experimental data/images. Project page: tzler.github.io/human_multiview 1/🧠
Nadieh Bremer @NadiehBremer
📣 NEW! I’ve just released the BIGGEST and perhaps most creative project I’ve ever worked on! “Searching for Birds” searchingforbirds.visualcinnamon.com 🐤 A #dataviz article & exploration that dives into the data that connects humans with birds, by looking at how we search for birds.
Magdalena Kachlicka @mkachlicka.bsky.social retweeted
Luiz Pessoa @PessoaBrain
What is noise in the brain? Almost always we average responses, thus equating response variability with noise. Well, we shouldn't, because variability is also signal, not noise to be entirely discarded. doi.org/10.1016/j.neur…
Magdalena Kachlicka @mkachlicka.bsky.social retweeted
Colton Casto @_coltoncasto
The cerebellum supports high-level language?? Now out in @NeuroCellPress, we systematically examined language-responsive areas of the cerebellum using precision fMRI and identified a *cerebellar satellite* of the neocortical language network! authors.elsevier.com/a/1mUU83BtfHC-… 1/n🧵👇
Magdalena Kachlicka @mkachlicka.bsky.social retweeted
ChangLabUCSF @ChangLabUcsf
New work from our lab showing that the human frontal lobe receives fast, low-level speech information in parallel with early speech areas! doi.org/10.1038/s41467…
Magdalena Kachlicka @mkachlicka.bsky.social retweeted
David Poeppel @davidpoeppel
CogNeuroLanguage: new work by @lauragwilliams.bsky.social & @jeanremiking.bsky.social (with Alec Marantz & me) shows how the brain maintains and updates the continuously unfolding language hierarchy during comprehension, anchoring linguistic theories to biological implementation pnas.org/doi/10.1073/pn…
Magdalena Kachlicka @mkachlicka.bsky.social retweeted
Shravan Vasishth @ShravanVasishth
Applications for the summer school on statistical methods for linguistics and psychology (Potsdam, Germany) are now open: vasishth.github.io/smlp2026/
Magdalena Kachlicka @mkachlicka.bsky.social retweeted
Sam Nastase @samnastase
I'm recruiting PhD students to join my new lab in Fall 2026! The Shared Minds Lab at @USC will combine deep learning and ecological human neuroscience to better understand how we communicate our thoughts from one brain to another.
Magdalena Kachlicka @mkachlicka.bsky.social retweeted
Sam Norman-Haignere @SamNormanH
Human auditory cortex integrates information in speech across absolute time (e.g., 200 ms), not phonemes, syllables, words, or any other time-varying speech structure: nature.com/articles/s4159…
Magdalena Kachlicka @mkachlicka.bsky.social retweeted
Greta Tuckute @GretaTuckute
Humans largely learn language through speech. In contrast, most LLMs learn from pre-tokenized text. In our #Interspeech2025 paper, we introduce AuriStream: a simple, causal model that learns phoneme, word & semantic information from speech. Poster P6, Aug 19 at 13:30, Foyer 2.2!
Magdalena Kachlicka @mkachlicka.bsky.social retweeted
A. Benítez-Burraco @abenitezburraco
In contrast to most LLMs, which learn from pre-tokenized text, this AuriStream tool is a biologically inspired model for encoding speech that learns from cochlear tokens isca-archive.org/interspeech_20… 🤔 This could help to generate LLMs for minority languages.