Kording Lab 🦖

30.6K posts

@KordingLab

Konrad Kording, @Penn Prof, deep learning, brains, #causality, rigor, https://t.co/tTJW05RRfa, https://t.co/qf7ZHxjaK1, Transdisciplinary optimist, Dad, Loves outdoors, 🦖

Philadelphia, PA · Joined November 2012
3.3K Following · 58.7K Followers
Kording Lab 🦖 retweeted
Tredegar @frosty_tredegar
@KordingLab so we're cool as long as I know what a prince and lover ought to be, right
[image]
1 reply · 1 repost · 1 like · 282 views
Kording Lab 🦖 @KordingLab
My app that helps students think more carefully through their projects now looks really pretty by 2026 standards (imho). Try it out at planyourscience.com
[image]
1 reply · 3 reposts · 18 likes · 1.1K views
Kording Lab 🦖 retweeted
Fatih Dinc @fatihdin4en
For decades, two revolutions in neuroscience ran in parallel:
- 🧠 In vivo imaging: watch neurons fire in living animals
- 🧬 Spatial transcriptomics: read cells' molecular identity
Meet TRU-FACT, a graph-based method that matches cells between these datasets at scale 🧵
[GIF]
4 replies · 55 reposts · 249 likes · 17.8K views
Kording Lab 🦖 @KordingLab
People were astonished by the artifact. Its text was better than anyone else's they knew. It told rich stories. It spoke to their hearts. It told them of a better world. And yet, some people wished the artifact had never been invented. I, personally, am fine with books.
1 reply · 2 reposts · 14 likes · 2.7K views
Kording Lab 🦖 @KordingLab
Neuroscience is a global endeavor whose main promise is curing the many brain-related diseases. Global collaboration is the key to that. I am so sad that our Iranian friends will, once again, be excluded.
Neuromatch @neuromatch

We have difficult news to share. Neuromatch's Office of Foreign Assets Control (OFAC) license renewal, which has allowed us to include participants residing in Iran since 2020, has been denied by the United States Government.

1 reply · 9 reposts · 41 likes · 4.3K views
Kording Lab 🦖 retweeted
Micah G. Allen @micahgallen
Spontaneous behavior in freely exploring mice is not random wandering but a succession of self-directed tasks where low-level actions are sequenced to achieve high-level goals. cell.com/neuron/fulltex…
3 replies · 26 reposts · 145 likes · 9.3K views
Girish Kumar, PhD @girishkaitholil
@KordingLab Modellers with biological constraint are safe. What's compressing is the modeller-as-pipeline-builder, where the model is generic and the value was just running it. AutoML and copilots ate that role. Where's the constraint coming from in your lab's work?
1 reply · 0 reposts · 1 like · 151 views
Kording Lab 🦖 @KordingLab
@girishkaitholil Neurotensin has a largely overlapping list (apparently upstream of a lot of DA activity). Plus addiction and eating disorders.
1 reply · 0 reposts · 0 likes · 84 views
Girish Kumar, PhD @girishkaitholil
@KordingLab Dopamine has Parkinson's, schizophrenia, and addiction as flagship diseases driving NIH funding; neurotensin doesn't. That's most of the 100x gap.
1 reply · 0 reposts · 3 likes · 153 views
Kording Lab 🦖 retweeted
Dwarkesh Patel @dwarkesh_sp
There's a quadrillion-dollar question at the heart of AI: why are humans so much more sample-efficient than LLMs?

There are three possible answers:
1. Architecture and hyperparameters (aka transformer vs whatever 'algo' cortical columns are implementing)
2. Learning rule (backprop vs whatever the brain is doing)
3. Reward function

@AdamMarblestone believes the answer is the reward function. ML likes to use pretty simple loss functions, like cross-entropy. These are easy to work with. But they might be too simple for sample-efficient learning.

Adam thinks that, in humans, the large number of highly specialised cells in the 'lizard brain' might actually be encoding information for sophisticated loss functions, used for 'training' the more sophisticated areas like the cortex and amygdala.

Like: the human genome is barely 3 gigabytes (compare that to the TBs of parameters that encode frontier LLM weights). So how can it include all the information necessary to build highly intelligent learners? Well, if the key to sample-efficient learning resides in the loss function, even very complicated loss functions can still be expressed in a couple hundred lines of Python code.
190 replies · 170 reposts · 1.9K likes · 933.8K views
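The tweet's contrast between simple and complicated loss functions can be sketched in a few lines. This is a hypothetical illustration, not anything from the thread: `cross_entropy` is the standard "simple" loss the tweet mentions, while `composite_loss` adds made-up innate-drive terms (`novelty`, `effort`) with illustrative weights to show how richer objectives stay compact in code.

```python
import numpy as np

def cross_entropy(probs, target):
    """The 'simple' loss ML typically uses: negative log-probability
    of the correct class."""
    return -np.log(probs[target])

def composite_loss(probs, target, novelty, effort,
                   w_novelty=0.1, w_effort=0.05):
    """A hypothetical 'complicated' loss: prediction error plus innate
    drives (reward novelty, penalize effort). The extra terms and
    weights are illustrative assumptions only."""
    return (cross_entropy(probs, target)
            - w_novelty * novelty
            + w_effort * effort)

probs = np.array([0.7, 0.2, 0.1])
print(cross_entropy(probs, 0))                       # -ln(0.7) ≈ 0.357
print(composite_loss(probs, 0, novelty=1.0, effort=0.5))
```

Even a loss with many such hand-crafted terms stays a short function; the tweet's point is that a genome-sized description could plausibly specify one.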
Kording Lab 🦖 @KordingLab
@ozalabCP But it would be doable to export the app's content into your graphical one-pager.
0 replies · 0 reposts · 1 like · 26 views
Kording Lab 🦖 @KordingLab
@ozalabCP you can think of the app as a test for all the things you are writing about. It is graphically different because it is in an app and has a bunch of extra AI content. But it is very similar in spirit.
1 reply · 0 reposts · 1 like · 36 views
Kording Lab 🦖 @KordingLab
My science app primarily helps students see weaknesses in their projects. But it's called planyourscience.com. People think it's admin, but it's "more awesome science". Should I
3 replies · 1 repost · 8 likes · 2.8K views
Kording Lab 🦖 @KordingLab
@seanluomdphd Well. Automated liquid handlers. Acoustic dispensers. Microfluidics. Cartesian gantry robots integrated into workcells. Automated plate sealing, labeling, and barcode systems. Plate readers. Automated incubators. === What a shame the future looks bleak for experimentalists.
0 replies · 0 reposts · 0 likes · 48 views
Sean X. Luo MD PhD @seanluomdphd
@KordingLab Yes - if the super-productiveness translates to obvious idleness. You put him on more grants and papers. No need to hire new staff. This apparently happens more often at hyper competitive depts like MSKCC. I have so many stories. 😏
1 reply · 0 reposts · 2 likes · 39 views
Kording Lab 🦖 @KordingLab
@seanluomdphd Wait. A professor hires some experimentalists and some theoreticians. The theoreticians become super productive. You think that will produce fewer not more theory hires? I have never seen such a dynamic unfold.
1 reply · 0 reposts · 0 likes · 80 views
Sean X. Luo MD PhD @seanluomdphd
It does, in the sense that more productivity will chase the same dollars, increasing competitiveness even more than before. As a PI one does not necessarily care, because the increased productivity lets you filter out the less productive candidates when hiring. For postdocs it does matter, because the less productive ones don't get hired. In theory you put the same number of postdocs on grants and they do more for you. In practice, competition rises, so eventually fewer grants get funded because AI is writing grants.
1 reply · 0 reposts · 2 likes · 83 views
Sean X. Luo MD PhD @seanluomdphd
@KordingLab There *definitely* is. It’s called NIH budget cap. Medicine is actually the exact opposite: demand is price insensitive. There’s endless demand for medicine causing cost to explode without limit.
1 reply · 0 reposts · 2 likes · 80 views