pierre orhan
@PierreOrhan

98 posts

Exploring with giants’ treasure maps. PhD student with Yves Boubenec and Jean-Rémi King

ENS - McGill · Joined February 2020
295 Following · 179 Followers

Pinned Tweet
pierre orhan @PierreOrhan ·
New result! A theory of @AdrianDu_'s data, where we model distributions of tuning curves.
- Random connectivity brings symmetry 🌈
- SVD revealed the distribution of Fourier coefficients 🥳
- The model is a null hypothesis usable anywhere in the brain
With @apeyrache and @AdrianDu_
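The symmetry claim in the pinned tweet has a textbook core: if every neuron on a ring receives the same random connectivity profile, merely shifted (a circulant matrix), then the Fourier modes are exact eigenvectors, so an SVD/eigendecomposition recovers the Fourier coefficients of the profile. A minimal numpy sketch of that fact (sizes and seed are illustrative, not the paper's actual model):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
c = rng.standard_normal(n)                # one random connectivity profile

# Circulant connectivity: every neuron sees the same profile, shifted.
# C[i, j] = c[(i - j) % n]  (column j is the profile rolled by j)
C = np.stack([np.roll(c, j) for j in range(n)], axis=1)

# Any circulant matrix is diagonalized by the Fourier basis, and its
# eigenvalues are the DFT of the profile.
k = 5                                     # pick one Fourier mode
v = np.exp(2j * np.pi * k * np.arange(n) / n)
lam = np.fft.fft(c)[k]
assert np.allclose(C @ v, lam * v)        # Fourier mode is an eigenvector
```

This is why random-but-shared wiring on a circular variable produces Fourier structure in the decomposition.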
pierre orhan Retweeted
Cliona O'Doherty @ClionaODoherty ·
1/7 Does the infant brain have representational structure? 👶🧠In the FOUNDCOG project, we scanned 134 awake infants using fMRI. Published in Nature Neuroscience, our research reveals 2-month-old infants already possess complex visual representations in VVC that align with DNNs.
pierre orhan @PierreOrhan ·
@TimDarcet 2) Also, teams can decide not to release checkpoints, sometimes because they reveal cues about training strategies. But this considerably reduces the impact/CO2-cost ratio a model can have on future research... so maybe this should be accounted for here!
pierre orhan @PierreOrhan ·
@TimDarcet Definitely, great to see this reported! And to that one could add: 1) travel CO2 budget: how many people (if any) went overseas to present the project; this alone might sadly exceed the 2 t CO2/year target for any citizen (and one can have a reasonable cost/benefit discussion).
TimDarcet @TimDarcet ·
Final run != whole project, in particular for environmental impact! Please report an estimate of the total carbon footprint of the project in your papers. You can get the total number of GPU-hours of your past Slurm jobs over a time period via sacct.
JFPuget 🇺🇦🇨🇦🇬🇱@JFPuget

This is often misunderstood by commenters. The cost of training a model is not the cost of the final run. It is that cost plus the cost of all preceding experiments and tuning.

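The sacct tip above can be turned into a total GPU-hour count. A minimal sketch, assuming `sacct -X -n -P --format=JobID,ElapsedRaw,AllocTRES` output; the `gres/gpu` TRES field name depends on your cluster's Slurm configuration, so treat it as an assumption:

```python
import re

def gpu_hours(sacct_lines):
    """Sum GPU-hours from `sacct` parsable output.

    Expects lines of the form "JobID|ElapsedRaw|AllocTRES", e.g.
    "123|7200|billing=8,cpu=8,gres/gpu=2,mem=32G,node=1",
    as produced by:
        sacct -X -n -P --starttime=2020-01-01 \
              --format=JobID,ElapsedRaw,AllocTRES
    (-X: one row per job, -n: no header, -P: pipe-separated).
    """
    total = 0.0
    for line in sacct_lines:
        parts = line.strip().split("|")
        if len(parts) != 3:
            continue
        _, elapsed_s, tres = parts
        m = re.search(r"gres/gpu=(\d+)", tres)
        if m and elapsed_s.isdigit():
            # seconds of wall time x GPUs allocated, converted to hours
            total += int(elapsed_s) * int(m.group(1)) / 3600.0
    return total
```

Pipe the command's stdout lines into `gpu_hours` to get the project total to report.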
pierre orhan Retweeted
Micha Heilbron @m_heilb ·
New preprint! w/ @dr_han_jones. Adding human-like fleeting memory to transformers improves language learning, but impairs reading-time prediction. This supports ideas from cognitive science but complicates the link between model architecture and behavioural prediction.
pierre orhan Retweeted
Paris-Saclay Institute of Neuroscience
We grieve the passing of Dr. Yves Frégnac. Yves has shaped the study of cortical processing and plasticity. He heralded interdisciplinarity and was one of the founding fathers of our institute. Our thoughts go to his family and to his friends in the community and beyond. pic.twitter.com/BGE4QGcg8g
Adrian Duszkiewicz @AdrianDu_ ·
So happy to finally see this out! We harnessed the simplicity of coding in the cortical head-direction circuit (which represents a 1D circular variable 🧭) to look for principles governing interneuron tuning, which is notoriously fuzzy😵‍💫 [1/n] nature.com/articles/s4159…
pierre orhan @PierreOrhan ·
@adityaasopa @AdrianDu_ On the contrary, we show that no plasticity is required here, nor any particular stabilization dynamics; we advocate that this effect results from random local E→I connectivity.
Aditya Asopa @adityaasopa ·
@AdrianDu_ @PierreOrhan Is that mainly due to inhibitory stabilization dynamics driven by principal cells (HD) or due to cotuned feed-forward E-I weights (and plasticity) on both HD and FS cells?
pierre orhan @PierreOrhan ·
@AdrianDu_ A long-term effort showing how inhibitory cells get their tuning from local excitatory cells is out in Nat. Neuro! I helped by finding that this tuning emerges from random wiring and showing that the models predicted a local connectivity (cortical rather than thalamic) 😁!
Adrian Duszkiewicz@AdrianDu_

We then showed with optogenetic experiments and @PierreOrhan's clever simulations that it's because interneurons inherit their tuning from local excitatory inputs and NOT from the upstream thalamic inputs. We predict this rule holds for other cortical networks... [3/n]

pierre orhan @PierreOrhan ·
@apeyrache Don't you think we are in strategic need of a scalable data saving/sharing hub? Even for the model analyses.
Adrien Peyrache 🪐😴⚡️
The amount of data we collect is insane. I bet you this won’t last until the summer.
pierre orhan Retweeted
SueYeon Chung @s_y_chung ·
👋@FlatironCCN is currently hiring Flatiron Research Fellows! For openings in Neuro-AI and Geometric Data Analysis group, you can find more information here: apply.interfolio.com/134616 P.S. This is a great job in an incredible environment for computational research. If you know, you know😉 Email me if you have any questions!
pierre orhan @PierreOrhan ·
@nikparth1 @DrewLinsley To see the differences, you need to focus on a subset of the stimuli (for example, score how well you predict responses to a set of stimuli of the same class: the metric needs to be more precise), introduce controlled stimuli, or look at timescales offered by ephys but not fMRI.
pierre orhan @PierreOrhan ·
@nikparth1 @DrewLinsley It depends on the stimuli. In datasets (audio/images) with different classes, most of the variance can be captured by separating the classes well enough. If you look at a voxel's BOLD (fMRI) or a neuron's mean firing rate (ephys) for each stimulus, the activity patterns will be quite similar.
Nikhil Parthasarathy @nikparth1 ·
NeuroAI people: Why do we think a single pretrained deep learning model should be able to linearly predict both fMRI voxel activity and ephys recordings? It seems like these are different signals that carry different information and thus require different scales of modeling.
Alexandre Défossez @honualx ·
EnCodec is now on 🤗 Transformers! Think of it as a low-level latent space 🔮 invertible to audio 🔊 It also provides a discrete space for language models, as used in our MusicGen model.
Vaibhav (VB) Srivastav@reach_vb

Want to train your own Bark/MusicGen-like TTS/TTA models? 👀 The SoTA EnCodec model by @MetaAI has now landed in 🤗Transformers! It supports compression down to 1.5 kbps and produces discrete audio representations. ⚡️ Model: huggingface.co/docs/transform… Colab: github.com/Vaibhavs10/not…

pierre orhan Retweeted
Linnea Evanson, PhD @EvansonLinnea ·
🔥New paper accepted to ACL 2023! “Language acquisition: do children and language models follow similar learning stages?” With @lakretz and @JeanRemiKing arxiv.org/abs/2306.03586 Very happy to share this work from my internship at @MetaAI ! Three key results below 👇 1/8
pierre orhan @PierreOrhan ·
@TimDarcet You may test that by putting, in two images A and B, two patches (one in A, one in B) with a semantic link, and two patches (one in A, one in B) with no semantic link. (The second pair acts as a control for the case where the network picks up on texture irregularity.)
pierre orhan @PierreOrhan ·
@TimDarcet Very cool! To you, is the semantic similarity purely based on the in-image context (i.e. the structure of the image; for example, here the wings are in a certain position along the skeleton relative to the tail), or does it also rely on semantic knowledge learned by the network?
TimDarcet @TimDarcet ·
1/ This week we released DINOv2: a series of general vision encoders pretrained without supervision. Good out-of-the-box performance on a variety of domains, matching or surpassing other publicly available encoders.
pierre orhan @PierreOrhan ·
@TimDarcet Hi Timothée, great work! Could you explain a bit more what this matching does and how it works? Thank you!
TimDarcet @TimDarcet ·
6/ With these capabilities emerge new interesting properties. A very nice one is the ability to perform semantic keypoint matching between images simply by matching the closest features. This works across very different domains!
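The "matching the closest features" step described in the DINOv2 thread can be sketched as a mutual nearest-neighbour search over patch features. The function below is a generic illustration, not DINOv2's actual API; the feature shapes and L2-normalisation are assumptions:

```python
import numpy as np

def match_keypoints(feats_a, feats_b):
    """Mutual nearest-neighbour matches between two images' patch features.

    feats_a: (Na, d) and feats_b: (Nb, d), rows L2-normalised so the dot
    product is cosine similarity. Returns (i, j) pairs of patches that are
    each other's closest feature.
    """
    sim = feats_a @ feats_b.T          # (Na, Nb) cosine similarities
    nn_ab = sim.argmax(axis=1)         # best B patch for each A patch
    nn_ba = sim.argmax(axis=0)         # best A patch for each B patch
    # Keep only mutual matches: i's best is j AND j's best is i.
    return [(i, j) for i, j in enumerate(nn_ab) if nn_ba[j] == i]
```

The mutual-consistency check is what filters out the many-to-one matches a plain nearest-neighbour lookup would produce.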