Martin Proks
@mproksik
468 posts

Computational Scientist @UCPH_reNEW · Bioinformatics · Stem Cell and Developmental Biology

Joined October 2015
281 Following · 82 Followers
Martin Proks retweeted
Jacob (Yaqub) Hanna - يعقوب حنا
EEE meeting is BACK! Early Embryogenesis & Epigenetics conference in Berlin, 02/2026. Check out the great program and over 12 slots for (not so) short talks from submitted abstracts! Early registration now open - molgen.mpg.de/embryo2026
0 replies · 10 reposts · 41 likes · 3.1K views
Martin Proks retweeted
kuba sędziński @ksedzinski
🧵1/14 Preprint thread! Can we predict a cell’s fate based on its dynamics? 🔮 Our new study unveils a framework for watching development unfold in real-time, revealing how a cell's shape and movement encode info about its future fate. 🔬📄 Preprint: tinyurl.com/4shf8v4x
4 replies · 21 reposts · 84 likes · 6.1K views
Martin Proks retweeted
Mikaela Koutrouli @MKoutrouli
We are hiring! Together with Ana Carolina de Sousa Leote, I am looking for two Computational Biology interns to join our group at Genentech in South San Francisco. Deadline June 26. Apply here: roche.wd3.myworkdayjobs.com/ROG-A2O-GENE/j…
1 reply · 4 reposts · 10 likes · 804 views
Martin Proks retweeted
Marta Perera @MartaPrera
Have you ever wondered how the same pathway can elicit different responses in distinct contexts? Check out the second part of my PhD work, now available in @Dev_journal. Read the full story here: doi.org/10.1242/dev.20… Or read the tweetorial below (1/7)
3 replies · 7 reposts · 23 likes · 3K views
Martin Proks @mproksik
I am at #BiotechX in Basel! Let’s chat about science and collaborations. Poster number 42.
0 replies · 1 repost · 5 likes · 321 views
Martin Proks retweeted
Rohan Paul @rohanpaul_ai
MASSIVE idea proposed in this paper. Kolmogorov-Arnold Networks (KANs) as promising alternatives to Multi-Layer Perceptrons (MLPs) for approximating nonlinear functions 🤯

📌 Unlike MLPs, which have fixed activation functions on nodes, KANs have learnable activation functions parametrized as splines on edges. This allows KANs to achieve higher accuracy and parameter efficiency compared to MLPs.

📌 The Kolmogorov-Arnold representation theorem states that any continuous function of n variables can be represented as a composition of 2n+1 univariate functions. The paper generalizes this to KANs of arbitrary widths and depths. A KAN layer with n_in inputs and n_out outputs is defined as a matrix of learnable 1D spline functions. Deep KANs are constructed by stacking multiple such layers.

📌 The key implementation tricks for optimizing KANs include: 1) using residual activation functions that are a sum of a basis function (e.g. SiLU) and a learnable spline; 2) careful initialization scales for the splines and weights; 3) dynamically updating the spline grids based on the input activations during training, to handle unbounded activation ranges.

📌 Theoretically, the paper proves an approximation bound for KANs showing that the approximation error in C^m norm scales as G^(-(k+1-m)), where G is the spline grid size and k is the spline order. Notably, this bound is independent of the input dimension, avoiding the curse of dimensionality that affects MLPs. Empirically, KANs are shown to achieve the theoretically optimal scaling exponent of k+1=4 (for cubic splines).

📌 On various experiments including regression, PDE solving, and continual learning, KANs demonstrate superior accuracy and parameter efficiency compared to MLPs. For example, on a PDE task, a 2-layer width-10 KAN achieves 100x lower MSE than a 4-layer width-100 MLP, with 100x fewer parameters. KANs also exhibit better continual learning without catastrophic forgetting by leveraging the locality of the spline bases.

📌 The interpretability of KANs is highlighted through techniques like sparsification, pruning, and symbolic simplification of the learned splines. On real-world applications in knot theory and condensed matter physics, KANs are able to uncover known relations and phase transitions in a transparent manner, with the potential for scientific discovery through human-AI collaboration using the language of KANs.

Overall, KANs provide a powerful and interpretable alternative to MLPs by leveraging the Kolmogorov-Arnold theorem and spline approximations, demonstrating state-of-the-art performance on a range of tasks along with favorable scaling behavior and unique capabilities for interactivity and knowledge discovery.
12 replies · 123 reposts · 633 likes · 137.7K views
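To make the edges-as-learnable-activations idea concrete, here is a minimal sketch of one KAN layer, assuming PyTorch and simplifying the paper's B-splines to a learnable piecewise-linear function on a fixed grid; the class name KANLayerSketch and all hyperparameter values are illustrative, not from the paper.

```python
# Minimal KAN-layer sketch (not the paper's implementation): each edge applies
# phi(x) = w_b * silu(x) + w_s * spline(x), with the spline simplified to a
# learnable piecewise-linear function on a fixed grid.
import torch
import torch.nn as nn
import torch.nn.functional as F

class KANLayerSketch(nn.Module):
    def __init__(self, n_in, n_out, grid_size=8, grid_range=(-2.0, 2.0)):
        super().__init__()
        # Fixed uniform grid shared by all edges (the paper instead updates
        # grids dynamically from the input activations during training).
        grid = torch.linspace(grid_range[0], grid_range[1], grid_size)
        self.register_buffer("grid", grid)
        # One learnable coefficient per (edge, grid node): the value of the
        # piecewise-linear "spline" at that node.
        self.coef = nn.Parameter(torch.randn(n_out, n_in, grid_size) * 0.1)
        # Learnable mixing weights for the SiLU basis and spline parts
        # (the "residual activation function" trick from the tweet).
        self.w_b = nn.Parameter(torch.ones(n_out, n_in))
        self.w_s = nn.Parameter(torch.ones(n_out, n_in))

    def forward(self, x):
        # x: (batch, n_in)
        # Hat-function basis on the grid -> (batch, n_in, grid_size);
        # each row linearly interpolates between neighboring grid nodes.
        dist = (x.unsqueeze(-1) - self.grid) / (self.grid[1] - self.grid[0])
        basis = F.relu(1.0 - dist.abs())
        # spline(x) per edge -> (batch, n_out, n_in)
        spline = torch.einsum("big,oig->boi", basis, self.coef)
        edge = self.w_b * F.silu(x).unsqueeze(1) + self.w_s * spline
        # A node simply sums its incoming edge activations -> (batch, n_out)
        return edge.sum(dim=-1)
```

Stacking such layers, e.g. nn.Sequential(KANLayerSketch(2, 5), KANLayerSketch(5, 1)), gives a deep KAN in the sense described above: all nonlinearity lives on the edges, and nodes only sum.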
Martin Proks retweeted
dr. jack morris @jxmnop
one of the most important things I know about deep learning I learned from this paper: "Pretraining Without Attention"

this is what I found so surprising: these people developed an architecture very different from Transformers called BiGS, spent months and months optimizing it and training different configurations, only to discover that at the same parameter count, a wildly different architecture produces identical performance to transformers

this may imply that as long as there are enough parameters, and things are reasonably well-conditioned (i.e. a decent number of nonlinearities and connections between the pieces), then it really doesn't matter how you arrange them, i.e. any sufficiently good architecture works just fine

i feel there's something really deep here, and we may already be very close to the upper bound of how well we can approximate a given function with a certain amount of compute. so we should spend more time thinking about other questions, such as what that function should actually look like (what data? which objective function?) and how to make it more efficient
93 replies · 409 reposts · 3.1K likes · 489.2K views
Martin Proks retweeted
Jacob Schreiber @jmschreiber91
I regularly hear people in ML+genomics complain that they're running out of memory or disk space. Frequently, the culprit is inefficient handling of RNA/DNA sequences, and you can make big gains in compression with a few tricks. 1/
4 replies · 36 reposts · 202 likes · 42.7K views
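The thread's specific tricks aren't reproduced here, but one standard gain of this kind is 2-bit packing: DNA over the four-letter alphabet A/C/G/T needs only 2 bits per base instead of the 8 bits of one ASCII character, a 4x saving before any general-purpose compressor even runs. A minimal NumPy sketch (pack/unpack are illustrative names; real pipelines must also handle N and other ambiguity codes):

```python
# Sketch: pack A/C/G/T into 2 bits per base, 4 bases per byte.
import numpy as np

_CODE = {"A": 0, "C": 1, "G": 2, "T": 3}
_BASE = np.array(list("ACGT"))

def pack(seq: str) -> np.ndarray:
    """Encode a DNA string as a uint8 array, 4 bases per byte."""
    codes = np.array([_CODE[b] for b in seq], dtype=np.uint8)
    pad = (-len(codes)) % 4
    codes = np.pad(codes, (0, pad))                  # zero-pad ('A') to a multiple of 4
    quads = codes.reshape(-1, 4)
    shifts = np.array([6, 4, 2, 0], dtype=np.uint8)  # 2 bits per base within each byte
    return (quads << shifts).sum(axis=1).astype(np.uint8)

def unpack(packed: np.ndarray, length: int) -> str:
    """Decode back to a string of the original length (trims padding)."""
    shifts = np.array([6, 4, 2, 0], dtype=np.uint8)
    codes = ((packed[:, None] >> shifts) & 0b11).reshape(-1)[:length]
    return "".join(_BASE[codes])

seq = "ACGTACGTAC"
assert unpack(pack(seq), len(seq)) == seq  # round-trips losslessly
```

Because the packed bytes are far less redundant per byte than ASCII, this composes well with downstream gzip/zstd rather than replacing them.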