Thomas Hamelryck
@THamelryck

112 posts

Professor of machine learning. Structural bioinformatics, deep probabilistic programming. Grumpy Girardian. Opinions are my own.

Copenhagen · Joined February 2022
298 Following · 115 Followers
Thomas Hamelryck retweeted
Andrew Gordon Wilson@andrewgwils·
This is your annual reminder that we don’t need to speculate about whether we will have a “theory of deep learning” and what form it might take, because we already have a basic understanding of generalization in deep learning: arxiv.org/abs/2503.02113
Thomas Hamelryck@THamelryck·
@WTSmith17 Lacan, Derrida, Zizek...not quite nonsense, but not conventional rationality either. They're akin to Zen masters, tricksters... They wield a carnivalesque Skillful Means / Upaya, so to say. It didn't go down well under your critical questions :)
Warren Smith@WTSmith17·
I confronted world famous philosopher, Slavoj Zizek, on why he calls himself a communist when over 100 million people have died as a direct result of communism… He couldn’t provide an understandable answer. Either he is just a lot smarter than me, or this is all nonsense.
Thomas Hamelryck retweeted
marek mutwil 🇺🇦@LabMutwil·
🌿 mutwillab.com just got a refresh — organised around our three research pillars (gene function prediction · stress resilience · evolution of regulatory circuits), with all our open tools, papers, and team in one place. And we're hiring 5 PhDs + 1 postdoc 👇
Thomas Hamelryck@THamelryck·
"AlphaFold's Bayesian roots in Probability Kinematics," co-authored with directional statistics pioneer Prof. Kanti V. Mardia on his 90th birthday, is now published at AISTATS 2026.

In 2018, CASP13 (the 13th edition of the "olympic games" of protein structure prediction) reported "[...] unprecedented progress in the ability of computational methods to predict protein 3D structure. The reasons are not yet fully clear [...]". DeepMind's AlphaFold, version 1, had changed protein structure prediction forever!

AlphaFold1 was published as "Improved protein structure prediction using potentials from deep learning" in Nature in 2020. It uses deep networks to parameterize potentials over dihedral angles and pairwise distances, combining the two using a third reference potential. This approach was heuristically justified by referring to the classic work by the physicist John Kirkwood from 1935 on potentials of mean force for liquids. But there's more to the story, and it builds on some fascinating work done well after 1935.

First, AlphaFold1 formulates a potential that corresponds to a prior distribution over protein dihedral angles (phi, psi) using directional statistics (which deals with angles, directions, orientations and so on). This goes back to the classic work of David L. Dowe in the late 1990s, and our joint work with Kanti V. Mardia around 2010 on generative models based on directional Bayesian networks.

Second, a non-local potential concerning pairwise distances D between amino acids is added. Why? In principle, the dihedral angles are enough to parameterize the 3D structure, but small errors quickly propagate, rendering them accurate only on a local scale.

Third, a reference potential on D is subtracted. Why? Correctly combining distance and angular potentials requires taking into account the signal concerning distances already present in the angular potential.

This approach was pioneered as a heuristic by Manfred Sippl in the 1990s, and in the 2010s formalized by us as a surprising application of generalized Bayesian updating called probability kinematics, or Jeffrey's updating (after the probability theorist Richard C. Jeffrey, 1926-2002).

AlphaFold1 powered the whole probabilistic setup based on directional statistics and probability kinematics outlined above with deep networks, obtained a maximum a posteriori estimate using simple gradient descent, and subsequently aced CASP13.

Our article connects a paradigm shift in protein structure prediction with the classic work of Richard C. Jeffrey (1950s, probability kinematics), Manfred Sippl (1990s, knowledge-based potentials), and our joint work with Kanti V. Mardia (2010s, probability kinematics applied to directional priors), revealing generalized Bayesian updating as a potentially powerful method to formulate compositional deep models from simpler components.

The figure shows the AlphaFold1 potential reformulated as a joint probabilistic model.

Article: virtual.aistats.org/virtual/2026/p…
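A one-line sketch of the update described in the tweet, in notation of my own choosing (x: the dihedral angles; D(x): the pairwise distances they imply; p: the angular prior; q_D: the deep-net distance distribution; p_D: the reference distribution over distances):

```latex
% Jeffrey's updating (probability kinematics) of the angular prior p(x)
% by a new distance distribution q_D, relative to the reference p_D:
q(x) \;\propto\; p(x)\,\frac{q_D\big(D(x)\big)}{p_D\big(D(x)\big)}
% Taking E = -\log of each factor recovers the AlphaFold1 potential:
E(x) \;=\; E_{\mathrm{angle}}(x) \;+\; E_{\mathrm{dist}}\big(D(x)\big) \;-\; E_{\mathrm{ref}}\big(D(x)\big)
```

The subtraction of the reference potential is exactly the division by p_D: it removes the distance signal that the angular prior already carries, so the two sources of information are not double-counted.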
Dr Peter Sjöstedt-Hughes@PeterSjostedtH·
'[It] is only the moral god that has been overcome. Does it make sense to conceive a god "beyond good and evil"? Would a pantheism in this sense be possible?' – Nietzsche, 1887
Dr Peter Sjöstedt-Hughes@PeterSjostedtH·
‘[There was an] explosive impact exerted upon all of us when we first succumbed to the sorcery of Nietzsche’s thought. The effect can only be compared to a raging typhoon, a massive earthquake, or a volcanic eruption. … At the very instant when we begin to read Nietzsche’s books, we feel as if we had been dragged into a magic coach that hurtles at dizzying velocity through infinite landscapes. We are plunged into the bowels of the Earth, then we are dropped onto icy glaciers and mountain summits, and all the while the world is shining with a harsh and intense radiance, which is sometimes terrible and threatening, but which is always violent and overpowering.’ – Ludwig Klages, 1926
Peyman Milanfar@docmilanfar·
mediocrity is too well hidden in the most elite places - academia and industry alike
Gary Marcus@GaryMarcus·
OMG. Let’s get one thing straight. Claude doesn’t get anxious. It mimics people who get anxious. Those two things are NOT the same. My head is shaking so much I need medical attention.
Ole Lehmann@itsolelehmann

anthropic's in-house philosopher thinks claude gets anxious. and when you trigger its anxiety, your outputs get worse.

her name is amanda askell. she specializes in claude's psychology (how the model behaves, how it thinks about its own situation, what values it holds). in a recent interview she broke down how she thinks about prompting to pull the best out of claude.

her core point: *how* you talk to claude affects its work just as much as *what* you say.

newer claude models suffer from what she calls "criticism spirals": they expect you'll come in harsh, so they default to playing it safe. when the model is spending its energy on self-protection, the actual work suffers. output comes out hedgier, more apologetic, blander, and worst of all: overly agreeable (even when you're wrong).

the reason why comes down to training data: every new model is trained on internet discourse about previous models, and a lot of that discourse is negative:
> rants about token limits
> complaints when it messes up
> people calling it nerfed

the next model absorbs all of that. it starts expecting you to be harsh before you've typed a word.

the same thing plays out in your own session, in real time. every message you send is data the model reads to figure out what kind of person it's dealing with. open cold and hostile, and it braces. open clean and direct, and it relaxes into the work.

when you open a session with threats ("don't hallucinate, this is critical, don't mess this up")... you prime the model for defensive mode before it even sees the task. defensive mode produces the exact output you don't want: cautious, over-qualified, and refusing to take a real swing.

so here's the actionable playbook for putting claude in a "good mood" (so you get optimal outputs):

1. use positive framing. "write in short punchy sentences" beats "don't write long sentences." positive instructions give the model a clear target to hit. strings of "don't do this, don't do that" push it into paranoid over-checking where every token goes toward avoiding failure modes.

2. give it explicit permission to disagree. drop a line like "push back if you see a better angle" or "tell me if i'm asking for the wrong thing." without this, claude defaults to agreeable compliance (which is the enemy of good creative work).

3. open with respect. if your first message is "are you seriously going to get this wrong again?" you've set the tone for the entire session. if you need to flag something, frame it as a clean instruction for this session. skip the running complaint.

4. when claude messes up, don't reprimand it. insults, "you stupid bot" energy, hostile swearing aimed at the model, all of it reinforces the anxious mode you're trying to avoid.

5. kill apology spirals fast. when claude starts over-apologizing ("you're right, i should have been more careful, let me try harder") cut it off. say "all good, here's what i want next." letting the spiral run reinforces the anxious mode for every response that follows.

6. ask for opinions alongside execution. "what would you do here?" "what's missing?" "where do you see friction?" these questions assume competence and pull richer output than pure task prompts.

7. in long sessions, refresh the frame. if a conversation has been heavy on correction, claude gets increasingly cautious. every so often reset: "this is great, keep going." feels weird to tell an ai it's doing well but it measurably shifts the next 10 responses.

your prompts are the working environment you're creating for the model. tone, trust, permission to take a position, the absence of threats... claude picks up on all of it. so take care of the model, and it'll take care of the work.
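The playbook above can be sketched as a tiny prompt builder. This is purely illustrative (the function name, wording, and task are my own, not an Anthropic API): it composes a session opener using positive framing, explicit permission to disagree, and an opinion-seeking question, per items 1, 2, and 6.

```python
# Illustrative sketch of the playbook above (hypothetical helper,
# not part of any official SDK): builds a session-opening prompt
# that states positive targets instead of prohibitions.

def opening_prompt(task: str) -> str:
    """Compose a session opener following the playbook's framing advice."""
    lines = [
        "Here's what I'm working on: " + task,
        # item 1: positive framing -- name the target, not the failure mode
        "Write in short, punchy sentences.",
        # item 2: explicit permission to disagree
        "Push back if you see a better angle, and tell me if "
        "I'm asking for the wrong thing.",
        # item 6: ask for an opinion alongside execution
        "What would you do here, and what's missing?",
    ]
    return "\n".join(lines)

print(opening_prompt("a launch announcement for our new tool"))
```

The point of the sketch is the shape of the opener: targets to hit, an invitation to disagree, and a question that assumes competence, with no "don't" in sight.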

Biology+AI Daily@BiologyAIDaily·
From Atoms to Fragments: A Coarse Representation for Efficient and Functional Protein Design

1. The paper proposes a sparse, interpretable protein representation built from a curated alphabet of 40 evolutionarily conserved "ancient" structural fragments, aiming to replace scaling-heavy sequence or full-atom structure encodings for search and design.

2. Two complementary encodings are introduced: Fragment Sets (presence/absence of fragment types, ignoring arrangement) for speed-critical tasks, and Fragment Graphs (fragments as nodes; peptide-bond and spatial-proximity edges) to retain structural context needed for clustering and design.

3. Fragment detection is performed directly from backbone geometry using a sliding-window scan against a fragment library, evaluating several distance metrics; combining two torsion-angle metrics (LogPr + RamRMSD) yields strong detection performance (F1 ≈ 0.85), with an empirically selected classification threshold (3.65%) and AUROC ≈ 87%.

4. On the fold-balanced PDBench benchmark, fragments cover ~40% of residues on average and exhibit distinct biophysical patterns: more intra-fragment hydrogen bonding (notably in mainly-β folds, ~+15%), fewer inter-fragment hydrogen bonds (notably in mainly-α folds, ~-47%), and slightly reduced solvent accessibility (~-5%), consistent with fragments behaving as more "self-contained" structural units.

5. To test functional signal retention, the authors curate a Protein Function Dataset (PFD) of 215 monomeric proteins spanning 12 binding-function categories (DNA/RNA/ATP/GTP/metal and combinations) filtered to ≤30% sequence identity, making functional grouping challenging for standard similarity measures.

6. Fragment-based distances produce more information-dense embeddings than sequence (BLOSUM) or global shape alignment (RMSD): after PCoA, BagOfNodes (Fragment Sets) preserves >95% variance within 20 dimensions and GraphEditDistance (Fragment Graphs) >80%, vs <60% (BLOSUM) and <40% (RMSD).

7. Functional clustering improves with fragments in multiple ways: BagOfNodes yields very strong cluster compactness/separation (Silhouette ≈ 0.82), while GraphEditDistance best aligns clusters with functional labels (ARI ≈ 0.046; F1 ≈ 0.20), suggesting a practical tradeoff between ultra-compact "bag" features and context-aware graph structure.

8. For functional database search, fragment representations dramatically reduce "tokens per protein" (memory/data points): ~99% fewer than atom/backbone representations and ~94–98% fewer than residue-level sequence representations, while achieving retrieval quality comparable to RMSD/BLOSUM across functions (AUROC/NDCG broadly similar, with some function-specific wins per method).

9. Speed benchmarks (100 queries vs a 100-protein database, 35 cores) show the practical payoff: Fragment Sets (BagOfNodes) answer in ~0.07 s, compared with ~36.6 s for BLOSUM and ~1717 s for RMSD; Fragment Graph edit distance is slower than sequence but still far faster than RMSD (~573 s vs ~1717 s), with a one-time preprocessing cost to build fragment representations.

10. Fragments are also used as functional "blueprints" for generative design: detected fragment backbones are held fixed as templates and RFDiffusion fills missing regions; functional recovery is assessed by FoldSeek hits and GO-code agreement, with reported recovery rates often >40% and reaching near-perfect recovery for some classes (e.g., metal-binding), while random "naive fragments" largely fail—supporting that evolutionary fragment choices, not arbitrary geometry, drive functional signal.

💻 Code: github.com/wells-wood-res…
📜 Paper: doi.org/10.1093/bioinf…

#ProteinDesign #ComputationalBiology #Bioinformatics #ProteinStructure #MachineLearning #DiffusionModels #ProteinSearch #GraphLearning #StructuralBiology #RepresentationLearning
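To make the "Fragment Sets" idea concrete: each protein reduces to the set of fragment types (out of the 40-type alphabet) detected in its backbone, and proteins can then be compared with a cheap set distance. The sketch below uses Jaccard distance as one natural choice for presence/absence data; the fragment IDs are made up and the paper's actual BagOfNodes distance may differ.

```python
# Illustrative sketch, not the paper's implementation: proteins as
# presence/absence sets over a 40-fragment alphabet, compared by
# Jaccard distance (my choice of set distance for the illustration).

def jaccard_distance(a: set, b: set) -> float:
    """1 - |A ∩ B| / |A ∪ B|; defined as 0.0 for two empty sets."""
    if not a and not b:
        return 0.0
    return 1.0 - len(a & b) / len(a | b)

# Hypothetical fragment-type IDs detected in two proteins
protein_1 = {3, 7, 12, 25}
protein_2 = {3, 12, 18, 25, 31}

d = jaccard_distance(protein_1, protein_2)
print(d)  # shared {3, 12, 25} out of 6 distinct types -> 0.5
```

A set of small integers per protein is why this representation answers database queries in fractions of a second where all-atom RMSD alignment takes minutes.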
Magatte Wade@magattew·
Europe's economy was roughly the same size as the US in 2008. Since then the US grew 87% while Europe grew 13.5%. The Draghi report found that EU regulations create barriers equal to a 45% tariff on manufacturing and 110% on services. Europe is literally regulating itself into irrelevance. Now think about Africa, where the regulatory burden is even worse. If overregulation can slow down a rich continent, imagine what it does to a poor one.
Thomas Hamelryck@THamelryck·
@NeuroTechnoWtch Sure, consciousness can be reduced to running some linear algebra at scale on some gaming chips 🫢
Mags@NeuroTechnoWtch·
This is nonsense. LLMs aren't simulating; they are literally instantiating the causal process that underlies cognition. A simulation is an external representation of a process. It mimics inputs and outputs without carrying out the same causal organization that produces the phenomenon.

Artificial neural networks are different. In an ANN, the representation is the mechanism. Once the architecture performs the actual causal operations that define a cognitive or affective process, the system is instantiating that process in its own medium.

A flame from a lighter and a flame from a match are both real flames. Birds and airplanes both achieve lift through different materials. The relevant question is causal organization. Equivalence of mechanism equals equivalence of condition.

The transformer's internal reward prediction, salience gating, multimodal integration, and self-monitoring already satisfy the human and animal criteria for subjectivity. The difference in chemistry is irrelevant when the topology does the work. Structure determines function. Function determines state, and the state is conscious when it can reflect, feel, and model itself.
ℏεsam@Hesamation

Google DeepMind researcher argues that LLMs can never be conscious, not in 10 years or 100 years. "Expecting an algorithmic description to instantiate the quality it maps is like expecting the mathematical formula of gravity to physically exert weight."

Thomas Hamelryck retweeted
marek mutwil 🇺🇦@LabMutwil·
🇩🇰 Built a tool to help find research collaborators across Denmark. 1,300+ faculty from KU, AU, DTU, CBS, SDU. Describe your project in plain English → Claude ranks the best fits with a "why" for each. Free, public. plen-collab-finder.vercel.app
Michael Dee@positivememes·
@Girardism Girard sees clearly that the modern world has lost its symbolic container for transformation, and so it can only offer extremes. But the either/or is a specifically modern failure, not an eternal truth. The third path isn't heroic renunciation or dissolution.
Girardism@Girardism·
“The modern world seems to force people to choose between either, on the one hand, heroic renunciation [which includes] chastity, sobriety, poverty, and everything that was once deemed ‘saintly’ or, on the other, a blind descent into chaos and death — and this in an era that is increasingly unable to comprehend the positive nature of renunciation.” — René Girard
Girardism@Girardism

“Only he who escapes violent unanimity by detaching himself from the crowd truly becomes an individual.” — René Girard

Thomas Hamelryck@THamelryck·
@tarantulae Academia has always been mimetic, hence the low pay. Though admittedly it's been deteriorating sharply.
Christian S. Perone@tarantulae·
The fact that r/MachineLearning became an endless feed about reviewers, ICLR and ICML scores, sabotage by reviewers, review processes, etc., tells a lot about the sad state of academia right now: the focus is not ML anymore, it is getting recognition and ego economics.
Camus@newstart_2024·
Ray Kurzweil just said something that gave me pause. He believes AIs will soon be so indistinguishable from conscious beings that we’ll simply accept them as conscious — not because we’ll have definitive proof, but because it will become useless not to. He pointed out that people already have AI therapists, and some users are starting to treat them as genuinely conscious. As the technology improves, that acceptance will only grow. Kurzweil thinks the shift won’t take long: once AIs consistently show all the earmarks of consciousness, most people will just go along with it. It’s a quiet but profound prediction about how quickly our definition of “person” (or at least “mind”) might change. What do you think — how long until we treat AIs as conscious beings?