Dan Ofer (Was @ICML,@Worldcon )

4.7K posts

@danofer

#Data scientist, #Researcher, Bioinformatician, Photographer, Geek & Bookworm. PhD #AI #LLM @HebrewU @HyadataLab @liniallab @shebaARC

Israel · Joined May 2008
1.1K Following · 1.1K Followers
Pinned Tweet
Dan Ofer (Was @ICML,@Worldcon )
1/ Our paper “Protein Language Models Expose Viral Immune Mimicry” is now published in Viruses! We show that protein language models can identify viral proteins, including those that fool our immune system.
Raj Dabre@prajdabre·
ML interview question: Suppose you are implementing knowledge distillation, and you have a teacher and a student model. However, you simply do not have the GPU resources to fit both the teacher and the student into memory at the same time. What is your solution?
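One common answer to the question above is to never have both models resident at once: run the teacher alone in a first pass, cache its logits to disk, then train the student against the cached logits. A minimal plain-Python sketch of that two-phase setup (the temperature value and toy logits are illustrative, not from the tweet):

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax over a list of logits.
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kd_loss(student_logits, cached_teacher_logits, T=2.0):
    # KL(teacher || student) between temperature-softened distributions,
    # scaled by T^2 as is conventional in distillation.
    p = softmax(cached_teacher_logits, T)
    q = softmax(student_logits, T)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q)) * T * T

# Phase 1 (teacher-only pass): run the teacher once over the dataset and
# cache its logits; the teacher is then freed before the student loads.
teacher_logit_cache = {0: [2.0, 0.5, -1.0]}  # example_id -> logits

# Phase 2 (student-only training): compute the distillation loss against
# the cached logits instead of a live teacher forward pass.
loss = kd_loss([1.5, 0.7, -0.8], teacher_logit_cache[0])
```

The trade-off is storage and staleness: cached logits cost disk space (often only top-k logits are kept) and cannot react to the student, unlike online distillation.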
Lucas Maes@lucasmaes_·
JEPAs are finally easy to train end-to-end without any tricks! Excited to introduce LeWorldModel: a stable, end-to-end JEPA that learns world models directly from pixels, no heuristics. 15M params, 1 GPU, and full planning in under 1 second. 📑: le-wm.github.io
alphaXiv@askalphaxiv·
Yann LeCun and his team dropped yet another paper! "V-JEPA 2.1: Unlocking Dense Features in Video Self-Supervised Learning" In this V-JEPA upgrade, they showed that if you make a video model predict every patch, not just the masked ones, AND do so at multiple layers, you turn vague scene understanding into dense, temporally stable features that actually understand "what is where". This key insight drove improvements in segmentation, depth, anticipation, and even robot planning.
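The core idea in that tweet can be sketched schematically: average a prediction error over every patch at several intermediate layers, rather than only over masked patches at the final layer. This toy sketch is not the paper's actual objective (which operates on learned features), just an illustration of "dense, multi-layer" supervision with made-up numbers:

```python
def dense_loss(pred_by_layer, target_by_layer):
    # Mean squared error over ALL patches, summed across several layers.
    total, count = 0.0, 0
    for preds, targets in zip(pred_by_layer, target_by_layer):  # layers
        for p, t in zip(preds, targets):                        # patches
            total += sum((pi - ti) ** 2 for pi, ti in zip(p, t))
            count += len(p)
    return total / count

# Two layers, two patches per layer, 2-d feature vectors (toy values).
pred = [[[0.0, 1.0], [1.0, 0.0]], [[0.5, 0.5], [0.0, 0.0]]]
tgt  = [[[0.0, 1.0], [1.0, 0.0]], [[0.5, 0.5], [1.0, 0.0]]]
loss = dense_loss(pred, tgt)
```

Because every patch at every supervised layer contributes, the model cannot ignore unmasked regions, which is what pushes features toward the dense "what is where" behavior the tweet describes.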
Pranam Chatterjee@pranamanam·
1. Let's get this out of the way: there's no experimental validation. Those "sub-nM" claims are irrelevant. They "generate" sequences, fold with Boltz-2, and score with their own model. Not a single peptide was actually tested. 🙅‍♂️
Pranam Chatterjee@pranamanam·
I usually don't like to criticize papers on social media, but this one deserves it. Not familiar with @Ligandal, but so many problems: AI-hallucinated citations, figures, no real validation, not "structure-free", and definitely not diffusion. I'll go thru my criticisms below. 👇
Andre Watson 🧬@nanogenomic

Extremely excited to announce LigandForge 🧬⚡ Generate high-quality peptides at 10,000x to 1,000,000x the speed of state-of-the-art methods like Bindcraft and Boltzgen. Predict binding affinity with 83% correlation to experimental binding data. 150 protein targets benchmarked.

Yam Peleg@Yampeleg·
The model was evaluated by how much it contributed to building the next version of itself. This is a crazy post. They basically did auto-research IRL: maximizing how much of the RL team's work is delegated to the model during its development. (Answer: 30-50%, btw.) Everything researchers do:
- Literature review
- Experiment design
- Data pipelines
- Monitoring
- Debugging
- Code fixes
- Pull requests
...actively trying to delegate more and more between the research iterations. Crazy crazy crazy times
MiniMax (official)@MiniMax_AI

x.com/i/article/2034…

Inquisitive Bird@Scientific_Bird·
@Aella_Girl I don't know how large a role this has played, but it is relevant and fun. In "Guns, Germs, and Steel," Jared Diamond tells an anecdote of a man contracting a disease from having sex with a sheep.
Aella@Aella_Girl·
Has bestiality always been historically so taboo? Why? It doesn't result in pregnancy; did it result in diseases? Why did humans evolve to be so repulsed by touching genitals with other species?
Yam Peleg@Yampeleg·
MiniMax-M2.7 woww
Andrej Karpathy@karpathy·
Thank you Jensen and NVIDIA! She’s a real beauty! I was told I’d be getting a secret gift, with a hint that it requires 20 amps. (So I knew it had to be good). She’ll make for a beautiful, spacious home for my Dobby the House Elf claw, among lots of other tinkering, thank you!!
NVIDIA AI Developer@NVIDIAAIDev

🙌 Andrej Karpathy’s lab has received the first DGX Station GB300 -- a Dell Pro Max with GB300. 💚 We can't wait to see what you’ll create @karpathy! 🔗 blogs.nvidia.com/blog/gtc-2026-… @DellTech

Stella Biderman@BlancheMinerva·
If I were going to claim that a finetuning methodology for machine unlearning “really worked,” what evidence would you like to see?
Dan Ofer (Was @ICML,@Worldcon )
I too have joined the autoresearch + {trendy DL} crowd. Just hoping I can get a paper out of it sometime :)
Charly Wargnier@DataChaz·
THIS is the wildest open-source project I’ve seen this month. We were all hyped about @karpathy's autoresearch project automating the experiment loop a few weeks ago. (ICYMI → github.com/karpathy/autor…) But a bunch of folks just took it ten steps further and automated the entire scientific method end-to-end. It's called AutoResearchClaw, and it's fully open-source. You pass it a single CLI command with a raw idea, and it completely takes over 🤯 The 23-stage loop they designed is insane:
✦ First, it handles the literature review.
- It searches arXiv and Semantic Scholar for real papers
- Cross-references them against DataCite and CrossRef.
- No fake papers make it through.
✦ Second, it runs the sandbox.
- It generates the code from scratch.
- If the code breaks, it self-heals.
- You don't have to step in.
✦ Finally, it writes the paper.
- It structures 5,000+ words into Introduction, Related Work, Method, and Experiments.
- Formats the math, generates the comparison charts,
- Then wraps the whole thing in official ICML or ICLR LaTeX templates.
You can set it to pause for human approval, or you can just pass the --auto-approve flag and walk away. What it spits out at the end:
→ Full academic paper draft
→ Conference-grade .tex files
→ Verified, hallucination-free citations
→ All experiment scripts and sandbox results
This is what autonomous AI agents actually look like in 2026. Free and open-source. Link to repo in 🧵 ↓
Frank Gao@ChemVagabond·
We @_DimensionCap ported @karpathy's autoresearch framework to biology. We let Claude run 50 experiments over the weekend on protein thermostability prediction via @modal. It beat a recent baseline (TemBERTure) using a 20x smaller model. Code + research blog later this week!
Dan Ofer (Was @ICML,@Worldcon )
@karpathy Ugh, making it plan experiments to run as a script works surprisingly badly (too many crashes, not enough updates, then subagents start "cheating" re: the time budget)
Andrej Karpathy@karpathy·
I packaged up the "autoresearch" project into a new self-contained minimal repo if people would like to play over the weekend. It's basically the nanochat LLM training core stripped down to a single-GPU, one-file version of ~630 lines of code, then:
- the human iterates on the prompt (.md)
- the AI agent iterates on the training code (.py)
The goal is to engineer your agents to make the fastest research progress indefinitely and without any of your own involvement. In the image, every dot is a complete LLM training run that lasts exactly 5 minutes. The agent works in an autonomous loop on a git feature branch and accumulates git commits to the training script as it finds better settings (of lower validation loss by the end) of the neural network architecture, the optimizer, all the hyperparameters, etc. You can imagine comparing the research progress of different prompts, different agents, etc. github.com/karpathy/autor… Part code, part sci-fi, and a pinch of psychosis :)
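The loop described above can be sketched as a hill-climbing skeleton. Everything here is a hypothetical stand-in, not the repo's actual code: `train_run` replaces launching the real 5-minute training script, `agent_propose` replaces the LLM agent editing the .py file, and "committing" an improvement replaces the git commit on the feature branch:

```python
import random

def train_run(cfg):
    # Stand-in for one 5-minute training run; returns a toy "validation
    # loss" as a function of the settings (purely illustrative).
    return (cfg["lr"] - 0.01) ** 2 + 0.1 / cfg["width"] + 2.0

def agent_propose(cfg):
    # Stand-in for the agent rewriting the training script: perturb the
    # current settings (architecture width, learning rate, etc.).
    new = dict(cfg)
    new["lr"] = max(1e-4, cfg["lr"] * random.choice([0.5, 1.0, 2.0]))
    new["width"] = max(1, cfg["width"] + random.choice([-64, 0, 64]))
    return new

random.seed(0)
best_cfg = {"lr": 0.1, "width": 256}
best_loss = train_run(best_cfg)
history = [best_loss]
for step in range(20):              # each iteration ~ one dot in the plot
    cand = agent_propose(best_cfg)
    loss = train_run(cand)
    history.append(loss)
    if loss < best_loss:            # keep ("commit") only improvements
        best_cfg, best_loss = cand, loss
```

A real agent differs in the interesting way: instead of random perturbations it reads the loss history and code, and proposes targeted edits, which is exactly what the prompt (.md) is meant to steer.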
Arnav Gupta@championswimmer·
@tobi Why is every test failing?
tobi lutke@tobi·
OK, well. I ran /autoresearch on the liquid codebase. 53% faster combined parse+render time, 61% fewer object allocations. This is probably somewhat overfit, but there are absolutely amazing ideas in this.
CamrynsOp@CamrynsOp·
@Aviation_Intel CENTCOM has been planning for this precise scenario for decades… sorry, but I don’t believe that story one bit
Tyler Rogoway@Aviation_Intel·
The stories that the admin/Pentagon was blindsided by Iran closing the strait describe the most outrageous oversight I have ever seen in war planning, if true.