Julia Rozanova @ ICLR

858 posts

@juliaroz

Working on Payments Foundation Models at @Visa in Cambridge, UK! ~ PhD in Natural Language Processing, Uni of Manchester ~ ~ Catch me on the dance floor! ~

Cambridge, UK · Joined January 2017
1.8K Following · 727 Followers
Pinned Tweet
Julia Rozanova @ ICLR @juliaroz ·
This month I was formally awarded my PhD, and so I'm officially #PhDone! Finishing off with an internship at the Amazon Alexa AI research group in Cambridge, and then - I'm on the job market in the UK for 2024! :D Shout if you can recommend any cool orgs hiring NLP people!
12 replies · 4 reposts · 120 likes · 22.1K views
Julia Rozanova @ ICLR retweeted
elie @eliebakouch ·
Qwen's first release on interpretability (Qwen Scope) is very interesting. They use SAE features to identify what causes repetition in model outputs, then use steering to manufacture a "bad" rollout where the model repeats a lot. This gives RL a clear negative signal to learn from, since repetition barely shows up in normal rollouts, so the model never gets punished for it.

They also use SAE features as a fingerprint for benchmarks: you look at which features each benchmark activates and compare the overlap. This lets you find redundancy inside a benchmark and across benchmarks without running any model. For instance, 63% of GSM8K's features appear in MATH, but only 10% the other way.
14 replies · 117 reposts · 785 likes · 38.9K views
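The benchmark-fingerprint idea above boils down to comparing sets of active SAE feature IDs, and the overlap measure is asymmetric. A minimal sketch with toy feature IDs (my own illustration, not Qwen's actual features):

```python
def feature_overlap(a: set[int], b: set[int]) -> float:
    """Fraction of benchmark a's active SAE features that also fire on benchmark b."""
    if not a:
        return 0.0
    return len(a & b) / len(a)

# Toy feature-ID sets standing in for per-benchmark SAE fingerprints.
gsm8k_features = {0, 1, 2, 3, 4}
math_features = {0, 1, 2, 3} | set(range(100, 140))

forward = feature_overlap(gsm8k_features, math_features)   # 4/5 = 0.8
reverse = feature_overlap(math_features, gsm8k_features)   # 4/44 ~ 0.09
```

Because the denominator is the first set's size, a small benchmark can be almost entirely contained in a big one (like GSM8K in MATH) while the reverse overlap stays tiny.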
Julia Rozanova @ ICLR retweeted
wh @nrehiew_ ·
You should probably read this. The best available overview of the systems required to scale large MoEs. Notes below.
7 replies · 52 reposts · 475 likes · 31.1K views
Julia Rozanova @ ICLR retweeted
Neil Renic @NC_Renic ·
AI cheating has gotten so bad that I now feel genuine affection for horrifically bad essays clearly written by the student
324 replies · 5.8K reposts · 154.5K likes · 2.4M views
Julia Rozanova @ ICLR retweeted
Ronak Malde @rronak_ ·
My takeaways from ICLR 2026:
1. Recursive self-improvement / continual learning is the next frontier of research. Several great papers on self-distillation, auto agent harness optimization, learning from non-verifiable reward, and self-play are early signs of success.
2. Multimodal models and world models are attaining emergent reasoning capabilities, opening a new door to spatial understanding that was previously locked.
3. Lots of concern that the research community is currently too focused on benchmaxxing rather than improving the research process, and a call to action to address this, like Percy Liang's fully open-source training community.
4. Rio is possibly even better than San Diego 🇧🇷🏄
31 replies · 134 reposts · 1.5K likes · 86.4K views
Julia Rozanova @ ICLR retweeted
Youssef El Manssouri @yoemsri ·
@yoavgo The core lesson is that architecture sets capacity, but data sets behavior. If students understand that, they understand most of modern LLM progress.
1 reply · 7 reposts · 84 likes · 2.9K views
Julia Rozanova @ ICLR retweeted
Aksel @akseljoonas ·
For the last 72 hours since ml-intern launched, we have had over 500 autonomous AI research projects running on the Space at all times. Some insane ones I saw:
1. A new AI paradigm from scratch: trying to replace transformers with a reasoning architecture based on energy minimization, binary sparse address tables, and circular convolution binding. No GPU, no gradients, no training data, pure bitwise operations. Years of research done in 2 days. huggingface.co/Harry00/MLE-Mo…
2. Someone took LoopLM (ByteDance's recurrent-depth transformer with shared layers and infinite depth via looping) and crossed it with BitNet b1.58 (ternary 1.58-bit weights). The result: a model that's both infinitely deep AND uses almost no memory per parameter.
3. Designing a new attention mechanism modeled on the thalamo-cortical circuit in the human brain, pulling from 2025/2026 research out of MIT, Harvard, and UF. The thalamus gates what information reaches the cortex; they're building a learnable gate that mimics this for transformer attention heads, combined with EEG datasets and a reinforcement learning loop. huggingface.co/spaces/daniel8…

The use cases people bring are cooler and more impressive than anything we imagined when we built this.
Aksel@akseljoonas

Introducing ml-intern, the agent that just automated the post-training team @huggingface. It's an open-source implementation of the real research loop that our ML researchers do every day. You give it a prompt; it researches papers, goes through citations, implements ideas in GPU sandboxes, iterates, and builds deeply research-backed models for any use case. All built on the Hugging Face ecosystem.

It can pull off crazy things. We made it train the best model for scientific reasoning: it went through citations from the official benchmark paper, found OpenScience and NemoTron-CrossThink, added 7 difficulty-filtered dataset variants from ARC/SciQ/MMLU, and ran 12 SFT runs on Qwen3-1.7B. This pushed the score from 10% to 32% on GPQA in under 10h. Claude Code's best: 22.99%.

In healthcare settings it inspected available datasets, concluded they were too low quality, and wrote a script to generate 1100 synthetic data points from scratch for emergencies, hedging, multilingual etc., then upsampled 50x for training. Beat Codex on HealthBench by 60%.

For competitive mathematics, it wrote a full GRPO script, launched training with A100 GPUs on hf.co/spaces, watched rewards climb and then collapse, and ran ablations until it succeeded. All fully backed by papers, autonomously.

How does it work? ml-intern makes full use of the HF ecosystem:
- finds papers on arXiv and hf.co/papers, reads them fully, walks citation graphs, pulls datasets referenced in methodology sections and on hf.co/datasets
- browses the Hub, reads recent docs, inspects datasets and reformats them before training so it doesn't waste GPU hours on bad data
- launches training jobs on HF Jobs if no local GPUs are available, monitors runs, reads its own eval outputs, diagnoses failures, retrains

ml-intern deeply embodies how researchers work and think. It knows what data should look like and what good models feel like. Releasing it today as a CLI and a web app you can use from your phone/desktop.
CLI: github.com/huggingface/ml… Web + mobile: huggingface.co/spaces/smolage… And the best part? We also provisioned $1k of GPU resources and Anthropic credits for the quickest among you to use.

26 replies · 87 reposts · 762 likes · 99.7K views
Julia Rozanova @ ICLR @juliaroz ·
Made the social faux pas of bringing a vertical poster to a main track poster session. Humbled.
0 replies · 0 reposts · 14 likes · 1.1K views
wh @nrehiew_ ·
How I read papers now. This is an explainer by Claude of the new Compressed Sparse Attention that v4 uses to compress the KV cache.
wh@nrehiew_

Now reading:

6 replies · 69 reposts · 698 likes · 55.5K views
Julia Rozanova @ ICLR retweeted
Hima Lakkaraju
📣 Excited to announce our oral presentation at #ICLR! LLMs capture rich semantic structure, as evidenced by their strong performance across a wide range of language and reasoning tasks. But Sparse Autoencoders (SAEs), a popular interpretability tool, mostly learn local, noisy, token-level features when applied to LLMs (e.g., hundreds of features for the word “the”). So why aren’t SAEs finding that rich semantic structure? 👉 Because they ignore the sequential nature of language. We introduce Temporal SAEs to bridge this gap. arxiv.org/abs/2511.05541 🧵 [1/N]
5 replies · 26 reposts · 167 likes · 22.1K views
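For context on the baseline being criticized above: a vanilla SAE encodes each token's activation vector independently, with a reconstruction-plus-sparsity objective, so nothing in the training signal rewards features that span time. A minimal numpy sketch of that per-token objective (untrained random weights and toy dimensions, purely illustrative; the Temporal SAE variant is in the linked paper):

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_sae = 16, 64                      # toy sizes; real SAEs are much wider
W_enc = rng.normal(0, 0.1, (d_model, d_sae))
W_dec = rng.normal(0, 0.1, (d_sae, d_model))
b_enc = np.zeros(d_sae)

def sae_step(h):
    """Encode/decode a batch of token activations; each row is treated independently."""
    f = np.maximum(h @ W_enc + b_enc, 0.0)   # sparse non-negative features (ReLU)
    h_hat = f @ W_dec                        # linear reconstruction
    loss = ((h - h_hat) ** 2).mean() + 1e-3 * np.abs(f).mean()  # L2 recon + L1 sparsity
    return f, loss

h = rng.normal(size=(8, d_model))            # 8 tokens' residual-stream activations
f, loss = sae_step(h)                        # no term couples token t to token t+1
```

Reordering the tokens just reorders the features, which is exactly the "ignores the sequential nature of language" point: the model has no way to learn features about token order.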
Julia Rozanova @ ICLR retweeted
GLADIA Research Lab @GladiaLab ·
But what can we do with injectivity? Well, for one, we can invert language models! We introduce SipIt, an algorithm that exactly reconstructs the input from hidden states in guaranteed linear time. SipIt recovers inputs >100× faster than alternatives, while remaining exact. (4/6)
13 replies · 35 reposts · 856 likes · 119.2K views
Julia Rozanova @ ICLR @juliaroz ·
I'm at ICLR with a poster on *DMAP: A Distribution Map for Text* led by the excellent Tom Kempton (@UncleKempez), together with @Visa colleagues. Pop by for a cool story on how our method detected a crucial data error in several major synthetic text detection papers!
1 reply · 2 reposts · 16 likes · 2.3K views
Joe Stacey @_joestacey_ ·
Excited to share my first postdoc paper with @SheffieldNLP! 🤩 In this work we argue that supervised uncertainty quantification (UQ) needs better evaluation. Want to know more? Here's a little summary 🧵
6 replies · 9 reposts · 82 likes · 7.3K views
Julia Rozanova @ ICLR retweeted
Joe Stacey @_joestacey_ ·
One nice thing in our new paper is the visualisations illustrating why probes fail as they get increasingly OOD. Details are in the paper, but these are hidden states projected to 2D subspaces using PLS regression. More red = more incorrect instances, more blue = more correct.
1 reply · 5 reposts · 75 likes · 6.1K views
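PLS here is a supervised projection: unlike PCA, each component direction is chosen to maximize covariance with the labels (correctness, in the tweet's case). A rough sketch of a 2-component PLS projection on synthetic stand-ins for hidden states and labels (NIPALS-style deflation; this is my illustration, not the paper's exact pipeline):

```python
import numpy as np

def pls_project_2d(X, y):
    """Project rows of X onto 2 directions that maximize covariance with labels y."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    Z = np.empty((X.shape[0], 2))
    for k in range(2):
        w = Xc.T @ yc                        # weight vector ~ cov(features, labels)
        w /= np.linalg.norm(w)
        t = Xc @ w                           # scores = component k of the projection
        Z[:, k] = t
        p = Xc.T @ t / (t @ t)               # loadings; deflate so component k+1 is new info
        Xc = Xc - np.outer(t, p)
        yc = yc - t * (t @ yc) / (t @ t)
    return Z

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))               # stand-in hidden states
y = (X[:, 0] > 0).astype(float)              # stand-in correct/incorrect labels
Z = pls_project_2d(X, y)                     # (200, 2) points to scatter, colored by y
```

Because the projection is label-supervised, correct and incorrect instances separate along the first component when the hidden states carry any correctness signal, which is what makes the red/blue scatter plots readable.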
Julia Rozanova @ ICLR retweeted
Ksenia_TuringPost @TheTuringPost ·
Attention → Mamba cross-architecture distillation is real. A Transformer doesn't need to stay just a transformer: Apple showed how you can transfer it into a State Space Model (SSM).

▪️ It happens through a linearized-attention intermediate:
1. Distill the Transformer into a linearized attention model using a kernel trick: approximate the softmax exponential similarity in attention as a dot product of transformed features. This turns quadratic attention into linear attention.
2. Distill that model into a Mamba SSM with proper initialization.

This method helps in 2 aspects:
- It avoids hybrid architectures
- It allows Mamba to reach perplexity 14.11 vs 13.86 for the original Transformer.

So the lesson here is not to start Mamba from random weights, but to start it from a sequence mixer already aligned with the teacher Transformer.
7 replies · 78 reposts · 514 likes · 33.6K views
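Step 1 of the recipe above (the kernel trick) is easy to see in code: replace exp(q·k) with φ(q)·φ(k) for a positive feature map φ, and the n×n attention matrix never needs to be formed. A sketch with a common elu(x)+1-style feature map (the specific φ is my assumption, not necessarily Apple's choice):

```python
import numpy as np

def phi(x):
    """Positive feature map, elu(x)+1: a common stand-in for the softmax kernel."""
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(Q, K, V):
    """O(n * d^2) attention via the summary statistic phi(K)^T V."""
    Qf, Kf = phi(Q), phi(K)
    KV = Kf.T @ V                        # (d, d_v): accumulated once, reused by every query
    norm = Qf @ Kf.sum(axis=0)           # per-query normalizer
    return (Qf @ KV) / norm[:, None]

rng = np.random.default_rng(0)
n, d, d_v = 6, 4, 3
Q, K, V = rng.normal(size=(n, d)), rng.normal(size=(n, d)), rng.normal(size=(n, d_v))
out = linear_attention(Q, K, V)          # same shape as softmax attention output
```

The factorization (φ(Q)φ(K)ᵀ)V = φ(Q)(φ(K)ᵀV) is what drops the cost from quadratic to linear in sequence length; distillation then only has to make the linear model's outputs match the softmax teacher's.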
Julia Rozanova @ ICLR retweeted
Rosinality @rosinality ·
A looped transformer would have cyclic trajectories, which means that the output of a specific block would be similar to that of the same block in different iterations. But it also depends on architectural choice, especially input injection.
3 replies · 40 reposts · 240 likes · 23.4K views
Julia Rozanova @ ICLR retweeted
clem 🤗 @ClementDelangue ·
We just OCR'd 27,000 arxiv papers into Markdown using an open 5B model, 16 parallel HF Jobs on L40S GPUs, and a mounted bucket. Total cost: $850 Total time: ~29 hours Jobs that crashed: 0 This now powers "Chat with your paper" on hf.co/papers
90 replies · 248 reposts · 2.3K likes · 173.9K views