Roee Hendel

12 posts

Roee Hendel
@RoeeHendel

Algorithm Developer @AI21Labs

Joined November 2012
195 Following · 33 Followers
Roee Hendel retweeted
AI21 Labs @AI21Labs
1/5 As part of our work on improving the efficiency of our LLM online-RL training pipelines, we cut policy-update step time by ~70% with a model-agnostic padding-minimization method. 🧵
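The thread doesn't spell out the method, but a common model-agnostic way to shrink padding is to sort sequences by length before batching, so each batch only pads to its own bucket's maximum length. A minimal sketch of that idea (not necessarily AI21's actual technique):

```python
import torch

def length_bucketed_batches(sequences, batch_size, pad_id=0):
    """Yield padded batches of similar-length sequences.

    Sorting by length keeps each batch's max length close to its
    members' lengths, so far less padding is computed over.
    """
    # Sort indices by sequence length so batch neighbors are similar.
    order = sorted(range(len(sequences)), key=lambda i: len(sequences[i]))
    for start in range(0, len(order), batch_size):
        idx = order[start:start + batch_size]
        # Pad to this bucket's max length, not the global max.
        max_len = max(len(sequences[i]) for i in idx)
        batch = torch.full((len(idx), max_len), pad_id, dtype=torch.long)
        for row, i in enumerate(idx):
            batch[row, :len(sequences[i])] = torch.tensor(sequences[i])
        yield batch
```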
Roee Hendel retweeted
AI21 Labs @AI21Labs
1/4 🚀 Introducing Jamba2, a memory-efficient, open-source model family built for total enterprise reliability and steerability.
Roee Hendel retweeted
AI21 Labs @AI21Labs
1/5 Releasing Jamba Reasoning 3B under Apache 2.0: a hybrid SSM-Transformer architecture that tops accuracy & speed across record context lengths, e.g. 3-5X faster than Llama 3.2 3B and Qwen3 4B at 32K tokens.
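The speed numbers are AI21's; a crude way to sanity-check long-context decode speed yourself could look like the sketch below. The Hugging Face repo id and the repeated-word prompt are assumptions for illustration, not AI21's benchmark protocol:

```python
import time
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "ai21labs/AI21-Jamba-Reasoning-3B"  # assumed repo id; verify on the Hub
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, torch_dtype="auto", device_map="auto")

# Build a ~32K-token prompt, then time decoding of 128 new tokens.
ctx = tok("word " * 32_000, return_tensors="pt").to(model.device)
start = time.time()
out = model.generate(**ctx, max_new_tokens=128, do_sample=False)
print(f"~{128 / (time.time() - start):.1f} generated tokens/sec at ~32K context")
```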
Roee Hendel retweeted
AI21 Labs @AI21Labs
Introducing Jamba, our groundbreaking SSM-Transformer open model! As the first production-grade model based on the Mamba architecture, Jamba achieves an unprecedented 3X throughput and fits 140K context on a single GPU. 🥂 Meet Jamba: ai21.com/jamba 🔨 Build on @huggingface
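For the "Build on @huggingface" part, loading Jamba should follow the standard transformers causal-LM pattern; a minimal sketch, assuming the launch repo id ai21labs/Jamba-v0.1 (verify on the Hub):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ai21labs/Jamba-v0.1")
model = AutoModelForCausalLM.from_pretrained(
    "ai21labs/Jamba-v0.1",
    torch_dtype="auto",  # load weights in their native precision
    device_map="auto",   # requires `accelerate`; shards across GPUs
)

inputs = tokenizer("The hybrid SSM-Transformer design means", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```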
Roee Hendel retweeted
AK @_akhaliq
AI21 Labs presents Jamba, an SSM-Transformer open model. A production-grade model based on the Mamba architecture, Jamba achieves an unprecedented 3X throughput and fits 140K context on a single GPU.
Roee Hendel @RoeeHendel
@yalishandi @_akhaliq We haven't explored this in depth, but it's a promising direction. Studies linking ICL to SGD could offer clues. In related tests, ICL sometimes acts like an empirical risk minimizer, fitting random examples, while other times ignoring them. Further exploration is needed.
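One concrete version of the "fitting random examples" test is a shuffled-label probe: break the input-label mapping in the demonstrations and see whether predictions track the wrong labels. A minimal sketch, where query_model is a hypothetical stand-in for any LLM completion call:

```python
import random

pairs = [("great movie", "positive"), ("awful plot", "negative"),
         ("loved it", "positive"), ("boring mess", "negative")]

# Shuffle the labels to break the true input-label mapping.
labels = [y for _, y in pairs]
random.shuffle(labels)

prompt = "".join(f"Review: {x}\nLabel: {y}\n\n"
                 for (x, _), y in zip(pairs, labels))
prompt += "Review: terrible acting\nLabel:"

# prediction = query_model(prompt)  # hypothetical LLM call
# If the prediction tracks the shuffled labels, ICL is behaving like an
# empirical risk minimizer on the in-context examples; if it still says
# "negative", the model is ignoring them in favor of the pretrained task.
```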
Roee Hendel retweeted
AK @_akhaliq
In-Context Learning Creates Task Vectors

paper page: huggingface.co/papers/2310.15…

In-context learning (ICL) in Large Language Models (LLMs) has emerged as a powerful new learning paradigm. However, its underlying mechanism is still not well understood. In particular, it is challenging to map it to the "standard" machine learning framework, where one uses a training set S to find a best-fitting function f(x) in some hypothesis class.

Here we make progress on this problem by showing that the functions learned by ICL often have a very simple structure: they correspond to the transformer LLM whose only inputs are the query x and a single "task vector" calculated from the training set. Thus, ICL can be seen as compressing S into a single task vector θ(S) and then using this task vector to modulate the transformer to produce the output. We support the above claim via comprehensive experiments across a range of models and tasks.
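A minimal sketch of the paper's extract-and-patch idea: read the last-token hidden state at an intermediate layer as the task vector θ(S), then inject it into a zero-shot pass on a new query. GPT-2, the layer index, and the "x -> y" prompt format are illustrative assumptions, not the paper's exact setup:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()
L = 6  # assumed intermediate layer for reading/patching the task vector

demos = "apple -> red\nbanana -> yellow\ngrass -> green\nsky ->"

# 1) Run the demonstrations; theta(S) is the final token's hidden
#    state at layer L (hidden_states[L] is the output of block L-1).
with torch.no_grad():
    out = model(**tok(demos, return_tensors="pt"), output_hidden_states=True)
theta = out.hidden_states[L][0, -1]

# 2) Patch theta(S) into a zero-shot pass on a fresh query by
#    overwriting the last token's hidden state at the same layer.
def patch(module, inputs, output):
    output[0][0, -1] = theta  # in-place edit of the block's hidden states

handle = model.transformer.h[L - 1].register_forward_hook(patch)
with torch.no_grad():
    logits = model(**tok("blood ->", return_tensors="pt")).logits
handle.remove()

# Mechanics only: GPT-2 small may be too weak to actually solve the task.
print(tok.decode(logits[0, -1].argmax().item()))
```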
Louis Kirsch @LouisKirschAI
Emergent in-context learning with Transformers is exciting! But what is necessary to make neural nets implement general-purpose in-context learning? 2^14 tasks, a large model + memory, and initial memorization to aid generalization. Full paper arxiv.org/abs/2212.04458 🧵👇(1/9)
Roee Hendel @RoeeHendel
@kushal_tirumala Is it possible that the different baseline values simply arise from the fact that larger models have an overall better language modeling capability, rather than from memorization? It would be interesting to check the memorization value of the "special batch" prior to training on it.
Elon Musk @elonmusk
@Carnage4Life The safety of any AI system can be measured by its MtH (mean time to Hitler). Microsoft's Tay chatbot of several years ago got there in ~24 hours.