

Akos Kadar

@kadarakos
Machine learning researcher and developer.



The TurboQuant paper (ICLR 2026) contains serious issues in how it describes RaBitQ, including incorrect technical claims and misleading theory/experiment comparisons. We flagged these issues to the authors before submission. They acknowledged them but chose not to fix them. The paper was later accepted and widely promoted by Google, reaching tens of millions of views. We're speaking up now because once a misleading narrative spreads, it becomes much harder to correct. We've written a public comment on OpenReview (openreview.net/forum?id=tO3AS…). We would greatly appreciate your attention and help in sharing it.



"The only unsaturated agentic intelligence benchmark in the world" Excuse me? @NetHack_LE is unsaturated since 2020.

We're releasing a technical report describing how Composer 2 was trained.


Agency is usually formalized as utility maximization. But must it be? LLMs suggest a different foundation: intelligence as acquiring behavioral schemas from interaction structure. My new paper: "Universal AI as Imitation" investigates the limit-case of LLM-style models.



To improve fine-tuning data efficiency, replay generic pre-training data. Not only does this reduce forgetting, it actually improves performance on the fine-tuning domain, especially when fine-tuning data is scarce in pre-training. (w/ @percyliang)
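The replay idea above can be sketched as a simple data-mixing step: blend a fraction of generic pre-training examples back into the fine-tuning stream. This is a minimal illustration, not the authors' implementation; `mix_with_replay` and `replay_ratio` are hypothetical names introduced here.

```python
import random

def mix_with_replay(finetune_data, pretrain_data, replay_ratio=0.5, seed=0):
    """Interleave fine-tuning examples with replayed pre-training examples.

    replay_ratio: fraction of the final stream drawn from pre-training
    data (a hypothetical knob; the post does not specify a value).
    """
    rng = random.Random(seed)
    # Number of replay examples so that they make up replay_ratio of the mix.
    n_replay = int(len(finetune_data) * replay_ratio / (1 - replay_ratio))
    replayed = [rng.choice(pretrain_data) for _ in range(n_replay)]
    mixed = list(finetune_data) + replayed
    rng.shuffle(mixed)
    return mixed

# Example: a 50/50 mix keeps every fine-tuning example and adds replay.
ft = [f"ft{i}" for i in range(10)]
pt = [f"pt{i}" for i in range(100)]
mixed = mix_with_replay(ft, pt, replay_ratio=0.5)
```

With `replay_ratio=0.5`, ten replayed pre-training examples are shuffled in alongside the ten fine-tuning examples, so each epoch still covers the scarce fine-tuning data in full.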

