

Thomas Kollar
@tkollar
Head of AI @ Wayve; ex-Research Manager @ TRI; ex-Amazon; ex-Apple. CMU / MIT

Check out DataComp for language models! Open data, open code, open training recipe, and close to Llama3-8B performance. This has been a labor of love over the last year; a huge thanks to all the collaborators for helping make this happen!

I am really excited to introduce DataComp for Language Models (DCLM), our new testbed for controlled dataset experiments aimed at improving language models. 1/x

What design choices matter when developing a visually-conditioned language model (VLM)? Check out our paper – Prismatic VLMs – and open-source training code, evaluation suite, and 42 pretrained VLMs at the 7B-13B scale! 📜 arxiv.org/abs/2402.07865 ⚙️ + 🤗 github.com/TRI-ML/prismat…

✨ Introducing 𝐎𝐩𝐞𝐧𝐕𝐋𝐀 — an open-source vision-language-action model for robotics! 👐
- SOTA generalist policy
- 7B params - outperforms Octo, RT-2-X on zero-shot evals 🦾
- trained on 970k episodes from OpenX dataset 🤖
- fully open: model/code/data all online 🤗 🧵👇

📢 Releasing TRI's open-source Mamba-7B trained on 1.2T tokens of RefinedWeb! Mamba-7B is the largest fully recurrent Mamba model trained to date and a state-of-the-art recurrent LLM. 🚀🚀🚀 huggingface.co/TRI-ML/mamba-7…
