BigScience Research Workshop

355 posts


@BigscienceW

A research workshop on large language models gathering 1,000+ researchers around the world. Follow the training of the 176B multilingual model live @BigScienceLLM.

🌐 Joined April 2021
1 Following · 14.3K Followers
BigScience Research Workshop retweeted
Stas Bekman @StasBekman:
This is the tech that Tunji Ruwase and I first started working on during @BigscienceW to deal with cluster resizes during BLOOM-176B training. Sam Ade Jacobs, Lev Kurilenko, and Masahiro Tanaka then brought it to the finish line, improving the code and publishing a paper and presentation at USENIX ATC 2025. See Minjia's post below for links to the paper, code, etc.
Minjia Zhang @_Minjia_Zhang_:

📢 Yesterday at USENIX ATC 2025, Xinyu Lian from the UIUC SSAIL Lab presented our paper on Universal Checkpointing (UCP). UCP is a new distributed checkpointing system designed for today's large-scale DNN training, where models often use complex forms of parallelism, including data, tensor, pipeline, and expert parallelism.

Existing checkpointing systems struggle in this setting because they are tightly coupled to specific training strategies (e.g., ZeRO-style data parallelism or 3D model parallelism), which break down when the training configuration needs to be dynamically reconfigured over time. This makes resilient, fault-tolerant training difficult.

UCP solves this by decoupling distributed checkpointing from parallelism strategies. Our design introduces a unified checkpoint abstraction, the atomic checkpoint, and a full pattern-matching-based transformation pipeline, which together enable scalable, low-overhead checkpointing with reconfigurable parallelism across arbitrary model sharding strategies. We show that UCP supports state-of-the-art models trained with hybrid 3D/4D parallelism (ZeRO, TP, PP, SP) while incurring less than 0.001% of total training time as overhead.

UCP is fully open-sourced in DeepSpeed. It has been adopted by Microsoft, BigScience, UC Berkeley, and others for large-scale model pre-training and fine-tuning, including Phi-3.5-MoE (42B), BLOOM (176B), and many more. It has also been selected for presentation at PyTorch Day 2025 and FMS 2025 (the Future of Memory and Storage).

Big thanks to the amazing collaborators from Microsoft and Snowflake: @samadejacobs, @LevKurilenko, @MasahiroTanaka, @StasBekman, and @TunjiRuwase.

🔗 Project: lnkd.in/gG6j4vJe
📄 Paper: lnkd.in/gUiC5kcR
💻 Code: lnkd.in/g6uS29nH
📚 Tutorial: lnkd.in/gi_zWSWh

#ATC2025 #LLM #Checkpointing #SystemsForML #DeepLearning #DistributedTraining #UIUC #DeepSpeed
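For a sense of the workflow, here is a minimal sketch, assuming the ds_to_universal.py converter script and the load_universal config key described in the DeepSpeed tutorial; the model class, paths, and hyperparameters are illustrative placeholders, not a definitive recipe.

```python
# Step 1 (run once, offline): convert a parallelism-specific DeepSpeed
# checkpoint into the universal (atomic) format, e.g.:
#   python -m deepspeed.checkpoint.ds_to_universal \
#       --input_folder  ckpts/global_step1000 \
#       --output_folder ckpts/global_step1000_universal
#
# Step 2: resume training, possibly with a different GPU count or
# parallelism degree than the run that produced the checkpoint.
import deepspeed

ds_config = {
    "train_micro_batch_size_per_gpu": 4,
    "optimizer": {"type": "Adam", "params": {"lr": 1e-5}},
    "zero_optimization": {"stage": 1},
    # Tell the engine the checkpoint is universal, so its per-parameter
    # fragments can be re-stitched for the topology of this new run.
    "checkpoint": {"load_universal": True},
}

model = MyModel()  # hypothetical torch.nn.Module
engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config
)
engine.load_checkpoint("ckpts", tag="global_step1000_universal")
```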

BigScience Research Workshop retweeted
Jeff Boudier 🤗 @jeffboudier:
4 years ago we were on the brink of AI becoming proprietary and centralized, when OpenAI kept GPT3 closed and VCs started dumping money on researchers. From fully open science, to fully closed, in a matter of months.

It was scary, and 1,000+ leading researchers and scientists banded together to show the world that it was possible to do the same work in the open, and build an ecosystem that benefits everyone. That was the @BigscienceW BLOOM project, and it put us back on track to open science, starting with forward-thinking organizations like @Meta releasing OPT.

Look at us now. Open models have not only caught up, they're state of the art now. Not just LLMs, but models for document AI, speech to text, text to speech, generating images and more. We're closing in on 2 million open weight models on @huggingface.

Thanks for the reminder @Thom_Wolf.
BigScience Research Workshop retweeted
Stas Bekman @StasBekman:
The Universal Checkpointing paper is out! arxiv.org/abs/2406.18820 If you remember the @BigscienceW BLOOM-176B training, Tunji Ruwase and I co-invented this technology for Megatron-DeepSpeed to make it possible to quickly scale the node topology up and down while continuing training. Since then @MSFTDeepSpeed has continued improving on it, and it is now fully integrated into DeepSpeed. The blog post is here: github.com/microsoft/Deep…
BigScience Research Workshop retweeted
Yacine Jernite @YJernite:
I respect the caution, but I also need to stress that efforts that pursue transparency as an operational value, in service of actual inclusion and accountability, do exist; see for example the writing on this very topic by @BigscienceW, including its ethical charter. 1/3
Meredith Whittaker @mer__edith:

I did not sign this statement, tho I agree “open” AI is not the enemy of “safe” AI. I can't endorse its premise that “openness” alone will “mitigate current+future harms from AI,” nor that it’s an antidote to concentrated power in the AI industry. 1/ open.mozilla.org/letter/

BigScience Research Workshop retweeted
Sasha Luccioni, PhD 🦋🌎✨🤗:
Never thought I'd see the day I'd have a publication in JMLR 🥹 So happy that the BLOOM carbon footprint paper has finally found a home at such an incredible venue! Thank you @shakir_za for being such a great editor, it warms my heart to see your name on this paper 💚
BigScience Research Workshop retweeted
MMitchell @mmitchell_ai:
If you wanted to see the fun panel/Q&A we did with Londoners on AI, you can check out the recording here! My preso at the start is also on Open Science, representing @huggingface & @BigscienceW.
Science Gallery London @SciGalleryLon:

Couldn't make it along to last week's event with @mmitchell_ai? Head over to our blog to watch Margaret's full presentation plus the lively panel discussion that followed feat. @lara_groves @irini_mirena & @carolinesinders london.sciencegallery.com/blog/watch-aga… @londondataweek #AI4Me

BigScience Research Workshop retweeted
BigCode @BigCodeProject:
Introducing: 💫StarCoder

StarCoder is a 15B LLM for code with an 8k context window, trained only on permissively licensed data in 80+ programming languages. It can be prompted to reach 40% pass@1 on HumanEval and to act as a Tech Assistant.

Try it here: shorturl.at/cYZ06r
Release thread 🧵
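For context, generation with StarCoder goes through the standard transformers API; a minimal sketch, where the prompt and generation settings are illustrative (the bigcode/starcoder checkpoint requires accepting its license on the Hub first):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigcode/starcoder"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# device_map="auto" (via accelerate) spreads the 15B weights across
# whatever GPU/CPU memory is available.
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")

prompt = 'def fibonacci(n: int) -> int:\n    """Return the n-th Fibonacci number."""\n'
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```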
BigScience Research Workshop retweeted
Giada Pistilli @GiadaPistilli:
As you already know, I am very proud of the collective work that enabled the development of @BigscienceW's ethical charter. Today I am even more proud to announce that it's part of @OECDinnovation's catalog to promote Trustworthy AI: such a milestone! oecd.ai/en/catalogue/t…
BigScience Research Workshop retweeted
Aran Komatsuzaki @arankomatsuzaki:
The BigScience ROOTS Corpus: A 1.6TB Composite Multilingual Dataset

Documents the data creation and curation efforts behind the ROOTS corpus, a 1.6TB dataset used to train BLOOM.
Releases a large initial subset of the corpus data: huggingface.co/bigscience-data

abs: arxiv.org/abs/2303.03915
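The released subsets can be pulled with the datasets library; a hypothetical sketch, where the dataset id is a placeholder (browse huggingface.co/bigscience-data for the real subset names, many of which sit behind a click-through agreement):

```python
from datasets import load_dataset

# Hypothetical subset id; the actual released subsets live under the
# bigscience-data organization on the Hugging Face Hub.
subset = load_dataset("bigscience-data/roots_en_wikipedia",
                      split="train", streaming=True)

# Streaming avoids downloading an entire shard up front.
for record in subset.take(3):
    print(record["text"][:200])
```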
BigScience Research Workshop retweeted
Yong Zheng-Xin @yong_zhengxin:
(Repost for corrected Arxiv) 🧐What’s the best way to quickly adapt large multilingual language models to new languages? We present our new paper from @BigscienceW 🌸: BLOOM+1: Adding Language Support to BLOOM for Zero-Shot Prompting. 📜 arxiv.org/abs/2212.09535 [1/9]
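One of the adaptation strategies the paper studies is continued pretraining: keep training a BLOOM checkpoint with the causal-LM objective on monolingual text in the new language. A minimal sketch with a small BLOOM variant, where the corpus file and hyperparameters are placeholders rather than the paper's actual setup:

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

name = "bigscience/bloom-560m"  # small variant; the paper also adapts larger ones
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

# Monolingual text in the language being added (placeholder file).
corpus = load_dataset("text", data_files={"train": "new_language.txt"})["train"]
tokenized = corpus.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bloom-plus-one",
                           per_device_train_batch_size=2, num_train_epochs=1),
    train_dataset=tokenized,
    # mlm=False gives the standard causal language modeling collator.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```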
BigScience Research Workshop retweeted
Max Ryabinin @m_ryabinin:
Petals, a system for easy decentralized inference and adaptation of 100B+ LLMs, is now online!
🌸 Generate text with BLOOM-176B using Colab or a desktop GPU
🔌 Fine-tune large models for your tasks
👥 Help others by contributing your GPUs or host a new swarm
colab.research.google.com/drive/1Ervk6HP…
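Client-side, Petals exposes a transformers-style interface; a minimal sketch, noting that the exact model id served by the public swarm has varied across Petals versions:

```python
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

# The transformer blocks run on volunteer GPUs in the public swarm;
# only the embeddings and logits are computed locally.
model_name = "bigscience/bloom"  # swarm model id; may differ by Petals version
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("A cat sat on", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0]))
```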
BigScience Research Workshop retweeted
clem 🤗 @ClementDelangue:
The BLOOM paper is out. It looks like BLOOM does worse than the current GPT3 API on zero-shot generation tasks in English, but better than other open-source LLMs, and better than all of them on zero-shot multilingual tasks (which was the main goal). Proud of the work from the community! arxiv.org/abs/2211.05100
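To make "zero-shot multilingual" concrete: the model receives an instruction in the target language with no in-context examples. A small sketch with a 560M BLOOM variant (the 176B model needs multi-GPU hardware or a hosted endpoint); the prompt is illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "bigscience/bloom-560m"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

# Zero-shot: an instruction in the target language, no examples given.
prompt = ("Traduis en français : 'The model was trained on 46 natural "
          "languages.'\nTraduction :")
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```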