fairseq

12 posts

fairseq
@fairseq

Sequence modeling toolkit for @PyTorch

Joined May 2020
11 Following · 1.6K Followers
fairseq retweeted
Mikel Artetxe @artetxem ·
We are releasing a family of dense and MoE language models with up to 13B and 1.1T parameters. We find that MoEs are more efficient, but the gap narrows at scale and varies greatly across domains and tasks. Paper: arxiv.org/abs/2112.10684 Models & code: github.com/pytorch/fairse…
fairseq retweeted
AI at Meta @AIatMeta ·
We’re introducing GSLM, the first language model that breaks free completely of the dependence on text for training. This “textless NLP” approach learns to generate expressive speech using only raw audio recordings as input. Learn more and get the code: ai.facebook.com/blog/textless-…
fairseq @fairseq ·
fairseq now supports CPU offloading and full parameter+optimizer state sharding via fairscale's FullyShardedDataParallel module. See our tutorial to train a 13B parameter LM on 1 GPU: fb.me/fairseq_fsdp
fairseq retweeted
@·
Facebook AI Research's sequence modeling library @fairseq has made its Twitter debut. Please follow for the latest updates.
fairseq retweeted
PyTorch @PyTorch ·
Fairseq includes support for sequence-to-sequence learning for speech and audio recognition tasks, enabling faster exploration and prototyping of new research ideas while offering a clear path to production. bit.ly/2WfP85X
fairseq retweeted
PyTorch @PyTorch ·
roberta = torch.hub.load('pytorch/fairseq', 'roberta.large')
AI at Meta @AIatMeta

Facebook #AI’s RoBERTa is a new training recipe that improves on BERT, @GoogleAI’s self-supervised method for pretraining #NLP systems. By training longer, on more data, and dropping BERT’s next-sentence prediction, RoBERTa topped the GLUE leaderboard. ai.facebook.com/blog/roberta-a…

fairseq retweeted
PyTorch @PyTorch ·
fairseq now supports the training of gated convolutional language models (arxiv.org/abs/1612.08083). It can train a Google Billion Word language model on 128 GPUs in less than a day.
fairseq retweeted
PyTorch @PyTorch ·
FairSeq Toolkit major update:
- Distributed training
- Transformer models (big Transformer on WMT Eng-German in < 5 hours on a DGX-1)
- Fast inference: translations at 92 sent/sec for big Transformer
- Story generation
Read more at Michael Auli's post: facebook.com/photo.php?fbid…
fairseq retweeted
@·
Fairseq, now in PyTorch! The open-source convolutional sequence-to-sequence engine from FAIR is now available in... fb.me/1gCPauX6V