msr_knowledgenlp

17 posts

@MS_KnowledgeNLP

Microsoft Cognitive Services Research, Knowledge and Language team, https://t.co/JWkzXURXZV

Joined October 2022
6 Following · 110 Followers
msr_knowledgenlp retweeted
AK @_akhaliq
Any-to-Any Generation via Composable Diffusion presents Composable Diffusion (CoDi), a novel generative model capable of generating any combination of output modalities, such as language, image, video, or audio, from any combination of input modalities. Paper page: huggingface.co/papers/2305.11…
msr_knowledgenlp retweeted
Canwen Xu @XuCanwen
Did you know small models can be LLM plugins? 🥳 Introducing SuperICL: it's ICL, but super dope! 🚀 SuperICL combines small models with LLMs like GPT-3.5 and can improve accuracy, multilinguality, and interpretability! 📄 Paper: arxiv.org/abs/2305.08848 🔧 Code: aka.ms/SuperICL
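The idea in the tweet above can be sketched in a few lines. This is a hypothetical illustration, not the paper's released code (all function names here are made up): a small "plugin" model's prediction and confidence are appended to each in-context example and to the test input, and the LLM makes the final call with that extra signal.

```python
def build_supericl_prompt(examples, plugin_predict, test_input):
    """Build a SuperICL-style prompt.

    examples: list of (text, gold_label) in-context demonstrations.
    plugin_predict: small-model callable, text -> (label, confidence).
    """
    lines = []
    for text, gold in examples:
        p_label, p_conf = plugin_predict(text)
        lines.append(f"Input: {text}")
        lines.append(f"Plugin prediction: {p_label} (confidence {p_conf:.2f})")
        lines.append(f"Label: {gold}")
    # The test input gets the plugin's prediction too; the LLM fills in Label.
    p_label, p_conf = plugin_predict(test_input)
    lines.append(f"Input: {test_input}")
    lines.append(f"Plugin prediction: {p_label} (confidence {p_conf:.2f})")
    lines.append("Label:")
    return "\n".join(lines)

# Toy stand-in for a fine-tuned small model (e.g. a RoBERTa classifier).
def toy_plugin(text):
    return ("positive", 0.91) if "great" in text else ("negative", 0.77)

prompt = build_supericl_prompt(
    [("great movie", "positive"), ("dull plot", "negative")],
    toy_plugin,
    "a great twist",
)
```

The prompt string would then be sent to an LLM such as GPT-3.5, which can agree with or override the plugin model's label.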
msr_knowledgenlp retweeted
AK @_akhaliq
GPTEVAL: NLG Evaluation using GPT-4 with Better Human Alignment. abs: arxiv.org/abs/2303.16634
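The general recipe behind LLM-based NLG evaluation can be sketched as follows. This is an illustrative simplification, not the paper's exact protocol (the prompt wording and helper names are assumptions): ask a strong LLM to rate a generated text on a criterion, then parse the numeric rating from its reply.

```python
import re

def build_eval_prompt(source, candidate, criterion="coherence"):
    """Assemble a rating prompt for an LLM judge (illustrative template)."""
    return (
        f"You will rate a summary for {criterion} on a scale of 1 to 5.\n"
        f"Source text:\n{source}\n\n"
        f"Summary:\n{candidate}\n\n"
        f"Rating (1-5):"
    )

def parse_rating(reply):
    """Extract the first digit 1-5 from the LLM's reply; None if absent."""
    m = re.search(r"[1-5]", reply)
    return int(m.group()) if m else None

prompt = build_eval_prompt("The cat sat on the mat.", "A cat sits on a mat.")
# A stubbed LLM reply stands in for an actual GPT-4 call.
rating = parse_rating("I would rate this summary a 4 out of 5.")
```

In practice the prompt is sent to GPT-4 and scores are averaged over samples; the point of the paper is that such LLM-assigned scores correlate better with human judgments than traditional metrics.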
LiGuangrui @LiGuangruii
@MS_KnowledgeNLP It seems that I cannot message you. Would you mind sharing an email address where I can send my resume?
msr_knowledgenlp @MS_KnowledgeNLP
We are hiring part-time research interns to work on LLMs. Please message us if you are interested : )
msr_knowledgenlp @MS_KnowledgeNLP
Thank you for the interest in our team! Sorry for not being able to reply to everyone. We will keep hiring interns quarter by quarter. Feel free to message us your personal page or CV via Twitter whenever you are available. Our researchers will talk with you if there is a match : )
msr_knowledgenlp retweeted
Wenhao Yu @wyu_nd
Combining Retrieval AND Generation (in step 1) can further improve model performance, as shown in Figure 3. The choice between retrieval and generation is interesting, and their complementarity is worth exploring: use the retriever or the generator only where each helps.
John Nay @johnjnay

Right now we do: 1. retrieve docs, 2. have the LLM generate output with those. But this doesn't fully leverage LLM power for step 1. What if we directly generate contextual docs for a question, instead of retrieving external docs?! Paper: arxiv.org/abs/2209.10063 Code: github.com/wyu97/GenRead

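The generate-then-read pipeline described above can be sketched in a few lines. This is a minimal illustration with made-up function names (see github.com/wyu97/GenRead for the actual code): an LLM first generates a contextual document for the question, and a reader then answers conditioned on that generated context instead of retrieved documents.

```python
def generate_then_read(question, generate_doc, read):
    """generate_doc: question -> context str; read: (question, context) -> answer."""
    context = generate_doc(question)   # step 1: generate a doc, don't retrieve one
    return read(question, context)     # step 2: answer from the generated context

# Toy stand-ins for the generator and reader LLM calls.
def toy_generate_doc(question):
    return "Paris is the capital and largest city of France."

def toy_read(question, context):
    return context.split(" is ")[0]    # crude span extraction for the sketch

answer = generate_then_read(
    "What is the capital of France?", toy_generate_doc, toy_read
)
```

In the paper both steps are LLM calls (a large model generating the document, a reader answering from it); the stubs here just make the control flow concrete.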
msr_knowledgenlp retweeted
Wenhao Yu @wyu_nd
Generate Rather Than Retrieve is now accepted to #ICLR2023 🎉🎉 Without using DPR/Google, it achieved SoTA on multiple open-domain QA and knowledge-intensive benchmarks! Work done @ms_knowledgenlp! Code and paper: arxiv.org/abs/2209.10063