Tsinghua KEG (THUDM)

239 posts

@thukeg

#ChatGLM #GLM130B #CodeGeeX #CogVLM #CogView #AMiner The Knowledge Engineering Group (KEG) and THUDM at @Tsinghua_Uni @jietang @ericdongyx

Joined July 2022
169 Following · 4.3K Followers
Tsinghua KEG (THUDM) retweeted
Tsinghua University
Tsinghua University@Tsinghua_Uni·
Prof. Liu's team built an #AI doctor for everyday #healthcare! In a #virtual hospital, it treated 10K+ virtual patients with 93% accuracy. They covered 300+ diseases across 21 departments & released BioMedGPT, PathOrchestra, and more for a full #medical AI pipeline. #THUAndBeyond
Tsinghua KEG (THUDM) retweeted
Tsinghua CS
Tsinghua CS@thudcst·
🏆Congrats to the Storage Research Group from #Tsinghua DCST for winning the #ASPLOS2025/#EuroSys2025 Large-Scale Model Inference Optimization Contest in Rotterdam! They outperformed global competitors, boosting inference performance by 1.1x using AWS NKI framework optimizations.
Tsinghua KEG (THUDM) retweeted
Stanford AI Lab
Stanford AI Lab@StanfordAILab·
Check out our latest blog post about MiniVLA, a smaller open-source vision-language-action model! ai.stanford.edu/blog/minivla/
Tsinghua KEG (THUDM) retweeted
Paul Vicol
Paul Vicol@PaulVicol·
Ruslan Salakhutdinov at the Adaptive Foundation Models Workshop!
Tsinghua KEG (THUDM) retweeted
Richard Socher
Richard Socher@RichardSocher·
AI has a "last-mile problem" similar to self-driving cars. With self-driving cars, early demos impressed, but real-world deployment took years. It's easy to hack up a prototype, but making it work reliably at scale is hard. If each step of an AI agent is only 95% accurate, none of the 30-step workflows will work reliably. Going from 95% to 99.9% accuracy is the real challenge.
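The arithmetic behind the tweet above can be made concrete. Assuming each step of an agent workflow succeeds independently, the chance that every step succeeds is the per-step accuracy raised to the number of steps; this is a minimal sketch of that claim, not a model of real agent behavior:

```python
# Compounding per-step accuracy across a multi-step agent workflow.
# At 95% per-step accuracy, a 30-step workflow rarely completes cleanly;
# at 99.9% per step, it almost always does.

def workflow_success_rate(per_step_accuracy: float, steps: int) -> float:
    """Probability that every step succeeds, assuming independent steps."""
    return per_step_accuracy ** steps

print(f"95%   per step, 30 steps: {workflow_success_rate(0.95, 30):.1%}")   # ~21.5%
print(f"99.9% per step, 30 steps: {workflow_success_rate(0.999, 30):.1%}")  # ~97.0%
```

The independence assumption is pessimistic in some ways (a good agent can retry failed steps) and optimistic in others (errors often cascade), but it captures why going from 95% to 99.9% per-step accuracy is the real challenge.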
Tsinghua KEG (THUDM) retweeted
Z.ai
Z.ai@Zai_org·
🌈AndroidLab: a comprehensive platform for developing and evaluating Android agents. By integrating a controlled environment and standardized benchmarks, and leveraging the Android Instruct dataset, we significantly boost open-source model performance. github.com/THUDM/Android-…
Tsinghua KEG (THUDM) retweeted
Gradio
Gradio@Gradio·
LongWriter-glm4-9b from @thukeg is capable of generating 10,000+ words at once!🚀 Paper identifies a problem with current long context LLMs -- they can process inputs up to 100,000 tokens, yet struggle to generate outputs exceeding lengths of 2,000 words. Paper proposes that an LLM's effective generation length is inherently bounded by the samples it has seen during supervised fine-tuning😮 Demonstrates that existing long context LLMs already possess the potential for a larger output window--all you need is data with extended output during model alignment to unlock this capability. Code & models are released under Apache License 2.0🧡
Tsinghua KEG (THUDM) retweeted
AK
AK@_akhaliq·
New from @thukeg LongWriter: Unleashing 10,000+ Word Generation from Long Context LLMs author @realYushiBai is active in discussion section to answer your questions: huggingface.co/papers/2408.07…
Tsinghua KEG (THUDM) retweeted
Yushi Bai
Yushi Bai@realYushiBai·
Thanks @_akhaliq! We find that your long context LLM is secretly a LongWriter💡All you need is data with extended output during model alignment to unlock this capability. Our code, data, and models: github.com/THUDM/LongWrit…
AK@_akhaliq

LongWriter Unleashing 10,000+ Word Generation from Long Context LLMs discuss: huggingface.co/papers/2408.07… Current long context large language models (LLMs) can process inputs up to 100,000 tokens, yet struggle to generate outputs exceeding even a modest length of 2,000 words. Through controlled experiments, we find that the model's effective generation length is inherently bounded by the samples it has seen during supervised fine-tuning (SFT). In other words, their output limitation is due to the scarcity of long-output examples in existing SFT datasets. To address this, we introduce AgentWrite, an agent-based pipeline that decomposes ultra-long generation tasks into subtasks, enabling off-the-shelf LLMs to generate coherent outputs exceeding 20,000 words. Leveraging AgentWrite, we construct LongWriter-6k, a dataset containing 6,000 SFT examples with output lengths ranging from 2k to 32k words. By incorporating this dataset into model training, we successfully scale the output length of existing models to over 10,000 words while maintaining output quality. We also develop LongBench-Write, a comprehensive benchmark for evaluating ultra-long generation capabilities. Our 9B parameter model, further improved through DPO, achieves state-of-the-art performance on this benchmark, surpassing even much larger proprietary models. In general, our work demonstrates that existing long context LLMs already possess the potential for a larger output window--all you need is data with extended output during model alignment to unlock this capability.

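The plan-then-write decomposition the abstract describes can be sketched in a few lines. This is a hedged illustration of the AgentWrite idea (outline first, then generate each section in order), not THUDM's actual pipeline; `call_llm` is a hypothetical stand-in for any chat-completion API:

```python
# Sketch of an AgentWrite-style pipeline: decompose an ultra-long writing
# task into subtasks, then generate each section with an off-the-shelf LLM
# and concatenate. `call_llm` is a placeholder, not the paper's code.

def call_llm(prompt: str) -> str:
    # Stub: replace with a real model call (e.g. an OpenAI-compatible API).
    return f"[model output for: {prompt[:40]}...]"

def agent_write(instruction: str, num_sections: int = 5) -> str:
    # Step 1: plan -- ask the model for a section-by-section outline,
    # one section per line, each with a target word budget.
    plan = call_llm(
        f"Write an outline with {num_sections} sections (one per line, "
        f"with a word budget each) for: {instruction}"
    )
    sections = [line for line in plan.splitlines() if line.strip()]

    # Step 2: write -- generate each section in order, conditioning on a
    # tail of what has been written so far to keep the output coherent.
    written = []
    for section in sections:
        text = call_llm(
            f"Task: {instruction}\nOutline item: {section}\n"
            f"Already written (tail):\n{''.join(written)[-2000:]}\n"
            f"Write this section now."
        )
        written.append(text + "\n\n")
    return "".join(written)

print(agent_write("A 10,000-word survey of long-context LLMs")[:80])
```

Because each subtask stays well within the model's comfortable output length, the concatenated result can far exceed what a single generation call would produce; the paper uses this pipeline to build the LongWriter-6k training data rather than as a serving-time strategy.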
Tsinghua KEG (THUDM)
Tsinghua KEG (THUDM)@thukeg·
#VisualAgentBench: proprietary models (4o, 4o-mini, 3.5-sonnet) currently have an edge as visual foundation agents, but open models InternVL & GLM-4V are catching up fast, a similar story to LLMs as agents as revealed in #AgentBench back in Aug 2023. arxiv.org/pdf/2408.06327 github.com/THUDM/VisualAg…
Xiao Liu (Shaw)@ShawLiu12

🚨Thrilled to present VisualAgentBench (VAB) with @yugu_nlp and Tianjie, where we enable both TRAINING & TESTING of visual foundation agents across 5 different environments! In all 17 large multimodal models (LMMs) are tested. Find our paper, data, and more insights below 👇 Paper: arxiv.org/abs/2408.06327 Code & Data: github.com/THUDM/VisualAg… Thanks @_akhaliq for sharing on today’s arxiv on HF!

Tsinghua KEG (THUDM) retweeted
Z.ai
Z.ai@Zai_org·
We are not just doing "demo only" for video generation. With Ying, we are bringing a video generation AI that everyone can use. Create a 6-second video in just 30 seconds. Try our new product now. YING: chatglm.cn/video medium.com/@ChatGLM/zhipuai-unveils-cogvideox-a-cutting-edge-video-generation-model-293e3008fda0
Tsinghua KEG (THUDM) retweeted
Tsinghua CS
Tsinghua CS@thudcst·
🏆Proud moment for us! Our paper on 'Explicit factor models for explainable recommendation' (u6v.cn/5OxPGm) has won the Test of Time Award at #SIGIR2024, leading the way in 'explainable recommendation' since 2014. Congrats to the outstanding THUIR group from #DCST, #Tsinghua
Tsinghua KEG (THUDM) retweeted
Z.ai
Z.ai@Zai_org·
🚀 We published a tech report about GLM's Family! ChatGLM: A Family of Large Language Models from GLM-130B to GLM-4 All Tools. arxiv.org/html/2406.1279…
Tsinghua KEG (THUDM) retweeted
AK
AK@_akhaliq·
ChatGLM A Family of Large Language Models from GLM-130B to GLM-4 All Tools We introduce ChatGLM, an evolving family of large language models that we have been developing over time. This report primarily focuses on the GLM-4 language series, which includes GLM-4, GLM-4-Air,