OpenMOSS

39 posts

OpenMOSS

@Open_MOSS

OpenMOSS is an open research community aimed at building artificial general intelligence. Discord 👇 https://t.co/FLvN5uX8wc

Joined January 2025
29 Following · 264 Followers
Pinned Tweet
OpenMOSS
OpenMOSS@Open_MOSS·
(1/6) How do you build a video LLM that decouples vision from language — instead of jamming it all into one context window? Our team at OpenMOSS open-sources MOSS-VL, a cross-attention multimodal model with strong video understanding results. Architecture and benchmarks in thread.
OpenMOSS tweet media
English
6
3
14
1.6K
OpenMOSS retweeted
MOSI
MOSI@MosiAI_Official·
Open-source video should be easy to run, adapt, and build into products. That’s what MOVA is designed for. MOVA-360p has reached 142K total downloads on Hugging Face, with 88,362 downloads in the last month. Developers get open weights, inference code, training pipelines, LoRA fine-tuning scripts, Apache-2.0 licensing, Diffusers support, and Safetensors. Now, with DiffSynth Studio support for MOVA-360p and MOVA-720p, teams can use MOVA across both inference and training workflows.
Hugging Face: huggingface.co/OpenMOSS-Team/…
GitHub: github.com/OpenMOSS/MOVA
DiffSynth Studio: github.com/modelscope/Dif…
MOSI tweet media
English
0
3
8
196
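The post above highlights MOVA's Diffusers support. As a rough orientation, here is a minimal sketch of loading such a text-to-video model through the generic Diffusers auto pipeline; the repo id follows the Hugging Face org named in the post, while the resolved pipeline class, call arguments, and output attributes are assumptions rather than confirmed MOVA API.

```python
# Hedged sketch only: the exact pipeline class and output format for MOVA may differ.
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

pipe = DiffusionPipeline.from_pretrained(
    "OpenMOSS-Team/MOVA-360p",          # assumed repo id based on the post
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")

result = pipe(prompt="a paper boat drifting down a rainy street")
export_to_video(result.frames[0], "mova_sample.mp4", fps=16)  # .frames is an assumption
```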
OpenMOSS
OpenMOSS@Open_MOSS·
ModelScope@ModelScope2022

Say hello to MOSS-TTS-Nano 🚀 0.1B multilingual TTS from MOSI.AI and OpenMOSS. Designed for realtime speech generation without a GPU. Runs directly on CPU, keeping the deployment stack simple enough for local demos, web serving, and lightweight product integration. Part of the MOSS-TTS family alongside the 1.7B and 8B flagship models. 🤖 modelscope.cn/models/openmos… 🌍 modelscope.ai/models/openmos… 💻 github.com/OpenMOSS/MOSS-…

English
0
4
7
511
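Since the claim above is real-time speech generation on CPU, one quick way to sanity-check a local deployment is the real-time factor: synthesis time divided by audio duration, where values below 1 mean faster than real time. The `synthesize` callable and sample rate below are hypothetical stand-ins, not the MOSS-TTS-Nano API.

```python
import time

def real_time_factor(synthesize, text, sample_rate=24_000):
    """RTF = wall-clock synthesis time / duration of the produced audio."""
    start = time.perf_counter()
    audio = synthesize(text)               # hypothetical: returns a 1-D array of samples
    elapsed = time.perf_counter() - start
    return elapsed / (len(audio) / sample_rate)

# rtf = real_time_factor(synthesize, "Hello from a 0.1B model on CPU.")
# print("realtime-capable" if rtf < 1 else "too slow", rtf)
```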
OpenMOSS
OpenMOSS@Open_MOSS·
(5/6) We propose XRoPE (Cross-attention RoPE), mapping text tokens and visual patches into a unified 3D space: time (t), height (h), width (w).
1. Injected into vision Key + text Query for cross-modal alignment
2. Value left untouched to preserve feature fidelity
OpenMOSS tweet media
English
0
0
0
158
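To make the XRoPE description above concrete, here is a minimal sketch of a 3D rotary embedding applied to cross-attention: the head dimension is split into (t, h, w) chunks, rotations are applied to the text-side queries and vision-side keys, and values are left unrotated. How the head dimension is partitioned and where text tokens sit in the 3D grid are assumptions for illustration, not the released implementation.

```python
import torch

def rope_1d(x, pos, base=10000.0):
    """Standard RoPE rotation of the last dim of x by positions `pos`."""
    d = x.shape[-1]                              # must be even
    freqs = base ** (-torch.arange(0, d, 2, dtype=torch.float32) / d)
    ang = pos[..., None] * freqs                 # (seq, d/2)
    cos, sin = ang.cos(), ang.sin()
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

def xrope_3d(x, t, h, w):
    """Split the head dim into three equal chunks and rotate each by one axis."""
    d = x.shape[-1] // 3
    return torch.cat([rope_1d(x[..., :d], t),
                      rope_1d(x[..., d:2 * d], h),
                      rope_1d(x[..., 2 * d:], w)], dim=-1)

# Cross-attention: text queries attend to visual keys/values.
B, n_txt, n_vis, d_head = 1, 8, 16, 48           # 16 patches = 4 frames x 2x2 grid
q = torch.randn(B, n_txt, d_head)                # text-side queries
k = torch.randn(B, n_vis, d_head)                # vision-side keys
v = torch.randn(B, n_vis, d_head)                # vision-side values

idx = torch.arange(n_vis)
vt, vh, vw = (idx // 4).float(), (idx % 4 // 2).float(), (idx % 2).float()
tt = th = tw = torch.zeros(n_txt)                # text-token placement: an assumption

# Rotate only Q (text) and K (vision); V stays untouched.
q = xrope_3d(q, tt, th, tw)
k = xrope_3d(k, vt, vh, vw)
attn = torch.softmax(q @ k.transpose(-1, -2) / d_head ** 0.5, dim=-1)
out = attn @ v                                   # (B, n_txt, d_head)
```

Leaving V unrotated matches the thread's second point: positional phase steers the attention pattern without distorting the visual features that get mixed into the output.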
OpenMOSS
OpenMOSS@Open_MOSS·
(4/6) The biggest mistake video LLMs make: they treat frames as a sequence of images, not a sequence in time. MOSS-VL wraps every frame with special tokens, e.g. <|time_start|>1.2 seconds<|time_end|>, anchoring it in absolute time rather than frame indices.
OpenMOSS tweet media
English
0
0
0
141
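A tiny sketch of the frame-wrapping scheme described above: each sampled frame's placeholder is bracketed by its absolute timestamp. The <|frame|> placeholder token and the exact timestamp formatting are assumptions; only the <|time_start|> / <|time_end|> markers come from the tweet.

```python
def wrap_frames_with_time(timestamps_s):
    """Build the input string for frames sampled at absolute times (seconds)."""
    parts = []
    for t in timestamps_s:
        # <|frame|> stands in for whatever visual placeholder the model uses.
        parts.append(f"<|time_start|>{t:.1f} seconds<|time_end|><|frame|>")
    return "".join(parts)

# Frames are anchored at their absolute positions in the video, not by index:
print(wrap_frames_with_time([0.0, 1.2, 2.4, 3.6]))
```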
OpenMOSS
OpenMOSS@Open_MOSS·
(3/6) We benchmarked MOSS-VL across 30+ multimodal tasks vs Qwen2.5-VL and Qwen3-VL:
1. 📹 Video Understanding: 65.8 (+2 vs Qwen3-VL)
2. 📄 OCR: 83.9
3. 🎯 VSI-bench: +8.3 over Qwen3-VL-8B-Instruct
Consistently first or second across the board.
OpenMOSS tweet media
English
0
0
1
154
OpenMOSS
OpenMOSS@Open_MOSS·
(2/6) Hot take: most video LLMs are wired backwards. They jam visual tokens straight into the LLM context, forcing one model to do both perception and reasoning at once. But here's the fix: MOSS-VL uses cross-attention to keep the two in separate spaces, talking only when needed.
OpenMOSS tweet media
English
0
0
0
131
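As a rough illustration of the "separate spaces" idea in (2/6), the sketch below keeps the LLM's hidden states as queries and consults vision features only through a dedicated cross-attention layer, so visual tokens never occupy the language context window. The dimensions, the gated residual, and the module layout are assumptions, not the MOSS-VL implementation.

```python
import torch
import torch.nn as nn

class VisionCrossAttentionBlock(nn.Module):
    def __init__(self, d_text=1024, d_vision=1152, n_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(
            embed_dim=d_text, kdim=d_vision, vdim=d_vision,
            num_heads=n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_text)
        self.gate = nn.Parameter(torch.zeros(1))   # zero-init: block starts as a no-op

    def forward(self, text_hidden, vision_feats):
        # Text tokens query the vision encoder's output; visual tokens
        # never enter the LLM's own context window.
        attended, _ = self.attn(self.norm(text_hidden), vision_feats, vision_feats)
        return text_hidden + torch.tanh(self.gate) * attended

block = VisionCrossAttentionBlock()
text_hidden = torch.randn(1, 32, 1024)      # LLM hidden states
vision_feats = torch.randn(1, 256, 1152)    # patch features from the vision tower
out = block(text_hidden, vision_feats)
print(out.shape)                            # torch.Size([1, 32, 1024])
```

The zero-initialized gate is one common way to let the pretrained language model keep its behavior until the cross-attention path has learned something useful; whether MOSS-VL does this is not stated in the thread.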
ModelScope
ModelScope@ModelScope2022·
OpenMOSS drops two model series today: MOSS-VL and MOSS-Video-Preview.
🚀 MOSS-VL: offline multimodal engine with cross-attention architecture, XRoPE, and absolute timestamp injection.
🎬 Video score 65.8, beats Qwen3-VL by +2 pts. VSI-bench +8.3 vs Qwen3-VL-8B-Instruct.
🖼️ Strong on image understanding, OCR, document parsing, and visual reasoning. Two checkpoints: Base (pretrain) and Instruct (SFT).
MOSS-Video-Preview: built for real-time streaming video understanding. Cross-attention backbone on Llama-3.2-Vision, native frame-by-frame injection, duplex "listen-speak" switching.
👉 Three checkpoints: Base (pretrain) → SFT (offline instruction) → Realtime-SFT (low-latency streaming, sub-ms TTFT).
🤖 MOSS-VL: modelscope.ai/collections/op…
🤖 MOSS-Video-Preview: modelscope.ai/collections/op…
ModelScope tweet media
English
1
14
28
2.9K
OpenMOSS
OpenMOSS@Open_MOSS·
🚨AI can learn scientific taste. 🔬🤖
Great scientists have strong judgement and foresight, closely tied to what we call scientific taste. Here, we use the term to refer to the capacity to judge and propose research ideas with high potential impact. However, most related research focuses on improving an AI scientist's executive capability, while enhancing an AI's scientific taste remains underexplored.
In this work, we propose Reinforcement Learning from Community Feedback (RLCF), a training paradigm that uses large-scale community signals as supervision, and formulate scientific taste learning as a preference modeling and alignment problem. For preference modeling, we train Scientific Judge on 700K field- and time-matched pairs of high- vs. low-citation papers to judge ideas. For preference alignment, using Scientific Judge as a reward model, we train a policy model, Scientific Thinker, to propose research ideas with high potential impact.
Experiments show Scientific Judge outperforms SOTA LLMs (e.g., GPT-5.2, Gemini 3 Pro) and generalizes to future-year tests, unseen fields, and peer-review preferences. Furthermore, Scientific Thinker proposes research ideas with higher potential impact than baselines.
Our findings show that AI can learn scientific taste, marking a key step toward human-level AI scientists. We are no longer just building AI that automates the execution of science. We are building AI that can automate the direction of science. Scientific taste is no longer a human monopoly.
We have open-sourced everything. Come build the future of AI scientists with us! #AutoResearch #AI #Agent #VibeResearch
OpenMOSS tweet media
English
1
2
7
203
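For the preference-modeling step described above, a standard way to train a judge on high- vs. low-citation pairs is a pairwise Bradley-Terry style loss, sketched below. The encoder is abstracted away as precomputed embeddings, and the head, optimizer settings, and data format are illustrative assumptions rather than the paper's setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class JudgeHead(nn.Module):
    """Scores a pooled text embedding of a research idea or paper."""
    def __init__(self, d_model=768):
        super().__init__()
        self.score = nn.Linear(d_model, 1)

    def forward(self, pooled):
        return self.score(pooled).squeeze(-1)

judge = JudgeHead()
opt = torch.optim.AdamW(judge.parameters(), lr=1e-4)

# One batch of field- and time-matched pairs: embeddings of high-citation
# vs. low-citation papers (random tensors stand in for a real encoder).
emb_high = torch.randn(4, 768)
emb_low = torch.randn(4, 768)

# Pairwise loss: push the judge to score the high-impact member of each
# pair above its matched low-impact partner.
opt.zero_grad()
loss = -F.logsigmoid(judge(emb_high) - judge(emb_low)).mean()
loss.backward()
opt.step()
```

The trained judge then plays the reward-model role in the alignment step, scoring ideas proposed by the policy model.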
Zaky Vids
Zaky Vids@m4zas24·
@Open_MOSS @Open_Moss Any chance you guys will release a web version similar to ElevenLabs? Also, please add emotions ([laughs], [sighs], etc.).
English
1
0
0
32
OpenMOSS
OpenMOSS@Open_MOSS·
🚀 The MOSS-TTS Family is here. From zero-shot cloning to real-time VoiceAgents, we have released our most powerful suite of audio models yet.
The Lineup:
MOSS-TTS Flagship: The industry's best zero-shot voice cloning. Features precise control over duration & Pinyin, capable of generating 1 hour of speech.
MOSS-TTSD-v1.0: A new standard for dialogue generation. Comprehensive optimization for conversational scenes and small languages. Best-in-class performance in all evaluations.
MOSS-VoiceGenerator: One-shot timbre generation. Create voices with a single sentence and complex instruction handling.
MOSS-TTS-Realtime: Built for the next era of VoiceAgents. Synthesis starts after just 2 characters of input for instant response.
MOSS-SoundEffect: Text-to-audio sound effects to expand your creative toolkit.
🔥 Try it now: studio.mosi.cn/voice-synthesis
💻 Deploy (GitHub): github.com/OpenMOSS/MOSS-…
🔌 API Docs: studio.mosi.cn/docs/moss-tts
Welcome to our demo. The era of 'childhood' for TTS is over. #MOSS #AI #TextToSpeech #TTS #OpenClaw #Agent #OpenMOSS #Opensource #VoiceAgent
English
7
6
29
3.5K
OpenMOSS
OpenMOSS@Open_MOSS·
Our benchmark can also test image-editing models! It's a truly unified multimodal generative reasoning benchmark covering video models, image-editing models, and VLMs. Results on the mini test set: (6/6)
OpenMOSS tweet media
English
0
0
0
235
OpenMOSS
OpenMOSS@Open_MOSS·
What about text-heavy logic? Sora-2 takes a prompt + image and generates a video "writing" the step-by-step solution. It even reads the answer via audio! 🔊
Staggering results:
🎯 MATH: 92%
🎯 MMMU: 69.2%
(5/6)
English
0
0
0
209
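The thread does not say how answers are extracted from Sora-2's generated videos for scoring; purely as an illustration, the sketch below grabs the final frame of a clip (where a written solution would typically end up) so a downstream parser could read it. The `read_answer` step is hypothetical.

```python
import cv2

def last_frame(video_path):
    """Return the final decodable frame of a video as a BGR ndarray (or None)."""
    cap = cv2.VideoCapture(video_path)
    frame, ok = None, True
    while ok:
        ok, img = cap.read()
        if ok:
            frame = img
    cap.release()
    return frame

# frame = last_frame("sora2_math_solution.mp4")
# answer = read_answer(frame)   # hypothetical OCR / VLM answer parser
```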
OpenMOSS
OpenMOSS@Open_MOSS·
Sora-2 solves complex visual puzzles (color filling, shape drawing) by understanding symmetry, gradients, and composition. On Visual-Shape tasks, Sora-2's inductive reasoning actually matches Claude 3.5 Sonnet! 🎨🧩 (4/6)
OpenMOSS tweet media
English
0
0
0
179
OpenMOSS
OpenMOSS@Open_MOSS·
We introduce VideoThinkBench to test this. On "Eyeballing Puzzles", Sora-2 reasons by simulating light reflection and manipulating geometry. Result? It outperforms SOTA VLMs and scores 10% higher than GPT-5! 📈🧩 All code and data are open-sourced: github.com/tongjingqi/Thi… (3/6)
OpenMOSS tweet media
English
0
0
0
185
OpenMOSS
OpenMOSS@Open_MOSS·
Current LLM/VLM paradigms ("Thinking with Text/Images") have limits: static images lack dynamics, and split modalities hinder understanding. Our fix: Thinking with Video. Video frames as a unified medium to draw/write reasoning steps! ✍️🎥 Project: thinking-with-video.github.io (2/6)
OpenMOSS tweet media
English
0
0
0
193