Jinsheng Wang

151 posts


@wolfwjs

Make it simple. Make it work.

San Francisco, CA · Joined November 2018
908 Following · 77 Followers
Jinsheng Wang retweeted
HappyOyster @HappyOysterAI
🧩 Remember the feeling of opening a brand-new box of blocks?🧸 We’ve bottled that magic and turned it into a world you can explore! 🖍️Build, Direct, and Explore with HappyOyster Directing Mode. Jump into a world of blocks!🤖: happyoyster.cn #HappyOyster #Alibaba-ATH #WorldModel #AI
7 replies · 3 reposts · 23 likes · 1.6K views
Jinsheng Wang retweeted
HappyOyster @HappyOysterAI
♟️Welcome to Alice in the Pink Abyss In a world of silent chessboards and monochrome dreams, she is the only rhythm🎭 Direct your own story with total freedom in HappyOyster Directing Mode🎞️ Redefine your reality. Explore the immersive unknown at: happyoyster.cn #HappyOyster #Alibaba-ATH #WorldModel #Alibaba #AI
2 replies · 2 reposts · 1 like · 566 views
Jinsheng Wang retweeted
HappyHorse @HappyHorseATH
HappyHorse-1.0 Update: i2v & t2v are now live on Arena! 🚀 Early evals show exceptional performance. We’re in the final sprint for the official launch in 2 weeks. Get early access and try it out now: arena.ai 🐎✨ @arena @AlibabaGroup #AlibabaATH
24 replies · 28 reposts · 262 likes · 63.8K views
Jinsheng Wang retweeted
HappyHorse @HappyHorseATH
HappyHorse-1.0 is now live on Arena! 🚀 Early evals show exceptional performance in Video Edit. We are now in the final optimization sprint for the official launch in 2 weeks. We invite the community to get early access and test our capabilities at arena.ai. 🐎✨
57 replies · 85 reposts · 666 likes · 160.5K views
Jinsheng Wang retweeted
William Fedus @LiamFedus
RL against verifiable rewards in LLMs has clearly opened a very powerful regime. It works, and because it works, there is a strong tendency to view more and more problems through that lens. You optimize for tasks where the reward is clean, where success is easy to check, where the feedback loop closes quickly. This is productive and will keep paying off. But it also creates a bias: you start emphasizing what is legible to the training setup, not necessarily what is most valuable. Scientific reasoning is a good example. Not every step in science is something that can be cleanly graded at the moment it is produced. A hypothesis can later fail experimentally and still have been exactly the right kind of thinking at the time: creative, mechanistically grounded, and responsive to the available evidence. “Turns out to be wrong” does not imply “was low-quality thinking”. A big part of the next frontier will be AI systems that can operate well under this kind of uncertainty, just like a big part of the last one was RL against verifiable rewards.
36 replies · 67 reposts · 800 likes · 85.2K views
Jinsheng Wang retweeted
General Reasoning @GenReasoning
Introducing OpenReward.
🌍 330+ RL environments through one API
⚡ Autoscaled sandbox compute
🍒 4.5M+ unique RL tasks
🚂 Works like magic with Tinker, Miles, Slime
Link and thread below.
25 replies · 193 reposts · 1.3K likes · 240.4K views
Jinsheng Wang retweeted
William Shen @shenbokui
UNI-1 is intelligent, directable, cultured. Incredible range in what it can do. Incredibly proud of the world-class team building a world-class model. It’s a daunting task to go up against industry giants like DeepMind/OpenAI/ByteDance. More to come! API, technical report, model card… Come join us!
Luma@LumaLabsAI

Uni-1 is here! A new kind of model that thinks and generates pixels simultaneously. Less artificial. More intelligent.

33 replies · 38 reposts · 368 likes · 89.5K views
Jinsheng Wang retweeted
Elon Musk @elonmusk
@bindureddy Google will win the AI race in the West, China on Earth and SpaceX in space
875 replies · 992 reposts · 7.7K likes · 1.2M views
Jinsheng Wang retweeted
Grok @grok
Marc Andreessen says true innovators share these 5 traits:
1. High openness – eager for new ideas from anywhere.
2. High conscientiousness – willing to grind for years.
3. Disagreeable – ignore critics and push on.
4. High IQ – handle complex info fast.
5. Low neuroticism – stay calm under pressure.
Rare combo that drives breakthroughs.
0 replies · 3 reposts · 6 likes · 12.8K views
Jinsheng Wang retweeted
AK @_akhaliq
WorldCam: Interactive Autoregressive 3D Gaming Worlds with Camera Pose as a Unifying Geometric Representation. Paper: huggingface.co/papers/2603.16…
7 replies · 26 reposts · 155 likes · 15.1K views
Junyang Lin @JustinLin610
me stepping down. bye my beloved qwen.
1.7K replies · 727 reposts · 13.5K likes · 6.6M views
Jinsheng Wang retweeted
Oscar Michel @ojmichel4
📢Current world models aren't really modeling the world; they're modeling one agent's view of it. Partial observations ≠ world state. Future world models will be independent of any one agent's perspective. You will be able to “drop in” any number of agents at any point in time, and a persistent world state will evolve with their interactions. Imagine a neural MMORPG server. 🧵[1/10]
13 replies · 84 reposts · 615 likes · 124.8K views
Jinsheng Wang retweeted
Saining Xie @sainingxie
world modeling is never about rendering pixels. rendering is local. world state is global. as soon as more than one agent exists, the only thing that truly matters is the shared representation beneath individual views. that shared representation is what scales into collective capability. this is why I'm super excited to share project Solaris -- our new work focused on building a multiplayer video world model in minecraft. This release includes three main pieces.
1⃣ Solaris Engine, a fully featured multiplayer data collection system with built-in visuals. the team put a huge amount of work into this since nothing like it really exists yet. github.com/solaris-wm/sol…
2⃣ Solaris Model, a multiplayer DiT with a new memory-efficient self-forcing design, trained on 12.6M frames of coordinated Minecraft gameplay. github.com/solaris-wm/sol…
3⃣ Solaris Eval, which uses a VLM as a judge to evaluate different multiplayer capabilities.
read the full technical breakdown by @ojmichel4, and start building with Solaris. solaris-wm.github.io
Oscar Michel@ojmichel4

📢Current world models aren't really modeling the world; they're modeling one agent's view of it. Partial observations ≠ world state. Future world models will be independent of any one agent's perspective. You will be able to “drop in” any number of agents at any point in time, and a persistent world state will evolve with their interactions. Imagine a neural MMORPG server. 🧵[1/10]

15 replies · 62 reposts · 481 likes · 75.8K views
Jinsheng Wang retweeted
Google DeepMind @GoogleDeepMind
Step inside Project Genie: our experimental research prototype that lets you create, edit, and explore virtual worlds. 🌎
983 replies · 4.3K reposts · 34.5K likes · 13.4M views
Jinsheng Wang retweeted
Cursor @cursor_ai
Cursor's agent now uses dynamic context for all models. It's more intelligent about how context is filled while maintaining the same quality. This reduces total tokens by 46.9% when using multiple MCP servers.
157 replies · 220 reposts · 3.1K likes · 799.1K views
Jinsheng Wang retweeted
Yingru Li @RichardYRLi
1/ @johnschulman2 mentioned that the most important purpose of value functions/models in RL is variance reduction—but in current LLM-RL tasks, they aren't delivering much. What if we could get token-level variance reduction without training a value model at all? 🧵👇richardli.xyz/optimal-token-…
Michael Truell@mntruell

A conversation with @johnschulman2 on the first year LLMs could have been useful, building research teams, and where RL goes from here.
00:20 - Speedrunning ChatGPT
09:22 - Archetypes of research managers
11:56 - Was OpenAI inspired by Bell Labs?
16:54 - The absence of value functions
18:23 - Continual learning
21:09 - Brittle generalization
24:05 - Co-training generators and verifiers, GANs
27:06 - John’s personal use of AI for research
28:54 - Day in the life
33:01 - Slowdowns in consequential ML ideas
36:21 - "Peer review" within the labs
39:19 - Distribution shift in researchers
43:33 - Future of RL
45:33 - Will the labs coordinate if the world needs them to?
44:46 - Forecasting ills in AGI and engineering
47:53 - Thinking Machines

7 replies · 48 reposts · 276 likes · 114.9K views