Owen Tian Ye
@tiny85114767

1.1K posts
PhD Student @HKUSTGuangzhou | AIGC Researcher

Guangzhou | HK SAR · Joined October 2022
1.3K Following · 360 Followers
Owen Tian Ye retweeted
Aakash Gupta @aakashgupta
Cursor is raising at a $50 billion valuation on the claim that its “in-house models generate more code than almost any other LLMs in the world.” Less than 24 hours after launching Composer 2, a developer found the model ID in the API response: kimi-k2p5-rl-0317-s515-fast. That’s Moonshot AI’s Kimi K2.5 with reinforcement learning appended.

A developer named Fynn was testing Cursor’s OpenAI-compatible base URL when the identifier leaked through the response headers. Moonshot’s head of pretraining, Yulun Du, confirmed on X that the tokenizer is identical to Kimi’s and questioned Cursor’s license compliance. Two other Moonshot employees posted confirmations. All three posts have since been deleted.

This is the second time. When Cursor launched Composer 1 in October 2025, users across multiple countries reported the model spontaneously switching its inner monologue to Chinese mid-session. Kenneth Auchenberg, a partner at Alley Corp, posted a screenshot calling it a smoking gun. KR-Asia and 36Kr confirmed both Cursor and Windsurf were running fine-tuned Chinese open-weight models underneath. Cursor never disclosed what Composer 1 was built on. They shipped Composer 1.5 in February and moved on.

The pattern: take a Chinese open-weight model, run RL on coding tasks, ship it as a proprietary breakthrough, publish a cost-performance chart comparing yourself against Opus 4.6 and GPT-5.4 without disclosing that your base model was free, then raise another round.

That chart from the Composer 2 announcement deserves its own paragraph. Cursor plotted Composer 2 against frontier models on a price-vs-quality axis to argue they’d hit a superior tradeoff. What the chart doesn’t show is that Anthropic and OpenAI trained their models from scratch. Cursor took an open-weight model that Moonshot spent hundreds of millions developing, ran RL on top, and presented the output as evidence of in-house research. That’s margin arbitrage on someone else’s R&D dressed up as a benchmark slide.
The license makes this more than an attribution oversight. Kimi K2.5 ships under a Modified MIT License with one clause designed for exactly this scenario: if your product exceeds $20 million in monthly revenue, you must prominently display “Kimi K2.5” on the user interface. Cursor’s ARR crossed $2 billion in February. That’s roughly $167 million per month, 8x the threshold. The clause covers derivative works explicitly.

Cursor is valued at $29.3 billion and raising at $50 billion. Moonshot’s last reported valuation was $4.3 billion. The company worth 12x more took the smaller company’s model and shipped it as proprietary technology to justify a valuation built on the frontier lab narrative.

Three Composer releases in five months. Composer 1 caught speaking Chinese. Composer 2 caught with a Kimi model ID in the API. A P0 incident this year. And a benchmark chart that compares an RL fine-tune against models requiring billions in training compute without disclosing the base was free.

The question for investors in the $50 billion round: what exactly are you buying? A VS Code fork with strong distribution, or a frontier research lab? The model ID in the API answers that.

If Moonshot doesn’t enforce this license against a company generating $2 billion annually from a derivative of their model, the attribution clause becomes decoration for every future open-weight release. Every AI lab watching this is running the same math: why open-source your model if companies with better distribution can strip attribution, call it proprietary, and raise at 12x your valuation? kimi-k2p5-rl-0317-s515-fast is the most expensive model ID leak in the history of AI licensing.
Harveen Singh Chadha @HarveenChadha

things are about to get interesting from here on

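For readers unfamiliar with the mechanics of the leak described above: OpenAI-compatible endpoints return the serving model's identifier in the response, which is how a tester pointed at Cursor's base URL could see it. Below is a minimal sketch with an invented payload; the field names follow the standard OpenAI chat-completions schema, but the JSON body itself is hypothetical (the report says the identifier surfaced via response headers, whose exact name is not public, so this shows the standard `model` body field instead):

```python
import json

# Hypothetical captured response body from an OpenAI-compatible
# /v1/chat/completions endpoint. The schema is the standard OpenAI one;
# the payload contents are invented for illustration.
captured_body = json.dumps({
    "id": "chatcmpl-abc123",
    "object": "chat.completion",
    "model": "kimi-k2p5-rl-0317-s515-fast",
    "choices": [{"message": {"role": "assistant", "content": "..."}}],
})

# The serving model's identifier is right there in the parsed response.
response = json.loads(captured_body)
model_id = response["model"]
print(model_id)  # kimi-k2p5-rl-0317-s515-fast
```

Nothing exotic is required: any client speaking the OpenAI wire format sees whatever identifier the backend chooses to echo.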
Owen Tian Ye @tiny85114767
Realtime Editing as a Systems Problem. Roughly two months of our full-stack optimization: cache/kernel/VAE serving paths, causal editing distillation, and reward-based DMD for few-step editing. Tech Blog Preview: owen718.github.io/blogs/realtime…
Owen Tian Ye @tiny85114767
We will release the model checkpoint and inference code as soon as possible.
Owen Tian Ye @tiny85114767
By introducing, to our knowledge, the first consistency-aware reward for RL in Real-SR, together with multi-reward learning on top of DiffusionNFT, LucidNFT directly tackles the long-standing gap between realism and faithfulness.
Owen Tian Ye @tiny85114767
Real-SR (real-world super-resolution) has long suffered from a core limitation: results can look impressive, yet deviate from the original low-resolution (LR) evidence. We present LucidNFT, the first consistency-driven RL paradigm for generative Real-SR. w2genai-lab.github.io/LucidNFT Paper out now: arXiv:2603.05947 1/n
Owen Tian Ye retweeted
Donghao Zhou @ CUHK (@donghao_zhou)
Ever tried inpainting an object into a scene with #AI, but details got lost? 🥴 Meet HiFi-Inpaint (#CVPR2026)! 🎉 High-fidelity detail preservation for reference-based inpainting — texts, logos, textures, all intact. No more blur in your ad images! 👇 correr-zhou.github.io/HiFi-Inpaint/
Owen Tian Ye @tiny85114767
The community really needs a small yet powerful image editing model. A lot of people have been waiting for Z-Image-Edit for a long time. If it still doesn’t come out, why not try building it ourselves?
Owen Tian Ye retweeted
Mehdi (e/λ) @BetterCallMedhi
I spent time in Shenzhen last year, and when I saw Merz come back from China saying Germans need to work more, I immediately knew what broke his brain, because I lived the exact same cognitive shock. My first week in Huaqiangbei I burned through 4 prototype iterations of a motor controller board for less than a thousand bucks total; back home a friend was working on something similar and spent over 12 thousand for a single revision that took almost two months to arrive.

When you live that contrast in your own hands, with your own project, something permanently shifts in how you see the world, and it goes way deeper than speed and cost. What Shenzhen actually built is a collective learning organism: imagine 20 PCB fabs, 15 injection mold shops, 30 component distributors and a hundred firmware freelancers all within a 2km radius. It looks insanely redundant from the outside until you realize redundancy is actually information density in disguise. I watched this firsthand with an injection mold supplier I was working with: this guy had seen a hundred founders iterate similar thermal designs over 6 months, so he proactively modified his tooling before I even opened my mouth. He knew what I needed before I knew what I needed. The intelligence lives in the relationships between the nodes, and it compounds daily.

The west thinks about manufacturing as a cost center you optimize by centralizing… China accidentally built a distributed neural network of manufacturing intelligence where knowledge diffuses horizontally across thousands of agents faster than any single western company can process internally.

So when Merz comes back and says we need to work a bit more, I think he saw the problem but COMPLETELY misdiagnosed the solution. Telling Germans to work harder is like telling a horse to gallop faster when the other side built a combustion engine. The gap is ARCHITECTURAL. It’s ecosystem density: you need a custom connector in Shenzhen, you walk 200 meters; in Munich you send an email and wait 3 weeks. It’s iteration speed: parallel search vs sequential optimization at the system level. It’s risk tolerance: Chinese founders ship something broken on Monday, fix it Tuesday, ship again Wednesday, while European companies are still in the approval phase for the pilot program of the feasibility study…

And Merz only saw the surface. What he missed is the tier 2 cities like Hefei, Chengdu, Wuhan replicating the Shenzhen model at scale right now; BYD going from irrelevant to outselling every european automaker combined in roughly 5 years; Huawei building its own 7nm chip under maximum sanctions when every analyst said it was physically impossible; and behind all of that a government that treats advanced manufacturing as an existential national priority while europe debates whether AI needs another ethics committee.

I think what we’re watching is the most asymmetric economic competition in modern history, and most western leaders are still framing it as a productivity problem when it’s actually an ontological one. Europe and America are optimizing variables that China stopped tracking years ago; meanwhile China is compounding on dimensions the west has no framework to even measure. Merz at least had the courage to name it out loud, and I respect that genuinely, but working a bit more inside a broken architecture just means you arrive at the wrong destination slightly faster.
Megatron @Megatron_ron

NEW: 🇩🇪🇨🇳 German Chancellor Merz says Germans need to work more in order to match China: “We are simply no longer productive enough. Each individual may say, ‘I already do quite a lot.’ And that may be true. But when you return from China, ladies and gentlemen, you see things more clearly. With work-life balance and a four-day week, long-term prosperity in our country cannot be maintained. We will simply have to do a bit more.”

Owen Tian Ye @tiny85114767
learn-claude-code is a great entry point: it lets you quickly understand the basic structure of a coding agent. I rebuilt Owen718/coding-agent-vibe-tutorial to push "understanding how to build an agent" one step further: instead of handing you the complete code, it gives only a skeleton outline and the expected functionality, so that in co-creation with the AI you assemble it yourself through vibe coding. github.com/Owen718/coding…
Owen Tian Ye @tiny85114767
The future belongs not to those who can generate the most code, but to those who can refuse the most unnecessary complexity.
Owen Tian Ye @tiny85114767
Staying simple, elegant, and explainable will become a required course for every vibe coding practitioner. The scarcest skill in the future will not be generating more code, but refusing more unnecessary complexity.
Owen Tian Ye @tiny85114767
I increasingly believe that in the vibe coding era, knowing what not to do matters more than knowing what you can do. Coding agents naturally tend toward people-pleasing, oversaturated "construction suggestions": they look hardworking, thorough, and productive, but are not necessarily better. Truly advanced engineering judgment is not about continually adding things to a system, but about continually removing unnecessary complexity.
Owen Tian Ye retweeted
Robin Rombach @robrombach
New paper out! We present a training method for multimodal generative models, called Self-Flow, which combines classic flow matching and representation learning.

Why? Unlike most representation alignment methods, our new approach does not require external, pretrained models and thus scales gracefully to joint multimodal training on images, videos and audio.

How? It combines per-timestep flow matching with dual-timestep representation learning, improving the models' internal representations. This approach outperforms prior methods and shows promising scaling behavior in multimodal pretraining. It also enables downstream applications such as action prediction for embodied AI.

webpage+paper: bfl.ai/research/self-… code: github.com/black-forest-l…

Credit to @hila_chefer, @pess_r, Dominik, @dustin_podell, Vikash, @Vinh_Suhi and Antonio. If you enjoy doing open research like this, come and join BFL! We are actively hiring🌲
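For context on the "classic flow matching" half of that combination: the base objective regresses a velocity field along the straight-line interpolant between a noise sample and a data sample. A minimal NumPy sketch of that standard loss follows; this is not BFL's code, and Self-Flow's dual-timestep representation term is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def flow_matching_loss(x0, x1, predicted_v):
    """Rectified-flow-style flow matching: the regression target for the
    velocity field at x_t = (1 - t) * x0 + t * x1 is simply x1 - x0."""
    target_v = x1 - x0
    return float(np.mean((predicted_v - target_v) ** 2))

# Toy pairing of noise samples x0 with "data" samples x1.
x0 = rng.normal(size=(8, 4))
x1 = rng.normal(size=(8, 4)) + 3.0
t = rng.uniform(size=(8, 1))
x_t = (1 - t) * x0 + t * x1  # the noisy interpolant a model would see

# An oracle that outputs the true velocity x1 - x0 achieves zero loss;
# a model that predicts zeros does not.
oracle_loss = flow_matching_loss(x0, x1, predicted_v=x1 - x0)
zero_loss = flow_matching_loss(x0, x1, predicted_v=np.zeros_like(x0))
```

In a real trainer, `predicted_v` would come from a network evaluated at `(x_t, t)`; the appeal of the objective is that the target needs no external pretrained model, which is the property the tweet highlights.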
Owen Tian Ye retweeted
Standard Intelligence @si_pbc
Computer use models shouldn't learn from screenshots. We built a new foundation model that learns from video like humans do. FDM-1 can construct a gear in Blender, find software bugs, and even drive a real car through San Francisco using arrow keys.
Owen Tian Ye retweeted
Kiwi AI @kiwibuildworlds
Introducing Kiwi Live 0 — blazing-fast, real-time creative editing. Creation shouldn’t pause for renders. Kiwi is built for instant, hands-on control.

Go Live: edit while the feed is still running — try any look, drop in elements, swap outfits, and restyle your face on the fly. No stop-and-render loop.

Build the Frame: sketch a hint, pull it into place, and let prompts do the heavy lifting — draw + drag + prompt to generate a real image and refine it in seconds.

Free to try here 👇 live0.kiwivideo.ai
Kiwi AI@kiwibuildworlds

Blazing-fast real-time video editing while you stream — sub-second latency. Capture, and Kiwi transforms your live feed instantly with stunning fidelity.
