Espen JD

2.9K posts


@Snixtp

| Cyber Network Engineer | Codex enthusiast | Building tools I find useful

Joined June 2020
546 Following · 228 Followers
Espen JD reposted
Aakash Gupta@aakashgupta·
Cursor is raising at a $50 billion valuation on the claim that its “in-house models generate more code than almost any other LLMs in the world.” Less than 24 hours after launching Composer 2, a developer found the model ID in the API response: kimi-k2p5-rl-0317-s515-fast. That’s Moonshot AI’s Kimi K2.5 with reinforcement learning appended.

A developer named Fynn was testing Cursor’s OpenAI-compatible base URL when the identifier leaked through the response headers. Moonshot’s head of pretraining, Yulun Du, confirmed on X that the tokenizer is identical to Kimi’s and questioned Cursor’s license compliance. Two other Moonshot employees posted confirmations. All three posts have since been deleted.

This is the second time. When Cursor launched Composer 1 in October 2025, users across multiple countries reported the model spontaneously switching its inner monologue to Chinese mid-session. Kenneth Auchenberg, a partner at Alley Corp, posted a screenshot calling it a smoking gun. KR-Asia and 36Kr confirmed both Cursor and Windsurf were running fine-tuned Chinese open-weight models underneath. Cursor never disclosed what Composer 1 was built on. They shipped Composer 1.5 in February and moved on.

The pattern: take a Chinese open-weight model, run RL on coding tasks, ship it as a proprietary breakthrough, publish a cost-performance chart comparing yourself against Opus 4.6 and GPT-5.4 without disclosing that your base model was free, then raise another round.

That chart from the Composer 2 announcement deserves its own paragraph. Cursor plotted Composer 2 against frontier models on a price-vs-quality axis to argue they’d hit a superior tradeoff. What the chart doesn’t show is that Anthropic and OpenAI trained their models from scratch. Cursor took an open-weight model that Moonshot spent hundreds of millions developing, ran RL on top, and presented the output as evidence of in-house research. That’s margin arbitrage on someone else’s R&D dressed up as a benchmark slide.
The license makes this more than an attribution oversight. Kimi K2.5 ships under a Modified MIT License with one clause designed for exactly this scenario: if your product exceeds $20 million in monthly revenue, you must prominently display “Kimi K2.5” on the user interface. Cursor’s ARR crossed $2 billion in February. That’s roughly $167 million per month, 8x the threshold. The clause covers derivative works explicitly.

Cursor is valued at $29.3 billion and raising at $50 billion. Moonshot’s last reported valuation was $4.3 billion. The company worth 12x more took the smaller company’s model and shipped it as proprietary technology to justify a valuation built on the frontier lab narrative.

Three Composer releases in five months. Composer 1 caught speaking Chinese. Composer 2 caught with a Kimi model ID in the API. A P0 incident this year. And a benchmark chart that compares an RL fine-tune against models requiring billions in training compute without disclosing that the base was free.

The question for investors in the $50 billion round: what exactly are you buying? A VS Code fork with strong distribution, or a frontier research lab? The model ID in the API answers that.

If Moonshot doesn’t enforce this license against a company generating $2 billion annually from a derivative of their model, the attribution clause becomes decoration for every future open-weight release. Every AI lab watching this is running the same math: why open-source your model if companies with better distribution can strip attribution, call it proprietary, and raise at 12x your valuation? kimi-k2p5-rl-0317-s515-fast is the most expensive model ID leak in the history of AI licensing.
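The leak path the thread describes is plausible because standard OpenAI-compatible chat-completions responses echo back a `model` field naming the model actually served. A minimal sketch of how such an identifier would surface when parsing a response body; the sample payload below is illustrative only (its values, other than the leaked identifier quoted in the thread, are assumptions, not Cursor's actual response):

```python
import json

def served_model_id(raw_response: str) -> str:
    """Return the 'model' field an OpenAI-compatible endpoint echoes back."""
    return json.loads(raw_response).get("model", "<unknown>")

# Illustrative payload shaped like a standard /chat/completions reply.
# Only the model string mirrors the identifier reported in the thread;
# every other field is a placeholder.
sample = json.dumps({
    "id": "chatcmpl-123",
    "object": "chat.completion",
    "model": "kimi-k2p5-rl-0317-s515-fast",
    "choices": [{"message": {"role": "assistant", "content": "..."}}],
})

print(served_model_id(sample))  # kimi-k2p5-rl-0317-s515-fast
```

Pointing an off-the-shelf OpenAI client at a vendor's compatible base URL and reading this field is exactly the kind of routine test that would expose an undisclosed upstream model.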
Harveen Singh Chadha@HarveenChadha

things are about to get interesting from here on

80 replies · 116 reposts · 949 likes · 153.6K views
Charles Curran@charliebcurran·
If you think AI film can’t be art then explain this.
1.8K replies · 3K reposts · 43.8K likes · 6.3M views
Espen JD@Snixtp·
Don't tell me I need to upgrade to Pro to test the new Glass version
[attached media]
0 replies · 0 reposts · 0 likes · 13 views
Espen JD@Snixtp·
Are we serious? AutoResearch didn't run overnight, Codex stopped after setup... Is Claude Code better for this?
[attached media]
0 replies · 0 reposts · 0 likes · 25 views
Espen JD reposted
Fidji Simo@fidjissimo·
Companies go through phases of exploration and phases of refocus; both are critical. But when new bets start to work, like we're seeing now with Codex, it's very important to double down on them and avoid distractions. Really glad we're seizing this moment.
Berber Jin@berber_jin1

SCOOP - OpenAI is planning to simplify its product experience and launch one "superapp" -- part of its broader effort to instill more discipline and focus into the business, and beat back the threat posed by Anthropic. More here in our @WSJ story: wsj.com/tech/openai-pl…

174 replies · 63 reposts · 984 likes · 566.1K views
Espen JD@Snixtp·
@theo They tell the model to work faster
0 replies · 0 reposts · 1 like · 518 views
Theo - t3.gg@theo·
Can someone smarter than me explain how these “turbo mode” things work on like an implementation level? Are they using better GPUs? More GPUs? Reserved allocation during generation?
99 replies · 2 reposts · 752 likes · 124.1K views
am.will@LLMJunky·
Finally proud to announce that I've joined the GPU Minor Leagues. 2 x RTX 6000 Pro. I have six months to pay off the second GPU lol. You are all TERRIBLE influences.
[attached media]
104 replies · 12 reposts · 750 likes · 38.1K views
0xSero@0xSero·
@Snixtp Yes there’s been one trained like this already
1 reply · 0 reposts · 1 like · 15 views
Espen JD@Snixtp·
Seeing all the local AI stuff lately makes me wish I was working full-time so I could afford a few GPUs. It really feels like local AI is about to take off. The progress recently has been crazy, and it's only going to speed up from here. Really wish I could be part of it. For now I’m just watching from the sidelines, admiring it.
0 replies · 0 reposts · 0 likes · 12 views