WebGPU

362 posts

@WebGPU

Next-generation graphics API for the Web

GPU · Joined October 2018
8 Following · 5.2K Followers
WebGPU @WebGPU
You could probably one-shot Cursor in 2 years.
WebGPU @WebGPU
I want to buy puts on Cursor. It will not be worth even $100 million in 10 years.
WebGPU @WebGPU
Life pro tip: if you want to raise a lot of capital, just bullshit your investors
Aakash Gupta @aakashgupta

Cursor is raising at a $50 billion valuation on the claim that its “in-house models generate more code than almost any other LLMs in the world.” Less than 24 hours after launching Composer 2, a developer found the model ID in the API response: kimi-k2p5-rl-0317-s515-fast. That’s Moonshot AI’s Kimi K2.5 with reinforcement learning appended.

A developer named Fynn was testing Cursor’s OpenAI-compatible base URL when the identifier leaked through the response headers. Moonshot’s head of pretraining, Yulun Du, confirmed on X that the tokenizer is identical to Kimi’s and questioned Cursor’s license compliance. Two other Moonshot employees posted confirmations. All three posts have since been deleted.

This is the second time. When Cursor launched Composer 1 in October 2025, users across multiple countries reported the model spontaneously switching its inner monologue to Chinese mid-session. Kenneth Auchenberg, a partner at Alley Corp, posted a screenshot calling it a smoking gun. KR-Asia and 36Kr confirmed both Cursor and Windsurf were running fine-tuned Chinese open-weight models underneath. Cursor never disclosed what Composer 1 was built on. They shipped Composer 1.5 in February and moved on.

The pattern: take a Chinese open-weight model, run RL on coding tasks, ship it as a proprietary breakthrough, publish a cost-performance chart comparing yourself against Opus 4.6 and GPT-5.4 without disclosing that your base model was free, then raise another round.

That chart from the Composer 2 announcement deserves its own paragraph. Cursor plotted Composer 2 against frontier models on a price-vs-quality axis to argue they’d hit a superior tradeoff. What the chart doesn’t show is that Anthropic and OpenAI trained their models from scratch. Cursor took an open-weight model that Moonshot spent hundreds of millions developing, ran RL on top, and presented the output as evidence of in-house research. That’s margin arbitrage on someone else’s R&D dressed up as a benchmark slide.

The license makes this more than an attribution oversight. Kimi K2.5 ships under a Modified MIT License with one clause designed for exactly this scenario: if your product exceeds $20 million in monthly revenue, you must prominently display “Kimi K2.5” on the user interface. Cursor’s ARR crossed $2 billion in February. That’s roughly $167 million per month, 8x the threshold. The clause covers derivative works explicitly.

Cursor is valued at $29.3 billion and raising at $50 billion. Moonshot’s last reported valuation was $4.3 billion. The company worth 12x more took the smaller company’s model and shipped it as proprietary technology to justify a valuation built on the frontier lab narrative.

Three Composer releases in five months. Composer 1 caught speaking Chinese. Composer 2 caught with a Kimi model ID in the API. A P0 incident this year. And a benchmark chart that compares an RL fine-tune against models requiring billions in training compute without disclosing the base was free.

The question for investors in the $50 billion round: what exactly are you buying? A VS Code fork with strong distribution, or a frontier research lab? The model ID in the API answers that.

If Moonshot doesn’t enforce this license against a company generating $2 billion annually from a derivative of their model, the attribution clause becomes decoration for every future open-weight release. Every AI lab watching this is running the same math: why open-source your model if companies with better distribution can strip attribution, call it proprietary, and raise at 12x your valuation?

kimi-k2p5-rl-0317-s515-fast is the most expensive model ID leak in the history of AI licensing.

WebGPU @WebGPU
We need to find a way to grant all humans reach credits, which they can decide to pool, sell to others, etc.
WebGPU @WebGPU
@elonmusk musky boi, instead of giving people UBI, people should get UGR, as in Universal Guaranteed Reach. Reach/human attention, as long as only humans can legally be considered natural entities, is zero-sum; intelligence is not.
WebGPU retweeted
Dan Greenheck @dangreenheck
@_avdept I attempted to when modeling sea spray. I did not have a good time lol, even with Claude handholding me through the entire process. WebGPU has opened up a lot of fun possibilities though for sure.
WebGPU @WebGPU
If the money supply grows faster, raw material costs will rise; it's going to be directly inflationary. 98% of people will probably end up getting median wage as UBI, the bottom % will be slightly richer, and most higher earners will be nerfed. There is no free lunch AI will bring lol
WebGPU @WebGPU
If you did not understand this before, you’re not gonna make it.
WebGPU @WebGPU
Wages are a small % of manufactured product cost. There will be no UHI; you'll just end up being a peasant.
WebGPU @WebGPU
I was born to fuck
WebGPU @WebGPU
When someone asks you what you wanna do, tell them that you were born to fuck, not work.
WebGPU @WebGPU
Only civilizations that have solved, or at least learned to manage, their “mistakes of Earth” will ever reach another star. The ones that don’t either collapse or stagnate in their home system, quietly becoming extinction statistics.
WebGPU @WebGPU
Interstellar expansion without FTL requires generational ships, self‑sustaining ecologies, and political stability measured in millennia. You can’t do that if you’re still fighting over borders, burning fossil fuels, or treating your home planet as a disposable launchpad.
WebGPU @WebGPU
Is the lack of FTL actually the filter that ensures only the species smart enough to fix their own planet get to touch the others?
WebGPU retweeted
marko @markopolojarvi
The future of mainstream local AI was never 27B models running on Windows, macOS, or Linux. The future of local mainstream AI was always WebGPU in a browser using a <1B agent-specialized model.
WebGPU retweeted
Sebastian Aaltonen @SebAaltonen
I hate when Codex does this: "I’m patching the WebGPU indexed-draw path so suballocated transient meshes are bound using byte offsets instead of relying on firstIndex/baseVertex. That targets exactly the Emscripten-only failure mode." Rebinding vertex buffers for each draw call instead of changing base offsets. Sure, it might fix some random debug draw call, at the price of making everything slower...
WebGPU retweeted
Elon Musk @elonmusk
So many phonies, so few who are the real deal
WebGPU @WebGPU
It doesn't matter that the internet is forever if you don't care
WebGPU @WebGPU
Be selfish to the selfish and selfless to the selfless.