Dean Rie /raɪ/

19.8K posts

@dean_rie

fullstack developer & designer @cursor_ai

Joined February 2009
457 Following · 1.9K Followers
Pinned Tweet
Dean Rie /raɪ/@dean_rie·
.@cursor_ai giving you grief? Billing confusion? Bugs being buggy? Need something explained? Just @ me or drop a DM - I got you 🤝
Kinopee
Kinopee@kinopee_ai·
Spring in Japan 🌸
Dean Rie /raɪ/ retweeted
Cursor
Cursor@cursor_ai·
Cursor cloud agents can now run on your infrastructure. Get the same cloud agent harness and experience, but keep your code and tool execution entirely in your own network. cursor.com/blog/self-host…
Dean Rie /raɪ/ retweeted
Lee Robinson
Lee Robinson@leerob·
Yep, Composer 2 started from an open-source base! We will do full pretraining in the future. Only ~1/4 of the compute spent on the final model came from the base, the rest is from our training. This is why evals are very different. And yes, we are following the license through our inference partner terms.
Fynn@fynnso

was messing with the OpenAI base URL in Cursor and caught this: accounts/anysphere/models/kimi-k2p5-rl-0317-s515-fast. so Composer 2 is just Kimi K2.5 with RL? at least rename the model ID

Dean Rie /raɪ/ retweeted
Dylan Field
Dylan Field@zoink·
Agents, meet the Figma canvas
Dean Rie /raɪ/ retweeted
Sora
Sora@soraofficialapp·
We’re saying goodbye to the Sora app. To everyone who created with Sora, shared it, and built community around it: thank you. What you made with Sora mattered, and we know this news is disappointing. We’ll share more soon, including timelines for the app and API and details on preserving your work. – The Sora Team
Dean Rie /raɪ/ retweeted
Ben Lang
Ben Lang@benln·
Cursor team will be visiting these cities over the next few weeks:
• Bangalore
• Chennai
• Mumbai
• Medellin
• Cali
• Bangkok
• Istanbul
• Singapore
• Ho Chi Minh City
• Jakarta
• Kuala Lumpur
• Colombo
• Rio
• Florianópolis
Details being added on Luma!
Adam Hofmann
Adam Hofmann@ajhofmann·
Mt Tamalpais has some of the best bouldering backdrops I’ve ever seen, and it’s under an hour from the city. SF is so underrated
Dean Rie /raɪ/ retweeted
Mike
Mike@grabbou·
We evaluated Composer 2 in our React Native evals, and I'll say this: the @cursor_ai team is cooking 🧑‍🍳
Dean Rie /raɪ/ retweeted
Cursor
Cursor@cursor_ai·
We're releasing a technical report describing how Composer 2 was trained.
Dean Rie /raɪ/ retweeted
Cursor
Cursor@cursor_ai·
Cursor can now create new components and frontends in Figma using your team's design system.
Dean Rie /raɪ/ retweeted
eric zakariasson
eric zakariasson@ericzakariasson·
there are so many cursor events next 2 weeks it doesn't even fit on the screen! go check out some close to you in the calendar: luma.com/cursorcommunity
Dean Rie /raɪ/ retweeted
Lee Robinson
Lee Robinson@leerob·
Some early evals of Composer 2 are coming in! These seem to match the results we published in our blog post.

But benchmarks are an imperfect measure. For example, even though these results show Composer 2 closer to GPT-5.4 and ahead of Opus 4.6, that isn't a universally true statement. Even from my own experience taste testing, I prefer the writing style of Opus. I also use Opus for general writing critiques outside of coding, so maybe I just have more familiarity with the model. But I think it's important not to view benchmarks as absolutes, just something to consider before testing yourself.

The eval results show the improvements from our continued pretraining and RL work on top of the base. It'd be cool to see Kimi with the Cursor harness added as well to make the comparison as close as possible (the diff there is larger than I'd expect; it probably needs some prompt tweaks for Kimi). We'll be sharing more details on the ML work behind Composer 2 here shortly, in addition to the Terminal-Bench 2.0 and SWE-bench multilingual results we published in the announcement.

I think you need to spend some real time with a model to understand its behaviors and quirks. For example, when we're dogfooding Composer internally, we flag to the ML team any weird behaviors (bad markdown formatting, overly verbose summaries, etc.) so we can fix and penalize those behaviors in RL.

Appreciate the Next team making their evals¹ more robust after feedback and including new models there like GPT-5.4, as well as the Roboflow² team for publishing their results! As more evals come out, I'll try to thread them here to see places Composer is good, or maybe not so good and something we should fix going forward. For example, I'd expect it to be not as good at non-coding benches.

[1]: nextjs.org/evals
[2]: blog.roboflow.com/best-coding-ag…