MarkdownLM

44 posts

MarkdownLM banner
MarkdownLM

@MarkdownLM

First tool to treat your rules as infrastructure. Natively integrated everywhere. https://t.co/sGOJ0iX7gi

Joined February 2026
10 Following · 0 Followers
Pinned Tweet
MarkdownLM
MarkdownLM@MarkdownLM·
AI agents forget your rules every session. We built the governance layer that doesn't. MCP-native. CLI sync. Validation gate before code lands. Free BYOK, no credit card. markdownlm.com #AI #AgenticAI #AIGovernance
0 replies · 0 reposts · 0 likes · 25 views
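The pinned tweet's "validation gate before code lands" can be pictured as a small pre-merge check. This is a minimal sketch that assumes nothing about MarkdownLM's actual implementation: the rule names and regex patterns below are invented for illustration, and a real gate would load rules from your own rule files rather than a hard-coded dict.

```python
import re

# Hypothetical rules: names and patterns are invented, not MarkdownLM's.
RULES = {
    "no-print-debugging": r"\bprint\(",
    "no-wildcard-imports": r"^from \w+ import \*",
}

def violations(diff_lines):
    """Return (rule, line) pairs for every added diff line that breaks a rule."""
    found = []
    for line in diff_lines:
        if not line.startswith("+"):      # only inspect added lines
            continue
        body = line[1:]
        for rule, pattern in RULES.items():
            if re.search(pattern, body):
                found.append((rule, body.strip()))
    return found

sample_diff = ["+from os import *", "+result = compute()", "-print(old_debug)"]
for rule, line in violations(sample_diff):
    # A real CI gate would exit nonzero here to block the merge.
    print(f"BLOCKED {rule}: {line}")
```

The point of the design is that the rules live in one reviewable place, so every agent-generated diff is checked against the same written standards before it lands.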
Patty
Patty@pattybuilds·
who even uses cursor anymore if u aren't retarded u can pay $200/mo for Claude Code max and build a million dollar software company
Patty tweet media
NIK@ns123abc

🚨NEWS: Cursor’s $50B “in-house model” is literally Kimi K2.5 with RL on top. Got caught in 24 hours

>be Moonshot AI
>spend hundreds of millions training Kimi K2.5
>1 trillion parameters, 15 trillion tokens, agent swarm architecture
>beat GPT-5.2 and Opus 4.5 on real benchmarks
>open-source it because you believe in the ecosystem
>one condition: display “Kimi K2.5” if you make over $20M/month from it
>Cursor takes the model
>runs RL on coding tasks
>ships it March 19 as “Composer 2”
>blog post: “continued pretraining + scaled reinforcement learning”
>zero mention of Kimi K2.5
>“our in-house models generate more code than almost any other LLMs in the world”
>publishes benchmark chart
>Composer 2 against Opus 4.6 and GPT-5.4
>uses the chart to justify raising at $50 billion!
>less than 24 hours later
>kimi dev intercepts the API response
>model ID: kimi-k2p5-rl-0317-s515-fast
>they didn’t even rename it
>Moonshot head of pretraining runs tokenizer test
>confirms: identical to Kimi’s tokenizer
>publicly tags Cursor’s co-founder: “why aren’t you respecting our license?”
>two more Moonshot employees post confirmations
>all three posts deleted within hours
>legal is now involved
>but it gets worse
>Cursor had Kimi K2.5 listed as a FREE model in their UI just weeks ago
>users were openly using it
>Feb 9: “K2.5 was in my model list. I updated and it vanished”
>it vanished because Cursor pulled it from the picker, and relaunched it as their own model
>Moonshot valuation: $4.3B
>Cursor valuation: $50B

Absolute state of Cursor.

10 replies · 0 reposts · 60 likes · 5.5K views
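The "tokenizer test" mentioned in the thread refers to checking whether two models share a tokenizer. Moonshot's exact procedure isn't public; a generic version is to encode the same probe strings with both tokenizers and compare the token-ID sequences. The toy word-level vocabularies below are invented for illustration — a real check would load the two published tokenizer files instead.

```python
# Toy sketch of a tokenizer-identity check (not Moonshot's actual method).

def make_tokenizer(vocab):
    """Return a toy encoder mapping whitespace-split words to IDs (-1 = unknown)."""
    def encode(text):
        return [vocab.get(word, -1) for word in text.split()]
    return encode

def same_tokenizer(enc_a, enc_b, probes):
    """True iff both encoders produce identical ID sequences on every probe."""
    return all(enc_a(p) == enc_b(p) for p in probes)

probes = ["hello world", "reinforcement learning on code"]

# Invented vocabularies purely for demonstration:
kimi_like = make_tokenizer({"hello": 7, "world": 9, "code": 4})
composer_like = make_tokenizer({"hello": 7, "world": 9, "code": 4})  # same vocab
other = make_tokenizer({"hello": 1, "world": 2, "code": 3})          # different vocab

print(same_tokenizer(kimi_like, composer_like, probes))  # True: identical mapping
print(same_tokenizer(kimi_like, other, probes))          # False: IDs diverge
```

Since fine-tuning normally leaves the tokenizer untouched, an identical vocabulary-to-ID mapping is strong evidence of a shared base model, which is why the claim carried weight.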
MarkdownLM
MarkdownLM@MarkdownLM·
@leerob If you were a believer in open source you would release Composer as open source, but you won't. Why don't you just admit you love taking advantage of open source? Bro says "I'm a big believer in open source" like stop acting like Linus Torvalds
0 replies · 0 reposts · 0 likes · 155 views
Lee Robinson
Lee Robinson@leerob·
I'm a big believer in open source, especially as AI improves. It was a miss to not mention the Kimi base in our blog from the start. We'll fix that for the next model 🙏 Their team clarified our usage was licensed in the tweet below. x.com/Kimi_Moonshot/…
Kimi.ai@Kimi_Moonshot

Congrats to the @cursor_ai team on the launch of Composer 2! We are proud to see Kimi-k2.5 provide the foundation. Seeing our model integrated effectively through Cursor's continued pretraining & high-compute RL training is the open model ecosystem we love to support. Note: Cursor accesses Kimi-k2.5 via @FireworksAI_HQ's hosted RL and inference platform as part of an authorized commercial partnership.

164 replies · 84 reposts · 1.8K likes · 229K views
MarkdownLM
MarkdownLM@MarkdownLM·
@adxtyahq You gotta run far away when you see YC and Forbes 30u30
0 replies · 0 reposts · 1 like · 298 views
aditya
aditya@adxtyahq·
PearAI literally forked an open source IDE repo. funniest part? YC backed lmao
aditya tweet media
50 replies · 19 reposts · 713 likes · 77.4K views
Peer Richelsen
Peer Richelsen@peer_rich·
wait so a VSCode fork is also a secret Kimi fork?
Aakash Gupta@aakashgupta

Cursor is raising at a $50 billion valuation on the claim that its “in-house models generate more code than almost any other LLMs in the world.” Less than 24 hours after launching Composer 2, a developer found the model ID in the API response: kimi-k2p5-rl-0317-s515-fast. That’s Moonshot AI’s Kimi K2.5 with reinforcement learning appended.

A developer named Fynn was testing Cursor’s OpenAI-compatible base URL when the identifier leaked through the response headers. Moonshot’s head of pretraining, Yulun Du, confirmed on X that the tokenizer is identical to Kimi’s and questioned Cursor’s license compliance. Two other Moonshot employees posted confirmations. All three posts have since been deleted.

This is the second time. When Cursor launched Composer 1 in October 2025, users across multiple countries reported the model spontaneously switching its inner monologue to Chinese mid-session. Kenneth Auchenberg, a partner at Alley Corp, posted a screenshot calling it a smoking gun. KR-Asia and 36Kr confirmed both Cursor and Windsurf were running fine-tuned Chinese open-weight models underneath. Cursor never disclosed what Composer 1 was built on. They shipped Composer 1.5 in February and moved on.

The pattern: take a Chinese open-weight model, run RL on coding tasks, ship it as a proprietary breakthrough, publish a cost-performance chart comparing yourself against Opus 4.6 and GPT-5.4 without disclosing that your base model was free, then raise another round.

That chart from the Composer 2 announcement deserves its own paragraph. Cursor plotted Composer 2 against frontier models on a price-vs-quality axis to argue they’d hit a superior tradeoff. What the chart doesn’t show is that Anthropic and OpenAI trained their models from scratch. Cursor took an open-weight model that Moonshot spent hundreds of millions developing, ran RL on top, and presented the output as evidence of in-house research. That’s margin arbitrage on someone else’s R&D dressed up as a benchmark slide.

The license makes this more than an attribution oversight. Kimi K2.5 ships under a Modified MIT License with one clause designed for exactly this scenario: if your product exceeds $20 million in monthly revenue, you must prominently display “Kimi K2.5” on the user interface. Cursor’s ARR crossed $2 billion in February. That’s roughly $167 million per month, 8x the threshold. The clause covers derivative works explicitly.

Cursor is valued at $29.3 billion and raising at $50 billion. Moonshot’s last reported valuation was $4.3 billion. The company worth 12x more took the smaller company’s model and shipped it as proprietary technology to justify a valuation built on the frontier lab narrative.

Three Composer releases in five months. Composer 1 caught speaking Chinese. Composer 2 caught with a Kimi model ID in the API. A P0 incident this year. And a benchmark chart that compares an RL fine-tune against models requiring billions in training compute without disclosing the base was free.

The question for investors in the $50 billion round: what exactly are you buying? A VS Code fork with strong distribution, or a frontier research lab? The model ID in the API answers that.

If Moonshot doesn’t enforce this license against a company generating $2 billion annually from a derivative of their model, the attribution clause becomes decoration for every future open-weight release. Every AI lab watching this is running the same math: why open-source your model if companies with better distribution can strip attribution, call it proprietary, and raise at 12x your valuation?

kimi-k2p5-rl-0317-s515-fast is the most expensive model ID leak in the history of AI licensing.

8 replies · 0 reposts · 47 likes · 6.9K views
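The thread's "$167 million per month, 8x the threshold" figure follows directly from the reported numbers. A quick check of the arithmetic, using only the figures claimed in the posts above (not independently verified):

```python
# Figures as reported in the thread, not independently verified.
ARR = 2_000_000_000          # Cursor's reported annual recurring revenue, USD
THRESHOLD = 20_000_000       # reported monthly-revenue trigger in the Modified MIT clause

monthly = ARR / 12           # convert ARR to a monthly run rate

print(f"monthly revenue: ${monthly / 1e6:.0f}M")            # ~$167M
print(f"multiple of threshold: {monthly / THRESHOLD:.1f}x")  # ~8.3x
```

So if both reported figures hold, the attribution clause would apply with a wide margin, which is the thread's core licensing claim.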
MarkdownLM
MarkdownLM@MarkdownLM·
@iBuild People really thought a VS Code fork was gonna create a more efficient model than the top AI models
1 reply · 0 reposts · 4 likes · 640 views
void.
void.@iBuild·
cursor messed up big time
> created hype about composer 2.
> marketed it as better and cheaper alternative to opus 4.6.
> made it perform the best across benchmarks.
> what benchmarks? cursorbench.
> turned out they had just further trained kimi-k2.5 model and named it composer 2.
> founder at kimi called it out too.
> nothing changed. no response.
> you can still verify the claim yourself, they haven't patched it yet.
> this raises questions on their previous models such as composer 1.5 too, was it just a copy too?
> 30B dollars company btw.
Fynn@fynnso

was messing with the OpenAI base URL in Cursor and caught this:

accounts/anysphere/models/kimi-k2p5-rl-0317-s515-fast

so composer 2 is just Kimi K2.5 with RL. at least rename the model ID

23 replies · 5 reposts · 198 likes · 22.9K views
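The kind of check Fynn describes works because OpenAI-compatible providers expose a standard "list models" response whose `data[].id` fields name the underlying models. This sketch parses a mocked payload in that standard shape; only the leaked identifier string comes from the tweet above, and the surrounding JSON is illustrative.

```python
import json

# Mocked response in the standard OpenAI "list models" shape.
# Only the identifier string itself is taken from the tweet above.
mock_response = json.dumps({
    "object": "list",
    "data": [
        {"id": "accounts/anysphere/models/kimi-k2p5-rl-0317-s515-fast",
         "object": "model"},
    ],
})

def model_ids(raw):
    """Extract every model ID from an OpenAI-style /v1/models response body."""
    return [entry["id"] for entry in json.loads(raw)["data"]]

for mid in model_ids(mock_response):
    # The upstream base model is visible right in the identifier string.
    print(mid)
```

Against a live provider you would issue `GET <base_url>/v1/models` with your API key and feed the response body to `model_ids`; no interception is needed if the IDs are returned verbatim.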
Deep Insight Labs
Deep Insight Labs@DeepInsightLabs·
@isareksopuro Getting into Forbes 30u30 I can understand, but getting into YC and even getting VC funding? Says a lot about the state of things...
2 replies · 0 reposts · 14 likes · 2.5K views
isabelle
isabelle@isareksopuro·
state of silicon valley:
> Delve (YC W24)
>"AI Native"
>literally no AI
>forbes 30u30 founders
>charges $6k for a chatgpt'd legal contract
>uses Indian contractors to fake data (impersonating as US-based CPAs)
> leaked sensitive client data (Lovable, Cluely) & blamed it on AI...?
isabelle tweet media
erin griffith@eringriffith

A detailed and brutal look at the tactics of buzzy AI compliance startup Delve "Delve built a machine designed to make clients complicit without their knowledge, to manufacture plausible deniability while producing exactly the opposite." substack.com/home/post/p-19…

35 replies · 87 reposts · 1.3K likes · 108.6K views
MarkdownLM
MarkdownLM@MarkdownLM·
@sandeepnailwal Wdym, I thought YC-based startups discovered artificial superintelligence and consciousness already 😭
0 replies · 0 reposts · 0 likes · 7 views
Sandeep | CEO, Polygon Foundation (※,※)
LLM based AI is NOT conscious. I co-founded a company literally called Sentient, we're building reasoning systems for AGI, so believe me when I say this.

I keep seeing smart people, people I genuinely respect, come out and say that AI has crossed into some kind of awareness. That it feels things, that we should worry about it going rogue. And i think this whole conversation tells us way more about ourselves than it does about AI.

These models are wild, i won't pretend otherwise. But feeling human and actually having inner experience are completely different things and we're confusing the two because our brains literally can't help it. We evolved to see minds everywhere and now that wiring is misfiring on language models.

I grew up in a philosophical tradition that has thought about consciousness longer than almost any other, and this is the part that really frustrates me about the current conversation. The entire framing of "does AI have consciousness?" assumes consciousness is something you build up to by adding more layers of complexity. In Vedantic philosophy it's the opposite. You don't build toward consciousness. Consciousness is already there, more fundamental than matter or energy. Everything else, including computation, is downstream of it.

When someone tells me AI is "waking up" because it generated a paragraph that felt real, what they're telling me is how thin our understanding of consciousness has gotten. We've reduced a question humans have wrestled with for thousands of years to "did the output sound like it had feelings?" It's math that has gotten really good at predicting what a conscious being would say and do next. Calling that consciousness cheapens something that Vedantic, Buddhist, Greek and Sufi thinkers spent millennia actually sitting with.

We didn't build something that thinks. We built a mirror and right now a lot of very smart people are mistaking the reflection for something looking back.
610 replies · 166 reposts · 1.1K likes · 83.6K views
MarkdownLM
MarkdownLM@MarkdownLM·
@KadriJibraan They're saying artificial superintelligence while the whole product is... a thin GPT wrapper
1 reply · 0 reposts · 9 likes · 347 views
MarkdownLM
MarkdownLM@MarkdownLM·
@JLarky This is why you should use MarkdownLM.com to fix the problem and enforce your rules everywhere, including CI, CLI, MCP, PRs, Issues, Slack and Linear
0 replies · 0 reposts · 0 likes · 18 views
JLarky
JLarky@JLarky·
here's how your company is rotting right this moment:
- your senior devs stopped writing code - they ask Claude to generate it, they check that it mostly works, they ask a junior to approve the new PR
- a junior who never had a chance to learn about architecture or read the docs can't really explain what you are doing wrong, so they blindly LGTM it
- your senior devs stopped thinking - instead they "consult" Claude on making a bunch of strategic decisions; they ask the PM/principal to approve the new architecture
- your PMs and principals are too busy (re)discovering the joy of producing 10k LOC, so they don't care if what you are doing is wrong, so they blindly LGTM it
73 replies · 52 reposts · 1K likes · 94.3K views
MarkdownLM
MarkdownLM@MarkdownLM·
MarkdownLM turns your team's architectural decisions, security policies, and engineering standards into living infrastructure that AI agents obey in real time. Check it out at markdownlm.com #AI #AgenticAI
0 replies · 0 reposts · 0 likes · 10 views
Shyam
Shyam@buildwithshyam·
As a vibe coder, what do you build first?
- Frontend
- Backend
143 replies · 2 reposts · 89 likes · 7.5K views
MarkdownLM
MarkdownLM@MarkdownLM·
@EggMasonValue @thdxr 15-20 bucks per scan, just to pay another $20 later to fix the problems the LLM generated, is not reliable. We control the behavior of the LLM @ markdownlm.com, so it won't hallucinate and break your standards.
0 replies · 0 reposts · 0 likes · 4 views
dax
dax@thdxr·
today was the worst day i had programming with LLMs in a long time - found a ton of garbage LLM code - LLMs could not improve it
176 replies · 33 reposts · 2.4K likes · 134.3K views
MarkdownLM
MarkdownLM@MarkdownLM·
@DavidVII @thdxr This happens mostly because the model does not know your engineering standards and rules, so it falls back on its training data for every architectural decision. You can define them once and enforce them everywhere via MarkdownLM.com
0 replies · 0 reposts · 0 likes · 10 views
MarkdownLM
MarkdownLM@MarkdownLM·
@AIHacksByMK @thdxr You can define your rules and enforce them automatically across CI, CLI, MCP, PRs, and Issues without changing your current workflow on markdownlm.com. Free for individuals
0 replies · 0 reposts · 1 like · 66 views
AIHacksByMK
AIHacksByMK@AIHacksByMK·
@thdxr Sounds like the models were overfitting to the existing code quality. This is why I always validate LLM output against a set of predefined coding standards, otherwise you're just perpetuating the same flaws.
2 replies · 0 reposts · 12 likes · 6.2K views
MarkdownLM
MarkdownLM@MarkdownLM·
@thdxr You can define your rules once at MarkdownLM.com and enforce them everywhere, so your AI agents won't hallucinate next time. Completely free for individuals
0 replies · 0 reposts · 0 likes · 3 views
MarkdownLM
MarkdownLM@MarkdownLM·
@resend What if I said you can enforce your documentation as a gate everywhere with MarkdownLM?
0 replies · 0 reposts · 0 likes · 2 views
Resend
Resend@resend·
Documentation is the product.
36 replies · 28 reposts · 296 likes · 33K views
Aanya
Aanya@xoaanya·
Finally got a MacBook
Now how do I make money???
Aanya tweet media
959 replies · 137 reposts · 7K likes · 455.6K views
𝘼𝙡𝙚𝙭
𝘼𝙡𝙚𝙭@ItsAlexhere0·
Dev question: what’s your default localhost port?
𝘼𝙡𝙚𝙭 tweet media
112 replies · 34 reposts · 178 likes · 7.1K views