Mo

1.8K posts

Mo

@atmoio

Exploring what AI actually is and sharing my learnings. Building @shapeworkspace, prev @standardnotes. Talking at https://t.co/814DpgwSzr and https://t.co/vlHyF3gEjn.

Joined January 2026
18 Following · 26.2K Followers
Pinned Tweet
Mo@atmoio·
I was a 10x engineer. Now I'm useless.
1.5K · 1.7K · 16.1K · 5.9M
kvick@kvickart·
@atmoio it's definitely not obvious what the best code paths are; if anything it's way more difficult to know the optimal paths now
1 · 0 · 1 · 14
Mo@atmoio·
@avazqa like this. they’re everywhere. mp4 gif, catchy hook. 7-8 short paragraphs
Aakash Gupta@aakashgupta

Three months ago, the consensus was that Cursor was cooked. Claude Code crossed $2.5B in run-rate revenue. Google paid $2.4B for Windsurf's IP and poached its leadership into DeepMind. OpenAI acquired Astral, the team behind Python's uv package manager, to feed Codex. Viral tweets were circulating about developers ditching Cursor for Claude Code. The usage-based pricing switch last July had users posting surprise bills on Reddit. Consumer subscriptions were running at negative margins because every token served was profit for Anthropic or OpenAI. The company that popularized vibe coding was getting buried by the model providers it depended on.

Then Cursor shipped four major releases in 15 days. JetBrains support on March 4. Automations on March 5. A plugin marketplace with 30+ partners on March 11. And now Composer 2, their own model that mogs Opus 4.6 on cost while matching it on performance.

Look at the chart. Composer 2: 61.3 on CursorBench at $0.50 per million input tokens. Opus 4.6: 58.2 at $5.00. GPT-5.4: 63.9 at $2.50. The performance gaps are single digits. The cost gap between Composer and Opus is 10x.

The part nobody's pressing on: Cursor still won't name the base model. Their blog says "our first continued pretraining run," which means they took an existing model and continued training it on code. When the original Composer launched in October, developers kept catching it responding in Chinese. Same tokenizer patterns as DeepSeek. Nathan Lambert congratulated the research team by tweeting "open weight base models + incredible ML teams in a specific niche can create immense value." Co-founder Aman Sanger told Bloomberg it was trained exclusively on code. Can't do taxes, can't write poems. A Chinese open-source chassis, refined with what Cursor calls compaction-in-the-loop RL, and fed by the billion lines of user code flowing through the editor every day. That data flywheel is the one asset no API provider can replicate.

The honest read requires some skepticism, though. CursorBench is Cursor's own internal benchmark. They built the test, then showed you they pass it. GPT-5.4 still leads on Terminal-Bench 2.0, which is independently maintained. And Opus 4.6 at high thinking effort still outscores Composer 2 on raw accuracy. The cost advantage is real. The performance-parity claim needs external validation before anyone should take this chart at face value.

But here's why the chart matters anyway. This was the P0 coming out of the holidays. Building their own model was existential. Every dollar Cursor paid Anthropic per token was margin funding the competitor building Claude Code to replace them. Every dollar paid to OpenAI funded Codex. The only way to stop bleeding cash to the companies trying to kill you is to stop using their models.

Four hundred employees. $2B ARR. Reportedly raising at $50B. Entering the model race against labs with thousands of researchers and tens of billions in compute. That chart is the fundraising slide. Whether it holds up in production against Opus and GPT-5.4 is a different question. But three months ago, the question was whether Cursor would survive at all.

0 · 0 · 0 · 16
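The cost math in the quoted chart is easy to sanity-check. A quick sketch (scores and per-million-input-token prices are as quoted in the post above, not independently verified):

```python
# CursorBench scores and $/1M-input-token prices, as quoted in the post
# (not independently verified).
models = {
    "Composer 2": {"score": 61.3, "price": 0.50},
    "Opus 4.6":   {"score": 58.2, "price": 5.00},
    "GPT-5.4":    {"score": 63.9, "price": 2.50},
}

# How much more Opus costs than Composer per input token.
cost_gap = models["Opus 4.6"]["price"] / models["Composer 2"]["price"]

# Benchmark deltas relative to Composer 2, rounded to one decimal.
score_gaps = {
    name: round(m["score"] - models["Composer 2"]["score"], 1)
    for name, m in models.items()
}

print(f"Opus/Composer cost gap: {cost_gap:.0f}x")   # 10x
print(f"Score deltas vs Composer 2: {score_gaps}")  # GPT-5.4 +2.6, Opus 4.6 -3.1
```

So the tweet's framing checks out arithmetically: single-digit score deltas against a 10x price spread, with the caveat that the scores come from Cursor's own benchmark.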
Mo@atmoio·
With LLMs we’ve basically invented digital candy. LLM-written content is designed to maximize engagement in a way human text will never be able to match. It’s why AI-written posts constantly go viral on here. AI will win not because it’s better but because it’s more delicious.
36 · 5 · 78 · 5.1K
Mo@atmoio·
@rgb53562 yeah, i mean there will be some demand for authenticity. but only from the “artsy” type people i think who recognize what “true” writing/art looks like. always a minority
0 · 0 · 0 · 2
rgb hr@rgb53562·
One could hope that post-scarcity for glazing, smarmy, lying, and maximally belief-affirming prose will trigger a backlash and a demand for more authenticity. But sadly, I fear, tactics like this do not stop working. No matter how absurd they become, they retain some emotional effectiveness. That's why even the most obvious autocracies STILL bother to organize sham elections.
1 · 0 · 1 · 9
Mo@atmoio·
@mr_mechis seems possible in theory. but between theory and practice can sometimes be hundreds of years
0 · 0 · 0 · 6
MrMechis@mr_mechis·
@atmoio The question is, can we get to a point where we just git the prompt and maintain that? Analogy, we don't keep around the OBJ files, we can do a clean and rebuild. Can CodingAI get to that point?
1 · 0 · 1 · 8
Ali Rahimpour@alirahimpour89·
@atmoio One thing I realized is that LLMs often fix the issue without understanding why it exists; the fix often ends up where the symptom surfaces instead of addressing the real cause. They don't say "hey, this should be fixed in the backend instead of in the app, let's do that"...
2 · 0 · 1 · 33
Julia Turc@juliarturc·
@atmoio If you want, I can tell you the top 3 reasons why nobody could have seen this coming. Just say the word. 🤢🤢🤢
2 · 0 · 5 · 237
Justin ("Goju") Gottschlich@j_gottschlich·
@atmoio And more spectacular -- and by spectacular, I don't mean that necessarily in a good way. I mean it like "it creates a SPECTACLE." Might be a good spectacle. Might be a bad one.
1 · 0 · 1 · 35
Mo@atmoio·
@meabed there will be both for sure. just as with junk food
0 · 0 · 1 · 22
Meabed@meabed·
@atmoio I think the human brain will outgrow the artificial sweetener and once again go back to the healthy organic product.
1 · 0 · 1 · 25
Nicholas Dwork@ndwork·
@atmoio LLMs get a simple prototype up and running sometimes. As the project becomes complicated, when it no longer works and AI can’t fix it, I find that I’m unable to wade through the mass of code created and address any issues. Like any devil’s bargain, it seems we lose.
1 · 0 · 1 · 10
Mo@atmoio·
@meabed me.io is wayyy better holy shit lol nice
0 · 0 · 1 · 13
Mo@atmoio·
@meabed yeah lots of hype driven narratives at play
1 · 0 · 1 · 74
Meabed@meabed·
The tables will turn once the complexity debt requires the larger effort that was supposed to be built with people. All these execs pushing narratives only do it to benefit themselves; they did this during covid, pre-covid, and pretty much always. When you have skin in the game you can’t have a fair assessment or an honest opinion. It’s all BS
1 · 0 · 2 · 80
Mo@atmoio·
@akshayramabhat i’m pretty sure my marriage contract forbids it
1 · 0 · 4 · 306
Mo@atmoio·
hmmmm i mean a lot of what you’re saying is true, but it’s somewhat a caricature. it’s true that developers constantly want to rewrite, but it’s really more about needing a bigger car every time you have a new kid: squeezing in more and more features really does require a rewrite many times. not sure why or how llm black-box code suddenly overcomes this
1 · 0 · 0 · 45
Brennan McEachran 👨‍🚀
But it also dismisses humans' absolute addiction to spaghetti code.
- Every time a new dev takes over a project, it "needs to be rewritten because it's all a mess"
- Every 7 yrs we need to rewrite it completely, from Perl to PHP to Node to Rust, otherwise we can't support [x] feature
- I've personally built layouts with framesets, tables, then divs and floats, then flexbox, then grids, then RN yoga through react-native-to-web. Rounded corners with 4 images, then sliding-window bg images, then css.
- Servers in my closet, on the cloud, on a vm, serverless, and "serverless servers".

It's all spaghetti. It was always spaghetti. It's just that everyone thinks their own shit doesn’t stink. Then they quit or get poached for a higher-paying job...

Man, we used to yell at each other over tabs vs. spaces. Now we yell at agents over long-term code maintainability... it's actually a huge improvement.
GIF
1 · 0 · 0 · 63
Mo@atmoio·
@JonLaRose yeah his lens is definitely through his businesses. as is most people’s tbh
1 · 0 · 1 · 67
HarShosh@JonLaRose·
@atmoio A theory: Chamath is currently focused on getting private equity firms to buy into his relatively new startup, 8090. That’s probably what’s driving his recent doomsday predictions. It doesn’t mean he’s completely wrong, but there’s clearly some motivation behind them
1 · 0 · 2 · 75
Mo@atmoio·
@i_am_brennan yeah i hear ya, i'm mostly thinking out loud. i don't even think my stance is economically defensible, since you can live off spaghetti code for years or even a decade before it comes due. but for serious engineering orgs, it might matter much more
1 · 0 · 0 · 153
Brennan McEachran 👨‍🚀
Mo, I literally think you're a genius and I quite enjoy your content... but this is wrong. Judgement is inherently easier than creation. That's why every human can rate a movie 7/10 but very few can make a 7/10 movie. LLMs have proven to be good critics. They're now quite capable creators (yes, they currently have a spaghetti code problem, but they can also spot that with a fresh context window and a good prompt).

This is currently a harness problem if you work in the app layer, or an RLHF problem if you work in the ML layer. But it's also going to be RL'd away completely through self-verifying loops. Passing judgement on a moment-in-time issue on tech that's improving on an exponential is like "640K is more memory than anyone will ever need."

I get it. The hype is unbearable and a contrarian opinion is actually refreshing. But I also think it's worth shaking people awake... Shit is changing.
1 · 0 · 1 · 187
Mo@atmoio·
@xericxenarthra @TomasPiaggio yeah but running those markdown files through an llm doesn’t guarantee you’ll get working code each time. with a c compiler such a guarantee is of course the whole point
2 · 0 · 1 · 47
xericxenarthra@xericxenarthra·
@atmoio @TomasPiaggio As in: deterministic, a complete description of the intent of the code itself. If all you have is a bunch of prompts, you can't reasonably call that "code", because any run of them through your AI "compiler" is different.
1 · 0 · 0 · 46
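The compiler-vs-prompt determinism point in this exchange can be made concrete. A compiler maps the same source to the same output on every run; an LLM decoding with temperature > 0 samples tokens from a distribution, so the same prompt can yield different code. A toy sketch, using a made-up three-token distribution as a stand-in for a model (hypothetical, not any real LLM):

```python
import random

# Hypothetical next-token distribution standing in for an LLM's output head.
VOCAB_PROBS = {"return": 0.5, "yield": 0.3, "raise": 0.2}

def decode(temperature: float, rng: random.Random) -> str:
    """Pick one token. temperature == 0 is greedy argmax: deterministic,
    like a compiler. temperature > 0 samples: same 'prompt', varying output."""
    if temperature == 0:
        return max(VOCAB_PROBS, key=VOCAB_PROBS.get)
    # Temperature-scaled sampling: reweight probabilities, then draw one token.
    weights = [p ** (1.0 / temperature) for p in VOCAB_PROBS.values()]
    return rng.choices(list(VOCAB_PROBS), weights=weights, k=1)[0]

# 100 "runs" with different RNG states, same fixed distribution each time.
greedy  = {decode(0.0, random.Random(seed)) for seed in range(100)}
sampled = {decode(1.0, random.Random(seed)) for seed in range(100)}

print(greedy)   # one token, every run
print(sampled)  # several distinct tokens across runs
```

Even the temperature-0 case is only deterministic in this idealized sketch; hosted LLM APIs can drift from bit-exact reproducibility due to batching and floating-point effects, which is exactly why "prompts as source code" doesn't behave like a C compiler.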
Mo@atmoio·
@happy2png yeah, the Gell-Mann amnesia effect
0 · 0 · 1 · 130
Happy@happy2png·
@atmoio I saw a post saying that if you use AI in places where you have no understanding, it looks great, because you don't know what's bad. If you do, though, I still have to un-shittify whatever it produces by default. It's a great tool, but it's not at that point yet.
1 · 0 · 3 · 155
Mo@atmoio·
@Utkarsh51557661 i guess it’s not that it lacks substance but rather a voice
0 · 0 · 0 · 14
Utkarsh Singh@Utkarsh51557661·
@atmoio Wonder how long until people crave substance again over AI's candy. Think we'll reach that point soon?
1 · 0 · 1 · 20
Mo@atmoio·
@Conor_Code it’s true. but it loses the author’s identity.
1 · 0 · 1 · 32
Conor@Conor_Code·
@atmoio I think in a way it can be a positive. Sometimes getting a detailed point across can be tricky. AI can help you create more digestible text.
1 · 0 · 1 · 32