Dragos Ilinca

7.1K posts


@dragosilinca

Kiro product marketing @ AWS, previously 2x founder. Beast of burden. Led teams in product & product marketing. Opinions my own.

SEA · Joined November 2007
2.9K Following · 2K Followers
Sterling Crispin 🕊️@sterlingcrispin·
This has been an open secret, if you could even call it that, for a long time; it's where 'permanent underclass' comes from. Dario has his "country of geniuses" and Elon has macrohard, whose explicit goal is to outcompete everyone with infinite agents and capture all the value.

At some point they'll cross a threshold where $1.00 spent on inference generates more than $1.00 in value, and they're going to turn the dial up as fast and as hard as they can. $100B spent on millions of agents captures $400B in revenue; that revenue gets spent on more agents, making more revenue, in a runaway spiral. The companies they are outcompeting lose out on that revenue, creating a growing gulf between the providers and their former customers turned competition. These companies are also no longer competitive because their API costs are $5.00 spent for $1.00 in revenue, and they'll never get the same level of intelligence-per-dollar to truly be competitive with the big labs.

The gulf between he who controls the spice and everyone else basically goes to infinity, and they have a monopoly business across essentially all knowledge work. That's the permanent underclass. Then the robots come, and Elon has a monopoly on all physical labor as well.

That's the explicit thesis of these companies and the thesis of investors. You can't really vibe code your way out of the permanent underclass unless lightning strikes and you get acquihired like the openclaw guy. If any of this really plays out, having any kind of equity exposure to the big labs will basically go to infinity.
goodalexander@goodalexander

The Big Rug Gooning is well covered in the Doom thesis. Elon's "Imagine" is digital crack cocaine being given out for free. So that's in progress. But GPT5 shows us that enterprise / tool calls is where companies are converging. This mirrors the rest of the economy: consumer apps have to use ads or extractive loops (gambling / porn / DLC video games) to monetize at scale. Or you do enterprise.

Anthropic's CEO has indicated that companies pay up to 10x as much for better reasoning, and that training a model is positive unit economics t+12 months. Which is the first time anyone is talking about unit economics. Which means the GPU costs of ppl just chatting away were getting gnarly.

So today I'll write a bit about the enterprise part of the Doom Thesis, which I call "The Big Rug." The core idea of the Big Rug is that everything you do in Claude Code, all the vibe coding you do, and even the work you do with AI that is covered by Terms of Service will end up getting completely stolen and monetized by AI research labs in order to justify their enormous valuations.

The US economic system is not sustainable. There is a single chart below that shows this. US labor productivity is growing at 1.2% while Microsoft and other AI companies are reporting token consumption growth above 400%. Salesforce indicated that 30-40% of its code is written by AI, but its cash flow from operations is up single digits along with its headcount. So -- if 100% of code was written by AI... what would everyone do again?

At the same time, AI very clearly is a thing. And there's huge demand. But the vast majority of companies are not using AI correctly. We can, in part, deduce this from basic common sense. VSCode and Copilot are terrible / borderline unusable if you use Claude Code / Cursor, but they are in hypergrowth nonetheless. People are likely drastically increasing their technical debt, because AI isn't good enough on default settings to really use to automate huge amounts of work.
At least right now. Managers are saying "use AI". And employees are doing it. And doing it poorly. Because AI valuations are so high, it's vibe coding and vaporware across corporate America. "We need an AI strategy." The most cynical VC I know just joined Cognition, selling tens of thousands of seats to financial institutions.

GPT5 isn't the Death Star. Productivity is growing at 1.5%. In boom times that can be 5-7%. We are definitively not in a productivity boom despite the hype. But then -- certainly this is not sustainable? If aggregate productivity is growing at 1.5%, how is enterprise usage going to grow sustainably at triple digits? The ROI is not going to be there. And then demand will go off an absolute cliff unless there is a fundamental nonlinear change in the cost of inference.

Indeed. Everyone knows this. Let's spell it out more specifically:

1. You are an enterprise agent company (whether that's OpenAI or Claude Code)
2. You see the slop generated by vibe coding
3. You see the same aggregate productivity statistics as everyone else, namely that: A. margins are not expanding a ton; B. aggregate productivity is also not expanding
4. You know at some point an economic downturn results in massive cuts to token usage
5. Which makes your momo Q2 2025 into a hard comp, and people start talking about a tech crash
6. But you just raised billions of dollars at a nosebleed valuation
7. You need to justify this valuation somehow or you're cooked

Enter The Big Rug.

AI usage is a bit magical because you can't point to any single person or workflow that is responsible for training data. The training process, of compression, is a big jumble. We've already seen the implications of this in IP theft. Copyrighted material is fully known by ChatGPT. We don't know exactly how it ended up in the training data bc the model weights aren't really intelligible. So it's hard to prove anyone did anything wrong, even though the copyrights are there. So if copyrights aren't enforceable --
Trade secrets, methodologies, and non-copyrighted user interfaces are *really* not enforceable. This is an important point, because many of the actions of closed-source models explicitly break copyright and other laws, but they have such enormous financial and legal firepower -- and the technical details are so hard to prove -- that if you ask Grok to render images from movies, it will. Or if you ask ChatGPT for the full plots of books, it will happily provide them. So there's already precedent for large-scale non-compliance with rules in the name of growth.

And this non-enforceability is the nature of the Big Rug. Your employees don't really care about your enterprise IP and are more than happy to use closed-source AI tools to help them be more efficient cogs. And then all this information and know-how finds its way into the training data of AI research labs.

And then, when agents come out, it won't be enterprises tailoring agents to their use cases. It will be agents essentially assembling apps that are FAR BETTER than anything those enterprises could do. With proprietary models that aren't for sale. And because AI is completely portable, these agents could be spun off in offshore compliant jurisdictions, likely with even less transparency. Or run through subsidiaries. Or even through crypto rails, which are now getting supercharged by stablecoins.

So not only did you *not get an efficiency boost*, because the vibe-coded apps were slop. You also lost all your trade secrets, IP, and know-how. And you will be competing with an AI equivalent that will destroy your margins.

Welcome to the New Economy. It's essentially the largest vampire attack in corporate history. Everyone using closed-source API models thinks they're going to be safe due to enterprise SLAs, or simply doesn't care (bc they're employees told to use Cursor or get fired). But they won't be safe. Once it's in the model, it's gone. So that's the Big Rug. And here's the funny thing.
The Big Rug is actually necessary for this productivity chart to start going up. So before you get a massive acceleration in agentic workflows, the entirety of the people who formed the basis for those agentic workflows being created will be made completely obsolete / financially ruined. After the Big Rug is when unemployment starts ticking up. Token growth will indeed go off a cliff, but it won't matter bc we will be past the facade that for some reason AGI was going to be made accessible via API.

And if I were wrong, then these AI research labs wouldn't be worth what they are. And there wouldn't be animal-spirits secondary demand for SPVs getting access to them at insane valuations. The writing is on the wall. You think you're vibe coding, but really you're contributing to the Agent that will drink your milkshake.

The reason I haven't written about the Big Rug is that it's fairly far away. It will be a bit (maybe 12 months) before the research labs go mask-off and launch agents directly instead of providing their models through APIs. Because as soon as this starts happening, suddenly every company is going to lock down the usage of its coding tools. And presumably by then the ROI calculations won't make any sense.

Smart companies will adapt early on by using self-hosted API layers and open-source models, even though they are worse. China will likely keep funding heavy open-source development because it's a way to subtly promote the Chinese worldview -- so I guess the downside will be getting brainwashed by the CCP if you want to avoid the Big Rug.

Once the Big Rug really kicks off, the enterprise software sector and any cloud player that hasn't hedged with their own AI research equity exposure will get completely shrekt. I've been a long-time hater of Accenture and AI consulting plays, as they're basically in a 1-2 year white space of hope before the hammer drops on the long-term growth they're priced for.
Of course, the majority of cloud players have piled into the labs for exactly this reason. If GPT5 were incredible, I think we'd have a bit more time before this narrative kicks into gear. But now that the disappointment is there, the enterprise focus is there, and the abrupt 'focus on unit economics' is appearing, the second part of the doom thesis -- the de-rating of everything non-AI -- should begin percolating.

In crypto I am long Ambient to express this view, but it's a private holding with a minable testnet coming soon. We're working to design my own network to be more robust to the Big Rug (I think Google Docs, Microsoft Word, and GitHub Gists are all basically going into the training data, so we are migrating to Proton Docs and using more encryption).

In stonks it's genuinely terrible for the whole IT services sector (or anything in a software index that isn't heavily long OpenAI, Anthropic, or DeepMind). The white-collar unemployment kick from the Big Rug should result in lower interest rates due to higher unemployment. The breach of trust / economic shock should result in lower equity multiples. Other financial implications I'm still thinking through, but I just wanted to put this out here.

Dragos Ilinca@dragosilinca·
@trq212 I honestly thought this was a $500 pricing plan
Thariq@trq212·
New in Claude Code: /ultraplan Claude builds an implementation plan for you on the web. You can read it and edit it, then run the plan on the web or back in your terminal. Available now in preview for all users with CC on the web enabled.
Dragos Ilinca@dragosilinca·
@felixrieseberg It's really nice, much better than Opus. The only tell is that it's a bit repetitive: it tries to make the same point too many times while also trying to be a bit surprising. There's also little emotion.
Felix Rieseberg@felixrieseberg·
Now that the Mythos system card is out, I need to tell everyone that I'm mildly obsessed with its prose.
Dragos Ilinca@dragosilinca·
@esrtweet If that's the case, won't most software turn into web services as a response?
Eric S. Raymond@esrtweet·
Fast, cheap AI-assisted decompilation of binary code is here. Which means code secrecy is dead.

Decompilers in themselves are not a new technology. Security researchers have employed them for years to analyze compiled malware. There's been some limited use by others, notably by hobbyists decompiling abandonware games. But there were a couple of issues that prevented this from becoming common practice.

One is simply that running decompilers was difficult. It wasn't as simple as feed in binary, get out source; it needed a person with specialist skills prepared to do spelunking through wildernesses of machine code and object formats. The other problem was that decompilation didn't give you anything like the explanatory comments that had been in the original code, so you could easily wind up with code that you could read without being able to understand or modify it.

Now large language models are busily smashing both of those barriers flat. They're better at the kind of detail analysis required to run the human side of a decompilation than humans are. More importantly, in the process of decompiling code, they rather automatically build a global model of how it works that can easily be expressed as high-quality comments in the extracted code. All you have to do, basically, is ask for the comments.

I'm going to reinforce that latter point, because it may not be obvious how good LLMs are at this, and how much better they're going to get. When they decompile code and comment it for you, they're not just working from that one piece of code you have put in front of them -- they'll have in their training set hundreds, possibly thousands of pieces of code similar to it, and with comments. This will give them superhuman levels of insight not just into what it does at the micro level, but what it means to the humans who wrote it, and what technical assumptions it's embodying.

Compilation no longer guards your secrets. Or, to put it more precisely, the expected time span in which you can still count on it to obscure them is measured in months. Possibly weeks.

What does this mean? It means you're in an open-source world now. All it's going to take for anybody to bust your proprietary IP open is to care enough to spend tokens on the analysis. You will maximize your chances of survival as a software business if you get out ahead of this rather than trying to fight it. This isn't exactly the way I expected open source to win. But, you know, I'll take it. Good enough.
Tom Goodwin@tomfgoodwin·
If you brought someone into 2025 from 1825, what would be the most amazing things they'd notice that we don't tend to think about?
David K 🎹@DavidKPiano·
@dragosilinca Tell me Kiro uses state machines in its specs and I'll give it a try
David K 🎹@DavidKPiano·
More developers should learn about property-based and model-based testing. Unit, integration, and E2E tests are fine for "this should work" but are usually not good enough to test "this should never happen".
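The distinction can be made concrete with a minimal sketch. This is not QuickCheck or any specific library, just a hand-rolled illustration of the property-based idea: generate many random inputs and assert an invariant that should *never* be violated, instead of checking a few hand-picked cases. The `dedupe` function and all names here are my own toy example.

```python
# Hand-rolled sketch of property-based testing: random inputs + invariant.
import random

def dedupe(items):
    """Remove duplicates while preserving first-seen order (toy example)."""
    seen, out = set(), []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

def check_property(prop, gen, runs=500):
    """Assert that `prop` holds for `runs` randomly generated inputs."""
    for _ in range(runs):
        xs = gen()
        assert prop(xs), f"property violated for input: {xs!r}"

def never_contains_duplicates(xs):
    result = dedupe(xs)
    # "this should never happen": no element appears twice,
    # and no elements are invented that weren't in the input
    return len(result) == len(set(result)) and set(result) <= set(xs)

check_property(
    never_contains_duplicates,
    lambda: [random.randint(-5, 5) for _ in range(random.randint(0, 20))],
)
```

A unit test would pin one input to one output; the property above makes a claim over the whole input space, which is what catches the "should never happen" class of bugs.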
Dragos Ilinca@dragosilinca·
It uses things like QuickCheck and Tyche and some other secret sauce. But what makes this work is generating requirements in EARS notation (from a prompt; it does that on its own, you don't need to write the spec yourself), which is a bit more formal than a basic plan.md. And this allows it to pull out properties where it can, which it can then test as it implements tasks with agents.
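For readers unfamiliar with EARS ("Easy Approach to Requirements Syntax"): it constrains requirements to fixed sentence templates, e.g. the event-driven form "WHEN <trigger>, the <system> SHALL <response>". Below is a purely hypothetical sketch of how one such requirement maps onto a testable property; the account example and every name in it are mine, not Kiro's actual mechanism.

```python
# Hypothetical illustration: an EARS-style requirement and a property
# derived from it. None of this reflects Kiro's internals.
import random

REQUIREMENT = (
    "WHEN the user requests a withdrawal larger than the balance, "
    "the account SHALL reject it and leave the balance unchanged."
)

class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            return False  # rejected; balance untouched
        self.balance -= amount
        return True

def overdraw_is_rejected(balance, amount):
    """Property read straight off the EARS requirement above."""
    acct = Account(balance)
    if amount > balance:  # WHEN the trigger fires...
        ok = acct.withdraw(amount)
        # ...the system SHALL reject and leave the balance unchanged
        return ok is False and acct.balance == balance
    return True  # trigger not fired; property holds vacuously

# Exercise the property over many random (balance, amount) pairs.
for _ in range(500):
    assert overdraw_is_rejected(random.randint(0, 100), random.randint(0, 200))
```

The point of the structured "WHEN ... SHALL ..." form is that the trigger and the required response fall out mechanically as the `if` condition and the assertion, which is harder to do from free-form prose in a plan.md.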
Dragos Ilinca@dragosilinca·
@LewisCTech @LukasHozda Then you have to take the other side of the coin too. If your software breaks, you are responsible and potentially liable. Just like a civil engineer or lawyer or auditor.
Lewis Campbell@LewisCTech·
@LukasHozda We should have gate kept. We should have had certifications. It should have become a real engineering field with real protection. Instead we let any moron in and now github has zero 9s of reliability.
scott belsky@scottbelsky·
got an early glimpse of "Noon." Three things to note vs. others in the market: #1 - you work directly on your production code without any translation (other design tools that use MCP and Claude/Codex to work on code have their AI produce temporary artifacts). #2 - you don't need multiple tools to do your product design work. #3 - it lets you work on both visual and functional design while working on the production code (one single source of truth). Fascinating how quickly this market is accelerating... and how close we are getting to the elusive design=code=design moment we've all been waiting for.
Aditya Bandi@bandiaditya

I’m thrilled to announce we’ve raised $44M to build a new home for product design. Meet @noondesign. No workflow is more broken and fragmented in 2026 than the product designers’. The very same people who care most about building software don’t have software purpose built for them. @kushagrasinha7 and I have lived this problem first hand as designers ourselves. That’s why we built Noon. The first product design tool that works entirely on your product code, so you can design not only how a product looks, but also how it works. With AI at its core that works in seconds, not minutes. For the first time, you can create, iterate, build, test and ship. All in one canvas. No translations or roundtrips to the codebase and back. Comment “Get Noon” and we’ll get you on the list for early access.

Dragos Ilinca@dragosilinca·
"destroying the editorial credibility that made the outlet valuable in the first place" - in my mind there are soooo few places left that have editorial credibility anyway. Most are doing some sort of pandering for some goal, whether basic financial survival, or building status, or whatever.
signüll@signulll·
a huge risk factor for openai is that media acquisitions by tech peeps have an almost perfect track record of destroying the editorial credibility that made the outlet valuable in the first place. bezos/wapo is the canonical example, it's now widely perceived as captured regardless of actual editorial independence. openai buying tbpn likely immediately makes every piece of tbpn coverage read as propaganda to exactly the audience they need to persuade (policy elites, skeptics, etc). i wonder how they thought through this risk structure (the deliberations would've been fun). but ~$200m is peanuts to openai so prolly worth doing regardless.
Dragos Ilinca@dragosilinca·
I think this came from Peter Drucker, who advocated for a professional managerial class who became experts in 'management' rather than the work being done. His argument was that good management involves psychology, negotiation, and other disciplines that needed study and specialization. I don't think it was ever about information routing. I do think that player-coach can work, but many times it leans too heavily on player without building strong enough skills on the coach side. What's missing is the prioritization and unblocking, and accounting for the fact that humans are not automatons. That doesn't seem to factor into the 'world model'.
OBJ@owenbjennings·
cheers, all good. yeah my POV is that the *pure* manager role / "Professional Manager" thing doesn't make sense. that's why we functionalized Block so everyone rolls into their discipline and we build world-class centers of excellence. this vs. a designer / engineer reporting to a Professional Manager who lacks context and understanding and couldn't do the work themselves. my personal POV is that someone can be a great manager while building - - in fact, they're a better manager if they are in it creating daily (i.e., a player coach). I have 20 directs. I build and do work myself daily. I also focus on the craft of product mgmt and marketing at block. And I focus on the development of ppl around me. those are all self-reinforcing in my mind. can't imagine being a Professional Manager
Kris Puckett@krispuckett·
It’s a bit funny. This essay misses the relational element of creating with others: 2,000 words and zero thoughts about trust, relationships, humanity, and soul. I know I’m in constant danger as a middle manager. If all I’m doing is passing info back and forth, I don’t belong at that place. It’s the idea du jour that middle management should die. But great middle managers can create and shape culture, trust, speed, and safety. Maybe my role should die; maybe it will. AI can route information better than managers. It can. The question is whether the things that make teams actually work (trust, safety, someone willing to fight for your growth) can be modeled well and with a human touch. Jack wrote 2,000 words about how organizations work and never once mentioned why people do their best work. It’s never an information problem, it’s a human one.
jack@jack

x.com/i/article/2038…

Dragos Ilinca@dragosilinca·
Exactly, I don't think these agencies can get tech multiples. As AI improves, this becomes easier to replicate by competitors. And those multiples represent future growth. In this space, who's to say the agency will even be around in 2 years? What's interesting to think about though is, what makes a customer pick one agency over another, if all of them are a handful of folks using AI? Brand, partner reputation, GTM, customer acquisition and service workflows, retention and expansion strategy, niche or audience specialization might all make a big difference.
Nick Huber@sweatystartup·
@thesamparr It'll be a race to the bottom and agencies will make less revenue per customer and the same margins. Profit gets competed away.
Sam Parr@thesamparr·
The agency business model just got really interesting. Shaan and I were talking about this thesis called "service as a software" on MFM. I always thought running an agency was a huge pain in the ass. But AI flips the math. The old model requires an army of humans to get things done, which meant low margins and low multiples. So you replace the human labor with AI, where one person can do the work of seven. At the same time, private equity firms are shifting their budgets away from SaaS to buy up these new service companies. A traditional agency that might run on 40% gross margins is now an AI service biz that hits 75% and gets tech multiples. Wild shift.
Dragos Ilinca reposted
Adam KP@AdamKPx·
I’m mad I didn’t think of this first 😭 this is so fun and creative
Dragos Ilinca@dragosilinca·
@trikcode What do you mean you can't debug what you didn't write? Of course you can.
Wise@trikcode·
Vibe coding creates a dangerous illusion: You think you built it. You think you understand it. You push to production. Your users find the bugs you never could. Because you can't debug what you didn't write.
Dragos Ilinca@dragosilinca·
Fair take, but one could also make the opposite case: whoever spends their time replacing SaaS for a few K a year in savings lacks imagination for building their own business. Like you said, you need to be on top of things and react quickly to stay afloat in this market. Replacing some non-critical SaaS is a distraction.
claire vo 🖤@clairevo·
I think everyone who says “no one will EVER build all this in house” lacks imagination. SaaS maybe not dead but def in for a wake up call.
Dragos Ilinca@dragosilinca·
@Austen Also wasn’t AI supposed to, like, write all the code now?