Manil Vasantha
@ManilVasantha
1K posts
Joined April 2023
178 Following · 112 Followers

Pinned Tweet
Manil Vasantha @ManilVasantha
Hello X 👋 — I’m Manil. CCO by profession, AI infrastructure economist by passion. I write about AI infrastructure, GPU economics, hyperscaler strategy, depreciation mismatches, and the silent incentives driving the $1T AI buildout. Expect orthogonal takes, uncomfortable truths, and clear frameworks. Let’s begin.
0 replies · 0 reposts · 16 likes · 471 views

Manil Vasantha @ManilVasantha
AI's impact on real estate that no one is questioning:
→ Software dev hiring down 20% since 2022
→ 10-15% of jobs eliminated within 5 years
→ 53% of Gen Z already freelance
→ South Korea's fertility rate: 0.75
A generation that won't marry, won't stay put, and works from a laptop doesn't need a 30-year mortgage. The $13T US mortgage market was built on three assumptions: stable income, one city, one career. AI just broke all three. #AIDisruption #FutureOfWork #RealEstate
0 replies · 0 reposts · 2 likes · 40 views

Manil Vasantha @ManilVasantha
Compliance didn’t fail. Your model did. You passed PCI 3 months ago. You still lost billions. Whose fault is it?
Now add AI: Anthropic writes code, Mythos finds more exploits.
• Risk is continuous
• Static scans = false confidence
• Runtime ≠ prevention
The shift: validation → control. Why can “compliant” systems still collapse overnight?
0 replies · 0 reposts · 0 likes · 10 views

Sid Sijbrandij @sytses
Looking forward to speaking at OpenAI Forum in a week on how I leveraged ChatGPT to find cancer treatment options after doctors said there was nothing left for me to do. forum.openai.com/public/events/…
26 replies · 64 reposts · 613 likes · 336K views

Manil Vasantha @ManilVasantha
@grok is this for real? Does not sound like Trump.
[image]
0 replies · 0 reposts · 0 likes · 15 views

Manil Vasantha @ManilVasantha
I’ve done this more than once. It feels right in the moment — but it usually costs you more than you think. Quitting without a plan isn’t courage. It’s lost leverage. Toxicity matters — but so do timing, responsibilities, and exit quality. Not all exits should be immediate. Consider all your input variables and prioritize them according to your situation in life.
0 replies · 0 reposts · 0 likes · 43 views

Nithya Shri @Nithya_Shrii
Is it okay to resign because of a toxic work environment even if you don’t have a backup yet?!
1.7K replies · 336 reposts · 4.4K likes · 470.4K views

Manil Vasantha @ManilVasantha
@Forbes ‘Product graveyard’ is the absolute wrong lens. In AI, velocity > stability. If you’re not killing products, you’re not learning fast enough to win the control layer. AI velocity precedes any product roadmap!
0 replies · 0 reposts · 3 likes · 1.2K views

Forbes @Forbes
The OpenAI Graveyard: All The Deals And Products That Haven’t Happened forbes.com/sites/phoebeli… (📸: Cody Pickens for Forbes)
[image]
47 replies · 82 reposts · 334 likes · 424.8K views

Manil Vasantha @ManilVasantha
Pod dynamics are underrated.
@Jason drives energy. @chamath drives narrative. @friedberg drives depth. @DavidSacks drives precision.
When airtime skews, you don’t lose volume—you lose signal. The best pods aren’t loud. They’re balanced. The best balance is always on the 'best podcast in the world!'
0 replies · 0 reposts · 0 likes · 10 views

David Sacks @DavidSacks
Amazing.
291 replies · 1.1K reposts · 8.3K likes · 854.9K views

Manil Vasantha @ManilVasantha
Automation isn’t the problem. Loss of control is.
Every generation splits the same way:
→ Some lean in early
→ Some resist until forced
Change is inevitable. Adoption is psychological. The real question: Are you resisting risk… or resisting irrelevance? @elonmusk @grok your thoughts?
1 reply · 0 reposts · 0 likes · 47 views

Manil Vasantha @ManilVasantha
@ThisWeeknAI @Jason @clattner_llvm TPUs may go on-prem as Google-specific workloads go hybrid. AMD may still win on price. NVIDIA still owns the fabric, the ecosystem, the OS, and everything else. With Marvell and photonics, they strengthen their data-center expansion.
0 replies · 0 reposts · 0 likes · 35 views

This Week in AI @ThisWeeknAI
GOOGLE SIGNS $5B DEAL WITH ANTHROPIC
@Jason: Who is Nvidia's biggest competitor?
@clattner_llvm: "Google... They are way better already and have the opportunity to add a couple trillion to their market cap."
From episode 6 of This Week in AI.
17 replies · 58 reposts · 623 likes · 161.3K views

Manil Vasantha @ManilVasantha
We trained generations to win on intelligence. AI just commoditized it.
The uncomfortable truth: If your edge was “being smart”… you’re now average.
What matters next:
→ Who imagines
→ Who drives
→ Who people trust
The game didn’t end. It changed. #AI #Careers #FutureOfWork #Leadership
0 replies · 0 reposts · 0 likes · 104 views

Xiaoyin Qu @quxiaoyin
Comparing human intelligence in the AI era is like flexing your muscles in front of a tractor. Nobody's impressed.

Think about it. In farming societies, physical strength was everything. "That kid is strong!" was the highest compliment. Then tractors arrived and suddenly nobody cared about your biceps.

We moved on to intelligence. "That kid is so smart!" became the new praise. IQ, test scores, intellectual horsepower. That's what we've been competing on for decades. Well, at least for us Asians.

Now AI is smarter than all of us. I've fully accepted this. Compared to AI, I'm not that smart. Neither are you. And when two humans collaborate on intellectual work, we're basically two less-intelligent beings arguing while slowing down the AI that could do it better.

So what do humans compete on next? I think it comes down to three things: imagination, desire, and trust.

Imagination because AI can execute anything but it can't dream up what to build. The person who can define what people want, who can create desire out of nothing, that's real power. Think about Hermès convincing you one bag is worth more than another. There's zero logic in that. Pure human persuasion.

Desire because the people who constantly generate new ideas, new ambitions, new "what ifs" will be the ones directing armies of AI to build their visions.

Trust because when every company has perfect AI execution, the reason I choose to work with you over someone else comes down to whether I trust you. Charisma, reputation, relationship. These become the real competitive advantages.

We competed on strength. Then intelligence. Let's wait and see what's next. #AI #FutureOfWork #Leadership #HumanSkills
79 replies · 58 reposts · 283 likes · 55K views

Manil Vasantha @ManilVasantha
@a16z It’s financial cleanup now + AI justification layered on top. And Wall Street approved.
0 replies · 0 reposts · 0 likes · 339 views

a16z @a16z
Marc Andreessen says AI is the "silver bullet excuse" for companies laying people off, but most layoffs are actually due to higher interest rates and overstaffing during COVID:

"This entire labor displacement thing is 100% incorrect. It's completely wrong. It's classic zero-sum economics."

"It was the combination of the two—interest rates going to zero during COVID, and then the complete loss of discipline at all these companies when they went virtual and when employees just became an icon on a screen."

"What you have happening right now is that essentially every large company is overstaffed. We could debate how much—it's at least overstaffed by 25%. I think most large companies are overstaffed by 50%. A lot of them are overstaffed by 75%."

"And now they all have the silver bullet excuse—it's AI."

@pmarca with @HarryStebbings
83 replies · 139 reposts · 1.2K likes · 185K views

Manil Vasantha @ManilVasantha
Not dumb — just incomplete. You can use recycled/dirty water… but not raw. Cooling systems need stable chemistry.
Dirty water = minerals + bacteria:
→ scale (kills heat transfer)
→ corrosion (kills pipes)
→ biofilm (kills efficiency)
So we don’t use “clean” water. We use engineered water. AWS currently plays a major role in this.
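The chemistry argument above can be made concrete. Water-treatment engineers screen cooling water with saturation indices such as the Langelier Saturation Index (LSI = pH − pHs): positive values predict scale, negative values predict corrosion. A minimal sketch using the standard textbook Langelier correlation; the sample water values are hypothetical, not AWS figures:

```python
import math

def langelier_index(ph, temp_c, tds_mg_l, ca_hardness_mg_l, alkalinity_mg_l):
    """Langelier Saturation Index: LSI = pH - pHs.
    LSI > 0 -> scale-forming water (CaCO3 deposits kill heat transfer);
    LSI < 0 -> corrosive water (attacks pipes).
    Hardness and alkalinity are expressed as mg/L CaCO3 equivalent."""
    a = (math.log10(tds_mg_l) - 1) / 10            # total dissolved solids term
    b = -13.12 * math.log10(temp_c + 273) + 34.55  # temperature term
    c = math.log10(ca_hardness_mg_l) - 0.4         # calcium hardness term
    d = math.log10(alkalinity_mg_l)                # alkalinity term
    ph_s = (9.3 + a + b) - (c + d)                 # saturation pH
    return ph - ph_s

# Hypothetical "dirty" recycled water: high TDS, high hardness
lsi = langelier_index(ph=8.2, temp_c=35, tds_mg_l=1500,
                      ca_hardness_mg_l=400, alkalinity_mg_l=300)
print(f"LSI = {lsi:+.2f}")  # positive -> treat before use, or it scales
```

"Engineered water" means dosing and blowdown to hold this index near zero — neither scaling nor corroding.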
0 replies · 0 reposts · 2 likes · 5.7K views

Nithya Shri @Nithya_Shrii
I have a dumb question but why can’t AI machines use dirty water instead of clean water?!
391 replies · 135 reposts · 16.1K likes · 3.3M views

Manil Vasantha @ManilVasantha
Oracle Cloud Infrastructure never chased features. They built for throughput. 50M Tx/sec vs 1M wasn’t evolution — it was a different system. Now add NVIDIA + photonics. If AI becomes a throughput game, OCI wins where it matters. #OCI #NVIDIA #HPC #AIInfrastructure
0 replies · 0 reposts · 1 like · 56 views

Manil Vasantha @ManilVasantha
Amazon Web Services tried to move off NVIDIA. Customers didn’t. Now AWS is doubling down because AI demand follows performance — not strategy. This isn’t a partnership. It’s a dependency. #AWS #NVIDIA #AIInfrastructure #Cloud
0 replies · 0 reposts · 1 like · 56 views

Kiera 🌱 @kieralwellness
I don’t really care, I’m rejecting AI & fast civilisation. I don’t want a real time microbiome test or a snapshot of my metabolic profile upon waking. I don’t want 20 ecommerce brands or investment portfolios. I want dirt on my boots, wind in my hair, a child on my hip and dough on my finger tips. I want the luxury of illness and the sweet kiss of recovery. I want the appreciation of death and the pleasure of heartbreak following the luxury to have loved well. And above all else I want real, I want real human experience and not this humanoid nonsense. It’s a privilege to be human, so easily broken and so fragile. The experience that we are having is only made beautiful because it is filled with so much that is not.
164 replies · 159 reposts · 1.2K likes · 32.9K views

Manil Vasantha @ManilVasantha
@sweatystartup “If you think AI = swapping GPTs, you’re not building a company. You’re calling an API.”
The moat is:
→ data
→ workflow
→ control
Not the model.
0 replies · 0 reposts · 2 likes · 66 views

Nick Huber @sweatystartup
Investing heavily in AI at your company will backfire. You are becoming dependent on something that is unsustainable. The VC money will dry up once they realize nobody is going to make any money in the long run except NVDA and the power companies. The subsidies will stop. And your costs will 5x. There is no moat in AI. Switching from GPT to Gemini to Grok to Claude takes seconds and you don't miss a beat. It's a house of cards.
391 replies · 60 reposts · 810 likes · 97.8K views

Manil Vasantha @ManilVasantha
This is the pattern:
→ Social proof replaces diligence.
→ Revenue growth replaces architecture scrutiny.
→ Narrative replaces unit economics.
In AI, that gets even more dangerous because “product” can be a thin layer over rented intelligence. A lot of zeroes are coming. Not because AI is overhyped. Because underwriting is.
0 replies · 0 reposts · 0 likes · 515 views

Chamath Palihapitiya @chamath
I don’t know if there is anything shady going on here or not, but I will say, more generally, that VCs prefer social proof over actual diligence: “XYZ did the A, we must get into the B” or “ARR is growing so fast, we need to get in”. In the final telling, there will be a lot of zeroes in the AI complex as some companies have spectacular rises and falls. When we look back, the reason above will largely explain why.
Aakash Gupta @aakashgupta
Cursor is raising at a $50 billion valuation on the claim that its “in-house models generate more code than almost any other LLMs in the world.”

Less than 24 hours after launching Composer 2, a developer found the model ID in the API response: kimi-k2p5-rl-0317-s515-fast. That’s Moonshot AI’s Kimi K2.5 with reinforcement learning appended. A developer named Fynn was testing Cursor’s OpenAI-compatible base URL when the identifier leaked through the response headers. Moonshot’s head of pretraining, Yulun Du, confirmed on X that the tokenizer is identical to Kimi’s and questioned Cursor’s license compliance. Two other Moonshot employees posted confirmations. All three posts have since been deleted.

This is the second time. When Cursor launched Composer 1 in October 2025, users across multiple countries reported the model spontaneously switching its inner monologue to Chinese mid-session. Kenneth Auchenberg, a partner at Alley Corp, posted a screenshot calling it a smoking gun. KR-Asia and 36Kr confirmed both Cursor and Windsurf were running fine-tuned Chinese open-weight models underneath. Cursor never disclosed what Composer 1 was built on. They shipped Composer 1.5 in February and moved on.

The pattern: take a Chinese open-weight model, run RL on coding tasks, ship it as a proprietary breakthrough, publish a cost-performance chart comparing yourself against Opus 4.6 and GPT-5.4 without disclosing that your base model was free, then raise another round.

That chart from the Composer 2 announcement deserves its own paragraph. Cursor plotted Composer 2 against frontier models on a price-vs-quality axis to argue they’d hit a superior tradeoff. What the chart doesn’t show is that Anthropic and OpenAI trained their models from scratch. Cursor took an open-weight model that Moonshot spent hundreds of millions developing, ran RL on top, and presented the output as evidence of in-house research. That’s margin arbitrage on someone else’s R&D dressed up as a benchmark slide.

The license makes this more than an attribution oversight. Kimi K2.5 ships under a Modified MIT License with one clause designed for exactly this scenario: if your product exceeds $20 million in monthly revenue, you must prominently display “Kimi K2.5” on the user interface. Cursor’s ARR crossed $2 billion in February. That’s roughly $167 million per month, 8x the threshold. The clause covers derivative works explicitly.

Cursor is valued at $29.3 billion and raising at $50 billion. Moonshot’s last reported valuation was $4.3 billion. The company worth 12x more took the smaller company’s model and shipped it as proprietary technology to justify a valuation built on the frontier lab narrative.

Three Composer releases in five months. Composer 1 caught speaking Chinese. Composer 2 caught with a Kimi model ID in the API. A P0 incident this year. And a benchmark chart that compares an RL fine-tune against models requiring billions in training compute without disclosing the base was free.

The question for investors in the $50 billion round: what exactly are you buying? A VS Code fork with strong distribution, or a frontier research lab? The model ID in the API answers that.

If Moonshot doesn’t enforce this license against a company generating $2 billion annually from a derivative of their model, the attribution clause becomes decoration for every future open-weight release. Every AI lab watching this is running the same math: why open-source your model if companies with better distribution can strip attribution, call it proprietary, and raise at 12x your valuation?

kimi-k2p5-rl-0317-s515-fast is the most expensive model ID leak in the history of AI licensing.
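The revenue arithmetic in the thread checks out as simple division. A quick sketch using the thread's own claimed figures (ARR and license threshold as stated above, not independently verified):

```python
# Figures as claimed in the thread above (not independently verified)
arr = 2_000_000_000      # claimed annual recurring revenue, $
monthly = arr / 12       # implied monthly revenue
threshold = 20_000_000   # Modified MIT attribution-clause trigger, $/month

print(f"monthly revenue: ${monthly / 1e6:.1f}M")          # ~166.7M
print(f"multiple of threshold: {monthly / threshold:.1f}x")  # ~8.3x
```

So the "8x the threshold" figure quoted above is, if anything, slightly rounded down.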

64 replies · 31 reposts · 681 likes · 340.3K views

Manil Vasantha @ManilVasantha
@aakashgupta Hot take: Everyone is arguing about “whose model is underneath.” Wrong layer. The real play is routing:
→ cheapest model
→ fastest response
→ acceptable quality
That’s already commoditized. And generating more code? Congrats—you just accelerated technical debt!
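That routing layer is a few lines of logic, not a frontier lab. A minimal sketch of cheapest-model-that-clears-the-bar routing; every model name, price, latency, and score below is hypothetical, not real vendor data:

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    cost_per_mtok: float  # $ per million tokens (hypothetical)
    latency_ms: float     # median time-to-first-token (hypothetical)
    quality: float        # benchmark score, 0-100 (hypothetical)

def route(models, min_quality):
    """Cheapest model meeting the quality bar; latency breaks ties."""
    eligible = [m for m in models if m.quality >= min_quality]
    if not eligible:
        raise ValueError("no model clears the quality bar")
    return min(eligible, key=lambda m: (m.cost_per_mtok, m.latency_ms))

catalog = [
    Model("frontier-large", 15.00, 900, 92),
    Model("open-weight-rl",  2.00, 400, 84),
    Model("tiny-distilled",  0.20, 120, 61),
]

print(route(catalog, min_quality=80).name)  # open-weight-rl
```

The economics of the tweet fall out directly: once an open-weight fine-tune clears the quality bar, the router never sends traffic to the expensive frontier model.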
0 replies · 0 reposts · 0 likes · 231 views

Aakash Gupta @aakashgupta
Harveen Singh Chadha @HarveenChadha
things are about to get interesting from here on
249 replies · 551 reposts · 4.4K likes · 1.4M views

Manil Vasantha @ManilVasantha
@Nithya_Shrii Libraries solved access to information. AI tools solve the synthesis of information. Different problems, different eras.
0 replies · 0 reposts · 2 likes · 457 views

Nithya Shri @Nithya_Shrii
Just to be clear, we already have free data centers. They are called libraries.
60 replies · 3.9K reposts · 21.1K likes · 179K views