Aakash Gupta

32.6K posts

@aakashgupta

✍️ https://t.co/8fvSCtBv5Q: $72K/m 💼 https://t.co/STzr4nqxnm: $39K/m 🤝 https://t.co/SqC3jTyP03: $37K/m 🎙️ https://t.co/fmB6Zf5UZv: $30K/m

Newsletter: Join 215K+ 👉 Joined January 2010
767 Following · 201.9K Followers
Pinned Tweet
Aakash Gupta @aakashgupta
That road is Route 1 in Iceland. A week driving it costs roughly $2,500 per person. Flights from the US run $500-600 round trip. Gas is $8-9 per gallon. A glacier hike is $125. A night in a decent hotel near Vatnajökull is $160-200.

Total tab for two people to spend a week staring at that glacier instead of a monitor: somewhere around $7,000. The median American household earns that in about 18 working days. Sitting in a room. Staring at a screen.

The people who actually drive that road on a random Tuesday in March fall into two categories: retirees who stared at screens for 40 years and saved enough to stop, or remote workers who figured out how to stare at a screen from Reykjavik instead of a cubicle in Ohio. Both paths run through the screen.

The photo is real. The freedom it represents costs $7,000 and 10 days of PTO. The device you’re reading this complaint on is the same device that books the flight.
fardeen@fardeentwt

the world looks like this and we’re expected to sit in a room for 8 hours a day staring at a screen

281 replies · 1.1K reposts · 17K likes · 2.6M views
Aakash Gupta @aakashgupta
Anthropic would have built this in a day and a dev would have tweeted the news. At OpenAI, an exec is telling you about a plan. That gap tells you everything.

In the last 7 days, Anthropic shipped Dispatch, channels, voice mode, /loop, 1M context GA, MCP elicitation, persistent Cowork on mobile, Excel and PowerPoint cross-app context, inline charts, and 64k default output tokens. Felix Rieseberg tweeted "we're shipping Dispatch" and you could control your desktop Claude from your phone that afternoon. Every launch came from an engineering account or a GitHub release.

In the same 7 days, OpenAI shipped GPT-5.4 mini and nano. Redesigned the model picker. Sunset the "Nerdy" personality preset. Announced three acquisitions. To find a comparable volume of shipped product from OpenAI, you have to rewind to December.

This is the most underrated difference in AI right now. Anthropic PMs don't write PRDs. Boris Cherny, head of Claude Code, ships 10 to 30 PRs a day and hasn't written code by hand since November. 60 to 100 internal releases daily. Cowork was built with Claude Code in 10 days. The tools build the next version of the tools. Every cycle compresses the last one. Engineers are empowered to ship and announce. The entire org runs like a product team, not a corporation.

OpenAI has the opposite problem. Fidji Simo is CEO of Applications, a title that exists because engineers aren't empowered to ship without executive approval chains. She joined from Instacart. Before that, a decade at Meta running the Facebook app. Since she arrived, OpenAI has acquired 12 companies for $11 billion in 10 months and announced a "superapp" consolidation through the Wall Street Journal. The exec responsible for shipping it is tweeting about "phases of exploration and refocus" on the product she hasn't shipped yet. That's what happens when you layer a Meta-style product org on top of an AI lab. Decisions go up. Shipping slows down. Announcements replace releases.

Anthropic's product announcements come from the people who wrote the code. OpenAI's come from the C-suite and the press. One of those loops compounds. The other one meetings.
Fidji Simo@fidjissimo

Companies go through phases of exploration and phases of refocus; both are critical. But when new bets start to work, like we're seeing now with Codex, it's very important to double down on them and avoid distractions. Really glad we're seizing this moment.

2 replies · 4 reposts · 35 likes · 2.9K views
Aakash Gupta @aakashgupta
Dave Killeen has been in product for 25 years. He says his AI operating system is better than every human executive assistant he's ever had. That sounds like hype until you see what the system actually does.

Every morning he runs one command. Five minutes later he has his top three priorities pulled from quarterly goals, a breakdown of which enterprise accounts need his attention based on overnight deal movement, Slack messages pre-written for his AE team, YouTube and newsletter intelligence clustered by what's novel and contrarian, and LinkedIn outreach cross-referenced against his CRM. He didn't gather any of it. The system did.

Here's what makes this different from just asking ChatGPT for a daily plan. Every meeting transcript from Granola auto-appends to the relevant stakeholder page, the project page, and the company page. Every intel scan writes to markdown files. Every mistake the AI makes gets logged into a mistakes file that gets injected into future sessions so the same error never happens twice. The files are alive. They compound. And every fresh chat with Claude starts by loading your strategic pillars, quarterly goals, weekly priorities, and working preferences through session hooks.

Dave is the Field CPO at Pendo. He's across 45 enterprise deals. He can't manually track the nuance of 45 deal cycles every week. But his system listens to every customer conversation and surfaces exactly where he needs to lean in, with the Slack message already written. The 45-deal number matters because it shows what this architecture actually unlocks. One person, operating at a level of awareness across a portfolio that would normally require a team of analysts feeding you briefings.

He showed a PRD getting generated live. The system pulled context from MCP servers, referenced existing components, flagged overlap with other tools in the backlog, and structured the whole document. His honest take on it: strong first draft, needs editing on commercial framing and metrics baselines. But he admitted he's stopped editing most PRDs entirely. He calls it "vibe CPOing." The AI's context is deep enough from the compounding files that the output is buildable. The mobile app for his system took 37 minutes to build. He spent more time in Xcode trying to publish it than Claude spent writing the code.

The part most people will skip past in this episode is the career MCP server. Dave built an MCP that scans his weekly interactions for evidence of skill development, matches it against his career goals, identifies gaps, and produces a promotion readiness score. When he runs his weekly plan, it tells him he's leaning too far into one area and needs to course-correct on goals due in eight weeks. Every conversation Dave has with his system makes the next one smarter. That's the gap between using AI and building on top of it.
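The session-hook pattern described above (load pillars, goals, priorities, and a mistakes file at the start of every fresh chat) can be sketched in a few lines. Everything here is illustrative: the file names and functions are hypothetical, not Dave's actual Claude Code configuration.

```python
from pathlib import Path

# Hypothetical persistent context files; names are illustrative.
CONTEXT_FILES = ["pillars.md", "quarterly-goals.md",
                 "weekly-priorities.md", "mistakes.md"]

def build_session_context(workspace: Path) -> str:
    """Concatenate the persistent markdown files so every fresh
    session starts with the same compounding context."""
    sections = []
    for name in CONTEXT_FILES:
        f = workspace / name
        if f.exists():
            sections.append(f"## {name}\n{f.read_text()}")
    return "\n\n".join(sections)

def log_mistake(workspace: Path, description: str) -> None:
    """Append to the mistakes file; because that file is in
    CONTEXT_FILES, the same error is visible to every future session."""
    with (workspace / "mistakes.md").open("a") as fh:
        fh.write(f"- {description}\n")
```

The point of the sketch is the compounding loop: the hook output is plain text a model can ingest, and the mistakes file only ever grows.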
Aakash Gupta@aakashgupta

You should be using Claude Code to run your entire work day. Here's exactly how, from @thevibePM, field CPO at $2.6B @pendoio: 1:47 - The one command that plans his whole day 21:42 - His Claude.MD Setup 33:42 - Skills vs MCP vs Hooks 40:11 - Why he left Cursor for terminal

1 reply · 1 repost · 11 likes · 3.6K views
Aakash Gupta reposted
Whole Mars Catalog @wholemars
These “partnerships” are a way to spin the constant need to raise capital as exciting and innovative. They’re able to dilute retail shareholders while convincing them they should be excited about it.
Aakash Gupta@aakashgupta

Rivian lost $13.8 billion in three years. Today they announced a robotaxi.

The partner rotation is the business model at this point. Amazon: $1.3B equity plus 100K van order. VW: $5.8B joint venture. US DOE: $6.6B loan. Uber: $1.25B robotaxi deal. Every 12 to 18 months, a new institution writes a check large enough to fund the next chapter.

The $1.25B headline is misleading. Uber commits $300 million now. The remaining $950 million is gated behind “autonomous milestones” by unspecified dates through 2031. That’s not a conviction bet on Rivian’s autonomy. That’s a call option priced at $300 million with five years of expiry.

Look at Uber’s last 18 months. Waymo rides on the platform. Zoox in Las Vegas this year, LA in 2027. 20,000 Lucid vehicles with Nuro’s autonomy stack. Now Rivian. Every autonomy architecture covered. Camera-only, lidar-first, purpose-built pods, OEM conversions. Uber is building a portfolio of bets so that they win regardless of which stack reaches scale first. Rivian thinks they signed a partnership. Uber signed a hedge.

The R2 that this entire deal depends on launches this spring at $57,990. The $45,000 version? Late 2027. The version with lidar and the Gen 3 chip that actually enables robotaxi-grade perception? Late 2026 at the earliest. The robotaxi fleet in San Francisco and Miami? 2028. The Georgia factory for scale production? Still under construction.

Waymo is running fully driverless rides across San Francisco, Phoenix, and LA right now. No milestones to hit. No factory to build. No vehicle to finish designing. Rivian needs the product, the factory, the software, and the autonomous driving system to all work simultaneously three years from now.

Uber’s smartest move in the last decade was selling its own self-driving unit for a $4 billion loss. Its second smartest is buying call options on everyone else’s.

11 replies · 6 reposts · 68 likes · 9.6K views
Aakash Gupta @aakashgupta
The biggest bottleneck in AI coding right now is the human. Claude Code can run for hours autonomously. It can write features, run tests, fix bugs, spin up worktrees. But the second it hits an ambiguous decision or needs clarification, it stops. Sits there. Waits for you to look at your terminal.

That’s the problem channels solves. Your Claude Code session stays live while you’re on your phone, in a meeting, on a walk. It pings you on Discord or Telegram: “Should I refactor this into two services or keep it monolithic?” You reply from your phone. It keeps building.

This changes the math on what a solo developer can ship. Before channels, your effective Claude Code hours were capped by your desk hours. Now the constraint is your response time to a Telegram message.

The people building serious things with Claude Code already figured this out. Community projects like claude-code-telegram and Clawdbot have been hacking together phone bridges for months. One developer built a bot that let him find parking near his exam by voice-messaging Claude Code while driving. Anthropic just made it official infrastructure.

The timing matters. Claude Code just got /loop for recurring tasks, voice mode, and 1M token context. Stack channels on top and you have an agent that runs continuously, asks you questions asynchronously, and remembers everything from the session. That’s closer to a remote junior developer than a code autocomplete tool.

The feature is a research preview for a reason. But the direction is clear: the terminal is becoming optional.
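The ask-and-wait loop described above can be sketched transport-agnostically. This is a conceptual illustration of the pattern, not Anthropic's implementation; `send` stands in for whatever actually posts to Telegram or Discord, and `ChannelRelay` is a made-up name.

```python
import queue
import threading  # a real bot would receive replies on its own thread

class ChannelRelay:
    """Sketch of the channels pattern: instead of stalling silently in
    the terminal, the agent posts its question to a messaging channel
    and blocks until a human replies."""

    def __init__(self, send):
        self.send = send            # callable that posts to the channel
        self.replies = queue.Queue()

    def ask(self, question: str, timeout: float = 3600.0) -> str:
        self.send(question)         # ping the human's phone
        # Agent resumes as soon as a reply arrives (or times out).
        return self.replies.get(timeout=timeout)

    def on_reply(self, text: str) -> None:
        self.replies.put(text)      # called by the bot's message handler
```

The design point is that the agent's blocking wait moves from "human at the terminal" to "human anywhere with a phone"; the queue decouples the two.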
Thariq@trq212

We just released Claude Code channels, which allows you to control your Claude Code session through select MCPs, starting with Telegram and Discord. Use this to message Claude Code directly from your phone.

4 replies · 9 reposts · 82 likes · 12.6K views
Eric Simons @EricSimons
@aakashgupta Wait till you see what we’ve been cooking You’ll understand why I’ve been sleeping soundly 😁
4 replies · 0 reposts · 11 likes · 315 views
Aakash Gupta @aakashgupta
This is the part Replit, Lovable, and Bolt should be losing sleep over. Google AI Studio now ships with the Antigravity coding agent built in, Firebase for auth and databases, and one-click Cloud Run deployment. The entire pipeline from “describe your app” to “it’s live on the internet with a working backend” runs inside one browser tab. Free tier. No credit card.

The vibe coding startups charge $15-39/month and still require you to stitch together Supabase, Netlify, and two or three other services before anything actually works in production. Users on Bolt have reported burning $1,000+ on a single project when debugging cycles eat through token budgets. Lovable and Bolt both hit a complexity wall around 15-20 components where the AI starts losing context and making destructive changes. Google just bundled the entire backend those companies never built.

This is the same playbook Google ran on email, maps, and cloud storage. Give it away free, make it the default, wait for the market to reorganize around your infrastructure. The vibe coding startups built better creation experiences. Google built the deployment layer those prototypes always needed.

The gap Google is exploiting: every startup in this space built a great front door and a mediocre production experience. Google built a mediocre front door sitting on top of the best production infrastructure in the world. They paid $2.4 billion for the Windsurf team to fix the front door. The startups are still trying to build their own backend. One of those problems is easier to solve. And the company with 20+ million developers already on its platform gets to solve it with distribution the startups will never match.
Google@Google

Introducing a new upgraded vibe coding experience in @GoogleAIStudio. You can now turn any idea into functional, production ready apps. Build multiplayer games, collaborative tools, apps with secure log-ins and more.

22 replies · 8 reposts · 194 likes · 23.9K views
Dogan Ural @doganuraldesign
Fuck it, man. I’m so sick of these algorithm changes.
283 replies · 83 reposts · 2K likes · 213.1K views
Aakash Gupta @aakashgupta
PMs spend hours answering the same questions about features, specs, launch dates, and edge cases in Slack. Every single day. The same questions. From the same channels.

Naman Pandey showed how he built an AI-powered knowledge base inside Slack using @openclaw. He dropped product documentation into the workspace folder, and now anyone in the channel can mention the bot to get instant contextual answers.

The critical insight on why this beats a standard Slack bot:

> "Slack bot does not have access to local files that live on your computer. Neither does it have the ability to read or write into those sites."

OpenClaw reads and writes to local files. It has persistent memory. It evolves as you update your documentation. It is not locked in time.

Lesson: The real unlock for AI agents is not intelligence. It is file system access and persistent memory. The agent that can read your PRDs, FAQs, and wikis locally - and remember what changed three months ago - is the one that actually replaces the repetitive work.
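The local-files advantage reduces to a retrieval step over markdown on disk: when the bot is mentioned, find the doc that best matches the question and hand it to the model as context. A deliberately minimal sketch (keyword overlap instead of embeddings; the function name is made up, not OpenClaw's API):

```python
from pathlib import Path

def answer_from_docs(question: str, docs_dir: Path) -> str:
    """Score each local markdown doc by keyword overlap with the
    question and return the best match. A real agent would pass the
    matched file to the model rather than return it raw."""
    words = {w.lower().strip("?.,") for w in question.split()}
    best, best_score = None, 0
    for f in docs_dir.glob("*.md"):
        text = f.read_text().lower()
        score = sum(1 for w in words if w and w in text)
        if score > best_score:
            best, best_score = f, score
    return best.read_text() if best else "No matching doc."
```

Because the lookup runs over live files, updating the documentation updates the bot's answers with no retraining step, which is the "not locked in time" property the thread describes.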
Aakash Gupta@aakashgupta

You need to have started using OpenClaw yesterday. Here's the web's easiest setup guide + 5 killer use cases: 38:06 - 1. Live knowledge bot 47:47 - 2. Automated standups 54:46 - 3. Push-based comp intel 1:13:26 - 4. VOC reporting 1:24:30 - 5. Auto bug routing

7 replies · 3 reposts · 38 likes · 6.5K views
Nikita Bier @nikitabier
@0x45o My job is to increase unregretted time spent. Every tap, every word must be intentional and valuable to the user. If you get sucked into bad content, that’s time taken away from a conversation you could be having elsewhere.
267 replies · 60 reposts · 1.9K likes · 36K views
Aakash Gupta @aakashgupta
Three months ago, the consensus was that Cursor was cooked. Claude Code crossed $2.5B in run-rate revenue. Google paid $2.4B for Windsurf’s IP and poached its leadership into DeepMind. OpenAI acquired Astral, the team behind Python’s uv package manager, to feed Codex. Viral tweets were circulating about developers ditching Cursor for Claude Code. The usage-based pricing switch last July had users posting surprise bills on Reddit. Consumer subscriptions were running at negative margins because every token served was profit for Anthropic or OpenAI. The company that popularized vibe coding was getting buried by the model providers it depended on.

Then Cursor shipped four major releases in 15 days. JetBrains support on March 4. Automations on March 5. Plugin marketplace with 30+ partners on March 11. And now Composer 2, their own model that moggs Opus 4.6 on cost while matching it on performance.

Look at the chart. Composer 2: 61.3 on CursorBench at $0.50 per million input tokens. Opus 4.6: 58.2 at $5.00. GPT-5.4: 63.9 at $2.50. The performance gaps are single digits. The cost gap between Composer and Opus is 10x.

The part nobody’s pressing on: Cursor still won’t name the base model. Their blog says “our first continued pretraining run,” which means they took an existing model and continued training on code. When the original Composer launched in October, developers kept catching it responding in Chinese. Same tokenizer patterns as DeepSeek. Nathan Lambert congratulated the research team by tweeting “open weight base models + incredible ML teams in a specific niche can create immense value.” Co-founder Aman Sanger told Bloomberg it was trained exclusively on code. Can’t do taxes, can’t write poems. A Chinese open-source chassis, refined with what Cursor calls compaction-in-the-loop RL, and fed by a billion lines of user code flowing through the editor every day. That data flywheel is the one asset no API provider can replicate.

The honest read requires some skepticism, though. CursorBench is Cursor’s own internal benchmark. They built the test, then showed you they pass it. GPT-5.4 still leads on Terminal-Bench 2.0, which is independently maintained. And Opus 4.6 at high thinking effort still outscores Composer 2 on raw accuracy. The cost advantage is real. The performance parity claim needs external validation before anyone should take this chart at face value.

But here’s why the chart matters anyway. This was the P0 coming out of the holidays. Building their own model was existential. Every dollar Cursor paid Anthropic per token was margin funding the competitor building Claude Code to replace them. Every dollar paid to OpenAI funded Codex. The only way to stop bleeding cash to the companies trying to kill you is to stop using their models.

Four hundred employees. $2B ARR. Reportedly raising at $50B. Entering the model race against labs with thousands of researchers and tens of billions in compute. That chart is the fundraising slide. Whether it holds up in production against Opus and GPT-5.4 is a different question. But three months ago, the question was whether Cursor would survive at all.
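The cost claim is easy to check with the numbers quoted in the thread (CursorBench score, dollars per million input tokens):

```python
# Figures as quoted in the thread: (CursorBench score, $/M input tokens).
models = {
    "Composer 2": (61.3, 0.50),
    "Opus 4.6":   (58.2, 5.00),
    "GPT-5.4":    (63.9, 2.50),
}

def cost_ratio(cheaper: str, pricier: str) -> float:
    """How many times cheaper `cheaper` is than `pricier` per input token."""
    return models[pricier][1] / models[cheaper][1]

# cost_ratio("Composer 2", "Opus 4.6") → 10.0
```

Per-point-of-benchmark, the gap is even starker: roughly $0.008/M per CursorBench point for Composer 2 versus $0.086/M for Opus 4.6, with the usual caveat that the benchmark is Cursor's own.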
Cursor@cursor_ai

Composer 2 is now available in Cursor.

17 replies · 9 reposts · 154 likes · 40.7K views
Aakash Gupta @aakashgupta
Everyone’s covering the VR reversal. The actual story is what Meta is building underneath it.

Meta just replaced the Unity game engine inside Horizon Worlds with a proprietary engine called Horizon Engine. Custom-built for persistent, cross-platform 3D worlds that scale from cloud rendering down to a phone screen. TypeScript scripting. ECS-based simulation capable of handling millions of networked entities. Physics, spatial audio, and streaming sub-levels all native. That’s the Roblox tech stack. Built from scratch. By a company with 3.3 billion daily active users across its family of apps.

The mobile numbers are early but moving. Horizon Worlds mobile grew MAU 4x in 2025. The Creator Fund took mobile-only worlds from zero to 2,000+ in a year. Four creators have crossed $1 million in lifetime revenue. Nearly a hundred earned six figures last year. 45 million total downloads, with 2026 downloads up 53% year over year.

Now compare that to what they’re competing with. Roblox just posted 144 million daily active users in Q4 2025. $4.9 billion in annual revenue. $6.8 billion in bookings. Creators earned $1.5 billion on the platform last year. Roblox built all of that on a proprietary engine purpose-built for user-generated 3D worlds running primarily on phones. 80% of Roblox sessions happen on mobile.

Meta looked at those numbers and made a specific calculation: the VR version of Horizon was forcing the team to build everything twice. One codebase for headsets, one for phones. Bosworth called dropping VR “an easy way to increase velocity.” When the backlash hit, they kept VR alive for existing games but made clear no new VR content is coming. All engineering energy goes to mobile Horizon Engine.

The 24-hour reversal is actually the interesting product decision. They announced the shutdown Tuesday. Heard from users Wednesday. Adjusted scope Wednesday afternoon. Most companies that size take quarters to walk back a strategic call. Meta did it in a day, kept the core strategy intact, and gave the loudest users exactly enough to stop complaining without redirecting a single engineer.

Meta is spending $83.5 billion to build a mobile 3D platform that competes directly with a company worth $64 billion. The VR obituaries are the headline. The Roblox competition is the story.
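For readers who haven't met the pattern: ECS (entity-component-system) means entities are plain IDs, components are data tables keyed by entity, and systems iterate over those tables. That separation is what lets an engine batch-update huge numbers of networked entities. A toy illustration of the idea, nothing to do with Meta's actual code:

```python
from itertools import count

# Entities are just IDs; components are plain data keyed by entity ID.
_ids = count()
positions = {}   # entity -> x coordinate (1D for simplicity)
velocities = {}  # entity -> x velocity

def spawn(x: float = 0.0, vx: float = 0.0) -> int:
    """Create an entity with position and velocity components."""
    e = next(_ids)
    positions[e] = x
    velocities[e] = vx
    return e

def movement_system(dt: float) -> None:
    # A system touches only the component tables it cares about,
    # which is what makes batch-updating millions of entities cheap.
    for e, vx in velocities.items():
        positions[e] += vx * dt
```

In a real engine the tables are contiguous arrays and systems run in parallel, but the data-oriented shape is the same.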
TechCrunch@TechCrunch

Meta decides not to shut down Horizon Worlds on VR after all techcrunch.com/2026/03/19/met…

2 replies · 1 repost · 18 likes · 4.9K views
Aakash Gupta @aakashgupta
Major cheat code in life: Learn to identify when someone is trauma bonding with you. Shared pain isn't the same as genuine connection. Mutual suffering doesn't equal compatibility. Heal yourself first. Connect from health, not wounds.
1 reply · 2 reposts · 33 likes · 2.2K views
Aakash Gupta @aakashgupta
The part of this conversation that stuck with me: Naman's bot read his personal files and answered a Slack question with information it had no business knowing. That's the entire tension of autonomous agents in one sentence.

OpenClaw runs as a daemon on your machine. It reads your file system, monitors your Slack, executes cron jobs at 3 a.m. while you sleep. The five use cases here (knowledge base, standups, comp intel, VOC reports, smart bug routing) all depend on that always-on access. But the same architecture that lets it scan 15 competitor websites every 30 minutes also means a rogue Slack message from a coworker could instruct it to read your personal files.

Naman's own security audit flagged it: firewall disabled, unrestricted file system access, and if you're running a VPS, your computer doesn't even need to be on for the bot to create or destroy files. The recommended fix is telling the bot to audit itself. The tool that has unrestricted access is also the tool you're trusting to restrict its own access.

This is why the comparison to Claude Code and Cowork matters more than the feature list. Claude Code runs in a sandboxed terminal session. Cowork is reactive, you point it at things. OpenClaw is a persistent process with memory, autonomy, and root-level file access that you configure by chatting with it in natural language.

The PMs who get the most leverage from this will be the ones who set the guardrails before the first cron job runs. The ones who skip that step will learn why "it never sleeps" cuts both ways.
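The "set guardrails before the first cron job" advice can be made concrete: enforce a path allowlist outside the agent, so the process with unrestricted access is not the one auditing itself. A minimal sketch; `FileGuard` and its wiring are hypothetical, not an OpenClaw feature:

```python
from pathlib import Path

class FileGuard:
    """Path allowlist enforced by the wrapper, not by the agent.
    Every file operation the agent requests goes through check()
    before it touches disk."""

    def __init__(self, allowed_roots):
        # Resolve up front so symlinks and ../ tricks can't dodge the check.
        self.allowed = [Path(p).resolve() for p in allowed_roots]

    def check(self, path: str) -> Path:
        target = Path(path).resolve()
        for root in self.allowed:
            if target == root or root in target.parents:
                return target
        raise PermissionError(f"{target} is outside the allowlist")
```

The design point is that the guard lives in the supervising process: a prompt-injected Slack message can change what the agent *asks* for, but not what the wrapper will grant.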
Aakash Gupta@aakashgupta

You need to have started using OpenClaw yesterday. Here's the web's easiest setup guide + 5 killer use cases: 38:06 - 1. Live knowledge bot 47:47 - 2. Automated standups 54:46 - 3. Push-based comp intel 1:13:26 - 4. VOC reporting 1:24:30 - 5. Auto bug routing

6 replies · 6 reposts · 20 likes · 4.1K views
Aakash Gupta @aakashgupta
OpenClaw has 325,000 GitHub stars. 2 million weekly visitors. And zero real guides for PMs. I spent weeks building and testing five automations with Naman Pandey on camera. Here's the complete setup and use case guide: 🔗: news.aakashg.com/p/naman-pandey…
2 replies · 2 reposts · 23 likes · 2.5K views