Vaibhav Aggarwal
541 posts

Vaibhav Aggarwal
@hellovaibhava
Co-founder @FabHotels @TravelPlusHQ | Previously FabFurnish, Groupon, Bain & Co | Wharton, IIT
Joined March 2009
2 Following · 738 Followers
Pinned Tweet
Vaibhav Aggarwal retweeted

*Having fun* is so important to internalize, and it seems to come up among almost all top performers.
E.g., Magnus Carlsen is always talking about making chess fun:

mark bissell@MarkMBissell
to win gold medals and nobel prizes you just need to be funmaxxing
Vaibhav Aggarwal retweeted


"Beijing has been preparing for Cold War without eagerness for waging it, while the US wants to wage a Cold War without preparing for it." - @danwwang. Recommended reading.
danwang.co/2025-letter/
Vaibhav Aggarwal retweeted

Agreed. Here's the advice I give my son (who is 14):
Some of the most valuable skills are systems thinking, functional decomposition (being able to tackle large problems by breaking them down), and an instinct for how to abstract away complexity for others.
This is not going to change. Those things will be even more important in the age of AI.
Pratham@Prathkum
Most of coding was never about writing code. AI is just making this more obvious. You no longer need to recall syntax, function structure, boilerplate code, or even API endpoints. That’s the easy part and AI is very good at it. The hard part was never typing. It was always thinking. And it still is.
Vaibhav Aggarwal retweeted

One of my favorite lessons I’ve learnt from working with smart people:
Action produces information. If you’re unsure of what to do, just do anything, even if it’s the wrong thing. This will give you information about what you should actually be doing.
It sounds simple on the surface; the hard part is making it part of your everyday working process.
Vaibhav Aggarwal retweeted

Sharing an interesting recent conversation on AI's impact on the economy.
AI has been compared to various historical precedents: electricity, the industrial revolution, etc. I think the strongest analogy is AI as a new computing paradigm (Software 2.0), because both are fundamentally about the automation of digital information processing.
If you were to forecast the impact of computing on the job market in the ~1980s, the most predictive feature of a task/job to look at would be the extent to which its algorithm is fixed, i.e. are you just mechanically transforming information according to rote, easy-to-specify rules (e.g. typing, bookkeeping, human calculators, etc.)? Back then, this was the class of programs that the computing capability of the era allowed us to write (by hand, manually).
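To make the "fixed algorithm" idea concrete, here is a minimal sketch of the Software 1.0 pattern the thread describes: the task is fully automated by rote, hand-specified rules. The ledger format and function name are hypothetical, chosen only to echo the bookkeeping example.

```python
# Software 1.0: the algorithm is fixed and easy to specify by hand,
# so a short hand-written program fully automates the task.
# Toy bookkeeping rule: credits add to the balance, debits subtract.

def ledger_balance(entries):
    """Sum signed amounts according to a rote, easy-to-specify rule."""
    balance = 0.0
    for kind, amount in entries:
        if kind == "credit":
            balance += amount
        elif kind == "debit":
            balance -= amount
        else:
            raise ValueError(f"unknown entry kind: {kind}")
    return balance

print(ledger_balance([("credit", 100.0), ("debit", 30.0), ("credit", 5.5)]))  # 75.5
```

The entire "program" is the rule itself; there is nothing to learn, which is exactly why this class of work was the first to be automated.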
With AI now, we are able to write new programs that we could never hope to write by hand before. We do it by specifying objectives (e.g. classification accuracy, reward functions), and we search the program space via gradient descent to find neural networks that work well against that objective. This is the idea behind my Software 2.0 blog post from a while ago. In this new programming paradigm, the most predictive feature to look at is verifiability. If a task/job is verifiable, then it is optimizable directly or via reinforcement learning, and a neural net can be trained to work extremely well. It's about the extent to which an AI can "practice" something. The environment has to be resettable (you can start a new attempt), efficient (a lot of attempts can be made), and rewardable (there is some automated process to reward any specific attempt that was made).
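The three properties above (resettable, efficient, rewardable) can be sketched as a tiny environment interface. This is an illustration, not code from the thread; the class and method names are hypothetical, and the "task" is a toy arithmetic puzzle standing in for anything with automatically checkable answers.

```python
import random

class VerifiableEnv:
    """Toy verifiable task: answer a + b for randomly drawn a, b."""

    def reset(self):
        # Resettable: every call starts a fresh, independent attempt.
        self.a, self.b = random.randint(0, 9), random.randint(0, 9)
        return (self.a, self.b)

    def reward(self, answer):
        # Rewardable: an automated check scores any attempt, no human needed.
        return 1.0 if answer == self.a + self.b else 0.0

env = VerifiableEnv()

# Efficient: attempts are cheap, so a learner can "practice" at scale.
total = 0.0
for _ in range(1000):
    a, b = env.reset()
    guess = a + b  # oracle policy; a trained neural net would go here
    total += env.reward(guess)
print(total / 1000)  # 1.0 with the oracle policy
```

Anything that fits this interface — math problems, unit-tested code, puzzles with correct answers — is directly optimizable; tasks that can't provide an automated `reward` are the ones that lag.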
The more verifiable a task/job is, the more amenable it is to automation in the new programming paradigm. If it is not verifiable, it has to fall out of the neural net's magic of generalization, fingers crossed, or come via weaker means like imitation. This is what's driving the "jagged" frontier of progress in LLMs. Tasks that are verifiable progress rapidly, possibly even beyond the ability of top experts (e.g. math, code, amount of time spent watching videos, anything that looks like puzzles with correct answers), while many others lag by comparison (creative and strategic tasks, and tasks that combine real-world knowledge, state, context, and common sense).
Software 1.0 easily automates what you can specify.
Software 2.0 easily automates what you can verify.