Gaurav Chande @gmchande
3.1K posts

building AI creative tools for families · writing · ex-@Shopify self-taught programmer · girl dad · investor formerly @gauravmc

Greater Toronto Area, Canada · Joined December 2009
1.1K Following · 811 Followers
Gaurav Chande retweeted
dylan ツ @demian_ai
Everyone thinks AI lowers the value of human labor. I think it’s about to do something stranger. It’s going to split “thinking” into two separate markets:

- generating answers
- recognizing good answers

And the second one may become vastly more valuable than the first.

For most of history, intelligence was bottlenecked by production. Writing, coding, research, analysis: the hard part was generating output. But AI destroys that scarcity. A teenager with GPT-5.5 can now produce:

- legal drafts
- decent code
- market research
- ad copy
- slide decks
- synthetic voices
- entire apps

The cost of generation is collapsing. But something else happens when generation becomes free: judgment becomes the bottleneck. Because once everyone can produce 1,000 ideas, the advantage shifts to knowing which idea is actually good and which signal matters:

- which output is true
- which model is hallucinating confidently
- which insight compounds
- which decision survives reality

The paradox is that AI may not reduce the importance of expertise. It may amplify it. When answers become infinite, taste matters more. When code becomes infinite, architecture matters more. When content becomes infinite, distribution matters more. When intelligence becomes abundant, wisdom becomes scarce.

We are watching “thinking” unbundle in real time. Models are becoming the exoskeleton for cognition. But humans are still the steering wheel.

And maybe that’s the next Jevons paradox nobody is naming yet: the cheaper intelligence gets, the more valuable real judgment becomes. Not less.
[image attached]
14 replies · 15 reposts · 77 likes · 3.6K views
Gaurav Chande @gmchande
I keep wanting Hermes Agent to be the thing, but the harness keeps becoming the work. Yesterday: a fake GitHub auth failure because it silently changed HOME. Today: light/dark terminal themes still not handled cleanly. This is why Codex w/ GPT-5.5 feels like a better agent harness, without trying to be one.
0 replies · 0 reposts · 1 like · 29 views
Gaurav Chande @gmchande
@mvcinvesting Price action so far suggests the market finally understands it's for high growth and not maintenance CapEx. Folks worried about $NBIS raising CapEx guidance should watch this Bill Ackman clip today
0 replies · 0 reposts · 3 likes · 392 views
M. V. Cunha @mvcinvesting
$NBIS just raised its CapEx guidance from $16–20B to $20–25B. “This increase reflects investments in our 2027 capacity that will come online early next year. We expect these investments to contribute positively to revenue in the first half of 2027.”
13 replies · 15 reposts · 385 likes · 28.6K views
Gaurav Chande @gmchande
$NBIS subreddit so creative today 😂😂
[image attached]
0 replies · 0 reposts · 1 like · 99 views
Gaurav Chande retweeted
Serenity @aleabitoreddit
$NBIS earnings were stellar and it’s now trading $200+ premarket. Reiterated $7–9B ARR in 2026. 40% adj. EBITDA margin projections, which is vastly outperforming expectations. 4 GW contracted capacity. $6.3B capital secured by $NVDA off solid financing structures.

Glad my high conviction Neocloud pick is performing wonders and happy management is executing so well. In the words of Jensen: “Nebius will take care of you”
[3 images attached]
92 replies · 102 reposts · 1.7K likes · 142.6K views
Gaurav Chande retweeted
Mike 📺 @michaelcollado
Opening Twitter is like having a lil phone cigarette
389 replies · 34.2K reposts · 304.3K likes · 3.9M views
Gaurav Chande @gmchande
> I feel a need to point/gesture to things on the screen

100%. I wonder if one near-term bridge to "interactive neural videos" is Bret Victor / Nicky Case-style explorable explanations. They work really well in terms of giving humans something to point at / gesture around / think with. But they were always very hard to hand-craft.

The place it clicked for me was agent-generated deep research. I started asking agents to turn their research output into explorable HTML instead of a markdown wall, and it works surprisingly well. I've been playing with a small /explorable skill that turns research docs into self-contained HTML walkthroughs: scenes, diagrams, and small interactions where prose is doing too much work. github.com/gmchande/explo…
0 replies · 0 reposts · 0 likes · 98 views
Andrej Karpathy @karpathy
This works really well btw: at the end of your query, ask your LLM to "structure your response as HTML", then view the generated file in your browser. I've also had some success asking the LLM to present its output as slideshows, etc.

More generally, imo audio is the human-preferred input to AIs but vision (images/animations/video) is the preferred output from them. Around a third of our brains are a massively parallel processor dedicated to vision; it is the 10-lane superhighway of information into the brain. As AI improves, I think we'll see a progression that takes advantage:

1) raw text (hard/effortful to read)
2) markdown (bold, italic, headings, tables, a bit easier on the eyes) <-- current default
3) HTML (still procedural with underlying code, but a lot more flexibility on the graphics, layout, even interactivity) <-- early but forming new good default
...4, 5, 6, ... n) interactive neural videos/simulations

Imo the extrapolation (though the technology doesn't exist just yet) ends in some kind of interactive videos generated directly by a diffusion neural net. Many open questions as to how exact/procedural "Software 1.0" artifacts (e.g. interactive simulations) may be woven together with neural artifacts (diffusion grids), but generally something in the direction of the recently viral x.com/zan2434/status…

There are also improvements necessary and pending at the input. Neither audio nor text nor video alone is enough; e.g. I feel a need to point/gesture to things on the screen, similar to all the things you would do with a person physically next to you and your computer screen.

TLDR The input/output mind meld between humans and AIs is ongoing and there is a lot of work to do and significant progress to be made, way before jumping all the way into Neuralink-esque BCIs and all that. For what it's worth exploring at the current stage, hot tip: try asking for HTML.
Quoting Thariq @trq212: x.com/i/article/2052…
836 replies · 1.8K reposts · 17K likes · 2.6M views
Gaurav Chande @gmchande
I agree with the warning that "make HTML" probably doesn’t need a generic skill. The place where I did end up wanting one was long agent-generated research docs. Deep research can produce useful synthesis, but the markdown wall-of-text is often hard to read.

I’ve been playing with /explorable: research markdown → a readable HTML article with playful, interactive live HTML/SVG/JS figures (where structure is trapped in prose). Inspired by Bret Victor’s Explorable Explanations and Nicky Case.

Some examples:
gmchande.github.io/fig-stock-anal…
gmchande.github.io/kids-personali…
Repo: github.com/gmchande/explo…
1 reply · 1 repost · 6 likes · 504 views
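The /explorable repo itself isn't reproduced in this thread, but the core step it describes (research markdown → a single self-contained HTML page) can be sketched with nothing but the Python standard library. This is an illustrative toy under my own assumptions, not the actual skill: `md_to_html` and the tiny Markdown subset it handles (headings, bullets, paragraphs) are made up for the example.

```python
import html
import re

def md_to_html(md_text: str, title: str = "Research") -> str:
    """Convert a small subset of Markdown (headings, bullet lists,
    paragraphs) into one self-contained HTML document string."""
    body_parts = []
    # Blocks are separated by blank lines, as in Markdown.
    for block in re.split(r"\n\s*\n", md_text.strip()):
        lines = block.strip().splitlines()
        if all(l.lstrip().startswith("- ") for l in lines):
            # Every line is a bullet: emit an unordered list.
            items = "".join(
                f"<li>{html.escape(l.lstrip()[2:])}</li>" for l in lines
            )
            body_parts.append(f"<ul>{items}</ul>")
        elif lines[0].startswith("#"):
            # Heading level = number of leading '#' characters.
            level = len(lines[0]) - len(lines[0].lstrip("#"))
            text = html.escape(lines[0].lstrip("#").strip())
            body_parts.append(f"<h{level}>{text}</h{level}>")
        else:
            body_parts.append(f"<p>{html.escape(' '.join(lines))}</p>")
    return (
        "<!DOCTYPE html><html><head><meta charset='utf-8'>"
        f"<title>{html.escape(title)}</title>"
        "<style>body{max-width:42rem;margin:2rem auto;"
        "font-family:sans-serif;line-height:1.5}</style>"
        "</head><body>" + "\n".join(body_parts) + "</body></html>"
    )

doc = md_to_html("# Findings\n\n- point one\n- point two\n\nSome prose.")
```

The real skill layers interactive SVG/JS figures on top; the point here is only that the output is one portable file you can open directly in a browser or push anywhere static HTML is served.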
Gaurav Chande @gmchande
@skanwar @trq212 +1. GitHub Pages comes very close. You can just ask Claude to deploy the HTML file to a github.io link, and it handles the repo, the push, and gives you the URL back.
0 replies · 0 reposts · 1 like · 56 views
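For reference, what the agent automates there is just a handful of git commands plus the resulting Pages URL. A sketch only, under stated assumptions: the user/repo names are placeholders, the remote repo is assumed to already exist, and Pages is assumed to serve the main branch.

```python
import subprocess
from pathlib import Path

def pages_publish_plan(html_file: str, user: str, repo: str) -> tuple[list[list[str]], str]:
    """Build the git commands that publish one HTML file to GitHub Pages,
    plus the URL where it should appear. Commands are returned, not run."""
    commands = [
        ["git", "init"],
        ["git", "add", html_file],
        ["git", "commit", "-m", "publish explorable"],
        ["git", "branch", "-M", "main"],
        ["git", "remote", "add", "origin", f"git@github.com:{user}/{repo}.git"],
        ["git", "push", "-u", "origin", "main"],
    ]
    # Project Pages sites live under <user>.github.io/<repo>/.
    url = f"https://{user}.github.io/{repo}/{Path(html_file).name}"
    return commands, url

def publish(html_file: str, user: str, repo: str) -> str:
    """Run the plan and hand back the Pages URL."""
    commands, url = pages_publish_plan(html_file, user, repo)
    for cmd in commands:
        subprocess.run(cmd, check=True)  # raises if any git step fails
    return url
```

Splitting plan-building from execution keeps the git sequence inspectable, which is roughly what you get when an agent narrates the commands before running them.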
Satish Kanwar @skanwar
@trq212 Need a one-click upload / cloud link for HTML files generated by Claude.
1 reply · 0 reposts · 3 likes · 687 views
Thariq @trq212
HTML is the new markdown. I've stopped writing markdown files for almost everything and switched to using Claude Code to generate HTML for me. This is why.
Quoting Thariq @trq212: x.com/i/article/2052…
885 replies · 1K reposts · 12.1K likes · 4.3M views
Gaurav Chande @gmchande
Your talk is partly to blame for the /explorable rabbit hole I’ve been in :) The use case I kept hitting was making an agent do deep research, then struggling to read the markdown output. Built a small skill around that (inspired by Bret Victor's Explorable Explanations): github.com/gmchande/explo…
1 reply · 0 reposts · 4 likes · 194 views
shadcn @shadcn
I need horizontal tabs in Codex and Cursor. Impossible to keep track of active chats in the sidebar. Anyone?
141 replies · 11 reposts · 922 likes · 87.4K views
Gaurav Chande @gmchande
@BillyM2k There needs to be a Drive to Survive-like Netflix show on all this
0 replies · 0 reposts · 0 likes · 26 views
Shibetoshi Nakamoto @BillyM2k
i feel like the big ai players are all more or less playing chess against each other
62 replies · 6 reposts · 175 likes · 19.2K views
Gaurav Chande @gmchande
👀 $FIG
Quoting Ara Kharazian @arakharazian:

Ramp TOP SAAS VENDORS MAY 2026 @tryramp
1. FIGMA is the fastest-growing vendor, and the only publicly traded company on our list. Now competing with Claude Design.
2. AI inference platforms grew as businesses opted for cheaper models to combat exploding token budgets.
3. It's a great time to be in web deployment (Vercel, Netlify, Lovable)
0 replies · 0 reposts · 1 like · 241 views
Gaurav Chande retweeted
roon @tszzl
not enough people are emotionally prepared for if it’s not a bubble
484 replies · 760 reposts · 11.4K likes · 2M views
Gaurav Chande @gmchande
This checks out. I used Claude a lot for my Obsidian-vault personal agent setup, and Codex is much better at the thing that matters most there: following the local operating layer. Opus 4.7 drifts willy-nilly from claude.md, rules, and skills. GPT-5.5 stays inside the system far more reliably.
0 replies · 0 reposts · 0 likes · 94 views
Peter Yang @petergyang
I've spent way too long testing OpenClaw, Hermes, Claude Code, Codex, and Gemini as my personal agent. The truth is, nobody has won this race yet. Here's my new deep dive with my honest take on where each product stands, plus the personal agent stack I use right now. 📌 Read now: creatoreconomy.so/p/the-race-to-…
[image attached]
156 replies · 81 reposts · 780 likes · 64.9K views
Gaurav Chande @gmchande
I think it’s clarity about what you actually want. The people I see getting real leverage from agents aren’t just producing more output. They have a clear point of view on what should exist and why. When that vision is muddy, agents mostly amplify the mud. I see both sides in my own projects. And having worked with startup CEOs, clarity of vision is genuinely hard. Customer convos and investor convos can only help so much.
0 replies · 0 reposts · 0 likes · 109 views
jack friks @jackfriks
it's pretty funny that you can build an MVP in a day now without knowing how to code, but the execution of a good idea is still no easier than it was last year. getting results still takes something else, and im not sure what it is... maybe its time and raw effort?
51 replies · 4 reposts · 283 likes · 35K views
Gaurav Chande retweeted
M. V. Cunha @mvcinvesting
Less than 16 months after initiating my position in $NBIS, it’s officially a 7-bagger. I’m glad I made it my largest holding by far. Conviction matters.
74 replies · 24 reposts · 1.4K likes · 57.9K views
Gaurav Chande @gmchande
@Stammy Most of them are buggy. :( Nothing beats Obsidian. I just wish @obsdmd had a native, minimal Mac app to read/edit/sort/manage .md files. Until then it's neovim with the render-markdown plugin for me.
0 replies · 0 reposts · 0 likes · 156 views