Denny Nguyen

3.2K posts

@ducduy

“You only live once? False. You live every day. You only die once.” - Dwight Schrute

Ho Chi Minh City · Joined April 2009
1.3K Following · 375 Followers
Denny Nguyen reposted
Corey Ganim @coreyganim
Karpathy's "second brain" concept in 60 seconds:
1. Three folders (raw, wiki, outputs). That's the whole architecture.
2. One CLAUDE.md schema file tells the AI how to organize everything.
3. Dump your bookmarks, notes, and articles into raw/. Don't organize them.
4. One prompt: "Compile a wiki from raw/ following CLAUDE.md." Walk away.
5. Ask questions against your wiki. Save answers back. It compounds.
6. Monthly health check catches errors before they stack.
No Obsidian or complex plugins. Just desktop folders and a schema file. Full walkthrough + free skill that builds it for you in 60 seconds in the article below.
Corey Ganim @coreyganim

x.com/i/article/2041…

36 replies · 160 reposts · 1.6K likes · 397.6K views
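The six steps above reduce to a folder layout plus one schema file. A minimal sketch of the setup, assuming the folder names from the post; the schema text itself is a hypothetical placeholder, not Karpathy's actual CLAUDE.md:

```python
from pathlib import Path

# Three folders: raw/ for unorganized dumps, wiki/ for compiled pages,
# outputs/ for saved answers (names taken from the post).
SCHEMA = """\
# CLAUDE.md -- hypothetical schema sketch
- Read everything in raw/.
- Write one wiki/ page per entity or topic, cross-linked.
- File answers to questions under outputs/ and link them back.
"""

def init_second_brain(base: Path) -> Path:
    """Create the raw/wiki/outputs layout plus a CLAUDE.md schema file."""
    for name in ("raw", "wiki", "outputs"):
        (base / name).mkdir(parents=True, exist_ok=True)
    schema_file = base / "CLAUDE.md"
    schema_file.write_text(SCHEMA, encoding="utf-8")
    return schema_file

if __name__ == "__main__":
    print(init_second_brain(Path("second-brain")))
```

After this, the whole workflow is the one compile prompt from step 4 plus asking questions; the script only scaffolds the layout the AI is told to maintain.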
Denny Nguyen reposted
Lenny Rachitsky @lennysan
My biggest takeaways from @AnthropicAI's Head of Growth Amol Avasare:

1. Engineering is getting the most AI leverage, and it's squeezing PMs and designers. With Claude Code, a five-engineer team now produces the output of 15 to 20 engineers. But PM and design productivity haven't scaled proportionally. The result is a compressed ratio where one PM is effectively managing the output of a much larger engineering team. Anthropic's growth team is responding in two ways: hiring even more PMs (!), and formally deputizing product-minded engineers to act as mini-PMs for any project with less than two weeks of engineering time.

2. Anthropic is using Claude to automate its own growth. The internal initiative is called CASH (Claude Accelerates Sustainable Hypergrowth). It works across four stages: identifying opportunities, building features, testing quality, and analyzing results. Right now it handles copy changes and minor UI tweaks. The win rate is comparable to a junior PM with two to three years of experience, and improving rapidly.

3. The one part of PM work that AI can't automate yet: getting six people in a room to agree. Amol and his head of design joke that even with AGI, it'll still be impossible to align six stakeholders. Cross-functional coordination (managing opinions, navigating politics, mediating tradeoffs) remains the bottleneck that AI doesn't touch for larger projects. This is why Amol believes PM roles aren't going away, and may actually grow.

4. 60-80% of Anthropic's growth team's projects have no PRD. For smaller work, kickoffs happen on Slack: messages back and forth with product-minded engineers who can push back and ask the right questions. For larger projects, Amol believes in a proper 30-minute cross-functional kickoff (legal, safeguards, stakeholders) to surface concerns early.

5. Adding friction to onboarding drives growth, if the friction helps users understand why the product is for them. Across his work at Mercury, MasterClass, Calm, and now Anthropic, adding steps to onboarding flows consistently improved conversion. The key: cut annoying friction that doesn't add value, but add friction that helps users understand why the product is for them.

6. AI companies need to focus on bigger bets, not better A/B tests. Amol's argument: if your core product value is driven by AI, then the future value is orders of magnitude higher than today's value, because model capabilities grow exponentially. In that world, micro-optimizations capture a shrinking share of a growing pie. Traditional growth teams do 60% to 70% small optimizations and 20% to 30% big swings. At Anthropic, they flip this ratio.

7. Amol built a weekly AI agent that scans Slack for cross-functional misalignment. Using Cowork with the Slack MCP, he has a scheduled task that looks across his projects and conversations and surfaces areas where teams are about to do overlapping work or pull in different directions. A colleague on the enterprise team already caught major misalignment that would have caused weeks of wasted effort.

8. A traumatic brain injury taught Amol the principle that now drives his work: freedom through constraints. In early 2022, a kick to the head during a Muay Thai sparring session caused a traumatic brain injury. Amol spent nine months off work and months relearning to walk, unable to look at screens or listen to music for more than 20 seconds. He was re-injured a month after joining Mercury and had to take two more months off. He's still not fully healed. But the constraints (no alcohol, no caffeine, mandatory breaks, daily meditation) have become the habits that let him operate at the intensity Anthropic demands. "The true freedom in life is learning how to be content when you don't get what you want."
Lenny Rachitsky @lennysan

Anthropic is on an unprecedented growth run. Just in the past year they grew from $1B to $19B ARR. They added $6B in ARR just in *February*. Companies like Palantir and Atlassian took 15-20 years to reach ~$5B ARR. Anthropic is adding that every month. Amol Avasare is head of growth at Anthropic, and one of the most impressive people I've had on the podcast. In his first ever public interview, Amol shares:
🔸 How Anthropic is automating growth experiments with Claude (their internal tool called "CASH")
🔸 Why activation is the single highest-leverage growth problem in AI
🔸 Why Amol is hiring more PMs, not fewer
🔸 How he uses Cowork to automatically detect team misalignment in Slack
🔸 How the company's focus on AI coding created a research flywheel that accelerated their models
🔸 How Amol landed his role by cold emailing Anthropic's CPO @mikeyk
🔸 The brain injury that nearly ended Amol's career
Listen now 👇 youtu.be/k-H4nsOTuxU

81 replies · 174 reposts · 1.6K likes · 347.9K views
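The weekly misalignment scan in point 7 runs as a scheduled Cowork task over the Slack MCP; as a purely illustrative sketch of the underlying idea (the keyword heuristic and every name below are hypothetical, not Anthropic's implementation), such a scan might flag the same project topic surfacing in unrelated channels:

```python
from collections import defaultdict

def find_overlaps(messages):
    """messages: list of (channel, text) pairs. Flags keywords that appear
    in more than one channel -- a crude proxy for two teams independently
    picking up the same workstream. Purely illustrative heuristic."""
    KEYWORDS = {"onboarding", "billing", "activation"}  # hypothetical project terms
    seen = defaultdict(set)
    for channel, text in messages:
        for word in KEYWORDS:
            if word in text.lower():
                seen[word].add(channel)
    # Only topics discussed in 2+ channels count as potential overlap.
    return {word: sorted(chs) for word, chs in seen.items() if len(chs) > 1}

msgs = [
    ("#growth", "Kicking off the new onboarding flow revamp"),
    ("#enterprise", "We're redesigning onboarding for large accounts"),
    ("#growth", "Billing experiment results are in"),
]
print(find_overlaps(msgs))  # {'onboarding': ['#enterprise', '#growth']}
```

A real agent would read live Slack history and use an LLM rather than a keyword set to judge overlap; the dictionary-of-channels shape of the output is the part that carries over.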
Denny Nguyen reposted
sui ☄️ @birdabo
SOMEONE MADE A DIGITAL WHIP TO MAKE CLAUDE WORK FASTER 💀
1.6K replies · 12K reposts · 146.8K likes · 14.8M views
Denny Nguyen reposted
Nav Toor @heynavtoor
🚨 Andrej Karpathy thinks RAG is broken. He published the replacement 2 days ago. 5,000 stars in 48 hours.

It's called LLM Wiki: a pattern where your AI doesn't retrieve information from scratch every time. It builds and maintains a persistent, compounding knowledge base. Automatically. RAG re-discovers knowledge on every question. LLM Wiki compiles it once and keeps it current.

Here's the difference:
RAG: You ask a question. AI searches your documents. Finds fragments. Pieces them together. Forgets everything. Starts over next time.
LLM Wiki: You add a source. AI reads it, extracts key information, updates entity pages, revises topic summaries, flags contradictions, strengthens the synthesis. The knowledge compounds. Every source makes the wiki smarter. Permanently.

Here's how it works:
→ Drop a source into your raw collection. Article, paper, transcript, notes.
→ AI reads it, writes a summary, updates the index
→ Updates every relevant entity and concept page across the wiki
→ One source can touch 10 to 15 wiki pages simultaneously
→ Cross-references are built automatically
→ Contradictions between sources get flagged
→ Ask questions against the wiki. Good answers get filed back as new pages.
→ Your explorations compound in the knowledge base. Nothing disappears into chat history.

Here's the wildest part. Karpathy's use case examples:
→ Personal: track goals, health, psychology. File journal entries and articles. Build a structured picture of yourself over time.
→ Research: read papers for months. Build a comprehensive wiki with an evolving thesis.
→ Reading a book: build a fan wiki as you read. Characters, themes, plot threads. All cross-referenced.
→ Business: feed it Slack threads, meeting transcripts, customer calls. The wiki stays current because the AI does the maintenance nobody wants to do.

Think of it like this: Obsidian is the IDE. The LLM is the programmer. The wiki is the codebase. You never write the wiki yourself. You source, explore, and ask questions. The AI does all the grunt work.

NotebookLM, ChatGPT file uploads, and most RAG systems re-derive knowledge on every query. This compiles it once and builds on it forever.

5,000+ stars. 1,294 forks. Published by Andrej Karpathy 2 days ago. 100% open source.
137 replies · 368 reposts · 3K likes · 382.7K views
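The compile-once behavior contrasted with RAG above comes down to persistent state: each new source is folded into existing entity pages rather than re-retrieved per query, which is how one source can touch many pages. A minimal hypothetical sketch of that update loop (not code from Karpathy's repo):

```python
wiki = {}  # entity/topic name -> accumulated notes; persists across sources

def add_source(wiki, source_name, entities):
    """Fold one source into the wiki: every entity it mentions gets its
    page appended to, so a single source can update many pages at once."""
    for entity, note in entities.items():
        wiki.setdefault(entity, []).append(f"[{source_name}] {note}")
    return wiki

add_source(wiki, "paper-1", {"RAG": "retrieves fragments per query"})
add_source(wiki, "post-2", {"RAG": "re-derives knowledge each time",
                            "LLM Wiki": "compiles knowledge once"})

# Answering a question reads the compiled page instead of re-searching sources.
print(wiki["RAG"])
```

The contrast with RAG is that nothing here is recomputed at question time; the per-source work was already folded into `wiki`, and good answers would be filed back the same way.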
Denny Nguyen reposted
Muhammad Ayan @socialwithaayan
🚨 BREAKING: Someone just built the exact tool Andrej Karpathy said someone should build. 48 hours after Karpathy posted his LLM Knowledge Bases workflow, this showed up on GitHub. It's called Graphify.

One command. Any folder. Full knowledge graph. Point it at any folder. Run /graphify inside Claude Code. Walk away. Here is what comes out the other side:
-> A navigable knowledge graph of everything in that folder
-> An Obsidian vault with backlinked articles
-> A wiki that starts at index.md and maps every concept cluster
-> Plain-English Q&A over your entire codebase or research folder

You can ask it things like: "What calls this function?" "What connects these two concepts?" "What are the most important nodes in this project?" No vector database. No setup. No config files.

The token-efficiency number is what got me: 71.5x fewer tokens per query compared to reading raw files. That is not a small improvement. That is a completely different paradigm for how AI agents reason over large codebases.

What it supports:
-> Code in 13 programming languages
-> PDFs
-> Images via Claude Vision
-> Markdown files

Install in one line: pip install graphify && graphify install
Then type /graphify in Claude Code and point it at anything.

Karpathy asked. Someone delivered in 48 hours. That is the pace of 2026. Open source. Free.
270 replies · 1.4K reposts · 12.7K likes · 941.4K views
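Graphify's internals aren't shown in the post, but the core artifact it describes, a backlinked knowledge graph over a folder of pages, can be sketched minimally. The [[wiki-link]] convention and every name below are illustrative assumptions, not Graphify's actual code:

```python
import re
from collections import defaultdict

def build_graph(pages):
    """pages: dict of page name -> markdown text. Returns an adjacency map
    of [[wiki-style]] links -- the skeleton of a navigable knowledge graph."""
    graph = defaultdict(set)
    for name, text in pages.items():
        for target in re.findall(r"\[\[([^\]]+)\]\]", text):
            graph[name].add(target)
    return graph

def backlinks(graph, node):
    """'What links here?' -- analogous to asking 'what calls this function?'"""
    return sorted(src for src, targets in graph.items() if node in targets)

pages = {
    "index": "Start at [[parser]] and [[lexer]].",
    "parser": "The [[lexer]] feeds tokens to the parser.",
    "lexer": "Tokenizes input.",
}
g = build_graph(pages)
print(backlinks(g, "lexer"))  # ['index', 'parser']
```

The claimed token savings come from answering such questions by traversing this small graph instead of re-reading every raw file per query.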
Denny Nguyen reposted
Thariq @trq212
We've released a new frontend-design plugin for Claude Code. Here's Claude's output for: "use the frontend design skill. Create a music player app."
117 replies · 217 reposts · 3.2K likes · 478.4K views
Denny Nguyen reposted
Anthropic @AnthropicAI
New Engineering blog: We tasked Opus 4.6 agent teams with building a C compiler. Then we (mostly) walked away. Two weeks later, it worked on the Linux kernel. Here's what it taught us about the future of autonomous software development. Read more: anthropic.com/engineering/bu…
861 replies · 2.5K reposts · 21.4K likes · 8.5M views
Denny Nguyen reposted
Andrej Karpathy @karpathy
An attempt to explain (current) ChatGPT versions. I still run into many, many people who don't know that:
- o3 is the obvious best thing for important/hard things. It is a reasoning model that is much stronger than 4o and if you are using ChatGPT professionally and not using o3 you're ngmi.
- 4o is different from o4. Yes I know lol. 4o is a good "daily driver" for many easy-medium questions. o4 is only available as mini for now, and is not as good as o3, and I'm not super sure why it's out right now.

Example basic "router" in my own personal use:
- Any simple query (e.g. "what foods are high in fiber?") => 4o (about ~40% of my use)
- Any hard/important enough query where I am willing to wait a bit (e.g. "help me understand this tax thing...") => o3 (about ~40% of my use)
- I am vibe coding (e.g. "change this code so that...") => 4.1 (about ~10% of my use)
- I want to deeply understand one topic, i.e. I want GPT to go off for 10 minutes, look at many, many links and summarize it for me (e.g. "help me understand the rise and fall of Luminar") => Deep Research (about ~10% of my use). Note that Deep Research is not a model version to be picked from the model picker (!!!), it is a toggle inside the Tools. Under the hood it is based on o3, but I believe it is not fully equivalent to just asking o3 the same query, though I am not sure.

All of this is only within the ChatGPT universe of models. In practice my use is more complicated because I like to bounce between all of ChatGPT, Claude, Gemini, Grok and Perplexity depending on the task and out of research interest.
621 replies · 1.6K reposts · 13.4K likes · 1.3M views
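Karpathy's personal routing rules above amount to a small dispatch table. A toy sketch (the model names come from the post; the query-type classification is the hypothetical part, since he does that step by judgment):

```python
def route(query_kind):
    """Map a query type to a ChatGPT model, following the rules in the post."""
    table = {
        "simple": "4o",                # easy/medium daily-driver questions
        "hard": "o3",                  # important queries worth the wait
        "coding": "4.1",               # vibe coding
        "deep_dive": "Deep Research",  # a Tools toggle, not a model-picker entry
    }
    return table.get(query_kind, "4o")  # default to the daily driver

print(route("hard"))  # o3
```

The interesting work in practice is deciding `query_kind` in the first place, which is exactly the judgment call the post is trying to teach.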
Denny Nguyen reposted
Goodnotes @GoodnotesApp
Can you believe it’s been a year since we shared Goodnotes 6 with you?!? Over 40 new features, 50+ app updates, and we’re JUST getting started! ✨ It's been an incredible journey, and we couldn't have done it without you 🙏 👉 Swipe through for some of your favorite innovations from the past year ✨ Thank you for being a part of this amazing community. Here’s to many more years of innovation and creativity! 🚀📝 Tell us your favourite Goodnotes 6 feature in the comments, or what you want to see next 👀
18 replies · 9 reposts · 84 likes · 12.6K views
Denny Nguyen reposted
Robin Delta @heyrobinai
I'm obsessed with collecting websites that feel illegal to know. The 10 best websites I've collected:
64 replies · 310 reposts · 2.1K likes · 731.6K views
Denny Nguyen reposted
Goodnotes @GoodnotesApp
The wait is finally over! Our #Dynamite TinyTAN collection has arrived in Marketplace! (See what we did there? 😅) From planners to notebooks to stickers, get your hands on #TinyTAN digital merch! Link here: goodnotes.com/deeplink/marke…
29 replies · 956 reposts · 4K likes · 141.5K views
Denny Nguyen reposted
Eoghan McCabe @eoghan
I announced internally last night that we’re going to invest an extra ~$100M to bring a new AI technology we’re dreaming about to market that we’re calling Fin X. Running a scaled company on your own cashflows brings wonderful discipline, and when I returned to @intercom in late 2022 I prioritized getting profitable. But now that we’ve built up our cash reserves, and given the intense competition and massive opportunity in the space of AI for customer service, it’s obvious that now is the time to go way harder to maintain our lead and put that cash to work. Here’s the email I sent with all the context and numbers and internal Intercom jargon. :)
46 replies · 29 reposts · 582 likes · 257.6K views
Denny Nguyen reposted
Tim Cook @tim_cook
Xin chào, Vietnam! Thank you to the very talented musicians My Linh and My Anh for such a warm welcome. And I loved the egg coffee!
1.2K replies · 992 reposts · 16.1K likes · 1.4M views
Denny Nguyen reposted
Goodnotes @GoodnotesApp
Guess who's officially a teenager? 🎉 That's right, it's us! Goodnotes is turning 13 today and we're ready to party like it's ‘99! 🥳 We are so grateful to have you as part of this amazing journey! 🚀 Here's to another year of making good notes with Goodnotes 💙 #Goodnotes
8 replies · 11 reposts · 123 likes · 13.3K views
Denny Nguyen reposted
Sergei Galkin @sergeyglkn
Play guitar in spatial computer Part 2! This time I moved my research on the guitar learning interface a bit further. We can play melodies and riffs as well as chords now. + special mode with visual effects to share with friends #lavame4
69 replies · 233 reposts · 2.2K likes · 804.5K views
Denny Nguyen reposted
Daniel Beauchamp @pushmatrix
Chores 2.0
449 replies · 1.8K reposts · 17.5K likes · 8.2M views
Denny Nguyen reposted
DHH @dhh
Never been a better time to learn Rails.
41 replies · 65 reposts · 984 likes · 125.5K views
Denny Nguyen reposted
Moritz Kremb @moritzkremb
I spent over 500 hours in the last 6 months mastering ChatGPT. Using prompt frameworks is by far the most effective way to level up your outputs. But most don't know where to start. So I made a cheat sheet to help you maximize ChatGPT using these 5 prompt frameworks:
276 replies · 632 reposts · 4.1K likes · 5.1M views