Jake Doherty

42 posts

@JakeDohAI

Joined February 2026
185 Following · 2 Followers
Jake Doherty retweeted
Boris Cherny @bcherny
I wanted to share a bunch of my favorite hidden and under-utilized features in Claude Code. I'll focus on the ones I use the most. Here goes.
Jake Doherty retweeted
Kat ⊷ the Poet Engineer @poetengineer__
i built a dashboard for my claude code sessions: 254 sessions across 58 projects over 3 months 🤖🧚‍♀️
- 3d terrain map of token usage over time
- session cards with first/last prompts, hover to expand
- click to resume any past session in-browser
- activity heatmaps, project treemaps
code available for my x subscribers <3
Jake Doherty retweeted
Boris Cherny @bcherny
Little known fact, the Anthropic Labs team (the team I joined Anthropic to be on) shipped:
- MCP
- Skills
- Claude Desktop app
- Claude Code
It was just a few of us, shipping fast, trying to keep pace with what the model was capable of. Those early Desktop computer use prototypes, back in the Sonnet 3.6 days, felt clunky and slow. But it was easy to squint and imagine all the ways people might use it once it got really good.
Fast forward to today. I am so excited to release full computer use in Cowork and Dispatch. Really excited to see what you do with it!
Claude @claudeai

You can now enable Claude to use your computer to complete tasks. It opens your apps, navigates your browser, fills in spreadsheets—anything you'd do sitting at your desk. Research preview in Claude Cowork and Claude Code, macOS only.

Jake Doherty @JakeDohAI
Nice idea. Make your data available and see what happens.
Lenny Rachitsky @lennysan

Today I'm releasing my entire newsletter archive (350+ posts) and all podcast transcripts (300+ episodes) as AI-friendly Markdown files. Plus an MCP server and GitHub repo.

A few months ago I shared my podcast transcripts on a whim, and y'all built the most amazing things—an RPG game, a parenting wisdom site, infographics, a Twitter bot, and 50+ other projects. Let's see what happens when I give you even more data.

Grab the data here: LennysData.com. Paid subscribers get all of the data (some 350 posts and 300 transcripts). Free subscribers get a subset.

I don’t think anyone’s ever done anything like this before, and I’m excited to give you this excuse to play with that AI tool you've been meaning to try.

Here’s my challenge to you: build something, and let me know about it. I’ll pick my favorite and give you a free 1-year subscription to the newsletter. Just post a link to your project in the comments here: lennysnewsletter.com/p/how-i-built-…. If you’ve already built something, slurp in this new data and submit it, too. I’ll pick a winner on April 15th.

Check out today's newsletter post for inspiration on what you could build: lennysnewsletter.com/p/how-i-built-… LFG.

Jake Doherty retweeted
Nathaniel Whittemore
Call me crazy, but I think the companies that give everyone on their team a team of agents are going to kick the shit out of the companies that replace their teams with a team of agents.
Jake Doherty @JakeDohAI
Lloyds Banking Group customers reported being able to view charges and payments from other sources on Thursday morning. But vibe coding is the problem?…
Jake Doherty @JakeDohAI
The thing I love about this is you wouldn’t previously have had one of the largest companies in the world introducing a ’00s acronym as a feature! @bcherny
Boris Cherny @bcherny

btw

Jake Doherty @JakeDohAI
Bold
Matt Shumer @mattshumer_

I've been testing GPT-5.4 for the last week. In short, it is the best model in the world, by far. It's so good that it's the first model that makes the “which model should I use?” conversation feel almost over.

The biggest surprise: I barely use Pro anymore! If you know me, you know I'm a Pro addict. I reach for Pro models constantly, and use them for almost everything, as they just... nail almost anything I give to them. For the first time, 5.4's standard version, with heavy thinking, just broke that habit. Even in standard mode, GPT-5.4 is better than previous models in Pro mode... crazy!

Coding capabilities are ridiculous... it's essentially flawless. Inside Codex, it's insanely reliable. Coding is essentially solved. There's not much more to say on this, it's just THAT good.

The Pro version is near-perfect. Other testers I spoke with saw it solving problems that were unsolvable by any other model. At this point, Pro is overkill for almost every normal use-case, but when you really need the power to do something extremely difficult, it's incredible.

Consistent with everything I've said above, even the standard thinking version uses fewer reasoning tokens than previous models to get the same level of results. In practice, this means you get great results much faster than before. This was one of my biggest gripes with previous OpenAI models: they just took too long to complete simple tasks. Assuming the speed we had during testing holds up as more users join, this is going to be a big win for OpenAI.

It still has weaknesses, though:
- Frontend taste is FAR behind Opus 4.6 and Gemini 3.1 Pro. @OpenAI, why is this so hard to fix? Once you fix this, there's literally no reason for me to use any other model. Please please please do it!
- It can still miss obvious real-world context. For example, I had it plan an itinerary for a trip. At first glance, it looked perfect, but it chose locations that would be mobbed by spring breakers, so I had to re-run the prompt from scratch with more context.
- When testing it inside OpenClaw, it kept stopping short before finishing tasks. I'm assuming this will be fixed quickly, but it's still worth noting.

But zooming out: this thing is so far ahead overall that the nitpicks are starting to feel beside the point. GPT-5.4 is a serious fucking model. The best model in the world. By far.
