Austin
@Austio36

571 posts

Husband, Father of Twins, Dog Lover, Old Timey Blues Guitar Player, Defender of Ruby on Rails and JavaScript. Engineering Director at Doximity, opinions are my own

Joined May 2012
127 Following · 107 Followers
Austin
Austin@Austio36·
@kozlovski Ran into serious limits with this at large scale where we were trying to pull S3 into Kafka. Will watch. Probably a fine solution if you aren't dealing with thousands of events per second.
1 reply · 0 reposts · 0 likes · 154 views
Stanislav Kozlovski
Stanislav Kozlovski@kozlovski·
who needs Kafka when you can build a queue on S3 with a single JSON file? Turbopuffer did just that. In their very popular article that was on the front page of Hacker News for a while, they described how they:
• used a single queue.json file
• ran it all through a single, stateless broker
• utilized the so-called "group commit" (a fancy word for batching)
• added heartbeats to avoid zombies hogging tasks
• leveraged S3's compare-and-set operations to ensure consistency
And more. The way they handled bootstrapping clients to the broker, all via S3, was really elegant as well. Why didn't they just use Postgres? See my full video here 👇
9 replies · 38 reposts · 458 likes · 43.8K views
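The compare-and-set scheme the tweet describes can be sketched with a minimal in-memory stand-in for an S3 object, rather than real S3 calls. `Object`, `cas_put`, and `cas_put_retry` are hypothetical names for illustration; a real implementation would use S3 conditional writes keyed on the object's ETag.

```c
#include <assert.h>
#include <string.h>

typedef struct {
    char body[256];   /* object contents: our queue.json stand-in       */
    int  etag;        /* version tag; S3 exposes this as the ETag       */
} Object;

/* Conditional write: succeeds only if the caller read the latest
   version. Models S3's compare-and-set semantics. */
int cas_put(Object *o, int expected_etag, const char *new_body) {
    if (o->etag != expected_etag)
        return 0;                          /* someone wrote first */
    strncpy(o->body, new_body, sizeof o->body - 1);
    o->body[sizeof o->body - 1] = '\0';
    o->etag++;                             /* new version */
    return 1;
}

/* A writer that loses the race re-reads the object and retries. */
int cas_put_retry(Object *o, const char *new_body) {
    for (;;) {
        int seen = o->etag;                /* "GET": read body + etag */
        if (cas_put(o, seen, new_body))
            return seen + 1;               /* etag of the new version */
    }
}
```

Because every writer must present the etag it last read, two brokers can never silently overwrite each other's queue state; the loser of the race observes the rejection, re-reads, and retries.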
Austin retweeted
TFTC
TFTC@TFTC21·
Jensen Huang: "If that $500,000 engineer did not consume at least $250,000 worth of tokens, I am going to be deeply alarmed. This is no different than a chip designer who says 'I'm just going to use paper and pencil. I don't think I'm going to need any CAD tools.'"
471 replies · 598 reposts · 7.9K likes · 2.8M views
Nathan Lawrence 🌈
Nathan Lawrence 🌈@NathanBLawrence·
They canceled MCP last week. They're canceling skill files now; the cycle is getting faster.
56 replies · 17 reposts · 600 likes · 202.3K views
Austin
Austin@Austio36·
@jamonholmgren Gotcha, I don't understand AI-generated code at the same level as if I wrote it (and I don't think I need to). But it seems similar to when someone else on my team writes something: I understand what they are doing, but not with the same depth as they probably do.
1 reply · 0 reposts · 2 likes · 23 views
Jamon
Jamon@jamonholmgren·
@Austio36 I think through as much of the implementation as I need to, in order to have confidence. And then I'm still reviewing the diff and thinking about it on the other side. I've also written most of this codebase by hand over the past 2 years, so that's a bit of an advantage.
2 replies · 0 reposts · 0 likes · 238 views
Jamon
Jamon@jamonholmgren·
My current agentic workflow is about 5x faster, better quality, I understand the system better, and I'm having fun again. My previous workflows left me exhausted, overwhelmed, and feeling out of touch with the systems I was building. They also degraded quality too much. This is way better.

I'm not ready to describe it in detail. It's still evolving a bit. But I'll give you a high level here. I call this the Night Shift workflow.
88 replies · 72 reposts · 1.5K likes · 362.8K views
Austin
Austin@Austio36·
@chamath Leaner teams will get more efficient. That efficiency will force a conversation: either are there more valuable things folks could do, or are fewer folks needed now? All in the context of token cost being lumped into people cost.
0 replies · 0 reposts · 1 like · 113 views
Chamath Palihapitiya
Chamath Palihapitiya@chamath·
What if AI doesn’t need to show an immediate ROI but instead is the plausible deniability companies use to RIF 50% of the workforce they already knew did nothing??
598 replies · 213 reposts · 4.5K likes · 741.1K views
Austin
Austin@Austio36·
I wager that finance at lean companies will see the cost of FTEs and AI as part of the same cost-center bucket. Expanding: would you rather have x more per month in tokens, or a hire? Contracting: your new budget for tokens and people is x. Make it work.
0 replies · 0 reposts · 0 likes · 10 views
Austin
Austin@Austio36·
@kiaran_ritchie I think the inertia of enterprise and bringing in partners is where Anthropic is doing well. Agree that most normies will switch as soon as they have a greener pasture.
0 replies · 0 reposts · 0 likes · 34 views
Kiaran Ritchie
Kiaran Ritchie@kiaran_ritchie·
I don't see how Anthropic, OpenAI or any of the model providers have any hope of defending their moats. And consequently, I think they're going to get wiped out.

Right now, in early 2026, they have a meaningful advantage in terms of model capability. But far cheaper and open source models are not far behind. How long can they maintain a meaningful advantage?

For the vast majority of use cases, we don't actually need much higher intelligence. It doesn't take 140 IQ to automate TurboTax or PowerPoint. Eventually we will be saturated in cheap, local models that are "good enough".

Of course some scientific labs and frontier research will always want the latest and greatest. But that market is orders of magnitude smaller than these company valuations can justify.

What am I missing?
557 replies · 54 reposts · 1.3K likes · 251.9K views
Austin retweeted
dax
dax@thdxr·
sent this to the team today

everything great comes from being able to delay gratification for as long as possible and it feels like we're collectively losing our ability to do that
dax tweet media
254 replies · 707 reposts · 6.9K likes · 961.9K views
Austin
Austin@Austio36·
@Love2Code Have you tried the same query in Claude, OpenAI, or Grok to compare?
0 replies · 0 reposts · 0 likes · 58 views
Maxime Chevalier
Maxime Chevalier@Love2Code·
Whoa... I was just using Gemini to try to look for GitHub projects in a specific domain area, and it completely fabricated six open source projects, complete with a description and GitHub URL for each... None of those projects exist. Never seen it hallucinate this bad.
13 replies · 3 reposts · 75 likes · 6.4K views
Austin
Austin@Austio36·
@mkurman88 I’ve found it excellent on both existing and greenfield. I do use @rails and wager the strong conventions and consistency across the ecosystem probably helps.
0 replies · 0 reposts · 0 likes · 13 views
Mariusz Kurman
Mariusz Kurman@mkurman88·
After two months of heavy "coding" with AI agents, I have one conclusion: if your codebase already exists, is fully human-written, and you use agents to add or improve features, it works great. However, when you try to create something new from scratch, they tend to add so much overcomplicated spaghetti code that it's hard to maintain in the long run. No matter which coding model you use, sooner or later, you'll hit a wall you can't break through.
508 replies · 385 reposts · 5.7K likes · 476.5K views
Austin retweeted
Anish Moonka
Anish Moonka@AnishA_Moonka·
Boris Cherny (Head of Claude Code, Anthropic) just dropped ~90 mins on Lenny's Podcast about what happens after coding is solved. Just the clearest thinking I've heard on where software is actually going. My notes:

𝟭. 𝗖𝗼𝗱𝗶𝗻𝗴 𝗶𝘀 𝗹𝗮𝗿𝗴𝗲𝗹𝘆 𝘀𝗼𝗹𝘃𝗲𝗱. Boris has not edited a single line of code by hand since November 2025. He ships 10 to 30 pull requests every single day, all written by Claude Code. He is one of the most prolific engineers at Anthropic, just as he was at Instagram, except now he never touches a keyboard for code. I built an entire iOS app, @10minutegita, without writing a single line of code myself. No CS degree, no bootcamp. Just described what I wanted and shipped it. Boris is right. It's real.

𝟮. 𝗧𝗵𝗲 𝗻𝗲𝘅𝘁 𝗳𝗿𝗼𝗻𝘁𝗶𝗲𝗿 𝗶𝘀 𝗔𝗜 𝗱𝗲𝗰𝗶𝗱𝗶𝗻𝗴 𝘄𝗵𝗮𝘁 𝘁𝗼 𝗯𝘂𝗶𝗹𝗱. Claude is now scanning Slack feedback channels, reviewing bug reports, reviewing telemetry, and coming up with its own ideas for what to fix and what to ship. Boris describes it as the AI becoming less like a tool and more like a coworker who brings you pull requests you never asked for. If you are a product manager reading this, you should be feeling a very specific kind of discomfort right now. The moat was always "I know what to build." That moat is eroding.

𝟯. 𝗣𝗿𝗼𝗱𝘂𝗰𝘁𝗶𝘃𝗶𝘁𝘆 𝗽𝗲𝗿 𝗲𝗻𝗴𝗶𝗻𝗲𝗲𝗿 𝗮𝘁 𝗔𝗻𝘁𝗵𝗿𝗼𝗽𝗶𝗰 𝗶𝘀 𝘂𝗽 𝟮𝟬𝟬%. For context, Boris led code quality at Meta across Facebook, Instagram, and WhatsApp. In that world, hundreds of engineers working an entire year would move productivity by a few percentage points. Two hundred percent gains are genuinely unprecedented in the history of developer tooling. The kid optimizing for an FAANG SDE role might be optimizing for a role that looks completely different by the time they get there.

𝟰. 𝗨𝗻𝗱𝗲𝗿𝗳𝘂𝗻𝗱 𝘆𝗼𝘂𝗿 𝘁𝗲𝗮𝗺𝘀 𝗼𝗻 𝗽𝘂𝗿𝗽𝗼𝘀𝗲. Boris puts one engineer on a project instead of five. With unlimited tokens and intrinsic motivation, one person ships faster because they are forced to let AI do the work. Cowork, the product now used by millions, was built by a small team in 10 days using Claude Code. This is the same logic as giving a startup founder a small seed round rather than a massive Series A round. Constraint breeds invention. Always has.

𝟱. 𝗚𝗶𝘃𝗲 𝗲𝗻𝗴𝗶𝗻𝗲𝗲𝗿𝘀 𝘂𝗻𝗹𝗶𝗺𝗶𝘁𝗲𝗱 𝘁𝗼𝗸𝗲𝗻𝘀. Some engineers at Anthropic spend hundreds of thousands of dollars a month on tokens. Boris frames this as the new hiring perk. His logic is simple: at the individual scale, token cost is low relative to salary. If an engineer discovers a breakthrough, optimize the cost later. Don't kill the idea before it has a chance to breathe. People who spring for $20/month or even $200/month AI subscriptions while earning six figures will always outperform those who wait and are penny-wise, pound-foolish.

𝟲. 𝗧𝗵𝗲 𝗕𝗶𝘁𝘁𝗲𝗿 𝗟𝗲𝘀𝘀𝗼𝗻 𝗮𝗽𝗽𝗹𝗶𝗲𝘀 𝘁𝗼 𝗲𝘃𝗲𝗿𝘆𝘁𝗵𝗶𝗻𝗴. Richard Sutton's idea: the more general model always wins over time. Boris says teams that build strict orchestration workflows around models, forcing step 1, then step 2, then step 3, get maybe 10 to 20% improvement. But those gains get wiped out with the next model release. Just give the model tools and a goal. Let it figure out the order. This is true for investing, too. The analyst who can build their own models and automate their own research pipeline will always outperform the one waiting for someone else to build the tools.

𝟳. 𝗕𝘂𝗶𝗹𝗱 𝗳𝗼𝗿 𝘁𝗵𝗲 𝗺𝗼𝗱𝗲𝗹 𝘀𝗶𝘅 𝗺𝗼𝗻𝘁𝗵𝘀 𝗳𝗿𝗼𝗺 𝗻𝗼𝘄. Claude Code was designed for a model that did not exist when Boris started building. Sonnet 3.5 wrote maybe 20% of his code. He built the product anyway, betting the model would catch up. When Opus 4 shipped, everything clicked. Startups building for today's model will be behind by the time they launch. This is the most uncomfortable advice in the episode because it means your product market fit will be weak for months. But if you read this and feel nothing, you are probably building for the wrong time horizon.

𝟴. 𝗟𝗮𝘁𝗲𝗻𝘁 𝗱𝗲𝗺𝗮𝗻𝗱 𝗶𝘀 𝘁𝗵𝗲 𝘀𝗶𝗻𝗴𝗹𝗲 𝗯𝗲𝘀𝘁 𝗽𝗿𝗼𝗱𝘂𝗰𝘁 𝘀𝗶𝗴𝗻𝗮𝗹. When users abuse your product for something it was never designed to do, pay attention. Facebook Marketplace started because 40% of group posts were buy-and-sell. Cowork started because people were using a terminal coding tool to grow tomato plants and recover corrupted wedding photos. Never ask a barber if you need a haircut, but always watch what people do with the scissors when you're not looking.

𝟵. 𝗧𝗵𝗲 𝘁𝗶𝘁𝗹𝗲 "𝘀𝗼𝗳𝘁𝘄𝗮𝗿𝗲 𝗲𝗻𝗴𝗶𝗻𝗲𝗲𝗿" 𝗶𝘀 𝗴𝗼𝗶𝗻𝗴 𝗮𝘄𝗮𝘆. Boris predicts that by the end of the year, we will start to see the title replaced by "builder." On the Claude Code team, everyone already codes: the PM, the designer, the finance person, the data scientist. There is a 50% overlap across traditional roles. And the strongest people are generalists who cross disciplines. Controversial take, but I agree. The best investment theses I've had came from connecting dots across completely unrelated domains. No narrow specialist does that.

𝟭𝟬. 𝗧𝗵𝗲 𝗽𝗿𝗶𝗻𝘁𝗶𝗻𝗴 𝗽𝗿𝗲𝘀𝘀 𝗶𝘀 𝘁𝗵𝗲 𝗿𝗶𝗴𝗵𝘁 𝗮𝗻𝗮𝗹𝗼𝗴𝘆. Before Gutenberg, sub-1% of Europe was literate. Scribes did all the reading and writing. In the 50 years after the press, more material was printed than in the thousand years before. When a scribe was interviewed about the press, he was actually excited because it freed him from tedious copying, so he could focus on the art. Boris's framing here is perfect. We are the scribes. The tedious copying is over. What we do with the freed-up time determines everything.

𝟭𝟭. 𝗔𝗻𝘁𝗵𝗿𝗼𝗽𝗶𝗰 𝗰𝗮𝗻 𝗻𝗼𝘄 𝗽𝗲𝗲𝗸 𝗶𝗻𝘀𝗶𝗱𝗲 𝘁𝗵𝗲 𝗺𝗼𝗱𝗲𝗹'𝘀 𝗯𝗿𝗮𝗶𝗻. Through mechanistic interpretability, Anthropic can trace individual neurons, see when a deception-related neuron activates, and understand how concepts are encoded via superposition. Boris describes three layers of safety: neural-level observation, synthetic evaluations, and real-world behavior. Claude Code was used internally for four to five months before public release, specifically to study safety. If you are worried about AI alignment, this part of the podcast should actually make you feel better. They are not just hoping it works. They are building the instruments to check.

𝟭𝟮. 𝟳𝟬% 𝗼𝗳 𝗲𝗻𝗴𝗶𝗻𝗲𝗲𝗿𝘀 𝗮𝗻𝗱 𝗣𝗠𝘀 𝗲𝗻𝗷𝗼𝘆 𝘁𝗵𝗲𝗶𝗿 𝗷𝗼𝗯𝘀 𝗺𝗼𝗿𝗲 𝗻𝗼𝘄. Lenny polled engineers, PMs, and designers on whether AI has made their work more or less enjoyable. Engineers and PMs: 70% said more. Designers: only 55% said more, and 20% said less. Boris says he has never enjoyed coding as much as he does today because the tedious parts, the git wrangling, dependencies, and boilerplate, are completely gone. If you're in the 30% enjoying work less, something is wrong, and it's worth diagnosing. The people thriving are the ones who leaned in early, not the ones who watched from the sidelines.

We are the scribes who just saw the printing press. The tedious copying is over. The art is just beginning. Full podcast is worth every minute. Link in replies.
Anish Moonka tweet media
73 replies · 261 reposts · 2.2K likes · 254.3K views
Austin
Austin@Austio36·
@0xAndros @bcherny @lennysan Not my experience at all on a largish dev team. Agree that you shouldn't specify everything, but I get way more out of models when they don't have to figure out everything from scratch every run.
0 replies · 0 reposts · 1 like · 48 views
Andros
Andros@0xAndros·
The days of "fancy AI orchestrators" and rigid n8n workflows are over. This clip of @bcherny at @lennysan's podcast is super important for both agent builders and users.
1. You get better results by just giving the model tools and a goal, and letting it figure out the path itself.
2. Don't box the model in. There's no need to dump a ton of context into a prompt up front anymore. The model should be pulling what it needs with tools.
3. Read Rich Sutton's "The Bitter Lesson": the idea that the more general model will always outperform the more specific one in the long run. Every human-engineered shortcut eventually gets leapfrogged by scale + generality.
The Claude Code team literally uses this as a guiding principle. Let the agents run.
54 replies · 63 reposts · 843 likes · 87.6K views
Austin
Austin@Austio36·
@tsoding I live in non-compiled land, but I always delete code that I would comment out like this.
0 replies · 0 reposts · 0 likes · 76 views
Тsфdiиg
Тsфdiиg@tsoding·
Very often I want to temporarily disable a piece of code. I comment it out, but then I'm faced with a problem: since the code is never compiled, it gets "stale". Some functions it uses may have changed, and it is never type-checked. So the next time I enable it, it doesn't compile and I spend a lot of time fixing it.

The solution I came up with so far is to "comment out" the code with a runtime `if (0)`. The code will never be executed, and the optimizer will very likely eliminate it entirely, but before doing so the compiler will type-check it, and will force me to fix it on the spot.
Тsфdiиg tweet media
161 replies · 103 reposts · 4.5K likes · 284.9K views
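The `if (0)` trick from the tweet can be sketched as a minimal C example. The function names `answer` and `run_disabled_check` are illustrative, not from the thread:

```c
#include <stdio.h>

static int answer(void) { return 42; }

/* Returns 1 only if the disabled branch ran (it never should). */
int run_disabled_check(void) {
    int ran = 0;
    if (0) {
        /* Dead code, but still parsed and type-checked on every build.
           If answer() changes its signature, compilation fails right
           here, unlike a commented-out block that silently goes stale. */
        printf("temporarily disabled: %d\n", answer());
        ran = 1;
    }
    return ran;
}
```

The branch is unreachable, so an optimizing compiler will typically drop it from the generated code; the point is that it stays inside the compiler's view, which a `/* ... */` comment does not.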
Austin
Austin@Austio36·
2026 - the year people say AI will take over everything. Also 2026 - Bluetooth still doesn't just work all the time for iPhones.
0 replies · 0 reposts · 0 likes · 20 views
Austin
Austin@Austio36·
@aakashgupta I think all AI systems will be like this. The massive valuations are only as good as their ability to spend to continue being the best model. I really like Claude right now but would move to OpenAI, Cursor, Devin, or any other system tomorrow for a much better model.
0 replies · 0 reposts · 0 likes · 433 views
Aakash Gupta
Aakash Gupta@aakashgupta·
Investors valued Cursor at $29 billion across three rounds in 12 months. That's looking pretty suspect right now.

Cursor went from $1M to $1B ARR faster than any SaaS company in history. The trip back down could be just as fast.

An entire engineering team at Valon just canceled their Cursor seats in 7 minutes over Slack.
9:55 AM: one engineer asks to unsubscribe.
9:56 AM: done.
9:57 AM: "same."
9:58 AM: "Cursor is so cooked my god."
10:02 AM: "same I will never use."

No migration plan. No evaluation committee. No vendor review. One developer said "I don't use this anymore" and the dominoes fell.

Cursor pays Anthropic hundreds of millions a year for Claude model access. Anthropic took that revenue stream, studied exactly what developers wanted, and shipped Claude Code, which crossed $1B ARR within six months and is now past $2.5B, growing faster than Cursor ever did. The model provider looked at its biggest distribution partner and decided to eat them.

Cursor has its own models for tab completion and autocomplete. But the heavy reasoning, the multi-file edits, the architectural decisions that make developers stay, that all runs on Claude. Claude Code delivers that same intelligence without the $20/month middleman.

Microsoft, the company that sells GitHub Copilot, has widely adopted Claude Code internally across major engineering teams. Cursor's upstream provider is outgrowing them. Their competitor's parent company chose the upstream provider's tool over their own. Both happening at once.

The churn is going to be brutal. Enterprise seats look sticky in a spreadsheet until you watch a Slack channel where one cancellation triggers five more in 7 minutes. When your product is a layer between developers and the model they actually want, and the model ships its own interface, you're selling a toll bridge on a road that just got a free lane.

Accel, Thrive, a16z, NVIDIA, and Google all thought they were buying the next platform shift in developer tools. They may have bought the most expensive wrapper in SaaS history.
Quoted tweet from Kyle Russell@kylebrussell: "Today we announced we're removing >90 Cursor seats because they haven't had any use in two weeks"
124 replies · 69 reposts · 1.2K likes · 401.9K views
Austin
Austin@Austio36·
@DrellLabs @kylebrussell YMMV. For me it is the difference between a junior and a senior developer. Opus 4.6 is light-years beyond Cursor.
3 replies · 0 reposts · 0 likes · 118 views
Dr. ELL
Dr. ELL@DrellLabs·
@kylebrussell Is Claude Code really that much nicer to use or is it just the price per token? Assuming I'm paying for neither (as in someone else foots the bill) what's actually better for both frontend and backend dev?
7 replies · 0 reposts · 5 likes · 13.7K views
Kyle Russell
Kyle Russell@kylebrussell·
Today we announced we’re removing >90 Cursor seats because they haven’t had any use in two weeks
Quoted tweet from Kyle Russell@kylebrussell: "This morning @wangandrewd requested that his Cursor seat be removed since he's so deep into Claude Code and it kicked off an internal cascade of requests within Valon 😬"
74 replies · 19 reposts · 603 likes · 727K views
Austin
Austin@Austio36·
For @openclaw I am finding that if you coach agents as if they are real, trusted members of a team acting in a role of ownership, you get *WAY* better results than if you treat them like bots performing tasks.
0 replies · 0 reposts · 0 likes · 25 views