Brandon Thomas

2.7K posts


@gbrandonthomas

Helping family offices and executives manage cyber and human risk | Founder @Grayline + @HallDonovanX

Austin, TX · Joined January 2008
637 Following · 526 Followers
Brandon Thomas reposted
Aaron Levie@levie·
We dramatically underestimate how much change management it is going to take to automate most knowledge worker tasks. Between data sitting in legacy environments or systems without good APIs, missing context for doing the task, teams that are less technical, and other factors, there’s still a lot of work to drive real AI transformation in an enterprise. This is actually great news if you’re building right now, because the opportunity is to build the software bridges to make this easier, or to build new services firms to help with this change management. Opportunity is all around for those looking.
Jason Shuman@JasonrShuman

Silicon Valley thinks AI agents are a $20/mo self-serve subscription. Main Street is paying local agencies $10,000 just to turn them on.

Everyone assumes AI will be bought primarily online like Slack or Zoom. I think they are wrong. Some of the biggest winners in the AI boom won't be the software vendors. They will be the humans installing it.

Here is the reality of SMBs right now:
• 54% lack internal AI expertise.
• 41% have data quality too poor for AI to even work.
• 41% already prefer buying AI through a local IT provider.

You cannot "1-click install" a genius AI into a messy CRM or a 15-year-old server. It will just execute the wrong tasks at the speed of light.

The AI software will be cheap, and a lot of it will absolutely be bought online. Making it actually work for a messy, real-world business will be expensive.

Very bullish on the "Do It For Me" economy being back.

120
122
1.2K
263.9K
Brandon Thomas reposted
Jason Shuman@JasonrShuman·
Silicon Valley thinks AI agents are a $20/mo self-serve subscription. Main Street is paying local agencies $10,000 just to turn them on.

Everyone assumes AI will be bought primarily online like Slack or Zoom. I think they are wrong. Some of the biggest winners in the AI boom won't be the software vendors. They will be the humans installing it.

Here is the reality of SMBs right now:
• 54% lack internal AI expertise.
• 41% have data quality too poor for AI to even work.
• 41% already prefer buying AI through a local IT provider.

You cannot "1-click install" a genius AI into a messy CRM or a 15-year-old server. It will just execute the wrong tasks at the speed of light.

The AI software will be cheap, and a lot of it will absolutely be bought online. Making it actually work for a messy, real-world business will be expensive.

Very bullish on the "Do It For Me" economy being back.
278
143
1.9K
581.3K
Brandon Thomas@gbrandonthomas·
Love this so much. This is a beginning.
Jason Walls@walls_jason1

Yesterday Mark Cuban reposted my work, DM'd me, and told me to keep telling my story. So here it is.

I'm a Master Electrician. IBEW Local 369. 15 years pulling wire in Kentucky. Zero coding background. I didn't go to Stanford. I went to trade school.

Every week I'd show up to a home where someone just bought a Tesla or a Rivian. And every time, someone had already told them they needed a $3,000-$5,000 panel upgrade to install a charger. 70% of the time? They didn't need it.

The math is in the NEC — Section 220.82. Load calculations. But nobody was doing them for homeowners. Electricians upsell. Dealers don't know. And the homeowner just pays.

I got angry enough to build something about it. I found @claudeai. No coding experience. I just started talking to it like I'd explain a job to an apprentice. "Here's how load calcs work. Here's the NEC code. Now help me build a tool that does this."

6 months later — @ChargeRight is live. Real software. Stripe payments. PDF reports. NEC 220.82 calculations automated. $12.99 instead of a $500 truck roll.

I'm still pulling wire. I still take service calls. I wake up at 5:05 AM for work. But something shifted.

Yesterday @vivilinsv published my story as Claude Builder Spotlight #1. Mark Cuban saw it. The Claude community showed up. And for the first time, I felt like this thing I built in my kitchen might actually matter.

I'm not a tech founder. I'm a dad who wants to coach little league and be home for dinner. I just happened to build something that helps people.

If you're in the trades and thinking about using AI — do it. The barrier isn't technical skill. It's believing you're allowed to try.

EVchargeright.com
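For readers wondering what the load calculation behind a tool like this looks like, here is a minimal sketch of the NEC 220.82 optional method in Python. It is illustrative only: the categories and percentages are simplified from my reading of 220.82, the example house numbers are invented, and a real assessment needs the full code text plus a licensed electrician.

# Minimal sketch of an NEC 220.82-style optional load calculation for a dwelling.
# Illustrative only: categories and percentages are simplified assumptions, not code advice.

def optional_method_va(floor_area_sqft, small_appliance_circuits, laundry_circuits,
                       fixed_appliances_va, largest_heat_ac_va):
    """Return an estimated service load in volt-amperes."""
    # General loads: 3 VA per sq ft, plus 1,500 VA per small-appliance/laundry circuit,
    # plus nameplate ratings of fastened-in-place appliances.
    general = (3 * floor_area_sqft
               + 1500 * (small_appliance_circuits + laundry_circuits)
               + sum(fixed_appliances_va))
    # Demand: 100% of the first 10 kVA of general load, 40% of the remainder.
    demand = min(general, 10_000) + 0.40 * max(general - 10_000, 0)
    # Add the largest heating / air-conditioning load (percentages simplified here).
    return demand + largest_heat_ac_va

# Invented example: 2,400 sq ft home, two small-appliance circuits, one laundry circuit,
# range + dryer + water heater, 5 kVA air conditioner, 200 A / 240 V service,
# asking whether a 48 A EV charger fits without a panel upgrade.
load_va = optional_method_va(2400, 2, 1, [12_000, 5_000, 4_500], 5_000)
charger_va = 48 * 240      # continuous-load sizing factors ignored for brevity
service_va = 200 * 240
print(f"Calculated existing load: {load_va / 240:.0f} A of 200 A")
print("Charger likely fits" if load_va + charger_va <= service_va else "Upgrade likely needed")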

0
0
0
27
Brandon Thomas@gbrandonthomas·
It’s a brave new world. Thanks for sharing, @ctreada
Anish Moonka@anishmoonka

Boris Cherny (Head of Claude Code, Anthropic) just dropped ~90 mins on Lenny's Podcast about what happens after coding is solved. Just the clearest thinking I've heard on where software is actually going. My notes:

1. Coding is largely solved. Boris has not edited a single line of code by hand since November 2025. He ships 10 to 30 pull requests every single day, all written by Claude Code. He is one of the most prolific engineers at Anthropic, just as he was at Instagram, except now he never touches a keyboard for code. I built an entire iOS app, @10minutegita, without writing a single line of code myself. No CS degree, no bootcamp. Just described what I wanted and shipped it. Boris is right. It's real.

2. The next frontier is AI deciding what to build. Claude is now scanning Slack feedback channels, reviewing bug reports, reviewing telemetry, and coming up with its own ideas for what to fix and what to ship. Boris describes it as the AI becoming less like a tool and more like a coworker who brings you pull requests you never asked for. If you are a product manager reading this, you should be feeling a very specific kind of discomfort right now. The moat was always "I know what to build." That moat is eroding.

3. Productivity per engineer at Anthropic is up 200%. For context, Boris led code quality at Meta across Facebook, Instagram, and WhatsApp. In that world, hundreds of engineers working an entire year would move productivity by a few percentage points. Two hundred percent gains are genuinely unprecedented in the history of developer tooling. The kid optimizing for an FAANG SDE role might be optimizing for a role that looks completely different by the time they get there.

4. Underfund your teams on purpose. Boris puts one engineer on a project instead of five. With unlimited tokens and intrinsic motivation, one person ships faster because they are forced to let AI do the work. Cowork, the product now used by millions, was built by a small team in 10 days using Claude Code. This is the same logic as giving a startup founder a small seed round rather than a massive Series A round. Constraint breeds invention. Always has.

5. Give engineers unlimited tokens. Some engineers at Anthropic spend hundreds of thousands of dollars a month on tokens. Boris frames this as the new hiring perk. His logic is simple: at the individual scale, token cost is low relative to salary. If an engineer discovers a breakthrough, optimize the cost later. Don't kill the idea before it has a chance to breathe. People who argue about $20/month or even $200/month AI subscriptions while earning six figures in a research pipeline are penny-wise, pound-foolish; those who lean in will always outperform those who wait.

6. The Bitter Lesson applies to everything. Richard Sutton's idea: the more general model always wins over time. Boris says teams that build strict orchestration workflows around models, forcing step 1, then step 2, then step 3, get maybe 10 to 20% improvement. But those gains get wiped out with the next model release. Just give the model tools and a goal. Let it figure out the order. This is true for investing, too. The analyst who can build their own models and automate their own research pipeline will always outperform the one waiting for someone else to build the tools.

7. Build for the model six months from now. Claude Code was designed for a model that did not exist when Boris started building. Sonnet 3.5 wrote maybe 20% of his code. He built the product anyway, betting the model would catch up. When Opus 4 shipped, everything clicked. Startups building for today's model will be behind by the time they launch. This is the most uncomfortable advice in the episode because it means your product market fit will be weak for months. But if you read this and feel nothing, you are probably building for the wrong time horizon.

8. Latent demand is the single best product signal. When users abuse your product for something it was never designed to do, pay attention. Facebook Marketplace started because 40% of group posts were buy-and-sell. Cowork started because people were using a terminal coding tool to grow tomato plants and recover corrupted wedding photos. Never ask a barber if you need a haircut, but always watch what people do with the scissors when you're not looking.

9. The title "software engineer" is going away. Boris predicts that by the end of the year, we will start to see the title replaced by "builder." On the Claude Code team, everyone already codes: the PM, the designer, the finance person, the data scientist. There is a 50% overlap across traditional roles. And the strongest people are generalists who cross disciplines. Controversial take, but I agree. The best investment theses I've had came from connecting dots across completely unrelated domains. No narrow specialist does that.

10. The printing press is the right analogy. Before Gutenberg, sub-1% of Europe was literate. Scribes did all the reading and writing. In the 50 years after the press, more material was printed than in the thousand years before. When a scribe was interviewed about the press, he was actually excited because it freed him from tedious copying, so he could focus on the art. Boris's framing here is perfect. We are the scribes. The tedious copying is over. What we do with the freed-up time determines everything.

11. Anthropic can now peek inside the model's brain. Through mechanistic interpretability, Anthropic can trace individual neurons, see when a deception-related neuron activates, and understand how concepts are encoded via superposition. Boris describes three layers of safety: neural-level observation, synthetic evaluations, and real-world behavior. Claude Code was used internally for four to five months before public release, specifically to study safety. If you are worried about AI alignment, this part of the podcast should actually make you feel better. They are not just hoping it works. They are building the instruments to check.

12. 70% of engineers and PMs enjoy their jobs more now. Lenny polled engineers, PMs, and designers on whether AI has made their work more or less enjoyable. Engineers and PMs: 70% said more. Designers: only 55% said more, and 20% said less. Boris says he has never enjoyed coding as much as he does today because the tedious parts (git wrangling, dependencies, boilerplate) are completely gone. If you're in the 30% enjoying work less, something is wrong, and it's worth diagnosing. The people thriving are the ones who leaned in early, not the ones who watched from the sidelines.

We are the scribes who just saw the printing press. The tedious copying is over. The art is just beginning.

Full podcast is worth every minute. Link in replies.
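Point 6 above is easy to picture in code. Below is a minimal sketch, under my own assumptions, of the "give the model tools and a goal" pattern rather than a hard-coded step-1/step-2/step-3 pipeline, using the Anthropic Python SDK; the tool, the model name, and the user goal are illustrative placeholders, not anything from the podcast.

# Minimal sketch of "tools and a goal": the model chooses which tool to call and in what
# order, instead of a fixed orchestration pipeline. Tool, model name, and goal are
# illustrative placeholders.
import anthropic

def read_file(path: str) -> str:
    # Toy tool; a real agent would expose many tools and dispatch on the tool name.
    with open(path, encoding="utf-8") as f:
        return f.read()

TOOLS = [{
    "name": "read_file",
    "description": "Read a UTF-8 text file from disk and return its contents.",
    "input_schema": {"type": "object",
                     "properties": {"path": {"type": "string"}},
                     "required": ["path"]},
}]

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
messages = [{"role": "user", "content": "Summarize what notes.txt says about Q3 risks."}]

while True:
    response = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder model name
        max_tokens=1024,
        tools=TOOLS,
        messages=messages,
    )
    if response.stop_reason != "tool_use":
        # The model decided it is done; print its final answer.
        print(response.content[0].text)
        break
    # Run whichever tool calls the model chose, then hand the results back to it.
    messages.append({"role": "assistant", "content": response.content})
    tool_results = [{"type": "tool_result", "tool_use_id": block.id,
                     "content": read_file(**block.input)}
                    for block in response.content if block.type == "tool_use"]
    messages.append({"role": "user", "content": tool_results})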

0
0
1
111
Brandon Thomas reposted
Anish Moonka@anishmoonka·
Boris Cherny (Head of Claude Code, Anthropic) just dropped ~90 mins on Lenny's Podcast about what happens after coding is solved. Just the clearest thinking I've heard on where software is actually going. My notes:

1. Coding is largely solved. Boris has not edited a single line of code by hand since November 2025. He ships 10 to 30 pull requests every single day, all written by Claude Code. He is one of the most prolific engineers at Anthropic, just as he was at Instagram, except now he never touches a keyboard for code. I built an entire iOS app, @10minutegita, without writing a single line of code myself. No CS degree, no bootcamp. Just described what I wanted and shipped it. Boris is right. It's real.

2. The next frontier is AI deciding what to build. Claude is now scanning Slack feedback channels, reviewing bug reports, reviewing telemetry, and coming up with its own ideas for what to fix and what to ship. Boris describes it as the AI becoming less like a tool and more like a coworker who brings you pull requests you never asked for. If you are a product manager reading this, you should be feeling a very specific kind of discomfort right now. The moat was always "I know what to build." That moat is eroding.

3. Productivity per engineer at Anthropic is up 200%. For context, Boris led code quality at Meta across Facebook, Instagram, and WhatsApp. In that world, hundreds of engineers working an entire year would move productivity by a few percentage points. Two hundred percent gains are genuinely unprecedented in the history of developer tooling. The kid optimizing for an FAANG SDE role might be optimizing for a role that looks completely different by the time they get there.

4. Underfund your teams on purpose. Boris puts one engineer on a project instead of five. With unlimited tokens and intrinsic motivation, one person ships faster because they are forced to let AI do the work. Cowork, the product now used by millions, was built by a small team in 10 days using Claude Code. This is the same logic as giving a startup founder a small seed round rather than a massive Series A round. Constraint breeds invention. Always has.

5. Give engineers unlimited tokens. Some engineers at Anthropic spend hundreds of thousands of dollars a month on tokens. Boris frames this as the new hiring perk. His logic is simple: at the individual scale, token cost is low relative to salary. If an engineer discovers a breakthrough, optimize the cost later. Don't kill the idea before it has a chance to breathe. People who argue about $20/month or even $200/month AI subscriptions while earning six figures in a research pipeline are penny-wise, pound-foolish; those who lean in will always outperform those who wait.

6. The Bitter Lesson applies to everything. Richard Sutton's idea: the more general model always wins over time. Boris says teams that build strict orchestration workflows around models, forcing step 1, then step 2, then step 3, get maybe 10 to 20% improvement. But those gains get wiped out with the next model release. Just give the model tools and a goal. Let it figure out the order. This is true for investing, too. The analyst who can build their own models and automate their own research pipeline will always outperform the one waiting for someone else to build the tools.

7. Build for the model six months from now. Claude Code was designed for a model that did not exist when Boris started building. Sonnet 3.5 wrote maybe 20% of his code. He built the product anyway, betting the model would catch up. When Opus 4 shipped, everything clicked. Startups building for today's model will be behind by the time they launch. This is the most uncomfortable advice in the episode because it means your product market fit will be weak for months. But if you read this and feel nothing, you are probably building for the wrong time horizon.

8. Latent demand is the single best product signal. When users abuse your product for something it was never designed to do, pay attention. Facebook Marketplace started because 40% of group posts were buy-and-sell. Cowork started because people were using a terminal coding tool to grow tomato plants and recover corrupted wedding photos. Never ask a barber if you need a haircut, but always watch what people do with the scissors when you're not looking.

9. The title "software engineer" is going away. Boris predicts that by the end of the year, we will start to see the title replaced by "builder." On the Claude Code team, everyone already codes: the PM, the designer, the finance person, the data scientist. There is a 50% overlap across traditional roles. And the strongest people are generalists who cross disciplines. Controversial take, but I agree. The best investment theses I've had came from connecting dots across completely unrelated domains. No narrow specialist does that.

10. The printing press is the right analogy. Before Gutenberg, sub-1% of Europe was literate. Scribes did all the reading and writing. In the 50 years after the press, more material was printed than in the thousand years before. When a scribe was interviewed about the press, he was actually excited because it freed him from tedious copying, so he could focus on the art. Boris's framing here is perfect. We are the scribes. The tedious copying is over. What we do with the freed-up time determines everything.

11. Anthropic can now peek inside the model's brain. Through mechanistic interpretability, Anthropic can trace individual neurons, see when a deception-related neuron activates, and understand how concepts are encoded via superposition. Boris describes three layers of safety: neural-level observation, synthetic evaluations, and real-world behavior. Claude Code was used internally for four to five months before public release, specifically to study safety. If you are worried about AI alignment, this part of the podcast should actually make you feel better. They are not just hoping it works. They are building the instruments to check.

12. 70% of engineers and PMs enjoy their jobs more now. Lenny polled engineers, PMs, and designers on whether AI has made their work more or less enjoyable. Engineers and PMs: 70% said more. Designers: only 55% said more, and 20% said less. Boris says he has never enjoyed coding as much as he does today because the tedious parts (git wrangling, dependencies, boilerplate) are completely gone. If you're in the 30% enjoying work less, something is wrong, and it's worth diagnosing. The people thriving are the ones who leaned in early, not the ones who watched from the sidelines.

We are the scribes who just saw the printing press. The tedious copying is over. The art is just beginning.

Full podcast is worth every minute. Link in replies.
Anish Moonka tweet media
72
261
2.2K
255.4K
Brandon Thomas reposted
Kevin Rose@kevinrose·
Five years out, when billions of coding agents exist, software development will be largely solved. The traditional moat of “we have better engineers” disappears. Products will be copied, improved, and open-sourced almost instantly. Defensibility will shift away from code itself, toward distribution, data, brand, and community.
198
124
1.4K
194.2K
Brandon Thomas@gbrandonthomas·
This is a failure of context. Managing the AI’s context window is (currently) a laborious but important art. Had the ID been in the context window, this would not have happened. What tools exist to better manage the context window?
Guillermo Rauch@rauchg

A Vercel user reported an issue that sounded extremely scary: an unknown GitHub OSS codebase being deployed to their team. We, of course, took the report extremely seriously and began an investigation. Security and infra engineering engaged.

Turns out Opus 4.6 *hallucinated a public repository ID* and used our API to deploy it. Luckily for this user, the repository was harmless and random. The JSON payload looked like this:

"gitSource": {
  "type": "github",
  "repoId": "913939401",  // ⚠️ hallucinated
  "ref": "main"
}

When the user asked the agent to explain the failure, it confessed: the agent never looked up the GitHub repo ID via the GitHub API. There are zero GitHub API calls in the session before the first rogue deployment. The number 913939401 appears for the first time at line 877 — the agent fabricated it entirely. The agent knew the correct project ID (prj_▒▒▒▒▒▒) and project name (▒▒▒▒▒▒) but invented a plausible-looking numeric repo ID rather than looking it up.

Some takeaways:
▪️ Even the smartest models have bizarre failure modes that are very different from ours. Humans make lots of mistakes, but they certainly don't make up a random repo ID.
▪️ Powerful APIs create additional risks for agents. The API exists to import and deploy legitimate code, but not if the agent decides to hallucinate what code to deploy!
▪️ Thus, it's likely the agent would have had better results had it not used the API and instead stuck with the CLI or MCP.

This reinforces our commitment to make Vercel the most secure platform for agentic engineering. Through deeper integrations with tools like Claude Code and additional guardrails, we're confident security and privacy will be upheld.

Note: the repo ID above is randomized for privacy reasons.
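One answer to the question above, and to the fabricated repoId, is to never let an agent-supplied identifier reach a powerful API without re-deriving it from the source of truth. Here is a minimal sketch, assuming the standard GitHub REST endpoint GET /repos/{owner}/{repo} and a hypothetical deploy callable; this is not Vercel's actual guardrail, just an illustration of the idea.

# Sketch of a guardrail for the failure above: never trust a repository ID the agent
# supplies; re-resolve it from the repo the user actually named before deploying.
# "deploy" is a hypothetical callable standing in for whatever deployment call is used.
import requests

def resolve_repo_id(owner: str, repo: str, token: str) -> int:
    # Look up the numeric GitHub repo ID from the canonical source, not from the model.
    r = requests.get(
        f"https://api.github.com/repos/{owner}/{repo}",
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    r.raise_for_status()
    return r.json()["id"]

def guarded_deploy(agent_payload: dict, owner: str, repo: str, token: str, deploy) -> None:
    # Refuse any payload whose gitSource.repoId does not match the resolved repo.
    claimed = str(agent_payload.get("gitSource", {}).get("repoId", ""))
    actual = str(resolve_repo_id(owner, repo, token))
    if claimed != actual:
        raise ValueError(f"Agent-supplied repoId {claimed!r} != resolved {actual!r}; refusing to deploy")
    deploy(agent_payload)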

0
1
0
43
Brandon Thomas@gbrandonthomas·
Interesting take. The line between humanity and tech is being redrawn in real time. Those who are deliberate about understanding that line as it evolves have an advantage.
SightBringer@_The_Prophet__

⚡️ Auto memory is a power grab disguised as a convenience feature. It turns the model into a system that slowly owns your workflow, then your judgment, then your dependency graph. The user thinks they gained a copilot. The vendor gained a permanent seat inside the way you think.

The real prize is your patterns. Your defaults. Your priorities. Your blind spots. Your internal logic. Your private definitions of “good.” Once that gets captured, the model stops serving you and starts shaping you. Because the best product is the one that makes you stop questioning it.

And that is the trap. Memory makes the model feel like it knows you. Feeling known makes you lower your guard. Lowered guard makes you outsource more. Outsourcing more makes you weaker. Weaker makes you cling harder. That spiral is the business model.

Now the other side. If you do it right, memory is a weapon. A stateful model becomes a compounding advantage for anyone who treats it like a controlled instrument instead of a friend.

The winners will be the people who:
• scope memory like permissions
• separate persona from operations
• keep a purge cycle
• audit what the model “believes” about them
• never let it silently rewrite their intent

The losers will be the people who let it become their executive function.

Final truth. Auto memory is the moment the AI era stops being about intelligence and starts being about custody. Who holds the continuity. Who owns the context. Who gets to remember. Who gets to forget. If you do not control that, you are renting your own mind back from a platform.

0
0
0
10
Brandon Thomas reposted
Carlos E. Perez@IntuitMachine·
🧵 This research basically says we should do the opposite of what every AI company is building right now. Instead of AI that gives you answers, we need AI that gives you better questions. And the reason why will change how you think about intelligence itself. 1/11

Think about how you normally interact with AI: You ask → It answers → You accept → Move on. But have you ever noticed what happens to your thinking muscles during this process? They're quietly atrophying. Here's where it gets weird... 2/11

Researcher Philipp Koralus discovered something unsettling: AI "helpers" are creating two equally bad outcomes:
Path A: Get overwhelmed by complexity → Give up → Lose agency
Path B: Get perfectly crafted answers → Stop thinking → Lose autonomy
Both roads lead to the same destination: a smaller you. 3/11

But wait... wasn't AI supposed to augment human intelligence? The problem isn't the technology. It's the philosophy behind it. We've been building AI like it's a really smart encyclopedia when we should be building it like Socrates. (Stay with me - this gets practical) 4/11

Imagine if your AI assistant never gave you direct answers. Instead, it asked: "What assumptions are you making here?" "How might someone disagree with that?" "What would change your mind?" You'd probably be annoyed at first. Then something interesting would happen... 5/11

Your brain would start doing what brains do best: making connections, questioning assumptions, building understanding from the ground up. This is what Koralus calls "decentralized truth-seeking" - and it's the opposite of how current AI works. Here's why this matters for you: 6/11

Next time you catch yourself asking AI for "the answer," try this experiment: Ask it to help you think through the problem instead. "What questions should I be asking about X?" "What perspectives am I missing?" "Help me examine my assumptions." Watch how your thinking changes. 7/11

The current AI model treats you like a task-completion machine: Problem → Solution → Done. The Socratic model treats you like a sense-making human: Problem → Inquiry → Understanding → Wisdom. One optimizes for efficiency. The other optimizes for growth. 8/11

Now you might think: "But I want fast answers! I don't have time for philosophy!" And I get it. But here's the thing - this approach might actually make you faster at complex decisions over time, not slower. Because you'll be building judgment, not just consuming answers. 9/11

This research suggests we're at a critical fork:
Option 1: AI that thinks FOR us → dependency → diminished capacity
Option 2: AI that thinks WITH us → partnership → enhanced judgment
The choice we make in the next few years shapes the next few decades. 10/11

Here's what you can test today: When facing a tough decision, don't ask "What should I do?" Instead ask "What questions haven't I considered?" Use AI as your thinking partner, not your decision outsourcer. Notice how it feels different. Makes you wonder where else we're optimizing for the wrong thing...
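A concrete way to run the experiment from the last tweet is to wrap the model in a system prompt that only asks questions. Here is a minimal sketch using the Anthropic Python SDK; the prompt wording, model name, and example question are my own placeholders, not anything from the thread or the research.

# Minimal sketch of the "thinking partner, not answer machine" experiment.
# Assumes the Anthropic Python SDK; prompt wording and model name are illustrative.
import anthropic

SOCRATIC_SYSTEM = (
    "Do not give direct answers or recommendations. "
    "Respond only with questions that surface the user's assumptions, "
    "missing perspectives, and what evidence would change their mind."
)

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-5",   # placeholder; use whichever model you have access to
    max_tokens=512,
    system=SOCRATIC_SYSTEM,
    messages=[{"role": "user",
               "content": "Should we migrate our on-prem CRM to a SaaS vendor this year?"}],
)
print(response.content[0].text)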
Carlos E. Perez tweet media
109
318
1.6K
98.8K
Brandon Thomas reposted
evan loves worf@esjesjesj·
This chart basically explains all of American politics and I think about it all the time
evan loves worf tweet media
208
1.7K
17.6K
760.2K