Rezo🛡₿RRR
9.1K posts

@rezosh
crypto since 2015 | building where attention lags power

Nobody ships anything that matters on the first try. The only mistake is stopping at the first.

Perplexity scaled from $100M to $500M revenue on 34% headcount growth. Plans another 2x in 2026 while keeping team flat.

Meanwhile:
- OpenAI spent $41B last year
- Anthropic spent $17.5B

We're fixated on the frontier labs while tooling-layer companies quietly do the actual math. Revenue scaling 5x with minimal hiring is operating leverage, and it's available to every AI company running the numbers.

The real question: who captures the margin on that efficiency?
- If it's builders, then "built for builders" is the answer.
- If it's the model layer, then efficiency without moat is just a race to the bottom with extra steps.
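The leverage claim is just arithmetic, and it's worth running. A minimal sketch in Python — the 5x revenue and 34% headcount figures come from the post; the per-employee multiples are derived, not reported:

```python
# Operating-leverage math from the post (illustrative back-of-the-envelope only).
rev_multiple = 500 / 100        # revenue went from $100M to $500M -> 5x
headcount_multiple = 1.34       # team grew 34%

rev_per_head = rev_multiple / headcount_multiple
print(f"Revenue per employee: {rev_per_head:.1f}x")  # ~3.7x

# If 2026 really doubles revenue on a flat team, the leverage compounds:
rev_per_head_2026 = (rev_multiple * 2) / headcount_multiple
print(f"Implied 2026 revenue per employee: {rev_per_head_2026:.1f}x")  # ~7.5x
```

Roughly 3.7x more revenue per employee already, and ~7.5x if the 2026 plan lands — that's the gap between this and headcount-heavy scaling.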

The upgraded Google app for desktop is now available for Windows users globally in English. 💻✨ Use a simple keyboard shortcut (Alt + Space) to instantly find what you need—information from the web, your computer files, installed apps, and Google Drive files—all from the Search box. With AI Mode in Search and Lens built right in, you can ask whatever’s on your mind or search what’s on your screen to get helpful AI-powered responses with links to the web.

Two things people are complaining about with Anthropic: 1) Claude got dumber, and 2) Claude keeps crashing.

Telemetry from an AMD Senior AI Director across 6,852 sessions shows the "dumber" part is real:
- thinking tokens dropped 73%
- API retries surged 80x
- contradictions tripled

Anthropic confirmed they lowered the default effort from high to medium... /effort max restores it and the model itself is unchanged.

But here's what ties both complaints together. OpenAI's leaked COO memo put it plainly: "Anthropic made a strategic misstep to not acquire enough compute." The downtimes weren't random bad luck... the quality cuts weren't a product decision but infrastructure reality hitting a product.

There’s no single fix for the data center power bottleneck, but my top stock pick for 2026 addresses the most urgent one: time-to-power — helping bring new data centers online faster through on-site, behind-the-meter generation. $BE $NVDA io-fund.com/ai-stocks/bloo…



basically: anthropic sneakily turned down how hard claude thinks before editing code, changed the default from "high" to "medium" effort, and hid the reasoning from session logs. all without telling users. an amd director had 7k sessions of telemetry to prove the degradation was real and measurable (not just vibes). anthropic admitted to the changes. there's a workaround (use "/effort max"). the uncomfortable part is most users had no data to notice it happened at all.



Perplexity started as a small business tool for ourselves. We had 4 people and no revenue with AI at our fingertips. The pivot to Computer is actually a full circle. Founders are using it to grow companies that matter to the economy and their communities. It’s rewarding to see it now powering small businesses and startups in big ways. Perplexity is still a startup. We just 5X’ed revenue from $100M to $500M with only 34% growth in team size. 2x revenue growth in 2026 with same small team. And we’re just warming up. Everyone here works at a small business, and everything we build is for people who build.

What happens when the platform you're building on becomes THE competitor?

Full-stack apps right in Claude are what Anthropic is working on, so Lovable, Replit, Bolt, every vibe-coding platform should be asking themselves that question.

Think about what this means... Claude Code, Cowork, Desktop... now app builder. Not "just another feature" but an actual ecosystem play. Classic Big-Tech land-grab, just like Microsoft assembling the Office suite or Google building Workspace.

Different from before... the market is quietly, then suddenly, starting to believe that an AI-native company can displace incumbents, not just supplement them. $30B run-rate, 73% enterprise, over 70 releases in 50 days... not startup experimentation but a live demo of a platform seizing territory.


Claude AI grew 341.26% YoY in website traffic in Q1.

Meta surpassed Google in ad revenue, with $243B vs $239B. To be more precise, Meta won in behavioral surveillance. $243B isn't AI innovation, but the most massive surveillance infrastructure in history, packaged as an advertising product. AI just made targeting more precise and data more plentiful, and this all started long before GPT appeared. Nothing to celebrate here. This isn't innovation... it's consolidation of control over the attention and data of 3 billion people.
