Papito 🍊
@papitozxc
53.3K posts

a JPEG degen that is just trying to carve his own path in the web3 sphere.

At the top of the world. Joined February 2011
2.2K Following · 508 Followers

Papito 🍊 retweeted
TheValueist @TheValueist
$NVDA $MU $SNDK $LITE - I listened to this Jensen interview in its entirety. The thing it unquestionably did was make me even more hyperbullish (if that is possible) on HBM/DRAM and networking. There is lots of FUD on GPUs, and the vagaries of the market will take advantage of it. No real incremental signal on SSD.

EXECUTIVE SUMMARY

The Dwarkesh Patel interview with Jensen Huang should be read less as a media appearance and more as a strategic manifesto for NVIDIA’s next phase. Huang’s central claim is that NVIDIA’s moat is not a narrow chip-spec lead but a control system spanning the whole AI stack: algorithms, libraries, system design, supply-chain orchestration, developer lock-in, and geopolitical standard-setting. The most important exchange is the China debate, because it reveals Huang’s real thesis: the decisive asset is not merely frontier compute scarcity but control of the software-hardware standard on which global AI is built. NVIDIA’s own FY2026 disclosures partly validate that thesis. The company stated that it was effectively foreclosed from China’s data-center compute market by the end of FY2026 and that this helped competitors build developer and customer ecosystems that can challenge NVIDIA worldwide. The gendered-value implication is indirect but material: the interview elevates a hierarchy of labor and legitimacy coded as masculine in both the U.S. and China (speed, domination, abstraction, winner-take-most competition, technical hardness, and total availability), while care, inclusion, and social reproduction remain largely invisible except when they become throughput constraints. (SEC)

THE INTERVIEW AS A THEORY OF POWER

Huang repeatedly defines NVIDIA as the entity that converts electrons into tokens and insists that the company should do as much as necessary and as little as possible. That phrase is not rhetorical filler. It is a doctrine of capital-light control.
NVIDIA will build what nobody else will build, invest where the ecosystem would otherwise fail to form, and stop short of becoming the cloud, the financier, or the application winner unless forced to do so. The interview applies that doctrine across supply commitments, neocloud financing, lab investments, annual roadmap cadence, pricing behavior during shortages, and global partner management. GTC is described less as a conference than as a market-coordination mechanism in which upstream suppliers, downstream customers, model builders, and startups are taught to plan around NVIDIA’s roadmap.

This is why Huang’s moat argument is broader than conventional semiconductor framing. The claim is that GPUs do not win because matrix multiply hardware is inherently unbeatable. They win because the programmable stack allows NVIDIA to repeatedly re-optimize processors, fabrics, libraries, and algorithms together as models change. That is strategically coherent. Stanford HAI reports that the U.S.-China frontier-model performance gap had narrowed to 2.7% by March 2026, which is a reminder that algorithmic and software iteration can compress pure hardware leads faster than standard semiconductor cycles would imply. At the same time, Huang’s insistence that NVIDIA is far broader than AI must be balanced against the company’s actual economics: FY2026 data-center revenue was $193.7B of $215.9B total revenue, or about 90%, so the company is conceptually diversified but financially AI-centric. (Stanford HAI)

An additional underappreciated element of the moat is relational power. Huang emphasizes stable pricing even in shortages, FIFO allocation subject to purchase orders and data-center readiness, and a trust-based planning relationship with TSMC. These softer-coded behaviors are not a deviation from the moat; they are part of it. Customers are being asked to treat NVIDIA not as an opportunistic scarcity seller but as industrial infrastructure.
That helps explain why gross margin can remain unusually high even as system complexity rises and competitive narratives proliferate. FY2026 gross margin was still 71.1% even after the H20-related charge and a mix shift toward more complex full-scale data-center solutions. The same FY2026 filing shows that two direct customers accounted for 22% and 14% of revenue, which means the moat remains strong but bargaining-power concentration is real. (SEC)

WHERE HUANG IS STRONGEST

Huang is strongest when arguing that AI moats are not reducible to dies. The fact that Chinese challengers are increasingly emphasizing compatibility with CUDA rather than complete escape from it is itself evidence that developer-standard control has real value. Reuters reported that Huawei’s 950PR was being positioned as more compatible with NVIDIA’s CUDA software system, while Stanford HAI argued that Chinese open-weight labs are prioritizing computational efficiency and flexible downstream deployment rather than only brute-force scale. That is consistent with Huang’s claim that the winning platform is the one developers program against first, the one clouds can operate across the broadest customer base, and the one that can absorb algorithmic change fastest. (Reuters)

He is also strong when describing supply-chain orchestration as a moat. NVIDIA’s 10-K shows that ecosystem financing is now balance-sheet material, not just narrative. The company disclosed $17.5B of investments in private companies and infrastructure funds, primarily to support early-stage startups, plus $3.5B of land, power, and shell guarantees to support datacenter buildout. That aligns with the interview’s repeated description of NVIDIA as the actor that prefetches bottlenecks, commits upstream, and ensures that the next capacity layer exists before customers are ready to buy it. The moat, in that sense, is partly the ability to socialize capex risk across the ecosystem while keeping platform control centralized.
(SEC)

WHERE HUANG IS WEAKER

Huang is weaker when dismissing marginal-compute concerns as essentially irrelevant. The cyber risk scenario Patel raised is not science fiction. Anthropic stated that Claude Mythos Preview was capable of identifying and exploiting 0-day vulnerabilities across every major operating system and every major web browser it tested, and the oldest such bug it found was a patched 27-year-old OpenBSD vulnerability. The UK AI Security Institute separately found materially improved multi-step offensive cyber performance, while still stopping short of claims about highly defended enterprises. In that environment, marginal training and inference compute can matter even if China already has nonzero compute capacity. Patel’s question was therefore strategically serious, not unserious. (Anthropic)

He also overstates some empirical claims. Huang’s loose formulation that China has 50% of the world’s AI researchers is directionally consistent with China’s enormous talent base but numerically imprecise. MacroPolo’s current tracker says 38% of top AI researchers were educated in China in 2024, while its broader archive framing says 47% of the world’s top AI researchers were trained in China and 72% of them now work in the U.S. Public GitHub data similarly do not support a literal reading of the claim that China is the world’s largest open-source contributor. In 2025, India had the largest public and open-source contributor base on GitHub, while the U.S. remained the largest GitHub developer population and led total contributions. The directional point remains valid (China’s developer base matters enormously), but the numerical rhetoric is aggressive. (MacroPolo Archive)

THE CHINA ARGUMENT

The most important contradiction in the interview is apparent rather than real. Huang argues, first, that China already has enough compute, energy, and talent that absolute denial is impossible.
He argues, second, that excluding NVIDIA from China is strategically disastrous because it pushes Chinese developers and model builders onto domestic stacks that can later diffuse globally. Those claims can coexist. They imply that export controls may have limited absolute effect on China’s long-run AI capability while having very large relative effects on ecosystem direction, developer habits, and standard-setting. NVIDIA’s own 10-K materially supports the second claim. The company stated that effective foreclosure from China helped competitors build larger developer and customer ecosystems to challenge NVIDIA worldwide. (SEC)

Current facts support a balanced rather than absolutist view. Stanford HAI reports that the U.S. still leads massively in datacenter footprint, with 5,427 datacenters, and in private AI investment, with $285.9B in 2025 versus $12.4B in China. At the same time, Stanford reports that the frontier-model gap between the U.S. and China had narrowed to 2.7% by March 2026. NVIDIA’s China-headquartered revenue fell from $25.0B in FY2025 to $19.7B in FY2026, about 9% of FY2026 revenue, although the company warns that geographic revenue is based on customer headquarters rather than ultimate end use. The same filing states that under the then-current rules and geopolitical landscape, NVIDIA was unable to create and deliver a competitive approved product for China’s data-center market. That is direct evidence that the company’s China problem is not theoretical. (Stanford HAI)

The policy environment is also more nuanced than either absolutist camp suggests. BIS further tightened China-related restrictions in March 2025, including Entity List actions tied to advanced AI, supercomputing, and high-performance AI chip development for military-linked Chinese end users. Yet BIS revised licensing policy in January 2026 to case-by-case review for H200-, MI325X-, and similar-chip exports if specified security requirements were met.
Export control policy is therefore simultaneously a national-security instrument, a bargaining lever, and a force reshaping private capital allocation. NVIDIA’s filings show the operational cost of that whiplash: a $4.5B H20 charge in FY2026 and only about $60M of H20 revenue under limited licenses. (Bureau of Industry and Security)

China, meanwhile, is not standing still. Reuters reported that Huawei prepared mass shipments of Ascend 910C in 2025, that U.S. officials estimated Huawei could produce no more than 200,000 advanced AI chips in 2025 but warned China was catching up quickly, and that Huawei’s newer 950PR is attracting interest from ByteDance and Alibaba, with around 750,000 units planned for 2026 and greater CUDA compatibility. CSIS argues that allied export controls have galvanized China into developing autonomous semiconductor design, manufacturing, and infrastructure capabilities, and cites TrendForce projections that domestic suppliers could reach 50% of China’s AI chip market in 2026. The proper inference is not that controls are futile. The proper inference is that controls buy time but also intensify substitution and localization. (Reuters)

THE GENDERED VALUE SYSTEM: ANALYTIC FRAME

The interview does not discuss gender explicitly. The relevant lens is therefore not simple representation but the distribution of status among forms of work, authority, and social legitimacy. Gender research on STEM identifies “masculine defaults,” in which cultures treat male-coded traits and characteristics as standard and reward them disproportionately. Research on engineering culture likewise finds that meritocracy and individualism can mute critique and reproduce hierarchy even when participants directly experience exclusion. Huang’s language maps closely onto that template. The ideal actor in the interview is relentlessly available, strategically aggressive, technically abstract, comfortable with scale, and oriented toward winning.
Caution, dependence, and social mediation are either minimized or reframed as weakness, delay, or throughput loss. (Sage Journals)

A three-tier hierarchy of valued labor is visible. Tier 1 is frontier technical and allocative labor: architects, kernel engineers, system designers, roadmap owners, capital allocators, and CEOs who coordinate global supply chains. Tier 2 is enabling infrastructure labor: cloud operators, packaging partners, network builders, and, when bottlenecks bite, electricians and plumbers. Tier 3 is structurally necessary but largely invisible labor: care, teaching, clerical coordination, compliance, emotional labor, and family reproduction. Huang briefly acknowledges the hidden layer when he distinguishes the job of a radiologist from the task of image reading and when he elevates plumbers as critical bottlenecks. But those recognitions remain instrumentally framed. They matter because the system needs them to keep scaling, not because they reorganize what the system treats as socially central. The result is gender hierarchy by valuation rather than by formal exclusion. (Sage Journals)

IMPLICATIONS FOR THE U.S.

In the U.S., this value system interacts with an already unequal technical pipeline. Current BLS household data show that in 2025 women were 26.0% of computer occupations and 19.3% of architecture and engineering occupations. McKinsey reports that only 93 women are promoted to manager for every 100 men, that entry-level women have meaningfully less sponsorship, and that only 21% of entry-level women are encouraged by managers to use AI tools versus 33% of men. LeanIn finds heavier workplace AI use among men as well, with 33% of men versus 27% of women using AI daily or constantly. A global study covering 18 datasets and 143,008 people finds women are about 20% less likely than men to directly use generative AI. In a frontier economy where promotion, pay, and status increasingly attach to AI fluency, these are not peripheral HR issues.
They are mechanisms of compounding advantage. (Bureau of Labor Statistics)

The interview’s rhetoric also has governance consequences in the U.S. PNAS Nexus research using U.S. and Canadian survey data finds that women perceive AI as riskier than men do, are more skeptical of its benefits, and are more supportive of slowing adoption when employment risks rise. Huang repeatedly opposes mature nuance to what he calls loser-style, absolutist, or childish fear. That framing is strategically effective if the goal is accelerated buildout. But it also tends to delegitimize precisely the voices most likely to foreground uncertainty, social downside, or institutional fragility. The result is a discourse that can underweight resilience, governance, and distributional risk at the exact moment when those issues are becoming economically material. (OUP Academic)

The U.S. implication is therefore not simply that men benefit more than women. It is that the social definition of valuable work shifts toward roles already disproportionately held by men and away from roles where women are more present or where women’s career progression depends more heavily on sponsorship, managerial recognition, and organizational slack. Even the interview’s partial rehabilitation of nonsoftware labor (plumbers, electricians, radiologists) does not challenge the underlying value system. It expands the coalition of labor that serves AI factories, but it does so inside the same high-intensity, throughput-first, male-coded industrial order. (McKinsey & Company)

IMPLICATIONS FOR CHINA

In China, the gendered pattern is different in form but not in direction. Female labor-force participation remains comparatively high, and Chinese tech has produced significant female entrepreneurship. Reuters cites 60.5% female employment in 2023, above both the U.S. and the UK, and notes that 41% of Chinese tech companies in a 2020 study had female founders. But that broad participation coexists with sharp stratification.
Reuters also reports that women’s representation falls to 22% in middle management, that 61.1% of female respondents in a 2023 survey had faced family-status questions in interviews, and that traditional breadwinner-caregiver expectations remain deeply rooted. Broad labor participation therefore does not eliminate the gendered ranking of technical authority and career durability. (Reuters)

Research on China’s IT sector makes the mechanism more explicit. Programming work is often split between technical and social aspects, with the technical side associated with logic, rationality, abstract thinking, and deep engagement with code. Overtime culture then defines the ideal worker as continuously available, while traditional gender-role expectations make that model harder for women to satisfy without career penalty. Chatham House adds that workplace AI in China often reproduces or even operationalizes employer bias because recruitment systems are built in a labor market where age, gender, and marital status remain salient filters. Under an AI industrial strategy centered on stack localization, hardware self-sufficiency, and national competition, women can participate at scale while still being filtered away from the most prestigious technical core or penalized at promotion points linked to marriage and childbearing. (Springer)

This is why Huang’s preferred geopolitical outcome, keeping Chinese developers on the American stack, would not dissolve the gendered hierarchy inside China. It would more likely globalize a U.S.-centered technical regime that remains male-dominated at the highest-status layers, even as it competes with a Chinese alternative that is itself shaped by long-hours culture, state priorities, and family-role expectations. The competition is therefore not between a gendered system and a neutral one. It is between two differently organized, differently justified, but structurally similar high-performance technical orders.
(Springer)

COMPARATIVE SYNTHESIS

The U.S. and China converge more than they diverge at the level of the gendered value system. In both systems, the highest-status labor is abstract rather than relational, large-scale rather than intimate, aggressive rather than precautionary, and always-on rather than bounded. In both systems, frontier engineering is treated as civilizational labor, while care, training, administration, and governance are secondary until they become bottlenecks. The difference is mainly in legitimation. The U.S. version is meritocratic, founder-led, and capital-market intensive. The Chinese version is state-steered, overtime-tolerant, and more openly shaped by family-role expectations. Huang’s argument that the American tech stack should diffuse through China and the global South is therefore also an argument about exporting a particular model of status and legitimacy, not just a chip platform. (Sage Journals)

The most revealing line in comparative terms may be Huang’s sudden elevation of plumbers and electricians. That moment shows that the system can value maintenance, but only after maintenance threatens scale. It does not displace the top of the hierarchy; it temporarily widens the set of labor categories that are permitted prestige. That is not a transition to a less gendered order. It is a shift from purely digital elite masculinity toward a broader industrial masculinity in which the valued worker is still defined by hardness, endurance, technical mastery, and availability to large systems. (Sage Journals)

INVESTMENT IMPLICATIONS

For investors, the deepest takeaway is that NVIDIA should be modeled as a coordinator of standards, capital, and labor prestige rather than as a pure semiconductor vendor. Gross margin remains exceptional at 71.1% despite the H20 charge and a mix shift toward more complex datacenter systems. The deeper long-duration asset, however, is not just margin.
It is the ability to make every layer of the ecosystem organize around NVIDIA’s roadmap, tools, and optimization practices. The largest strategic risk is not one competing accelerator in isolation. It is ecosystem bifurcation: a Chinese open-weight plus domestic-chip stack that becomes good enough, locally sticky, and globally exportable, especially across price-sensitive regions. Stanford HAI already argues that Chinese open-weight models have caught up or even pulled ahead in some dimensions and are now unavoidable in the global competitive landscape. (SEC)

A second-order but economically material implication is that gendered adoption and promotion gaps will affect who captures AI productivity gains. If women are less sponsored, less encouraged to use AI, slower to adopt it, and more likely to be positioned outside the highest-status technical core, then firms and national systems are extracting less value from a large share of their talent base. In the U.S., that means lower enterprise absorption and weaker leadership diversity in the very functions now receiving the most capital. In China, it means that a self-sufficiency drive can still leave part of the labor pool underutilized or channeled into lower-status roles. In both countries, the gendered value system reduces adaptability precisely when AI is supposed to expand it. (McKinsey & Company)

BOTTOM LINE

Huang is most convincing on ecosystem economics and least convincing on the claim that marginal compute access for China is strategically unimportant. The SEC filing, Stanford data, Reuters reporting, BIS actions, and the interview itself all point to a more complex equilibrium. The U.S. still leads in capital, datacenters, and some frontier capabilities. China has nevertheless narrowed model-performance gaps, is localizing its stack faster, and is increasingly capable of turning exclusion into substitution.
The gendered consequence in both countries is the further elevation of a frontier-AI value order organized around male-coded forms of labor and authority. Unless intentionally counterbalanced, that order will continue to route capital, status, and political legitimacy toward the hardest technical core while recognizing care, training, governance, and social reproduction only when they threaten throughput. (Stanford HAI)
Dwarkesh Patel @dwarkesh_sp

The Jensen Huang episode.
0:00:00 – Is Nvidia’s biggest moat its grip on scarce supply chains?
0:16:25 – Will TPUs break Nvidia’s hold on AI compute?
0:41:06 – Why doesn’t Nvidia become a hyperscaler?
0:57:36 – Should we be selling AI chips to China?
1:35:06 – Why doesn’t Nvidia make multiple different chip architectures?
Look up Dwarkesh Podcast on YouTube, Apple Podcasts, Spotify, etc. Enjoy!

Papito 🍊 retweeted
Lenny Rachitsky @lennysan
.@rabois: Most companies hire ammunition when they really need barrels. "Most companies raise money, and then they hire a lot of people. And then the CEO, almost without exception, gets frustrated because they've hired a lot of people, their burn rate has increased, and they don't feel like more is getting accomplished per unit of time. The fundamental driver of this is the number of people that can independently drive an initiative from inception to success is very limited within most companies. If you hire more people without expanding the number of what I call 'barrels,' that can drive ideas from inception to success, all you're doing is stacking people behind the same initiatives." The ratio of barrels to ammunition is what determines the number of important things a company can pursue simultaneously.
Lenny Rachitsky @lennysan

"High performance machines don't have psychological safety. They're about winning." Keith Rabois (@rabois) was COO of Square, part of the PayPal Mafia, an early investor in Stripe, Palantir, Airbnb, DoorDash, and Ramp, and a 2x founder. He's spent 25 years obsessing over how to build world-class teams. In our in-depth conversation, we discuss: 🔸 How to identify undiscovered talent 🔸 Keith's barrels vs. ammunition hiring framework 🔸 The three traits of the best-performing companies right now 🔸 Why talking to customers is actively harmful for consumer products 🔸 Why the PM role is dying 🔸 The specific interview question he asks every senior candidate 🔸 Why CMOs (not engineers) are becoming the #1 consumer of AI tokens Watch now 👇 youtu.be/xCd9ykretlg

Papito 🍊 retweeted
Founder Mode @Founder_Mode_
Marc Andreessen worked at IBM at the peak of their power. There were 12 layers of management between him as an intern and the CEO. He describes what that meant: if one layer lies to the layer above it, maybe that's okay. Two or three layers, the lies compound. Six layers, they really compound. Twelve layers... The CEO has absolutely no idea what's happening inside his own company. IBM even had a name for it. They called it the Big Gray Cloud, the cloud of men in gray suits that followed the CEO everywhere and made it physically impossible for him to ever talk to someone actually doing the work. "It was like a visit from the king. The king and the traveling court. A completely impervious bubble." That company controlled 80% of tech. Then it didn't. Elon looked at that model and built the opposite. The most dangerous thing in a large organization isn't incompetence. It's the distance between the truth and the top.
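The compounding Andreessen describes can be made concrete with a toy calculation. The 90% per-layer fidelity below is an illustrative assumption, not a figure from the post:

```python
# Toy model of information loss across management layers: if each layer
# passes along only a fraction of the truth, distortion compounds
# geometrically. The 0.9 per-layer fidelity is an assumed figure.
fidelity_per_layer = 0.9
for layers in (1, 3, 6, 12):
    surviving_signal = fidelity_per_layer ** layers
    print(f"{layers:>2} layers -> {surviving_signal:.0%} of the signal survives")
```

Under that assumption, barely a quarter of the original signal reaches the top after twelve layers, which is the mechanism the post attributes to IBM's Big Gray Cloud.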
Papito 🍊 retweeted
Aakash Gupta @aakashgupta
Boris Cherny created Claude Code. It hit $2.5 billion in annualized revenue in 9 months. Fastest B2B product ramp in history. Faster than ChatGPT, Slack, or Snowflake ever reached $1 billion. Now he says coding is “solved” and IDEs will be dead by end of year.
Papito 🍊 retweeted
Kirill @kirillk_web3
Skip Netflix Tonight. Watch This Instead. 4 hour Claude Code guide. More valuable than every AI reel, thread, and tweet you've consumed this entire year. Same 4 hours. One makes you feel good. The other makes you money. You saved 500 AI posts this year and did nothing with any of them. This is the one you actually use. Watch this guide and stop trading your time for $63,000 a year. Bookmark this. Watch it today. Not tonight. Not this weekend. Today. Guide below.
Kirill @kirillk_web3

CLAUDE FULL COURSE 4 HOURS This is the most detailed Claude guide I’ve seen online. Bookmark this before you forget. 4 hours. Build tools. Automate work. Learn how people build bots and systems. Claude → Tools → Automation → Products → Money

Papito 🍊 retweeted
Kirill @kirillk_web3
🚨do you understand what two Anthropic engineers just explained in 16 minutes. Barry and Mahesh built Claude Skills from scratch. here's the part nobody is talking about: > Skills are just folders. > folders that teach Claude your job. > your workflow. your expertise. your domain. Claude on day 30 is a completely different tool than day one. watch this before you write another prompt. before you build another agent. before you touch another tool. 16 minutes. bookmark it. watch it today. and if you want to learn everything about Claude from scratch the full 4 hour guide is waiting below.
Kirill @kirillk_web3

CLAUDE FULL COURSE 4 HOURS This is the most detailed Claude guide I’ve seen online. Bookmark this before you forget. 4 hours. Build tools. Automate work. Learn how people build bots and systems. Claude → Tools → Automation → Products → Money

Papito 🍊 retweeted
Dimitry Nakhla | Babylon Capital®
Investors may be underestimating the optionality of $META AI glasses. In a conversation with Jensen Huang, Mark Zuckerberg laid out the vision — and it’s bigger than most realize. Ray-Ban $META glasses started with a simple constraint: make something that looks great, and fit as much technology as possible within that form factor. Camera, microphone, speakers, live streaming, WhatsApp video calls. But then the unexpected happened — that same sensor package turned out to be exactly what you need to talk to AI. That wasn’t the plan. It happened organically. And it changes the trajectory of the product potential. Zuckerberg’s projection: displayless AI glasses at a ~$300 price point becoming a product that tens — potentially hundreds — of millions of people eventually own. Ambient AI interaction built into something people already wear. Real-time translation. Visual language understanding. Etc. The optionality here is meaningful. $NVDA $EL.PA ___ YouTube: Nvidia | AI and The Next Computing Platforms With Jensen Huang and Mark Zuckerberg (07/29/2024)
Papito 🍊 retweeted
Rony @Ronycoder
Instead of watching Netflix, watch this 1-hour Yale lecture by Professor Ben Polak. It will change how you think about decisions in negotiations, business, and everyday life.
Papito 🍊 retweeted
Startup Archive @StartupArchive_
Brian Armstrong explains how he built Coinbase on nights and weekends while working at Airbnb Brian first advises those who are currently employed to not build your project on company hours or on your company laptop: “If you build it on company time or on the company hardware, the company probably owns the IP.” Then he describes his schedule for working on Coinbase while still working full-time at Airbnb. “I would often work [at Airbnb] until 7pm. I’d come home, eat dinner, and then I would work from 8pm to midnight. I would do that maybe 3-4 days a week on weekdays. And then on the weekend I’d work Sunday afternoon for 7-8 hours.” Brian did this consistently for about a year and a half until Coinbase was far enough along for him to get seed funding from Y Combinator. “It sucked. I mean I was tired after the full day of work [at Airbnb]. But this is where determination comes in… At that moment in time, I was in my late 20s, and I was like, ‘I really want to try to build something important in the world.’” When asked how he maintained friendships during this time, Brian replies: “I was pretty intense about it. I would say I sacrificed friendships for it. It’s not like I was just never responding to people, but I’ve seen this happen to various people. They get to a certain point in their life. Sometimes they turn a certain age where they thought they would have more done by then or maybe someone in their family passes away and they’re like, Oh my god, time is finite. It’s precious. And something happens where they’re like, ‘I’m going to get this done, no matter the cost.’” Brian tells those out there who might be in a similar situation: “Go hard at it. Finish your book. Launch your thing. Just start doing stuff - and even if you don’t know what to do, just do anything, because action will produce information and it’ll help you get to the right thing.” Video source: @StevenBartlett (2022)
Papito 🍊 retweeted
prinz @deredleritt3r
Anthropic revenue (annualized):
- January 2025: $1B
- May: $3B
- June: $4B
- August: $5B
- October: $7B
- December: $8B to $10B
- February 2026: $14B
- March 2026: $19B
- April 2026: $30B (WTF???)
Anthropic @AnthropicAI

Our run-rate revenue has surpassed $30 billion, up from $9 billion at the end of 2025, as demand for Claude continues to accelerate. This partnership gives us the compute to keep pace. Read more: anthropic.com/news/google-br…
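As a quick sanity check on the trajectory in the thread, here is a sketch using only its two endpoints ($1B in January 2025, $30B in April 2026); the 15-month span is inferred from those dates:

```python
# Implied compound monthly growth of Anthropic's annualized run-rate,
# using the thread's endpoint figures. Intermediate months are ignored.
months = 15  # January 2025 to April 2026
start_run_rate, end_run_rate = 1.0, 30.0  # in $B, from the thread
monthly_growth = (end_run_rate / start_run_rate) ** (1 / months) - 1
print(f"{monthly_growth:.1%} compounded per month")
```

That works out to roughly 25% compounded month over month, which is why the final jump reads as implausible at first glance even though it is consistent with the earlier figures.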

Papito 🍊 retweeted
Aakash Gupta @aakashgupta
Perplexity is a $20 billion company that built zero AI models. Their product sits on top of 19 models made by other companies. Claude for reasoning. Gemini for research. GPT-5.4 for long context. Grok for lightweight tasks. Nano Banana for images. Veo 3.1 for video. You write one prompt. Computer picks the best model combo for the job, spawns sub-agents in parallel, and runs the whole thing in a cloud sandbox while your laptop is closed. 400+ app connectors. Gmail, GitHub, Snowflake, Salesforce, Ahrefs, Shopify. Read and write access. One prompt can scrape your competitors, pull live financials from FactSet, query your data warehouse in plain English, and push a finished report to Google Slides. No API keys. No terminal. The enterprise usage data tells you where this is heading. In January 2025, 90% of enterprise tasks on Perplexity ran on two models. By December, no single model held more than 25% of usage. A new frontier model launched every 17.5 days in 2025. Each one brought different strengths. The era of picking one model is ending. Perplexity built none of the intelligence. They built the routing layer that makes the intelligence usable. Stripe didn't build the banks. Google didn't build the websites. The value is in making complexity disappear. Four of the Mag Seven already use Perplexity's search API in production. Every model provider is now building orchestration in-house. The question is whether the routing layer stays independent or gets absorbed. I wrote the complete guide to using Computer without wasting credits. 6 use cases, the prompt spec that controls cost, honest limitations. aibyaakash.com/p/perplexity-c…
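The routing layer the post describes can be sketched minimally. Everything below (the task labels, the keyword classifier, the model names) is an illustrative placeholder, not Perplexity's actual logic:

```python
# Toy sketch of a model-routing layer: the router, not the models, is the
# product. Task labels, models, and the classifier rule are all assumptions.
ROUTES = {
    "reasoning": "claude",
    "research": "gemini",
    "long_context": "gpt",
    "lightweight": "grok",
}

def classify(prompt: str) -> str:
    """Naive keyword/length classifier; a real router would learn from usage data."""
    lowered = prompt.lower()
    if len(prompt) > 2000:
        return "long_context"
    if "why" in lowered or "prove" in lowered:
        return "reasoning"
    if "sources" in lowered:
        return "research"
    return "lightweight"

def route(prompt: str) -> str:
    """Pick the backend model for a prompt."""
    return ROUTES[classify(prompt)]

print(route("Why does this algorithm terminate?"))  # claude
```

The design point the post makes is visible even in this sketch: the models are swappable values in a table, and all the durable logic lives in the routing decision.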
[image attached]
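The "routing layer" idea in the post above can be sketched in a few lines: one prompt comes in, a classifier decides what kind of task it is, and the request is dispatched to whichever model is strongest for that task. Everything below is illustrative — the model names follow the post's examples, but the `classify()` heuristic and the whole structure are my own toy assumptions, not Perplexity's actual implementation:

```python
# Toy model-routing layer. A real router would use a learned classifier
# and live latency/cost/quality signals, not keyword checks.
ROUTES = {
    "reasoning":    "claude",   # multi-step analysis
    "research":     "gemini",   # deep research with sources
    "long_context": "gpt",      # very large inputs
    "lightweight":  "grok",     # quick lookups
}

def classify(prompt: str) -> str:
    """Hypothetical task classifier based on crude prompt features."""
    p = prompt.lower()
    if len(p) > 2000:
        return "long_context"
    if "prove" in p or "step by step" in p:
        return "reasoning"
    if "research" in p or "sources" in p:
        return "research"
    return "lightweight"

def route(prompt: str) -> str:
    """Pick the backend model for a prompt."""
    return ROUTES[classify(prompt)]

print(route("research the competitive landscape with sources"))  # gemini
print(route("what time is it in Tokyo"))                         # grok
```

The point of the sketch is the economics, not the code: the router owns the user relationship and treats every frontier model as an interchangeable supplier, which is exactly the position the post argues is valuable.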
Papito 🍊 retweeted
Jaynit (@jaynitx)
In 2013, Yale professor Ben Polak gave a legendary 1-hour lecture on Game Theory. It will change how you make decisions in negotiations, business, and life. His frameworks: • Dominance arguments • Backward induction • The proactive bias 12 lessons to make better decisions:
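Backward induction, one of the frameworks listed above, means solving a sequential game from the terminal payoffs backwards: at each node, the player to move picks the branch that maximizes their own eventual payoff, given that everyone downstream does the same. A minimal sketch on an invented two-move game (this tree is my own illustrative example, not one from Polak's lecture):

```python
# A node is either a terminal payoff pair (p1, p2), or (player, [children])
# where `player` is the player to move (1 or 2).
tree = (1, [                        # player 1 moves first: Left or Right
    (2, [(3, 1), (0, 0)]),          # after Left, player 2 chooses
    (2, [(2, 2), (1, 3)]),          # after Right, player 2 chooses
])

def solve(node):
    """Backward induction: return the payoff pair reached under optimal play."""
    if isinstance(node[0], int) and not isinstance(node[1], list):
        return node                           # terminal payoff (p1, p2)
    player, children = node
    outcomes = [solve(c) for c in children]   # solve subgames first
    # The mover picks the branch maximizing their own payoff.
    return max(outcomes, key=lambda pay: pay[player - 1])

print(solve(tree))  # (3, 1)
```

Here player 1 goes Left: anticipating that after Right, player 2 would choose the branch giving (1, 3), player 1 prefers the Left subgame, where player 2's best reply yields (3, 1). Reasoning from the last move first is the whole trick.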
Papito 🍊 retweeted
June Goh (@JuneGoh_Sparta)
What does the reopening of the Strait of Hormuz look like? I presented this to my clients last week. Logistics will be messy. Confidence needs to be rebuilt. Unconditional is the word. It's going to take time, guys. Don't hold your breath. #oott
[image attached]
Papito 🍊 retweeted
Aakash Gupta (@aakashgupta)
Oracle just mass-fired 18% of its workforce via a 6 AM email signed by nobody. No manager call. No HR meeting. An email from "Oracle Leadership," a name that doesn't belong to a single human being, landing in inboxes across 5 countries simultaneously. System access cut before most people finished reading the second paragraph.

The financial math makes this even colder. Oracle posted $6.13 billion in net income last quarter. Revenue obligations are up 433% year over year to $523 billion. This is a company printing money. But Larry Ellison bet Oracle's future on AI data centers, and the bill came due. $156 billion in planned capex. $58 billion in new debt raised in two months. The stock cratered from $345 to $140, a 60% collapse from September's peak. Multiple banks have already backed away from financing the data center projects.

So Oracle did what Oracle does. It turned 30,000 people into a line item. TD Cowen estimated the cuts free up $8 to $10 billion in cash flow. That's the entire point. The layoffs aren't a response to poor performance. They're the funding mechanism for an infrastructure bet the balance sheet can't support. The stock went up 6% on the news. Wall Street rewarded Oracle for firing 30,000 people because it moved the free cash flow number in the right direction.

The SEC filing already had $2.1 billion earmarked for restructuring, with $982 million spent before today. They budgeted this. The termination emails were a calendar event. Hardest-hit divisions lost 30% of headcount in a single morning. Unvested RSUs forfeited on the spot. April 3 is the last working day. One-month garden leave. Done.

A company with record profits just eliminated the equivalent of a small city's workforce to pay for data centers it can't afford with revenue it already has. The 6 AM email was the quiet part out loud.
unusual_whales (@unusual_whales)

BREAKING: Oracle has reportedly begun layoffs, with 30,000 employees likely to be fired, per the Deccan Herald.

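A quick check of the arithmetic in the post above (all input figures are as quoted there; the percentage math is mine):

```python
# Stock collapse: $345 September peak to $140.
peak, trough = 345, 140
drawdown = 1 - trough / peak
print(f"peak-to-trough drawdown: {drawdown:.0%}")   # ~59%, the quoted "60% collapse"

# Restructuring reserve from the SEC filing, in $B.
budget = 2.1     # earmarked for restructuring
spent = 0.982    # already spent "before today"
print(f"restructuring budget remaining: ${budget - spent:.3f}B")
```

The numbers are internally consistent: the quoted "60% collapse" rounds up from a ~59% drawdown, and roughly half the restructuring reserve was still unspent when the emails went out, which is the basis for the "they budgeted this" claim.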
Papito 🍊 retweeted
introvert (@introvertsmemes)
ZXX
Papito 🍊 retweeted
Aakash Gupta (@aakashgupta)
The timeline on this is genuinely insane.

October 2025: Sam Altman flies to Seoul and signs simultaneous deals with Samsung and SK Hynix for 900,000 DRAM wafers per month. That's 40% of global supply. Neither company knew the other was signing a near-identical commitment at the same time. Those deals were letters of intent. Non-binding. No RAM actually changed hands. But the market treated them as gospel. Contract DRAM prices jumped 171%. A 64GB DDR5 kit went from $190 to $700 in three months.

December 2025: Micron kills Crucial, its 29-year-old consumer memory brand, to reallocate every wafer to AI and enterprise customers. The company explicitly said it was exiting consumer memory to "improve supply and support for our larger, strategic customers in faster-growing segments." Translation: the AI demand signal was so loud that selling RAM to PC builders stopped making financial sense.

March 2026: Google publishes TurboQuant, a compression algorithm that reduces AI memory requirements by 6x with zero accuracy loss. Cloudflare's CEO called it "Google's DeepSeek." The entire thesis that AI would consume infinite memory forever just got a six-month expiration date on it.

Same month: OpenAI and Oracle cancel the Abilene Stargate expansion. The $500 billion data center vision that justified the RAM deals couldn't survive its own financing terms. Bloomberg attributed the collapse partly to OpenAI's "often-changing demand forecasting."

MU is now down ~33% from its post-earnings high. Revenue up 196% year over year, EPS up 682%, and the stock is in freefall because the company restructured its entire business around a demand signal that came from non-binding letters and is now being compressed out of existence by a research paper.

Micron bet the consumer division on Sam Altman's signature. The signature was worth exactly what the paper said: nothing binding.
Grummz (@Grummz)

Imagine closing your entire consumer memory division because this guy signed a non-binding letter that he would buy 40% of the world's RAM. Only to have him rug pull 3 months later.

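Sanity-checking the DRAM figures quoted above (input numbers are from the post; the arithmetic and the compression interpretation are mine):

```python
# Retail kit price move: 64GB DDR5 from $190 to $700 in three months.
kit_before, kit_after = 190, 700
kit_jump = kit_after / kit_before - 1
# Note: this ~+268% retail-kit move is a separate figure from the quoted
# 171% jump in *contract* DRAM prices; the post lists both.
print(f"64GB kit price change: {kit_jump:+.0%}")

# A 6x memory-compression result (as TurboQuant is described) implies a
# given model would need only one-sixth of the previously assumed DRAM:
print(f"memory needed after 6x compression: {1/6:.1%} of before")
```

Read together, the two numbers frame the whipsaw: prices were bid up on a demand forecast, and a single research result, if it holds, cuts the memory side of that forecast by roughly 83%.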