Evergreen Capital

154 posts


@evergreencap3

Tech investor | Posting since March ‘26 | Views my own, not advice

Joined January 2022
866 Following · 885 Followers
Pinned Tweet
Evergreen Capital@evergreencap3·
Jensen's opening insight (making tokens more valuable) is the foundation of the bull case for the entire AI ecosystem:
> Better models produce more valuable tokens
> More valuable tokens power better apps
> Better apps perform better work
> Better work generates more customer value
> More customer value drives wider adoption and pricing power
> Wider adoption and pricing power boost revenue and margins for token producers
> Higher revenue and margins enable greater reinvestment in AI infrastructure
> Greater reinvestment fuels better models
With the recent surge in frontier capabilities and revenue, this virtuous cycle is accelerating meaningfully. $NVDA
Evergreen Capital tweet media
Dwarkesh Patel@dwarkesh_sp

The Jensen Huang episode. 0:00:00 – Is Nvidia’s biggest moat its grip on scarce supply chains? 0:16:25 – Will TPUs break Nvidia’s hold on AI compute? 0:41:06 – Why doesn’t Nvidia become a hyperscaler? 0:57:36 – Should we be selling AI chips to China? 1:35:06 – Why doesn’t Nvidia make multiple different chip architectures? Look up Dwarkesh Podcast on YouTube, Apple Podcasts, Spotify, etc. Enjoy!

0 · 0 · 11 · 784
Evergreen Capital@evergreencap3·
A non-consensus view: I expect $NVDA's competitive position will strengthen materially over ASICs in the coming years.

One major reason: as AI research itself becomes increasingly automated, it should rapidly accelerate the discovery of new model algorithms and architectures. The last few months have already delivered clear evidence: @karpathy's Autoresearch, Stanford's Meta-Harness, and @AnthropicAI's Automated Alignment paper released Tuesday. All point to a future in which far more AI research is done by AI.

The impact of this shift is two-fold. First, it frees human researchers to pursue vaguer, higher-risk bets at the frontier. Second, we're already seeing the first signs of AI generating genuinely novel discoveries in math and science. This combination should drive a sharp inflection in the rate of algorithmic and architectural breakthroughs.

It would also create extraordinary tailwinds for AI progress. Algorithmic and architectural improvements have historically been primary drivers of gains in both model intelligence and compute efficiency. These advances have accounted for >99% of Nvidia's >1,000× single-GPU inference improvement over the last decade (everything except the 2.5× from process-node shrinks; Dally, Hot Chips 2023), while the pre-training compute needed to reach a fixed performance threshold has halved every ~8 months (Ho et al. 2024).

Moreover, the rate at which labs can experiment and implement new discoveries at scale will remain a decisive competitive edge. It is precisely what lets them serve more intelligent tokens with greater efficiency (measured by cost per token or tokens per watt). In this world, the value of a general, flexible, and programmable platform rises materially. That favors GPUs over more specialized or fixed-function alternatives (exactly the point Jensen emphasized in yesterday's @dwarkesh_sp podcast).

Finally, Nvidia holds a significant lead in AI research itself: 91% of semiconductor papers in 2024 used Nvidia hardware, more than 25× the nearest competitor (Air Street Capital Compute Index 2024). It is therefore likely that novel algorithms and architectures will continue to be discovered, tested, and standardized on Nvidia platforms first.

So, we've already watched raw compute become a binding constraint at the frontier (cc: @anthropic). Will the next bottleneck be flexible compute? I suspect it will.
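A quick back-of-the-envelope check of the efficiency figures cited above, as a minimal sketch: the split of the ~1,000× gain reads the ">99%" claim as the share of the raw multiplier not explained by the 2.5× process-node shrink, and the halving cadence is the ~8-month figure from Ho et al. 2024. The 48-month horizon is an illustrative assumption, not from the source.

```python
# Sketch: sanity-check the multipliers cited in the tweet above.
total_gain = 1000      # ~1,000x single-GPU inference improvement over a decade
process_gain = 2.5     # portion attributed to process-node shrinks (Dally, Hot Chips 2023)

# Multiplicative split: the residual gain comes from algorithms,
# architecture, numerics, etc.
other_gain = total_gain / process_gain           # 400x
share = 1 - process_gain / total_gain            # share of the raw multiplier
print(f"non-process gain: {other_gain:.0f}x ({share:.2%} of the 1,000x)")

# Ho et al. 2024: pre-training compute to reach a fixed performance
# threshold halves every ~8 months. Illustrative 4-year horizon:
months = 48
reduction = 2 ** (months / 8)                    # 64x less compute needed
print(f"over {months} months: {reduction:.0f}x less compute for the same score")
```

One design note: measuring the non-process share in log space instead (log 400 / log 1000 ≈ 87%) gives a smaller number, so the ">99%" framing depends on reading it as a share of the raw multiplier.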
3 · 3 · 43 · 3.5K
Evergreen Capital@evergreencap3·
@RyeNotBerben That sounds like a great description of the HPC semiconductor industry over the last 10+ years!
0 · 0 · 1 · 95
Scott Jones@RyeNotBerben·
@evergreencap3 That's quite an imagination. But $NVDA vs. everyone else, in a rapidly changing environment, is an impossible hurdle.
1 · 0 · 1 · 181
Evergreen Capital@evergreencap3·
Hey @nikitabier, any chance we can better separate the ‘delete’ and ‘save’ buttons here? I can’t tell you how many times I’ve fat-finger deleted something from my drafts
Evergreen Capital tweet media
0 · 0 · 2 · 223
Evergreen Capital@evergreencap3·
Semis versus software on a daily basis:
GIF
0 · 1 · 10 · 731
Stacy Rasgon@Srasgon·
I am trying to come up with a title but keep realizing I’ve used them before…
GIF
8 · 0 · 9 · 2K
Jack Rae@jack_w_rae·
Muse Spark is #1 on TaxEval and #2 on Finance Agent from Vals AI. It's a nicely rounded model, useful for many real-world use cases. I found it pretty good for answering questions about my tax return.
Vals AI@ValsAI

Muse Spark took #1 on TaxEval (77.68%), dethroning Claude Sonnet 4.6. It also ranks 2/41 on Finance Agent (60.60%) and 3/36 on Vals Index (65.66%).

6 · 12 · 130 · 12.1K
Evergreen Capital@evergreencap3·
@ecommerceshares I'm asking because I get outstanding output using my own analytical workflows, but it's highly dependent on what apps you use, where/how you access the models, file structure, prompts, etc.
0 · 0 · 1 · 114
Wasteland Capital@ecommerceshares·
@evergreencap3 I may do some YouTube videos on this. My advice is, if you’re doing something important, you need to at least use exactly the same prompt for three frontier models and have them check each other. Even just two isn’t enough.
2 · 0 · 3 · 534
Gavin Baker@GavinSBaker·
Nice to see Credo acquire @atreidesmgmt portfolio company DustPhotonics. Great team, great company.

Dust designs photonic ICs (PICs) and engines that are foundational to Silicon Photonics. Silicon Photonic pluggables integrate multiple optical functions, traditionally implemented with discrete components, into a single component called a PIC. This saves space and power while improving scalability.

Perhaps most importantly, PICs can feed multiple optical lanes with a smaller number of lasers, relative to traditional approaches that typically require a discrete laser for each lane (e.g., 8x200G lasers for a 1.6T module). Timely, given we are heading into a severe shortage of lasers. Please note that there will still be a laser shortage even if Dust's PICs are broadly adopted. Thermal and power-density limitations for pluggables will also eventually make adding more and more lasers impractical. Silicon Photonics and PICs essentially create another axis for scaling bandwidth. And obviously, long-term, Silicon Photonics is essential for CPO.

PICs feel similar to RF in the early days of cellular: quite a bit of black magic. There are only a few companies successfully supplying datacom PICs at scale today, and Dust is one of them.

Ronnen and team have been excellent operators and partners, and their products are well respected across the datacom ecosystem. Over $500m in optical revenue for Credo is material, and I do think HyperLume was a smart bet for them; we're starting to hear more positive feedback about MicroLEDs from our venture portcos and the hyperscalers. Copper, CPO, pluggables, and MicroLEDs should all win for the foreseeable future in different applications in different hyperscaler networking topologies: networking is taking huge share of the datacenter BoM everywhere.

Coherent training clusters are increasing in size to enable ever larger models. A larger coherent cluster is much more networking intensive. And the larger models trained on these larger clusters require larger switched scale-up domains to inference economically, which is again more networking intensive. Rubin and Trainium 4 will have much larger switched scale-up domains, and we may need these systems deployed at scale to enable the broad availability of 10-trillion-plus-parameter models like Mythos. Networking, especially switched scale-up networking, should be the fastest growing part of the datacenter for the next several years.

Switched scale-up networking (almost all copper, with some optical beginning late next year) > scale-across (optical, obviously) > scale-out (the first place for CPO) from a growth perspective over the next 3 years, imo. We will be using copper well into the 2030s, and it is somewhat ironic that the growth of optical is likely to drive accelerated growth for copper in the near term, relative to the strange zero-sum thinking I occasionally see here and in some sell-side notes.
26 · 32 · 474 · 111.2K
Evergreen Capital@evergreencap3·
$MSFT faces a mounting strategic dilemma.

They have no choice but to allow @AnthropicAI and @OpenAI to continue shipping agents inside Microsoft products, because 1) it allows users to derive more value from M365, likely boosting retention, and 2) blocking those companies from M365 would further incentivize them to launch their own full productivity suites (which may happen eventually anyway), creating the risk of a major churn event (i.e., the worst-case scenario).

But at the same time, the agents from those companies are so far superior to Microsoft's @Copilot that more of the incremental AI revenue and value is being captured outside the M365 ecosystem. Moreover, users are increasingly using Claude/ChatGPT as the direct interface through which they accomplish AI work, building deeper relationships and stickiness with those tools, and less with M365.

Something has to give. The longer this goes on, the weaker Microsoft's position gets. Microsoft must self-disrupt and completely reinvent itself for the new era. Otherwise, its dominance may slowly slip away.
Evergreen Capital tweet media
29 · 18 · 199 · 33.6K
Jaguar Capital@cmo040958·
@evergreencap3 @satyanadella haha come on man, trying to stay bullish here 😂. In any case, some of these are low hanging fruit so hopefully only a matter of time...
1 · 0 · 2 · 5.6K
Evergreen Capital@evergreencap3·
You simply cannot make this up. I saw @satyanadella's post hyping Copilot in $MSFT Word, so I replicated his exact demo workflow in my own environment.
- Had ChatGPT generate an investment memo
- Opened the Copilot pane in Word
- Used the first prompt verbatim: "turn on track changes and tighten the executive summary"
Copilot happily generated a redlined version…
…inside the chat box. The actual document? Untouched. How is this possible? 😂
Evergreen Capital tweet media
Satya Nadella@satyanadella

New in Word: Copilot now tracks changes, leaves comments, and more, working more like a coworker right inside your document, grounded in all your enterprise context with Work IQ.

53 · 34 · 1K · 236K
Evergreen Capital@evergreencap3·
@designedbyabin @satyanadella Thanks. Insane decision to offer siloed Copilot as the default inside its own app, let alone bury the toggle like that. After turning it on, it still only went 1 for 2: it edited the document, but no luck on tracking changes; it just gave me instructions on how to do that myself.
Evergreen Capital tweet media
5 · 1 · 132 · 17.7K
Abin@designedbyabin·
@evergreencap3 @satyanadella try clicking the options button next to attach in the chatbox and turn on edit mode. It's a UX issue.
1 · 1 · 88 · 13.2K
Evergreen Capital@evergreencap3·
The paper offers a powerful vision of the human-machine dynamic: "Every hour a researcher spends pushing on a well-specified problem is an hour not spent on the vaguer, riskier bets that most need human judgment. If we can hand off the former, we free ourselves for the latter."
Anthropic@AnthropicAI

New Anthropic Fellows research: developing an Automated Alignment Researcher. We ran an experiment to learn whether Claude Opus 4.6 could accelerate research on a key alignment problem: using a weak AI model to supervise the training of a stronger one. anthropic.com/research/autom…

0 · 0 · 6 · 658