Evergreen Capital

140 posts


@evergreencap3

Tech investor | Posting since March ‘26 | Views my own, not advice

Joined January 2022
862 Following · 805 Followers
Pinned Tweet
Evergreen Capital@evergreencap3·
I tested $META's Muse Spark over the last few hours and came away net positive. Three main takeaways:

1) Quality: It's a very good model. Not quite frontier, but good. It showed comparable performance vs. Opus 4.6 across web search, PDF parsing, and general knowledge/conversation/writing. It's worse at coding: both models solved an easy coding task, but Muse Spark failed the hard one while Opus one-shotted it. Image generation also trails ChatGPT. All in, though, it's a legitimate, usable general model. There's plenty of room to develop the UI further (e.g., it should show a map when recommending local restaurants), but the underlying model itself is impressive.

2) Speed: Notably, Muse Spark answered almost instantly, while Opus 4.6 felt borderline unusable at times. I'm a huge Anthropic fan, but latency has become a major issue: simple answers take too long, and multi-step agent flows break more often. Meta seems to have more available compute, which is a real factor going forward.

3) Scaling: Meta hasn't published a full model card, so we're working with limited disclosure. But the graphs below might be the most important part of the release: their rearchitected pretraining stack shows a near-linear relationship between RL compute and accuracy. If that holds, Meta has a clear path to training much larger, more intelligent models. That's arguably more consequential than Muse Spark itself.

All in, it's positive. Muse Spark is a good, usable model, it's being served smoothly, and Meta looks to be on an encouraging trajectory.
Evergreen Capital tweet media
10
22
313
51.8K
Evergreen Capital@evergreencap3·
$MSFT faces a mounting strategic dilemma. They have no choice but to allow @AnthropicAI and @OpenAI to continue shipping agents inside Microsoft products, because 1) it lets users derive more value from M365, likely boosting retention, and 2) blocking those companies from M365 would further incentivize them to launch their own full productivity suites (which may happen eventually anyway), risking a major churn event, i.e., the worst-case scenario.

But at the same time, the agents from those companies are so far superior to Microsoft's @Copilot that more of the incremental AI revenue and value is being captured outside the M365 ecosystem. Moreover, users are increasingly using Claude/ChatGPT as the direct interface through which they accomplish AI work, building deeper relationships and stickiness with those tools and less with M365.

Something has to give. The longer this goes on, the weaker Microsoft's position gets. Microsoft must self-disrupt and completely reinvent itself for the new era. Otherwise, its dominance may slowly slip away.
Evergreen Capital tweet media
9
3
66
7.3K
Jaguar Capital@cmo040958·
@evergreencap3 @satyanadella haha come on man, trying to stay bullish here 😂. In any case, some of these are low-hanging fruit, so hopefully it's only a matter of time...
1
0
2
3.8K
Evergreen Capital@evergreencap3·
You simply cannot make this up. I saw @satyanadella's post hyping Copilot in $MSFT Word, so I replicated his exact demo workflow in my own environment:

- Had ChatGPT generate an investment memo
- Opened the Copilot pane in Word
- Used the first prompt verbatim: "turn on track changes and tighten the executive summary"

Copilot happily generated a redlined version…

…inside the chat box. The actual document? Untouched. How is this possible? 😂
Evergreen Capital tweet media
Satya Nadella@satyanadella

New in Word: Copilot now tracks changes, leaves comments, and more, working more like a coworker right inside your document, grounded in all your enterprise context with Work IQ.

37
18
603
153.4K
Evergreen Capital@evergreencap3·
@designedbyabin @satyanadella Thanks. Insane decision to ship the siloed Copilot as the default inside its own app, let alone bury the toggle like that. After turning it on, it still only went 1 for 2: it edited the document, but no luck on tracking changes, just instructions on how to do it myself.
Evergreen Capital tweet media
3
0
69
10.6K
Abin@designedbyabin·
@evergreencap3 @satyanadella try clicking the options button next to attach in the chatbox and turn on edit mode. It's a UX issue.
1
0
58
8.1K
Evergreen Capital@evergreencap3·
The paper offers a powerful vision of the human-machine dynamic: "Every hour a researcher spends pushing on a well-specified problem is an hour not spent on the vaguer, riskier bets that most need human judgment. If we can hand off the former, we free ourselves for the latter."
Anthropic@AnthropicAI

New Anthropic Fellows research: developing an Automated Alignment Researcher. We ran an experiment to learn whether Claude Opus 4.6 could accelerate research on a key alignment problem: using a weak AI model to supervise the training of a stronger one. anthropic.com/research/autom…

0
0
3
474
Evergreen Capital@evergreencap3·
This is the way. The gap between companies that adopt this playbook versus those that don't will only continue to get wider. AI is the greatest force multiplier in software history, but only if your team, infrastructure, and incentives are aligned for it.
kache@yacineMTB

The trick was: focus on building infrastructure that benefits from the intelligence revolution. Create software, architect your factory, your team, to benefit from the leverage. We are ahead in this regard, and this keeps on propelling us further forward

0
1
8
612
Evergreen Capital@evergreencap3·
On memory: Here’s how efficiency gains get reinvested right back into larger AI workloads — increasing, not reducing, total demand.
Evergreen Capital@evergreencap3


0
0
6
1.3K
Evergreen Capital@evergreencap3·
@chatgpt21 Agreed! Lots of goodness sitting in plain sight
Evergreen Capital@evergreencap3


0
0
0
176
Evergreen Capital@evergreencap3·
I don't think we're anywhere near a floor for software. We're still at the beginning of a once-in-a-lifetime disruption to software as a technology and as a business, and the pace of change will only accelerate.

The sector's historical premium was built around one assumption: very long duration, highly recurring revenue. That underpinned everything — growth, retention, incremental margins, ROIC, free cash flow, and terminal value. That assumption no longer holds, and anyone who says otherwise is being disingenuous. Competition has already inflected to a level never seen before and will intensify further. For the first time, there is an extremely wide performance gap between existing and new products: you are at a material disadvantage using Microsoft Copilot instead of Anthropic's suite. There will be some winners, but most of these big, slow incumbents are at risk of displacement.

And nearly all are now reaping what they sowed with obscene accounting:
$SNOW: SBC greater than its entire CY25 free cash flow
$WDAY: $2.59 GAAP EPS vs $9.23 adjusted
$NOW: $1.67 GAAP EPS vs $3.51 adjusted
$MDB: SBC greater than its entire free cash flow
$DDOG: $0.31 GAAP vs $2.05 adjusted

Even at the 30x GAAP P/E that's been floated, most names have significant downside. Meanwhile, the multiple could, and arguably should, go lower: $GOOGL trades at 23x 2027 earnings, $META at 18x, $NVDA at 14x. Which of these cohorts is better positioned? Which has more staying power?

There will be volatile days, but the SaaS-pocalypse likely gets worse before it gets better.
11
5
85
15.2K
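To make the multiple math concrete, here is a minimal sketch using only the GAAP and adjusted EPS figures quoted in the post; the 30x GAAP multiple is the one "floated" above, and current share prices are deliberately left out, so plug them in yourself to gauge downside.

```python
# Implied per-share values at a 30x GAAP P/E, using the GAAP vs. adjusted
# EPS figures quoted in the post above. Current share prices are omitted
# on purpose; compare these implied values against live quotes.

GAAP_PE = 30  # the GAAP multiple floated in the post

eps = {  # ticker: (GAAP EPS, adjusted EPS), per the post
    "WDAY": (2.59, 9.23),
    "NOW": (1.67, 3.51),
    "DDOG": (0.31, 2.05),
}

for ticker, (gaap, adj) in eps.items():
    implied = GAAP_PE * gaap
    # The adjusted/GAAP ratio shows how much SBC add-backs inflate earnings
    print(f"{ticker}: 30x GAAP EPS implies ${implied:.2f} "
          f"(adjusted EPS is {adj / gaap:.1f}x GAAP)")
```

The gap between the two EPS columns is the point: the wider the adjusted/GAAP ratio, the more of the "earnings" is stock-based compensation being added back.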
Midnight Capital LLC@Midnight_Captl·
405x4 — last rep was brutal but landed the plane ✈️
9
0
26
4.3K
Evergreen Capital@evergreencap3·
Muse Spark is also making $META revenue projections look increasingly low. If you believe it's a good model, and that Meta will apply it and future models across its platforms for users and ads, driving engagement and ad ROI, then 2027/2028 estimates, which assume deceleration and flat dollar growth, look very conservative.
Evergreen Capital tweet media
2
2
95
7K
Evergreen Capital@evergreencap3·
Following up: Muse Spark is 100% legit. It handles my general queries as well as, if not better than, other frontier models. And echoing @borrowed_ideas: 'contemplating mode' is a true leading-edge feature. Congrats to @alexandr_wang and the entire MSL team! Can't wait for more.
Evergreen Capital@evergreencap3


1
13
115
15.9K
Evergreen Capital@evergreencap3·
More concrete evidence that memory optimizations are super bullish for memory demand, right inside last week's excellent 'TriAttention' paper on KV-cache compression from @yukangchen_, @WeianMaoX, @TianfuF and team.

In the paper's appendix, they put OpenClaw on an RTX 4090 through a very realistic task: reading six project files and generating a report. With standard Full Attention, the KV cache grows unboundedly, causing an out-of-memory error and crashing the agent before it finishes. But with TriAttention, which compresses the KV cache by ~10x, the job completes. Easy peasy!

In other words, memory innovations and optimizations like this aren't a headwind. Rather, they're the necessary ingredient to unlock the next generation of AI and agentic workloads, which will profoundly increase demand for memory. $MU $Samsung $SKHynix
Evergreen Capital tweet media
Evergreen Capital tweet media
0
1
15
2.6K
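For a rough sense of why full attention can OOM on a 24 GB RTX 4090 while a ~10x-compressed cache fits, here is back-of-the-envelope KV-cache arithmetic. The model dimensions below are illustrative assumptions for a generic 8B-class transformer, not the actual OpenClaw or TriAttention configuration (the paper's appendix has the real numbers); only the ~10x compression factor comes from the post.

```python
# Back-of-the-envelope KV-cache sizing for a long-context agent run.
# Dimensions are hypothetical (generic 8B-class model), not the paper's.

def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, bytes_per_elem=2):
    """Total KV-cache size: keys + values (the leading 2x), one vector
    per layer per KV head per token, fp16 = 2 bytes per element."""
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem

GPU_VRAM = 24 * 1024**3  # RTX 4090: 24 GB

# Hypothetical config: 32 layers, 8 KV heads (GQA), head_dim 128, fp16,
# and a 200k-token context built up while reading six project files.
full = kv_cache_bytes(32, 8, 128, seq_len=200_000)
compressed = full / 10  # ~10x compression, per the post

print(f"full attention KV cache: {full / 1e9:.1f} GB (fits in 24 GB: {full < GPU_VRAM})")
print(f"~10x compressed cache:   {compressed / 1e9:.1f} GB (fits: {compressed < GPU_VRAM})")
```

The shape of the result is the point: the uncompressed cache alone exceeds the card's 24 GB before counting the model weights themselves, while the compressed cache leaves headroom, which is exactly the crash-vs-complete behavior described above.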