Moot Point, Stocks + AI

5.4K posts

@amootpoint

Follow me for Latest Stock Ideas, Stock Commentary & AI Industry News & Opinions.

Phoenix, USA · Joined June 2010
383 Following · 560 Followers
Moot Point, Stocks + AI
@JAhern1294 @dee_bosa CUDA doesn’t matter anymore for inference. Jensen is trying to make people believe that it does. It meant a lot in 2020 and 2021. Now everyone has caught up and programs on top of PyTorch and not directly on CUDA.
0 replies · 0 reposts · 0 likes · 9 views
Jack Ahern @JAhern1294
@dee_bosa I’d say CUDA has been the dominant OS in AI for the last couple of years..
0 replies · 0 reposts · 1 like · 52 views
Deirdre Bosa @dee_bosa
Jensen Huang doesn't need a new chip. He needs a new moat. Nvidia's most ambitious move from GTC was NemoClaw: a free, open-source AI agent platform designed to keep every company dependent on Nvidia's computing power, even as the chip competition heats up. Nvidia is becoming an operating system. The market is still pricing it as a chipmaker. w/ @jaswu_
43 replies · 11 reposts · 180 likes · 26.2K views
Moot Point, Stocks + AI
@dee_bosa There should be a “truly” open-source enterprise version of OpenClaw. And that answer is NOT NemoClaw. C'mon.
0 replies · 0 reposts · 0 likes · 21 views
Adam Schwarz @AdamJSchwarz
Japanese PM Sanae Takaichi's reaction as Trump says "Who knows better about surprise than Japan? Why didn't you tell me about Pearl Harbor?" Undoubtedly the worst American diplomatic gaffe in post-war US-Japan history.
2.4K replies · 9.6K reposts · 35.6K likes · 4.5M views
Adam Schwarz @AdamJSchwarz
Reporter: Why didn't you notify Japan that you were going to attack Iran? Trump, next to the Japanese PM: "Who knows better about surprise than Japan? Why didn't you tell me about Pearl Harbor? You believe in surprise, I think, much more so than us."
193 replies · 462 reposts · 1.3K likes · 269.4K views
Moot Point, Stocks + AI
@PatrickMoorhead Why does this keep happening with Super Micro? Feels like a company without any integrity. Accounting fraud earlier, and now IP theft fraud. Only Coldplay fraud remains now.
1 reply · 0 reposts · 4 likes · 79 views
Dr. Ian Cutress
Unfortunately, it's a limited GTC for me this year. My NVIDIA official contacts 'ran out' of passes and said the analyst program was oversubscribed, so even if I did attend, I would be unable to get executive access and insights this year.

It's a shame how this has come about, especially as I've not had a clear answer as to why I'm seemingly Tier 3, or lower, on the list. I pride myself on clear and accurate communication within the rules, and it's been unclear from the contact I do have what level of professionalism I'm failing to meet. This is despite being the warm-up act to Jensen and Sassine at Synopsys last week and getting so much positive feedback. To my NVIDIA friends inside the company, I appreciate your continued support, as always!

As for the show itself, thankfully one of my clients pulled through with an exhibitor hall-only pass. I'll be on the show floor in person, but unable to attend the keynote and talks. I'll see you at the show! :)

I'll also be over at OFC later in the week, certainly looking forward to that one. Lots happening in CPO that makes me really excited.
31 replies · 12 reposts · 261 likes · 128.2K views
Moot Point, Stocks + AI
@sama Examples like this exist in every industry. Not just coding. Imagine the amount of info doctors remember. Imagine the amount of music written by hand by classical era composers. Imagine. Imagine. Imagine.
0 replies · 0 reposts · 0 likes · 25 views
Sam Altman @sama
I have so much gratitude to people who wrote extremely complex software character-by-character. It already feels difficult to remember how much effort it really took. Thank you for getting us to this point.
4.3K replies · 2.1K reposts · 35.5K likes · 5.4M views
Dylan Patel @dylan522p
I just witnessed someone in front of me market buy Nvidia stock as Jensen is talking and then post screenshots on WeChat alongside his view from the crowd
57 replies · 15 reposts · 1.1K likes · 132.7K views
Moot Point, Stocks + AI
@AlphaSenseInc Why is managing AI locally disruptive? Once you have it all set up, when the next model comes you just update the model. No big deal.
0 replies · 0 reposts · 0 likes · 44 views
AlphaSense @AlphaSenseInc
Interview with an industry expert discusses workload requirements driving enterprise AI deployment between cloud and on-premise ($GOOGL, $NVDA, $MSFT, $AMZN):

- The expert explains that the ranking of factors driving on-prem versus cloud deployment decisions varies by organization and strategy. For large-scale deployments, cost is the top priority, as token consumption at that scale can burn through an entire annual AI budget in days if not managed carefully. For smaller or team-specific deployments, cost is less of a constraint, and public cloud or hyperscaler options become more practical.
- The expert shares that within his organization, around 80% of AI service workloads run on-prem, with only functions that do not involve internal or proprietary data being pushed to the public cloud. Looking ahead, he expects a significant shift toward the cloud: managing dedicated GPUs and on-prem configurations is increasingly difficult given the pace of disruption in the AI stack, and keeping up with deployments will become unsustainable for most organizations.
- The expert explains that most enterprise AI solutions today are still treated as an R&D investment, meaning cost-benefit analysis is not yet measured with the same discipline as routine IT spending. Once AI becomes part of standard operational budgets, that dynamic will shift, and cost efficiency will become a much bigger factor. His view is that around 80% of workloads will ultimately move to the cloud.
- According to the expert, smaller models will take on much greater importance going forward. For the majority of day-to-day enterprise use cases, smaller models will be sufficient, while larger frontier models will be reserved for deep research and high-stakes decision making at the enterprise level. His view is that as AGI becomes more of a standard, the architecture will naturally split, with smaller models handling routine tasks and larger models supporting strategic initiatives like business expansion planning or enterprise architecture decisions.
- The expert sees local PC-level compute capable of running large models as genuinely disruptive, though he considers the timeline and feasibility still uncertain. If that scenario actually plays out, he believes it would represent a meaningful blow to the large GPU and cloud infrastructure players that currently dominate the market.
8 replies · 3 reposts · 24 likes · 3.8K views
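The expert's claim that large-scale token consumption can burn through an annual AI budget in days is easy to sanity-check with back-of-the-envelope arithmetic. A minimal sketch, where the budget, daily token volume, and per-token price are all hypothetical figures, not numbers from the interview:

```python
# Illustrative sketch (all figures hypothetical): how quickly large-scale
# token consumption can exhaust an annual AI budget.

def days_until_budget_exhausted(annual_budget_usd, tokens_per_day,
                                price_per_million_tokens_usd):
    """Days of inference an annual budget buys at a given daily token volume."""
    daily_cost = tokens_per_day / 1_000_000 * price_per_million_tokens_usd
    return annual_budget_usd / daily_cost

# A hypothetical enterprise: $2M budget, 10B tokens/day at $15 per million tokens.
days = days_until_budget_exhausted(2_000_000, 10_000_000_000, 15)
print(f"Budget lasts about {days:.0f} days")  # roughly two weeks
```

At that (assumed) scale the daily bill is $150K, so the whole annual budget is gone in about 13 days, which is the order of magnitude the expert describes.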
Moot Point, Stocks + AI
@Mr_Derivatives Next we will be tipping pilots if we don't want to go through turbulence. Or maybe doctors if we want a smooth physical exam. Tipping culture should be TOTALLY removed from the USA. It's such a nuisance.
2 replies · 0 reposts · 6 likes · 853 views
Gokul Rajaram @gokulr
BIMODAL HIRING The hiring market is becoming bimodal. Companies want to hire either: Extremely experienced / strong / 10x engineers (eg: @bcherny) or growth marketers (eg: @ElenaVerna) or salespeople who use AI to refine their craft. OR Young people who are fearless and AImaxxed doers. In the middle is death. Esp if you're a spreadsheet jockey, as most middle managers are.
21 replies · 21 reposts · 270 likes · 40.1K views
Moot Point, Stocks + AI
@gokulr @bcherny @ElenaVerna Well, why can't middle managers, aka spreadsheet jockeys, refine their craft with AI? They certainly can. AI is for everybody. Let's focus on what processes get eliminated with AI and then decide the winners and losers.
0 replies · 0 reposts · 1 like · 192 views
Moot Point, Stocks + AI
@Srasgon @SouthwestAir Crazy timing. I just replied to a thread about this. Yes, the new rules are terrible. A-List really has no advantage. It's just like any other airline now. Maybe worse in some instances. Low prices gone. Early boarding gone. Free bags gone. Bin space gone.
0 replies · 0 reposts · 1 like · 91 views
Moot Point, Stocks + AI
@seanmdav @SouthwestAir The A-Listers are the losers in all of this change. There is no perk for being A-List. I am boarding last now and standing on the bridge for 10 minutes before I can get to my seat. Thanks, Southwest.
0 replies · 0 reposts · 0 likes · 19 views
Sean Davis @seanmdav
Dear @SouthwestAir: Your new seating rules and boarding procedures are a DISASTER. Boarding is chaos. Deboarding is even worse. Your new boarding order leads to front bins being filled by back seaters before longtime A-Listers even board. This leads to madness upon deboarding as people go against the flow of traffic to get the carry-ons they had to stash 20 rows behind them. And because you no longer allow two bags to be checked for free, there are way more carry-ons, which only leads to longer boarding and deboarding times. There’s simply no reason to fly Southwest anymore. You don’t have the lowest fares. Miles and status don’t matter because of how badly you dorked up boarding. And the service quality has gone waaaaay down, with fun and cheery flight attendants being replaced by nasty, bossy cranks with bad attitudes. It doesn’t have to be this way. You used to be the best. You should go back to doing the things that made Southwest different and great.
975 replies · 1.2K reposts · 9.5K likes · 1.6M views
FinancialFreedom @FinFreedom414
Imagine you had to choose your life at age 40: Option A: Single. No kids. $10M net worth. Travel anywhere. Total freedom. Quiet house. Quiet holidays. Option B: Married. 3 kids. $1M net worth. Drive a Toyota. Chaos every morning. Loud house. Full dinner table. Be honest, which life are you choosing?
16.8K replies · 443 reposts · 10.9K likes · 4.4M views
Software and Regenerative Ag - Stephan Schwab
Models are big because they contain all the knowledge from the training data (world knowledge, in a way). They don't need to be that big, I think, and likely that's where we are headed. Look at coding: the APIs are well documented and can be read on demand. They don't have to be inside the model weights.
2 replies · 0 reposts · 6 likes · 381 views
Daniel Jeffries @Dan_Jeffries1
I love the privacy aspect of local models, love the sentiment of this post, but... we really need better local machines with massive amounts of unified RAM and insanely fast networking to run big models privately and in a decentralized way.

The problem is my man is not running these "for free." You can easily drop $40-50K on these machines if you don't want quantized models and all that. This is a classic build-versus-lease problem. The upfront cost is massive, and these machines will be worthless for running new models in short order. Not many years. Maybe a year, tops. And then you're sitting on $50K of hardware you could have spent on inference.

The cost is not Claude/GPT versus local Qwen/Kimi/DeepSeek. It's those models running locally versus Fireworks. Right now it's cheaper to just run models on Fireworks or Together by a long shot, and faster. Not as private, for sure. But cheaper.

I hope someone out there is solving this problem for people, because we need a better solution than ponying up $50K for hardware that depreciates 10x faster than your car.
Alex Finn @AlexFinn

If you have your OpenClaw working 24/7 using frontier models like Opus, you're easily burning $300 a day. That's $100,000 a year.

I have 3 Mac Studios and a DGX Spark running 4 high-end local models (Nemotron 3, Qwen 3.5, Kimi K2.5, MiniMax2.5). They're chugging 24/7/365. I spent a third of that yearly cost to buy these computers, and I'll be able to use them for years for free.

On top of that, they're completely private, secure, and personalized. Not a single prompt goes to a cloud server that can be read by an employee or used to train another model.

I hope this makes it painfully obvious why local is the future for AI agents. And why America needs to enter the local AI race.

36 replies · 11 reposts · 111 likes · 17.3K views
Moot Point, Stocks + AI
@King_Michael_F @Dan_Jeffries1 Please show the TCO analysis today between comparable models in cloud vs. local for one year of use. Please show the full analysis. Don't just sensationalize with one number or an arbitrary single-point tech analysis.
0 replies · 0 reposts · 0 likes · 16 views
Michael King @King_Michael_F
@Dan_Jeffries1 You're missing energy costs. That DGX pulls £200+/month. Variable cloud spend beats a 50k depreciation trap when you're watching cash burn.
2 replies · 0 reposts · 1 like · 129 views
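The TCO question this thread keeps circling (hardware depreciation plus energy versus metered cloud spend) can be sketched in a few lines. All inputs below are illustrative assumptions, not measured figures from anyone in the thread; the $300/day cloud rate comes from Alex Finn's claim, the 1-year useful life from Dan Jeffries, and the energy term from Michael King's point:

```python
# Hypothetical one-year TCO comparison: local AI hardware vs. cloud inference.
# All numeric inputs are illustrative assumptions, not vendor figures.

HOURS_PER_YEAR = 24 * 365

def local_tco(hardware_usd, useful_life_years, power_kw, usd_per_kwh):
    """Annual cost of owned hardware: straight-line depreciation + electricity."""
    depreciation = hardware_usd / useful_life_years
    energy = power_kw * HOURS_PER_YEAR * usd_per_kwh
    return depreciation + energy

def cloud_tco(usd_per_day, days=365):
    """Annual cost of metered cloud inference at a flat daily spend."""
    return usd_per_day * days

# Assumed setup: $35K of hardware, 1-year useful life, 1.2 kW continuous
# draw at $0.15/kWh, compared against $300/day of cloud usage.
local = local_tco(hardware_usd=35_000, useful_life_years=1,
                  power_kw=1.2, usd_per_kwh=0.15)
cloud = cloud_tco(300)
print(f"local ~ ${local:,.0f}/yr, cloud ~ ${cloud:,.0f}/yr")
```

Under these assumptions local comes out well ahead even with a brutal 1-year write-off, but the result flips quickly if utilization is low or the daily cloud spend is much smaller, which is why the thread's demand for a full, stated-assumptions analysis is the right one.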
Daniel Jeffries @Dan_Jeffries1
This is not bad, but the real question is: do you ever actually want/need the dumber model? I can think of a few instances. I use Composer on Cursor for fast documentation and super quick fixes when I know precisely the answer I need, and for lightning-fast parallel research. But 95% of the time I use the smartest model, and I think most other folks do too. Maybe I am wrong. What do you use the quantized ones for?
2 replies · 0 reposts · 2 likes · 155 views