JaMarco @JaMarc0
9.1K posts
Joined August 2012
1.7K Following · 414 Followers
Singularity Research @SingularityRes
@HotAisle Last year DP was saying Nvidia was specsmaxxing their GPUs. Today Jensen Huang said DP told him that Nvidia was sandbagging their performance. I don't know how DP could say one thing to the world and something totally different in private.
2 · 0 · 0 · 53
Justin Termine @TermineRadio
Luka basically with more points in the last 7 days than Anthony Davis has had in the year-plus since the deal. The most senseless trade in NBA history somehow makes even less sense today than the day it was made.
39 · 396 · 4.9K · 91.7K
Vikram Sekar @vikramskr
.@benthompson asks Jensen how it feels to be a CPU salesman 🤣 Agentic AI CPUs need the highest core count, not per-core performance, as the first priority. This is why $AMD Venice is killer: 256 cores, 512 threads per CPU. I have a whole catalog of CPU specs on Substack - the AI Server CPU Yellow Pages! open.substack.com/pub/viksnewsle…
3 · 6 · 86 · 6.5K
AdinUpdate @AdinUpdate
Iggy Azalea tells N3on a guy once sent her a d-ck pic after she said she was having a bad day… N3on then asks to SEE it and finds out it’s someone FAMOUS 😭 “Oh my god, that could sell for a lot of money.”
125 · 117 · 7.2K · 2.2M
Justin Banks @RealJGBanks
BREAKING: $OKLO just made the nuclear trade bigger. Oklo is building an isotope reactor in Texas. This adds a new layer to the nuclear theme:
- SMR / advanced next-gen reactors: $OKLO $SMR $BWXT $LTBR
- Fuel / enrichment supply bottleneck: $CCJ $UEC $LEU
- Power / grid AI electricity demand: $CEG $VST $PWR
- Medical isotopes, new growth market: $BWXT $LEU
This proves the nuclear trade isn't over. It's just starting.
12 · 26 · 162 · 10.4K
Cully Cavness @Electron_Cowboy
New video released for @Nvidia GTC showcases @crusoeai facilities and operations around the world:
- Geothermal-powered Crusoe Cloud data centers in Iceland
- Electrical component factories in Colorado, Oklahoma and Louisiana
- GW+ AI training campuses powered by gas and wind power plants in Texas
- Modular Spark data centers powered by solar and used EV batteries in Nevada
youtu.be/0i18rms-fPM?si…
17 · 27 · 305 · 2M
tae kim @firstadopter
Nvidia CPX is NOT coming out in 2026: Nvidia executive.
6 · 2 · 57 · 28.1K
EnerTuition @EnerTuition
@JaMarc0 @DylanOnChips @DreadyBear AWS guys thinking that they can compete with merchant semi guys in a fast-moving space is a fallacy. All else being equal, companies who stick to their core competency will do better in the long term.
1 · 0 · 0 · 28
Semiconductor News by Dylan Martin
Jensen: "There are no CPUs in the world that are two times the performance of anything else, besides Vera." He doesn't see Nvidia's CPUs playing in the traditional server CPU market: "That's not the problem we're trying to solve."
5 · 4 · 63 · 39.2K
EnerTuition @EnerTuition
@DylanOnChips @DreadyBear By the way, Amazon is on the wrong path with Graviton in my view (I have written about it). Graviton is not a competitive chip. Amazon's customers are getting screwed and one day they may find out.
2 · 0 · 1 · 68
Ignacio de Gregorio @thewhiteboxAI
What Vikram is too polite to tell you explicitly is that you're wrong to assume memory is the bottleneck in highly agentic workloads. In those cases, what really matters is CPU core count, and there AMD is ahead. Memory matters most in deep-reasoning agentic workloads, which are by definition "less agentic" than the others.
1 · 0 · 0 · 13
JaMarco @JaMarc0
@HotAisle AMD pretty much has to buy Cerebras at some point
1 · 0 · 1 · 131
JaMarco @JaMarc0
@beefcubee @Mar364503 They only had 3 months to integrate the Grok rack. The Feynman Grok rack will almost certainly use Nvidia custom ARM.
1 · 0 · 1 · 37
Alex @Alex_Intel_
@labubu_trader Going long AMD for CPUs is dumb. Rate-limited by TSMC. Intel can expand the most supply in the next 18 months.
3 · 1 · 30 · 1.1K
3X Long Labubu @labubu_trader
I'm accumulating AMD LEAP calls but it's still a small position. I think the market underestimates AMD's CPU advantage in the AI agent world and its catch-up potential in model inference. ROCm is a joke now compared to CUDA. But with Claude Code's help, I think the gap will close much faster than people can imagine
3X Long Labubu @labubu_trader

@0xWaroy Open claw/agent narrative will last long in the AI world. And AMD is a very good one as well for agent play. I don’t think SNOW /MDB/DDoG is a good AI agent play for now. There is no evidence they will benefit the most from the AI agent movement.

23 · 10 · 123 · 27.5K
ZETURIN @Z3TURIN
@perry_lin1 Hi Perry, do you still have exposure on DEFT? Looks like it has bottomed and the next crypto leg could reverse the trend. Thanks in advance
2 · 0 · 1 · 161
Perry Lin @perry_lin1
$PUMP $15 incoming
3 · 0 · 11 · 1.7K
Endless Greed @cwa93393
@rohanpaul_ai What would digital employees do with the crypto? That makes no sense. They don't have any personal expenses; there's no incentive to have money. People in crypto are in way over their heads with this "future of money" thing.
1 · 0 · 3 · 63
Rohan Paul @rohanpaul_ai
Coinbase CEO: AI Agents write over 50% of code and resolve 60% of support tickets. To scale autonomy, they provide agents with stablecoin wallets for machine-to-machine payments, and really treat them like digital employees.
17 · 22 · 96 · 12.3K
Steve V @bidask123
@BTCoptioneer ASST: 1-for-20 reverse split. MSTR: no 1-for-20 reverse split.
1 · 0 · 0 · 123
Andrew Feldman @andrewdfeldman
NVIDIA's biggest GTC announcement was a $20 billion bet on the same problem we solved 6 years ago. Their next-gen inference chip - not available yet - has 140x less memory bandwidth than @cerebras.

To run a single 2-trillion-parameter model, you need 2,000+ Groq chips. On Cerebras, that's just over 20 wafers. Even paired with GPUs, Groq maxes out at ~1,000 tokens per second. We run at thousands of tokens per second today. And every day. In production now.

Why? When you connect 2,000 chips together, every interconnect has latency. Every cable has overhead. It doesn't matter what your memory bandwidth is on paper if you're bottlenecked by the wiring between thousands of tiny chips. We solved this with wafer scale. One integrated system. Little interconnect tax.

Jensen told the world that fast inference is where the value is. He's right - it's why the world's leading AI companies and hyperscalers are choosing Cerebras.
69 · 70 · 739 · 147K
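The chip-count claim in the thread above is just capacity arithmetic, and it can be sanity-checked with a back-of-envelope sketch. The precision (bytes per weight) and per-chip memory figure below are illustrative assumptions, not published specs for any vendor:

```python
import math

def chips_needed(param_count: int, bytes_per_param: float, per_chip_bytes: float) -> int:
    """Minimum number of chips whose on-chip memory can hold the model
    weights. Ignores activations, KV cache, and any replication needed
    for bandwidth, so this is a strict lower bound."""
    return math.ceil(param_count * bytes_per_param / per_chip_bytes)

# Assumed numbers: a 2-trillion-parameter model at 1 byte/weight,
# sharded across chips with ~230 MB of on-chip SRAM each.
print(chips_needed(2_000_000_000_000, 1, 230e6))  # thousands of chips
```

Whatever the exact per-chip figure, the point of the tweet survives the arithmetic: when each chip holds only megabytes, a multi-terabyte model forces thousands of chips and the interconnect between them becomes the bottleneck.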