Gustavo

45 posts


@gugadotmd

i went down the rabbit hole

Joined April 2026
54 Following · 6 Followers
Gustavo@gugadotmd·
@dteam69 @PussyBlaster10k @TheAhmadOsman I’m not shitting on it. I just thought it would be faster, and that’s it. Maybe I’m just used to GPU speeds and was surprised. I actually love the MacBook because I didn’t plan on running models on it for anything serious, just to experiment a bit.
Arnaud@dteam69·
@gugadotmd @PussyBlaster10k @TheAhmadOsman Well, we agree on that, because I find qwen 27b stupidly slow on a 3090. Buying this card today is dumb. But maybe don't shit on the gen speed of multiple MacBooks, when exo is working hard on this, by comparing against your relatively weak MacBook. It makes no sense.
Ahmad@TheAhmadOsman·
Please, for the love of God, don't buy MacBooks to cluster them for LLMs. That shit trending on this website is pure performative slop that should be muted / blocked.
Gustavo@gugadotmd·
Man, that was a good deal. But honestly, I got the 4090 like 1-2 months ago tops, $2,000 on eBay. When I built the PC I spent $1,500 with the 5070 (that one is still cheap; I saw it new for $600-something). So I think it’s still possible to get a good setup for $4,000. Not easy, but it can be done. And again, I’m not trashing Macs. I haven’t owned a Windows laptop in 15 years, I love those things… but for inference I would not spend that amount on a MacBook.
Gustavo@gugadotmd·
@dteam69 @PussyBlaster10k @TheAhmadOsman I disagree. I have a 4090 + 5070 (36 GB VRAM total) and I run qwen3.6 27b at Q8 and get 40 tok/s easy. The 35b 3ab is like 130-150 tok/s. And the PC in total has cost around 3500 so far.
Gustavo@gugadotmd·
It’s not the size of the model for me; it’s the generation speed. I don’t want to wait 10 minutes per prompt. If you are doing serious stuff, the speed of integrated memory is just not there yet. Compared to GPUs, it’s painfully slow. And the price-performance ratio is not worth it imo.
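[The memory-bandwidth argument in this thread can be put in rough numbers: during decoding, a dense model streams approximately all of its weights through memory for each generated token, so tokens/second is capped by bandwidth divided by model size. A back-of-envelope sketch; the bandwidth figures are approximate published specs, and real-world throughput lands below this ceiling:]

```python
# Back-of-envelope decode-speed estimate: generating one token requires
# streaming (roughly) all model weights through memory once, so
#   tok/s  <=  memory_bandwidth / model_size_in_bytes
# Bandwidth values are approximate published specs, not measurements.

BANDWIDTH_GBPS = {
    "RTX 4090": 1008,
    "RTX 3090": 936,
    "M3 Ultra": 819,
    "M4 Max": 546,
}

def max_decode_tok_s(model_gb: float, bandwidth_gbps: float) -> float:
    """Upper bound on tokens/second for a memory-bound dense model."""
    return bandwidth_gbps / model_gb

model_gb = 27  # ~27 GB of weights for a 27B-parameter model at Q8 (~1 byte/param)
for name, bw in BANDWIDTH_GBPS.items():
    print(f"{name}: ~{max_decode_tok_s(model_gb, bw):.0f} tok/s ceiling")
```

[This ceiling (~37 tok/s on a 4090 for a 27 GB model) is in the same ballpark as the ~40 tok/s reported in the thread, consistent with decoding being memory-bound rather than compute-bound.]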
Arnaud@dteam69·
@PussyBlaster10k @gugadotmd @TheAhmadOsman Yuuuuup. Also, those laptops will hold value and can be used for general computing. They happen to also be capable of running LLMs. The 3090 hype is stupid; those cards are outdated toasters, and good luck wiring 16 of them to get the same VRAM as that 4-MBP cluster.
Gustavo@gugadotmd·
Oh yeah, that’s for sure. If you really want to run models on a laptop, MacBooks are superior to other laptops. But for 4000 you are better off building a home server with a proper GPU, getting a cheaper MacBook, and just connecting to the rig. It’s cheaper and performs better overall.
Gustavo@gugadotmd·
I get it bro, of course it is a laptop, but those specs are too expensive for the performance imo. I have been on another continent, or even on a plane, sending API requests from the laptop to my rig at home and getting much more tok/s on the GPU rig. And I am a 100% die-hard Apple fan; I just mean that for inference I expected it to be a bit faster. I really did not look much into it because it’s not my main focus for it.
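[The "API requests to my rig at home" workflow needs nothing exotic if the rig runs an OpenAI-compatible server (llama.cpp's llama-server and vLLM both expose /v1/chat/completions). A minimal stdlib-only client sketch; the URL and model name are placeholders, not details from the thread:]

```python
# Minimal client sketch for querying a home GPU rig that exposes an
# OpenAI-compatible chat endpoint. URL and model name are placeholders.
import json
import urllib.request

RIG_URL = "http://my-home-rig.example:8000/v1/chat/completions"  # placeholder

def build_payload(prompt: str, model: str = "local-model", max_tokens: int = 256) -> dict:
    """Assemble an OpenAI-style chat completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

def ask_rig(prompt: str) -> str:
    """POST the prompt to the rig and return the generated text."""
    req = urllib.request.Request(
        RIG_URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

[Calling `ask_rig("hello")` would return the generated text, assuming the rig is reachable from the laptop, e.g. over a VPN or a tunnel such as Tailscale.]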
Machine Dream Dickin’ Supreme
@gugadotmd @TheAhmadOsman They are laptops, dude; a 128 GB M3/4/5 Max can run huge models on the MacBook itself, wherever you take it. If you want to compare apples to apples, the Mac Studio M3 Ultra nearly matches the memory bandwidth of the 3090 and 4090, and you can get it with 512 GB, though those are rare now.
Gustavo@gugadotmd·
@PussyBlaster10k @TheAhmadOsman Yeah, but for the price of that I would get a couple of 3090s or 1x 4090 and just run a server with that, at 2x the bandwidth of the M Max. I did not get the MacBook to do LLM work locally; I just thought it was faster after seeing the hype online. But GPUs stay undefeated for that.
Gustavo@gugadotmd·
@OrganicGPT For the average user this is overkill, I would say.
Behnam@OrganicGPT·
If you wanna run AI models locally, the best option is an RTX 6000 Pro. DON'T get a 5090/4090. And DON'T listen to people who hype the 3090; those cards are beat at this point. Get the RTX with the education discount through Nvidia. These used to be $8,000; now they're $9,000+.
[image attached]
Espen JD@Snixtp·
Finally got one of the risers today. Another GPU plugged in: 144 GB of VRAM usable now ✅
[image attached]
Gustavo@gugadotmd·
@yoyouzhii And the battery lasts for 1 hour probably 😭
Moms@yoyouzhii·
Only workaholics will truly understand this.
Svenja Hahn MdEP@svenja_hahn·
❌ Just NO! VPNs are a core element of digital citizens’ rights & a free society. Prohibiting VPNs under the guise of children’s rights is not acceptable. As an MEP I will continue to fight for digital privacy rights & against prohibiting VPNs.
European Parliamentary Research Service@EP_EPRS

Virtual private networks #VPN are increasingly used to bypass online age verification. Protecting children online is a priority, with new rules being implemented requiring a minimum age for access to some services. Read 👉 link.europa.eu/FGfr6C #DSA @EP_Justice @FZarzalejos

Gustavo@gugadotmd·
@Michaelzsguo To be honest, the price is not bad for what you get; with 96 GB you can run cool stuff. Just slower than GPUs, but 96 GB of GPU VRAM is probably 3 times that price hahaha
Michael Guo@Michaelzsguo·
@gugadotmd Indeed. Higher memory requires a better chip, hence better bandwidth. Apple requires me to choose a better chip.
[image attached]
Michael Guo@Michaelzsguo·
My local LLM community, give me one reason I shouldn’t place the order.
[image attached]
Gustavo@gugadotmd·
@TheAhmadOsman The wallet part is so true hahaha. I haven’t finished paying off the last GPU and I’m about to get another one already.
DrHB@dr_hb_ai·
@JFPuget Yeah… some rich people and companies in the USA use charity as a tool to avoid paying taxes…
JFPuget 🇫🇷🇺🇦🇨🇦🇬🇱
There are many cultural differences between the US and Europe. One is about charity. US people give to charity, US companies give to charity. In Europe, solidarity goes through taxes and social security. Completely different systems.