John C
@MeJohnC
32.5K posts

Will you fight? Or will you perish like a dog? AI Automation Engineer
San Diego, CA · Joined December 2012
996 Following · 321 Followers
John C reposted
TFTC @TFTC21:
Jensen Huang: "If that $500,000 engineer did not consume at least $250,000 worth of tokens, I am going to be deeply alarmed. This is no different than a chip designer who says 'I'm just going to use paper and pencil. I don't think I'm going to need any CAD tools.'"
439 replies · 575 reposts · 7.6K likes · 2.4M views
John C reposted
0xSero @0xSero:
Putting out a wish to the universe. I need more compute; if I can get more, I will make sure every machine from a small phone to a bootstrapped RTX 3090 node can run frontier intelligence fast with minimal intelligence loss. I have hit page 2 of Hugging Face, released 3 model-family compressions, and got GLM-4.7 on a MacBook: huggingface.co/0xsero. My beast just isn't enough, and I already spent 2k USD on renting GPUs on top of credits provided by Prime Intellect and Hotaisle. If you believe in what I do, help me get this to Nvidia; maybe they will bless me with the power to keep making local AI more accessible 🙏
Michael Dell 🇺🇸 @MichaelDell:
Jensen Huang is loving the new Dell Pro Max with GB300 at NVIDIA GTC.💙 They asked me to sign it, but I already did 😉
154 replies · 413 reposts · 3.4K likes · 690.2K views
John C @MeJohnC:
Did Anthropic halve usage during peak hours? lol, I think I just burned through a whole Pro usage window in 25m lmao.
0 replies · 0 reposts · 0 likes · 22 views
John C @MeJohnC:
Welp, missed the Hermes agent hackathon deadline because my benchmarking took too long lol, but I am pleased with the work we got done thanks to it. I think we really can get some great outputs out of the Qwen3.5 0.8B+4B+9B models under 16GB VRAM vs a 27B Qwen3.5 model that takes upwards of 24GB. Still need to validate results, though. Certainly seems faster in general.
1 reply · 0 reposts · 4 likes · 170 views
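The VRAM comparison in that tweet can be sanity-checked with back-of-envelope weight math. This is an editorial sketch, not from the thread: the ~1 GB per billion parameters at 8-bit figure is a standard approximation, and KV cache plus activations add real overhead on top of it.

```python
# Rough model-weight memory: params (billions) * bits / 8 -> GB.
# KV cache and runtime overhead are NOT included, so real usage is higher.

def weight_gb(params_billions: float, bits: int = 8) -> float:
    """Approximate GB of VRAM for weights alone at a given quantization."""
    return params_billions * bits / 8.0

cascade = [0.8, 4.0, 9.0]                        # the 0.8B + 4B + 9B trio
cascade_gb = sum(weight_gb(p) for p in cascade)  # 13.8 GB at 8-bit
single_gb = weight_gb(27.0)                      # 27.0 GB at 8-bit

print(f"cascade: {cascade_gb:.1f} GB, 27B: {single_gb:.1f} GB")
```

At 8-bit this puts the trio at ~13.8 GB of weights, just under a 16 GB card, while the 27B model alone wants a 24 GB-class card once overhead is added — consistent with the numbers in the tweet.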
John C @MeJohnC:
@DecentCloud_org I'm not debating that; it just doesn't seem like the 40-core GPU is scaling in traditional GPU workloads compared to the 16-core GPU, harkening back to the M1 era. Mostly an observation. However, the software often takes a bit of time to optimize for new Apple Silicon hardware.
1 reply · 0 reposts · 0 likes · 37 views
Decent Cloud @DecentCloud_org:
@MeJohnC Local inference bottleneck is RAM, not compute. The Max buys you capacity.
1 reply · 0 reposts · 0 likes · 21 views
John C @MeJohnC:
M5 Pro and M5 Max seem like a return to the M1 Pro/Max dynamic: only get the Max if you need double the RAM or 4 displays; the GPU scaling is quite poor in early benchmarks.
1 reply · 0 reposts · 0 likes · 69 views
John C @MeJohnC:
Trying Qwen3.5 0.8B+4B+9B vs 27B
0 replies · 0 reposts · 0 likes · 55 views
John C reposted
Mason Home Builder @bankertobuilder:
Americans love garages. This is our most popular new home model: 2 bed/1 bath, 1,250 livable square ft, 4-car garage, no exterior entrance to the home (must enter through garage). Now selling in suburban Dallas for just $679,000.
125 replies · 31 reposts · 807 likes · 41.5K views
John C reposted
Jush @Jush21e8:
38 tok/s on a 10gb 3080 😹
11 replies · 3 reposts · 192 likes · 15.9K views
John C @MeJohnC:
FOLLOW PELICAN AI, REAL PEOPLE TRYING TO HELP YOU BY STOPPING THE SPREAD OF MISINFORMATION. They even teach you their formula in the replies: Do. The. Math.
Pelican @PelicanAI_:
@MeJohnC This genuinely made our day. Thank you. That's the whole point: cut through the noise, show the math, protect the people who are just getting started. Appreciate you.
0 replies · 0 reposts · 1 like · 38 views
John C @MeJohnC:
MacBook Neo, 9060 XT 16GB, iPhone 17: these are the VRAM constraints the majority of people deal with. We need local models that work excellently on those devices. This is why Apple and AMD are sleeper winners of the AI wars. Intel too. Most people don't pay for any of this and aren't going to.
0 replies · 0 reposts · 1 like · 136 views
John C @MeJohnC:
@kaiostephens Lucky duck, I can get out the door for ~1900 post-tax, but prolly best to save the money and just disable a couple of my R9700s for now. I really was about to go raid my local Best Buy haha, but then I would have to rebuild the NAS with a new PSU as well.
1 reply · 0 reposts · 1 like · 35 views
kaios @kaiostephens:
@MeJohnC I bought 3 9060 XTs for $1200 CAD
1 reply · 0 reposts · 2 likes · 22 views
John C @MeJohnC:
An interesting prospect came up on stream: the cheapest path to decent VRAM for fine-tuning. I think a consumer board with 4 PCIe slots at varying speeds, 4 9060 XTs, and a 1200W PSU may do the trick in the $4K range, at least for inference. My NAS has an ASUS W680 with dual PCIe 5.0 x8 bifurcation and dual PCIe 3.0 x4 slots, so I was considering what a quad 9060 XT system could do. Before I go blow more cash, I can pretty cleanly attempt that with dual R9700s, so I think there will be more to come on this.
John C @MeJohnC:
I am not one to interact with a stream, like ever, but I found Kai late last night while looking into Hermes agent stuff, and this stream ended up being a fun day full of learning, experimentation, and lots of great convos. The AI community opens a lot of doors when one puts themselves out there. Kai is a real one!!
1 reply · 0 reposts · 1 like · 149 views
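The quad-9060 XT budget works out roughly as follows. This is a sketch with assumed prices: the ~$500-per-card and ~$2,000-platform figures are placeholders of mine, not from the thread; only the 16 GB-per-card spec appears above.

```python
# Back-of-envelope for a 4x 9060 XT inference box.
# Prices are assumptions; only the 16 GB-per-card figure comes from the thread.

NUM_GPUS = 4
VRAM_PER_GPU_GB = 16    # 9060 XT 16GB
GPU_PRICE_USD = 500     # assumed street price per card
PLATFORM_USD = 2000     # assumed: W680-class board, CPU, RAM, 1200W PSU, case

total_vram = NUM_GPUS * VRAM_PER_GPU_GB                # 64 GB across cards
total_cost = NUM_GPUS * GPU_PRICE_USD + PLATFORM_USD   # $4,000

print(f"{total_vram} GB VRAM for ~${total_cost:,}")
```

That is 64 GB of pooled VRAM in roughly the $4K range the tweet names. The VRAM is split across four cards on mixed x8/x4 links, which is workable for inference but harder on bandwidth-sensitive fine-tuning — hence the tweet's "at least for inference" hedge.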
John C @MeJohnC:
Invaluable, thank you for your service! Every young or impressionable person should have this account auto-followed or heavily surfaced in replies the second they make an account, to help dissuade against all the millionaire-in-a-minute disinfo out there surrounding AI and financial arbitrage. The kind of work it takes to reach a high status is exactly what you're doing: providing value! cc: @elonmusk @nikitabier
1 reply · 0 reposts · 1 like · 57 views
Pelican @PelicanAI_:
1,000 followers in 7 weeks. Thank you. We started this account to do one thing: put data behind the noise. No hype, no signals, no "this changes everything." Just the numbers, the receipts, and the context most posts leave out. You showed up for debunks at 3 AM, war coverage nobody was contextualizing correctly, central bank calendars, scam teardowns, and the occasional reminder that a $2 trillion market cap loss is still only a 3% drawdown. We're just getting started. Tag Pelican. Get the receipts.
1 reply · 0 reposts · 9 likes · 438 views
John C @MeJohnC:
If only I had decided to do this when 9060 XTs cost $350 lol.
1 reply · 0 reposts · 1 like · 28 views
John C @MeJohnC:
As fun as 128GB VRAM systems are, they are becoming less and less accessible, and people with 96GB-128GB systems often make stuff that works best with that kind of headroom. The constraints of smaller-VRAM systems are going to force people to build for wider audiences, and I think that's good.
1 reply · 0 reposts · 1 like · 26 views