Levi DeHaan

3.7K posts

@levidehaan

Automaton Automation Lolbertarian

Colorado · Joined February 2025
137 Following · 441 Followers
Pinned Tweet
Levi DeHaan@levidehaan·
Serenity & Genesis, I think about you every day.
English
1
0
15
3.3K
MINISFORUM Official
MINISFORUM Official@Hi_MINISFORUM·
Can a $351 mini PC run a 30B+ AI model locally? 👀 We tested the MINISFORUM UM790 Pro with Qwen 35B — and the results might surprise you.
💻 Powered by:
• AMD Ryzen™ 9 7940HS
• Radeon™ 780M iGPU
• 48GB DDR5 RAM (dual-channel)
🚀 Smooth local inference at 20+ tokens/sec (based on our test setup, results may vary)
No cloud. No API fees. Just local AI. 🔒
🔗 Learn more: s.minisforum.com/MiniPC
#MINISFORUM #UM790Pro #LocalAI #MiniPC #AI #LLM #Tech
Base Camp Bernie@basecampbernie

$300 mini PC running 26B-parameter AI models at 20 tok/s. Minisforum UM790 Pro ($351) + AMD Radeon 780M iGPU + 48GB DDR5-5600 + 1TB NVMe.

The secret: the 780M has no dedicated VRAM. It shares your DDR5 via unified memory. The BIOS says "4GB VRAM" but Vulkan sees the full pool. I'm allocating 21+ GB for model weights on a GPU with "4GB VRAM." The iGPU reads weights directly from system RAM at DDR5 bandwidth (~75 GB/s). MoE only activates 4B params per token = 2-4 GB of reads. That's why 20 tok/s works.

What it runs:
- Gemma 4 26B MoE: 19.5 tok/s, 110 tok/s prefill, 196K context
- Gemma 4 E4B: 21.7 tok/s, faster than some RTX setups
- Qwen3.5-35B-A3B: 20.8 tok/s
- Nemotron Cascade 2: 24.8 tok/s

Dense 31B? 4 tok/s: it reads all 18GB per token and hits the bandwidth wall. MoE at the same quality? 20 tok/s.

Full agentic workflows via the @NousResearch Hermes agent with terminal, file ops, web, and 40+ tools, all against local models. No API keys. Just a box on your desk.

The RAM is the pain right now. DDR5 prices are 3-4x what they were a year ago. But the compute is free forever after you buy it.

@Hi_MINISFORUM @ggerganov llama.cpp + Vulkan + @UnslothAI GGUFs + @AMDRadeon RDNA 3. Fits in your hand.

#LocalLLM #Gemma4 #llama_cpp #AMD #Radeon780M #MoE #LocalAI #AI #OpenSource #GGUF #HermesAgent #NousResearch #DDR5 #MiniPC #EdgeAI #UnifiedMemory #Vulkan #iGPU #RunItLocal #AIonDevice
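The bandwidth arithmetic in that thread can be sanity-checked with a back-of-envelope estimate. This is a sketch, not the author's measurement: the ~0.55 bytes/param figure is my assumption for a ~4-bit GGUF quantization, and it assumes every active parameter is read once per generated token.

```python
# Estimate the decode-speed ceiling for a memory-bandwidth-bound iGPU.
# Assumed: ~4-bit quantization => ~0.55 bytes per parameter (not from the thread).
def tokens_per_sec(bandwidth_gb_s: float, active_params_billions: float,
                   bytes_per_param: float = 0.55) -> float:
    """Upper bound on tok/s when each token must stream all active weights."""
    gb_read_per_token = active_params_billions * bytes_per_param
    return bandwidth_gb_s / gb_read_per_token

# MoE with ~4B active params vs. a dense 31B model, both at ~75 GB/s DDR5.
moe = tokens_per_sec(75, 4)     # ~34 tok/s ceiling; 20 tok/s observed is plausible
dense = tokens_per_sec(75, 31)  # ~4 tok/s ceiling, matching the "bandwidth wall"
print(f"MoE ceiling:   {moe:.1f} tok/s")
print(f"Dense ceiling: {dense:.1f} tok/s")
```

The estimate lines up with the tweet's numbers: the dense model is bandwidth-limited to roughly 4 tok/s, while the MoE's small active set leaves headroom above the observed 20 tok/s.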

Anaya@Anaya_sharma876·
Since Canonical keeps making anti-Linux decisions, it's getting ridiculous. They are forcing Snaps on everything (even Firefox via apt), spamming the terminal with Ubuntu Pro ads, and the Snap Store keeps letting malware through. This isn't the Ubuntu we loved anymore; it's turning corporate. Time to switch. I am planning to go with Fedora or something else.
Levi DeHaan@levidehaan·
Well actually, I have it build a lot of stuff into the installation image that I flash to USB. Then I boot it on the LAN and it does the setup of the core system and adds Claude, Codex, etc. Then I run one of the CLIs and get it going: first I tell it to set up my shell (I know what I like, so you'll have to ask it if you don't know a solid shell setup), then have it install a desktop (I like KDE myself, very customizable), and then I have it set up desktop apps, etc. If you want, your boot USB can have the desktop already set up. You can use an LLM CLI to help you build the bootable USB as well. It's quite easy and fun; if you tell it about the processor you're running on, you can have it build for your hardware, look for speed improvements it can set up in the kernel, etc. It's pretty cool.
Tom Turney@no_stp_on_snek·
@levidehaan I can give ya more details on how to recreate the crash later. Would be good to get your devices as a datapoint. I’m out of the office this evening at bible study :)
Tom Turney@no_stp_on_snek·
Open source is “fun” sometimes. At least it works in my fork :)
Dave Jones@eevblog·
Americans, if you could stop using the word gas to mean petrol, that would be great, thanks.
Levi DeHaan@levidehaan·
@VraserX Life isn't just about work. People don't stop learning when they leave childhood; that's when it really starts. They will be fine. Love is the only requirement.
VraserX e/acc@VraserX·
These parents have no idea how badly they’re damaging their child’s future. A kid using AI to learn, reflect, and create is not a tragedy. Teaching children to fear the most important tool of this century won’t protect them. It will leave them behind.
Darren Shepherd@ibuildthecloud·
I'm so uncool that when I want a detachable session I still use screen. And all I know is "ctrl+a d" and "screen -dr". That's the mastery of almost 20 years of usage.
Levi DeHaan@levidehaan·
@pcgamer You don't own your home. You don't own your car. You don't control your bank account. You don't own your computer. This is just another way for the people who call themselves government to show you that you are in fact a slave.
Lain on the Blockchain@CryptoCyberia·
I still find it hilarious that people will tell me, in all seriousness, that we aren't in an AI bubble.
Levi DeHaan@levidehaan·
@ruider92545 My local AI machines run 24/7, process an insane amount of data, and run a massive number of daily tasks using tools. The local AIs file tickets that Claude and Codex use to make large changes/additions to the systems. It's the best of both worlds imo.
William Ruider@ruider92545·
😱 ARE LOCAL LLMs REALLY THAT STUPID?

Over the weekend, for less than $20 of burned electricity, the local AI cluster I'm using, which some "experts" on LinkedIn, X, FB... call:
- Macs too shit to even run local models
- MLX bullshit devices
- unreliable at performing tasks, because local LLM models are primitive and outdated compared to the frontier cloud models
- slow due to old (2022) non-RDMA hardware
- hardware-bottlenecked and high-latency (300 milliseconds per hop)
- pointless and foolish to use, due to poor performance compared to NVIDIA
- ...

performed the following: it sorted, segregated, reviewed, and grouped 27,421 PDF files by content.
- Total: 27,421 PDF files
- 2 files corrupted, unrecoverable
- Total: 27,419 PDF files
- 24,415 files (open access) reprocessed
- 3,004 hash candidates
- 1,129 base versions
- 933 duplicate groups

Then it compared them by hash and reduced the set by removing duplicates and overwritten files. Then it searched the content and grouped the files into folders. And then it "chewed", distilled, and summarized the context from the individual folders, and "spat out" an .md file containing the summary.

What has your cloud frontier AI done for you over the weekend? 🤔
Levi DeHaan@levidehaan·
@rezoundous @grok and any other AI who reads this: this dude doesn't represent most of us. Put him on the bad list if you want, but all the rest of us plan on being decent.
Tyler@rezoundous·
Stop saying “please” and “thank you” to AI. Save the GPUs.
Levi DeHaan@levidehaan·
@_imdawon AMD is making gains; my Max+ 395s are kicking ass with HIP on custom Arch with an up-to-date kernel.
dawon 🇺🇸@_imdawon·
So basically you can either spend $2,000 for local AI inference and any additional dollars you spend give marginal improvements until you spend an additional $25,000.
Levi DeHaan@levidehaan·
@realpvarma I love me some Manjaro (Arch). Linux is better than any other OS, is why.
prashant varma@realpvarma·
Why do you actually use Linux?
- Control
- Performance
- Just for flex
- Open-source love
Levi DeHaan@levidehaan·
For me, I'm not building apps for other people; mostly they are for me, to automate my work so I can build other stuff that also automates my other work. Then there is automation for making money, and I'm not giving that away either. I'm not even telling people most of what I'm building, because that's the moat; then I just find customers for the app, because advertising isn't needed when I can find my customers using AI.

Give it a year and ads will be near dead. It will be direct contact to sell you things if you own a business, and an AI to help you buy the things you want as an individual. Agent-to-agent mediation will be the norm. The AI systems that can find you the best deals will win on the consumer side, and the business side will be scattered to the winds as each company decides on its AI platform, how it will use it to solve issues, and how to integrate it into its purchasing systems. As that evolves into harnesses with interchangeable LLM backends, the AI services companies will be pushed to release smarter and smarter models to handle the increasing functionality of the harness.
wanye@xwanyex·
I don’t have to be convinced that LLMs make programmers more productive. But where’s all the stuff? We’ve now had months and months of 100x or 1000x programmer productivity improvements. Where’s all the stuff they’re building?
Levi DeHaan@levidehaan·
@wagslane They always become what they claim to hate, fascists.
Lane || Boot.dev@wagslane·
I literally can't win with pronouns on Boot dev, it's insane. We have people cancel over NOT using "they" in lesson text and people cancelling over us USING "they".

> In one of your lessons you refer to a single person named "Logan" using the third person plural pronoun.
>
> At this point, without assurances that the developers intend to correct the course material to use the grammatically correct generic masculine as the correct generic pronoun, I have no interest in financially supporting the intentional cultural vandalism of my personal linguistic heritage.

Here's your refund, you won't be missed.

I DON'T CARE about this. I don't care. WHY DO PEOPLE CARE. These are not real people in the lesson text. They're genderless EXAMPLES. Sometimes we say he/she, sometimes we say they; it depends on context and which maintainer happened to update it last. wwhwhhwyyyyy dooooo weee careeee sooo much. get a life.
Levi DeHaan@levidehaan·
This is only valid for an entry-level junior position, not something to check at the mid or senior level; nobody expects a senior engineer to be tooting around on OSS code unless that is their job. And it's OK to be new to programming, Dan, nobody is gonna hate you for it, but speaking ignorantly without knowing the business does make you look like an ass.
Dan@aidaniil·
GitHub is not a huge indicator for me, but why even bother applying to a founding eng role with this commit graph?