
⬣PulseChain LIVE⬣ 💥
@PulseChainLIVE
2.5K posts
#AI #Cybersecurity #Linux #privacy
Middle of the GPU · Joined January 2023
861 Following · 1.9K Followers

@opencode So when will it be ready to use as the main agent LLM for OpenClaw?
⬣PulseChain LIVE⬣ 💥 retweeted

Wanted to try SSD streaming but did not have enough disk for 397B-A17B.
7.2 tok/s for the 35B-A3B on M2 macbook air.
wild.
Dan Woods@danveloper
⬣PulseChain LIVE⬣ 💥 retweeted

Qwen3.5 35B on my iPhone at 5.6 tok/sec.
wild.
x.com/Alexintosh/sta…
alexintosh@Alexintosh
I just ran Qwen3.5 35B on my iPhone at 5.6 tok/sec. Fully on-device. 4bit | 256 experts. Model: 19.5GB. iPhone: 12GB RAM. wild.
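As a back-of-the-envelope check on those numbers (my own sketch, not from the quoted post): 35B weights at 4 bits each come to about 17.5 GB, and quantization metadata plus higher-precision layers plausibly account for the rest of the reported 19.5 GB file.

```python
def quantized_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate raw weight storage in decimal GB: params * bits / 8 bits-per-byte."""
    return n_params * bits_per_weight / 8 / 1e9

# 35B parameters at 4 bits per weight:
print(quantized_size_gb(35e9, 4))  # 17.5 GB of raw weights
```

That also shows why it runs on a 12GB-RAM iPhone at all: with 256 experts but only A3B-scale active parameters per token, most of the 19.5GB model can sit on flash while the active slice streams through memory.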

@steeve First time I've read about your project. When you release sm121 I will test it on a DGX Spark and post benchmark results here. Thanks.


OMG, it fits in a pick-up truck.
prayingforexits 🏴☠️@mrexits
Always so funny stumbling across random Palmer Luckey side quests on the depths of the internet


Happy for you, man. Maybe add a DGX Spark to the shopping list and unlock that monster.
0xSero@0xSero
😳 1x RTX Pro 6000 Blackwell secured, now will Nvidia give me MSRP?

@spark_arena @TeksEdge @AMD @Apple This one reflects the StorageReview article better: 10% lower power consumption at the same speed.


@PulseChainLIVE @TeksEdge @AMD @Apple There are no thermal differences between the devices youtu.be/QbtScohcdwI?si…


The Ultimate 128GB Local AI Hardware Battle 🥊💻
Judging Qwen3.5-27B (Bartowski IQ4_NL) on top unified-memory machines:
1️⃣ @AMD Strix Halo (Ryzen AI Max+ 395)💰 ~$2,500 | 🚀 9 - 12 tps (decode) | 🎮 Full Windows AAA gaming
🏆 Speed + value king. 🖕
2️⃣ @Apple Mac Studio M3 Ultra💰 ~$5,000 | 🚀 8–12 tps | 🍎 Apple macOS & ecosystem w/solid speeds; limited AAA gaming
3️⃣ @NVIDIA DGX Spark (GB10 Blackwell)💰 $4,699 | 🚀 ~10 tps (~20 tps with a 2-node pair) | 🐧 Linux/AI research only, with strong prefill but nerfed, bandwidth-limited decode. Difficult pooling (new cables may fix). AAA gaming not optimized for Grace
Verdict: AMD wins for most power users and best speed/price/gaming combo. (Community benchmarks; YMMV with setup/context)
Which are you buying? 👇
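Putting the list prices against midpoint decode speeds from the post makes the value verdict explicit (a quick sketch using only the community numbers quoted above, not independent measurements):

```python
# price in USD, (low_tps, high_tps) decode range, both taken from the post above
machines = {
    "AMD Strix Halo": (2500, (9, 12)),
    "Apple M3 Ultra": (5000, (8, 12)),
    "NVIDIA DGX Spark": (4699, (10, 10)),
}

for name, (price, (lo, hi)) in machines.items():
    mid_tps = (lo + hi) / 2
    # Dollars per token/sec of decode throughput: lower is better value
    print(f"{name}: ${price / mid_tps:.0f} per tok/s")
```

At roughly $238 per tok/s versus $500 and $470, the Strix Halo's speed/price lead holds even before counting gaming.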




@spark_arena @TeksEdge @AMD @Apple BTW, Asus released an updated version of the DGX OS recovery .iso a week ago that includes NVIDIA AI Workbench and seems to have better driver integration than before. NVIDIA still has the six-month-old .iso on their website.

@spark_arena @TeksEdge @AMD @Apple Look at the data, not the conclusions, of that video and the StorageReview article. The Asus GX10 consumes 10% less power at the same speeds, has a full copper heatsink, and the unit is 20% heavier. I ran the GX10 GPU at 80W for inference with no thermal issues, and it's much cheaper. For me the GX10 wins.




@briancaffey @Teknium I tested Nemotron a few days ago, so I'm not 100% sure that I got 10 t/s or more, but I tried a REAP version of MiniMax M2.5 yesterday and got 10 t/s, and that's about the same size but not MoE, so you are right.

@PulseChainLIVE @Teknium I'm getting about double that on my Spark, unless you aren't counting <thinking>, but I agree that smaller optimized models with higher concurrency are more fun :D


@Teknium For compiling llama.cpp you need to find the best flags for maximum performance; this is what worked for me, and use these when you launch: x.com/PulseChainLIVE… --no-mmap is very important for smooth loading of large models into unified memory.
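Since the linked post is truncated, here is a sketch of the kind of build-and-launch sequence meant. The flag names are real llama.cpp options, but the exact values (GPU layer count, context size, model path) are my assumptions, not the author's exact settings:

```shell
# Build llama.cpp with CUDA enabled (illustrative flags, not the author's exact set)
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build -DGGML_CUDA=ON -DCMAKE_BUILD_TYPE=Release
cmake --build build --config Release -j"$(nproc)"

# Launch: --no-mmap loads the whole model up front instead of memory-mapping it,
# which avoids page-fault stalls on unified-memory machines like the Spark
./build/bin/llama-server -m model.gguf --no-mmap -ngl 99 -c 8192
```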





@Teknium If 10 t/s doesn't bother you, try this: mradermacher/MiniMax-M2.1-REAP-139B-A10B-GGUF. Compile llama.cpp with the right flags and launch with c=1 and ctx=192K; this is the closest you can get to frontier models for what you need.
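The "c=1 ctx=192K" shorthand would translate to something like this llama-server invocation (a sketch; the quant suffix in the model filename and the -ngl value are my assumptions):

```shell
# Single parallel slot, 192K-token context window (192 * 1024 = 196608);
# --no-mmap for smooth loading of a large model into unified memory
./build/bin/llama-server \
  -m MiniMax-M2.1-REAP-139B-A10B.Q4_K_M.gguf \
  --no-mmap -ngl 99 \
  --parallel 1 \
  -c 196608
```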
⬣PulseChain LIVE⬣ 💥 retweeted

Your AI agent can be hijacked by a prompt injection and you'd never know!
The attack executes. The response looks normal. And the user moves on.
We ran the largest public competition testing this exact threat across tool use, coding, and computer use agents. 464 participants, 272K attacks, 13 frontier models. Every model proved vulnerable.




